Ensuring Software Success



Five Challenges for Agile Testing Teams
Solutions to Improve Agile Testing Results
A SmartBear White Paper
Every day, Agile development teams are challenged to deliver high-quality software as quickly as
possible. Yet testing can slow down the go-to-market process. This white paper suggests time-saving
techniques that make the work of Agile testing easier and more productive. Comprised of precise
and targeted solutions to common Agile testing challenges, Smart Agile Testing offers tips and
advice to ensure adequate test coverage and traceability, avoid build-induced code breakage, identify
and resolve defects early in the development process, improve API code quality, and ensure that new
releases don't cause performance bottlenecks.

Contents
What Are the Most Common Challenges Facing Agile Testing Teams?
Challenge 1: Inadequate Test Coverage
Challenge 2: Accidentally Broken Code Due to Frequent Builds
Challenge 3: Detecting Defects Early, When They're Easier and Cheaper to Fix
Challenge 4: Inadequate Testing for Your Published API
Challenge 5: Ensuring That New Releases Don't Create Performance Bottlenecks
About SmartBear Software




What Are the Most Common Challenges Facing Agile Testing Teams?
    Agile development is a faster, more efficient and cost-effective method of delivering high-quality software.
    However, agile presents testing challenges beyond those of waterfall development. That's because agile
    requirements are more lightweight, and agile builds happen more frequently to sustain rapid sprints. Agile testing
    requires a flexible and streamlined approach that complements the speed of agile.

    Smart Agile Testing is a set of timesaving techniques specifically designed to make the work of agile testing
    teams easier and more productive. It is an empowering process that produces great results and has a simple
    mission: Get the best possible testing results with the least amount of work.

    These challenge- and solution-based techniques do not require major changes in your existing workflow. You
    can adopt them in increments, which enables you to focus on one specific challenge and meet it head-on with a
    precise, targeted solution. Following are five common challenges agile teams face and recommended solutions
    to handle them quickly and effectively.

    Recommended Solutions to the Five Common Challenges


      Challenge 1: Inadequate Test Coverage
    • Linking tests to user stories (traceability) for insight into test coverage for each user story
    • Integration with source check-in to find changed code that was not anticipated or planned for
    • Analyzing specific metrics to identify traceability and missing test coverage

      Challenge 2: Accidentally Broken Code Due to Frequent Builds
    • Running automated regression tests on every build to discover broken code
    • Analyzing specific metrics to identify regression runs and broken code

      Challenge 3: Detecting Defects Early, When They're Easier and Cheaper to Fix
    • Performing peer reviews of source code and test artifacts to find early-stage defects
    • Using static analysis tools to identify early-stage defects
    • Analyzing peer review statistics and defect aging to address defects early, when they're least costly to fix

      Challenge 4: Inadequate Testing for Your Published API
    • Running automated API tests on every build to ensure your API is working as designed
    • Running load tests on your API calls to be sure your API is responsive
    • Analyzing specific metrics to determine API test coverage and responsiveness

      Challenge 5: Ensuring That New Releases Don't Create Performance Bottlenecks
    • Conducting application and API load testing to ensure that performance is not degraded by the new release
    • Implementing production monitoring to detect how your application is performing in production
    • Analyzing specific metrics to identify bottlenecks in application/API performance




Challenge 1: Inadequate Test Coverage
    Inadequate test coverage can cause big problems. It's often the result of too few tests written for each user
    story and a lack of visibility into code that was changed unexpectedly. As we all know, developers sometimes
    change code beyond the scope of the features being released. They do it for many reasons, such as to fix defects,
    to refactor the code, or simply because they are bothered by the way the code works and want to improve it.
    Often these code changes are not tested, particularly when you are only writing tests for planned new
    features in a release.

    To eliminate this problem, it’s important to have visibility into all the code being checked in. By seeing the code
    check-ins, you can easily spot any missing test coverage and protect your team from unpleasant surprises once
    the code goes into production.

    How Can You Ensure Great Test Coverage of New Features?

    Before you can have adequate test coverage you first must have a clear understanding of the features being
    delivered in the release. For each feature, you must understand how the feature is supposed to work, its constraints
    and validations, and its ancillary functions (such as logging and auditing). Agile developers build features based
    on a series of user stories (sometimes grouped by themes and epics). Creating your test scenarios at the user
    story level gives you the best chance of achieving optimal test coverage.

    Once your QA and development teams agree on the features to be delivered, you can begin creating tests for
    each feature. Coding and test development should be done in parallel to ensure the team is ready to test each
    feature as soon as it’s published to QA.

    Be sure to design a sufficient number of tests to ensure comprehensive results:

    • Positive Tests: Ensure that the feature works as designed, with full functionality, is cosmetically correct,
       and has user-friendly error messages.
    • Negative Tests: Users often (okay, usually) start using software without first reading the manual. As a result,
       they may make mistakes or try things you never intended. For example, they may enter invalid dates, key in
       characters such as dollar signs or commas into numeric fields, or enter too many characters (e.g., enter 100
       characters into a field designed for no more than 50). Users also may attempt to save records without
       completing all mandatory fields, delete records that have established relationships with other records (such
       as master/detail scenarios), or enter duplicate records. When you design tests, it's important to understand
       the constraints and validations for each requirement and create enough negative tests to ensure that each
       constraint and validation is fully vetted. The goal is to make the code dummy proof. (A sketch of a few such
       negative tests follows this list.)
    • Performance Tests: As we will see later in this paper, it's a good idea to test new features under duress.
       There are ways to automate this, but you should also conduct some manual tests that perform timings with
       large datasets to ensure that performance doesn't suffer too much when entering a large amount of data
       or when there are many concurrent users.
    • Ancillary Tests: Most well-designed systems write errors to log files, record changes to records via audits,
       and use referential integrity so that whenever a master/child record is deleted, both records are deleted
       simultaneously. Many systems regularly run purge routines; be certain that as you add new features they
       are covered by ancillary tests. Finally, most systems have security features that limit access to applications
       so specific people only have rights to specific functions. Ancillary tests ensure that log files and audits are
       written, referential integrity is preserved, security is enforced, and purge routines cleanly remove all related
       data as needed.
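
    To make the negative-test idea concrete, here is a minimal sketch in Python using pytest. The validate_order()
    function, its module, and its rules are hypothetical stand-ins for whatever your application exposes; the point is
    simply that each constraint and validation gets its own failing-input test.

        # Negative tests for a hypothetical validate_order() that raises
        # ValueError when an order violates a constraint or validation.
        import pytest

        from orders import validate_order  # hypothetical module under test

        def test_rejects_non_numeric_quantity():
            # Dollar signs and commas should not be accepted in a numeric field.
            with pytest.raises(ValueError):
                validate_order({"item": "SKU-1", "quantity": "$1,000"})

        def test_rejects_overlong_item_name():
            # A field designed for no more than 50 characters should reject 100.
            with pytest.raises(ValueError):
                validate_order({"item": "x" * 100, "quantity": 1})

        def test_rejects_missing_mandatory_field():
            # Saving a record without all mandatory fields should fail.
            with pytest.raises(ValueError):
                validate_order({"quantity": 1})  # "item" is mandatory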


    Applying a sufficient number of tests to each feature to fully cover all the scenarios above is called traceability.
    Securing traceability is as simple as making a list of features including the number and breadth of tests
    that cover positive, negative, performance, and ancillary test scenarios. Listing these by feature ensures that
    no feature is insufficiently tested or not tested at all.


               To learn more about traceability, view this video.



    How Can You Detect Changes to Code Made Outside the Scope of New Features?

    It's common for developers to make changes to code that go beyond the scope of the features being released.
    However, if your testing team is unaware of the changes made, you may end up with unexpected defects because
    you couldn't test for them.

    So, what's the solution? One approach is to branch the code using a source control management (SCM)
    system to ensure that any code outside the target release remains untouched. For source code changes, you need
    visibility into each module changed and the ability to link each module with a feature. By doing this, you can
    quickly identify changes made to features that aren’t covered by your testing.

    Consider assigning a testing team member to inspect all code changes to your source control system daily and
    ferret out changes made to untested areas. This can be cumbersome; it requires diligence and time. A good
    alternative is to have your source control system send daily code changes to a central place so your team can
    review them. Putting these changes in a central location helps implement rules that associate specific modules
    with specific features. That way, you’ll see when a change is made outside of a feature that’s being developed
    in the current release, and check off the reviewed changes so you can be certain each check-in is covered.

    One way to achieve this is to set up a trigger in your SCM that alerts you whenever a file is changed.
    Alternatively, you can use a feature that inspects your source control system and sends all check-ins to a central repository
    for review. If you haven’t already built something like this, consider SmartBear’s QAComplete; it has a feature
    built specifically for this important task. Through its OpsHub connector, QAComplete can send all source code
    changes into the Agile Tasks area of the software, which you can then use to build rules that link source modules
    with features and to flag check-ins as being reviewed.
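
    To illustrate the trigger approach, here is a rough sketch of a server-side git post-receive hook written in Python.
    It simply appends the files changed in each push to a central log that the testing team reviews daily; the use of
    git, the log path, and the overall setup are assumptions for illustration, not part of any particular product.

        #!/usr/bin/env python3
        # Sketch of a post-receive hook: record which files changed in each push
        # so reviewers can spot check-ins that fall outside the planned features.
        import subprocess
        import sys
        from datetime import datetime

        LOG = "/var/reviews/checkins.log"  # hypothetical central location

        for line in sys.stdin:
            old, new, ref = line.split()
            if set(old) == {"0"}:
                continue  # brand-new branch; skipped here for brevity
            changed = subprocess.run(
                ["git", "diff", "--name-only", old, new],
                capture_output=True, text=True, check=True,
            ).stdout.splitlines()
            with open(LOG, "a") as log:
                log.write(f"{datetime.now().isoformat()} {ref}\n")
                for path in changed:
                    log.write(f"  {path}\n")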

    What Are the Most Important Test Coverage Metrics?

    Metrics for adequate test coverage focus on traceability, test run progress, defect discovery, and defect fix rate.
    These include:




    • Traceability Coverage: Count the number of tests you have for each requirement (user story). Organize the
       counts by test type (positive, negative, performance, and ancillary). Reviewing by user story shows whether
       you have sufficient test coverage, so you can be confident of the results.
    • Blocked Tests: Use this metric to identify requirements that cannot be fully tested because of
       defects or unresolved issues.
    • Test Runs by Requirement (User Story): Count the number of tests you have run for each requirement,
       as well as how many have passed, failed, and are still waiting to be run. This metric indicates how close you
       are to test completion for each requirement.
    • Test Runs by Configuration: If you're testing on different operating systems and browsers, it's
       important to know how many tests you have run against each supported browser and OS. These counts
       indicate how much coverage you have.
    • Daily Test Run Trending: Test run trending helps you visualize, day by day, how many tests have passed,
       failed, and are waiting to be run. This shows whether you can complete all testing before the test cycle
       ends. If it shows you're falling behind, run your highest priority tests first.
    • Defects by Requirement (User Story): Understanding the number of defects discovered by requirement
       can trigger special focus on specific features so that you can concentrate on those with the most bugs. If
       you find that specific features tend to be the most buggy, you'll be able to test those more often to ensure
       full coverage of the buggy areas.
    • Daily Defect Trending: Defect trending helps you visualize, day by day, how many defects are found
       and resolved. It also shows whether you can resolve all high-priority defects before the testing cycle is
       complete. If you know you are lagging, focus the team on the most severe, highest priority defects first.
    • Defect Duration: This shows how quickly defects are being fixed. Separating them by priority ensures that
       the team addresses the most crucial items first. A long duration on high-priority items also signals slow
       or misaligned development resources, which your team and the development team should resolve jointly.
       (A small computation sketch of this metric follows this list.)
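
    As an illustration of how simple these metrics can be to compute, here is a small Python sketch of the Defect
    Duration metric using made-up defect records; the field names, priorities, and dates are hypothetical.

        # Average days from "opened" to "fixed", grouped by priority.
        from collections import defaultdict
        from datetime import date

        defects = [  # hypothetical defect records
            {"priority": "High", "opened": date(2012, 3, 1), "fixed": date(2012, 3, 3)},
            {"priority": "High", "opened": date(2012, 3, 2), "fixed": date(2012, 3, 9)},
            {"priority": "Low",  "opened": date(2012, 3, 1), "fixed": date(2012, 3, 15)},
        ]

        durations = defaultdict(list)
        for defect in defects:
            days_open = (defect["fixed"] - defect["opened"]).days
            durations[defect["priority"]].append(days_open)

        for priority, days in durations.items():
            print(f"{priority}: average {sum(days) / len(days):.1f} days to fix")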


    How Can You Ensure Your Test Coverage Team Is Working Optimally?

    As a best practice, we recommend that each day your testing team:

    • Participates in Standup Meetings: Discuss impediments to test progress.
    • Reviews Daily Metrics: When you spot issues such as high-priority defects becoming stale, work with the
       development leader to bring attention to them. If your tests are not trending to completion by the sprint
       end date, mitigate your risk by focusing on the highest priority tests. When you discover code changes that
       are not covered by tests, immediately appoint someone to create them.




Challenge 2: Accidentally Broken Code Due to Frequent Builds
    Performing daily builds introduces the risk of breaking existing code. If you rely solely on manual test runs, it’s
    not practical to fully regress your existing code each day. A better approach is to use an automated testing tool
    that records and runs tests automatically. This is a great way to test more stable features to ensure that new
    code has not broken them.

    Most agile teams perform continuous integration, which simply means that they check in source code frequently
    (typically several times a day). Upon code check-in, an automated process creates a software
    build. An automated testing tool can perform regression testing whenever you launch a new build. There
    are many tools on the market for continuous integration, including SmartBear's Automated Build Studio, Cruise
    Control, and Hudson. It's a best practice to have the build system automatically launch automated tests to
    verify the stability and integrity of the build.

    How Can You Get Started with Automated Testing?

    The best way to get started is to proceed with baby steps. Don’t try to create automated tests for every feature.
    Focus on the tests that provide the biggest bang for your buck. Here are some proven methods:

    • Assign a Dedicated Resource: Few manual testers can do double duty and create both manual and
       automated regression tests. Automated testing requires a specialist with both programming and analytical
       skills. Optimize your efforts by dedicating a person to work solely on automation.
    • Start Small: Create positive automated tests that are simple. For example, imagine you are creating an
       automated test to ensure that the order processing software can add a new order. Start by creating the test
       so that it adds a new order with all valid data (a positive test). You'll drive yourself crazy if you try to create
       a set of automated tests to perform every negative scenario that you can imagine. Don't sweat it. You can
       always add more tests later. Focus on proving that your customers can add a valid order and that new code
       doesn't break that feature. (A sketch of such a test follows this list.)
    • Conduct High-Use Tests: Create tests that cover the most frequently used software features. For example,
       in an order processing system, users create, modify, and cancel orders every day; be sure you have tests
       for that. However, if orders are exported rarely, don't waste time automating the export process until you
       complete all the high-use tests.
    • Automate Time-Intensive Tests/Test Activities: Next, focus on tests that require a long setup time. For
       example, you may have tests that require you to set up the environment (e.g., create a virtual machine
       instance, install a database, enter data into the database, and run a test). Automating the setup process
       saves substantial time during a release cycle. You may also find that a single test takes four hours to run by
       hand. Imagine the amount of time you will recoup by automating that test so you can run it by clicking a
       button!
    • Prioritize Complex Calculation Tests: Focus on tests that are hard to validate. For example, maybe your
       mortgage software has complex calculations that are very difficult to verify because the formulas for
       producing the calculation are error-prone if done manually. By automating this test, you eliminate the
       manual calculations. This speeds up testing, ensures the calculation is repeatable, reduces the chance of
       human error, and raises confidence in the test results.
    • Use Source Control: Store the automated tests you create in a source control system. This safeguards
       against losing your work due to hard drive crashes and prevents overwriting of completed tests. Source
       control systems allow you to check code in and out and retain prior test versions without fear of accidental
       overwriting.
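
    Here is what such a "start small" test might look like as a minimal Python sketch using unittest. OrderSystem and
    its methods are hypothetical stand-ins for whatever driver your automation tool generates; a recorded test in a
    tool such as TestComplete would play the same role.

        # One simple positive test: a new order with all valid data can be added.
        import unittest

        from orders import OrderSystem  # hypothetical driver for the application

        class TestAddOrder(unittest.TestCase):
            def test_add_valid_order(self):
                system = OrderSystem()
                order_id = system.add_order(customer="ACME", item="SKU-1", quantity=2)
                saved = system.get_order(order_id)
                self.assertEqual(saved["customer"], "ACME")
                self.assertEqual(saved["quantity"], 2)

        if __name__ == "__main__":
            unittest.main()

    Scheduled to run on every build, even this one test immediately tells you whether the most common order-entry
    path still works.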

    Once you create a base set of automated tests, schedule them to run on each build. Each day, identify the tests
    that failed and confirm whether they flag a legitimate issue or whether the failure is due to an intentional code
    change the test didn't anticipate. When a real defect is identified, you should be very pleased that your adoption
    of test automation is paying dividends.

    Remember, start small and build your automated test arsenal over time. You'll be very pleased by how much of
    your regression testing has been automated, which frees you and your team to perform deeper functional
    testing of new features.

    Reliable automated testing requires a proven tool. As you assess options, remember that SmartBear’s
    TestComplete is easy to learn and offers the added benefit of integrating with QAComplete, so you can schedule
    your automated tests to run unattended and view the run results in a browser.



               To learn more about automated testing, see this video.



    What is Test-Driven Development?

    Agile practitioners sometimes use test-driven development (TDD) to improve unit testing. Using this approach,
    the agile developer writes code by using automated testing as the driver to code completion.

    Imagine a developer is designing an order entry screen. She might start by creating a prototype of the screen
    without connecting any logic, and then create an automated test of steps for adding an order. The automated
    test would validate field values, ensure that constraints were being enforced properly, etc. The test would be run
    before any logic was written into the order entry screen. The developer would then write code for the order entry
    screen and run automated tests to see if it passes. She would only consider the screen to be “done” when the
    automated test runs to completion without errors.

    To illustrate further, let's say you're writing an object
    that, when called with a specific input, produces a specific
    output. With a TDD approach, you write the test first, then write
    code and run the automated tests, and repeat that process
    iteratively until the output matches what the test expects.
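
    A minimal sketch of that loop in Python, assuming a hypothetical order_total() function: the test is written first
    and defines "done", and just enough code is then added to make it pass.

        import unittest

        def order_total(quantity, unit_price, tax_rate):
            # Implementation written after, and driven by, the test below.
            return round(quantity * unit_price * (1 + tax_rate), 2)

        class TestOrderTotal(unittest.TestCase):
            def test_specific_input_gives_specific_output(self):
                # Written before the implementation existed.
                self.assertEqual(order_total(3, 19.99, 0.05), 62.97)

        if __name__ == "__main__":
            unittest.main()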

    Which Metrics Are Most Important for Successful Automated Testing?

    As you grapple with this challenge, focus on metrics that
    analyze automated test coverage, automated test run
    progress, defect discovery, and defect fix rate, including:




    • Feature Coverage: Count the number of automated tests for each feature. You'll know when you have
       enough tests to be confident that you are fully covered from a regression perspective.
    • Requirement/Feature Blocked: Use this metric to identify which requirements are blocked from automation.
       For example, third-party controls may require custom coding that current team members lack the expertise
       to write.
    • Daily Test Run Trending: This shows you, day by day, the number of automated tests that are run, passed,
       and failed. Inspect each failed test and post defects for issues you find.
    • Daily Test Runs by Host: When running automated tests on different host machines (i.e., machines with
       different operating system or browser combinations), analyzing your runs by host alerts you to specific OS
       or browser combinations that introduce new defects.


    What Can You Do to Ensure Your Automated Test Team Is Working Optimally?

    We recommend that testing teams perform these tasks every day:

    • Review Automated Run Metrics: When overnight automated test runs flag defects, do an immediate manual
       retest to rule out false positives. Then log real defects for resolution.
    • Use Source Control: Review changes you've made to your automated tests and check them into your source
       control system for protection.
    • Continue to Build on Your Automated Tests: Work on adding more automated tests to your arsenal following
       the guidelines described previously.


    Challenge 3: Detecting Defects Early, When They're Easier and Cheaper to Fix
    You know that defects found late in the development cycle require more time and money to fix. And defects not
    found until production are an even bigger problem. A primary goal of development and testing teams is to
    identify defects as early as possible, reducing the time and cost of rework. There are two ways to accomplish
    this: implement peer reviews, and use static analysis tools to scan code to identify defects as early as possible.
    Is there value in using more than one approach? Capers Jones, the noted software quality expert, explains the
    need for multiple techniques in a recent white paper available for download on SmartBear's website.

        "A synergistic combination of formal inspections, static analysis, and formal testing can achieve combined
        defect removal efficiency levels of 99%. Better, this synergistic combination will lower development costs
        and schedules and reduce technical debt by more than 80% compared to testing alone."¹
            - Capers Jones, white paper from smartbear.com
    What Should You Review?

    As you're developing requirements and breaking them into user
    stories, also conduct team reviews. You need to ensure that every story:

    • Is clear
    • Supports the requirement
    • Identifies the constraints and validations the programmer and testers need to know




    ¹ Capers Jones, Combining Inspections, Static Analysis, and Testing to Achieve Defect Removal Efficiency Above 95%, January 2012.
    Complimentary download from www.smartbear.com


There are two ways to accomplish this: Hold regular meetings so the team can review the user stories, or use a
    tool for online reviews to identify missing constraints and validations before you get too far into the
    development cycle. Defining the user stories also helps prevent defects that arise because the programmer failed to add
    logic for those constraints and validations. Tools such as SmartBear’s QAComplete make it easy to conduct user
    story reviews online.

    In addition to automated tests, your testers also need to define manual tests that offer comprehensive coverage
    for each user story. One option is for the team to formally review the manual tests in a meeting to ensure that
    nothing critical is missing. Programmers often suggest additional tests the tester has not considered, which helps
    prevent defects before the code goes into production.

    The second option is to simply allow the team to go online and review the tester's plan to ensure nothing was
    overlooked. That's the better approach because it allows team members to contribute during lulls in their work
    schedules. Tools such as SmartBear's QAComplete make it easy to conduct manual test reviews online.

    What Are the Best Ways to Review Automated Tests?

    While developing or updating automated tests, it’s always a good idea to have another set of eyes take a look.
    Code reviews of scripted automated tests identify logic issues, missing test scenarios, and invalid tests. As
    discussed above, it's critical that you check all automated tests into a source control system so you can roll back
    to prior versions if necessary.

    There are several techniques you can apply when reviewing automated test code. "Over-the-shoulder" reviews entail
    sitting with the automated test designer and reviewing the automated tests together. As you do this, you can
    recommend changes until you're comfortable with the level of testing provided by your automation engineer.

    Remote teams cannot easily perform over-the-shoulder reviews. Although you could set up an online meeting
    and collaborate by sharing computer screens, it’s a much bigger challenge to manage time zone differences and
    find a convenient time to conduct a review.

    A convenient alternative to over-the-shoulder reviews is tool-based peer code review. Peer code review tools
    streamline the entire process by enabling an online workflow.

    Here’s an example: Your automation engineer checks the automation scripts into a source control system. Upon
    check-in, he uses the peer review tool to identify automation scripts that need code review. Using the tool, he
    selects a specific reviewer. The reviewer views the script in a window and makes notes. The reviewer can write
    notes on specific code lines to identify which lines of the automation script are being questioned. The tool also
    enables the reviewer to create a defect for the automation engineer to resolve when changing scripts. For the
    broadest integrations and most flexible workflow, consider SmartBear's CodeCollaborator, the most Agile peer
    code review tool on the market.

    What Do You Gain from Peer Review?

    The lack of formal code reviews can adversely affect the overall testing effort, so you should strongly advocate
    for them with the agile development team. When developers perform regular code reviews they identify logic
    errors, missing error routines, and coding errors (such as overflow conditions and memory leaks), and they
    dramatically reduce the number of defects. While code reviews can be done over the shoulder, it's much more
    effective and efficient to use the right tool.

     How Can Static Analysis Help?

      The testing team also should strongly advocate the use of static analysis tools. Static analysis tools
      automatically scan source code and identify endless loops, missing error handling, poor implementation of
      coding standards, and other common defects. By running static code analysis on every build, developers can
      prevent defects that might not be discovered until production.

      Although many static analysis tools can supplement the quality process by identifying bugs, they really
      only tell you that you have a bug. They can't tell you how dangerous the bug is or the damage it could
      cause. Code review provides insight into the impact of the bug (is it a showstopper or is it trivial?) so
      developers can prioritize the fix. This is the main reason that SmartBear recommends that you use both code
      review and static analysis tools.
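
      Wiring static analysis into the build can be as simple as the following Python sketch, which assumes flake8 as
      the analyzer and "src/" as the code location; substitute whatever analyzer and paths your team actually uses.

          # Run static analysis on every build and fail the build when issues appear.
          import subprocess
          import sys

          result = subprocess.run(["flake8", "src/"], capture_output=True, text=True)
          if result.returncode != 0:
              print("Static analysis found issues:")
              print(result.stdout)
              sys.exit(1)  # non-zero exit fails the build
          print("Static analysis clean.")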

     What Are the Most Important Peer Code Review Metrics?

     Focus your metrics around peer code review progress
     and the number of defects discovered from peer reviews.
     Consider these:

      • Lines of Code Inspected: Quantifies the amount of code that's been inspected.
      • Requirement (User Story) Review Coverage: Identifies which user stories have been reviewed.
      • Manual Test Review Coverage: Reports which manual tests have been reviewed.
      • Automated Test Review Coverage: Shows which automated tests have been reviewed.
      • Code Review Coverage: Identifies which source code has been reviewed and which source check-ins remain
         to be reviewed.
      • Static Analysis Issues Found: Identifies issues the static analysis scans found.
      • Defects Discovered by Peer Review: Reports the number of defects discovered by peer reviews. You may
         categorize them by type of review (user story review, manual test review, etc.).


     What Can You Do Each Day to Ensure Your Testing Team Is Working Optimally?

     Each day, your testing team should:

      • Perform Peer Reviews of user stories, manual tests, automated tests, and source code. Log defects found
         during the review so you can analyze the results of using this strategy.




      • Automatically Run Static Analysis to detect coding issues. Review the identified issues, configure the
         tolerance of your static analysis to ignore false positives, and log any true defects.
      • Review Defect Metrics related to peer reviews to determine how well this strategy helps you reduce defects
         early in the coding phase.


     Challenge 4: Inadequate Testing for Your Published API
      Many testers focus on testing the user interface and miss the opportunity to perform API testing. If your
      software has a published API, your testing team needs a solid strategy for testing it.

     API testing often is omitted because of the misperception that it takes programming skills to call the properties
     and methods of your API. While programming skill can be helpful for both automated and API testers, it’s not
     essential if you have tools that allow you to perform testing without programming.

     How Do You Get Started with API Testing?

     Similar to automated testing, the best way to get started with API testing is to take baby steps. Don’t try to
     create tests for every API function. Focus on the tests that provide the biggest bang for your buck. Here are some
     guidelines to help you focus:

      • Dedicated Resource: Don't have your manual testers develop API tests. Have your automation engineer
         double as an API tester; the skill set is similar.
      • High-Use Functions: Create tests that cover the most frequently called API functions. The best way to
         determine the most-called functions is to log the calls to each API function.
      • Usability Tests: When developing API tests, be sure to create negative tests that force the API function to
         return an error. Because APIs are a black box to the end user, they often are difficult to debug. Therefore,
         if a function is called improperly, it's important that the API returns a friendly and actionable message that
         explains what went wrong and how to fix it.
      • Security Tests: Build tests that attempt to call functions without the proper security rights. Create tests that
         exercise the security logic. It can be easy for developers to enforce security constraints in the user interface
         but forget to enforce them in the API.
      • Stopwatch-Level Performance Tests: Time methods (entry and exit points) to analyze which methods take
         longer to process than anticipated.
      Once you create a base set of API tests, schedule them to run automatically on each build. Every day, identify
      any tests that failed and confirm that they flag legitimate issues and not just an expected change you weren't
      aware of. If a test identifies a real issue, be happy that your efforts are paying off.

     API testing can be done by writing code to exercise each function, but if you want to save time and effort, use a
     tool. Remember, our mission is to get the most out of testing efforts with the least amount of work.
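
      For teams that do start by hand, here is a minimal Python sketch of the kinds of API checks described above: a
      positive call, a usability (negative) call, and a stopwatch-level timing check. The endpoint, payloads, and error
      message format are hypothetical.

          import requests

          BASE = "https://api.example.com/orders"  # hypothetical published API

          # Positive test: a valid order is accepted.
          resp = requests.post(BASE, json={"item": "SKU-1", "quantity": 2}, timeout=10)
          assert resp.status_code == 201, f"expected 201, got {resp.status_code}"

          # Usability test: an invalid call returns a friendly, actionable message.
          bad = requests.post(BASE, json={"quantity": -5}, timeout=10)
          assert bad.status_code == 400
          assert "quantity" in bad.json().get("message", ""), "error should name the bad field"

          # Stopwatch-level performance check on the positive call.
          assert resp.elapsed.total_seconds() < 2.0, "order creation took too long"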

      When considering tools, take a look at SmartBear's soapUI Pro. It's easy to learn and has scheduling
      capabilities so your API tests can run unattended, and you can view the results easily.




Which API Metrics Should You Watch?

      Focus on API function coverage, API test run progress,
      defect discovery, and defect fix rate. Here are
     some metrics to consider:

      • Function Coverage: Identifies which functions your API tests cover. Focus on the functions that are called
         most often. This metric enables you to determine if your testing completely covers your high-use functions.
      • Blocked Tests: Identifies API tests that are blocked by defects or external issues (for example,
         compatibility with the latest version of .NET).
      • Coverage within Function: Most API functions contain several properties and methods. This metric identifies
         which properties and methods your tests cover to ensure that all functions are fully tested (or at least the
         ones used most often).
      • Daily API Test Run Trending: This shows, day by day, how many API tests are run, passed, and failed.


     What Can You Do Each Day to Ensure Your API Testing Team Is Working Optimally?

      Testing teams should perform these tasks every day:

      • Review API Run Metrics: Review your key metrics. If the overnight API tests found defects, retest them
         manually to rule out false positives. Log all real defects for resolution.
      • Continue to Build on Your API Tests: Work on adding more API tests to your arsenal using the guidelines
         described above.


      Challenge 5: Ensuring That New Releases Don't Create Performance Bottlenecks
      In a perfect world, adding new features in your current release would not cause any performance issues. But
      we all know that as software matures with the addition of new features, the possibility of performance
      issues increases substantially. Don't wait until your customers complain before you begin testing performance.
      That's a formula for very unhappy customers.

      System slowdowns can be introduced in multiple places, including your user interface, batch processes, and
      API, so create processes that ensure performance is monitored everywhere and issues are mitigated. Last but
      by no means least, you also should implement automatic production monitoring to check system speed, which
      provides valuable statistics that enable you to improve performance.

     How Do You Get Started with Application Load Testing?

     Your user interface is the most visible place for performance issues to crop up. Users are very aware when they
     are waiting “too long” for a new record to be added.




      When you're ready for load testing, it is important to set a performance baseline for your application, website,
      or API, covering:

      • The response time for major features (e.g., adding and modifying items, running reports)
      • The maximum number of concurrent users the software can handle
      • Whether the application fails or generates errors when "too many" visitors are using it
      • Compliance with specific quality-of-service goals

      It's very difficult to set a baseline without a tool. You could record the response time of every feature manually,
      simply by using a stopwatch and recording the times in a spreadsheet. But trying to simulate hundreds or
      thousands of concurrent users is impossible without the right tool; you won't have enough people connecting
      at the same time to get those statistics. When you're looking at tools, consider SmartBear's LoadComplete. It's
      easy to learn, inexpensive, and can handle all the tasks needed to create your baseline.
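
      For a sense of what the baseline involves, here is a rough Python sketch that ramps up concurrent requests and
      records average and worst response times. The URL and user counts are hypothetical, and a real load tool adds
      error detection, reporting, and far larger loads.

          import time
          from concurrent.futures import ThreadPoolExecutor

          import requests

          URL = "https://www.example.com/orders"  # hypothetical page under test

          def timed_request(_):
              start = time.perf_counter()
              requests.get(URL, timeout=30)
              return time.perf_counter() - start

          for users in (10, 50, 100):  # ramp up the number of concurrent users
              with ThreadPoolExecutor(max_workers=users) as pool:
                  times = list(pool.map(timed_request, range(users)))
              print(f"{users} users: avg {sum(times)/len(times):.2f}s, worst {max(times):.2f}s")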

     Once you establish a baseline, you need to run the same tests after each new software release to ensure it
     didn’t degrade performance. If performance suffers, you need to know what functions degraded so the technical
     team can address them. Once you create your baseline with LoadComplete, you can run those same load tests
     on each release without any additional work. That enables you to collect statistics to determine if a new release
     has adversely affected performance.

     How Do You Get Started with API Load Testing?

      Another place performance issues can surface is within your web services API. If your user interface uses your
      API, an API slowdown impacts not only the API itself but also the overall user interface experience for your
      customers. As with application load testing, you need to create a baseline so you know what to expect in terms
      of average response time and learn what happens when a large number of users are calling your API.

     Use the same approach as with application load testing to set baselines and compare them against each code
     iteration. You can try to set the baseline manually, but it requires a lot of difficult work. You’d have to write
      harnesses that call your API simultaneously and also write logic to record performance statistics. A low-cost API
      load-testing tool like SmartBear's loadUI Pro saves you all that work.

      Run the same API performance tests with each new software release to ensure it does not degrade API
      performance and to help identify which functions slow down the system.

     What Is Production Monitoring?

     Once you’ve shipped your software to production, do you know how well it’s performing? Is your application
      running fast or does it slow down during specific times of the day? Do your customers in Asia get the same
      performance as those in North America? How does your website's performance compare to your competitors'? How
     soon do you learn if your application has crashed?

     These are just some of the thorny questions about software performance that can be difficult to answer. But not
     knowing the answer can adversely affect how your customers respond to your application… and your company.




Fortunately, you can address all of these critical questions by implementing production-monitoring tools. With
     them you can:

      • Assess website performance
      • Receive an automatic e-mail or other notification if your website crashes
      • Detect API, e-mail, and FTP issues
      • Compare your website's performance to your competitors' sites
     When searching for the right performance-monitoring tool, consider SmartBear’s AlertSite products, the best on
     the market.
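
      At its simplest, production monitoring looks something like the following Python sketch: poll the site, check
      status and response time, and alert when something is wrong. The URL, threshold, addresses, and mail relay are
      hypothetical; dedicated monitoring products add global checkpoints, waterfall detail, and competitor comparisons.

          import smtplib
          import time
          from email.message import EmailMessage

          import requests

          URL = "https://www.example.com"   # hypothetical production site
          THRESHOLD_SECONDS = 3.0           # agreed quality-of-service goal

          def alert(subject):
              msg = EmailMessage()
              msg["Subject"] = subject
              msg["From"] = "monitor@example.com"
              msg["To"] = "oncall@example.com"
              with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
                  smtp.send_message(msg)

          while True:
              try:
                  resp = requests.get(URL, timeout=10)
                  elapsed = resp.elapsed.total_seconds()
                  if resp.status_code != 200 or elapsed > THRESHOLD_SECONDS:
                      alert(f"Site degraded: status {resp.status_code}, {elapsed:.1f}s")
              except requests.RequestException:
                  alert("Site is unreachable")
              time.sleep(60)  # check once a minute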

     What Are Some Metrics to Watch?

      Performance metrics need to focus on load test
      statistics and production monitoring results.
      Here are some to consider:

      Load Test Metrics
      • Basic Quality: Shows the effect of ramping up the number of users and what happens with the
         additional load.
      • Load Time: Identifies how long your pages take to load.
      • Throughput: Identifies the average response time for key actions taken.
      • Server-Side Metrics: Isolates the time your server takes to respond to requests.


      Production Monitoring Metrics
      • Response Time Summary: Shows the response time your clients are receiving from your website.
         Also separates the DNS, redirect, first byte, and content download times so that you can better
         understand where time is being spent.
      • Waterfall: Shows the total response time with detailed information by asset (images, pages, etc.).
      • Click Errors: Reports the errors your clients see when they click specific links on your web pages,
         making it easier to identify when a user goes down a broken path.




What Can You Do Every Day to Ensure You’re Working Optimally?

      Each day, testing teams should:

      • Review Application Load Testing Metrics: Examine your key metrics and create defects for performance
         issues.
      • Review API Load Testing Metrics: If APIs are not performing as they should, create defects for resolution.
      • Review Performance Monitoring Metrics: Review your key metrics and create defects for monitoring issues.


     Learn More

      If you'd like to learn more about any of the SmartBear products discussed in this white paper, request a free
      trial or a personalized demo of any of the products, or contact SmartBear Software at +1 978-236-7900.
      You'll find additional information at http://www.smartbear.com.



         Learn about other Agile Solutions
     Visit our website to see why testers choose
     SmartBear products.




About SmartBear Software
 SmartBear Software provides tools for over one
 million software professionals to build, test, and
 monitor some of the best software applications and
 websites anywhere – on the desktop, mobile and in
 the cloud. Our users can be found worldwide, in small
 businesses, Fortune 100 companies, and government
 agencies. Learn more about the SmartBear Quality
 Anywhere Platform, our award-winning tools, or join
 our active user community at www.smartbear.com,
 on Facebook, or follow us on Twitter @smartbear.




SmartBear Software, Inc. 100 Cummings Center, Suite 234N Beverly, MA 01915
+1 978.236.7900 www.smartbear.com ©2012 by SmartBear Software Inc.
Specifications subject to change. WP-5CHA-032012-WEB

Smart Bear Whitepaper Agile Testing

  • 1. SM Ensuring Software Success Five Challenges for Agile Testing Teams Solutions to Improve Agile Testing Results A SmartBear White Paper Everyday, Agile development teams are challenged to deliver high quality software as quickly as possible. Yet testing can slow-down the go-to-market process. This white paper suggests time-saving techniques that make the work of Agile testing easier and more productive. Comprised of precise and targeted solutions to common Agile testing challenges, Smart Agile Testing offers tips and ad- vice to ensure adequate test coverage and traceability, avoid build-induced code breakage, identify and resolve defects early in the development process, improve API code quality, and ensure that new releases don’t cause performance bottlenecks. Contents What Are the Most Common Challenges Facing Agile Testing Teams?..................................................................................................... 2 Challenge 1: Inadequate Test Coverage .................................................................................................................................................................3 Challenge 2: Accidental Broken Code Due to Frequent Builds.......................................................................................................................6 Challenge 3: Detecting Defects Early, When They’re Easier and Cheaper to Fix...................................................................................8 Challenge 4: Inadequate Testing for Your Published API.............................................................................................................................. 11 Challenge 5: Ensure That New Releases Don’t Create Performance Bottlenecks............................................................................... 12 About SmartBear Software........................................................................................................................................................................................ 16 www.smartbear.com/agile
  • 2. What Are the Most Common Challenges Facing Agile Testing Teams? Agile development is a faster, more efficient and cost-effective method of delivering high-quality software. However, agile presents testing challenges beyond those of waterfall development. That’s because agile require- ments are more lightweight, and agile builds happen more frequently to sustain rapid sprints. Agile testing requires a flexible and streamlined approach that complements the speed of agile. Smart Agile Testing is a set of timesaving techniques specifically designed to make the work of agile testing teams easier and more productive. It is an empowering process that produces great results and has a simple mission: Get the best possible testing results with the least amount of work. These challenge- and solution-based techniques do not require major changes in your existing workflow. You can adopt them in increments, which enables you to focus on one specific challenge and meet it head-on with a precise, targeted solution. Following are five common challenges agile teams face and recommended solutions to handle them quickly and effectively. Recommended Solutions to the Five Common Challenges Challenge 1: Inadequate Test Coverage ¿¿ Linking tests to user stories (traceability) for insight into test coverage for each user story ¿¿ Integration with source check-in to find changed code that was not anticipated or planned for ¿¿ Analyzing specific metrics to identify traceability and missing test coverage Challenge 2: Accidentally Broken Code Due to Frequent Builds ¿¿ Running automated regression tests run on every build to discover broken code ¿¿ Analyzing specific metrics to identify regression runs and broken code Challenge 3: Finding Defects Early, When They’re Cheaper and Easier to Fix ¿¿ Performing peer reviews of source code and test artifacts to find early stage defects ¿¿ Using static analysis tools to identify early stage defects ¿¿ Analyzing peer review statistics and defect aging to address defects early when they’re least costly to fix Challenge 4: Inadequate Testing for Your Published API ¿¿ Running automated API tests on every build to ensure your API is working as designed ¿¿ Running load testing on your API calls to be sure your API is responsive ¿¿ Analyzing specific metrics to determine API test coverage and responsiveness Challenge 5: Ensuring That New Releases Don’t Create Performance Bottlenecks ¿¿ Conducting Application and API Load Testing ensures that performance is not impacted with the new release ¿¿ Implementing production monitoring to detect how your application is performing in production ¿¿ Analyzing specific metrics to identify bottlenecks in application / API performance 2
  • 3. Challenge 1: Inadequate Test Coverage Inadequate test coverage can cause big problems. It’s often the result of too few tests written for each user story and lacking visibility into code that was changed unexpectedly. As we all know, developers sometimes change code beyond the scope of the features being released. They do it for many reasons such as to fix defects, to refactor the code, or because developers are bothered by the way the code works and just wants to improve it. Often these code changes are not tested, particularly when you are only writing tests for planned new fea- tures in a release. To eliminate this problem, it’s important to have visibility into all the code being checked in. By seeing the code check-ins, you can easily spot any missing test coverage and protect your team from unpleasant surprises once the code goes into production. How Can You Ensure Great Test Coverage of New Features? Before you can have adequate test coverage you first must have a clear understanding of the features being de- livered in the release. For each feature, you must understand how the feature is supposed to work, its constraints and validations, and its ancillary functions (such as logging and auditing). Agile developers build features based on a series of user stories (sometimes grouped by themes and epics). Creating your test scenarios at the user story level gives you the best chance of achieving optimal test coverage. Once your QA and development teams agree on the features to be delivered, you can begin creating tests for each feature. Coding and test development should be done in parallel to ensure the team is ready to test each feature as soon as it’s published to QA. Be sure to design a sufficient number of tests to ensure comprehensive results: ¿¿ Positive Tests: Ensure that the feature is working as designed, with full functionally, is cosmetically correct, and has user-friendly error messages. ¿¿ Negative Tests: Users often (okay, usually) start using software without first reading the manual. As a result, they may make mistakes or try things you never intended. For example, they may enter invalid dates, key-in characters such as dollar signs or commas into numeric fields, or enter too many characters (e.g., enter 100 characters onto a field designed for no more than 50). Users also may attempt to save records without completing all mandatory fields, delete records that have established relationships with other records (such as master/detail scenarios), or enter duplicate records. When you design tests, it’s important to understand the constraints and validations for each requirement and create enough negative tests to ensure that each constraint and validation is fully vetted. The goal is to make the code dummy proof. ¿¿ Performance Tests: As we will see later in this paper, it’s a good idea to test new features under duress. There are ways to automate this but you should conduct some manual tests that perform timings with large datasets to ensure that performance doesn’t suffer too much when entering a large amount of data or when many there are many concurrent users. ¿¿ Ancillary Tests: Most well-designed systems write errors to log files, record changes to records via audits, and use referential integrity so whenever a master/child record is deleted, both records are deleted simultaneously. Many systems regularly run purge routines; be certain that as you add new features they 3
  • 4. are covered by ancillary tests. Finally, most systems have security features that limit access to applications so specific people only have rights to specific functions. Ancillary tests ensure that log files and audits are written, referential integrity is preserved, security is embraced, and purge routines cleanly remove all related data as needed. Applying a sufficient number of tests to each feature to fully cover all the scenarios above is called traceability. Securing traceability is as simple as making a list of features including the number and breadth of tests that cover positive, negative, performance, and ancillary test scenarios. Listing these by feature ensures that no feature is insufficiently tested or not tested at all. To learn more about traceability, view this video. How Can You Detect Changes to Code Made Outside the Scope of New Features? It’s common for developers to make changes to code that go beyond the scope of features being released. However, if your testing team is unaware of changes made, you may end up with an unexpected defect because you couldn’t test them. So, what’s the solution? One approach is to branch the code using a source control management (SCM) sys- tem to ensure that any code outside the target release remains untouched. For source code changes, you need visibility into each module changed and the ability to link each module with a feature. By doing this, you can quickly identify changes made to features that aren’t covered by your testing. Consider assigning a testing team member to inspect all code changes to your source control system daily and ferret out changes made to untested areas. This can be cumbersome; it requires diligence and time. A good alternative is to have your source control system send daily code changes to a central place so your team can review them. Putting these changes in a central location helps implement rules that associate specific modules with specific features. That way, you’ll see when a change is made outside of a feature that’s being developed in the current release, and check off the reviewed changes so you can be certain each check-in is covered. One way to achieve this is to set up a trigger in your SCM that alerts you whenever a file is changed. Alternative- ly you can use a feature that inspects your source control system and sends all check-ins to a central repository for review. If you haven’t already built something like this, consider SmartBear’s QAComplete; it has a feature built specifically for this important task. Through its OpsHub connector, QAComplete can send all source code changes into the Agile Tasks area of the software, which you can then use to build rules that link source modules with features and to flag check-ins as being reviewed. What Are the Most Important Test Coverage Metrics? Metrics for adequate test coverage focus on traceability, test run progress, defect discovery, and defect fix rate. These include: 4
What Are the Most Important Test Coverage Metrics?

Metrics for adequate test coverage focus on traceability, test run progress, defect discovery, and defect fix rate (a small counting sketch follows at the end of this challenge). These include:

• Traceability Coverage: Count the number of tests you have for each requirement (user story). Organize the counts by test type (positive, negative, performance, and ancillary). Reviewing by user story shows whether you have sufficient test coverage, so you can be confident of the results.

• Blocked Tests: Use this metric to identify requirements that cannot be fully tested because of defects or unresolved issues.

• Test Runs by Requirement (User Story): Count the number of tests you have run for each requirement, as well as how many have passed, failed, and are still waiting to be run. This metric indicates how close you are to test completion for each requirement.

• Test Runs by Configuration: If you're testing on different operating systems and browsers, it's important to know how many tests you have run against each supported browser and OS. These counts indicate how much coverage you have.

• Daily Test Run Trending: Test run trending helps you visualize, day by day, how many tests have passed, failed, and are waiting to be run. This shows whether you can complete all testing before the test cycle ends. If it shows you're falling behind, run your highest-priority tests first.

• Defects by Requirement (User Story): Understanding the number of defects discovered per requirement can trigger special focus on specific features, so you can concentrate on those with the most bugs. If specific features tend to be the buggiest, you can run their tests more often to ensure full coverage of those areas.

• Daily Defect Trending: Defect trending helps you visualize, day by day, how many defects are found and resolved. It also shows whether you can resolve all high-priority defects before the testing cycle is complete. If you know you are lagging, focus the team on the most severe, highest-priority defects first.

• Defect Duration: This shows how quickly defects are being fixed. Separating them by priority ensures that the team addresses the most crucial items first. A long duration on high-priority items also signals slow or misaligned development resources, which your team and the development team should resolve jointly.

How Can You Ensure Your Test Coverage Team Is Working Optimally?

As a best practice, we recommend that each day your testing team:

• Participates in Standup Meetings: Discuss impediments to test progress.

• Reviews Daily Metrics: When you spot issues such as high-priority defects becoming stale, work with the development leader to bring attention to them. If your tests are not trending toward completion by the sprint end date, mitigate your risk by focusing on the highest-priority tests. When you discover code changes that are not covered by tests, immediately appoint someone to create them.
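As promised above, here is a minimal sketch of how a team might compute the traceability coverage and run-status counts described in this challenge. It assumes nothing more than a flat list of test records kept in a spreadsheet or exported from a test management tool; the field names (story, test_type, status) are illustrative, not taken from any specific product.

```python
from collections import Counter, defaultdict

# Illustrative test records; in practice these would be exported from your
# test management tool or maintained in a simple spreadsheet.
tests = [
    {"story": "US-101", "test_type": "positive",    "status": "passed"},
    {"story": "US-101", "test_type": "negative",    "status": "failed"},
    {"story": "US-101", "test_type": "performance", "status": "not run"},
    {"story": "US-102", "test_type": "positive",    "status": "passed"},
]

EXPECTED_TYPES = {"positive", "negative", "performance", "ancillary"}

coverage = defaultdict(Counter)    # story -> counts per test type
run_status = defaultdict(Counter)  # story -> counts per run status

for t in tests:
    coverage[t["story"]][t["test_type"]] += 1
    run_status[t["story"]][t["status"]] += 1

for story in sorted(coverage):
    missing = EXPECTED_TYPES - set(coverage[story])
    print(story, dict(coverage[story]), dict(run_status[story]))
    if missing:
        # A gap here means the story lacks a whole category of tests.
        print(f"  WARNING: {story} has no {', '.join(sorted(missing))} tests")
```

Run daily, the same counts give you the trending numbers: a story with missing categories or a growing "not run" pile is exactly the kind of gap the metrics above are meant to expose.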
Challenge 2: Accidentally Broken Code Due to Frequent Builds

Performing daily builds introduces the risk of breaking existing code. If you rely solely on manual test runs, it's not practical to fully regress your existing code each day. A better approach is to use an automated testing tool that records and runs tests automatically. This is a great way to test more stable features to ensure that new code has not broken them.

Most agile teams perform continuous integration, which simply means that they check in source code frequently (typically several times a day). Upon code check-in, an automated process creates a software build. An automated testing tool can perform regression testing whenever you launch a new build. There are many tools on the market for continuous integration, including SmartBear's Automated Build Studio, Cruise Control, and Hudson. It's a best practice to have the build system automatically launch automated tests to verify the stability and integrity of each build.

How Can You Get Started with Automated Testing?

The best way to get started is to proceed with baby steps. Don't try to create automated tests for every feature. Focus on the tests that provide the biggest bang for your buck. Here are some proven methods:

• Assign a Dedicated Resource: Few manual testers can do double duty and create both manual and automated regression tests. Automated testing requires a specialist with both programming and analytical skills. Optimize your efforts by dedicating a person to work solely on automation.

• Start Small: Create positive automated tests that are simple. For example, imagine you are creating an automated test to ensure that the order processing software can add a new order. Start by creating the test so that it adds a new order with all valid data (a positive test). You'll drive yourself crazy if you try to create a set of automated tests for every negative scenario you can imagine. Don't sweat it. You can always add more tests later. Focus on proving that your customers can add a valid order and that new code doesn't break that feature.

• Conduct High-Use Tests: Create tests that cover the most frequently used software features. For example, in an order processing system, users create, modify, and cancel orders every day; be sure you have tests for that. However, if orders are exported rarely, don't waste time automating the export process until you complete all the high-use tests.

• Automate Time-Intensive Tests and Test Activities: Next, focus on tests that require a long setup time. For example, you may have tests that require you to set up the environment (e.g., create a virtual machine instance, install a database, enter data into the database, and run a test). Automating the setup process saves substantial time during a release cycle. You may also find that a single test takes four hours to run by hand. Imagine the amount of time you will recoup by automating that test so you can run it by clicking a button!

• Prioritize Complex Calculation Tests: Focus on tests that are hard to validate. For example, maybe your mortgage software has complex calculations that are very difficult to verify because the formulas are error-prone if worked out manually. By automating these tests, you eliminate the manual calculations. This speeds up testing, ensures the calculation is repeatable, reduces the chance of human error, and raises confidence in the test results.
• Use Source Control: Store the automated tests you create in a source control system. This safeguards against losing your work due to hard drive crashes and prevents completed tests from being overwritten. Source control systems let you check tests in and out and retain prior versions, so earlier work can always be recovered.
Once you create a base set of automated tests, schedule them to run on each build. Each day, identify the tests that failed and confirm whether each failure flags a legitimate issue or is simply the result of an intentional change to the code. When a real defect is identified, you should be very pleased: your adoption of test automation is paying dividends.

Remember, start small and build your automated test arsenal over time. You'll be surprised by how much of your regression testing becomes automated, which frees you and your team to perform deeper functional testing of new features.

Reliable automated testing requires a proven tool. As you assess options, remember that SmartBear's TestComplete is easy to learn and offers the added benefit of integrating with QAComplete, so you can schedule your automated tests to run unattended and view the run results in a browser. To learn more about traceability, see this video.

What Is Test-Driven Development?

Agile practitioners sometimes use test-driven development (TDD) to improve unit testing. With this approach, the agile developer writes code by using automated tests as the driver to code completion. Imagine a developer is designing an order entry screen. She might start by creating a prototype of the screen without connecting any logic, and then create an automated test of the steps for adding an order. The automated test would validate field values, ensure that constraints are enforced properly, and so on. The test would be run before any logic is written into the order entry screen. The developer would then write the code for the order entry screen and run the automated test to see if it passes. She would only consider the screen to be "done" when the automated test runs to completion without errors.

To illustrate further, let's say you're writing an object that, when called with a specific input, produces a specific output. With a TDD approach, you write the test first, then write code, run the test, and repeat that cycle until the expected output is produced for the given input.
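As a minimal illustration of that cycle, the sketch below uses Python's built-in unittest module and a hypothetical add_order function; the function name, fields, and validation rules are assumptions made up for this example, not part of any product discussed here. The tests are written first and fail until the implementation satisfies them.

```python
import unittest

# Step 2 (written after the tests below initially failed): the implementation.
def add_order(customer_id, quantity):
    """Create an order record, enforcing the constraints the tests expect."""
    if not customer_id:
        raise ValueError("customer_id is required")
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return {"customer_id": customer_id, "quantity": quantity, "status": "open"}

# Step 1 (written first): the automated tests that drive the code above.
class AddOrderTest(unittest.TestCase):
    def test_valid_order_is_created(self):
        order = add_order("CUST-42", 3)
        self.assertEqual(order["status"], "open")
        self.assertEqual(order["quantity"], 3)

    def test_missing_customer_is_rejected(self):
        with self.assertRaises(ValueError):
            add_order("", 3)

    def test_non_positive_quantity_is_rejected(self):
        with self.assertRaises(ValueError):
            add_order("CUST-42", 0)

if __name__ == "__main__":
    unittest.main()
```

The feature is considered "done" for this slice only when all three tests pass; each new constraint gets its own failing test before any code is written to satisfy it.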
Which Metrics Are Most Important for Successful Automated Testing?

As you grapple with this challenge, focus on metrics that analyze automated test coverage, automated test run progress, defect discovery, and defect fix rate, including:

• Feature Coverage: Count the number of automated tests for each feature. You'll know when you have enough tests to be confident that you are fully covered from a regression perspective.

• Requirement/Feature Blocked: Use this metric to identify which requirements are blocked from automation. For example, third-party controls may require custom coding that current team members lack the expertise to write.

• Daily Test Run Trending: This shows you, day by day, the number of automated tests that are run, passed, and failed. Inspect each failed test and post defects for the issues you find.

• Daily Test Runs by Host: When running automated tests on different host machines (that is, machines with different operating system and browser combinations), analyzing your runs by host alerts you to specific OS or browser combinations that introduce new defects.

What Can You Do to Ensure Your Automated Test Team Is Working Optimally?

We recommend that testing teams perform these tasks every day:

• Review Automated Run Metrics: When overnight automated test runs flag defects, do an immediate manual retest to rule out false positives. Then log real defects for resolution.

• Use Source Control: Review changes you've made to your automated tests and check them into your source control system for protection.

• Continue to Build on Your Automated Tests: Work on adding more automated tests to your arsenal, following the guidelines described previously.

Challenge 3: Detecting Defects Early, When They're Easier and Cheaper to Fix

You know that defects found late in the development cycle require more time and money to fix. And defects not found until production are an even bigger problem. A primary goal of development and testing teams is to identify defects as early as possible, reducing the time and cost of rework. There are two ways to accomplish this: implement peer reviews, and use static analysis tools that scan code to identify defects as early as possible. Is there value in using more than one approach? Capers Jones, the noted software quality expert, explains the need for multiple techniques in a recent white paper available for download on SmartBear's website:

"A synergistic combination of formal inspections, static analysis, and formal testing can achieve combined defect removal efficiency levels of 99%. Better, this synergistic combination will lower development costs and schedules and reduce technical debt by more than 80% compared to testing alone."
- Capers Jones, Combining Inspections, Static Analysis, and Testing to Achieve Defect Removal Efficiency Above 95%, January 2012. Complimentary download from www.smartbear.com

What Should You Review?

As you're developing requirements and breaking them into user stories, also conduct team reviews. You need to ensure every story:

• Is clear
• Supports the requirement
• Identifies the constraints and validations the programmers and testers need to know
There are two ways to accomplish this: hold regular meetings so the team can review the user stories, or use a tool for online reviews to identify missing constraints and validations before you get too far into the development cycle. Defining the user stories this thoroughly also helps prevent defects that arise when the programmer fails to add logic for those constraints and validations. Tools such as SmartBear's QAComplete make it easy to conduct user story reviews online.

In addition to automated tests, your testers also need to define manual tests that offer comprehensive coverage for each user story. Then the team formally reviews the manual tests to ensure that nothing critical is missing. Programmers often suggest additional tests the tester has not considered, which helps prevent defects before the code goes into production. The second option is to simply allow the team to go online and review the tester's plan to ensure nothing was overlooked. That's the better approach because it allows team members to contribute during lulls in their work schedules. Tools such as SmartBear's QAComplete make it easy to conduct manual test reviews online.

What Are the Best Ways to Review Automated Tests?

While developing or updating automated tests, it's always a good idea to have another set of eyes take a look. Code reviews of scripted automated tests identify logic issues, missing test scenarios, and invalid tests. As discussed above, it's critical that you check all automated tests into a source control system so you can roll back to prior versions if necessary.

There are several techniques you can apply to automated code reviews. "Over-the-shoulder" reviews entail sitting with the automated test designer and reviewing the automated tests together. As you do this, you can recommend changes until you're comfortable with the level of testing provided by your automation engineer.

Remote teams cannot easily perform over-the-shoulder reviews. Although you could set up an online meeting and collaborate by sharing computer screens, it's a much bigger challenge to manage time zone differences and find a convenient time to conduct a review.

A convenient alternative to over-the-shoulder reviews is peer code review. Peer code review tools streamline the entire process by enabling an online workflow. Here's an example: your automation engineer checks the automation scripts into a source control system. Upon check-in, he uses the peer review tool to identify automation scripts that need code review and selects a specific reviewer. The reviewer views the script in a window and makes notes, attaching them to specific code lines to identify exactly which lines of the automation script are being questioned. The tool also enables the reviewer to create a defect for the automation engineer to resolve when changing the scripts. For the broadest integrations and most flexible workflow, consider SmartBear's CodeCollaborator, the most agile peer code review tool on the market.
What Do You Gain from Peer Review?

The lack of formal code reviews can adversely affect the overall testing effort, so you should strongly advocate for them with the agile development team. When developers perform regular code reviews, they identify logic errors, missing error routines, and coding errors (such as overflow conditions and memory leaks), and they dramatically reduce the number of defects. While code reviews can be done over-the-shoulder, it's much more effective and efficient to use the right tool.

How Can Static Analysis Help?

The testing team also should strongly advocate the use of static analysis tools. Static analysis tools automatically scan source code and identify endless loops, missing error handling, poor implementation of coding standards, and other common defects. By running static code analysis on every build, developers can prevent defects that might not otherwise be discovered until production (a minimal sketch of wiring such a scan into a build appears after the metrics list below).

Although many static analysis tools can supplement the quality process by identifying bugs, they really only tell you that you have a bug. They can't tell you how dangerous the bug is or the damage it could cause. Code review provides insight into the impact of the bug (is it a showstopper or is it trivial?) so developers can prioritize the fix. This is the main reason that SmartBear recommends using both code review and static analysis tools.

What Are the Most Important Peer Code Review Metrics?

Focus your metrics on peer code review progress and the number of defects discovered in peer reviews. Consider these:

• Lines of Code Inspected: Quantifies the amount of code that's been inspected.

• Requirement (User Story) Review Coverage: Identifies which user stories have been reviewed.

• Manual Test Review Coverage: Reports which manual tests have been reviewed.

• Automated Test Review Coverage: Shows which automated tests have been reviewed.

• Code Review Coverage: Identifies which source code has been reviewed and which source check-ins remain to be reviewed.

• Static Analysis Issues Found: Identifies issues found by the static analysis scans.

• Defects Discovered by Peer Review: Reports the number of defects discovered by peer reviews. You may categorize them by type of review (user story review, manual test review, etc.).
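Here is the minimal sketch referenced above: a build step that runs a static analyzer over the source tree and fails the build when issues are found, so problems surface on the day they are introduced rather than in production. It assumes a Python codebase and uses flake8 purely as an example analyzer; substitute whatever tool your team has standardized on.

```python
import subprocess
import sys

def run_static_analysis(source_dir="src"):
    """Run the analyzer and return its exit code and issue report."""
    # flake8 exits non-zero and prints one line per issue when problems exist.
    result = subprocess.run(
        ["flake8", source_dir],
        capture_output=True,
        text=True,
    )
    return result.returncode, result.stdout

if __name__ == "__main__":
    code, report = run_static_analysis()
    if code != 0:
        print("Static analysis found issues:")
        print(report)
        # Failing the build forces the team to triage the findings now:
        # log real defects and tune the tool's rules for false positives.
        sys.exit(1)
    print("Static analysis clean.")
```

In practice you would tune the analyzer's configuration to suppress known false positives, as the daily checklist below suggests, so the build only fails on findings the team actually intends to act on.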
What Can You Do Each Day to Ensure Your Testing Team Is Working Optimally?

Each day, your testing team should:

• Perform Peer Reviews of user stories, manual tests, automated tests, and source code. Log defects found during the reviews so you can analyze the results of using this strategy.

• Automatically Run Static Analysis to detect coding issues. Review the identified issues, configure the tolerance of your static analysis to ignore false positives, and log any true defects.

• Review Defect Metrics related to peer reviews to determine how well this strategy helps you reduce defects early in the coding phase.

Challenge 4: Inadequate Testing for Your Published API

Many testers focus on testing the user interface and miss the opportunity to perform API testing. If your software has a published API, your testing team needs a solid strategy for testing it. API testing is often omitted because of the misperception that it takes programming skills to call the properties and methods of your API. While programming skill can be helpful for both automated and API testers, it's not essential if you have tools that allow you to perform testing without programming.

How Do You Get Started with API Testing?

As with automated testing, the best way to get started with API testing is to take baby steps. Don't try to create tests for every API function. Focus on the tests that provide the biggest bang for your buck. Here are some guidelines to help you focus:

• Dedicated Resource: Don't have your manual testers develop API tests. Have your automation engineer double as an API tester; the skill set is similar.

• High-Use Functions: Create tests that cover the most frequently called API functions. The best way to determine the most-called functions is to log the calls to each API function.

• Usability Tests: When developing API tests, be sure to create negative tests that force the API function to return an error. Because APIs are a black box to the end user, they are often difficult to debug. Therefore, if a function is called improperly, it's important that the API returns a friendly and actionable message that explains what went wrong and how to fix it.

• Security Tests: Build tests that attempt to call functions without the proper security rights, and create tests that exercise the security logic. It can be easy for developers to enforce security constraints in the user interface but forget to enforce them in the API.

• Stopwatch-Level Performance Tests: Time methods (entry and exit points) to identify which ones take longer to process than anticipated.

Once you create a base set of API tests, schedule them to run automatically on each build. Every day, review any tests that failed and confirm whether they represent legitimate issues or merely an expected change you weren't aware of. If a test identifies a real issue, be happy that your efforts are paying off.

API testing can be done by writing code to exercise each function, but if you want to save time and effort, use a tool. Remember, our mission is to get the most out of testing efforts with the least amount of work. When considering tools, take a look at SmartBear's soapUI Pro. It's easy to learn and has scheduling capabilities so your API tests can run unattended, and you can view the results easily.
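To make the negative and security test ideas above concrete, here is a small sketch of two API checks written against a hypothetical REST endpoint (https://api.example.com/orders is an invented URL, and the field names and expected status codes are assumptions, not a real product's contract). It uses Python's unittest together with the widely used requests library.

```python
import unittest
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test

class OrdersApiTest(unittest.TestCase):
    def test_invalid_quantity_returns_friendly_error(self):
        # Negative test: an invalid call should fail cleanly with an
        # actionable message, not a stack trace or a silent success.
        resp = requests.post(
            f"{BASE_URL}/orders",
            json={"customer_id": "CUST-42", "quantity": -1},
            headers={"Authorization": "Bearer VALID_TEST_TOKEN"},
            timeout=10,
        )
        self.assertEqual(resp.status_code, 400)
        self.assertIn("quantity", resp.json().get("error", "").lower())

    def test_missing_credentials_are_rejected(self):
        # Security test: the API must enforce rights even though the UI does.
        resp = requests.post(
            f"{BASE_URL}/orders",
            json={"customer_id": "CUST-42", "quantity": 1},
            timeout=10,
        )
        self.assertIn(resp.status_code, (401, 403))

if __name__ == "__main__":
    unittest.main()
```

Whether you write checks like these by hand or build them in a tool, the point is the same: every high-use function gets at least one negative call and one unauthorized call run on every build.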
Which API Metrics Should You Watch?

Focus on API function coverage, API test run progress, defect discovery, and defect fix rate. Here are some metrics to consider:

• Function Coverage: Identifies which functions your API tests cover. Focus on the functions that are called most often. This metric enables you to determine whether your testing completely covers your high-use functions.

• Blocked Tests: Identifies API tests that are blocked by defects or external issues (for example, compatibility with the latest version of .NET).

• Coverage within Function: Most API functions contain several properties and methods. This metric identifies which properties and methods your tests cover to ensure that all functions are fully tested (or at least the ones used most often).

• Daily API Test Run Trending: Shows, day by day, how many API tests are run, passed, and failed.

What Can You Do Each Day to Ensure Your API Testing Team Is Working Optimally?

Testing teams should do these things every day:

• Review API Run Metrics: Review your key metrics. If the overnight API tests found defects, retest them manually to rule out false positives. Log all real defects for resolution.

• Continue to Build on Your API Tests: Work on adding more API tests to your arsenal using the guidelines described above.

Challenge 5: Ensure That New Releases Don't Create Performance Bottlenecks

In a perfect world, adding new features in your current release would not cause any performance issues. But we all know that as software matures with the addition of new features, the possibility of performance issues increases substantially. Don't wait until your customers complain before you begin testing performance; that's a formula for very unhappy customers. System slowdowns can be introduced in multiple places, including your user interface, batch processes, and API, so create processes that ensure performance is monitored and issues are mitigated in all of them. Last but by no means least, you should also implement automatic production monitoring that checks your systems' speed, which provides valuable statistics that enable you to improve performance.

How Do You Get Started with Application Load Testing?

Your user interface is the most visible place for performance issues to crop up. Users are very aware when they are waiting "too long" for a new record to be added.
When you're ready for load testing, it is important to set a performance baseline for your application, website, or API, covering:

• The response time for major features (e.g., adding and modifying items, running reports)

• The maximum number of concurrent users the software can handle

• Whether the application fails or generates errors when "too many" visitors are using it

• Compliance with specific quality-of-service goals

It's very difficult to set a baseline without a tool. You could manually record the response time of every feature using a stopwatch and a spreadsheet, but simulating hundreds or thousands of concurrent users is impossible without the right tool; you simply won't have enough people connecting at the same time to get those statistics. When you're looking at tools, consider SmartBear's LoadComplete. It's easy to learn, inexpensive, and can handle all the tasks needed to create your baseline.

Once you establish a baseline, you need to run the same tests after each new software release to ensure it didn't degrade performance. If performance suffers, you need to know which functions degraded so the technical team can address them. Once you create your baseline with LoadComplete, you can run those same load tests on each release without any additional work. That enables you to collect statistics to determine whether a new release has adversely affected performance.

How Do You Get Started with API Load Testing?

Another place performance issues can surface is within your Web services API. If your user interface uses your API, an API slowdown affects not only the API itself but also the overall user interface experience for your customers. As with application load testing, you need to create a baseline so you know what to expect in terms of average response time and learn what happens when a large number of users are calling your API. Use the same approach as with application load testing to set baselines and compare them against each code iteration.

You can try to set the baseline manually, but it requires a lot of difficult work: you'd have to write harnesses that call your API simultaneously and also write logic to record performance statistics. A low-cost API load-testing tool like SmartBear's loadUI Pro saves you all that work. Run the same API performance tests with each new software release to ensure it does not degrade API performance and to help identify which functions slow down the system.
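For teams that want to see what such a harness involves before adopting a tool, here is a minimal sketch of a concurrent baseline run: it fires a fixed number of simultaneous requests at an endpoint and reports average and worst-case response times. The URL and user count are placeholders, and a real load test would also ramp users gradually and capture server-side metrics.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET_URL = "https://app.example.com/orders"  # placeholder endpoint
CONCURRENT_USERS = 50                          # placeholder load level

def one_request(_):
    """Time a single request; failures count as errors, not timings."""
    start = time.perf_counter()
    try:
        resp = requests.get(TARGET_URL, timeout=30)
        ok = resp.status_code == 200
    except requests.RequestException:
        ok = False
    return ok, time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(one_request, range(CONCURRENT_USERS)))

    timings = [t for ok, t in results if ok]
    errors = sum(1 for ok, _ in results if not ok)
    print(f"requests: {len(results)}  errors: {errors}")
    if timings:
        print(f"avg: {statistics.mean(timings):.3f}s  max: {max(timings):.3f}s")
    # Save these numbers as the baseline and rerun after each release;
    # a jump in avg/max times or in errors flags a performance regression.
```

The same pattern works for an API endpoint or a page URL; the valuable part is keeping the recorded numbers from release to release so any degradation is visible immediately.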
What Is Production Monitoring?

Once you've shipped your software to production, do you know how well it's performing? Is your application running fast, or does it slow down during specific times of the day? Do your customers in Asia get similar performance to those in North America? How does your website's performance compare to your competitors'? How soon do you learn if your application has crashed? These are just some of the thorny questions about software performance that can be difficult to answer. But not knowing the answers can adversely affect how your customers respond to your application… and your company.

Fortunately, you can address all of these critical questions by implementing production-monitoring tools. With them you can:

• Assess website performance

• Receive an automatic e-mail or other notification if your website crashes

• Detect API, e-mail, and FTP issues

• Compare your website's performance to your competitors' sites

When searching for the right performance-monitoring tool, consider SmartBear's AlertSite products, the best on the market.

What Are Some Metrics to Watch?

Performance monitoring metrics need to focus on performance statistics. Here are some to consider:

Load Test Metrics

• Basic Quality: Shows the effect of ramping up the number of users and what happens under the additional load.

• Load Time: Identifies how long your pages take to load.

• Throughput: Identifies the average response time for key actions taken.

• Server-Side Metrics: Isolates the time your server takes to respond to requests.

Production Monitoring Metrics

• Response Time Summary: Shows the response time your clients are receiving from your website. It also separates the DNS, redirect, first-byte, and content download times so you can better understand where time is being spent.

• Waterfall: Shows the total response time with detailed information by asset (images, pages, etc.).

• Click Errors: Shows the errors your clients see when they click specific links on your web pages, making it easier to identify when a user goes down a broken path.
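As a rough illustration of the production-monitoring idea (not a substitute for a dedicated monitoring service), the sketch below checks a site's availability and response time on a schedule and sends an alert e-mail when either is out of bounds. The URL, threshold, and mail settings are placeholders you would replace with your own.

```python
import smtplib
import time
from email.message import EmailMessage

import requests

SITE_URL = "https://app.example.com/health"   # placeholder health-check URL
MAX_SECONDS = 3.0                             # placeholder response-time budget
ALERT_TO = "oncall@example.com"               # placeholder recipient
SMTP_HOST = "localhost"                       # placeholder mail relay

def check_site():
    """Return (ok, message) for one availability and speed check."""
    start = time.perf_counter()
    try:
        resp = requests.get(SITE_URL, timeout=30)
    except requests.RequestException as exc:
        return False, f"site unreachable: {exc}"
    elapsed = time.perf_counter() - start
    if resp.status_code != 200:
        return False, f"unexpected status {resp.status_code}"
    if elapsed > MAX_SECONDS:
        return False, f"slow response: {elapsed:.2f}s"
    return True, f"ok in {elapsed:.2f}s"

def send_alert(message):
    msg = EmailMessage()
    msg["Subject"] = "Production check failed"
    msg["From"] = "monitor@example.com"
    msg["To"] = ALERT_TO
    msg.set_content(message)
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    ok, message = check_site()
    print(message)
    if not ok:
        send_alert(message)  # run this script from a scheduler (e.g., cron)
```

A dedicated monitoring service adds what a sketch like this cannot: checks from multiple geographic locations, competitor comparisons, and the waterfall breakdowns described in the metrics above.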
What Can You Do Every Day to Ensure You're Working Optimally?

Each day, testing teams should:

• Review Application Load Testing Metrics: Examine your key metrics and create defects for performance issues.

• Review API Load Testing Metrics: If your APIs are not performing as they should, create defects for resolution.

• Review Performance Monitoring Metrics: Review your key metrics and create defects for monitoring issues.

Learn More

If you'd like to learn more about any of the SmartBear products discussed in this white paper, request a free trial, or receive a personalized demo of any of the products, contact SmartBear Software at +1 978-236-7900. You'll find additional information at http://www.smartbear.com.

Learn about other Agile Solutions

Visit our website to see why testers choose SmartBear products.
About SmartBear Software

SmartBear Software provides tools for over one million software professionals to build, test, and monitor some of the best software applications and websites anywhere, whether on the desktop, on mobile devices, or in the cloud. Our users can be found worldwide, in small businesses, Fortune 100 companies, and government agencies. Learn more about the SmartBear Quality Anywhere Platform and our award-winning tools, or join our active user community at www.smartbear.com, on Facebook, or by following us on Twitter @smartbear.

SmartBear Software, Inc.
100 Cummings Center, Suite 234N
Beverly, MA 01915
+1 978.236.7900
www.smartbear.com

©2012 by SmartBear Software Inc. Specifications subject to change. WP-5CHA-032012-WEB