An old presentation, but just as relevant today as it was when I presented this at the British Computer Society in 2006.
This presentation showed how building a performance test team around a shared knowledge base, shared code libraries, and best-practice techniques made it a valuable part of the project team at a large UK bank.
Making test results and reports accessible to the entire project team, and acting as an intermediary between the development teams and business users, made the test team vital to the success of many projects at HBOS.
2. Introduction
• HBOS Formed by Merger – Sep 2001
• Halifax Building Society
• Bank of Scotland
• 70,000 employees
• UK, Ireland, Spain, Australia
• UK’s largest mortgage provider
• UK’s largest savings provider
• £440bn assets
3. Why am I here?
• Have worked alongside Mercury PS
• Experienced performance tester – 6 years
• Performance Center™ 8.0 (beta)
• LR 8.1 WebGUI (beta)
• J2EE diagnostics (beta)
• .NET diagnostics (beta)
• Scripting standards
• Team structure / mentoring
• Results publication and analysis
5. Team website
• Team knowledge base
• Central repository for results
• Visible throughout HBOS
6. Performance Test Services
• EPT – Early testing
• Informal
• Iterative testing
• Developer involvement
• Aimed at improving performance
• PAT – Acceptance testing
• Formal validation of application
• Final test before deployment
7. Challenges
• Delivering testing to meet growing business demands
• Keeping pace with developments
(e.g. .NET 2 / J2EE / Web Services / Citrix)
• Resource constraints
• Demonstrating ROI
• Fixed deadlines
8. Test Experience
• Web 80%
• COM / DCOM 10%
• Web Services 5%
• RTE 2%
• Citrix 2%
• Other 1%
• Citrix use likely to increase post LR 8.1
9. Test Stages
• Planning
• Alongside Developers
• Early access to code
• Discussion of key features
• Standard page IDs
• Recommendations
• Key Business Processes
• Knowledge pooling
10. Test Stages (Continued)
• Preparation
• Technical documentation
• Volumes calculations
• Test plan
• Normal load
• Peak load
• Duration test
• Application familiarisation
• Scenario design
11. Test Stages (Continued)
• Scripting
• Script recording
• Script standards
• Test data
• Error checking
12. Test Stages (Continued)
• Test Execution
• Prove scripts in test environment
• Prove data
• Re-state objectives
• Don’t test for the sake of it…
13. Don’t test for the sake of it
“A test a day keeps the boss away”
14. Test Stages (Continued)
• Results Analysis
• Web-based reporting
• LoadRunner Analysis templates
• PERFMON charts
• Involve “panel of experts”
• Publish results daily
• Appropriate for audience
(See example site)
15. Feedback
• No longer a “hurdle”
• Seen as desirable
• Actively requested by the business
“Overall the experience was one of helpful experts who gave us useful guidance in our testing. We fully intend to make further use of SI as we enter more early performance testing phases as part of the PTP project” – Darren Blackett, Senior Systems Developer, RBIT.
HBOS is the fifth-largest bank in the UK. HBOS was formed by a merger of the Halifax Building Society and the Bank of Scotland, and is growing rapidly, both by acquisition and organic growth.
Worked with LoadRunner for a little over 5 years; consultancy background before taking a permanent position with a former customer! Have worked with Mercury PS on a number of product evaluations. Early adopter of Mercury Performance Center and J2EE / .NET diagnostics. Mercury PS staff have commented on the quality of our documentation and working practices and encouraged us to share them.
The performance testing team has more than tripled in size over the last 6 years. Over time we have developed a number of procedures and “best practice” documents. All new starters (including contractors) are encouraged to follow HBOS procedures to allow simple handover of work from one tester to another.
To help new starters and contractors, all our procedures and documentation are held on an intranet: links to other company intranets as well as documentation, useful C code etc.; batch files for authentication; resource-booking schedules for injectors; and a knowledge base with links to external support sites, e.g. Mercury and Microsoft.
The test team performs two types of testing. EPT is developer / project-team led: iterative, aimed at performance tuning, early bug fixes etc. PAT is the final formal test before an application is deployed into the production environment; it is de-risked by more extensive EPT and by better engagement with customers.
Project teams often don’t like testers! Seen as an obstacle to code deployment! Developers can delay the production of final code, but the “go-live” date is fixed, so all the pressure at the end of a project falls on the testers. Application resource requirements, MIPS etc. are under constraint, which means weekend working, out-of-hours testing etc. If we test and don’t find problems, our purpose is questioned. We aren’t always thanked for finding faults either!
In common with most Mercury users, the majority of our work is web related, with some COM / DCOM work. We use the same principles wherever possible (error checking, common C functions etc.).
Meeting the project team and understanding the application: application demonstrations; page IDs (used for error checking, see the sketch below); discussions with developers (what are their concerns?); identifying the most important business processes (which 20% of transactions are used 80% of the time); and establishing whether the performance tests can be run without “burning” data.
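A minimal sketch of how a “standard page ID” check can work in a LoadRunner (VuGen) script, assuming developers embed a marker such as “PageID:LoginForm” in every page. web_reg_find is the standard LoadRunner call; the marker format and the check_page_id helper are illustrative, not part of our actual library.

    /* Register a check for the page's standard ID before the next request.
       By default web_reg_find fails the step if the text is not found. */
    void check_page_id(char *id)
    {
        char expected[128];

        sprintf(expected, "Text=PageID:%s", id);
        web_reg_find(expected, LAST);
    }

    /* Usage inside a Vuser action:
         check_page_id("LoginForm");
         web_url("Login", "URL=http://intranet.example/login", LAST);  */

Because the registration applies to the next request, one call before each step gives every page a cheap, uniform identity check without page-specific scripting.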
Documentation – screenshots (we use SnagIt); demo if required.
Volume calculations – demo of the Excel spreadsheet which we use (a worked example follows this list).
Test plan:
• Normal load – gives performance metrics to the production monitoring teams.
• Peak load – simulates the busiest hour of the busiest day.
• Duration test – checks for memory leaks.
• Other tests as required.
• If this is the second release of an application, perform a “before and after” test.
Early scripting – be prepared to script again and again. You’re always learning about the application, and a better understanding of the application leads to better tests.
Application familiarisation – sit with users if possible, or sit with application architects and developers.
Scenario planning – what does an average user do? Can we use log-file analysis (e.g. Analog) to find out what users do? Sit with users if possible. Just because you have 70,000 employees, you don’t need to size the application for 70,000 people. Educate your customer; they may not be familiar with the testing cycle.
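A worked example of the kind of sum the volumes spreadsheet performs, here as a small C program with hypothetical figures (9,000 sessions in the peak hour, 8-minute average session). The calculation itself is just Little’s Law: concurrency = arrival rate × time in system.

    #include <stdio.h>

    int main(void)
    {
        double sessions_peak_hour = 9000.0;  /* hypothetical: busiest hour of busiest day */
        double session_secs       = 480.0;   /* hypothetical: average session length */

        /* Little's Law: concurrent users = arrival rate x session duration */
        double arrivals_per_sec = sessions_peak_hour / 3600.0;
        double concurrent       = arrivals_per_sec * session_secs;

        printf("Arrival rate: %.2f sessions/sec\n", arrivals_per_sec);
        printf("Concurrency : %.0f Vusers\n", concurrent);
        printf("Pacing      : one iteration per %.0f s per Vuser\n", session_secs);
        return 0;
    }

With these figures the peak-load scenario needs roughly 1,200 Vusers, each pacing one iteration every 480 seconds, plus whatever headroom the test plan calls for.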
Record scripts twice (for ease of correlation). Use comparison tools (e.g. Beyond Compare). Use standards within the team for easier handover. Test data is as important as scripts; test it all if possible before testing. Re-record as a matter of course: the developer will swear the code hasn’t changed. Don’t believe them! Write detailed error-checking functions. Simple text checks are OK, but it’s better to tell a developer that they had an “HTTP 404 error on page X when it’s accessed by a user with a zero account balance” than it is to say “50% of the scripts failed, it’s something to do with the application ’cos the script works!” (A sketch of such a check follows.)
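A minimal sketch of that kind of descriptive error check, assuming hypothetical parameters {pUserName} and {pBalance} and an illustrative URL. web_reg_find (with SaveCount), web_url, lr_eval_string, lr_error_message and lr_exit are standard LoadRunner calls.

    Action()
    {
        /* SaveCount stores how many times the page ID was found,
           so we can branch and report instead of failing silently. */
        web_reg_find("Text=PageID:AccountSummary",
                     "SaveCount=pageOK",
                     LAST);

        web_url("AccountSummary",
                "URL=http://intranet.example/account/summary",  /* illustrative */
                LAST);

        if (atoi(lr_eval_string("{pageOK}")) == 0) {
            /* Say which page failed, and for which user and data row. */
            lr_error_message("Page ID missing on AccountSummary for user %s (balance=%s)",
                             lr_eval_string("{pUserName}"),
                             lr_eval_string("{pBalance}"));
            lr_exit(LR_EXIT_ITERATION_AND_CONTINUE, LR_FAIL);
        }
        return 0;
    }

The error message names the page and the data that triggered the failure, which is exactly the level of detail a developer can act on.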
On the day of the test: check all scripts, using the Controller as well as VuGen, and run them as multiple users (this helps to catch parameterisation problems; see the sketch below). Check all lines of data if possible. Resist pressure to “test for the sake of it”: projects often feel that the pressure is off the developers when testing is taking place. Don’t let them wash their hands of the project once it’s under your control; remain in contact. (See next slide.)
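A minimal sketch of the multi-user parameterisation check mentioned above: each Vuser logs which data row it picked up, so a short Controller run with a handful of users quickly exposes duplicated or exhausted test data. lr_whoami and lr_output_message are standard LoadRunner calls; {pAccount} is a hypothetical file parameter.

    Action()
    {
        int  vuser_id, scid;
        char *group;

        /* Identify this Vuser, then log the data row it is using. */
        lr_whoami(&vuser_id, &group, &scid);
        lr_output_message("Vuser %d (group %s) using account %s",
                          vuser_id, group,
                          lr_eval_string("{pAccount}"));
        return 0;
    }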
A test a day keeps the boss away. Source: Professional Tester magazine, March 2005. Reproduced by permission of Professional Tester magazine. www.professionaltester.com. Tests should have a purpose, and should be performed to prove an application’s capabilities or to investigate a problem. If a script isn’t working and the project team says “just leave it out of the scenario”, we should push back and ask “Why was it there in the first place?” A test which does not in some way simulate real life, or prove anything of consequence, shouldn’t be run simply to keep management happy. Your time as a tester could be spent on more productive work, such as re-checking test data, documenting scripts or processes, or examining previous test results in more detail.
Templates increase the speed of report writing and give a consistent look and feel. Publish a combination of PERFMON and LoadRunner graphs. Give developers access to the result sets and the Analysis tool, or you’ll spend your days constantly redrawing graphs! Find out who the application owners are and create a “panel of experts”: capacity planners, production support teams etc. Then, if you experience problems, people are always on hand for support. Publish results as soon as possible. Consider management summaries as well as more detailed reports for developers (especially if using detailed web-page breakdown or diagnostics graphs).
By implementing the practices outlined in the previous slides, the performance test team has gone from being a team that people tried to avoid to one that people actively seek out. A constant increase in team size is still barely keeping up with demand for our services. <Suitable quote> to be agreed.
What next? Examples of the technologies which we are investigating.