7. Traditional vs. Agile Approach

| Criteria                   | Traditional                                      | Agile                                                                  |
|----------------------------|--------------------------------------------------|------------------------------------------------------------------------|
| Planning                   | Plan a one-time delivery, which happens much later | Continuous sprint-by-sprint planning; deliver important features first |
| Quality Responsibility     | Quality Engineer                                 | Entire team                                                            |
| Designing of Test Cases    | All upfront                                      | Evolving, iteration-wise                                               |
| Progress review            | Milestone document review                        | See working software every iteration and every release                 |
| Manage Change              | Prohibit change                                  | Adapt and adjust at every release and iteration boundary               |
| Understanding Requirements | Upfront                                          | Constant interaction within the team and with the Product Owner        |
To follow the waterfall model, one proceeds from one phase to the next in a purely sequential manner. For example, one first completes the requirements specification, which is then set in stone. When the requirements are fully complete, one proceeds to design: the software in question is designed and a blueprint is drawn up for the implementers (coders) to follow; this design should be a plan for implementing the given requirements. When the design is fully complete, the coders produce an implementation of that design. Towards the later stages of this implementation phase, the disparate software components produced are integrated to deliver the complete functionality, and defects found during integration are removed.
Agile testing is all about applying agile values and principles to testing. It comprises the set of good practices that helps an agile team deliver high-quality software.
The key here is to make the entire development team, not just testing or QA, responsible for testing and quality. Automation is the mantra in agile testing: it is important to automate all unit and regression tests and to integrate them with a continuous integration framework, so that testers are free to concentrate on exploratory testing.
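As a minimal sketch of the kind of automated regression test a continuous integration server would run on every commit, consider the following. The `Cart` module here is a hypothetical stand-in for real production code; in practice a CI job would simply invoke the whole suite with `python -m unittest`.

```python
import unittest


class Cart:
    """Hypothetical production code under test."""

    def __init__(self):
        self.items = {}

    def add(self, name, price):
        self.items[name] = price

    def total(self):
        return sum(self.items.values())


class CartRegressionTest(unittest.TestCase):
    """Regression tests a CI framework can run unattended on each build."""

    def test_total_of_empty_cart_is_zero(self):
        self.assertEqual(Cart().total(), 0)

    def test_total_sums_item_prices(self):
        cart = Cart()
        cart.add("book", 12.50)
        cart.add("pen", 1.50)
        self.assertEqual(cart.total(), 14.00)
```

Because the suite needs no human input, the CI framework can fail the build the moment a regression slips in, leaving testers free for exploratory work.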
TDD is one such software development technique, one which aims to ensure that the software being developed is fully covered by unit test cases. Why is that so? Because TDD requires writing the test case first and the actual code afterwards. The steps of TDD are:
1. Write the unit test cases for the features to be developed in the sprint.
2. Run a build to make sure that the build recognizes these new test cases and fails.
3. Write just enough source code to cover the test cases.
4. Run the build again to confirm that it now passes.
5. Because the developer has written the code just to cover the test cases, refactor the source code to make it readable by others (adding proper comments) and to simplify its structure without changing the behavior of the developed feature.
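The steps above can be sketched in code. The `apply_discount` feature here is a hypothetical sprint item, not from the original text; the test class below is what gets written first (step 1) and fails until the function exists, after which just enough code is added to make it pass (steps 3–4).

```python
import unittest


# Step 3: just enough source code to make the failing tests pass.
def apply_discount(price, percent):
    """Return the price after deducting `percent` percent."""
    return round(price * (1 - percent / 100), 2)


# Step 1: the unit tests, written before the implementation existed.
class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_percent_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)
```

Step 5 would then refactor `apply_discount` (better names, comments, simpler structure) while rerunning the same tests to confirm the behavior has not changed.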
Sometimes we have to find the most important bugs in a short period of time; this is called exploratory testing. Exploratory testing is like a chess game against a computer. The tester revises his plans after seeing his opponent's move, and all of those plans can change after one unpredictable move. Can you write a detailed plan for a chess game against a computer? A player uses all his knowledge and experience, yet he can define only the first move with certainty; it is unreasonable to plan far ahead. You can plan one move ahead, or twenty if you are a very experienced player, but you cannot plan the whole game. To plan twenty moves, the player would have to spend a lot of his valuable time (the clock is ticking). Of course, he tries to gather information about the evolving situation between moves.

This is exactly what an experienced exploratory tester does. After running any test case, testers may need to find additional information about the application and the system from a developer, a system architect, a business analyst, or perhaps from the literature. A lot of information is necessary for correct exploratory testing, and other teams whose products are used in building the application influence our testing too.

Just as a star may seem dim in the spectrum of visible light yet burn brightly in the infrared, the simple idea of exploratory testing becomes interesting and complex when viewed in the spectrum of skill. Consider chess: the procedures of playing chess are far less interesting than the skills. No one talks about how wonderfully Emanuel Lasker followed the procedures of chess when he defeated Steinitz in 1894 to become world champion. The procedures of chess remain constant; it is only the choices that change, and the skill of the players who make the next move.
What makes exploratory testing interesting, and in my view profoundly important, is that when a tester has the skills to listen, read, think, and report rigorously and effectively, without the use of pre-scripted instructions, the exploratory approach to testing can be many times as productive (in terms of revealing vital information) as the scripted variety. And when properly supervised and chartered, even testers without special skills can produce useful results that would not have been anticipated by a script. If I may again draw a historical analogy, the spectacularly successful Lewis and Clark expedition is an excellent example of the role of skill in exploration.
A day in the life of a Quality Engineer

Hi! My name is Cue Aye, and I'm a Quality Engineer. I'm one of the few who make sure that we're building the right product in the right way. I work closely with the Product Manager and with the developers too.

My normal day, during an iteration, looks something like this:
- I come to work with a smile, around the same time as my teammates. It is essential that I maintain a pleasant disposition... that's because I know I'm in for a tough day, especially if the developers haven't been doing a good job!
- I open my email client and start downloading my waiting emails. In parallel, I log in to JIRA and open my Personal Work Queue, already sorted by priority.
- I decide, according to the estimates I signed up for, how many of those work items I can complete or continue today, and print that list along with the issue summaries.
- I grab a coffee and chat with a colleague while my tasks for the day get printed, then go over the work item summaries as I walk to our stand-up room.
- During the stand-up, I listen to everybody speak, offer help if someone needs it (but discuss it in a follow-up), and honestly answer the three questions about my work progress.
- I then come back to my desk and start working through the printed work items list, highest priority first.

What I do next depends on the different kinds of tasks that I may have signed up for.

Writing acceptance criteria (this is usually substituted by writing Test Cases). This task is closely linked to a business requirement.
I need to understand the requirement as well as I can, using both my domain knowledge and frequent discussions with the Product Manager. There is quite a bit of back-and-forth while fleshing out the acceptance criteria: I add a few points (mostly in the form of steps to be performed by an end user), the Product Manager reviews them and gives his comments, and I update the issue accordingly. Finally, when both the Product Manager and I are happy with the criteria we've listed, I resolve the work item issue.

Writing a Test Case. This task is again closely linked to a business requirement and is very similar to writing acceptance criteria; the only difference is that test cases are more in-depth and cover many more scenarios (such as positive and negative cases). I use the Test Management System to register the test case against the business requirement, which helps in maintaining traceability.

Automating a Test Case. This task usually goes hand in hand with writing a test case... we strive to automate as many of the tests as possible. I use the test automation tools chosen at the beginning of the release, depending on the nature of the product and the kind of test automation strategy adopted. I write the scripts, either programmatically or using the point-and-click approach, and run them again and again on my local machine, tweaking the script after each run to make the execution smooth and error-free. I make sure that I handle all of the exception cases too: the failure of one script shouldn't prevent other scripts from running. Once confident of the script, I check it into our project's code repository, mentioning the work item issue id in the commit comment. The Test Management System should be able to notice the check-in and automatically link the script to the related Test Case; this helps automate the execution of the test scripts.
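The "one failing script shouldn't prevent other scripts from running" point can be sketched as a small runner that isolates each script's failures. The script names and checks below are hypothetical stand-ins for real automated UI or API steps.

```python
def login_script():
    assert "user" == "user"  # stands in for real automated test steps


def checkout_script():
    raise RuntimeError("element not found")  # a deliberately failing script


def logout_script():
    assert 1 + 1 == 2


def run_test_scripts(scripts):
    """Run every script, isolating failures, and return a result per script."""
    results = {}
    for script in scripts:
        try:
            script()
            results[script.__name__] = "PASS"
        except Exception as exc:  # one failure must not stop the rest
            results[script.__name__] = f"FAIL: {exc}"
    return results


print(run_test_scripts([login_script, checkout_script, logout_script]))
```

Because each script runs inside its own try/except, the failing `checkout_script` is recorded as a failure while `logout_script` still executes and passes.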
I test whether the checked-in scripts work as expected, and then I resolve my work item issue.

Running an automated Test Case. This task is carried out when QA has received a Release Candidate Build for testing. Using the Test Management System, I select the Test Case(s) to be executed and give the command to run them. Usually this is all I need to do, because the rest is handled by the Test Management System itself: it checks out the test scripts, executes them against the specified build in the QA staging environment, and logs the results in the Test Set Execution issue. If there are failures, Defect issues are also created automatically. In case this is not possible for some of the scripts, I check out those scripts and run them myself, making note of the results and logging them in the Test Set Execution issue assigned to me; if there are failures, I create the Defect issues myself. Once done, I resolve my work item issue.

Executing a Test Case manually. It is sometimes necessary to execute a Test Case manually, when it is not possible to automate it. I interact with the Release Candidate Build provided to the Quality team, making sure the prerequisites of the Test Case are satisfied. I perform the steps mentioned in the Test Case, making note of any discrepancies and, of course, errors. I log the results in the Test Set Execution issue assigned to me; if there are failures, I create the Defect issues myself. Once done, I resolve my work item issue.

Every now and then I note the time I'm spending on the various issues. I jot it down on the printed sheets, to update the issues online later. At the end of the day, it's time to go home... I go over my notes on the printed issue summaries and update the issues online. I also log my work, if I haven't done so already. Hmm... I'm satisfied with a good day's work.
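The automatic bookkeeping described above, logging each result in the Test Set Execution issue and opening a Defect issue for each failure, can be sketched as follows. The dictionaries and the case ids are hypothetical stand-ins, not any real Test Management System's API.

```python
def log_test_set_execution(case_results):
    """Record pass/fail per test case and open a defect for each failure.

    `case_results` is a list of (case_id, passed, note) tuples.
    Returns the execution log and the list of auto-created defects.
    """
    execution_log = []
    defects = []
    for case_id, passed, note in case_results:
        execution_log.append(
            {"case": case_id, "status": "PASS" if passed else "FAIL"})
        if not passed:
            # Mirrors the tool auto-creating a Defect issue on failure.
            defects.append({"case": case_id, "summary": note})
    return execution_log, defects


log, defects = log_test_set_execution([
    ("TC-101", True, ""),
    ("TC-102", False, "login button missing"),
])
```

A real system would additionally link each defect back to the Test Case and the Release Candidate Build, preserving the traceability mentioned earlier.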