Software Quality Assurance
(1) A planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements. (2) A set of activities designed to evaluate the process by which products are developed or manufactured.

What is the difference between a client/server application and a web application?
Client/server describes any application architecture in which one server application serves one or many client applications, such as a mail server and MS Outlook Express. A web application is a particular kind of client/server application that is hosted on a web server and accessed over the internet or an intranet. There are many differences in how the two are tested, more than can be covered in one answer, but key areas to look at are data flow, communication, server-side state such as sessions, and security.

Software Quality Assurance Activities
- Application of technical methods (employing proper methods and tools for developing software)
- Conduct of formal technical reviews (FTR)
- Testing of software
- Enforcement of standards (customer-imposed or management-imposed standards)
- Control of change (assess the need for change, document the change)
- Measurement (software metrics to measure quality in a quantifiable way)
- Record keeping and reporting (documentation, reviews, change control, etc., i.e. the benefits of documentation)

What is the difference between STATIC TESTING and DYNAMIC TESTING?
Answer1: Dynamic testing requires the program to be executed; static testing does not involve program execution. In dynamic testing the program is run on test cases and the results are examined to check whether the program behaved as expected. Static testing covers activities such as compiler checks (syntax and type checking), symbolic execution, program proving, data flow analysis and control flow analysis.
Answer2: Static testing is verification performed without executing the system code. Dynamic testing is verification and validation performed by executing the system code.

Software Testing
Software testing is a critical component of the software engineering process. It is an element of software quality assurance and can be described as a process of running a program in such a manner as to uncover errors. This process, while seen by some as tedious, tiresome and unnecessary, plays a vital role in software development. Testing involves operating a system or application under controlled conditions and evaluating the results (e.g. 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or fail to happen when they should. It is oriented to 'detection'. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they are the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It depends on what best fits an organization's size and business structure.
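As a rough illustration of the static/dynamic distinction, here is a minimal Python sketch (the divide function and the checks are invented for the example): the static check inspects the source without running it, while the dynamic check executes the code against a test case.

    import ast
    import unittest

    SOURCE = "def divide(a, b):\n    return a / b\n"

    # Static check: parse the source and inspect its structure without executing it.
    tree = ast.parse(SOURCE)
    assert any(isinstance(node, ast.FunctionDef) and node.name == "divide"
               for node in ast.walk(tree)), "expected a function named 'divide'"

    # Dynamic check: execute the code and verify its behaviour on a test case.
    namespace = {}
    exec(compile(SOURCE, "<example>", "exec"), namespace)

    class DivideTest(unittest.TestCase):
        def test_divides_evenly(self):
            self.assertEqual(namespace["divide"](10, 2), 5)

    if __name__ == "__main__":
        unittest.main()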
What is the difference between QA and testing?
The quality assurance process is a process for providing adequate assurance that the software products and processes in the product life cycle conform to their specified requirements and adhere to their established plans. The purpose of Software Quality Assurance is to provide management with appropriate visibility into the process being used by the software project and into the products being built. Testing, in contrast, is oriented to detection: executing the software under controlled conditions to find defects.

What black box testing types can you tell me about?
Black box testing is functional testing, not based on any knowledge of internal software design or code; it is based on requirements and functionality. Functional testing, system testing, acceptance testing, closed box testing and integration testing are all black-box types of testing geared to the functional requirements of an application.

What is a software testing methodology?
One software testing methodology is a three-step process of: 1. creating a test strategy; 2. creating a test plan/design; and 3. executing tests. This methodology can be used and molded to your organization's needs. Rob Davis believes that using this methodology is important in the development and ongoing maintenance of his clients' applications.

Why testing cannot ensure quality
Testing in itself cannot ensure the quality of software. All testing can do is give you a certain level of assurance (confidence) in the software. On its own, the only thing that testing proves is that, under specific controlled conditions, the software functioned as expected by the test cases executed.

How can we find all the bugs during the first round of testing?
Answer1: I understand the problems you are facing. I was involved with a web-based HR system that was encountering the same problems. What I ended up doing was going back over a few release cycles and analyzing the types of defects found and when they were found in the release cycle, including the various testing cycles. I started to notice a distinct trend in certain areas. For each defect type, I looked into whether it could have been caught in the prior phase (lots of things were being found in the system test phase that should have been caught earlier). If so, why wasn't it caught? Could it have been caught even earlier, say via a peer review? If so, why not? This led me to examine the various processes, and I found a definite problem with peer reviews (not very thorough, if they were done at all) and with the testing process (not rigorous enough). We worked with the customer and the folks doing the testing to educate them and improve the processes. The result was that the number of defects found in the later test stages (system test, for example) was cut by more than half. It was getting harder to find problems with the product because they were being discovered earlier in the process, saving time and money.
Answer2: There could be several reasons for not catching a showstopper in the first or second build. A found defect can mask a second or third defect, either functionally or psychologically. Functionally, the thread or path to the second defect may have been broken or rerouted to another path; psychologically, the tester who found the first defect knows the application must go back and be rewritten, so he or she proceeds half-heartedly and misses the second one. I have seen both cases. It is difficult to keep testing a known defective application.
The testers seem to lose interest, knowing that whatever effort they put into testing it will have to be redone on the next iteration. This will test your mettle as a lead: you have to get them to follow through and maintain a professional attitude.
Answer3: The best way is to prevent bugs in the first place. Also, testing doesn't fix or prevent bugs; it just provides information. Applying this information to your situation is the important part. The other thing you may be encountering is that testing tends to be exploratory in nature. You have stated that these are existing bugs, but not whether tests already existed for these bugs. Bugs in early cycles inhibit exploration. Additionally, a tester's understanding of the application and its relationships and interactions improves with time, so more 'interesting' bugs tend to be found in later iterations as testers expand their exploration (i.e. think of new tests). No matter how much time you have to read the documents and inspect artefacts, seeing the actual application will trigger new thoughts and thus introduce previously unthought-of tests. Exposure to the application triggers new thoughts as well, so the longer your testing goes, the more new tests (and potential bugs) will be found. Iterative development is a good way to counter this, as testers get to see something physical earlier, but the issue will always exist to some degree, because the passing of time and exploration of the application allow new tests to be thought of at inconvenient moments.

Is regression testing performed manually?
The answer depends on the initial testing approach. If the initial testing approach was manual testing, then regression testing is usually performed manually. Conversely, if the initial testing approach was automated testing, then regression testing is usually performed by automated tests.

How do you choose which defects to remove when there are 1,000,000 defects? (Removing them all would take too many resources.)
Answer1: Are you the programmer who has to fix them, the project manager who has to supervise the programmers, the change control team that decides which areas are too high-risk to impact, the stakeholder-user whose organization pays for the damage caused by the defects, or the tester? The tester does not choose which defects to fix. The tester helps ensure that the people who do choose make a well-informed choice. Testers should provide data to indicate the *severity* of bugs, but the project manager or the development team do the prioritization. When I say "indicate the severity", I don't just mean writing S3 on a piece of paper. Test groups often do follow-up tests to assess how serious a failure is and how broad the range of failure-triggering conditions is. Priority depends on a wide range of factors, including code-change risk, difficulty and time to complete the change, which stakeholders are affected by the bug, the other commitments being handled by the person most knowledgeable about fixing a certain bug, etc. Many of these factors are not within the knowledge of most test groups.
Answer2: As testers we don't fix the defects, but we certainly can prioritize them once they are detected. In our organization we assign a severity level to each defect depending on its influence on other parts of the product. If a defect prevents you from going ahead and testing the product, it is critical and has to be fixed ASAP. We use five levels: 1 - Critical, 2 - High, 3 - Medium, 4 - Low, 5 - Cosmetic. Development can group all the critical ones and fix them before any other defect.
Answer3: Defects are generally classified in a priority/severity grid, with priority P1-P3 along one axis and severity S1-S3 along the other. Every organization or product sets targets for fixing the bugs in each cell, for example: P1S1 - 90% of the bugs reported should be fixed; P3S3 - 5% of the bugs reported may be fixed. The rest are taken up in later service packs or versions. The organization should decide its targets and act accordingly; basically, bug-free software is not possible.
Answer4: Ideally, the customer should assign priorities to their requirements. They tend to resist this. On a large, multi-year project I just completed, I would often (in the absence of customer guidelines) rely on my knowledge of the application and the potential downstream impacts on the modeled business process to prioritize defects. If the customer doesn't prioritize, then I feel the test organization should, based on risk or other similar considerations.
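As a small sketch of the kind of grid Answer3 describes, the snippet below encodes fix targets per priority/severity cell. The 90% and 5% figures are the example values quoted above; every other name and number here is invented for illustration, and a real organization would set its own.

    # Hypothetical triage sketch of a priority/severity grid.
    FIX_TARGETS = {
        ("P1", "S1"): 0.90,   # showstoppers: almost all must be fixed before release
        ("P1", "S2"): 0.80,
        ("P2", "S2"): 0.50,
        ("P3", "S3"): 0.05,   # low-priority cosmetic issues: mostly deferred
    }

    def fix_quota(priority: str, severity: str, reported: int) -> int:
        """Return how many of the reported defects in this cell should be fixed now."""
        return round(FIX_TARGETS.get((priority, severity), 0.25) * reported)

    print(fix_quota("P1", "S1", 40))   # -> 36
    print(fix_quota("P3", "S3", 40))   # -> 2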
What is software "quality"?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is a subjective term. It depends on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organisation's management, accountants, testers and salespeople, future software maintenance engineers, stockholders, magazine reviewers, etc. Each type of 'customer' will have their own view of 'quality': the accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free.

What are the five dimensions of risk?
Schedule: Unrealistic schedules, or the exclusion of certain activities when chalking out a schedule, can jeopardize on-time delivery. An unstable communication link can also be considered a probable risk if testing is carried out from a remote location.
Client: Ambiguous requirements definitions, clarifications on issues not being readily available, and frequent changes to the requirements can cause chaos during project execution.
Human Resources: Non-availability of sufficient resources with the skill level expected on the project; attrition of resources (appropriate training schedules must be planned to bring the knowledge level of remaining staff on par with those who quit). Underestimating the training effort may have an impact on project delivery.
System Resources: Non-availability of, or delay in procuring, critical computer resources, whether hardware, software tools or licenses, will have an adverse impact.
Quality: Compounding factors such as lack of resources, a tight delivery schedule and frequent changes to requirements will have an impact on the quality of the product tested.

How do you perform integration testing?
To perform integration testing, all unit testing first has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. Its purpose is to ensure that distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or the differences are explainable or acceptable, based on client input.

Why is back-end testing required if we are going to check the front end? And why do we need unit testing if all the features are tested in system testing; what extra things are tested in unit testing that cannot be tested in system testing?
Answer1: Assume that you're thinking client/server or web. If you test the application from the front end only, you can see whether the data was stored and retrieved correctly, but you can't see whether the servers are in an error state. Many server processes are monitored by another process, and if they crash they are restarted; you can't see that without looking at it. The data may not be stored correctly either, but the front end may have cached data lying around and use that instead. The least you should be doing is verifying the data as stored in the database. It is also easier to test data being transferred on the boundaries, and to see the results of those transactions, when you can set the data in a driver.
Answer2: Basically the need for back-end testing depends on your project. Say your project is a ticket booking system. On the front end you are provided with an interface where you can book a ticket by entering the appropriate details (where you want to go, when you want to travel, etc.). A data storage system (a database, a spreadsheet, etc.) acts as the back end for storing the details entered by the user. After submitting the details you might receive a correct acknowledgement, yet in the back end the details might not be updated correctly in the database because of faulty logic, and that will cause a major problem. Regarding unit-level versus system testing: unit-level testing covers the basic checks of whether the application works with the basic requirements, and is done by developers before delivering to QA. In system testing, in addition to the unit checks, you perform all possible integrated checks; this is basically carried out by testers.
Answer3: Ever heard of the divide-and-conquer tactic? It is the same method applied to back-end and front-end testing. A good back-end test will help minimize the burden of front-end testing. Another point is that you can test the back end while the front end is being developed, so true parallelism can be achieved. Back-end testing has another problem which must be addressed before the front end can use it: concurrency. Building a scenario to test concurrency is a formidable task; a complex thing is hard to test, and creating such scenarios can leave you unsure which tests you have already done and which you haven't. What we need is an effective method to test our application, and the simplest method I know is divide and conquer.
Answer4: A wide range of errors is hard to see if you don't see the code. For example, there are many optimizations in programs that treat special cases. If you don't see the special case, you don't test the optimization. Also, a substantial portion of most programs is error handling, and most programmers anticipate more errors than most testers. Programmers find and fix the vast majority of their own bugs. This is cheaper, because there is no communication overhead; faster, because there is no delay from tester-reporter to programmer; and more effective, because the programmer is likely to fix what she finds, and she is likely to know the cause of the problems she sees. The rapid feedback also gives the programmer information about weaknesses in her programming that can help her write better code. Many tests -- most boundary tests -- are done at the system level primarily because we don't trust that they were done at the unit level. They are wasteful and tedious at the system level. I'd rather see them properly done and properly automated in a suite of programmer tests.
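To make Answer1's point about verifying the data as stored in the database concrete, here is a minimal sketch in Python. The table, columns and booking function are hypothetical stand-ins; a real test would drive the application's actual front end or API rather than a stub, then inspect the real back end.

    import sqlite3

    # Hypothetical stand-in for the application's back end: a bookings table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE bookings (destination TEXT, travel_time TEXT)")

    def book_ticket(destination, travel_time):
        # Stand-in for the front-end action under test.
        conn.execute("INSERT INTO bookings VALUES (?, ?)", (destination, travel_time))
        conn.commit()
        return "booking acknowledged"

    # Front-end check: the acknowledgement looks fine...
    assert book_ticket("Paris", "09:30") == "booking acknowledged"

    # Back-end check: ...but also confirm the row really landed in the database.
    row = conn.execute(
        "SELECT destination, travel_time FROM bookings WHERE destination = ?",
        ("Paris",),
    ).fetchone()
    assert row == ("Paris", "09:30"), "data not stored correctly in the back end"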
Any recommendation for estimating how many bugs the customer will find before the gold release?
Answer1: If you take the total number of bugs in the application and subtract the number of bugs you found, the difference will be the maximum number of bugs the customer can find. Seriously, I doubt you will find any calculation or formula that can answer your question with much accuracy. If you can reference a previous application release, it might give you a rough idea. The best thing to do is to ensure your test coverage is as good as you can make it, and then hope you've found the ones the customer might find. Remember, software testing is risk management.
Answer2: For an estimation: 1) find out your coverage during testing of the software and estimate keeping the 80-20 principle in mind; 2) look at the depth of your test cases, e.g. how much unit-level testing and how much life-cycle testing you have performed (most bugs reported by customers come from real life-cycle use of the software); 3) refer to the defect density from earlier releases of the same product line. From these evaluations you can estimate the probability of remaining bugs at an approximately optimum level.
Answer3: You can map customer issues from a previous release (if you have the same product line) to the current release; this is the best way of estimating for the gold release of a migrating product. Also, up to the gold release most issues come from various combinations of installation testing, such as cross-platform issues, i18n issues, customization, upgrades and migration. These can be taken as parameters and the estimation completed from there.

When a build comes to the QA team, what parameters should be considered for rejecting the build upfront, without committing to testing?
Answer1: Agree with R&D on a set of tests such that, if one fails, you can reject the build. I usually have some build verification tests that just make sure the build is stable and the major functionality is working. Then if one test fails you can reject the build.
Answer2: The only way to legitimately reject a build is if the entrance criteria have not been met. That means the entrance criteria for the test phase have been defined and agreed upon up front. This should be standard for all builds of all products. Entrance criteria could include: turn-over documentation is complete; all unit testing has been successfully completed and unit test cases are documented in the turn-over; all expected software components have been turned over (staged); all walkthroughs and inspections are complete; change requests have been updated to the correct status; and configuration management and build information is provided, and correct, in the turn-over. The only way we could really reject a build without any testing would be a failure of the turn-over procedure. There may be, but shouldn't be, politics involved. The only way the test phase can proceed is for the test team to have all the components required to perform successful testing. You will have to define entrance (and exit) criteria for each phase of the SDLC; this is an effort to be undertaken by the whole development team. Development's entrance criteria would include signed requirements, the high-level design document, etc. Having these criteria pre-established sets everyone up for success.
Answer3: The primary reason to reject a build is that it is untestable, or that testing it would be considered invalid. For example, suppose someone gave you a "bad build" in which several of the wrong files had been loaded. Once you know it contains the wrong versions, most groups think there is no point continuing to test that build. Every reason for rejecting a build beyond this is reached by agreement. For example, if you set a build verification test and the program fails it, the agreement in your company might be to reject the program from testing. Some BVTs are designed to include relatively few tests, covering core functionality; failure of any of these tests might reflect fundamental instability. However, several test groups include a lot of additional tests, and failure of those might not be grounds for rejecting a build. In some companies there are firm entry criteria to testing. Many companies pay lip service to entry criteria but start testing the code whether the entry criteria are met or not. Neither of these is right or wrong; it's the culture of the company. Be sure of your corporate culture before rejecting a build.
Answer4: Generally a company will have set some minimum goals or criteria that a build needs to satisfy; if it satisfies them it can be accepted, otherwise it has to be rejected. For example: no high-priority bugs; at most two medium-priority bugs; the sanity (minimum/basic acceptance) test passes; the specific changes the new build was made for pass; and testing is not blocked by non-testability. If these criteria are not met, the build can be rejected.
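A build verification test of the kind Answer1 and Answer3 describe is usually a small automated suite run against every new build before deeper testing starts. Below is a minimal sketch in Python; the two functions are hypothetical stand-ins for the application under test, since a real BVT would launch and drive the actual build.

    import unittest

    # Hypothetical stand-ins for the application under test.
    def application_starts():
        return True

    def login(user, password):
        return {"user": user} if password == "correct-password" else None

    class BuildVerificationTest(unittest.TestCase):
        """Smoke tests: if any of these fail, the build is rejected before full testing."""

        def test_application_starts(self):
            self.assertTrue(application_starts(), "application failed to start")

        def test_login_with_valid_account(self):
            self.assertIsNotNone(login("qa_user", "correct-password"),
                                 "core login path is broken")

    if __name__ == "__main__":
        unittest.main()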
What is a benchmark? How is it linked with the SDLC (Software Development Life Cycle), or are the SDLC and benchmarks unrelated? What are the components of a benchmark, and where do benchmarks fit into software testing?
A benchmark is a standard to measure against. If you benchmark an application, all future application changes will be tested and compared against the benchmarked application.

Which of the following statements about generating test cases is false?
1. Test cases may contain multiple valid conditions. 2. Test cases may contain multiple invalid conditions. 3. Test cases may contain both valid and invalid conditions. 4. Test cases may contain more than one step. 5. Test cases should contain expected results.
Answer1: All the conditions mentioned are valid and not a single statement can be called false. Here, a "condition" means the input type or situation (some may call it valid or invalid, positive or negative). A single test case can contain both input types, and the final result can then be verified when the test case is executed (it obviously should not produce the required result, as one of the input conditions is invalid); this usually happens when writing scenario-based test cases. For example, consider a web-based registration form in which the input data for some fields is positive and for some fields is negative (in a scenario-based test case). Such a screen can be tested by generating various scenarios and combinations; the final result is verified against the actual result, and the registration should not complete successfully (as one or more input types are invalid) when the test case is executed. Writing test cases also depends on the number of descriptive fields the tester has in the test case template: the more elaborate the template, the easier it is to write test cases and generate scenarios. Ultimately, writing test cases depends on the in-depth thinking of the tester, and there are no predefined or hard-coded norms for writing them.
Answer2: The answer is statement 3: "test cases may contain both valid and invalid conditions" is false. There is no restriction on a test case having multiple steps or more than one valid or invalid condition, but a test case, whether it is a feature, unit-level or end-to-end test case, cannot contain both a valid and an invalid condition; if it did, the concept of a test case verifying a single result would be diluted and lose its meaning.

Which things should be considered when testing a mobile application using black box techniques?
Answer1: It is not clear how your device/server is meant to operate, so mold these ideas to fit your application. Some highlights: Range testing - ensure that you can reconnect when leaving and returning into range. Port/IP/firewall testing - change ports and IPs to ensure that you can connect and disconnect, and modify the firewall to shut off the connection. Multiple devices - make sure that a user receives his messages with other devices connected to the same IP/port; your application should have a method of determining which device/user sent a message and return it only to that device (this should be in the message string sent and received, unless you have conferencing capabilities within the application). Cycle the power of the server and watch the mobile unit reconnect automatically. Send a message from the mobile unit and then power the unit off; when it powers back on and reconnects, ensure that the message is returned to the mobile unit.
Answer2: It is not clearly mentioned which area of the mobile application you are testing. Whether it is a simple SMS application or a WAP application, you need to specify more details. If you are working with WAP, you can download simulators from the net and test against them.

What is the general testing process?
The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), the creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests. Test data are inputs that have been devised to test the system. Test cases are input and output specifications plus a statement of the function under test. Test data can be generated automatically (simulated) or be real (live). The stages in the testing process are as follows:
1. Unit testing (code-oriented): Individual components are tested to ensure that they operate correctly. Each component is tested independently, without other system components.
2. Module testing: A module is a collection of dependent components such as an object class, an abstract data type or some looser collection of procedures and functions. A module encapsulates related components, so it can be tested without other system modules.
3. Sub-system testing (integration testing, design-oriented): This phase involves testing collections of modules that have been integrated into sub-systems. Sub-systems may be independently designed and implemented. The most common problems that arise in large software systems are sub-system interface mismatches; the sub-system test process should therefore concentrate on detecting interface errors by rigorously exercising these interfaces.
4. System testing: The sub-systems are integrated to make up the entire system. The testing process is concerned with finding errors that result from unanticipated interactions between sub-systems and system components. It is also concerned with validating that the system meets its functional and non-functional requirements.
5. Acceptance testing: This is the final stage in the testing process before the system is accepted for operational use. The system is tested with data supplied by the system client rather than with simulated test data. Acceptance testing may reveal errors and omissions in the system requirements definition (user-oriented), because real data exercises the system in different ways from the test data. It may also reveal requirements problems where the system facilities do not really meet the user's needs (functional) or the system performance (non-functional) is unacceptable. Acceptance testing is sometimes called alpha testing. Bespoke systems are developed for a single client, and the alpha testing process continues until the system developer and the client agree that the delivered system is an acceptable implementation of the requirements. When a system is to be marketed as a software product, a testing process called beta testing is often used. Beta testing involves delivering the system to a number of potential customers who agree to use it and report problems to the system developers. This exposes the product to real use and detects errors that may not have been anticipated by the system builders. After this feedback, the system is modified and either released for further beta testing or for general sale.

What are the normal practices of QA specialists with respect to software?
These are the normal practices of QA specialists with respect to software (note: these are all QC activities, not QA activities):
1. Design review meetings with the system analyst and, if possible, participation in requirements gathering.
2. Analysing the requirements and the design, and tracing the design back to the requirements.
3. Test planning.
4. Test case identification using different techniques (for web-based and desktop applications).
5. Test case writing (assigned to the test engineers).
6. Test case execution (assigned to the test engineers).
7. Bug reporting (assigned to the test engineers).
8. Bug review and analysis, so that future bugs can be prevented by establishing standards.

From low level to high level (testing in stages)
Except for small programs, systems should not be tested as a single unit. Large systems are built out of sub-systems, which are built out of modules, which are composed of procedures and functions. The testing process should therefore proceed in stages, where testing is carried out incrementally in conjunction with system implementation.
The most widely used testing process consists of five stages, grouped from low level to high level:
- Component testing (unit testing, module testing) - verification (process-oriented), using white box testing techniques: tests derived from knowledge of the program's structure and implementation.
- Integrated testing (sub-system testing, system testing).
- User testing (acceptance testing) - validation (product-oriented), using black box testing techniques: tests derived from the program specification.
However, as defects are discovered at any one stage, they require program modifications to correct them, and this may require other stages in the testing process to be repeated. Errors in program components, say, may come to light at a later stage of the testing process. The process is therefore an iterative one, with information being fed back from later stages to earlier parts of the process.

How do you test and find the difference between two images displayed in the same window?
Answer1: How are you doing your comparison? If you are doing it manually, you should be able to see any major differences. If you are using an automated tool, there is usually a comparison facility in the tool to do that.
Answer2: JasPer is an open-source utility which can be compiled into C++ and has an imgcmp function that compares JPEG files in very good detail, as long as they have the same dimensions and number of components.
Answer3: Rational has a comparison tool that may be used; Mercury most likely has a similar tool.
Answer4: The key question is whether we need a bit-by-bit exact comparison, which the current tools are good at, or an equivalency comparison: which differences between these images are not really differences? Near-match comparison has been the subject of a lot of research in printer testing, including an M.Sc. thesis at Florida Tech. It's a tough problem.
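For a simple bit-by-bit comparison of the kind Answer4 mentions, a sketch like the following can be used. It assumes the Pillow library and two hypothetical image files; an equivalency or near-match comparison would need far more than this.

    from PIL import Image, ImageChops

    # Hypothetical file names; both images must have the same size and mode.
    expected = Image.open("screenshot_expected.png").convert("RGB")
    actual = Image.open("screenshot_actual.png").convert("RGB")

    diff = ImageChops.difference(expected, actual)

    if diff.getbbox() is None:
        print("Images are bit-for-bit identical")
    else:
        # The bounding box encloses every pixel that differs.
        print("Images differ inside region:", diff.getbbox())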
Describe methods to determine if you are testing an application too much.
Answer1: While testing, you always need to keep two things in mind: the percentage of requirements coverage, and the number of bugs present plus the rate at which the bug count is falling. First, there may be a case where requirements are covered quite adequately but the number of bugs does not fall; this indicates over-testing. Second, there may be a case where parts of the application that are not affected by a change or bug fix are also being tested; this again is over-testing. Third is the case where the bug rate has dropped off sufficiently but testing still continues at the same level as before. Methods to determine whether an application is being over-tested include: 1. Comparing the rate of drop in the number of bugs against the effort invested in testing (with all requirements met); if the bug rate is falling, as it generally does in all applications, but the effort invested in man-hours does not fall, this implies over-testing. 2. Comparing achievement of the bug-rate threshold against the effort invested in testing (with all requirements met); if the bug rate has already reached the value agreed with the business and testing effort continues with little or no reduction, this implies over-testing. 3. Verifying that the impact analysis for change requests has been done properly and implemented correctly, i.e. checking that only the components of the application under test that are impacted by the new change are being tested and no other unaffected component is being tested unnecessarily. If unaffected components are being tested, this implies over-testing.
Answer2: If the bug-find rate has dropped off considerably, the test group should shift its testing strategy. One of the key problems with heavy reliance on regression testing is that the bug-find rate drops off even though there are plenty of bugs not yet found. To find new bugs, you have to run new tests. Every test technique is stronger for some types of bugs and weaker for others, and many test groups use only a few techniques. In our consulting, James Bach and I repeatedly worked with companies that relied on only one or two main techniques. When any one test technique yields few bugs, shifting to new techniques is likely to expose new problems. At some point you can use a measure that is only partially statistical: if your bug-find rate is low AND you can't think of any new testing approaches that look promising, THEN you are at the limit of your effectiveness and you should ship the product. That still doesn't mean the application is over-tested; it just means that YOU are not going to find many new bugs.
Answer3: The best way is to monitor the test defects over a period of time. Refer to William Perry's book, where he describes the concepts of 'under test' and 'over test'; the data can be plotted to see where you stand against the criteria. One criterion is to monitor the defect rate and see whether it is almost zero; a second method is to use test coverage, when it reaches 100% (or 100% requirements coverage).

Software QA/Testing Technical FAQs

Procedural software testing issues
Software testing in the traditional sense can miss a large number of errors if used alone. That is why processes like software inspections and Software Quality Assurance (SQA) have been developed. However, even testing all by itself is very time-consuming and very costly, and it ties up resources that could be used otherwise. When combined with inspections and/or SQA, or when formalized, it also becomes a project of its own, requiring analysis, design and implementation and a supporting communications infrastructure. With it, interpersonal problems arise and need managing. On the other hand, when testing is conducted by the developers, it will most likely be very subjective. Another problem is that developers are trained to avoid errors; as a result they may conduct tests that prove the product is working as intended (i.e. proving there are no errors) instead of creating test cases that tend to uncover as many errors as possible.

How do I start with testing?
Think twice (or maybe more) before you choose a career: are you interested in it, or do you just want to jump on the bandwagon? As a prerequisite, you can join a software development company as a tester if you can convince the interviewer that: 1. you have a knack for breaking software; 2. you are aware of basic quality concepts and believe in them; 3. you want to pursue testing as a career and not just try it.

OO software testing issues
A common way of testing OO software is testing-by-poking-around (Binder, 1995). In this case the developer's goal is to show that the product can do something useful without crashing. Attempts are made to "break" the product; if and when it breaks, the errors are fixed and the product is then deemed "tested". The testing-by-poking-around method of testing OO software is, in my opinion, as unsuccessful as random testing of procedural code or design: it leaves the finding of errors up to chance. Another common problem in OO testing is the idea that since a superclass has been tested, any subclasses inheriting from it don't need to be. This is not true, because by defining a subclass we define a new context for the inherited attributes. Because of the interaction between objects, we have to design test cases to test each new context, and re-test the superclass as well, to ensure those objects still work properly together. Yet another misconception in OO is that if you do proper analysis and design (using the class interface or specification), you don't need to test, or you can perform black-box testing only. However, functional tests only exercise the "normal" paths or states of the class. In order to test the other paths or states, we need code instrumentation, and it is often difficult to exercise exception and error handling without examining the source code.
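A small sketch of the "new context" point: the classes and the discount rule below are invented for illustration, but they show why a subclass needs its own tests even when the superclass already passed.

    import unittest

    # Hypothetical classes invented for illustration.
    class Account:
        def fee(self, amount):
            return amount * 0.02          # assume this behaviour passed its unit tests

    class PremiumAccount(Account):
        def fee(self, amount):
            # New context: the inherited behaviour is redefined for premium customers.
            return 0.0 if amount < 1000 else amount * 0.01

    class PremiumAccountTest(unittest.TestCase):
        def test_small_amounts_are_free(self):
            # Account.fee(500) would return 10.0, so the superclass tests say
            # nothing about this case: the subclass context must be tested itself.
            self.assertEqual(PremiumAccount().fee(500), 0.0)

        def test_large_amounts_use_reduced_rate(self):
            self.assertEqual(PremiumAccount().fee(2000), 20.0)

    if __name__ == "__main__":
        unittest.main()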
What is the purpose of black box testing?
Answer1: The main purpose of black box testing is to validate that the application works as the user will operate it, and in the environments of their systems. How else would you do system testing and integration testing? Skipping it may save time and money up front, but you may also lose quality and, eventually, customers.
Answer2: "What is the purpose of black box testing?" Black-box testing checks that the user interface and user inputs and outputs all work correctly; part of this is that error handling must work correctly. It is used in functional and system testing. "We do everything in white box testing: we check each module's function in unit testing." Who is "we"? Are you programmers or quality assurance testers? Usually unit testing is done by programmers, and white-box testing would be how they do it. "Once the unit test result is OK, it means the modules work correctly (according to the requirement documents)." Not quite. It means that on a stand-alone basis each module is okay. White box testing only tests the internal structure of the program, the code paths. Functional testing is needed to test how the individual components work together, and this is best done from an external perspective, meaning by using the software the way an end user would, without reference to the code (which is what black-box testing is). "If we do testing again in black box, will we lose time and money?" No, the opposite: you'll lose money from having to repair errors you didn't catch with white-box testing if you don't do some black-box testing. It is far more expensive to fix errors after release than to test for them and fix them early on. But again, who is "we"? The black box testers should not be the people who did the programming; they should be the QA team, plus some end users for usability testing. That said, good programmers will run some basic black-box tests before handing the application to QA for testing. This isn't a substitute for having QA do the tests, but it's a lot quicker for the programmer to find and fix an error right away than to go through the whole process of reporting a bug, fixing and releasing a new build, and retesting.

How do you create a test plan/design?
Test scenarios and/or cases are prepared by reviewing the functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, the data to be used for testing and the expected results, including database updates, file outputs and report results. Generally speaking:
* Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
* Test engineers define unit test requirements and unit test cases, and also execute unit test cases.
* It is the test team that, with assistance from developers and clients, develops test cases and scenarios for integration and system testing.
* Test scenarios are executed through the use of test procedures or scripts.
* Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
* Test procedures or scripts include the specific data that will be used for testing the process or transaction.
* Test procedures or scripts may cover multiple test scenarios.
* Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope.
* Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.
* Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
* A pre-test meeting is held to assess the readiness of the application, the environment and the data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
Inputs for this process:
* Approved test strategy document.
* Test tools, or automated test tools, if applicable.
* Previously developed scripts, if applicable.
* Test documentation problems uncovered as a result of testing.
* A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. the software design document, source code and software complexity data.
Outputs for this process:
* Approved documents of test scenarios, test cases, test conditions and test data.
* Reports of software design issues, given to software developers for correction.

What is the purpose of a test plan?
Reason number 1: We create a test plan because preparing it helps us to think through the effort needed to validate the acceptability of a software product.
Reason number 2: We create a test plan because it can and will help people outside the test group to understand the why and how of product validation.
Reason number 3: We create a test plan because, in regulated environments, we have to have a written test plan.
Reason number 4: We create a test plan because the general testing process includes the creation of a test plan.
Reason number 5: We create a test plan because we want a document that describes the objectives, scope, approach and focus of the software testing effort.
Reason number 6: We create a test plan because it includes test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.
Reason number 7: We create a test plan because one of the outputs of creating a test strategy is an approved and signed-off test plan document.
Reason number 8: We create a test plan because the software testing methodology is a three-step process, and one of the steps is the creation of a test plan.
Reason number 9: We create a test plan because we want an opportunity to review the test plan with the project team.
Reason number 10: We create a test plan document because test plans should be documented, so that they are repeatable.

Can we prepare a test plan without an SRS?
It is not always mandatory to have an SRS document in order to prepare a test plan. That kind of document hierarchy is maintained to uphold organizational standards and to have a clear understanding of things. You can prepare a test plan directly without an SRS when the requirements are clear with your clients and when your URD (User Requirements Document) is supportive enough to clarify the issues. Even without an SRS, clients will provide some information. The SRS mainly contains product information, but without it we will not know the testing effort: the SRS tells us how many cycles we are testing, the platforms we are testing on, and so on. In practice there is no harm in proceeding without it, because ultimately you send your test plan document to your client and start testing only after getting approval. (Note: the SRS is the document you get in the analysis phase of software development. The test plan is the document that contains the details of the product in terms of test strategy, scope of testing, types of tests to be conducted, risk management, the automation tool, the bug tracking tool, etc.)

What do test plan templates look like?
The test plan document template helps to generate test plan documents that describe the objectives, scope, approach and focus of a software testing effort. Test document templates are often in the form of documents divided into sections and subsections. One example of a template is a four-section document where section 1 describes the "Test Objective", section 2 the "Scope of Testing", section 3 the "Test Approach", and section 4 the "Focus of the Testing Effort". All documents should be written to a certain standard and template. Standards and templates maintain document uniformity; they also help readers learn where information is located, making it easier for a user to find what they want, and they prevent information from being accidentally omitted from a document. Once Rob Davis has learned and reviewed your standards and templates, he will use them, and he will also recommend improvements and/or additions. A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the effort needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation.

How do you test a desktop system?
You will likely have to use a programming or scripting language to interact with the service directly; you will have more control over the raw information that way. You will have to determine what the service is supposed to do and how it is supposed to interact with other applications and services. A data dictionary likely exists (it may not be called that, however); what this document does is explain what commands the service will respond to and what sort of data should be sent. You will have to use this document to do your testing. Get close to the person or people who created the document or the service and expect them to keep you in the loop when changes take place (it doesn't help anyone if you report a defect that really only reflects an expected change in the operation of the service). Desktop applications are generally designed to run and quit, so you have to be concerned with memory leaks and system usage.

How do you create a test strategy?
The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.
Inputs for this process:
* A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
* A description of the roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hour estimates and schedules.
* Testing methodology. This is based on known standards.
* Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
* Requirements that the system cannot provide, e.g. system limitations.
Outputs for this process:
* An approved and signed-off test strategy document and test plan, including test cases.
* Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

How do you estimate testing effort?
A time estimation method for the testing process (note: the following method is based on a use-case-driven specification):
Step 1: Count the number of use cases (NUC) of the system.
Step 2: Set the average number of test cases per use case (ATC) as per the test plan.
Step 3: Estimate the total number of test cases (NTC): total number of test cases = number of use cases x average test cases per use case.
Step 4: Set the average execution time (AET) per test case (ideally 15 minutes, depending on your system).
Step 5: Calculate the total execution time (TET): TET = total number of test cases x AET.
Step 6: Calculate the test case creation time (TCCT): usually we take 1.5 times TET, so TCCT = 1.5 x TET.
Step 7: Calculate the time for re-test case execution (RTCE), i.e. retesting: usually we take 0.5 times TET, so RTCE = 0.5 x TET.
Step 8: Set the report generation time (RGT): usually we take 0.2 times TET, so RGT = 0.2 x TET.
Step 9: Set the test environment setup time (TEST); this also depends on the test plan.
Step 10: Total estimated time = TET + TCCT + RTCE + RGT + TEST + some buffer.
Example: total number of use cases (NUC): 227; average test cases per use case: 10; estimated test cases (NTC): 227 x 10 = 2270; total execution time (TET): 2270 / 4 = 567.5 hours (here 4 is the number of test cases executed per hour, i.e. 15 minutes per test case); test case creation time (TCCT): 1.5 x 567.5 = 851.25 hours; retesting time (RTCE): 0.5 x 567.5 = 283.75 hours; report generation (RGT): 0.2 x 567.5 = 113.5 hours; test environment setup time (TEST): 20 hours. Total: roughly 1836 hours, plus a buffer.
What does a test strategy document contain?
The test strategy document contains test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment. The test strategy document is a formal description of how a software product will be tested. What is it developed for? It is developed for all levels of testing, as required. How is it written, and who writes it? It is the test team that analyzes the requirements, writes the test strategy and reviews the plan with the project team.

Why should QA not report to development?
Based on research from the Quality Assurance Institute, the percentage of quality groups in each reporting location is as follows. 50% report to the senior IT manager: this is the best positioning, because it gives the quality manager immediate access to the IT manager to discuss and promote quality issues; when the quality manager reports elsewhere, quality issues may not be raised to the appropriate level or receive the necessary action. 25% report to the manager of systems/programming, 15% report to the manager of operations, and 10% report outside the IT function.

In the QA team, everyone talks about process. What exactly are they talking about? Are there different types of process?
Answer1: When you talk about "process" you are generally talking about the actions used to accomplish a task. Here's an example: how do you solve a jigsaw puzzle? You start with a box full of oddly shaped pieces. In your mind you come up with a strategy for matching two pieces together (or no strategy at all, simply grabbing random pieces until you find a match), and continue until the puzzle is completed. If you were to describe the *way* you go about solving the puzzle, you would be describing the process. Some follow-up questions you might think about: How much time did it take you to solve the puzzle? Do you know of any skills, tricks or practices that might help you solve it quicker? What if you try to solve the puzzle with someone else -- does that help you go faster or slower (and why), and can you have *too* many people on this one task? To answer your second question, I'll ask *you* a question: are there different ways that people can solve a jigsaw puzzle? There are many interesting process-related questions, ideas and theories in quality assurance. Generally, the identification of workplace processes leads to questions of improvement in efficiency and productivity. The motivation is to make the processes as efficient as possible so as to incur the least time and expense, while providing a general sense of repeatability, visibility and predictability in the way tasks are performed and completed. The idea behind this is generally good, but the execution is often flawed, and that is what makes QA so interesting. When you work with people and processes, it is very different from working with processes performed by machines; some people in QA forget that distinction and become disillusioned with the whole thing. If you always remember to approach processes in the workplace with a people-centric view, you should do fine.
Answer2: There are several types of development process, for example: Waterfall; Spiral; Rapid prototyping; Cleanroom; Agile (XP, Scrum, ...).

What is a requirements test matrix?
The requirements test matrix is a project management tool for tracking and managing testing efforts, based on requirements, throughout the project's life cycle. It is a table in which requirement descriptions are put in the rows and descriptions of testing efforts are put in the column headers. The requirements test matrix is similar to the requirements traceability matrix, which is a representation of user requirements aligned against system functionality; the traceability matrix ensures that all user requirements are addressed by the system integration team and implemented in the system integration effort. The requirements test matrix is a representation of user requirements aligned against system testing; similarly, it ensures that all user requirements are addressed by the system test team and implemented in the system testing effort.
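A minimal example of such a matrix (the requirement and test case identifiers are invented for illustration) might look like this:

    Requirement                     | Test cases   | Status
    --------------------------------+--------------+---------
    R-01 User can book a ticket     | TC-05, TC-06 | Passed
    R-02 Booking is stored in DB    | TC-07        | Failed
    R-03 User can cancel a booking  | TC-11, TC-12 | Not run

Each requirement row should map to at least one test case entry; an empty cell signals a requirement with no test coverage.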
What metrics are used for test report generation?
Metrics that can be used for test report generation include:
McCabe metrics: cyclomatic complexity metric (v(G)), actual complexity metric (AC), module design complexity metric (iv(G)), essential complexity metric (ev(G)), pathological complexity metric (pv(G)), design complexity metric (S0), integration complexity metric (S1), object integration complexity metric (OS1), global data complexity metric (gdv(G)), data complexity metric (DV), tested data complexity metric (TDV), data reference metric (DR), tested data reference metric (TDR), maintenance severity metric (maint_severity), data reference severity metric (DR_severity), data complexity severity metric (DV_severity), and global data severity metric (gdv_severity).
McCabe object-oriented software metrics: encapsulation percent public data (PCTPUB), access to public data (PUBDATA), polymorphism percent of unoverloaded calls (PCTCALL), number of roots (ROOTCNT), fan-in (FANIN), quality maximum v(G) (MAXV), maximum ev(G) (MAXEV), and hierarchy quality (QUAL).
Other object-oriented software metrics: depth (DEPTH), lack of cohesion of methods (LOCM), number of children (NOC), response for a class (RFC), and weighted methods per class (WMC).
Halstead software metrics: program length, program volume, program level and program difficulty, intelligent content, programming effort, error estimate, and programming time.
Line count software metrics: lines of code, lines of comment, lines of mixed code and comments, and lines left blank.
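To make the most common of these concrete: cyclomatic complexity v(G) counts the linearly independent paths through a routine, and can be computed as v(G) = E - N + 2P (edges minus nodes plus twice the number of connected components of the control-flow graph) or, for a single routine, simply as the number of decision points plus one. A small illustration in Python; the function itself is invented for the example:

def classify(n):
    if n < 0:                                   # decision 1
        return "negative"
    while n > 100:                              # decision 2
        n -= 100
    return "small" if n < 10 else "large"       # decision 3

# Three decision points => v(G) = 3 + 1 = 4, so at least four test cases
# are needed to cover every linearly independent path through classify().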
"Efficient" means having a high ratio of output to input; working or producing with a minimum of waste. For example, "an efficient engine saves gas", or "an efficient test engineer saves time". "Effective", on the other hand, means producing or capable of producing an intended result, or having a striking effect. For example, "for rapid long-distance transportation, the jet engine is more effective than a witch's broomstick", or "for developing software test procedures, engineers specializing in software testing are more effective than engineers who are generalists".
How effectively can we implement six sigma principles in a very large software services organization?
Answer1: To implement six sigma effectively, quite a few things are needed: 1. management buy-in; 2. a dedicated team, both drivers and adopters; 3. training; 4. culture building -- if you have a pro-process culture, life is easy; 5. sustained effort over a period toward transforming people, thoughts and actions. Personally, the technical content is never the challenge; adoption is the challenge.
Answer2: Six sigma is a combination of process recommendations and a mathematical model. The name "six sigma" reflects the notion of reducing variation so much that errors -- events out of tolerance -- are six standard deviations from a desired mean. The mathematics is at the core of the process implementation. The problem is that software is not hardware: software defects are designed in, not the result of manufacturing variation. The other side of six sigma is the drive for continuous improvement. You don't need the six sigma math for this, and the concept has been around long before the six sigma movement. To improve anything, you need some type of indicator of its current state, a way to tell that it has improved, and the determination to improve it. Management support helps.
Answer3: There are different methodologies adopted under six sigma; however, it is commonly referenced from the variance-based approach. If you look at six sigma from that angle, then for software services the measurement system must fundamentally be reliable, and the industry has not reached the maturity level of manufacturing, where the approach fits to a T. The differences between software and hardware/manufacturing are slightly difficult to address. There are some areas where you can adopt six sigma in its full statistical form (e.g. in-process error rate, productivity improvements), and some areas that are difficult. The narrower the problem area, the easier it gets, even in software services, to adopt the statistical method. There are methodologies with a bundle of tools, along with statistical techniques, that are used across the full SDLC. A generic observation: six sigma helps if we look for a proper fit of the methodology to the purpose; otherwise doubts creep in.
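For the statistical side of the answer above, a hedged sketch of the usual defects-per-million-opportunities (DPMO) arithmetic, including the conventional 1.5-sigma long-term shift; the DPMO figures are illustrative, not taken from the original answer:

from statistics import NormalDist

def sigma_level(dpmo, shift=1.5):
    # Convert a defect rate (defects per million opportunities) into a
    # "sigma level", applying the conventional 1.5-sigma long-term shift.
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + shift

print(round(sigma_level(3.4), 2))     # ~6.0 -- the classic "six sigma" defect rate
print(round(sigma_level(66_807), 2))  # ~3.0 -- a three-sigma process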
Are developers smarter than testers? Any suggestions about the future prospects and the technical work involved in a testing job?
Answer1: QA and testing are thankless jobs. In a software development company the developer is a core person. As a fresh graduate, it would be good for you to work as a developer. From development you can always move to testing, QA or other admin/support tasks, but from testing or QA it is a little difficult to go back to development, though not impossible. Given the job market, it is not possible for every fresher to get into development, but you can keep searching for it. Some big companies have separate verification and validation groups where only testing projects are executed; those teams have team leads and project leads who are testing experts, and they earn good salaries, the same as development people. In technical projects the testing team does a lot of technical work. You can do certifications to improve your technical skills and market value. It all depends on your way of handling things and your interpersonal, communication and leadership skills. If it is difficult for you to get a job in development, or if you really like testing, just go ahead and try to achieve excellence as a testing professional. You will never have a job problem, and you will also get onsite opportunities. You might have to struggle for the initial few years like all other freshers.
Answer2: QA and testing are thankless only in some companies. Testing is part of development; rather than distinguish between testing and development, distinguish between testing and programming. Programming is also thankless in some companies. I am not suggesting that anyone should or should not go into testing; it depends on your skills and interests. Some people are better at programming and worse at testing, some are better at testing and worse at programming, and some are not suited for either role. You should decide what you are good at and what fascinates you. What type of work would make you WANT to stay at work for 60-80 hours a week for a few years because it is so interesting? There are excellent testing jobs out there, but there are bad ones too (in testing and in programming, both). I have not seen any certification in software testing that improves the technical skill of anyone; apparently, testing certification improves a tester's market value in some markets. Most companies mean testing when they say "QA". Or they mean testing plus metrics, where the metrics tasks are low-skill data collection and basic data analysis rather than thinking up and justifying measurement systems appropriate to the questions at hand. In terms of skill, salary, intellectual challenge and value to the company, testing plus metrics is the same as testing. Some companies see QA more strategically and hire more senior people into those groups. Here is a hint: if you can get a job in a group called QA with less than 5 years of experience, it's a testing group or something equivalent to it.
Answer3: No job is inherently great or menial. As long as you like and love what you do, everything in it seems interesting. I started as a developer and slowly moved to testing. I find testing to be more challenging and interesting. I have a solid 6 years of testing experience, and many senior people on my team are professional testers.
Answer4: Testing is low-skill work in many companies. Scripted testing of the kind pushed by ISEB, ISTQB, and the other certifiers is low-skill, low-prestige, offers little return value to the company that pays for it, and is often pushed to offsite contracting firms because it isn't worth doing in-house. In many cases, it is just a process of "going through the motions" -- pretending to do testing (and spending a lot of money in the pretense) without really looking for any important information and without creating any artifacts that will be useful to the project team. The only reason to take a job doing this kind of work is to get paid for it; doing it for too long is bad for your career. There are much higher-skill ways to do testing. Some of them involve partial automation (writing or using programs to help you investigate the product more effectively), but automation tools are just tools; they are often used just as mind-numbingly and valuelessly as scripted manual testing. When you're offered this kind of position, try to find out how much judgment you will have to exercise in the analysis of the product under test and the ways it provides value to users and other stakeholders, and in the design of tests to check that value and to check for other threats to value (security failures, performance failures, usability failures, etc.), and how much the position will help you develop your judgment. If you will become a more skilled and more creative investigator with a better collection of tools to investigate with, that might be interesting. If not, you will be marking time (making money but learning little) while the rest of the technical world learns new ideas and skills.
What's the difference between priority and severity?
The word "priority" is associated with scheduling, and the word "severity" is associated with standards. "Priority" means something is afforded or deserves prior attention; a precedence established by urgency or order of importance. Severity is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness, for example a severe code of behavior. The words priority and severity come up in bug tracking. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it. The fixes are based on project priorities and the severity of bugs. The severity of a problem is defined in accordance with the end client's risk assessment and recorded in their selected tracking tool. Buggy software can severely affect schedules, which in turn can lead to a reassessment and renegotiation of priorities.
How do you test a web-based application that has recently been modified to support double-byte character sets?
Answer1: Apply black box testing techniques (boundary analysis, equivalence partitioning).
Answer2: Japanese and other East Asian customers are very particular about the look and feel of the UI, so make sure there is no truncation anywhere. One major difference between Japanese and English is that there is no concept of spaces between words in Japanese. Line breaks in English usually happen at a space, so in Japanese this causes a lot of problems with text wrapping, and if you have a table with a defined column length you might see text appearing vertically. On the functionality side: 1. Check the date format and number format (they should follow the native locale). 2. Check that your system accepts 2-byte numerals and characters. 3. If a field has a boundary of 100 characters, it should accept the same number of 2-byte characters as well. 4. The application should work on a native (Chinese, Japanese, Korean) OS as well as on an English OS with the language pack installed. Writing a high-level test plan for 2-byte support will require some knowledge of the application and its architecture.
What is the difference between software fault and software failure?
Software failure occurs when the software does not do what the user expects to see. A software fault, on the other hand, is a hidden programming error. A software fault becomes a software failure only when the exact computation conditions are met and the faulty portion of the code is executed on the CPU. This can occur during normal usage, or when the software is ported to a different hardware platform, or when the software is ported to a different compiler, or when the software gets extended.
Before creating test cases to "break the system", a few principles have to be observed:
Testing should be based on user requirements, in order to uncover any defects that might cause the program or system to fail to meet the client's requirements.
Testing time and resources are limited, so avoid redundant tests.
It is impossible to test everything: exhaustive tests of all possible scenarios are impossible, simply because of the many different variables affecting the system and the number of paths a program flow might take.
Use effective resources to test. This means using the most suitable tools, procedures and individuals to conduct the tests; the test team should use tools they are confident and familiar with. Testing procedures should be clearly defined. Testing personnel may be a technical group of people independent of the developers.
Test planning should be done early, because test planning can begin independently of coding, as soon as the client requirements are set.
Testing should begin at the module level: the focus of testing should be concentrated on the smallest programming units first and then expanded to other parts of the system.
We look at software testing in the traditional (procedural) sense and then describe some testing strategies and methods used in an object-oriented environment. We also introduce some issues with software testing in both environments.
Black box testing techniques like boundary value analysis and equivalence partitioning: during which phases of testing are they used, if possible with examples?
Answer1: Boundary value analysis and equivalence partitioning can be used in unit or component testing, and are generally used in system testing. For example, suppose you have a module designed to work out the tax to be paid: an employee has £4000 of salary tax free, the next £1500 is taxed at 10%, the next £28000 is taxed at 22%, and any further amount is taxed at 40%. You must define test cases that exercise valid and invalid equivalence classes: any value up to £4000 is tax free; any value above £4000 up to £5500 is taxed at 10%; any value above £5500 up to £33500 is taxed at 22%; any value above £33500 is taxed at 40%. The boundary values are 4000, 4001, 5500, 5501, 33500 and 33501. (A runnable sketch of this example follows the next answer.)
Answer2: Boundary value analysis and equivalence partitioning are used to prepare positive and negative test cases. Equivalence partitioning: if you want to validate a text box that accepts values between 2000 and 10000, then the test inputs are partitioned as follows: 1. below 2000; 2. from 2000 to 10000; 3. above 10000. Boundary value analysis then checks the input values at the boundaries: in the above case you check whether the input value is on the boundary, just above the boundary, or just below the boundary.
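A minimal Python sketch of the tax example above, assuming marginal (bracketed) taxation as described; the function name and the chosen test values are illustrative only:

def tax_due(salary):
    # Brackets from the example: first 4000 free, next 1500 at 10%,
    # next 28000 at 22%, and the remainder at 40%.
    brackets = [(4000, 0.0), (5500, 0.10), (33500, 0.22), (float("inf"), 0.40)]
    tax, lower = 0.0, 0
    for upper, rate in brackets:
        if salary > lower:
            tax += (min(salary, upper) - lower) * rate
        lower = upper
    return round(tax, 2)

# Boundary-value checks: each bracket edge and the value just above it.
for salary in (4000, 4001, 5500, 5501, 33500, 33501, 50000):
    print(salary, tax_due(salary))
# 4000 -> 0.0, 4001 -> 0.1, 5500 -> 150.0, 5501 -> 150.22,
# 33500 -> 6310.0, 33501 -> 6310.4, 50000 -> 12910.0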
How do you test the bandwidth usage of a client/server application?
Bandwidth utilization: in a client/server model you will be most concerned about bandwidth usage if your application is a web-based one. It is certainly a concern when throughput and data transfer come into the picture. One option is a load and stress testing tool such as RadView's WebLoad (a demo version is available): you can record the scenarios of a normal user over variable connection speeds and then run them for hours to learn about bandwidth utilization, throughput, data transfer rate, hits per second, and so on; there is a huge list of parameters that can be tested over any number of combinations.
How do you insert a checkpoint on an image to check the enabled property in QTP?
Answer1: Since you say that all the images act as push buttons, you can check the enabled/disabled property. If you are not able to find that property, go to the object repository for that object and use Add/Remove to add the available properties to it. If the object is recognized as an image, you may need to check the visible/invisible property instead, as there are no enable or disable properties for the image object.
Answer2: The image checkpoint does not have any property to verify enabled/disabled. One thing you need to check is: find out from the developer whether he is showing different images for the active and inactive states, i.e. a greyed-out image. That is the only way a developer can show activation/deactivation if he is using an "image". Otherwise he might be using a button having a heads-up with an image; if it is a button displayed with the heads-up as an image, you would need to use the object properties as a checkpoint.
How do you write test cases?
When I write test cases, I concentrate on one requirement at a time. Then, based on that one requirement, I come up with several real-life scenarios that are likely to occur in the use of the application by an end user. When I write test cases, I describe the inputs, action or event, and their expected results, in order to determine whether a feature of an application is working correctly. To make the test case complete, I also add particulars, e.g. test case identifiers, test case names, objectives, test conditions (or setups), input data requirements (or steps), and expected results. Additionally, if I have a choice, I like writing test cases as early as possible in the development life cycle, because, as a side benefit of writing test cases, I am often able to find problems in the requirements or design of an application, and because the process of developing test cases makes me think through the operation of the application completely.
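For example, a bare-bones test case record built from the fields listed above; the identifier, account and steps are invented for illustration:

Test case ID:        TC-LOGIN-001
Test case name:      Valid login with correct credentials
Objective:           Verify that a registered user can log in
Test conditions:     User "demo_user" exists and is active (assumed test account)
Steps / input data:  1. Open the login page  2. Enter "demo_user" and a valid password  3. Click "Log in"
Expected result:     The user lands on the home page and a welcome message is displayed
Actual result:       (recorded at execution time, with pass/fail status)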
Differences between system testing and user acceptance testing?
Answer1: System testing is the process of testing an integrated system to verify that it meets specified requirements. Acceptance testing is formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or another authorized entity to determine whether or not to accept the system. First, I don't classify incidents or defects by the phase of the software development process or testing; I prefer to classify them by their type, e.g. requirements, features and functionality, structural bugs, data, integration, etc. The value of categorising faults is that it helps us focus our testing effort where it is most important, and we should have distinct test activities that address the problems of poor requirements, structure, etc. You don't do user acceptance testing only because the software is delivered; take care with the concepts of testing!
Answer2: In my company we do not perform user acceptance testing; our clients do. Once our system testing is done (and other validation activities are finished) the software is ready to ship. Therefore any bug found in user acceptance testing would be issued a tracking number and taken care of in the next release; it would not be counted as part of the system test.
Answer3: This is what I feel user acceptance testing is; I hope you find it useful. Definition: user acceptance testing is formal testing conducted to determine whether a software product satisfies its acceptance criteria and to enable the buyer to determine whether to accept the system. Objective: user acceptance testing is designed to determine whether the software is fit for the user to use, whether it fits into the user's business processes, and whether it meets his or her needs. Entry criteria: the end of the development process, after the software has passed all the tests that determine whether it meets the predetermined functionality, performance and other quality criteria. Exit criteria: verification that the documents delivered are adequate and consistent with the executable system, and that the software system meets all the requirements of the customer. Deliverables: user acceptance test plan, user acceptance test cases, user guides/documents, and user acceptance test reports.
Answer4: System testing is done by QA at the development end. It is done after integration is complete and all integration P1/P2/P3 bugs are fixed; the code is frozen and no more code changes are taken. Then all the requirements are tested and all the integration bugs are verified. UAT is done by QA (trained to act like end users); all the requirements are tested and the whole system is verified and validated.
What is the difference between a test plan and a test scenario?
Difference number 1: A test plan is a document that describes the scope, approach, resources, and schedule of intended testing activities, while a test scenario is a document that describes both typical and atypical situations that may occur in the use of an application.
Difference number 2: Test plans define the scope, approach, resources, and schedule of the intended testing activities, while test procedures define test conditions, data to be used for testing, and expected results, including database updates, file outputs, and report results.
Difference number 3: A test plan is a description of the scope, approach, resources, and schedule of intended testing activities, while a test scenario is a description of the test cases that ensure that a business process flow, applicable to the customer, is tested from end to end.
Can you give me an example of reliability testing?
For example, our products are defibrillators. From direct contact with customers during the requirements-gathering phase, our sales team learns that a large hospital wants to purchase defibrillators with the assurance that 99 out of every 100 shocks will be delivered properly. In this example, the fact that our defibrillator is able to run for 250 hours without any failure in order to demonstrate reliability is irrelevant to these customers. In order to test for reliability we need to translate terminology that is meaningful to the customers into equivalent delivery units, such as the number of shocks, and describe the customer needs in a quantifiable manner using the customer's own terminology. For example, our quantified reliability testing goal becomes: our defibrillator will be considered sufficiently reliable if 10 (or fewer) failures occur in 1,000 shocks. Then, for example, we use a test/analyze/fix technique and couple reliability testing with the removal of errors. When we identify a failed delivery of a shock, we send the software back to the developers for repair. The developers build a new version of the software, and then we deliver another 1,000 shocks (into dummy resistor loads). We track failure intensity (i.e. failures per 1,000 shocks) in order to guide our reliability testing, determine the feasibility of the software release, and determine whether the software meets our customers' reliability requirements.
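A minimal sketch of the failure-intensity bookkeeping described above; the shock counts and the goal of 10 failures per 1,000 shocks come from the example, while the function names are invented:

def failure_intensity(failures, shocks, per=1000):
    # Failures per 1,000 delivered shocks.
    return failures * per / shocks

def release_feasible(failures, shocks, goal=10, per=1000):
    # Goal from the example: at most 10 failures per 1,000 shocks.
    return failure_intensity(failures, shocks, per) <= goal

print(failure_intensity(7, 1000))   # 7.0  -> within the reliability goal
print(release_feasible(12, 1000))   # False -> another test/analyze/fix cycle is needed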
Need a function to find all the positions? For example, given the string "abcd, efgh,ight", break the string into fields wherever the delimiter is found.
Answer1: And return the delimited fields as a list of strings? That sounds like a Perl-style split function. It could be built as one of your own, containing:
[ ] // knocked this together in a few minutes; I am sure there is a much more efficient way of doing things,
[ ] // but this cobbles together several built-in functions
[-] LIST OF STRING Split(STRING sDelim, STRING sData)
[ ]     LIST OF STRING lsReturn
[ ]     STRING sSegment
[-]     while MatchStr("*{sDelim}*", sData)
[ ]         sSegment = GetField(sData, sDelim, 1)
[ ]         ListAppend(lsReturn, Trim(sSegment))
[ ]         // crude chunking:
[ ]         sSegment += ","
[ ]         sData = GetField(sData, sSegment, 2)
[-]     if Len(sData) > 0
[ ]         ListAppend(lsReturn, Trim(sData))
[ ]     return lsReturn
Answer2: You could use something like this (I hope I am understanding the problem):
[+] testcase T1()
[ ]     string sTest = "hello, there I am happy"
[ ]     string sTest1 = GetField(sTest, ",", 2)
[ ]     Print(sTest1)
[ ]     // this prints "there I am happy"
[ ]     // GetField(sTest, ",", 1) would print "hello", and so on
Answer3: Below is a function that returns all the fields (a list of strings):
[+] LIST OF STRING ConvertToList(STRING sStr, STRING sDelim)
[ ]     INTEGER iIndex = 1
[ ]     LIST OF STRING lsStr
[ ]     STRING sToken = GetField(sStr, sDelim, iIndex)
[ ]
[+]     if (iIndex == 1 && sToken == "")
[ ]         iIndex = iIndex + 1
[ ]         sToken = GetField(sStr, sDelim, iIndex)
[ ]
[+]     while (sToken != "")
[ ]         ListAppend(lsStr, sToken)
[ ]         iIndex = iIndex + 1
[ ]         sToken = GetField(sStr, sDelim, iIndex)
[ ]     return lsStr
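For comparison, a hedged Python equivalent of the same idea; Python's built-in str.split already does the heavy lifting, and the trimming mirrors the 4Test versions above:

def convert_to_list(s, delim=","):
    # Split on the delimiter, trim whitespace, and drop empty fields.
    return [field.strip() for field in s.split(delim) if field.strip()]

print(convert_to_list("abcd, efgh,ight"))   # ['abcd', 'efgh', 'ight']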
What is the difference between monkey testing and smoke testing?
Difference number 1: Monkey testing is random testing, while smoke testing is non-random testing. Smoke testing deliberately exercises the entire system from end to end, with the goal of exposing any major problems.
Difference number 2: Monkey testing is performed by automated testing tools, while smoke testing is usually performed manually.
Difference number 3: Monkey testing is performed by "monkeys", while smoke testing is performed by skilled testers.
Difference number 4: "Smart monkeys" are valuable for load and stress testing, but not very valuable for smoke testing, because they are too expensive for smoke testing.
Difference number 5: "Dumb monkeys" are inexpensive to develop and are able to do some basic testing, but if we used them for smoke testing they would find few bugs.
Difference number 6: Monkey testing is not thorough testing, while smoke testing is thorough enough that, if the build passes, one can assume the program is stable enough to be tested more thoroughly.
Difference number 7: Monkey testing either does not evolve, or evolves very slowly. Smoke testing, on the other hand, evolves as the system evolves from something simple to something more thorough.
Difference number 8: Monkey testing takes "six monkeys" and a "million years" to run. Smoke testing, on the other hand, takes much less time to run, from a few seconds to a couple of hours.
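As a hedged illustration of the "dumb monkey" idea, here is a tiny random-input driver in Python; the function under test and the input ranges are invented for the example:

import random

def apply_discount(price, percent):
    # Function under test (invented): returns the discounted price.
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price * (100 - percent) / 100

random.seed(42)                      # reproducible "monkey"
for _ in range(10_000):
    price = random.uniform(-1e6, 1e6)
    percent = random.uniform(-50, 150)
    try:
        apply_discount(price, percent)
    except ValueError:
        pass                         # expected rejection of out-of-range input
    except Exception as exc:         # anything else is a crash worth logging
        print("crash:", price, percent, exc)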
Is it a good thing to share test cases with customers?
That's generally a good thing, but the question is why do they want to see them. One potential problem is that they may be considering changing outsourcing firms and want to reuse the test cases elsewhere; if that can be prevented, please do so. Another problem is that they may want to micro-manage your testing efforts: it's one thing to audit your work to prove to themselves that you're doing a good job, and an entirely different matter if they intend to tell you that you don't have enough test coverage on the activity of module foo and far too much coverage on module bar, please correct it. Another issue may be that they are seeking litigation and need proof that you were negligent in some area of testing. It's never a bad thing to have your customer wanting to be involved, unless you're a large company and this is a small (in terms of sales) customer. What are your concerns about this? Can you give more information on your situation and the customer's?
How do you read data from a Telnet session?
Declared:
[+] window DialogBox Putty
[ ]     tag "* - PuTTY"
[ ]
[ ] // Capture the screen contents and return them as a list of strings
[+] LIST OF STRING getScreenContents()
[ ]     LIST OF STRING ClipboardContents
[ ]     // open the system menu and select the copy-all-to-clipboard menu command
[ ]     this.TypeKeys(
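A different, self-contained way to read from a telnet session (not the SilkTest/PuTTY approach shown above) is Python's telnetlib; note that the module is deprecated in recent Python versions, and the host, credentials and commands below are placeholders:

import telnetlib   # deprecated since Python 3.11, removed in 3.13

HOST = "192.0.2.10"          # placeholder test host
tn = telnetlib.Telnet(HOST, 23, timeout=10)
tn.read_until(b"login: ")
tn.write(b"testuser\n")      # placeholder credentials
tn.read_until(b"Password: ")
tn.write(b"secret\n")
tn.write(b"uname -a\n")      # run a command whose output we want to capture
tn.write(b"exit\n")
print(tn.read_all().decode("ascii", errors="replace"))  # everything the session printed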