
RIT 8.5.0 Performance Testing Training: Student's Guide


Performance Testing with IBM Rational Integration Tester
Note

Before using this information and the product it supports, read the information in "Legal Notices" on page 38.

© Copyright IBM Corporation 2001, 2013.
Contents

1  Introduction ........................................................ 3
2  Background ......................................................... 4
3  Performance test infrastructure .................................... 5
   3.1  Introduction .................................................. 5
   3.2  Performance test controller ................................... 6
   3.3  Engines ....................................................... 6
   3.4  Probes ........................................................ 7
   3.5  Agents ........................................................ 8
4  Architecture School ................................................ 9
   4.1  Introduction .................................................. 9
   4.2  Basic system setup ............................................ 9
   4.3  Agent and engine setup ....................................... 10
   4.4  Probe setup .................................................. 11
5  Creating the load generating test ................................. 13
   5.1  Reusing functional test resources ............................ 13
   5.2  Basic setup .................................................. 14
   5.3  Timed sections ............................................... 16
6  Creating performance tests ........................................ 18
   6.1  Introduction ................................................. 18
   6.2  Initial setup ................................................ 18
   6.3  Adding tests ................................................. 19
   6.4  Engine settings .............................................. 20
   6.5  Managing probes .............................................. 20
7  Running performance tests and analyzing results ................... 22
   7.1  Running the test ............................................. 22
   7.2  Viewing results .............................................. 23
   7.3  Multiple data sets ........................................... 25
8  Data driving performance tests .................................... 27
   8.1  Differences from functional tests ............................ 27
   8.2  Driving a load generating test with external data ............ 27
9  Load profiles ..................................................... 29
   9.1  Performance testing scenarios ................................ 29
   9.2  Constant growth .............................................. 30
   9.3  Externally defined load profiles ............................. 30
10 Advanced topics ................................................... 33
   10.1  Background tests ............................................ 33
   10.2  Log measurement ............................................. 33
   10.3  Creating the measurement test ............................... 34
   10.4  Adding the measurements to a performance test ............... 37
11 Legal notices ..................................................... 38
1 Introduction

This document serves as a training manual to familiarize the user with the performance testing capabilities of IBM® Rational® Integration Tester. It is expected that the reader has already completed the basic Rational Integration Tester training and understands the Rational Integration Tester workflow.

In this course you will perform the following tasks:

- Create performance tests
- Set up agents, probes, and engines to execute and monitor performance tests
- Analyze the results of performance tests
- Manage the amount of load driven by a performance test over time
- Data-drive performance tests
2 Background

When testing a service-oriented architecture (SOA), there will be times when simply verifying functional requirements is not enough. Many systems come with service level agreements (SLAs) that state a minimum level of performance that must be satisfied. This level of performance might have a number of components; in particular, system uptimes and message response times will be important. However, it is not enough to check that the system can respond to a single message within a given amount of time; the system needs to hold up under a certain amount of load as well. This load might take the form of a large number of messages, extreme message rates, or large amounts of data. In addition, accurately modeling the load on the system might require you to generate message requests from a number of different sources.

For experienced performance testers, this will all be fairly familiar. However, SOA environments bring challenges on top of the traditional client-server model. For example, services are often shared among several applications, and failure can occur anywhere along the transaction path. Consider both the number of services in place and the many points at which they intersect, any one of which might not be performant; how can you ensure that performance levels satisfy the nonfunctional requirements?

In addition, there is a fundamental difference between SOA performance testing and a traditional, client-server approach. Performance testers who are familiar with the traditional approach tend to talk in terms of the number of users, or "virtual users", required to generate the load. They also tend to be concerned with end-to-end response times: the response time experienced by an end user. This end-to-end performance testing is typically run against a functionally proven, complete system.

SOA performance testers are still interested in response times, but are more interested in the volume of messages sent between components; there is no requirement to wait until the system has been fully assembled, or for a front-end GUI to be created. Hence, the SOA performance tester can begin testing much earlier.

When running performance tests, you will normally be faced with the following questions:

- Does the system's performance satisfy the requirements or SLAs?
- At which point will the performance degrade?
- Can the system handle sudden increases in traffic without compromising response time, reliability, and accuracy?
- Where are the system bottlenecks?
- What is the system's breakpoint?
- Will the system recover (and when)?
- Does the system's performance degrade if it runs for an extended period at relatively low levels of load?
- Are there any capacity issues that come from processing large amounts of data?
3 Performance test infrastructure

In this chapter, you will:

- Look at the distributed nature of a performance test infrastructure
- See how engines are used to run actions within a performance test
- Examine how data needs to be recorded from the system under test, and how this can be done with probes

3.1 Introduction

Before creating performance tests, it is important to review how the infrastructure of Rational Integration Tester is set up. As in regular integration tests, you have the Rational Integration Tester GUI and the results database. However, these work slightly differently in the context of a performance test. While in a regular integration test the GUI and the test are normally run from the same computer, a performance test can be run from another computer, or might be distributed across a number of other computers. This means that the Rational Integration Tester software, as presented by the GUI, also provides a test controller to manage any remote systems involved in the performance test. In addition, the results database, which is optional for an integration test, becomes mandatory for performance tests. This is due to the higher volume of data recorded during a performance test: it cannot be easily presented in a simple console window, but needs to be summarized, and possibly manipulated.

Besides the GUI, test controller, and results database, there are also three new items in the Rational Integration Tester infrastructure: engines, probes, and agents. They fit together in a framework to run tests and monitor performance across a number of different systems.
3.2 Performance test controller

The performance test controller is the component used to start and manage the performance test. Usually, it will be the main Rational Integration Tester program. However, it might also be one of the methods of executing a test outside the GUI, such as the RunTests command. When a performance test is launched, the controller will set up communications with each of the agents. It will send a copy of the project containing the performance test to each of the agents, verify that the agents can report back correctly, and manage the agents as the performance test is executing.

3.3 Engines

An engine is the process that actually runs a test in Rational Integration Tester. When carrying out integration testing with Rational Integration Tester, an engine exists in a supporting role, and runs the tests on behalf of the controlling instance of Rational Integration Tester (the instance that is running the main performance test) on the user's computer.

When performance testing, the engine is separated from the controlling instance of Rational Integration Tester. The engine can be on the same computer as Rational Integration Tester, or it can be on another computer. In fact, there might be more than one engine, spread across multiple computers. If there is more than one engine, test iterations are spread across the available engines. For example, in a performance test that is running 40 tests per second with 2 engines, each engine would be running 20 tests per second. The distribution of the tests is handled by the controlling instance of Rational Integration Tester.

Using multiple engines lets you solve a number of problems. Most simply, if one computer is not capable of generating a high enough load for a performance test, the load can be split across multiple computers. Secondly, using multiple engines gives you the capability to distribute the load across multiple endpoints. For example, if you need to simulate requests arriving from different parts of the world, or from different networks, you can set up engines in those locations to satisfy the demands of the performance test.

During a performance test, the engines will be writing information back to the results database, reporting on timing and other statistics. They will also be communicating their progress back to the performance test controller.

3.4 Probes

With such complex and heterogeneous platforms, it can be difficult to know what to measure apart from transaction response times, and it will be impossible to measure everything. Probes are the tools used by Rational Integration Tester to gather statistics from the system under test. A variety of probes are available to the user:

- System Statistics Probe
- Windows Performance Monitor Probe
- TIBCO BusinessWorks Probe
- TIBCO Rendezvous Probes
- TIBCO EMS Probe
- Sonic MQ Probe
- webMethods Broker Probe
- webMethods Integration Server Probe
- JMX Probe

The probes are deployed on the systems that you want to measure, and multiple probes can coexist on one system. Recording statistics with Rational Integration Tester's probes gives you access to much more information than just the transaction response times, which can help you determine the cause of poor performance.
If response times become too long past a certain number of requests per second, you can use probes to see whether this is due to load on the CPU, excess memory usage, message queues growing larger, or another cause.

Whichever probes you choose, they will record statistics during the performance test, and write them directly to the results database. These writes are set up as a low-priority task, so that they impact system performance as little as possible.
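The probe behaviour just described (sample at a fixed interval on a low-priority background task so the system under test is barely disturbed) can be pictured with a small sketch. This is an illustrative model only, not Rational Integration Tester code: the function names are invented, and a real probe writes to the results database rather than to an in-memory list.

```python
import threading
import time

def start_probe(sample_fn, interval_s: float, sink: list) -> threading.Event:
    """Call sample_fn every interval_s seconds on a daemon thread,
    appending (timestamp, value) records to sink. Returns an Event
    that stops the probe when set."""
    stop = threading.Event()

    def loop():
        # wait() doubles as the sleep, and wakes early if stopped
        while not stop.wait(interval_s):
            sink.append((time.time(), sample_fn()))

    # daemon=True: the probe never blocks shutdown, mimicking a
    # low-priority background task.
    threading.Thread(target=loop, daemon=True).start()
    return stop
```

A System Statistics style probe would pass a sample_fn that reads CPU or memory figures, with an interval of 1 second as configured later in this course.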
3.5 Agents

Engines drive the tests, and probes monitor them. However, both need a host controlling them and talking to the instance of Rational Integration Tester that is controlling the performance test. This role is played by the Rational Integration Tester Agent. The agent runs on each computer that has an engine or a probe, handling the communications with Rational Integration Tester. The agent can host an engine, a probe, or both at the same time; in fact, a single agent can also handle multiple probes or engines.

The agent is installed with the Rational Test Performance Server (RTPS) or the Rational Test Virtualization Server (RTVS). It can be run by hand, or set up as a service on its host system. Consequently, each computer that requires an agent requires an installation of RTPS or RTVS.

Following the installation of RTPS or RTVS, the agent will need to be configured in the Library Manager. This configuration follows the same procedure as that of Rational Integration Tester itself, and so it will not be covered in this training course.

Note: If you are running through this training material on a cloud instance or virtual machine, all parts of the system will be on a single machine. This is purely for ease of configuration and does not reflect a real-world scenario.
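Before moving on, the load-splitting behaviour of engines from section 3.3 (for example, 40 tests per second over 2 engines becoming 20 per engine) can be sketched as an even round-robin assignment. This is a model for illustration only, not Rational Integration Tester's actual scheduler, and the names are invented.

```python
# Illustrative sketch: spread a target iteration rate as evenly as
# possible across the available engines.

def distribute_iterations(total_per_second: int, engines: list) -> dict:
    """Assign iterations per second to each engine."""
    base, remainder = divmod(total_per_second, len(engines))
    # The first `remainder` engines take one extra iteration per second.
    return {
        engine: base + (1 if i < remainder else 0)
        for i, engine in enumerate(engines)
    }

# distribute_iterations(40, ["engine1", "engine2"])
# -> {"engine1": 20, "engine2": 20}, matching the example in section 3.3
```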
4 Architecture School

In this chapter, you will:

- Configure Rational Integration Tester to connect to the system under test
- Set up an engine to run the performance test, with an agent to host it
- Configure a probe to measure the performance of the system during testing

4.1 Introduction

Creating a model of the system under test for performance tests is very similar to creating a model for functional tests. However, in addition to modeling the system under test, the Architecture School perspective is also used to provide configuration data for the agents, engines, and probes in the system. Add this information to your Rational Integration Tester project after the normal process of modeling the system under test; configuration for the performance testing components is then carried out in the Physical View of Architecture School. Note that because this configuration is physical, you might need to configure new components when setting up new environments.

4.2 Basic system setup

In this example, you will be testing a web service: a simple login service that takes a username and password, and returns a login token. This web service is provided as an example with the Rational Test Control Panel.

1. To view the WSDL for the example web service, navigate to the examples page of your Rational Test Control Panel. This should be visible in your web browser at http://localhost:7819/RTCP/examples.
2. Click Login WSDL and copy the URL of the page that opens.
3. Open Rational Integration Tester and start a new project. Note that you will need to use a results database; one is already specified on the cloud instance by default, so you can keep using it, but use the Test Connection button to make sure that it is working correctly. If you are not using a cloud instance, ask your instructor for the results database settings.
4. Rational Integration Tester will open to the Logical View of Architecture School.
5. Press Ctrl+V to paste the URL of the WSDL into the Logical View.
6. The Create a New External Resource window will open. It will have recognized that you have added the login WSDL. Click Next.
7. On the second screen of the wizard, make sure that Create a new component is selected, and then click Next again.
8. On the final screen, select the option Perform the synchronization but don't switch views, then click Finish.
9. The Logical View should update to show that Rational Integration Tester has analyzed the WSDL and modeled its contents.
10. Switch to the Physical View of Architecture School. You should see that a new HTTP server is listed here as well.

4.3 Agent and engine setup

1. The agent and engine are already set up and running on your computer, so there is no need to set up the software outside Rational Integration Tester. However, you still need to tell Rational Integration Tester which agents and engines to use for performance tests. To start this process, make sure you are still in the Physical View, press the button at the left of the Physical View toolbar, and select the Agent option.
2. In the Host field, enter localhost. For the Port number, make sure that the default setting of 4476 is used.
3. An engine called default is automatically attached to the agent; leave this as is, and click OK to close the window and complete the agent configuration.

Note: As your HTTP server is set up on localhost, the agent needs to use localhost as well. If the HTTP server had used the hostname of the computer, then the agent would have needed to do the same. As all resources are mapped by their network address, this makes it easier for a performance test to decide which hosts need monitoring when setting up probes for that test.
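Before continuing, it can be worth checking that the endpoints assumed above are actually listening: the Rational Test Control Panel on port 7819, and the agent on port 4476. The helper below is a generic TCP reachability check written for this guide, not part of Rational Integration Tester, and it assumes the default ports from the steps above.

```python
import socket

def can_connect(host: str, port: int, timeout_s: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

# Typical checks for the setup in this chapter:
#   can_connect("localhost", 7819)  # Rational Test Control Panel
#   can_connect("localhost", 4476)  # Rational Integration Tester agent
```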
4.4 Probe setup

The next step is to set up the probe on the same computer as the agent. Remember that each probe needs to be running on an agent, or tests using that probe will fail. Also, probes can be set up on individual hosts, or on services running on those hosts; for example, the System Statistics probe runs on a particular host, but most of the technology-specific probes need to be attached to a particular process on that host. If you need to use those probes in the future, they can be configured by editing the properties of that physical component; the process is very similar to what you will see in this exercise.

1. In the Physical View, each physical component is visible in a tree underneath a Subnet node and a Host node. Double-click the host (which should be labeled localhost) to bring up its properties.
2. Once the properties dialog for your host has appeared, switch to the Probes tab.
3. In this course, you will be using the System Statistics probe. Select it so that it can be configured.
4. The first, and most important, setting to note is the Hosting Agent at the very bottom of the window. It tells you which agent is running this probe. In this course, you only have one agent to deal with, but make sure that the agent for this probe is set to the agent created in the previous exercise. If the agent is not set here, any performance tests that attempt to use this probe will fail.
5. Within the same window, make sure that the Statistics collection interval is set to 1 second, and that the Processes section of the window is set to Monitor all processes.
6. Click OK to close the window. Your system is now set up to gather statistics during performance tests.
5 Creating the load generating test

In this chapter, you will:

- Create a functional test that can be used as a load generating test
- Encounter the test actions created for use within performance tests
- Create a timed section within a test to capture timing and status information

5.1 Reusing functional test resources

One of the advantages of using Rational Integration Tester for SOA testing is that functional tests can easily be refactored to run within a performance scenario. This is important because, when evaluating the performance of the system, it is not enough to just send a request and measure the time it takes for a response to arrive. For example, if a web service operation rejects input and sends back a SOAP Fault message, the time it takes might be significantly different from the time it takes to properly process a request and return a valid response. If a test does not truly validate the outcome of an operation, it will provide an inaccurate view of the true system performance.

In this case, you will create a simple functional test that will be used as the basis for your performance tests. This functional test will be used as a load generating test within your main performance test.

When editing the load generating test, there are several new actions that can be used. These actions are ignored when the test runs as a functional test; they are only executed when it runs as part of a performance test.

Performance actions:

- Begin Timed Section: marks the beginning of a timed section for a performance test.
- End Timed Section: marks the end of a timed section for a performance test.
- Log Measurement: logs data to the results database during a performance test.
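As the note that follows explains, the Initialise, Test Steps, and Tear Down sections of a load generating test behave differently under load: Initialise and Tear Down each run once, while the Test Steps run for every iteration. A minimal sketch of that lifecycle (all names here are invented for illustration; this is a model, not Rational Integration Tester's API):

```python
# Illustrative model of how a load generating test's sections behave
# during a performance run: Initialise once, Test Steps per iteration,
# Tear Down once.

def run_load_test(initialise, test_steps, tear_down, iterations: int) -> list:
    results = []
    initialise()                            # e.g. set up a database
    try:
        for _ in range(iterations):
            results.append(test_steps())    # only this part runs per iteration
    finally:
        tear_down()                         # e.g. clean the database up again
    return results
```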
Note: The Initialise, Test Steps, and Tear Down sections become more important in performance tests. When you run multiple iterations of a load generating test, the Initialise part of the test is only run once at the beginning, and the Tear Down section once at the end. Only the Test Steps are run for each iteration of any load generating test used in the performance test. This means that you can, for example, set up and clean up a database within the Initialise and Tear Down sections without impacting the data that you are actually interested in.

5.2 Basic setup

1. Before you can create a performance test, you need to provide a load generating test that will contain the actions carried out during each iteration. To do this, you will create a normal test, and add a timed section to it. Go to Test Factory, and right-click the Login operation to bring up the context menu. Select New > Tests > Test Using MEP.
2. The Create dialog box will open. Click the Options button to bring up a Settings dialog.
3. On the Message Settings tab, make sure the option Include Optional Fields is selected, and then click OK to return to the Create dialog box.
4. Call the test loginBase, and click OK. A test will be created.
5. Open the Send Request message, and enter a username, password, and application in the relevant fields. The contents do not matter for this example, as the login service will accept any input for these fields.
6. Click OK to return to the test.
7. You now need to set up the validation for this message. For this example, it is enough to check that a login token is returned, and that it contains hexadecimal digits broken up with hyphens. Open the Receive Reply action, and find the Token field. Double-click the (Text) section below it to bring up the Field Editor.
8. In the top half of the Field Editor, make sure that the Equality validation is selected, as you will be replacing this validation with a regular expression.
9. Change the Action Type option just below from Equality to Regex.
10. Enter the regular expression ^[a-f0-9-]*$
11. To test that it is working, enter 44ef-2ab7-573d into the Document field, and click Test. The Result field should update to show true (if not, make sure that there is not a space character at the end of the string).
12. Now add -y8rr to the end of the Document field, giving 44ef-2ab7-573d-y8rr, and click Test again. This should fail.
13. Click OK to close the Field Editor.
14. Still in the Receive Reply test action, make sure that the Timeout value is set to 1000 ms.
15. Click OK to return to the test.
16. Save the test, and then run it in Test Lab to make sure that it passes. If there are any problems, fix them before moving on.

5.3 Timed sections

Timed sections, marked by the Begin Timed Section and End Timed Section actions, allow you to time the execution of different parts of the test as it runs. A single functional test can contain multiple timed sections, which might overlap, or contain other timed sections. For each timed section, Rational Integration Tester logs data into the results database while a performance test is running. This includes not only the time taken for the timed section to run, but also the status of the section: whether it passed, failed, or timed out. If no timed sections are added to the test, Rational Integration Tester will still record the time taken to execute each iteration of the entire test, and the status at the end of that iteration.

1. Start by returning to the Test Factory.
2. Add two new actions to the test: a Begin Timed Section and an End Timed Section. The Begin Timed Section should go before the two messaging actions, while the End Timed Section should go after them.
3. Open the Begin Timed Section action.
4. The timed section will need a name. Call it S1.
5. Below the name of the timed section, there is an option that determines how this timed section will be recorded (Pass/Fail/Timeout). You can use the status of the test at the end of the section, or the status of the test at the end of that iteration of the test. In this particular case, since the timed section covers the entire test, it makes no difference which of the two options you choose.
6. Close the Begin Timed Section action, and open the End Timed Section action. This has only one setting: the name of the timed section. Match it to S1, the section you started with the Begin Timed Section action, and close the dialog box.
7. Save the loginBase test.

Your load generating test is now complete. However, you still need to state how many iterations of this test should be carried out, at what rate, and so on. This is handled separately, in a performance test.
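To recap this chapter: a timed section wraps part of a test and records its name, duration, and status, while the test itself validates the reply (here, with the token regex from section 5.2). The sketch below models both ideas in plain Python. It is an analogue for illustration only: the context manager, record format, and names are invented, and Rational Integration Tester records to the results database, not to a list.

```python
import re
import time
from contextlib import contextmanager

# Token validation from section 5.2: hexadecimal digits and hyphens only.
TOKEN_PATTERN = re.compile(r"^[a-f0-9-]*$")

@contextmanager
def timed_section(name: str, records: list):
    """Record (name, seconds, status) for the wrapped block,
    loosely mirroring Begin/End Timed Section."""
    start = time.perf_counter()
    status = "PASS"
    try:
        yield
    except Exception:
        status = "FAIL"
        raise
    finally:
        records.append((name, time.perf_counter() - start, status))

records = []
with timed_section("S1", records):             # section S1, as in the exercise
    token = "44ef-2ab7-573d"                   # sample reply token
    assert TOKEN_PATTERN.match(token)          # the Regex validation step
```

After the block runs, records holds one entry for S1 with a PASS status; an invalid token such as 44ef-2ab7-573d-y8rr would fail the validation, and the section would be recorded as FAIL.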
6 Creating performance tests

In this chapter, you will:

- Create a simple performance test
- Add a load generating test to a performance test
- Configure a performance test to use selected engines and probes

6.1 Introduction

Once you have a functional test that will create a load on the system, you can start putting together a performance test. Performance tests are created in the Test Factory; they are contained within the same tree structure as other test resources. Your first performance test will be fairly basic, running a single load generating test at 1 iteration per second. Later performance tests will look at changing the load on the system, and varying it over time.

6.2 Initial setup

1. In the Test Factory tree, right-click the Login operation, and select New > Tests > Performance Test. Call the test simplePerformance.
2. The initial screen of the Performance Test Editor will appear.
3. Click the text on the left side of the editor.
4. The right side of the editor will change to show settings for the performance test. Make sure that you are on the Execution tab.
5. Most settings can be left at their defaults, but change the length of the test phase to 30 seconds.

6.3 Adding tests

1. A performance test on its own does nothing; it requires load generating or background tests in order to test the performance of your system. You can now add the loginBase test as your load generating test. Right-click the text on the left side of the editor. Two options will be shown.
2. Click Add Load Generating Test.
3. A Load Generating Test node will be added beneath the main Performance Test node, and should be selected on the left side of the editor. Configuration information for the load generating test will be shown on the right side. The first thing you need to do is to choose which test will be used for load generation. To do this, make sure you are on the Execution tab, and find the Test Path field. Click the Browse button next to that field, and select the loginBase test from the dialog box that opens.
4. You can leave the other execution options at their default settings for the moment.
5. The load generating test is nearly ready to go. However, you still need to specify which engine (or engines) will execute this test.

6.4 Engine settings

Your test can be run on one or more engines. For the purposes of this manual, you will only be using a single engine running on a single agent, but in more complex tests, multiple engines can be set up in different locations, splitting the load between different computers. Regardless of how many engines are being used, remember that the engines are all managed by a single controller: the instance of Rational Integration Tester that is running the performance test.

1. Switch from the Execution tab to the Engines tab.
2. Click the Add… button at the bottom of the screen.
3. A Select dialog box will open. In this case, there will be only one engine available: the default engine attached to the agent you created in Architecture School. Select the default engine, and click OK.
4. If multiple engines were available, you could select more of them by clicking the Add… button again and selecting other engines, but for this example, your test is now ready to run. Before doing that, though, let's take a look at how the system under test can be monitored during the test.

6.5 Managing probes

Each performance test can choose which probes it wants to use to gather data. Different tests might need to measure different data. For example, one test might gather system data, while another might gather statistics from the middleware layer. Regardless of which probes are requested here, they must still be set up in Architecture School. For this example, you will be using the System Statistics probe, as set up in the previous exercises.

1. On the left side of the Performance Test Editor, select the Performance Test to switch back to its settings.
2. Click the Probes tab.
3. You can now select a probe from those available. As you have only set up the System Statistics probe, check the box for that probe, and leave the others blank.
4. Save the simplePerformance test.
7 Running performance tests and analyzing results

In this chapter, you will:
• Execute a performance test and view the statistics shown at runtime
• View the results of a performance test in the Results Gallery
• Compare results of multiple executions of a performance test

7.1 Running the test

The procedure for running a performance test is much the same as for a functional test; simply use the Run button in the Test Lab, or double-click the test in the tree. While the performance test is running, a summary of the data being gathered will be displayed in the console. Full reporting data can be found in the Results Gallery, as you will see in the following exercise.

1. Switch to the Test Lab.
2. Run the simplePerformance test.
3. Watch the console results. You will notice that the probes are started 15 seconds before the load generating tests are run, as per the settings in the performance test.
4. Once the load generating tests are running, you will see counters for the number of tests started, passed, failed, and the number of pending database writes. These are defined in the following table:

Started: Iterations started in the report interval (the 'Collect statistics every' setting on the performance test's Statistics tab). The default interval is 5 seconds, so if the test is set for 10 transactions per second, this would show a total of 50 each time.
Passed: Iterations passed so far during the performance test.
Timed out: Iterations where message receivers did not get a response within their configured timeout.
Failed: Iterations failed so far during the performance test.
Pending DB writes: Database writes queued on the results database. Large numbers indicate that database access is slower than required and might be the result of a slow network connection. Note that the writes are buffered and do not slow down the test rate.

Note: A performance test might run on longer than the specified time while remaining test instances are completed and database writes are flushed. In this case, you will see the Started figure as zero for those intervals, because the given number of iterations has already been started.

7.2 Viewing results

The Test Lab does not show much in the way of results besides statistics for how many timed sections passed, failed, and so on. This information may be found in the Results Gallery, where you can analyze it further.

1. Switch to the Results Gallery perspective to see the results of your test.
2. In the Results Gallery, you will see a single line describing basic information about your test: start and end times, number of iterations, and so on. Select it, and then click the Analyse button in the Results Gallery toolbar.
3. An empty chart will appear. The next step is to populate it with some of the data you have collected with your probes. This data is available to the left of the chart, sorted into different categories. Expand the tree to navigate to Performance Test Sections (Based on Start Times) > Average Pass Section Duration > simplePerformance / S1.
4. Select the check box that is shown at that location.
5. A chart will be displayed on the left side of the window. Experiment with adding other information gathered by your probes to the chart.
In particular, look at the information recorded by the System Statistics probe. Note that charts can be removed simply by clearing the relevant check boxes.
  26. 26. 6. As you experiment, you might find a situation where the axes of charts do not match. For example, if you look at the System Statistics probe, and choose to observe the recorded data for the used memory and CPU load, you might see something like this: The number of MB available is changing, but the scale of the chart for the CPU load is preventing you from seeing that information. 7. In order to fix this, you can edit the properties of one of the charts so that it is displayed on a separate axis. Go to the left of the chart display, where you have selected your data, and double click one of the colored lines that shown next to a selected check box. 8. A Choose Style dialog box will open. On the Style tab, you can change how the data is displayed (color of the line, type of chart, and so on.). Change any settings here that are of interest. Page 24 of 39 © IBM Corporation 2001, 2013
9. Switch to the Data tab, and set the Axis to 2. Close the dialog box, and the charts should update:
10. At this stage, you can edit the chart and give it a name and some notes in the text fields below the chart. You can also save the chart for later reference, or use the Export button to export the data to a CSV file.

7.3 Multiple data sets

So far, you have only looked at the results of a single performance test. It is also possible to compare the results of multiple performance tests, run at different times. This allows you to see how changes made to your load generating test, or changes made to the system, have affected the performance of the system.

1. Close the chart for the moment, and return to the Test Lab.
2. Run the simplePerformance test again.
3. Once it is complete, go back to the Results Gallery, and choose Analyse Results for the most recent test run.
4. A chart will open, as before. However, this time another option is open to you: comparing this test run to any previous test run. Switch to the Data Sets tab, and click the Add button.
5. Select your previous test run here.
6. Return to the Counters tab. For each counter, two charts will now be available, and you can use these to compare the current results to the previous results.
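As a recap of the counter arithmetic from section 7.1, the figures the console reports can be checked with a quick sketch. The numbers below are the ones quoted in the table (a 5-second report interval at 10 transactions per second); they are illustrative, not defaults of your own project.

```python
# Illustrative numbers from the counter table in section 7.1: a 5-second
# 'Collect statistics every' interval and a target of 10 iterations/second.
report_interval_s = 5
target_rate_per_s = 10

# Iterations reported as "Started" in each console update.
started_per_interval = report_interval_s * target_rate_per_s

# Over the 30-second phase used for simplePerformance, the test should
# therefore start this many iterations in total.
phase_length_s = 30
total_iterations = phase_length_s * target_rate_per_s

print(started_per_interval)  # 50, matching the figure quoted in the table
print(total_iterations)      # 300
```

If the Started counter consistently falls short of this figure, the engine is not keeping up with the requested rate.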
8 Data driving performance tests

In this chapter, you will:
• Learn the limitations of using the iterate actions within a load generating test
• Use the Input Mappings settings in the performance test to data drive a load generating test

8.1 Differences from functional tests

When creating functional tests in Rational Integration Tester, you can simply use the Iterate Test Data action to run through a data set, and test the system with that data. However, imagine that you had a test similar to the one you created earlier, sending a single message and receiving a single message. If you were to use the Iterate Test Data action (or any other iterate test action) with that test to send 10 messages, and your performance test had also specified 10 iterations per second, you could end up sending anywhere between 10 and 100 messages per second, with no precise control over the load on the system.

In addition, test duration and status data will be limited in their usefulness; rather than measuring the individual messaging times, you would be measuring the time to send 10 messages and receive their responses. Similarly, you would be recording the pass or fail status of a group of 10 messages, rather than a single message.

For these reasons, it is generally advised not to use the Iterate Test Data action within a performance test. Instead, you can map a data source to a test using the Input Mappings tab for each load generating or background test.

8.2 Driving a load generating test with external data

1. Create a copy of the loginBase test, and call it loginDataDriven.
2. Open the Send Request action of the loginDataDriven test, and go to the Config tab.
3. Select both the UserName and Password text fields using Ctrl+Click, being careful not to select the element nodes, and then Quick Tag those fields, as shown in the screen capture below:
4. Save the loginDataDriven test.
5. Now make a copy of the simplePerformance test, and call it dataDrivenPerformance.
6. Open the dataDrivenPerformance test, and go to the settings for the Load Generating Test.
7. Find the Test Path setting. It should currently be set to the loginBase test. Use the Browse button to change this to the loginDataDriven test.
8. You now need a data source, so leave Rational Integration Tester to create one. Create a new CSV file or Excel spreadsheet on your desktop, with the following data (or some of your own invention; just remember to include the line with headings):

User,Pass
Jim,gr33nhat
Steve,ght3st3r
Monica,perf0rmance
Karen,eng1n3s
Ben,pr0b3s

9. Save your CSV or Excel file, and return to Rational Integration Tester. Create a File Data Source or Excel Data Source to connect to your data. Remember to use the Refresh button to check that the data has loaded properly, and then save the data source.
10. Go back to the dataDrivenPerformance test, and go to the Input Mappings tab. Under this tab, you will see three new tabs: Config, Filter, and Store.
11. On the Config tab, use the Browse button to choose your test data set.
12. If you wanted to filter the incoming data, you could do that on the Filter tab. In this case, you will be using the entire data set, so go to the Store tab.
13. Map the tags in the loginDataDriven test to the columns in your data source, and save the dataDrivenPerformance test.
14. Run the test and analyze the results.
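Conceptually, the Input Mappings configured above hand one row of the data source to each test iteration, copying column values into the tagged fields. The sketch below illustrates that idea outside the tool; the `next_iteration_tags` helper is a hypothetical stand-in, and the assumption that rows are simply reused in order once the data set is exhausted is an illustration only (the actual looping behavior is governed by the data source settings in Rational Integration Tester).

```python
import csv
import io
import itertools

# The same data entered in step 8 above, inlined as a string for the sketch;
# in the real test, Rational Integration Tester reads the file itself.
data = """User,Pass
Jim,gr33nhat
Steve,ght3st3r
Monica,perf0rmance
Karen,eng1n3s
Ben,pr0b3s
"""

rows = list(csv.DictReader(io.StringIO(data)))
row_cycle = itertools.cycle(rows)  # assumption: rows are reused in order

def next_iteration_tags():
    """Hypothetical helper: what the Store-tab mapping does for one
    iteration - copy one row's columns into the tagged message fields."""
    row = next(row_cycle)
    return {"UserName": row["User"], "Password": row["Pass"]}

print(next_iteration_tags())  # {'UserName': 'Jim', 'Password': 'gr33nhat'}
```

Because the performance test, not the functional test, consumes the rows, the iteration rate stays exactly as configured, which is the point of preferring Input Mappings over the Iterate Test Data action.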
9 Load profiles

In this chapter, you will:
• Encounter standard performance testing scenarios, including load, stress, and soak tests
• Use the Constant Growth settings in a performance test to provide an increase in the load on the system
• Data drive the load on the system using the Externally Driven settings in a performance test

9.1 Performance testing scenarios

So far, your tests have been modeling a very simple scenario, running your load generating test at 1 iteration per second. However, there are a number of scenarios where it might be desirable to use more complex performance tests. This section describes a few example scenarios, and how Rational Integration Tester can deal with them.

Load testing

A load test attempts to represent a period in the working day, and to test how the system will respond to a similar load. For example, it might be anticipated that the greatest risk of non-performance will occur at the start of the working day. The scenario will model the ramp up from the minimum number of users to the peak login period during the first few hours of business.

Stress testing

A stress test scenario is used to identify the breakpoint of the system. The breakpoint will thus be identified as a specified load (and ramp-up) and will be used to identify performance weaknesses in the distributed system. This can be modeled with a simple linear increase in the load on the system. Stress tests can also be useful in proving the recoverability of a system: how does the system break, and how gracefully can it recover under extreme loads? In this case, a bell curve could be used to view how the system breaks as it approaches a point identified as a breakpoint, and then to allow the system some room to recover.

Soak testing

A soak test scenario runs a constant low-level load that might continue for hours or days. This scenario could model an iteration run once per minute, or less. Running the test for an extended period of time will identify any issues that manifest themselves over a longer period, such as memory leaks.
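As a rough illustration of the bell-curve stress profile mentioned above, the sketch below generates a symmetric ramp-up-and-down schedule in the Period,Iterations CSV format that externally defined load profiles (section 9.3) consume. The phase count, peak rate, and spread are illustrative values chosen for the sketch, not Rational Integration Tester defaults.

```python
import math

# Illustrative shape parameters: 9 phases, peaking at 30 iterations per
# phase unit, with a Gaussian spread of 2 phases around the middle.
phases, peak, sigma = 9, 30, 2.0
mid = (phases - 1) / 2

lines = ["Period,Iterations"]
for i in range(phases):
    # Gaussian bell curve, clamped so every phase runs at least 1 iteration.
    rate = max(1, round(peak * math.exp(-((i - mid) ** 2) / (2 * sigma ** 2))))
    lines.append(f"10,{rate}")  # each phase lasts 10 time units

profile_csv = "\n".join(lines)
print(profile_csv)
```

Saving this output as a CSV file would give a load that climbs toward the suspected breakpoint and then backs off, letting you observe how gracefully the system recovers.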
High intensity scenarios

High intensity scenarios, whether they are load tests or stress tests, will require a lot of data. Some basic math is needed to understand the overall data requirements: both the data to drive the tests and the data that must be present in the system for reference and execution. If you are expected to run at 100 transactions per second for 3 hours, you will require 3 x 60 x 60 x 100 = 1,080,000 rows of data: over 1 million rows! Whether or not you can reuse data within these million rows will be determined by the nature of the system under test. Similar issues may arise in particularly lengthy soak tests.

In addition, modeling the correct number of requests in a high intensity scenario might not be possible with a single test engine. You might need to create multiple test engines in order to generate a sufficient load on the system.

9.2 Constant growth

Moving beyond a simple, constant load on the system, the simplest way to vary the load in a performance test is to increase it over time. To do this, you can split the performance test into multiple phases of a given duration, increasing the number of iterations for each new phase. This gives a simple demonstration of how the system handles an increasing load.

1. Return to the Test Factory, and create a copy of the simplePerformance test.
2. Rename the new test, and call it threePhaseTest.
3. Open the threePhaseTest, and on the Execution tab, change the Number of test phases to 3.
4. Switch to the Load Generating Test on the left side, and go to the Execution tab.
5. Change the Initial target iterations setting to 5 per second.
6. Set the Increment per phase to 5. You now have 3 phases: the first running at 5 iterations per second, the second at 10, and the third at 15.
7. Save the performance test, and run it from the Test Lab. You should see each phase execute in the console; notice that each phase runs more tests per second than the last.
8. Go to the Results Gallery, and view the data in a chart.

9.3 Externally defined load profiles

Using an external data source gives you much more control over the amount of load on the system. The length of each phase can be varied as required. In addition, a constant level of growth is no longer required: the exact number of iterations per second, minute, or hour can be set for each individual phase.

1. Minimize Rational Integration Tester, and create a new CSV or Excel file on your desktop with the following data:
Period,Iterations
10,10
20,30
30,10

The first column of the data specifies the length of each phase, while the second supplies the number of iterations in each phase. The performance test in this example specifies its units as seconds (minutes and hours are also possible). This means that you will have 10 seconds at the beginning of the test where you run 10 iterations per second, 20 seconds where you ramp up to 30 iterations per second, and then 30 seconds where you allow the system to go back to the original 10 iterations per second.

2. Return to Rational Integration Tester, and create a File Data Source or Excel Data Source to link to your data. Remember to use the Refresh button to check that the data loads correctly.
3. Save the new data source.
4. Create a copy of the simplePerformance test, and call it externalPhaseTest.
5. Open the externalPhaseTest, and go to the Execution tab of the Performance Test settings.
6. Change the Load Profile to Externally Defined.
7. Next to Data set for load phases, click the Browse button to find and select the data source containing the phase data.
8. Leave the Execute test phases field blank, and make sure that the Phase duration read from column setting is set to Period.
9. Switch to the settings for the Load Generating Test, and go to the Execution tab.
10. Make sure that the number of iterations is read from the Iterations column.
11. Save the externalPhaseTest, and run it in the Test Lab.
12. Go to the Results Gallery, and view the test results. You might notice that the statistics for the minimum, average, and maximum pass section durations spike during the test, indicating that the load was slowing down the performance of the system.
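The phase table above can be sanity-checked with a little arithmetic before running the test, which is worth doing for longer profiles:

```python
# The phase table from step 1, as (period_seconds, iterations_per_second)
# pairs, matching the Period and Iterations columns of the CSV.
phases = [(10, 10), (20, 30), (30, 10)]

total_duration = sum(period for period, _ in phases)
total_iterations = sum(period * rate for period, rate in phases)

print(total_duration)    # 60 seconds of load in total
print(total_iterations)  # 1000 iterations overall
```

Knowing the overall iteration count up front also tells you how many rows a data-driven load generating test would need if every iteration must use unique data.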
10 Advanced topics

In this chapter, you will:
• Encounter background tests, and learn their uses
• See how the Log Measurement action works, and how it can be used
• Use the Log Measurement action in a background test to provide a custom probe

10.1 Background tests

So far, you have used a load generating test to provide a pre-defined load upon the system. Multiple load generating tests could be used, if required. However, in some cases, you might want to provide a constant stimulus for the system while using your load generating test. There may also be situations where you need to use stubs to simulate part of the system under test. Both of these situations can be handled by adding a background test to your performance test.

A background test is a functional test (or stub) that will be run repeatedly for the duration of the performance test (or, optionally, until the background test fails). This means that it will be run concurrently with any load generating tests included in the performance test. Each background test can have a single iteration running at a time; it can also be run multiple times in parallel. However, timing and status information will not be recorded for a background test.

The differences between the execution of load generating and background tests mean that several things should be kept in mind. First, the Initialise and Tear Down phases of the test will be run as normal. Second, while the Begin Timed Section and End Timed Section actions can still be included in the functional test, they will not have any effect on what is recorded into the results database during the performance test.

10.2 Log measurement

The Log Measurement action can be used to log custom data into your database while running a performance test. This can be useful in several situations. First, it can be used when recording data from the system under test, acting as a custom probe. This might be necessary when information is required that is not covered by the standard probes: for example, when querying proprietary systems for information.

An alternative use exists for systems where a message goes through several processes before a response is received by Rational Integration Tester. In these cases, it might be desirable to measure the time taken for a single process to provide a response, rather than measuring the entire round-trip time between the initial message sent from Rational Integration Tester and the eventual response.
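The per-process measurement just described amounts to subtracting adjacent timestamps stamped into the message as it passes through the system. A minimal sketch with hypothetical millisecond values (the stage names and numbers are illustrative, not values any real system produces):

```python
# Hypothetical timestamps (milliseconds) stamped into a message as it
# passes through three services before the response returns to the tester.
timestamps = {
    "sent": 0,        # message published by Rational Integration Tester
    "after_A": 40,    # service A has finished processing
    "after_B": 190,   # service B has finished processing
    "received": 230,  # response arrives back at the tester
}

# The round-trip time alone cannot isolate a slow hop;
# differences between adjacent timestamps can.
points = list(timestamps.values())
hop_times = [later - earlier for earlier, later in zip(points, points[1:])]

service_b_time = timestamps["after_B"] - timestamps["after_A"]
print(hop_times)       # [40, 150, 40]
print(service_b_time)  # 150 ms - the figure a Log Measurement action could record
```

In the tool itself, the subtraction would be done on tagged timestamp fields, with the result passed to a Log Measurement action for later charting against the load.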
In the diagram below, Rational Integration Tester publishes a message to a queue, and waits for a response. Using the data normally gathered by a performance test, you would know how long it took for the message to be processed by operations A, B, and C. However, if there were performance issues as you increased the load on the system, you would not know whether they could be narrowed down to a single service.

For example, you might suspect that service B is where most of the delay is occurring. In order to investigate this, you could add timestamps to fields in the message as it passes through the system. Subtracting Time 2 from Time 3 would then give you the amount of time spent inside service B. Using the Log Measurement action, this information could be recorded in the results database, and analyzed later with respect to the load on the system.

When using the Log Measurement action, it is important to note that it cannot be used within a timed section. This is because writing to the results database would alter the time taken during the execution of the timed section, thereby skewing the timing information.

10.3 Creating the measurement test

In this example, you will use a background test and the Log Measurement action to act as a custom probe for the system under test. This particular example gathers data about the bytes sent and received by the system. While this could be done with a probe, it is used here as an example of gathering information manually.

You will be using a background test for two reasons: first, it means that there is no need to change your load generating test; and second, you do not need to have your probe constantly running – the test will gather your data every two seconds, rather than constantly polling the system and possibly adding extra unintended load.

1. Create a new test for the Login operation, and call it byteMonitor.
2. Add a Run Command action to the test.
3. On the Config tab, enter the following commands:

netstat -s | grep InOctets
netstat -s | grep OutOctets

4. Make sure that the Wait for command execution to finish check box is selected.
5. Click the Test button. You should see two lines of data for stdout, similar to the following:

InOctets: 477678994
OutOctets: 435217918

6. Switch to the Store tab, so you can store the data into tags.
7. You will need to store the two numbers into separate tags, called bytesSent and bytesReceived. To do this, right-click the stdout field, and choose Contents > Edit.
8. Make sure you are looking at the Store tab within the window that opens, and then click the New button.
9. Details for the data to store will be shown below. The default action type is Copy; change it to Regular Expression.
10. Change the Tag to bytesReceived. You can also change the description field to match.
11. In the Expression section, type the regular expression \d+ to match a number. Below that, set Extract Instance to 1, which will extract the first number found in the string. In the example string given above, this would store 477678994 into the bytesReceived tag. You can test this with an example string to check that it is working correctly.
12. You still need to store the number of bytes sent. Click New again to generate a second store action for the stdout field, and follow steps 9-11 again, but this time set the Tag name to bytesSent, and Extract Instance to 2. If you were to test this using the example above, you should get a result of 435217918.
13. Once you are done, the two tags should be configured as seen below:
14. Click OK to close the Field Editor, and then OK again to close the test action.
15. Before adding the Log Measurement action, it is a good idea to check that this is working as expected. Add a normal Log action, and log the values captured in the bytesReceived and bytesSent tags to the console.
16. Run the test, and check that it works. If it does not, check the preceding steps to make sure that everything has been entered correctly.
17. Return to the Test Factory, and delete or disable the Log action.
18. Add a new Log Measurement action after the Run Command.
19. Set up the Log Measurement action as shown in the image below. This will graph the number of bytes sent and received by looking up the values captured earlier. The attributes section allows you to graph multiple sets of data; in this case you only have one, but at least one attribute is required in order for the Log Measurement action to function.
20. Click OK to close the action.
21. Add a Sleep action to the end of the test. As you will be running this as a background test, and background tests run continuously, it is advisable to pace this test so that it does not interfere with system results. Set the Sleep action to have a fixed duration of 2000 ms.
22. Save the byteMonitor test.

10.4 Adding the measurements to a performance test

1. Create a copy of the externalPhaseTest, and call it customLogging.
2. Edit the customLogging test, and right-click the Performance Test label on the left side of the editor – you will see the options for adding load generating and background tests. Add a background test.
3. On the Execution tab for the Background Test, select byteMonitor in the Test Path field.
4. Make sure that Terminate on failure is not selected.
5. Switch to the Engines tab, and click Add. There will only be one engine available, as before. Select it.
6. Save the customLogging test, and run it in the Test Lab.
7. Once it has run, you should be able to view the results in the Results Gallery. The counter will be found in the Log Values section.
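The regular-expression extraction configured in section 10.3 can be reproduced outside the tool to verify the pattern before wiring it into the test. The sketch below uses Python's re module on the example stdout from step 5; the list index corresponds to the Extract Instance setting (instance 1 is the first match, and so on).

```python
import re

# The example stdout captured from the Run Command action in section 10.3.
stdout = "InOctets: 477678994\nOutOctets: 435217918"

# The pattern \d+ matches each run of digits; re.findall returns the
# matches in order, mirroring the tool's Extract Instance numbering.
matches = re.findall(r"\d+", stdout)

bytes_received = int(matches[0])  # Extract Instance 1 -> bytesReceived tag
bytes_sent = int(matches[1])      # Extract Instance 2 -> bytesSent tag

print(bytes_received)  # 477678994
print(bytes_sent)      # 435217918
```

Checking the pattern this way catches mistakes such as a dropped backslash (d+ would match a literal "d" followed by plus signs, not a number).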
11 Legal Notices

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

If you are viewing this information in softcopy, the photographs and color illustrations may not appear.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product, and use of those websites is at your own risk.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems, and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements, or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious, and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing, or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs.

Trademarks and service marks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at www.ibm.com/legal/copytrade.shtml.

Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Other company, product, or service names may be trademarks or service marks of others.
