This document outlines common mistakes made by new performance test engineers. It discusses six main mistakes: 1) only checking HTTP status codes without validating transactions, 2) using improper think and pause times, 3) prematurely identifying bottlenecks without root cause analysis, 4) making false assumptions during tests, 5) attempting analysis before tests complete, and 6) chasing anomalies that don't reproduce because the test was only run once. The document emphasizes the importance of validation, realistic timing, methodical testing, letting tests complete before analysis, and running tests multiple times to avoid wasting effort on non-reproducible issues.
2. Introduction
• Performance engineering can be a challenging field to get into. But
it’s a very lucrative career, and people shouldn’t be too scared to
get involved. If you’re a test engineer and are looking to learn
performance testing, be assured that it’s a great career that’s
going to open a lot of doors for you.
• However, if you’re just getting started, there’s a host of common issues that most
newbies (and some more seasoned performance test engineers) encounter.
3. Common Mistakes
• Check More than the Status Code
• Think Time Matters
• Don’t Cry Wolf
• False Assumptions
• Analysis is Important – Wait Until the Test is Done
• Beware of Sinkholes
4. Check More than the Status Code
• When you’re performance testing and automating a
transaction, one of the most common mistakes is
assuming everything is OK because an HTTP status
code of 200 comes back. But what happens if it was a
login test and your script entered a bad password?
Many applications will simply redirect you back to the
login page, still with a 200.
• To avoid this mistake, you should place validations or
assertions, which are essentially string checks against
the response, to ensure those transactions are indeed
successful.
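As a sketch of that idea, the helper below treats a login as successful only when the response body contains a success marker and no failure marker. The function name and the marker strings ("Logout", "Invalid password") are hypothetical; substitute text that only appears on a genuinely successful page in your application.

```python
# Sketch: validate the response body, not just the status code.
# "Logout" and "Invalid password" are placeholder markers, not
# real strings from any particular application.

def login_succeeded(status_code: int, body: str) -> bool:
    """Treat a login as successful only if the status is 200 AND the
    page contains a success marker and no failure marker."""
    if status_code != 200:
        return False
    # A bad password often returns 200 with the login form re-rendered,
    # so string assertions on the body are what actually catch it.
    return "Logout" in body and "Invalid password" not in body

# A redirect back to the login form still returns 200, and only the
# body check exposes the failure:
ok = login_succeeded(200, "<h1>Dashboard</h1><a>Logout</a>")
bad = login_succeeded(200, "<h1>Login</h1><p>Invalid password</p>")
```

In most load-testing tools this is built in, for example JMeter's Response Assertion element, so you rarely need to hand-roll it; the point is to assert on content, not just on the status line.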
5. Think Time Matters
• Another common performance testing mistake has to
do with using improper pause and think times.
• You should look at the tool and say, “Okay — is it going
to take five seconds to make a decision on this page? Is
it going to take a minute to read this article?”
• Unrealistic think times produce unrealistic results
that will in turn cause the team to panic for no reason,
so avoid this mistake at all costs.
6. Don’t Cry Wolf
• Crying wolf when you spot a bottleneck will get you into
trouble fast, and often happens when you isolate a symptom
instead of the root cause.
• This goes back to more methodical performance testing, where
you create a test that ramps up more slowly so you can see the
trends and throughputs, and can see the servers becoming
busier and busier. You can tell what server becomes saturated
first, and rather than saying, “This is now at 100% use,” or,
“This is out of memory,” you should be aware that these could
just be symptoms.
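The slow ramp-up described above can be sketched as a simple schedule generator; the step sizes and hold times below are illustrative numbers, not a recommendation.

```python
def ramp_schedule(max_users: int, step: int, hold_s: int):
    """Yield (virtual_users, hold_seconds) steps that ramp load up
    gradually, so per-server trends stay visible as saturation nears."""
    users = step
    while users <= max_users:
        yield users, hold_s
        users += step

# Illustrative: ramp to 100 virtual users in steps of 25,
# holding each level for 5 minutes to observe trends.
steps = list(ramp_schedule(max_users=100, step=25, hold_s=300))
```

Watching throughput and server metrics at each plateau is what lets you say which server saturates first, instead of jumping on the first resource that hits 100%.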
7. False Assumptions
• Another error I see time and time again is when
customers make false assumptions while a test is running.
• During the running of a test, the tool is doing its job. It’s
collecting the KPIs and the data. When it’s presenting that
data to you, however, it can look a little skewed.
• In order to identify trends and really understand
what’s going on, you should let the test run to
completion. The one exception is a very high error
rate; in that case, stop the test, then go back and
do your analysis.
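Once a run has finished, computing the overall error rate is straightforward. A minimal sketch, assuming results are collected as one boolean per request (the data format here is an assumption, not any particular tool's output):

```python
def error_rate(results):
    """Compute the overall error rate from a *completed* run.

    `results` is assumed to be a list of booleans,
    True meaning the request succeeded.
    """
    if not results:
        return 0.0
    failures = sum(1 for ok in results if not ok)
    return failures / len(results)

# 5 failures out of 100 requests, i.e. a 5% error rate.
run = [True] * 95 + [False] * 5
rate = error_rate(run)
```

Comparing this number across the whole completed run, rather than eyeballing a partial chart mid-test, is what keeps skewed in-flight data from driving false conclusions.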
8. Analysis is Important – Wait Until the
Test is Done
• Working smarter, not harder, means understanding what really happened during
your test.
• If you attempt to analyze a test and arrive at conclusions while it’s still running, more
often than not you’ll only be wasting your valuable time.
9. Beware of Sinkholes
• Back when I was doing mainly performance testing, I would
sometimes go to the extreme; I would say to the team, “I’m not
sharing any results until I get three runs and I’m able to analyze
them.”
• Most people don’t understand the performance engineering
process; they want results immediately.
• Other performance engineers agreed and mentioned that
they do the same thing: run a test at least three times
before reporting the results, because they don’t want folks
wasting time on anomalies that aren’t reproducible. More often
than not, that’s where the sinkholes are.
• Newbie performance engineers may find themselves spending
hours and hours researching an issue when it was simply a fluke
that happened once and wasn’t reproduced.
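The "three runs before reporting" rule can be sketched as a simple reproducibility filter. The anomaly labels and the one-set-per-run format below are hypothetical, purely for illustration.

```python
def reproducible_anomalies(runs, min_runs: int = 3):
    """Return only the anomalies that appear in *every* run.

    `runs` is assumed to be a list of sets of anomaly labels, one set
    per test run. Anything seen in just one run is a likely fluke and
    not yet worth hours of root-cause analysis.
    """
    if len(runs) < min_runs:
        raise ValueError(f"need at least {min_runs} runs before reporting")
    common = set(runs[0])
    for run in runs[1:]:
        common &= set(run)
    return common

# Hypothetical labels: only "db_cpu_spike" shows up in all three runs,
# so it is the one worth investigating first.
runs = [
    {"db_cpu_spike", "gc_pause"},
    {"db_cpu_spike"},
    {"db_cpu_spike", "cache_miss_burst"},
]
confirmed = reproducible_anomalies(runs)
```

Filtering this way before sharing results is exactly the discipline described above: it steers the team toward reproducible problems and away from sinkholes.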