
Performance Testing for Compelling Results

July 25, 2010

You point.  You click.  You wait.  You leave.

It’s the four-second countdown.  Any website that makes you wait longer than 4 seconds is losing users, and losing them fast.  The social fabric of today’s online users has woven the expectation that things just have to work, and work quickly, regardless of the complexity of what is going on behind the scenes.  Performance tuning is more important than ever, so why is performance testing still such a hard conversation to have?

The battle between open source and commercial performance tools is a delicate one.  With an investment of zero, any feature set potentially promises a skyrocketing ROI, but the installation, configuration, maintenance, and infrastructure consistency these open source tools need can quickly bat the ROI down from poundcake to pancake (see the sketch after the list below).  Regardless of tool selection, too often these performance profiling failures trickle through to management:

  • Flurry of fear and uncertainty around an index or home page
  • False sense of urgency around new feature performance impact
  • Ineffective communication of valuable metrics
  • Endless amounts of data
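
To make that trade-off concrete, here is what the “investment of zero” side can look like.  This is a minimal sketch using Locust, an open source load-testing tool chosen purely as an example; the host, endpoints, and task weights are placeholders, not a real configuration:

# A minimal Locust load test.  The script is trivial to write; the hidden
# cost is standing up consistent load generators and interpreting the output.
from locust import HttpUser, task, between

class StorefrontUser(HttpUser):
    # Pause 1-3 seconds between tasks to approximate human pacing instead of
    # hammering the server back to back.
    wait_time = between(1, 3)

    @task(3)
    def browse_home(self):
        self.client.get("/")  # weighted 3x: most simulated traffic lands here

    @task(1)
    def search(self):
        self.client.get("/search", params={"q": "widgets"})

# Run with a recent Locust release, for example:
#   locust -f loadtest.py --host https://staging.example.com \
#          --users 50 --spawn-rate 5 --run-time 10m --headless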

John P. Kotter is Chief Research Officer at Kotter International and Professor Emeritus at Harvard Business School.  Kotter suggests in August 2010’s Harvard Business Review that we trade in data-choked PowerPoint slides for “real action.”  He goes on to say that “people change because they are shown a truth that influences their feelings, not because they were given endless amounts of logical data.”  Dumping performance test results on the team is useless without a key indicating what the metrics apply to, the testing techniques used, the features tested, and most importantly what the data means.  QA needs to make performance testing data human-digestible.
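
In practice, “human-digestible” can be as simple as reducing a raw timing log to a few percentiles and a plain-English sentence.  A minimal sketch in Python; the feature name, target, and sample values are invented for illustration:

# Turn a raw list of response times into a one-sentence summary a team can
# act on, instead of dumping thousands of rows onto a slide.
def summarize(feature, response_times_s, target_s=4.0):
    xs = sorted(response_times_s)
    def pct(p):
        return xs[min(len(xs) - 1, int(p * (len(xs) - 1)))]
    p50, p95, p99 = pct(0.50), pct(0.95), pct(0.99)
    over_target = sum(1 for x in xs if x > target_s) / len(xs)
    return (f"{feature}: median {p50:.2f}s, 95th percentile {p95:.2f}s, "
            f"99th percentile {p99:.2f}s; {over_target:.0%} of requests "
            f"exceeded the {target_s:.0f}-second target under this load.")

# Example with invented numbers:
# print(summarize("checkout", [1.2, 1.4, 1.9, 2.3, 4.8, 5.1, 1.1, 1.6]))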

I remember a database upgrade project where I was leading the performance profiling efforts with my QA team.  The system responded as expected up until about 4 MB/s of throughput and 50 virtual users.  Past that point, system response dropped to near zero because elements in the data layer were locking up, preventing threads from completing and draining.  This issue, which we expected to take 2-3 days to fix, ended up requiring four more weeks of project extension, six on-site vendor specialists, and sending our architect across the continent to install our application in the vendor’s internal lab.  If our feature-based testing hadn’t been configured to exercise these exact test cases, we might have issued a false positive on the performance test, or at best been able to say only that the system was failing, with no insight as to why.
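
The lesson that stuck with me is to ramp load in steps and record throughput at each step, so the knee point shows up in the test data rather than in production.  A rough sketch of that idea, not the tooling we actually used; the endpoint, step sizes, and window length are placeholders:

# Stepped ramp: hold each concurrency level for a fixed window, record the
# achieved request rate, and look for the level where throughput collapses.
import concurrent.futures
import time
import urllib.request

URL = "https://staging.example.com/"  # placeholder endpoint

def one_request():
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()

def measure(concurrency, duration_s=60):
    deadline = time.time() + duration_s
    completed = 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        while time.time() < deadline:
            batch = [pool.submit(one_request) for _ in range(concurrency)]
            completed += sum(1 for f in batch if f.exception() is None)
    return completed / duration_s

if __name__ == "__main__":
    for users in (10, 25, 50, 75, 100):
        print(f"{users:>3} virtual users -> {measure(users):.1f} successful requests/s")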

A compelling performance test identifies:

  • The features included in the test
  • The nature of the load, not just the number of users or concurrent threads
  • Duration, expected results, infrastructure affected, network-pathing pitfalls, initial conditions, actual results
  • Throughput in KB/s, virtual-user warm-up and cool-down, and an explanation of how this simulates production conditions
  • Feature stories or use cases tested and response time per feature or use case if possible
  • Whether advertising was disabled, and any resulting effects on the data
  • Comparison to baseline
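
One way to keep that checklist honest is to make it the literal shape of the report, so a result cannot be published with its context stripped out.  A sketch of such a structure; the field names are my own, not a standard:

# A performance test result that cannot be reported without its context.
from dataclasses import dataclass, field

@dataclass
class PerformanceTestReport:
    features_tested: list        # feature stories / use cases exercised
    load_profile: str            # the nature of the load, not just a user count
    duration_minutes: int
    virtual_users: int
    warmup_minutes: int
    cooldown_minutes: int
    throughput_kb_per_s: float
    initial_conditions: str      # data volumes, cache state, advertising on/off
    infrastructure_affected: list
    network_notes: str           # known pathing pitfalls between generators and targets
    expected_results: str
    actual_results: str
    baseline_comparison: str     # how this run compares to the agreed baseline
    response_time_by_feature_s: dict = field(default_factory=dict)
    caveats: str = ""            # anything we are not yet sure the numbers mean

Even kept as a plain document template rather than code, forcing every run to fill in these fields keeps “endless amounts of data” from standing in for an explanation.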

When in doubt, don’t guess.  Never claim a metric means something when you’re unsure; the team would rather have a number that is honestly unexplained than one dressed up with guesswork.  Nobody expects performance tests to be perfect, because they never can be.  Work to iterate closer and closer to production conditions as you learn more about your environment, application architecture, and user profile distributions.
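
That discipline can even be partly mechanical: report a comparison against baseline only when the difference is larger than the run-to-run noise you have measured, and say “inconclusive” otherwise.  A sketch; the 10% noise threshold is an arbitrary placeholder, not a recommendation:

# Compare a candidate run against a baseline, and refuse to claim a regression
# or an improvement when the difference is within normal run-to-run noise.
def compare_to_baseline(baseline_p95_s, candidate_p95_s, noise_fraction=0.10):
    delta = (candidate_p95_s - baseline_p95_s) / baseline_p95_s
    if abs(delta) <= noise_fraction:
        return (f"Inconclusive: {delta:+.1%} is within the "
                f"{noise_fraction:.0%} run-to-run noise.")
    if delta > 0:
        return f"Regression: 95th percentile response time up {delta:.1%} versus baseline."
    return f"Improvement: 95th percentile response time down {abs(delta):.1%} versus baseline."

# print(compare_to_baseline(2.0, 2.1))  # Inconclusive
# print(compare_to_baseline(2.0, 2.8))  # Regression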
