What does QA do?

There I was, Mr. QA Expert, having just submitted an article about what QA was to editorial.  The job of editorial, for those of you who do not know, is to take what is written and make it plausibly understandable and grammatically correct.  These are skills that are lost on most Americans born in America.

(Americans tend to speak American English, while the rest of the world learns the Queen's English, which in my opinion is far more elegant.)

Most people hate feedback — even the ones who ask for it.  They say they want it with their words… they hate hearing it with their expression.  So, knowing this, my editorial colleague was hesitant to give me direct feedback on some of my work (with good reason; I’m much bigger than she is), and I was laughingly trying to convince her to do her worst. “Promise you won’t be mad?” “Yes, I promise.” “Are you sure? Because I can look at it again.” “Would you please send it to me?” “OK, but I sort of edited a lot out.”  O REALLI.

She was very sweet in crafting her critique of my work, despite her dire warnings. She cut out a lot of the fat (which you, my current readers, must put up with, as I cannot yet afford an editor for these articles).

In the center of my piece in italics was her perfect feedback, and zinger question:

You don’t talk about what QA does. What does QA do?

Foof! And with those four words her freshly-hewn arrow hurtled to the core of the conundrum. She hit the very question that every CIO should be asking themselves – and the very question I had failed to answer. As a CIO or segment owner, always ask yourself: Whatever the technology, whatever the product, whatever the industry: what is my QA group doing?

Truly, QA is your last defense before your product ships. You want your top people involved with and/or understanding QA, and you want to make sure QA knows what they’re doing. So… what exactly is it you said they did, sir?

I didn’t say! I smiled as I thought about her question. I started with what I knew. I knew we in QA did valuable work – nobody questioned that – and I could easily, off the top of my head, rattle off 10 to 15 bullet points of tests and activities we had done that day, that week, that month that contributed to a number of higher-level goals I had for testing this particular project. That was easy for me, but I felt it was didactic and not really answering the question anyway. Why hadn’t I written what QA did in the article?

Something about that question was keeping me in the meta – what, in other words, kept me “talking about talking about” the question of what QA does, rather than giving the answer in an easily categorizable, quickly indexable response? What was limiting me from giving the response?  Some pondering… then I got it.

I didn’t answer the question because there was a complication in the question itself – a traffic director at a four-way intersection that, given certain dependencies, would send the answer in any one of a number of directions. There are certain assumptions, I concluded, that are hidden in the question itself “What does QA do?”

In one case, for example, I could talk about the motivation and philosophy of QA and how those principles apply to the products under test.

Or, I could talk about testing techniques, testing tools, and how the successes and failures of my approach to date align with our vision for where we want to take the product.

Alternatively, I could talk about an open-source Selenium or Watir toolkit approach and how to convert between C#, Ruby, and Python test code, enabling relatively quick flips between the Selenium IDE and your programming language of choice (the IDE does a base conversion, but some tweaking is always necessary to get the tests to actually run repeatedly).
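
As a sketch of the sort of tweaking I mean: IDE-exported code tends to assume every page element exists the instant the page loads, so the usual first fix is wrapping lookups in an explicit-wait loop. Here is a minimal, self-contained Python sketch of that idea (the `wait_for` name and timings are mine, not part of any Selenium export):

```python
import time

def wait_for(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    IDE-exported tests often fail on re-runs because they don't wait for
    the page to settle; a retry loop like this is the common tweak that
    makes them run repeatedly.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1f seconds" % timeout)
```

In a real test you would pass something like `lambda: driver.find_elements(...)` as the condition, so the lookup is retried until the element actually appears.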

So what’s the answer? First, know your audience:


  1. People who want the answer to copy down word for word for their homework/project/essay/job. You receive points if you can catch these people in the act and call them on it. Awarding yourself +2pts in the forum is welcomed. And then the shunning begins.
  2. People who are asking to deepen their knowledge on the topic. Forums were designed for these types of people. They are rare and cherished. Strive to be one.
  3. People who are asking for the sake of asking. Don’t be that guy.
  4. People who are asking a question to get an authority to say something the asker wants someone else to hear so they change the way they’re doing something. “Say, do you think printing a GANTT chart is useful for a checkpoint meeting if you have 3 resources on each task?” And as the speaker says No and goes into other options, the person, who has their arms folded and a wry grin on their face, gloatingly throws a syrupy WOULDNT WANNA BE YA laser-gaze at their project manager, who is maintaining face and building a passive-aggressive retaliation that will last 8-12 weeks.
  5. The Me Too!-ers.

It’s pretty easy to filter out 3, 4 and 5.  3’s and 4’s usually only show up in live presentations (if you’re ever a speaker or giving a keynote at a conference, you’ll see what I mean) and comments from 5’s are easy enough for eyeballs to auto-spam. It’s almost as if our spinal cords have somehow adapted to filter out certain combinations of keywords or flavors of phrases that are out of tone, rhythm, grammatically awkward, or relate to porn. (This is why the ILOVEYOU virus was so successful back in 2000. It didn’t trip our natural filters: suddenly you had an email from a co-worker who said they loved you. Really? It couldn’t be… could it? I better open this thing just to check really quick. INFESTATION OF THE WORLD.)

2’s are a joy. They usually present themselves with the cleanest wording and make an attempt to present their question with enough context to frame their situation, complication, and question. They might even own a copy of the Minto Pyramid Principle. Even the grammatically rocky questions can often be 2’s, hidden simply behind a cultural or communication gap. The longest threads are not necessarily attributed to 2’s, because of the 1’s.

1’s are unfortunately the vast majority of the minds plugging into the interwebs. This is not a surprise – the vast majority of students in any learning environment will be looking for the quick answer to get the grade and get out to get to the proverbial – or literal – recess. Call it projection if you want, but this has been my experience in grade school, high school, college, post-college, and my professional career. Maybe it’s a growing-up thing. Granted, the risk and negative consequences of cheating grow progressively more potent as you move up from a zero on a quiz to being let go from a career position, but this doesn’t stop people. (Before I interview for a job at a new firm I do a web search for the QA team’s names on QA forums. I’ve found company source code posted on the internets with questions about it three times. They didn’t even remove the comments or rename the variables to hide the site name.) The noise on any forum can be difficult to cut through – most forums originate to serve a learning purpose but end up feeling somewhere between a playground and a small room with a desk during crowded TA office hours. Forum moderators and flag-mods do their best to maximize the signal-to-noise ratio.

So what’s the big deal with asking just to get an answer? If you’re baking a casserole, nothing. You get the answer, the wine is poured, and somebody puts on a Tom Jones record and the night really starts cookin’. The question is not why are answers bad: the question is what is the difference between parroting answers back (that you may or may not understand) and mindfully evaluating principles?

The value of deepening a question lies in choosing to expand one’s contextual, personal knowledge, and it is measured by and observed in cause and effect. Choosing to evaluate a question in one’s own model of reality, and trying to push aside the obvious to get to “the question behind the question,” can lead to a much more valuable exchange between the asker and the answerer. The question becomes more of a conversation, and more often than not results in more questions, the assumptions of the original question now questions themselves rather than hard assumptions. The asker comes away with a wider breadth of perception about the subject matter even though their original question may remain unanswered – they may now have the tools to sleuth out the solution on their own, or to more accurately craft a new question aimed at where they really wanted to go.

I’ll give an example from web software QA.

If this conversation is over the phone, it can last a few minutes. If this conversation is over instant messenger, it can last 10-15 minutes. If this conversation is over a ticketing system, it can last 2-3 hours. If this conversation is over email, it can last 2-3 days.

Them: Why is the web site down?

Me: What URL are you trying to access?

Them: www.mysite.com/contactus

Me: What behavior are you seeing?

Them: I can’t leave feedback.

Me: How are you trying to leave feedback?

Them: I type into the white box at the bottom and click Submit but nothing happens.

Me: It works for me in Firefox 3.5. What browser are you using?

Them: I’m using IE. It worked before in IE and now it’s broken.

Me: Strange, it works for me in IE7. What version of IE are you using, and when was the last time it worked for you?

Them: It worked last Thursday. I’m using IE6.

Me: I’ve loaded that page in IE6 and I see what you mean. It looks like the CSS has slid the button underneath another page object. I’ve logged a defect with the team on this – it appears this was not caught in last Thursday night’s release. In the meantime, try another browser if you can and use that for now.

Them: OK Thanks.

The issue wasn’t that the site was down at all. The real problem they were having was accessing the Submit button in an older browser (IE6, the bane of all designers’ existence and a favorite defect fountain for QA), and the solution was twofold: temporarily use another browser to get the functionality working now, and get the development and product teams dialed up to prioritize the front-end fix for a production patch or next build.

In the above example, I never really answered the person’s question. The reason I didn’t really answer the question “why is the web site down” is because the answer to the question is “The web site isn’t down.” That answer doesn’t help the person asking the question, and my job as QA is to elicit the real problems with the software and help the user solve their problem. I had to ask six questions to probe the assumptions I spotted in their question. These assumptions are second nature to most web QA engineers simply because we’ve seen the same problem in so many different ways.

The cause and effect of parroting answers back or answering the question at face value often stays at the surface of the question and never questions the assumptions in the question. This exchange can be frustrating and fruitless, and even seem arrogant. #2 is the clear winner.

There is a clear upside in questioning the assumptions in the questions asked about software systems, and in the product requirements as well. Really, product requirements are questions in disguise. Every product requirement is really an experiment. A well-formed requirement or user story takes the form

“As a <role> I want to be able to <do something> so that <business reason>.”

Really what they’re saying is

“As a <role> I want to see if by <doing something> it results in <business reason>, if so build on this ability, if not then revisit and adjust.”

The first version is much shorter, so we use that one, but we’re really saying the second one.
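
To make the two phrasings concrete, here is a throwaway Python sketch that renders the same requirement both ways (the function names and the example requirement are mine, purely for illustration):

```python
def as_story(role, action, reason):
    """The short, conventional user-story phrasing."""
    return f"As a {role} I want to be able to {action} so that {reason}."

def as_experiment(role, action, reason):
    """The longer phrasing that makes the hidden experiment explicit."""
    return (f"As a {role} I want to see if by {action} it results in "
            f"{reason}; if so, build on this ability, if not, revisit and adjust.")
```

Feeding the same role, action, and business reason through both functions makes it obvious that the experiment version carries an exit condition the short version only implies.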

QA audits everything, in the sense of looking at a bill of materials or an estimate and giving it a second look. We enjoy doing this whether or not we’re experienced or authorized to do so – as QA engineers we know that the most important part of auditing is asking the question, even if we in QA do not ourselves understand it: either way, QA learns. When learning happens, value is increased. We look at product requirements in the light of the latter, longer version of a well-formed requirement above.

With each requirement we consider in our product (we consider ourselves owners of all parts of the product, all parts of the technology system, and the users our family) we ask ourselves:

  • Is what they’re asking for what they really want?
  • How will this affect the user experience?
  • Is this change in line with our brand?
  • Do our users really want this?
  • Will this be helpful?
  • Will this help the system run faster?
  • Is this feature worth it?
  • How will this integrate with what we have?
  • Is this just someone’s ego flexing?
  • Is this estimate realistic?
  • Can we even test this?
  • Do we need to invent a new test harness or stub to exercise this?
  • How will we test to make sure this works?
  • How will we test to make this break?
  • What are the security weaknesses in this feature?
  • What are the risks?
  • What is the potential up-side and down-side?
  • What are we assuming to be true that might not be?
  • What are we relying on to be true for this feature to succeed?
  • Do I understand how to manually test this?
  • Do I have a clear picture of how I’ll write automated tests for this?
  • Do I have a clear sense of how I’ll record and script a browser UI test for this?
  • Am I prepared to update a regression suite of known defects for this feature?
  • What help from the development team do I need to test this?
  • What sort of test data will I need to create?
  • How much time will I need to design, implement, check, execute, and report on pass/fail status of my tests for this feature?

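One way to keep a checklist like this from living only in your head is to hold it as data, so each feature gets audited against it explicitly. A minimal Python sketch of that idea, using a trimmed subset of the questions above (the structure and names are mine, not a standard tool):

```python
# A trimmed subset of the requirement checklist, held as data so it can be
# audited per feature instead of recalled from memory.
REQUIREMENT_CHECKLIST = [
    "Is what they're asking for what they really want?",
    "Can we even test this?",
    "What sort of test data will I need to create?",
    "What are the security weaknesses in this feature?",
]

def unanswered(answers):
    """Return the checklist questions that still lack a recorded answer.

    `answers` maps a question string to whatever note the reviewer wrote;
    empty or missing notes count as unanswered.
    """
    return [q for q in REQUIREMENT_CHECKLIST if not answers.get(q)]
```

Running `unanswered` on a feature’s review notes before the estimation meeting tells you at a glance which questions you still owe answers to.
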
QA is running through the mental checklist above during the phase of the project where new product features are being introduced to the development team. If you as a QA member aren’t a part of that process, work yourself in however you have to. Throw the SDLC rules out the window if you have to – get in that room where development is learning about new product features. You may be allowed to speak up in the meeting – if so, do so only after careful consideration of the checklist.

Is it ridiculous to suggest QA go through the above checklist for each product feature? I see your assumption! Let’s think about riding a bicycle. When you first rode a bicycle, you thought about everything – what your arms and legs were doing, how hard to pedal, which way to lean, watching where you were going on the road, listening to the sound of the bike and the sound of your friends or family cheering you on, watching out for automobiles on the road, figuring out how to make the bike stop with your legs or hands, and then what to do with your legs when the bike came to a halt. (Bonus points if you manage to walk away with only a scrape or two from the unfortunate shoelaces-caught-around-the-pedals when you come to a stop – I’ll never forget that afternoon!) No, of course you don’t think of all of those things now: why? You’ve tokenized them. It’s like your body recorded them, synthesized the success and failure data with the input signals, and can almost play back a macro that takes in the data for you and processes. You can ride a bike really easily now. Even if you haven’t ridden a bike in years, your body somehow remembers. You can probably even ride with no hands and think about what kind of pizza you want to get instead of thinking at all about riding the bike while on it. (Wear your helmet, even if you’re 40. I’ve seen some nasty spills involving adult cyclists in New York City that ended in ambulance rides.)

Just as the mind and body tokenize a bicycle ride, they tokenize QA checklists. The value of checklists is to establish good practices so you can tokenize these excellent points of reflection for each product feature. It’s just like riding a bike. Keep at it, and you’ll get the hang of it.

In the estimation meeting, you’ll be able to say “There’s no way this is an 8-pointer, I can think of 8 test data tables I’ll need to add a minimum of 30 rows to. And Johnny, I’ll need two more columns on that .JSP page you wrote to interface with the web service out to our consumer. What do you think, guys?” Boom, you’re on your way, and you got help and commitment from the team. Other QA teams without checklists are probably organizing their emails and spreadsheets and will have a complaint for the project manager 2-3 days later. Good QA people strive to always be ahead of the game. QA likes to win, but competing WITH, not competing against, development teams.

So what does QA do, now that we’ve checked several assumptions built into the question?

QA audits product requirements, forms the best plan to test the product features, writes software code that tests the application software code (yes, code testing code), eyeballs the product through a web browser by clicking through it, facilitates entering defects/bugs and the bug triage process, guides and manages any user acceptance testing (yes, project management), and is responsible for reporting metrics. Some metrics include number of bugs found, bugs fixed, bugs outstanding, bugs per thousand lines of code, bugs per estimate point, and automated test coverage.
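
The arithmetic behind a couple of those metrics is simple enough to sketch in Python (the function names are mine; KLOC means thousands of lines of code):

```python
def bugs_per_kloc(bugs_found, lines_of_code):
    """Defect density: bugs per thousand lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return bugs_found / (lines_of_code / 1000)

def bugs_outstanding(bugs_found, bugs_fixed):
    """The open-bug count: bugs found minus bugs fixed."""
    return bugs_found - bugs_fixed
```

For example, 30 bugs found against 15,000 lines of code is a density of 2 bugs per KLOC; tracking that number release over release is more informative than any single snapshot.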

The QA analyst, engineer, manager, director, and VP positions play slightly different roles from company to company. It is helpful for QA professionals to educate themselves on exactly what these roles are (an article about this is forthcoming) so they can be clear and concise in presenting what they do and what they’re looking for in a new contract or management position.  Understanding the purpose of QA is paramount to shipping working product faster than your competitors can.

1 comment

  1. It’s kind of funny, because you spent the article pointing out a really valuable question – one you then proceeded to ignore in your answer – yet you ended the article with a sentence that, with a little creative editing, answers the question.

    You asked what does QA do?

    Then you proceeded to tell us how YOU do QA in your job, i.e. what works for YOU. Though you had many good ideas, it was far afield from the answer, since EVERYONE doing QA faces different circumstances, and good ideas for YOU may or may not be good ideas for ME.

    Now to editing your last sentence:

    “Understanding the purpose of QA is paramount to shipping working product faster than your competitors can.”

    the purpose of QA is … faster than your competitors

    QA does efficiency.

    In other words, for there to be QA, someone [Development] has to be doing something [proactive, productive role], and QA’s purpose is to make them more efficient [reactive, protective role].

    Applying Occam’s Razor to the respective tasks of Development and QA:
    Development – Does
    QA – Prevents

    Development and QA are often at loggerheads, which is understandable given the seemingly contradictory tasks essential to the nature of their roles.

    QA’s onus is to show everyone that they make things cheaper, faster, better by pointing out problems sooner, when they are easier to solve.

    “QA likes to win, but competing WITH, not competing against, development teams.”
