How is QA different for Waterfall, Agile, and XP?

Product teams partner with software development teams or consulting organizations that choose a software development process to create and implement solutions. The choice of process shapes the question of what the most effective QA approach is.

Let us discuss the three most popular choices when selecting a Software Development Lifecycle, or SDLC: Waterfall, Agile, and Extreme Programming (or XP).

All of these processes seem to create working software; some tend to be better suited to large or small projects in large or small firms. The question heating up the blogs and consulting lines these days is: how does QA differ across these approaches?

Every SDLC takes in product requirements of some kind and generates a software product as a result. Describing what a system should do means describing the future product from both a business (non-functional) and a technical (functional) perspective.

Examples of non-functional requirements would be that the login page contain search-friendly keywords and meta tags (a search engine optimization, or SEO, requirement), that a credit card purchase transaction complete in less than 7 seconds (a service level agreement, or SLA, requirement), or that the branding and color scheme of the product purchase page match that of the product description page (a brand integrity requirement).
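
To make the SLA example concrete, here is a minimal sketch in Python of how such a non-functional requirement can be expressed as an executable check. The `purchase_with_credit_card` function is a hypothetical stand-in for the real checkout call, not anything from the article.

```python
import time

SLA_SECONDS = 7.0  # the service level agreement from the requirement above


def purchase_with_credit_card(cart_id: str) -> str:
    """Hypothetical stand-in for the real checkout call; a real test would
    invoke the actual client or HTTP endpoint."""
    time.sleep(0.5)  # simulate the transaction taking some time
    return "confirmation-123"


def test_purchase_meets_sla():
    start = time.perf_counter()
    confirmation = purchase_with_credit_card("cart-42")
    elapsed = time.perf_counter() - start
    assert confirmation, "transaction should return a confirmation id"
    assert elapsed < SLA_SECONDS, f"purchase took {elapsed:.1f}s, SLA is {SLA_SECONDS}s"
```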

Examples of functional requirements would be that a shopping cart be able to add up to 50 quantities of a single product at once, that a news article display 3 related stories and 2 advertisements, or that an application log keep up to 3 days’ worth of transactions before it is retired to a data warehouse.
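
Functional requirements like these translate naturally into automated checks. Below is a minimal sketch, assuming a hypothetical in-memory `ShoppingCart` class, of how the 50-quantity rule might be verified; a real suite would exercise the actual cart service.

```python
class ShoppingCart:
    """Hypothetical in-memory cart used only to illustrate the requirement."""
    MAX_QUANTITY = 50

    def __init__(self):
        self.items = {}

    def add(self, product_id: str, quantity: int) -> None:
        new_total = self.items.get(product_id, 0) + quantity
        if new_total > self.MAX_QUANTITY:
            raise ValueError(f"cannot hold more than {self.MAX_QUANTITY} of one product")
        self.items[product_id] = new_total


def test_cart_accepts_up_to_50_of_one_product():
    cart = ShoppingCart()
    cart.add("sku-1001", 50)              # at the limit: allowed
    assert cart.items["sku-1001"] == 50


def test_cart_rejects_51_of_one_product():
    cart = ShoppingCart()
    try:
        cart.add("sku-1001", 51)          # over the limit: rejected
        assert False, "expected the cart to reject quantities above 50"
    except ValueError:
        pass
```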

Once we understand both the nature of requirements and the value of classifying them (the value being that having individuals or groups manage distinct groups of like requirements saves time and effort), our question of how to do QA for different SDLCs is further complicated, because project managers, product owners, and yes, even QA managers are often uncertain of how these different software development processes are designed to interact with a QA group. Opinions on this subject seem rampant while cause-and-effect analysis seems unfortunately rare; opinions offer a limited (and often personal or slanted) perspective but seldom answer the question, “Why?”

Let us answer the why by talking about the three SDLCs and the cause-and-effect nature of each being tightly coupled or loosely coupled to a QA process. Yes, Virginia, there is QA for all of these SDLCs.

A waterfall environment will hand over large blobs of new functionality to a QA group, usually after a number of weeks, months, or years (yipes!) of hidden development. A waterfall approach involves a large number of changes all at once. These changes require a dedicated effort to manage and control the product requirements: one or more business systems analysts (BSAs) are employed to manage and track what the product is “supposed” to do. BSAs keep a book of requirements up to date, and QA digests these requirements as food for testing the software build that is eventually delivered to them. The QA cycle involves authoring manual tests to match and cover the product requirements, authoring automated scripts to test the product features, updating regression scripts to cover the defects found in past builds, and running performance analysis on the build. There are a number of ways of specifying how this is done, but in general this approach integrates well with waterfall and requires larger toolsets to track test cases, testing results, performance testing, and automated scripting. Waterfall, and its not-really-Agile cousin Iterative Programming, requires a heavy investment in tools to get the job done because of the copious amount of work that needs to be tracked. QA is loosely coupled to development because the size of the change package is so large and the frequency of deployment is so small.
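
As a rough illustration of the bookkeeping this implies, the sketch below maps requirement IDs to test cases and reports coverage gaps. The requirement IDs and test names are illustrative assumptions, not drawn from any particular tracking tool.

```python
# Requirement-to-test traceability: the kind of record a waterfall QA group
# keeps up to date across a large build.
requirements = {
    "REQ-101": "Shopping cart accepts up to 50 of a single product",
    "REQ-102": "Credit card purchase completes in under 7 seconds",
    "REQ-103": "News article shows 3 related stories and 2 advertisements",
}

test_coverage = {
    "REQ-101": ["test_cart_accepts_up_to_50_of_one_product",
                "test_cart_rejects_51_of_one_product"],
    "REQ-102": ["test_purchase_meets_sla"],
    # REQ-103 has no tests mapped yet -- the gap a BSA and QA lead would flag
}


def uncovered(requirements, test_coverage):
    """Return requirement IDs with no test cases mapped to them."""
    return [req_id for req_id in requirements if not test_coverage.get(req_id)]


print(uncovered(requirements, test_coverage))   # ['REQ-103']
```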

An Agile environment will usually employ a methodology that adopts Agile principles, such as Scrum. Scrum is a structure set up to uphold Agile principles such as early and continuous delivery of working software, an intimate, interdependent work environment, and continuous evaluation of technical excellence. Deployments of software that arrive early and often mean far less time, if any, for QA. This means that QA needs to reconstitute its approach to testing. QA attends commitment and estimation sessions with developers, where they can both receive instruction and education about features to be included in the build and assert their opinion of testing-specific concerns that may surface as a result of the feature set under consideration. QA focuses on writing automated scripts, usually using an open source test framework, against the software product as the features are being built. QA prepares a performance test and adjusts the navigation scripts and virtual user balance ratio as necessary based on its estimation of the features in that iteration. When the software is delivered, QA often has only a few days to complete testing rather than weeks or months. One or two people staffed for manual cross-browser testing of each build, a minimum of two technical QA resources on the team to maintain the performance, automation, and automation-regression script systems, two or more subject-matter experts (SMEs) or business champions, and an application development manager who understands Scrum are all key factors in increasing test coverage under Agile. QA is tightly coupled to development because the size of the change package can be medium to small but the frequency of deployment is often at least once a month.
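
As one concrete example of adjusting the virtual user balance ratio, the sketch below converts an iteration’s estimated traffic ratios into virtual-user counts per scenario. The scenario names, percentages, and total are assumptions made up for illustration.

```python
# Rebalance the virtual-user mix for a performance test based on how the
# team expects traffic to shift after this iteration's features ship.
TOTAL_VIRTUAL_USERS = 200

traffic_estimate = {
    "browse_catalog": 0.50,   # unchanged this iteration
    "checkout": 0.30,         # new one-click checkout feature: weighted up
    "search": 0.20,
}


def virtual_user_mix(total, estimate):
    """Convert estimated traffic ratios into virtual-user counts per scenario."""
    assert abs(sum(estimate.values()) - 1.0) < 1e-9, "ratios must sum to 1"
    return {scenario: round(total * share) for scenario, share in estimate.items()}


print(virtual_user_mix(TOTAL_VIRTUAL_USERS, traffic_estimate))
# {'browse_catalog': 100, 'checkout': 60, 'search': 40}
```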

An Extreme Programming (XP) environment is similar to Agile but delivers software many times a month, often many times a week, to a production environment. XP processes can, in theory, build and deliver working product whenever it is needed. QA is incredibly tightly coupled to development because the level of communication and understanding is very high. Only the most technical QA Managers and QA staff members will survive in XP. QA staff must have a working knowledge of shell scripting, Ruby/Python, and the deployment process, as well as SEO and ad ops for web software. QA will usually work entirely with open-source tools in these environments, implementing them in a “fast and loose” manner as needed. Changing direction completely is common. QA must write the minimal set of functions required to provide an automated test bed, code coverage metrics, manual cross-browser testing, and performance analysis, and be able to do so in hours rather than days. Attention to detail, a good memory, and approaching challenges and failures with a “what are our options here” attitude is the only way to survive.
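
As a rough illustration of that “minimal set of functions” idea, here is a bare-bones smoke check of the kind an XP team might run on every deploy. The base URL and paths are placeholders; in practice they would come from the deployment configuration.

```python
# Fail-fast smoke test: hit a handful of critical pages after a deploy and
# exit non-zero if any of them is broken.
import sys
import urllib.request

BASE_URL = "http://localhost:8080"          # assumption: local deploy target
CRITICAL_PATHS = ["/", "/login", "/cart", "/checkout"]


def smoke_test():
    failures = []
    for path in CRITICAL_PATHS:
        try:
            with urllib.request.urlopen(BASE_URL + path, timeout=5) as resp:
                if resp.status >= 400:
                    failures.append(f"{path}: HTTP {resp.status}")
        except Exception as exc:            # report any connection or HTTP error
            failures.append(f"{path}: {exc}")
    return failures


if __name__ == "__main__":
    problems = smoke_test()
    for problem in problems:
        print("FAIL", problem)
    sys.exit(1 if problems else 0)
```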

Interestingly, QA approaches that are successful in one SDLC would be seen as incorrect or inappropriate in another. What is appropriate for waterfall can be a poor choice for Agile, and vice versa. The selection of tools and the decision of when and how to test software depend heavily on the SDLC employed by the development team and on the delivery and flexibility requirements of the product team.

Being able to think at this meta level is a critical success factor for QA Managers, product managers, project managers, and CIOs as they consider how to most effectively create their software product and deliver it to their customers. In this industry, the teams that can apply mindful reasoning and deliver accurate metrics will be the ones that win.

3 comments

  1. The above article equates QA with testing – a common representation in the industry (whether that is a misrepresentation or not is a whole different discussion). However, to comment on the article – about how testing differs in each methodology:

    Well, in Agile and XP, since the objective is to deliver apps at a quicker pace with uncompromised quality, it's ideal to bring in testing at an earlier stage of the project, unlike waterfall. This reduces the CoQ (Cost of Quality – also referred to by some as CoPQ, Cost of Poor Quality). A better approach for Agile would be TDD (Test-Driven Development) rather than a Scrum-based method (which is basically a cascaded micro-mini-waterfall split into feature crews building user stories). Generally in Scrum, the issues faced in the integration phase are much higher than in other models. If the feature crews are independent of each other (mostly they are not), integration issues are not much to worry about, and in such cases Scrum works well. In other cases, TDD would be a better approach.

    In both the Agile & XP methods, Test should keep pace with Dev, developing stubs in the same stride as Dev code. Most XP projects aim at getting the functionality done and focus less (comparatively) on UI. So having the stubs do the code’s job and then reconfiguring the tests against the actual code accomplishes most of the functionality testing. Testing NFRs (non-functional requirements) is another critical factor to consider in all the methodologies, and should be taken up on a case-by-case basis.

    All said and done, the best judges for choosing the testing process in Agile & XP are the schedule, resource availability, skill sets of testers, expertise of consultants in such engagements, and the right process & management support.

  2. “Attention to detail, a good memory, and approaching challenges and failures with a “what are our options here” attitude is the only way to survive.” – I just loved this statement.

    To find out ‘What are our options here?’ is the main challenge in a QA person’s life. And whenever we miss an option (we may call it a scenario or case), that is the time when life feels critical.

    Also, when the application becomes very large, with thousands of features and functionalities, it becomes hard for a QA person to remember every detail.

    Thanks, nice article.

  3. Good article. The problem faced by any of these methods is that they are often selected, not for appropriateness to the product, but by executive staff who just want the fastest cycles. While Waterfall can take longer and Agile can appear to be faster, if they are selected for time-to-market (TTM) needs and not product requirements, you can end up with a chaotic mess.

    You can’t take a cake that bakes at 350 degrees for 30 minutes and bake it in 15 minutes at 700 degrees. Likewise, making a whole lot of smaller cakes that might bake faster is not going to make it easier to assemble them back into one cake. Not recognizing the BETTER method for a quality product and just going with Time To Market is a recipe for disaster.

    A hybrid approach selected and/or created by QA professionals is the better method.
