Product teams partner with software development teams or consulting organizations, which choose a software development process to create and implement solutions. That choice of process drives the question of what the most effective QA approach is.
Let us discuss three of the most popular choices of Software Development Lifecycle, or SDLC: Waterfall, Agile, and Extreme Programming (or XP).
All of these processes can produce working software; some are better suited to large or small projects in large or small firms. The question heating up the blogs and consulting lines these days is: how does QA differ across these approaches?
Every SDLC takes in product requirements of some kind and generates a software product as a result. What a system should do is described from two perspectives: the business (non-functional) perspective and the technical (functional) perspective.
Examples of non-functional requirements: the login page contains search-friendly keywords and meta tags (a search engine optimization, or SEO, requirement); a credit card purchase transaction completes in less than 7 seconds (a service level agreement, or SLA, requirement); the branding and color scheme of the product purchase page matches that of the product description page (a brand integrity requirement).
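An SLA requirement like the 7-second purchase transaction above translates naturally into an automated timing check. A minimal sketch, assuming a hypothetical `complete_purchase` stand-in for the real checkout flow:

```python
import time

SLA_SECONDS = 7.0  # the purchase-transaction SLA

def complete_purchase():
    """Stand-in for the real credit card purchase flow (hypothetical)."""
    time.sleep(0.1)  # simulate the transaction doing work
    return "ok"

def test_purchase_meets_sla():
    start = time.monotonic()
    result = complete_purchase()
    elapsed = time.monotonic() - start
    assert result == "ok"
    assert elapsed < SLA_SECONDS, f"purchase took {elapsed:.2f}s, SLA is {SLA_SECONDS}s"

test_purchase_meets_sla()
print("SLA check passed")
```

In a real suite the timing would come from a performance tool measuring the deployed system, not an in-process timer, but the pass/fail shape of the requirement is the same.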
Examples of functional requirements: a shopping cart can hold up to 50 units of a single product at once; a news article displays 3 related stories and 2 advertisements; an application log keeps up to 3 days' worth of transactions before it is retired to a data warehouse.
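Functional requirements like the 50-unit cart cap become direct behavioral tests. A minimal sketch, using a hypothetical `Cart` model invented here for illustration:

```python
class Cart:
    """Minimal cart honoring the 50-units-per-product requirement (hypothetical model)."""
    MAX_QTY = 50

    def __init__(self):
        self.items = {}

    def add(self, sku, qty):
        new_qty = self.items.get(sku, 0) + qty
        if new_qty > self.MAX_QTY:
            raise ValueError(f"cannot hold more than {self.MAX_QTY} of {sku}")
        self.items[sku] = new_qty

cart = Cart()
cart.add("WIDGET-1", 50)       # exactly at the limit: allowed
try:
    cart.add("WIDGET-1", 1)    # the 51st unit: rejected
except ValueError as err:
    print("rejected:", err)
```

Note the test exercises both the boundary (50 is accepted) and the violation (51 is rejected); boundary cases are where functional requirements most often break.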
Once we understand both the nature of requirements and the value in classifying them (having individuals or groups manage distinct groups of like requirements saves time and effort), our question of how to do QA for different SDLCs is further complicated: project managers, product owners, and yes, even QA managers are often uncertain how these different software development processes are designed to interact with a QA group. Opinions on this subject are rampant while cause-and-effect analysis is unfortunately rare; opinions offer a limited (and often personal or slanted) perspective but seldom answer the question, "Why?"
Let us answer the Why by examining the three SDLCs and the cause-and-effect reasons each is tightly or loosely coupled to a QA process. Yes, Virginia, there is QA for all of these SDLCs.
A waterfall environment hands a QA group large blobs of new functionality, usually after weeks, months, or years (yipes!) of hidden development. A waterfall approach involves a large number of changes all at once, and these changes require a dedicated effort to manage and control the product requirements: one or more business system analysts (BSAs) are employed to manage and track what the product is "supposed" to do. BSAs keep a book of requirements up to date, and QA digests these requirements as food for testing the software build that is eventually delivered to them. The QA cycle involves authoring manual tests to match and cover the product requirements, authoring automated scripts to test the product features, updating regression scripts to cover the defects found in past builds, and running performance analysis on the build. There are a number of ways of specifying how this is done, but in general this approach integrates well with waterfall and requires larger toolsets to track test cases, testing results, performance testing, and automated scripting. Waterfall, and its not-really-Agile cousin Iterative Programming, requires a heavy investment in tools to get the job done because of the copious amount of work that needs to be tracked. QA is loosely coupled to development because the size of the change package is so large and the frequency of deployment is so small.
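The heart of that waterfall tooling is traceability: every requirement in the BSAs' book should map to at least one authored test case. A minimal sketch of such a coverage check, with hypothetical requirement and test-case IDs:

```python
# Hypothetical traceability data: the BSAs' requirements book and the
# QA team's authored test cases, each tagged with the requirements it covers.
requirements = {
    "REQ-001": "cart holds up to 50 units of a product",
    "REQ-002": "purchase completes in under 7 seconds",
    "REQ-003": "article shows 3 related stories",
}
test_cases = {
    "TC-10": ["REQ-001"],
    "TC-11": ["REQ-002", "REQ-001"],
}

# Any requirement no test case claims is a coverage gap to flag before the build lands.
covered = {req for reqs in test_cases.values() for req in reqs}
uncovered = sorted(set(requirements) - covered)
print("uncovered requirements:", uncovered)  # → ['REQ-003']
```

Commercial test-management suites do exactly this bookkeeping at scale, which is why waterfall QA leans on heavier toolsets.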
An Agile environment will usually employ a methodology that adopts Agile principles, such as Scrum. Scrum is a structure set up to uphold Agile principles such as early and continuous delivery of software, an intimate, interdependent work environment, and continuous evaluation of technical excellence. Deployments that arrive early and often mean far less time, if any, for QA, so QA needs to reconstitute its approach to testing. QA attends commitment and estimation sessions with developers, where it both receives instruction and education about features to be included in the build and asserts its opinion on testing-specific concerns that may surface from the feature set under consideration. QA focuses on writing automated scripts, usually using an open source test framework, against the software product as the features are being built. QA prepares a performance test and adjusts the navigation scripts and virtual-user balance ratio as necessary based on its estimation of the features in that iteration. When the software is delivered, QA often has only a few days to complete testing rather than weeks or months. The key factors in increasing testing coverage under Agile are staffing one or two people for manual cross-browser testing of the build, a minimum of two technical QA resources on the team to maintain the performance, automation, and automation-regression script systems, two or more subject-matter experts (SMEs) or business champions, and an application development manager who understands Scrum. QA is tightly coupled to development because the size of the change package can be medium to small, but the frequency of deployment is often at least once a month.
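Those automated scripts written "as the features are being built" are typically small, fast checks that can run in the short testing window when the iteration's build lands. A minimal pytest-style sketch (pytest assumed as the open source framework; the feature function is a hypothetical stand-in):

```python
# A pytest-style check written during the iteration, before the build is delivered.
# `related_stories` stands in for the news-article feature under development.

def related_stories(article_id):
    """Hypothetical feature: return the related stories for an article."""
    return [f"story-{article_id}-{n}" for n in range(1, 4)]

def test_displays_three_related_stories():
    # functional requirement from this iteration: an article displays 3 related stories
    assert len(related_stories("a1")) == 3

test_displays_three_related_stories()  # pytest would collect and run this automatically
print("iteration check passed")
```

Because such tests are cheap to run on every build, they are what lets QA keep pace with a monthly (or faster) deployment cadence.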
An Extreme Programming (XP) environment is similar to Agile but delivers software many times a month, often many times a week, to a production environment. XP processes can, in theory, build and deliver working product whenever needed. QA is incredibly tightly coupled to development because the level of communication and understanding is very high. Only the most technical QA Managers and QA staff members will survive in XP. QA staff must have a working knowledge of shell script, Ruby/Python, and the deployment process, as well as SEO and ad-ops for web software. QA will usually work entirely open source in these environments, implementing tools in a "fast and loose" manner as needed. Changing direction completely is common. QA must write the minimal set of functions required to provide an automated test bed, code coverage metrics, manual cross-browser testing, and performance analysis, and be able to do so in hours rather than days. Attention to detail, a good memory, and approaching challenges and failures with a "what are our options here" attitude are the only way to survive.
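That "minimal set of functions in hours, not days" often takes the shape of a tiny post-deploy smoke harness. A sketch under stated assumptions: the checks are hypothetical stand-ins (in practice each would hit the live deployment), and the time budget is illustrative:

```python
import time

# Hypothetical post-deploy smoke checks; in XP these must be written and
# kept fast, so each is a small function returning True/False.
def check_homepage():  return True   # e.g. the homepage returns HTTP 200
def check_login():     return True   # e.g. a session cookie is issued
def check_checkout():  return True   # e.g. a test order is accepted

def run_smoke(checks, budget_seconds=60):
    """Run every check, and fail loudly if the suite itself is too slow to rerun per deploy."""
    start = time.monotonic()
    failures = [check.__name__ for check in checks if not check()]
    elapsed = time.monotonic() - start
    assert elapsed < budget_seconds, "smoke suite blew its time budget"
    return failures

failures = run_smoke([check_homepage, check_login, check_checkout])
print("failed checks:", failures)  # → []
```

The design choice matters: a suite this small can run on every one of those many-times-a-week deployments, which is precisely what keeps QA tightly coupled to an XP team.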
Interestingly, QA approaches successful in one SDLC would be seen as incorrect or inappropriate in another. What is appropriate for waterfall can be a poor choice for Agile, and vice versa. The selection of tools and the decision of when and how to test software depend heavily on the SDLC employed by the development team and on the delivery and flexibility requirements of the product team.
Being able to think at this meta level is a critical success factor for QA Managers, product managers, project managers, and CIOs as they consider how to most effectively create their software product and deliver it to their customers. In this industry, the teams that can apply mindful reasoning and deliver accurate metrics will be the ones that win.