Continuous Integration, Continuous Delivery & Customer Success
The disconnect between QA and DevOps often stems from misconceptions about the role of quality assurance in the modern software development life cycle. Even as continuous integration servers and continuous delivery practices check your code for errors, they don’t ensure its quality, if quality is defined from your customer’s perspective rather than your code’s: “quality is fitness for your customer’s purpose.”
Think of it this way: quality assurance isn't just about finding bugs; modern QA is a fitness regimen for both your application and your organization.
Continuous delivery is undoubtedly a powerful tool when used properly. Releasing new, fully functional builds at the drop of a hat is an alluring prospect, but it can be a dangerous siren song if poorly implemented. With fully automated workflows, every build that passes automated testing is pushed to staging (or even production), but how do you decide whether customers should actually see it? The delivery of a final product to customers is a business decision, not a technical one, and decisions that critical are best made with multiple inputs that together provide a more comprehensive view of the risks.
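To make that concrete, here is a minimal sketch in Python of what a release gate can look like when the final push to customers is treated as a decision with multiple inputs rather than an automatic consequence of green builds. The names and inputs here are hypothetical, not a prescription for any particular pipeline:

```python
from dataclasses import dataclass


@dataclass
class ReleaseInputs:
    """Hypothetical inputs a team might gather before releasing a build."""
    automated_tests_passed: bool   # CI results: unit, integration, etc.
    staging_verified: bool         # build deployed and smoke-tested on staging
    exploratory_qa_signoff: bool   # human testers' judgment of customer fitness
    product_owner_approval: bool   # the business decision to ship now


def ready_to_release(inputs: ReleaseInputs) -> bool:
    # Automation can block a bad build, but it cannot say "yes" on its own:
    # the human and business inputs sit alongside the green checks.
    return all([
        inputs.automated_tests_passed,
        inputs.staging_verified,
        inputs.exploratory_qa_signoff,
        inputs.product_owner_approval,
    ])


if __name__ == "__main__":
    build = ReleaseInputs(
        automated_tests_passed=True,
        staging_verified=True,
        exploratory_qa_signoff=False,   # humans haven't looked at it yet
        product_owner_approval=True,
    )
    print("Ship it!" if ready_to_release(build) else "Hold the release.")
```

The point of the sketch is simply that a green CI run is one input among several, not the release decision itself.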
Even sophisticated organizations like Google run into trouble with continuous delivery when human-oriented quality control practices are missing. In one case, the Google Consumer Surveys team pushed a build to production that contained an obvious visual bug: "a dancing purple pony." The cause of this major fumble? An incorrect CSS class name. The team at Google had implemented the right series of checks and balances to verify the integrity of the build and ensure nothing untoward was pushed to production:
- Monitoring
- Test Automation
- Continuous Integration
- Code Review
- Deploy to Staging
- QA Checklist
Everything came back green. All tests were successful, staging looked as expected, and even their QA checklist failed to notice the issue. In spite of all these checks, along with the dozens of metrics that failed to report any issues, none of their systems could identify this dancing purple pony sitting right there on the page. As Brett Slatkin, engineering lead on the project, puts it, "I don't know anything about how my customer perceives the app."
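To illustrate how every check can stay green while a visual defect slips through, consider a contrived example (not Google's actual code): a banner is rendered with a mistyped CSS class, so the page picks up the wrong styling, yet the functional test passes because it asserts what the page says, not what it looks like.

```python
# Contrived illustration: the banner carries a wrong CSS class name
# ("purple-pony" instead of the intended "promo-banner"). The markup is
# valid and the text is correct, so nothing fails the build.

def render_banner(message: str) -> str:
    # Bug: wrong class name, but no exception, no failed assertion.
    return f'<div class="purple-pony">{message}</div>'


def test_banner_shows_message():
    html = render_banner("Take our survey!")
    # The test checks behavior and content, not appearance...
    assert "Take our survey!" in html
    assert html.startswith("<div")
    # ...so CI reports green, and only a human looking at the rendered page
    # would notice the dancing purple pony.


if __name__ == "__main__":
    test_banner_shows_message()
    print("All tests passed.")
```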
In the old days of waterfall development and separate QA teams, Slatkin or his product manager had an independent source of information about his product's fitness for customers. Today he might not. In Google's case this is a trivial example for a relatively marginal product. But if (like most companies) your risks are higher and your level of automation lower, you need an independent source of truth about how your customers will perceive the app.
QA Supports Continuous Delivery
As we’ve seen, even the most diligent organizations can confuse good test results with actual fitness. If teams assume that automated tests, code review, and staging deployments are all it takes to be fully prepared to climb the mountain of a production release, it shouldn't come as much of a surprise when an unexpected fall sends the project plummeting back toward the bottom.
A handful of op-ed pieces and blog posts have appeared lately, suggesting that the need for QA is declining with the rise of DevOps practices. The core of this contention is that automated methodologies are now sufficiently advanced that human intervention is largely unnecessary in the release process; the speed and efficiency of computerized solutions win out over traditional QA methods, or so the theory goes.
Two counterpoints are critical to consider here:
- Automation and continuous-X practices have difficulty grappling with malleable aspects of a project. Components such as the UI and the mobile experience present numerous uncontrolled factors, which are challenging for automated tests to handle. Human testers, on the other hand, can easily evaluate and provide feedback on such elements.
- Customer goals must always be at the forefront of both development and testing practices. It's not always enough to go through your basic DevOps process, relying on automated code review, automated tests, and metrics to indicate the fitness of the software.
Ultimately, quality assurance should put the customer experience above all else, keeping the software itself in peak condition for the people who use it.
But how can you get a regular sense of your software’s fitness for the Big Game if you’re operating with the speed and efficiency of a DevOps team?
Best Practice: QA as a Fitness Regimen
Historically, quality assurance has been relegated to the back of the pack, as one of the very last picks in the pickup game that is the development life cycle. Waterfall methods are notorious for pushing testing and other quality control practices toward the end of the process, which often leads to a mad sprint to fix bugs and reach the finish line of release before the looming deadline.
Modern agile methodologies, rough and tumble by comparison with waterfall styles, are typically more test-oriented, ideally centering testing within the engineering team. Yet even with more diligent developer-driven testing practices in place, agile DevOps risks relegating customer-focused QA to an afterthought. Frequently exercising your product with real-world people and customers is a smart business decision, ensuring the system doesn’t get out of shape. An ideal world of software development combines automation with human judgment, informing better business decisions, including the decision about when your software is fit and ready to ship to customers.
Among DevOps-savvy organizations, we see three different patterns for integrating human-driven QA and DevOps (a sketch of how each pattern might be triggered in a pipeline follows the list):
- The Tryout: A deep exploratory test, carried out before significant new features are merged into the master codebase. Typically these tests occur as part of a pull request process and are designed to be thorough, probing edge cases and ensuring that unit and integration tests have covered the system adequately.
- The Pre-Game Shakedown: Teams that follow a disciplined, time-driven release cycle often schedule their tests to kick off at a predetermined time, usually around 24 hours before their scheduled release push. The focus of these tests is often wide rather than deep: coverage of operating systems, devices, and browsers that their internal teams and automation may not hit.
- The Weekly Scrimmage: Teams that release at will, rather than on a predetermined schedule, often find it useful to conduct time-constrained, or chartered, tests regularly each week. In this way, they gain insight into drift that may have crept into their code, and especially into hotspots that their automated tests are not covering adequately. These hotspots can either be converted into automated tests, or can guide functional testing in focused areas before pull requests or production pushes.
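Here is the sketch mentioned above: one way these three cadences might be wired into an automated workflow. The event names and the `schedule_test_session` helper are hypothetical placeholders, not part of any specific CI system:

```python
from datetime import datetime, timedelta


def schedule_test_session(kind: str, scope: str, deadline: datetime) -> None:
    """Hypothetical helper that would notify human testers (in-house or crowd)."""
    print(f"Requesting {kind} test ({scope}) due by {deadline:%Y-%m-%d %H:%M}")


def on_pipeline_event(event: str, now: datetime) -> None:
    # The Tryout: deep exploratory testing before a significant feature merges.
    if event == "pull_request_opened":
        schedule_test_session("exploratory", "deep, edge cases", now + timedelta(days=2))
    # The Pre-Game Shakedown: wide device/OS/browser coverage ~24h before release.
    elif event == "release_scheduled":
        schedule_test_session("coverage", "wide: devices, OSes, browsers", now + timedelta(hours=24))
    # The Weekly Scrimmage: a recurring chartered test to catch drift and hotspots.
    elif event == "weekly_cron":
        schedule_test_session("chartered", "time-boxed, focused on hotspots", now + timedelta(hours=8))


if __name__ == "__main__":
    on_pipeline_event("release_scheduled", datetime.now())
```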
Crowdtesting: Human QA Testing at DevOps Speed
At test IO, we’re a crowdtesting company, so we see companies that have integrated crowdtesting into their release process. It’s certainly possible to follow these practices solely with an in-house team, and better still to do it with an in-house team that’s supported by crowdtesters. One key value of crowdtesters in this equation is speed. The on-demand model at test IO means that we can bring the appropriate resources to bear on a project at speeds that are difficult for most companies to achieve purely with internal resources.
The integration of a QA process into DevOps helps you manage risk and ensure that your application is both stable and robust. Bugs will be found earlier and more often. Most importantly, it keeps your application fit for game time. Just as a great athlete is not defined solely by VO2 max or resting pulse, your app isn’t defined by its DevOps test scorecard alone. Human insight provides that extra level of fitness through continuous improvement, making sure you’re training for the real goal: building great software for your customers.
GET IN TOUCH
Learn More About Test IO
Our testing experts stand ready to address your most challenging QA initiatives. If you’re interested in becoming a freelance tester, click here.