In an insightful post from this past November, Bas Dijkstra, test automation trainer and consultant, contributed to the ongoing conversation around “how much testing is enough testing,” specifically focusing on automation.
In “Making a case for less automation,” Bas argues that while automation needs and expectations continue to grow, it might be beneficial to take a more critical approach to automated testing: sometimes less is more if a smaller suite means a higher quality one.
Specifically, he proposes that you should ask yourself, “Are we comfortable with not having the information that this check provides?” If the answer is yes, then it’s likely not worth the time to automate. Doing so allows you to maintain a suite of the most pertinent automated checks, eliminating those that don’t add enough value (and maybe aren’t even worth your time).
This is a fantastic argument, but I’d like to take it one step further by adding another lens: company size and capability. Many larger organizations (with ample monetary and personnel resources) are “automation happy,” prone to throwing automated checks around like confetti. Why? Because they can. On the other side, there are smaller companies that don’t necessarily have the resources -- financial, personnel, technical -- to produce a comprehensive automation suite.
However, these smaller companies often see a comprehensive suite as the only legitimate way forward. That all-or-nothing mindset limits their ability to implement checks in the short run and may stymie their quality assurance efforts in the long term.
Moreover, some companies that are not inherently “tech” companies may lack the technical prowess or roadmap to establish satisfactory internal QA processes. An e-commerce company that launches on an off-the-shelf platform may initially spend time and resources growing operational and marketing personnel before realizing that it’s outgrowing its initial web platform. At this point, it may struggle to build its own proprietary platform or make internal technical hires. Forcing a case for automation, let alone more automation, is an uphill battle. In this situation, adopting manual testing -- not even regression testing, but more simply, exploratory testing -- can help to bridge the gap between 0 and 1. Once a testing process takes shape, it becomes much easier to adopt a more robust automation framework. But again, per Bas’ argument, “robust” sits on a sliding scale that should be adjusted to current needs.
The reason I speak of this evolution of testing -- from no testing to exploratory testing to simple automation to more comprehensive automation -- is to further elucidate the point that testing, automation in particular, is not a plug-and-play solution; it is a gradual journey that, like an individual automated check, begs the question, “Are we comfortable not having the information that this [testing effort] provides?”