
Good Manual Testing vs Bad Manual Testing

Phil


In the age of automation and continuous delivery, manual testing is often dismissed as obsolete. Such a view is misguided at best, dangerous at worst. Manual testing provides the means to locate unforeseen issues and bugs, gather real-world usability feedback, and confirm that your application is ready for the limelight of production. It even saves developers' time in the heat of rapid iteration.

Let's explore why and when manual testing is useful, and offer some advice about when you're doing it right -- and wrong.


Manual and Automated Testing Are Not Opposed

Arguments for the efficacy of automated testing, particularly in the DevOps world, often frame manual testing as an opposing practice: a polar opposite that cannot (and should not) compete with test automation. That isn't our aim, nor our claim. Automated testing is, undoubtedly, an extremely powerful tool throughout the software development life cycle. Test-driven development and other test-focused practices have shown automated testing to be a linchpin of producing quality software.

So let’s dismiss the notion that automated and manual testing cannot coexist. In fact, when properly implemented, the two practices have a natural, symbiotic relationship. Our goal is not to besmirch automated testing, but instead to clarify the situations when well-run manual tests will benefit a project.

How Manual Testing Can Help

Before we dive into how manual testing can help build stronger software, let’s define it. Pedants have argued that manual testing isn't even an appropriate term in software development, hung up as they are on the notion that "manual" only refers to tasks performed by humans, without the assistance of machines or tools. Rather than being sticklers about terminology, let's just clarify that for our purposes, manual testing refers to any form of software testing where a person initiates and performs the test in human time. Thus, an exploratory test or the step-by-step execution of a test case would qualify as manual, but a person pressing a button to invoke a suite of Selenium tests would not.

Exploratory testing occurs when a tester approaches the software under test without a rigid test plan, effectively allowing test design and test execution to happen concurrently.

With that out of the way, let's take a closer look at just a few of the ways manual exploratory testing can be invaluable throughout the development life cycle:

Discovering Hidden Issues

When test cases are first created, an existing but undiscovered bug often limits functionality in an unforeseen way. For example, when a bug unintentionally hides a visual element, there is usually no test case covering that element. Even after the bug is fixed, some form of testing must confirm that the software behaves as expected, because the automated tests were never designed to cover this particular issue. Manual exploratory testing often discovers such masked defects, as the sketch below illustrates.
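To make the idea concrete, here is a minimal sketch, assuming Python and Selenium with a hypothetical checkout page, URL, and element IDs (none of them taken from a real product): the automated check keeps passing while a CSS regression silently hides an element that no assertion ever covers.

```python
# Minimal sketch: an automated check that keeps passing after a visual element
# has been hidden by a bug, because no assertion ever covers that element.
# The URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_checkout_completes():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/cart")                  # hypothetical URL
        driver.find_element(By.ID, "checkout-button").click()   # hypothetical ID
        assert "Order confirmed" in driver.page_source
        # Nothing here asserts that the "apply coupon" field is visible,
        # so a stylesheet change that hides it sails through the suite.
    finally:
        driver.quit()
```

An exploratory tester clicking through the same page would notice the missing coupon field immediately.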

Overcoming Cognitive Bias

Your developers, and to a certain extent your QA team, know how your software is “supposed to work.” Good exploratory testing approaches software with fresh eyes. Ideally, testers can be found from virtually any locale and hired with your particular demographic or regional requirements in mind. Whether this is localized to your area or globalized with users from across the planet, manual testing provides real human feedback from people who didn’t design your software, so their usage patterns more closely resemble your customers’.

Obtaining Comprehensive Device Coverage

Modern apps may run on a range of devices and platforms, all of which have multiple configuration options. Ideally, manual testing ensures that your software is analyzed across the spectrum of devices, browsers, operating systems, and so forth, that exist in the world. Simulations or emulations of such devices may give you “happy path” coverage on a small sample set, but are unlikely to cover the range of configurations your customers use.

Providing Real-World Usability Feedback

Testers, particularly those from crowdtesting services, are able to provide usability feedback that closely resembles feedback that your customers might have. It’s not always practical to solicit real customer feedback before a release, nor can your employees always look at your software through the customer’s eyes. Skilled testers can solve these problems.

Good Manual Testing Practices

Now that we've seen a few of the benefits that manual exploratory testing can bring to your project, let's examine some good ways to actually implement it.

Exploratory Testing of New Features

The most obvious use of manual testing is exploratory testing when new features are added to the system. If ever there is a moment in the software development life cycle when new bugs can crop up and cognitive bias can cloud your vision, it's when a new feature is released. Manual testers not only confirm that the feature works as expected; they also explore every facet of it, uncovering bugs or potential future issues that developers may not have considered. Ideally, this round of testing should be performed not by developers, but by QA professionals with an outside view.

Pre-release Manual Regression Testing

Performing manual testing as part of the regression cycle can be a huge benefit, particularly prior to a release. It puts fresh eyes on the product during this critical period, ensuring that every 'i' is dotted, every 't' is crossed, and there are no unforeseen problems your customers would otherwise experience after release. Even when automated regression tests have passed, the insight of professional manual testers is valuable: they often find visual problems and masked defects that automated tests miss.

Manual Test Cases During Rapid Changes

Many development teams have adopted Agile methodologies to take advantage of their benefits, including rapid iteration and release cycles. However, rapid change means many opportunities for functional bugs to creep in. Even when every change has an associated test case, it often makes sense to have human testers execute those tests rather than attempt to automate them, because automated functional tests are expensive to maintain under rapid change. Human testers are robust to small changes in your user interface or to renamed elements in your markup; automated functional tests are not, as the sketch below illustrates.
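Here is a minimal sketch of that fragility, again assuming Python and Selenium with a hypothetical URL, class name, and page text: the test is coupled to a naming convention in the markup, so a harmless rename breaks the automation even though a human tester would sail straight through.

```python
# Minimal sketch: a functional test coupled to markup naming conventions.
# Renaming the button's class (e.g. to "btn-primary-signup") fails this test,
# although the feature still works perfectly for a human tester.
# The URL, class name, and page text are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_signup_flow():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/signup")  # hypothetical URL
        driver.find_element(By.CSS_SELECTOR, "button.signup-primary").click()
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()
```

Multiply that by every selector in a large suite, and a week of rapid UI iteration can leave more time spent repairing locators than testing the product.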

Bad Manual Testing Practices

While manual testing has its place, it can sometimes do more harm than good. This list is by no means exhaustive, but these examples should give you a few basic guidelines for avoiding bad manual testing practices.

Developer-Driven Manual Test Case Execution

It's an easy trap to fall into as a manager -- when quality is down and deadlines are looming, the entire development team is told to perform manual testing for the entire product, for every release, until quality improves. Repeat this for a few cycles, however, and before you know it the test suite is a mess, the team is dispirited, and developers are spending more time running manual test cases than writing new code or refactoring existing code to make it more testable. This is particularly problematic on projects that lack sufficient unit and integration tests. Rather than forcing an unsustainable form of developer-driven manual testing, let developers focus on refactoring and improving code to make it more testable, while professional testers, via crowdtesting or otherwise, perform any necessary manual testing.

Repetitive QA-Driven Manual Test Execution

Even in organizations with the resources for dedicated quality assurance personnel, asking the QA team to repeatedly perform manual testing is often a waste of their time and effort, especially when other QA tasks need attention. In such cases, the code is most likely not written in an effectively testable way, or unit tests may not exist in the first place. Improving the automated workflow and configuring the test infrastructure are tasks well suited to the QA team; manual testing, on the other hand, can be executed by crowdtesters outside the organization, freeing quality assurance staff to improve the process elsewhere.

Improper Bug Reporting and Tracking

Improper reporting is a very common problem when organizations run “bug bashes” among their non-technical staff or early users. Manual testing does virtually no good if there isn't a strong culture and infrastructure for reporting and tracking bugs. Testers, whoever they are, should have a centralized system for tracking bugs, and everyone using it should understand the proper syntax and etiquette for writing up reports. Most importantly, every bug report should include clear steps to reproduce the bug, as in the example below. Otherwise, developers waste their time reproducing bugs when they should be fixing them.
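As a rough illustration only, with every detail invented for the example, a reproducible report might look something like this:

```
Title:       "Apply coupon" field missing on cart page (Chrome, Windows 11)
Severity:    Major
Environment: https://example.com/cart, build 2.4.1 (hypothetical), 1920x1080
Steps to reproduce:
  1. Log in as a standard user.
  2. Add any item to the cart.
  3. Open the cart page.
Expected result: An "Apply coupon" field appears below the order total.
Actual result:   The field is not rendered; the total sits directly above the checkout button.
Attachment:      screenshot of the cart page
```

A developer reading this can reproduce the bug in under a minute instead of guessing at the tester's environment and intent.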

Ultimately, manual testing is a powerful tool for software development, but it must be implemented smartly. Let developers and the QA team focus on what they do best (writing clean code and building automated tests), while placing the burden of proper manual testing on professional testers. With QA services such as crowdtesting providing extra support on the manual end, and developers and in-house QA covering the automated end, the project stays naturally balanced, and you can rest assured that your software is ready for production release into the hands of your customers.
