These best practices can help you get the most from your functional testing.
No modern automated software testing suite would be complete without the inclusion of automated unit tests. Whereas functional testing is only concerned with the what, unit testing focuses on the how, by verifying that each piece of code behaves as expected.
Since unit tests are most commonly written by the developers themselves, no specialized testing skills are needed to create them, and their creation scales naturally with the size of your development team. Crucially, the act of creating these tests, separately from their execution, should help improve the quality of your code. Then, when you automate the execution and analysis of unit tests, you have constant insight into the health of your codebase.
Unit tests give immediate, specific feedback about what's going on in the code, thereby helping to confirm that the code performs the tasks it's intended to. While improperly designed and implemented unit tests can be a dangerous crutch, properly executed tests should not produce any false positives, so unit tests will probably get you the most bang for your automation buck.
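A minimal sketch of what this looks like in practice, using Python's built-in unittest framework (the discount function is a hypothetical example, not from any particular codebase):

```python
import unittest

# A small function under test: a hypothetical pricing helper.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(80.0, 25), 60.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)
```

Running a suite like this on every commit (for example, via `python -m unittest` in a CI job) is what turns unit tests from a one-time check into the constant health signal described above.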
Integration testing is a means by which multiple, independent components of the system are tested with one another. While these components may be independent, that isn't to suggest they are not interconnected within the context of the software. For example, a database layer is independent of the functional code, but the two must work together to ensure changes made by a user are reflected in the database, and vice versa.
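The database example above can be sketched as a small integration test. This uses an in-memory SQLite database as the data layer; the `UserStore` class and `update_email` function are illustrative stand-ins for real application components:

```python
import sqlite3
import unittest

class UserStore:
    """Thin data layer over a SQLite database (illustrative)."""
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT)"
        )

    def add(self, email: str) -> int:
        cur = self.conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
        self.conn.commit()
        return cur.lastrowid

    def get_email(self, user_id: int):
        row = self.conn.execute(
            "SELECT email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

def update_email(store: UserStore, user_id: int, new_email: str) -> None:
    """Functional code that relies on the data layer."""
    store.conn.execute("UPDATE users SET email = ? WHERE id = ?", (new_email, user_id))
    store.conn.commit()

class UserEmailIntegrationTest(unittest.TestCase):
    def test_change_is_reflected_in_database(self):
        store = UserStore(sqlite3.connect(":memory:"))
        user_id = store.add("old@example.com")
        update_email(store, user_id, "new@example.com")
        # The two independent components must agree on the result.
        self.assertEqual(store.get_email(user_id), "new@example.com")
```

The point of the test is not either component in isolation, but the handshake between them: a user-facing change must round-trip through the database correctly.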
Proper integration testing is a necessity for all but the simplest applications. Most modern projects will rely on many integrations and third-party components, including data layers, content delivery networks, email services, benchmarking/load testing, deployment infrastructure, analytics... the list goes on and on. While the overall testing demands will differ from one integration to the next, it's critical that all integrations function as expected, particularly when new builds are pushed or massive changes are coming down the pike.
API integration testing is an important subset of integration testing, which focuses on testing your software's integration with any third-party APIs you may be using. Many service APIs provide their own language-specific software development kit (SDK), which can be used directly in code to access the service. A far more common implementation, however, is for the service to expose a representational state transfer (RESTful) web service to clients. By sending appropriate data to specific URIs of the service with a valid HTTP request method (GET, POST, PUT, DELETE, and so on), the client receives a response indicating either a successful transaction or, when applicable, an error.
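A sketch of this request/response check, using only the standard library. Here a local stub server stands in for the third-party service (in a real suite you would point the same check at the provider's sandbox endpoint); the `/v1/status` path and JSON shape are assumptions for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubAPIHandler(BaseHTTPRequestHandler):
    """Local stand-in for a third-party REST service."""
    def do_GET(self):
        if self.path == "/v1/status":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

def check_service_status(base_url: str) -> dict:
    """Send a GET request and decode the JSON response."""
    with urllib.request.urlopen(f"{base_url}/v1/status") as resp:
        assert resp.status == 200, f"unexpected status {resp.status}"
        return json.loads(resp.read())

# Spin up the stub on an ephemeral port and exercise the integration.
server = HTTPServer(("127.0.0.1", 0), StubAPIHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = check_service_status(f"http://127.0.0.1:{server.server_address[1]}")
server.shutdown()
```

The same pattern scales up: each API your software depends on gets a small battery of checks that assert on status codes, response bodies, and error behavior.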
Load testing is a crucial benchmark for many software projects. Accurately measuring both normal and peak load for your system ensures your team can better plan for production launch.
Automated load testing tools allow load testing to be performed on demand, if not constantly. Load testing is by definition automated, since it involves simulating or replaying traffic to an application at high speed. This makes load testing a useful status check. By performing nightly load tests, you'll often catch non-functional problems that you may have introduced during development the day before. If, for example, your application's ability to serve traffic drops by 25% from one day to the next, you've probably introduced something into your code that you need to fix. Consequently, load testing is important for most projects where scalability or performance is (or will be) a factor.
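The nightly status check described above can be sketched as a small harness. The request handler here is a simulated stand-in (a real load test would issue HTTP requests against a staging environment), and the baseline threshold is illustrative:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> float:
    """Stand-in for one request to the application; returns latency in seconds."""
    start = time.perf_counter()
    sum(i * i for i in range(5_000))  # simulated work
    return time.perf_counter() - start

def run_load(total_requests: int, concurrency: int) -> dict:
    """Fire requests concurrently and summarize observed latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(lambda _: handle_request(), range(total_requests)))
    return {
        "requests": total_requests,
        "median_s": statistics.median(latencies),
        "p95_s": latencies[int(len(latencies) * 0.95) - 1],
    }

summary = run_load(total_requests=200, concurrency=8)

# Nightly regression gate: fail the run if latency drifts past a stored
# baseline, surfacing the kind of day-over-day drop described above.
BASELINE_P95_S = 1.0  # illustrative threshold; tune to your system
assert summary["p95_s"] < BASELINE_P95_S, "possible performance regression"
```

Persisting each night's summary and comparing against the previous run is what turns a one-off benchmark into the early-warning signal the paragraph describes.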
Functional testing helps to verify that your software is doing what it should, without worrying about how it does it. This often takes the form of verifying the functionality of the interface or other end-to-end components, without the need to dive into the code that powers it.
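A minimal black-box sketch of that "what, not how" idea: the test below drives a tiny WSGI-style application purely through its interface and asserts on the response, never inspecting the code behind it. The app itself is a hypothetical stand-in for a real application entry point:

```python
def app(environ, start_response):
    """A tiny WSGI application (hypothetical stand-in for the real system)."""
    if environ["PATH_INFO"] == "/health":
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"healthy"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

def call(path: str):
    """Drive the app exactly as a server would, capturing the response."""
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    body = b"".join(app({"PATH_INFO": path}, start_response))
    return captured["status"], body

# Functional assertions: only the externally visible behavior matters.
assert call("/health") == ("200 OK", b"healthy")
assert call("/missing")[0] == "404 Not Found"
```

Because the test only touches the public interface, it keeps passing across internal refactors, which is exactly the property that makes functional tests worth automating.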
Automated functional testing expands on these benefits, by allowing your organization to frequently execute functional tests and verify the results, often (theoretically) without human intervention. Teams can develop a suite of appropriate functional tests, plug them into an automated tool or execution script, and rest easier knowing that all functional tests will be performed automatically.
Of course, automated functional testing isn't a cure-all for every potential functional problem. Such tests have a tendency to be flaky when not properly maintained or monitored, particularly during rapid changes or leading up to a new feature release. And automated functional tests only find the issues for which you've written test cases. As James Bach has argued, this isn't precisely "testing"; it's checking, which is valuable as far as it goes, but it is one tool in a bigger toolbox.
Automated regression testing is something of a holy grail. Its appeal is obvious: as the software development lifecycle progresses and new features are added, executing necessary regression tests manually becomes a major burden when performed in-house. So many organizations look to automation as a method to improve regression testing efficiency.
While many people associate regression test automation with UI-driven functional tests, an ideal automated regression testing plan combines unit tests, integration tests, UI-driven functional tests, and human intervention. It is no doubt possible to create automated tests that exercise the UI and user functionality of the product, and many libraries exist to help. However, testing at the UI level should come as a last resort in terms of priority within automated regression testing. By its very nature, the UI is extremely volatile: even when the functionality behind the UI is still working correctly, minor changes in the UI itself can cause test failures. Therefore, even in the most well-designed test suites, UI tests will frequently fail during automated regression testing and require further analysis or manual testing to resolve.
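One common below-the-UI approach is a golden-master regression check: compare current outputs against a stored snapshot of known-good results. The pricing function and snapshot below are illustrative; in practice the snapshot would live in a checked-in fixture file:

```python
import json
import unittest

def quote_price(items: int, member: bool) -> float:
    """Hypothetical pricing logic under regression protection."""
    price = items * 4.5
    if member:
        price *= 0.9
    return round(price, 2)

# Known-good outputs captured from a previous release
# (in practice, loaded from a fixture file in version control).
GOLDEN_SNAPSHOT = json.loads('{"3,false": 13.5, "3,true": 12.15, "10,true": 40.5}')

class PricingRegressionTest(unittest.TestCase):
    def test_matches_golden_snapshot(self):
        for key, expected in GOLDEN_SNAPSHOT.items():
            items, member = key.split(",")
            actual = quote_price(int(items), member == "true")
            self.assertEqual(actual, expected, f"regression in case {key}")
```

Because this checks behavior beneath the UI, it stays stable through cosmetic interface changes, which is why snapshot-style tests sit higher in the regression priority order than UI-driven ones.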
Mobile testing presents a particularly challenging task, as it requires functional, compatibility, performance, UI/UX, and security testing, all rolled into one. By and large, automated mobile testing expands on the requirements of typical automated functional testing, but with the additional demands of running on real (or simulated) devices.
Automating some of these mobile testing tasks can dramatically improve turnaround time and better prepare your software for production. However, it's important to recognize that mobile test results tend to be flaky, given the challenges inherent in automated functional testing, combined with the prevalence of running automated mobile tests on emulators rather than real-world devices. The actionable feedback these tests provide, therefore, is only as good as the reliability of the systems they're running on.
Automated crowdtesting allows for human-powered testing to be performed at the same efficiency and speed as other forms of automated testing. test IO, for example, allows customers to trigger tests via a REST API. This enables automated follow-up in areas where regression checks have shown problems, as well as a final exploratory test when automated tests have come back passing, or a first test in a production environment to make sure all is working as expected.
This combination of automation with human-powered testing provides a unique benefit when used in conjunction with other forms of automated testing. Automated crowdtests are executed by real-world users, using real-world devices, providing real instructions on how to reproduce the bug in a real customer environment. With automated API integrations, crowdtest results can be delivered rapidly and automatically. This ensures that information is actionable and arrives in time for you to make an appropriate determination about whether to release the software to customers.
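A sketch of what triggering a crowdtest run from a CI pipeline could look like. The endpoint path, payload fields, and token header below are assumptions for illustration, not test IO's actual API contract; consult your provider's API documentation for the real details:

```python
import json
import urllib.request

def build_trigger_request(base_url: str, api_token: str, test_id: str):
    """Construct (but do not send) a POST request to start a crowdtest run.

    All endpoint and payload details here are hypothetical.
    """
    payload = json.dumps({"test_id": test_id, "environment": "staging"}).encode()
    return urllib.request.Request(
        url=f"{base_url}/tests/{test_id}/runs",
        data=payload,
        method="POST",
        headers={
            "Authorization": f"Token {api_token}",
            "Content-Type": "application/json",
        },
    )

req = build_trigger_request("https://api.example.com/v1", "SECRET", "smoke-42")
# A CI job would send this with urllib.request.urlopen(req), typically
# after automated regression checks have passed, so human testers pick up
# exactly where the machines left off.
```

Wiring this step to the end of a green CI run is what delivers the "final exploratory test" described above without anyone having to remember to request it.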