How can you reduce risk when implementing agile development in your SDLC?
Software testing isn’t a one-size-fits-all proposition. Many people readily accept this for other testing techniques, yet they assume that crowdtesting simply means finding a large number of people and pointing them at an app to find bugs. That’s one flavor of crowdtesting, but it’s not what creates lasting value for customers. That value comes from understanding what customers want each time they run a test, matching those requirements to the right set of testers, and displaying the results in a useful way.
The matching of customers’ intent to testers’ strengths is an evergreen project at test IO, and unfortunately as a customer you don’t immediately see a pretty user interface that shows the magic happening.
But we are making substantial changes to the interface that you see when you kick off a test, and increasingly those changes will also manifest themselves in how we select the testers and how we provide results to you.
Here’s one big example: as we announced today, we now offer Rapid Tests -- exploratory tests where results can come back very fast, in some cases as quickly as an hour. There’s a new single-step wizard for that, which we hope will encourage you to run more such tests: when you’re merging code into the main branch, when you’re making a small change and then pushing to production, or maybe just to make sure you didn’t mess something up when you changed some CSS classes.
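Because those moments (merging to main, pushing to production) usually happen inside a CI pipeline, that’s a natural place to kick off a Rapid Test automatically. The sketch below is purely illustrative: the endpoint URL, payload fields, and token variable are assumptions for the sake of the example, not test IO’s actual API.

```python
# Hypothetical sketch of triggering a Rapid Test from a CI step.
# The endpoint, payload shape, and TESTIO_TOKEN variable are
# illustrative placeholders, not a real API contract.
import json
import os
import urllib.request

API_URL = "https://api.example.com/rapid-tests"  # placeholder endpoint


def build_rapid_test_request(target_url: str, note: str) -> dict:
    """Assemble a payload for a speed-focused exploratory test."""
    return {
        "test_type": "rapid",           # exploratory, speed over coverage
        "target": target_url,           # the build or staging URL to test
        "instructions": note,           # e.g. "smoke check after CSS change"
        "severity_filter": "critical",  # Rapid Tests prioritize critical bugs
    }


def submit(payload: dict, token: str) -> urllib.request.Request:
    """Build the authenticated POST request (send it with urlopen in CI)."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    payload = build_rapid_test_request(
        "https://staging.example.com", "post-merge smoke check"
    )
    req = submit(payload, os.environ.get("TESTIO_TOKEN", "dummy"))
    print(req.full_url)
```

A pre-production pipeline would call this right after the merge step and gate the deploy on the results coming back clean.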
Of course, everyone wants everything faster, so you might ask why you wouldn’t choose a Rapid Test every time. Here’s where the behind-the-scenes magic is happening: for a Rapid Test, we know you’re primarily interested in speed and in critical bugs; you’re not looking to test all of the edge cases, or for comprehensive coverage of every device you might want to support -- you’re looking for a fast answer to the question, “Does this still work?” So we match testers who are well-suited to that task and give them precise instructions to help them accomplish it.
There are other testing types too: coverage tests, for when you want to make sure to cover as much of your support matrix as possible; focused tests, for when you’ve built something new and you really want to wring all of the bugs out of it; usability tests, when you want feedback; and custom tests, where you can twist all of the knobs yourself the way you always have. Software testing contains multitudes!
Increasingly, you’ll see a divergence in how these tests behave and how we display the results to you. We’re very excited by the initial feedback we’ve gotten as we’ve iterated on this with customers. There’s one thing you can do that would really help us. When you get your test results, you’ll see a little poll in the interface that asks how valuable the test results were for you. Please fill this out! This is one of the signals we use to make sure you’re getting the kind of results you needed.