Exploratory testing tends to attract people who like investigating problems, solving puzzles, and discovering patterns. It’s intellectually demanding work, and it requires judgment. As with most demanding work, some people are better at it than others, and you get better at it with practice, training, and feedback.
In other words, it’s not clickwork. You wouldn’t ask untrained people from Mechanical Turk to do it -- or if you did, you wouldn’t get very good results. And you can’t simply automate it, though like most modern work, you can make it more efficient with good tools.
Unfortunately -- perhaps because crowdtesting also includes executing scripted tests, which can be done by unskilled people or even automated -- I’m not sure everyone realizes that crowdtesting is not clickwork.
That’s one reason I wrote this article for TechCrunch. Maybe we need to play around with the language here to make it more human-friendly. “Crowdtesting” is an odd word, because it suggests that the work is done by a “crowd” (like the wave at a ballgame), when really it’s done by individuals. Sometimes we use the phrase “QA as a service” or “testing as a service.” But somehow these phrases aren’t human enough either. It’s really more like “judgment as a service” or “insight as a service.” But who buys that?
Anyway, since we’re in a business where human judgment matters, we want our customers and testers to understand that we’re investing in testers’ training and development, and that we also encourage customers to communicate with testers directly.
Some of the work we’re doing in this area isn’t visible to everyone yet, since we’re still refining it. But as we roll out more features that depend on human judgment, training and feedback become increasingly important. So if you’re a tester, I think you’ll be seeing some improvements from us in the coming months. And if you rely on testers’ judgment about your product, you’ll benefit too.