Real people, real devices, on-demand.
I wrote an article for CMSWire built on the self-consciously clickbaity premise that crowdtesting is like sex, for product quality. Rather than summarize the sex analogy here on our blog, I’ll let you read it in its original glory.
To atone for the clickbaitiness of that article, let me summarize it in engineer-friendly terms here.
Software engineers are familiar with the concept of an anti-pattern: a commonly used solution to a problem that ends up making the software worse. Some anti-patterns are hard to spot when you’re using the product, but painfully obvious if you’re dealing with the code.
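That code-visible case is easiest to see with a toy sketch. This example is mine, not from the CMSWire article; the function name and scenario are hypothetical, chosen to show one classic anti-pattern — swallowing errors so the caller can’t tell success from failure:

```python
# Hypothetical illustration of an anti-pattern: error swallowing.
# In the running product this looks like nothing; in the code it's glaring.

def parse_age(value):
    """Parse a user-supplied age, hiding any failure (the anti-pattern)."""
    try:
        return int(value)
    except ValueError:
        return 0  # anti-pattern: bad input silently becomes a valid-looking age

# From the outside, a real zero and garbage input are indistinguishable:
print(parse_age("0"))             # 0
print(parse_age("not a number"))  # also 0 -- the failure is invisible
```

A user just sees an occasional wrong “0” and shrugs; a reviewer reading the `except` branch spots the problem immediately.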
Other anti-patterns are visible to testers, and therefore to users.
And one of the benefits of having smart testers test lots of different pieces of software is that they spot anti-patterns. We don’t know what went on in the code, but we see the common effects.
Smart testers zero in on these kinds of issues because, as one of them put it, “From my point of view it’s great, as I’ll get paid for the same issues” over and over.
Software QA testing of this kind is not mind-numbing clickwork, like running through scripted test cases. It’s skilled work that requires experience. Moreover, it’s the kind of exploratory testing that in-house testers typically do less successfully, because they don’t test as many applications.
This is one reason crowdtesting works.