We're continually improving and re-engineering how testers get selected and asked to join a test cycle. Our goal is to make sure you get the best coverage, fast turn-around, and the right testers for your particular product, and that invitations to tests are distributed fairly to testers.
How do we group testers and figure out which ones fit which tests? To start, we classify testers by many characteristics: geography, experience, devices, how recently they last tested, and other criteria. We use these segments, dynamic lists of testers, to power our invitations.
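A dynamic segment can be thought of as a filter over tester attributes. Here's a minimal sketch of that idea; the field names and criteria are illustrative assumptions, not test IO's actual schema:

```python
from dataclasses import dataclass


# Hypothetical tester record; the fields are invented for illustration.
@dataclass
class Tester:
    name: str
    country: str
    devices: list
    tests_completed: int


def build_segment(testers, predicate):
    """Return a dynamic segment: the subset of testers matching a predicate."""
    return [t for t in testers if predicate(t)]


testers = [
    Tester("alice", "DE", ["iPhone 5/iOS 8"], 40),
    Tester("bob", "US", ["Pixel 7/Android 14"], 3),
    Tester("carol", "US", ["iPhone 5/iOS 8"], 12),
]

# Example segment: experienced US-based testers (>= 10 completed tests).
experienced_us = build_segment(
    testers, lambda t: t.country == "US" and t.tests_completed >= 10
)
print([t.name for t in experienced_us])  # -> ['carol']
```

Because a segment is just a stored predicate rather than a fixed list, it stays current as testers gain experience or add devices.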
Primarily, we place testers into segments based on their capabilities. Using data such as the types of bugs reported, the bug acceptance rate, and the tests participated in, our machine learning system determines whether a tester is better suited to rapid tests or focus tests, desktop or mobile, or any number of other specialties.
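As a toy illustration of that kind of classification, the sketch below scores a tester's affinity for rapid versus focus tests from activity data. The feature names and weights are invented for this example; the real system is a trained machine learning model, not hand-tuned rules:

```python
def classify(profile):
    """Toy classifier: label a tester 'rapid' or 'focus' from simple
    weighted evidence. Weights and features are illustrative assumptions."""
    tests = max(profile["tests_joined"], 1)  # avoid division by zero
    rapid_score = (
        2.0 * profile["acceptance_rate"]            # accepted-bug rate
        + 1.0 * profile["rapid_tests_joined"] / tests
    )
    focus_score = (
        2.0 * profile["functional_bug_ratio"]       # share of deep functional bugs
        + 1.0 * profile["focus_tests_joined"] / tests
    )
    return "rapid" if rapid_score >= focus_score else "focus"


profile = {
    "acceptance_rate": 0.9,
    "rapid_tests_joined": 18,
    "focus_tests_joined": 2,
    "tests_joined": 20,
    "functional_bug_ratio": 0.2,
}
print(classify(profile))  # -> rapid
```

The point is simply that observable behavior (what a tester reports, and where) can be turned into features that predict where they'll do their best work.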
The test IO platform invites testers in waves. We do this to make sure that the test cycle has enough of the right devices or other requested characteristics. These waves of invitations are also set up to fill uncommon devices first. For example, if a test cycle includes a relatively rare device/OS combination like iPhone 5 running iOS 8, we'll send out invitations first to testers who have this device listed. If they join this test cycle, they can only submit bugs on that device.
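The rarest-device-first idea above can be sketched as a simple planning step: count how many testers own each requested device/OS combination, then invite owners of the least-common combinations in the earliest waves. The data structures here are assumptions for illustration:

```python
def invitation_waves(required_devices, testers):
    """Plan invitation waves, rarest requested device first.

    `testers` maps tester name -> set of owned device/OS combos.
    Returns (device, matching tester names) pairs, rarest device first.
    """
    # Fewer owners of a device means it is harder to fill, so it goes first.
    owners = {
        device: [name for name, devices in testers.items() if device in devices]
        for device in required_devices
    }
    return sorted(owners.items(), key=lambda item: len(item[1]))


testers = {
    "alice": {"iPhone 5/iOS 8", "Pixel 7/Android 14"},
    "bob": {"Pixel 7/Android 14"},
    "carol": {"Pixel 7/Android 14"},
}

waves = invitation_waves(["Pixel 7/Android 14", "iPhone 5/iOS 8"], testers)
print(waves[0][0])  # rarest combo is invited first -> iPhone 5/iOS 8
```

Front-loading the scarce devices means a test cycle is far less likely to end with common configurations covered but the rare ones still empty.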
We also use invitations to make sure that in addition to our experienced testers, testers new to the test IO platform ("greenhorns") get invited to a broad range of tests. We do this so our newer testers have opportunities to gain more experience. To help test IO's crowdtesters improve their software testing skills, greenhorns also get extra feedback from our experienced test leads.