How can you reduce risk when implementing agile development in your SDLC?
Accidents happen — especially with the pressure for increased output and getting to market more quickly. A software update or patch can have an unforeseen impact on existing software.
For example, National Public Radio recently released an update to its NPR One Android app. The update was apparently considered so minor that it merited only a point-release version number. But it was anything but minor to some of NPR’s famously fanatical (if usually low-key and polite) listener base.
One listener downgraded her 5-star Google Play review, in which she had called it “THE BEST APP” (her emphasis) and noted “I use this app all the time, and love it,” to one star. She commented: “It’s currently crashing every time I open it…it’s not useable as is.”
Another 1-star reviewer noted the update wouldn’t load on her Google Pixel, leaving her without access to the app. A third loyal listener gave the app three stars despite her complaints: “Worked great until a recent update. Now it stalls or just plum won’t play. Requires constant babysitting. Unfortunate because the content is great.”
Of course, we’re not trying to pick on NPR here (and we complain because we love). As professionals developing a complex app, they likely run a full battery of automated tests. Still, this example shows that extensive unit, integration, and automated functional tests will catch some issues prompted by small changes — but not all of them. The pace of digital transformation and the variety of devices in your user community make it harder to automate tests that catch every problem.
Even small releases require app testing on a phone, in the real world.
Your software may have worked on cutting-edge platforms when it was released, but with the pace of technological innovation you need to make sure it remains supported and usable.
Even if you’re not changing your software much, the runtime environment shifts beneath your feet. Browsers change, operating systems change, and the third-party software you link to changes. Any of these evolutions can create issues for your software.
Microsoft, for instance, might not have anticipated that an Apple OS update would fail to play nicely with its own Outlook app. Yet consumer frustration was evident in the backlash CNet surveyed after Apple’s release: "Outlook app is crashing after iOS 9 update. Had to uninstall and reinstall the app to get it to work again."
Suppose you have to do a maintenance release anyway to clear up some ongoing issues. To support your software’s longevity, take the opportunity to test thoroughly on the latest operating systems and browsers — or even on pre-release versions of the operating systems your users care about. That way you’re ahead of the game, not behind.
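As a rough sketch of what planning that coverage can look like, the snippet below enumerates every OS/browser pairing a maintenance release might smoke-test, including pre-release platform versions. The platform names and versions here are illustrative assumptions, not a recommendation for any specific toolchain.

```python
from itertools import product

# Hypothetical coverage matrix for a maintenance release.
# Versions are illustrative; substitute the platforms your users care about,
# including beta/pre-release OS builds so you stay ahead of the game.
operating_systems = ["Android 13", "Android 14 (beta)", "iOS 16", "iOS 17 (beta)"]
browsers = ["Chrome 115", "Firefox 116", "Safari 16.5"]

def build_test_matrix(oses, browsers):
    """Return every OS/browser pair a smoke-test pass should cover."""
    return [(os_name, browser) for os_name, browser in product(oses, browsers)]

matrix = build_test_matrix(operating_systems, browsers)
print(f"{len(matrix)} configurations to smoke-test")
for os_name, browser in matrix:
    print(f"  {os_name} / {browser}")
```

Even a simple enumeration like this makes the cost of "test everywhere" visible — four operating systems times three browsers is already twelve configurations, which is exactly the kind of fan-out crowdtesters can absorb.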
Organizations today are investing more than ever in quality assurance. Some 31% of budgets were allocated to QA and testing in 2016, and that allocation is expected to grow to 39% by 2018. Yet much of that investment goes to testing software in the pipeline for a new release. Attention to app update testing gets overshadowed, much like the older sibling in the immediate wake of a new baby’s arrival.
Yet your current customers, even as they anticipate your company’s next big innovation, remain loyal to what they already know and love. A 2017 study, for instance, found that while updates attract new consumers, existing consumers were more likely to be alienated and lower their rating of the app.
At the same time, continuous testing can help you keep a finger on the pulse of your existing software. Keep your customers loyal to your app by making sure the software doesn’t start to show its age. Regrettably, when customers see the fit and finish of your app starting to fail, they often won’t speak up. They simply leave, and you lose revenue.
Your development team may think it already has systems in place to flag potential problems. But telemetry from production systems typically logs only exceptions, helping you spot and address system problems as they become apparent.
In the UI, though, customers can hit problems that error logging won’t report. Misaligned forms, broken fonts, out-of-date images, and mismatched icons all matter to your users, and automated tests simply can’t gauge them. When an app becomes sluggish, you don’t always know. Crowdtesting, on the other hand, offers a taste of real human reactions to changes — big or small.
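To make that blind spot concrete, here is a minimal sketch using Python’s standard `logging` module (the `render_form` routine and its pixel offset are hypothetical stand-ins for real UI code). A crash gets logged; a visibly broken layout sails through telemetry without a trace.

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("telemetry")

def render_form(alignment_offset_px: int) -> str:
    """Hypothetical UI routine: a bad offset misaligns the form but raises nothing."""
    return f"<form style='margin-left:{alignment_offset_px}px'>"

def telemetry_wrapper(func, *args):
    """Log unhandled exceptions the way production telemetry typically does."""
    try:
        return func(*args)
    except Exception:
        log.exception("unhandled exception in %s", func.__name__)
        raise

# A crash shows up in the logs...
try:
    telemetry_wrapper(lambda: 1 / 0)
except ZeroDivisionError:
    pass

# ...but a visibly broken layout "succeeds" as far as logging is concerned.
html = telemetry_wrapper(render_form, 400)  # badly misaligned; zero log entries
```

Only a human looking at the rendered screen notices that a 400-pixel offset is wrong — which is precisely the gap crowdtesters fill.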
In a landscape that demands quality at the fastest possible pace, development and QA managers might see further testing as an unnecessary slowdown — and an unreasonably sized headache — but this step doesn’t need to be painful. With crowdtesters each focusing on different devices, operating systems, browsers, and more, you can stay abreast of the latest technologies and tools without detracting from your effort to stake out or retain market share.
We’re aware that testing software that’s already in production can be tedious and time-consuming. Nevertheless, it’s essential to the ongoing success of your software and supports product quality (and, by extension, customer satisfaction). That’s why test IO offers your organization ready access to talented, clever human testers who can provide insights across all devices, OSs, and integrations, along with detailed reports that help your team ensure continued software quality in the wake of changes — regardless of scope or scale.