You should know not only who is testing your apps, but how they are testing them. Instead of speaking for our testers, we decided to ask them directly: What are the first things they do when they open an app? Where do they focus their attention depending on the type of app? What bugs do they expect and look for immediately? What edge cases do they try in order to break functionality? How do they decide what counts as a bug? Three of our best testers, Zorica, Nikola, and Charlie, took the time to give us some insight into what they think about as they test eCommerce applications.
How Zorica Tests
The process of testing an app starts even before launching it for the first time. Usually, I delete all unnecessary apps on my phone and clean it up before downloading an app for testing, to avoid breakage caused by other apps running in the background. After a successful download and installation, I pay close attention to the launch process; sometimes it gives clues about where to look first. Once the app launches, I scroll around to get to know the product and explore it a bit.
Most of the bugs I find occur in the navigational elements (navigation menu, buttons, etc.), so that is where I start testing. After that, I test according to the recommendations in the test's feature list. Once I finish that, I like to do some extra work and combine steps to see if the app can handle uncommon approaches. For instance, if an app allows adding family members to a profile, I try to add more than 5 kids. Why? My grandma gave birth to 16 kids. Those things happen even today, I am sure of it. Or, if an app allows downloading PDF files, I try downloading them and then navigating to the shopping bag. Sometimes those combined steps manage to crash the app.
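Zorica's "more than 5 kids" check is a classic boundary-value test: push a feature past the size its designers probably had in mind and watch what happens. As a rough illustration of the same idea in automated form, here is a minimal Playwright sketch; the URL, selectors, and the 16-member loop are all hypothetical, and a real app would need its own locators:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical boundary-value test: keep adding family members past the
// count the UI was presumably designed for, and watch for errors.
test('profile survives an unusually large family', async ({ page }) => {
  await page.goto('https://shop.example.com/account/family'); // hypothetical URL

  for (let i = 1; i <= 16; i++) {
    await page.getByRole('button', { name: 'Add family member' }).click();
    await page.getByLabel('Name').fill(`Kid ${i}`);
    await page.getByRole('button', { name: 'Save' }).click();
    // The app should either save the member or show a clear limit message,
    // never crash or render a generic error page.
    await expect(page.getByText('Something went wrong')).toHaveCount(0);
  }
});
```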
Deciding what is a bug is probably the trickiest part of our work. The tester needs to check many things before filing a bug report. For instance, the design can trick a tester who is unwilling to explore the app in depth. That is why understanding the product is crucial: once you know what is intentional design and what is genuinely broken, you've done half of the job. For visual and content bugs, things are much easier. There are rules to follow when classifying those bugs, so a tester can't get them wrong if they follow the rules.
Once I've finished my bug report and submitted it for review, I like to revisit the bugs I found before the test cycle ends. Sometimes bugs are inconsistent, and I prefer to be the one to inform the TL if I can't reproduce one anymore. Inconsistent bugs happen often, and it is better if we are the ones to notice them. The information that something changed between 3 am and 3 pm is valuable for the customers; maybe they deployed an update during that time. As the test cycle comes to an end, I take one more look at the app to see if I missed anything that could be reported. Those 11:55 pm bugs are my favorites.
How Nikola Tests
First, I go to the product overview page to check for sorting bugs (sort by price, rating, etc.). After that, I go to the product detail page (PDP) and try the add-to-cart function, while also checking whether the image can be zoomed (where possible), whether a user can see reviews (if there are reviews on the PDP), and so on. Then I go to the cart to see if the user can change quantities and delete products from the cart, and after that I go through the checkout process. Once I've done that, I start again from the homepage to look for bugs: I register an account, check for navigation problems, etc.
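To make the sorting check concrete: here is a sketch of what a "sort by price" verification can look like when automated with Playwright. The URL, the sort control, the `.price` selector, and the price format are all assumptions; real product listings vary:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical check that "sort by price" actually orders the listing.
test('product list is sorted by ascending price', async ({ page }) => {
  await page.goto('https://shop.example.com/products'); // hypothetical URL
  await page.getByLabel('Sort by').selectOption('price-asc'); // assumed control

  // Assumed markup: each price is rendered like "$12.99" in a .price element.
  const priceTexts = await page.locator('.price').allTextContents();
  const prices = priceTexts.map((t) => parseFloat(t.replace(/[^0-9.]/g, '')));

  // The displayed order should already match a numerically sorted copy.
  const sorted = [...prices].sort((a, b) => a - b);
  expect(prices).toEqual(sorted);
});
```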
On airline websites and apps, I check whether the user can select a destination and dates and add passengers. Then it's on to the booking process: are the input fields working correctly, can users add baggage, can they select a seat on the plane map, and so on. On streaming websites and apps, I check that all videos play correctly; if there are ads, I check how the player behaves with an ad-block extension, try the video player's functions, etc. For edge cases, I always check the browser's back and forward buttons. On one website, clicking the back button blocked the whole booking process.
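Nikola's back-button bug is easy to probe for in any multi-step browser flow. Below is a minimal sketch of that edge case with Playwright; the flight-search URL, field labels, and button names are made up for illustration:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical edge case: use the browser's history buttons mid-booking
// and confirm the flow still works instead of dead-ending.
test('booking flow survives browser back/forward', async ({ page }) => {
  await page.goto('https://fly.example.com/search'); // hypothetical URL
  await page.getByLabel('Destination').fill('Berlin');
  await page.getByRole('button', { name: 'Search flights' }).click();

  await page.goBack();    // simulate a user backing out of the results
  await page.goForward(); // ...and returning to them

  // The results page should still be usable, not blocked or blank.
  await expect(
    page.getByRole('button', { name: 'Select flight' }).first()
  ).toBeVisible();
});
```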
How Charlie Tests
When I test an app, the first thing I do is turn on "Do not disturb" so I don't get any notifications in my screencast. Then, if I have to set up a proxy or anything else, I do so. When Account & Registration is in scope, that is where I start testing. I'm always hunting for functional bugs (visual and content ones I mostly let sit until I'm done with the entire feature... I love end-to-end feature testing). Then I proceed as Nikola described. Generally speaking, I focus my attention on Account & Registration and then on the entire order process (including the cart, and therefore the PDP as well), always looking for features that allow users to customize items, for example, getting a name added to a tool or necklace, or sending the product as a gift to a friend with a personal message.
Regarding edge cases, I never try to break functionality or force a crash. As a customer, my primary purpose is to purchase something, not to start processes over and over again, which is frustrating for any customer. However, I do explore the app's background behavior, as I regularly do with my personal apps, so I kill the app from time to time to reset it, but that's all. By the way, I try the same approach from different personas (like UX designers do... or should), which allows me to test the same features from different perspectives many times.
As for how I decide what is and isn't a bug, that depends on many factors, but mostly on the information I can gather while exploring the entire app: knowing the product well and doing some cross-comparison among products, for instance. The easiest way, though, is to see whether one of my personas got stuck at some point on the "happy path" without knowing what to do next. If so, I've got a relevant bug.
Crowdtesters
As you can see, there are many similarities and differences between their answers. That is because our crowd is made up of individuals: they are all unique and have their own ways of doing things. These different perspectives and focuses create the unbiased testing that companies that use crowdtesting know and love. If you are interested in seeing how these testers can apply their skills to find bugs in your application, please reach out for a demo.