
I’ve been testing for over six years now. The early days were very much desktop-based QA, but as the uptake of mobile browsing and the demand for responsive websites and apps has grown, I’ve spent an increasing amount of time testing on mobile devices.

I believe that a good tester/QA engineer can test pretty much anything, so moving into mobile QA and app testing didn’t fill me with fear. But if it does fill you with fear, remember that a mobile device isn’t just a black box: don’t treat it like an enemy; instead, try to understand what it is you’re building for.

Here are some guidelines to help with the mysteries of QA for mobile, and how we do it at Made by Many.

Get involved early

QA should be involved at the very beginning of a build: it is crucial to understand what problems the business is facing so that we as testers can fully understand the solution and the nature of the product. We can then research the target market, which devices to test on using current analytics to give a best estimate as to the users’ devices, and which operating systems they are running. I usually go with the top five to ten devices with a range of OS’s to cover the testing activities on devices. The mobile market is awash with hundreds of devices, OS’s and hardware, and that’s why it’s important to set out early which ones you are going to be testing against – there is no way to test against them all. Similarly, ensure you have a range of sizes and screen resolutions – this is an area where issues can easily crop up.


With mobile testing it’s important to get back to basics. Trying to automate the process of tapping through an app can take a considerable amount of time and is, in my opinion, a waste of effort. When designs are constantly changing, tests and checks will keep breaking unless you have a dedicated person to constantly update the test scripts.

I’m by no means suggesting you skip automation altogether – but it can mean battling with tools for weeks just to replicate tapping a button. Mobile automation tools are immature compared to their web-based counterparts – probably four to five years behind.

I also find it useful to test the backend API the apps talk to, so I know that the data coming through is correct. Another useful technique is to run your mobile traffic through a proxy: join the same Wi-Fi network, start a proxy on your laptop, point your mobile at it, and all the data from your mobile will pass through your laptop. This lets you see every request your mobile is making – a great way to check performance and how much data is being used. For example, your device could be firing off requests constantly, but without inspecting its traffic you might never know this was happening.
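Once you can see the API traffic, it’s worth checking the payloads themselves, not just that requests succeed. Here’s a minimal sketch of that idea in Python – the endpoint shape, field names and types are purely illustrative, not from any real project:

```python
import json

# Hypothetical shape of one item in a response from the app's backend API.
# Field names and types here are illustrative assumptions.
REQUIRED_FIELDS = {"id": int, "title": str, "published": bool}

def validate_item(payload: dict) -> list:
    """Return a list of problems found in one API item (empty = OK)."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return problems

# A simulated response body, as you might capture it through a proxy.
raw = '{"id": 7, "title": "Hello", "published": "yes"}'
print(validate_item(json.loads(raw)))  # -> ['published: expected bool, got str']
```

A few checks like this, run against captured responses, catch the subtle type and missing-field bugs that an app might silently swallow or crash on.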

Invest in a crash tracking tool: a personal favourite of mine is Fabric (formerly Crashlytics), which is incredibly useful for providing developers with the exact line of code where the crash is happening, along with providing replication steps in your bug tracker.

As I already mentioned, you can’t cover every device, but Genymotion is a great tool for filling gaps in your range with emulation, even though an emulator is never a one hundred percent true reflection of the actual device.

I also like to switch my personal device every few months. Doing so enables me to understand the one I’ve switched to, and jogs my memory for native device behaviour so that I don’t get too accustomed to a certain device or OS.


My favourite thing to do is to explore: tap in places you shouldn’t, and do things that no normal user of the app would do! Meanwhile, don’t just simulate network conditions: get out there and use the app on your travels. I do a daily train commute, so it’s easy to pull out a phone, fire up an app and see how it copes with differing conditions, such as dropping from full network coverage to none (tunnels), or how the app attempts to reconnect to the network. All this can expose a lot of bugs and usability issues which would be very difficult to replicate if you’re only throttling a network. It also allows for real-world scenarios, such as interruptions – a text message or push notification pulling the user out of the app.
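The reconnect behaviour mentioned above is exactly the kind of logic worth probing when coverage comes and goes. As a sketch of what a well-behaved app might do (the `fetch` function here is hypothetical, and the failure pattern is simulated), something like exponential backoff:

```python
import time

def fetch_with_retry(fetch, attempts=4, base_delay=0.01, sleep=time.sleep):
    """Call a (hypothetical) fetch() until it succeeds, backing off
    exponentially between attempts -- the kind of reconnect behaviour
    worth testing when a device drops in and out of coverage."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError as exc:
            last_error = exc
            sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...
    raise last_error

# Simulate a network that fails twice (a tunnel), then recovers.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("no network coverage")
    return "payload"

print(fetch_with_retry(flaky_fetch))  # -> payload
```

When you ride through a real tunnel, you’re effectively running this test for free: does the app recover on its own, or does it sit on a spinner until you force-quit it?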

Another technique I like to use is blink testing, first introduced by James Bach, who describes it as “an heuristic oracle”, offering the following definition: “What you do in blink testing is plunge yourself into an ocean of data – far too much data to comprehend. And then you comprehend it.” So when I’m testing an app I line up a minimum of five devices and execute the same test across all of them at the same time, thus exposing myself to more data than I can comprehend. The result is that I can quickly notice issues and differences in layout, images and transitions across the devices laid out. Try it for yourself — it’s a powerful way to test quickly.
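The same-test-everywhere idea can also be sketched in code. The example below is a toy: `capture_layout` is a hypothetical stand-in for driving a real device (in practice it might take a screenshot or dump the view hierarchy), and the device names and “layout signatures” are invented so the sketch runs on its own:

```python
from concurrent.futures import ThreadPoolExecutor

def capture_layout(device):
    """Hypothetical stand-in for capturing a device's rendered layout.
    Here it just returns a fake signature per (invented) device name."""
    fake_layouts = {
        "Pixel 2": "header|list|footer",
        "Galaxy S8": "header|list|footer",
        "iPhone 7": "header|list",       # footer missing -- a visible diff
        "Nexus 5X": "header|list|footer",
        "Moto G5": "header|list|footer",
    }
    return fake_layouts[device]

def blink_check(devices):
    """Run the same capture on every device at once and report any
    device whose layout differs from the most common one."""
    with ThreadPoolExecutor(max_workers=len(devices)) as pool:
        layouts = dict(zip(devices, pool.map(capture_layout, devices)))
    expected = max(set(layouts.values()), key=list(layouts.values()).count)
    return {d: l for d, l in layouts.items() if l != expected}

devices = ["Pixel 2", "Galaxy S8", "iPhone 7", "Nexus 5X", "Moto G5"]
print(blink_check(devices))  # -> {'iPhone 7': 'header|list'}
```

The human version of blink testing does this with your eyes rather than a diff, but the principle is identical: the outlier device jumps out precisely because everything else agrees.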

By the way, the most annoying part of mobile testing is often the devices themselves. It can break your flow and lose you time when a device runs out of power, so find a way to keep them charged. I use a multiple-device charging station to prevent this from happening (another issue is devs “borrowing” devices and then forgetting they have them, just when we most need them…).

Coaching developers

Speaking of developers, I also encourage them to check their own work on a device/OS they’re unfamiliar with. Most of the developers I work with have iOS devices. When it comes to developing a mobile app in, say, React Native, it’s important that the developer also understands or has some knowledge of native behaviour on Android. This can eliminate a lot of the to-ing and fro-ing between tester and developer when a bug is obvious in native behaviour, for example the “back” button on Android not working. I find it easier to sit with the developers at the start of a project and take them through what I would like them to check themselves before committing to QA. Doing this gives me more time to explore the app and hunt down the bugs.

App store submission

This is probably the most time-consuming part of mobile development and testing. I still have no idea what Apple’s acceptance criteria are, as they seem to change all the time – we’re in the hands of the Apple gods! However, there are some things you can do to alleviate the pain: describe what the app is and what its purpose is, with supporting images. If you are releasing new functionality, include videos to clearly outline the purpose again. Android is somewhat easier: you simply upload to the Play Store and the app is live. However, this will change soon, with Google introducing Play Store standards similar to Apple’s.

Mobile QA is less scary than you might think

A few of the big lessons I’ve learned:

• Don’t treat devices as a black box: gather as much incoming and outgoing data as you can using API and proxy tooling.

• Know your product and target market to ensure you are testing on the most-used devices.

• Coach developers not only to test on their own device, but also to run simple tests on an unfamiliar device. If they’re used to iOS, get them to use Android, and vice versa.

• Don’t waste time trying to set up extensive automation when the time is better spent exploring real-life usage.
