Insufficient mobile app testing and the bugs it lets through are, above all, a matter of money lost by businesses. According to this report (although somewhat outdated, it still illustrates the scale of the losses), the cost of software errors in the US has grown to $2.41 trillion, while accumulated technical debt has risen to $1.52 trillion.
Moreover, considering the potential multi-million dollar fines that non-compliance with GDPR, CCPA, and other policies can trigger, it becomes clear why mobile software testing is not just a single step: it must be integrated into the entire SDLC so that the tech team catches critical bugs before release. Below, we share our best mobile testing practices and describe the most effective ways to integrate them into the workflow.
Types of Mobile App Testing
In recent years, the approach to mobile software testing has changed dramatically. Today, it's not enough to simply run a set of tests before release; testing must be woven into the entire SDLC. Below, we'll look at the types of testing we use in our projects.
Manual testing
Manual testing remains the primary way tech teams implement app quality assurance. It involves human experts who evaluate the product from the end user's perspective (often without predefined scenarios), which allows them to quickly identify issues that significantly degrade the final product's user experience: overly confusing navigation, overloaded screens, ambiguous wording, difficulties with data entry, and so on.
Automated testing
As for automated mobile testing, this approach speeds up the verification of functionality that is too resource-intensive to check manually: scenarios with repetitive tasks, large volumes of input data, frequent releases, or simply tight testing timelines. Automated tests work best when wired into continuous integration so that each update is verified before it reaches production.
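To make this concrete, here's a minimal sketch of what such an automated check can look like at the unit level: a JUnit 5 parameterized test in Kotlin that runs one assertion against many input combinations. `PriceFormatter` and its logic are hypothetical, purely for illustration:

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.params.ParameterizedTest
import org.junit.jupiter.params.provider.CsvSource

// Hypothetical unit under test: formats a price in cents as a display string.
class PriceFormatter {
    fun format(cents: Long): String =
        "\$${cents / 100}.${(cents % 100).toString().padStart(2, '0')}"
}

class PriceFormatterTest {
    // One parameterized test covers many input combinations: the kind of
    // repetitive, data-heavy checking that is impractical to do by hand.
    @ParameterizedTest
    @CsvSource("0, $0.00", "99, $0.99", "100, $1.00", "123456, $1234.56")
    fun `formats cents as dollars`(cents: Long, expected: String) {
        assertEquals(expected, PriceFormatter().format(cents))
    }
}
```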
Regression testing
Whenever developers change software code (even minimally), there's a risk that existing functionality will break. This is where regression testing comes in. It can combine automated suites with manual practices: such a hybrid approach enables timely detection and fixing of bugs in both routine and non-obvious scenarios predefined in test case management tools (like TestRail).
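As a hedged illustration of the idea, here's a small Kotlin sketch of a regression test that "pins" a previously fixed defect so it cannot silently return; the checkout logic and tag name are hypothetical:

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Tag
import org.junit.jupiter.api.Test

// Hypothetical regression test: it pins down a previously reported rounding
// bug in discount calculation, so any change that reintroduces it fails CI.
class CheckoutRegressionTest {

    // Hypothetical fixed logic: integer math that once dropped a cent.
    private fun discountedTotal(cents: Long, discountPercent: Int): Long =
        cents * (100 - discountPercent) / 100

    @Test
    @Tag("regression") // lets CI run the regression suite as a separate layer
    fun `discount rounding stays correct (previously fixed bug)`() {
        assertEquals(89L, discountedTotal(99L, 10))
    }
}
```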
Performance testing
Modern mobile users aren't willing to wait more than a few seconds for an app screen to load, and slow performance often leads them to delete the app altogether. This is why performance testing is so essential. It should simulate real-world conditions across varying connection speeds, user devices, and operating systems (including legacy ones), and it also includes assessing the app's stability under different numbers of concurrent users.
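For Android startup performance specifically, one option (shown here only as a sketch) is Jetpack Macrobenchmark; the package name below is a placeholder for the app under test:

```kotlin
import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import org.junit.Rule
import org.junit.Test

// Sketch of a cold-startup benchmark with Jetpack Macrobenchmark.
class StartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun coldStartup() = benchmarkRule.measureRepeated(
        packageName = "com.example.app",   // placeholder for the real app id
        metrics = listOf(StartupTimingMetric()),
        iterations = 5,
        startupMode = StartupMode.COLD
    ) {
        pressHome()
        startActivityAndWait() // launches the default activity, waits for first frame
    }
}
```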
Security testing
Given increasingly stringent regulations on the use of private data in the digital environment, any mobile app that even indirectly collects sensitive information must comply with specific standards; failure to do so risks both financial and reputational losses. For apps in highly regulated sectors (such as banking and healthcare), businesses often commission third-party audits from world-renowned pentesting organizations to strengthen their credibility with users. In other cases, having seasoned security testing specialists on the technical team who can simulate potential attacks and fix vulnerabilities is usually sufficient.
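One narrow example of what security testing verifies on the client side is TLS certificate pinning. Below is a minimal OkHttp sketch, with a placeholder host and pin hash, of the kind of hardening a security review would check for:

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Sketch: certificate pinning with OkHttp. A security test would verify that
// requests to the API host fail when the server's certificate does not match
// the pinned hash. Host and hash below are placeholders, not real values.
val pinner = CertificatePinner.Builder()
    .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .build()

val client = OkHttpClient.Builder()
    .certificatePinner(pinner)
    .build()
```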
Choosing the Right Testing Tools
A mistake in choosing testing tools usually results in either unnecessary manual-testing costs or poor overall product quality. Based on our practical experience, we've settled on the following tools.
Appium
It’s a nearly one-size-fits-all solution that we frequently use when testing cross-platform applications. We particularly appreciate its flexibility: it suits complex enterprise-level apps with extensive functionality that need the same test suites run across multiple platforms in parallel (for example, to check a large number of API interaction scenarios, complex authorization flows, integrations with external systems, etc.).
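For a sense of how an Appium session is set up, here's a minimal Kotlin sketch using the Appium Java client (v8+) against a local Appium 2.x server; the device name, app path, and element id are placeholders:

```kotlin
import io.appium.java_client.AppiumBy
import io.appium.java_client.android.AndroidDriver
import io.appium.java_client.android.options.UiAutomator2Options
import java.net.URL

fun main() {
    // Capabilities for an Android session; values are placeholders.
    val options = UiAutomator2Options()
        .setDeviceName("Android Emulator")
        .setApp("/path/to/app.apk")

    // Appium 2.x serves on port 4723 by default.
    val driver = AndroidDriver(URL("http://127.0.0.1:4723"), options)
    try {
        // Locate the login button by its accessibility id and tap it.
        driver.findElement(AppiumBy.accessibilityId("login_button")).click()
    } finally {
        driver.quit()
    }
}
```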
Espresso
It’s Google's own open-source UI testing framework for Android. Its main advantages are high test execution speed and seamless integration with Android Studio. In our practice, we use it on Android projects with frequent releases, where there's a pressing need to test new features ASAP.
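A minimal Espresso test in Kotlin might look like the sketch below; `MainActivity` and the view ids belong to a hypothetical app:

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import org.junit.Rule
import org.junit.Test
// MainActivity and R are placeholders: import them from the app under test.

class LoginFlowTest {
    @get:Rule
    val activityRule = ActivityScenarioRule(MainActivity::class.java)

    @Test
    fun tappingLoginShowsDashboard() {
        // Tap the login button, then assert the dashboard becomes visible.
        onView(withId(R.id.login_button)).perform(click())
        onView(withId(R.id.dashboard)).check(matches(isDisplayed()))
    }
}
```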
XCUITest
It is the default choice for iOS app testing, as the framework was developed by Apple itself. It integrates with Xcode, enabling end-to-end testing of user interfaces and app behavior in a highly realistic environment. This tool is especially useful for projects targeting discerning audiences, where a flawless UI and smooth animations are part of the product's USP.
Cloud testing platforms
In addition to the aforementioned testing frameworks (Appium, Espresso, XCUITest), we frequently use cloud testing platforms (particularly BrowserStack, AWS Device Farm, and Firebase Test Lab), for example in projects with a global audience. The main benefit of this approach is comprehensive device and OS coverage without having to purchase and maintain a physical device fleet.
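As an illustration (assuming BrowserStack's current W3C capability format; check the provider's docs before relying on it), the same Appium driver shown earlier can be pointed at a cloud device instead of local hardware:

```kotlin
import io.appium.java_client.android.AndroidDriver
import io.appium.java_client.android.options.UiAutomator2Options
import java.net.URL

// Sketch: running an Appium test on a BrowserStack cloud device.
// Credentials, app id, and device names below are placeholders.
fun cloudDriver(): AndroidDriver {
    val options = UiAutomator2Options()
        .setDeviceName("Samsung Galaxy S23")
        .setPlatformVersion("13.0")
        .setApp("bs://<app-id>")  // an app previously uploaded to BrowserStack

    options.setCapability("bstack:options", mapOf(
        "userName" to System.getenv("BROWSERSTACK_USERNAME"),
        "accessKey" to System.getenv("BROWSERSTACK_ACCESS_KEY")
    ))

    return AndroidDriver(URL("https://hub-cloud.browserstack.com/wd/hub"), options)
}
```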
Best Practices for Effective Mobile App Testing
Our main practice is developing a release readiness checklist for each build. It covers automated test coverage of key user scenarios, load test results, security reports, coverage of target user devices, and usability testing. Along with that, we follow these guidelines on mobile development projects:
- Test early and often. Our QA experts join the project long before the first line of code is written. They work closely with software engineers and designers, which lets us define acceptance criteria and build test checklists in parallel with prototyping. To prevent API integration errors, we describe contracts and generate consumer-driven contract tests from the very beginning. Finally, for frontend testing, we take a test-first approach to acceptance: documenting criteria up front and automating the checks in the pipeline.
- Cover multiple devices and OS versions. Our device pool for testing is always based on analytics: we collect real data about the target audience of a specific project (store analytics, regional distribution, in-app telemetry) and use it to build a prioritized list of devices and OS versions. For each Android release, we typically select the top five Android models and the top five OS versions in each target region. For iOS, we compile a list of the four to five most commercially significant recent devices and add at least one older device for regression testing.
- Use real devices and emulators. Emulators let us run smoke and unit tests as quickly as possible, while real devices are used to test sensor behavior, power consumption, and thermal throttling. We also sometimes create special builds with extended telemetry and secure debug hooks, which give us detailed CPU, battery, thermal, and frame-rate metrics while the app is running.
- Automate regression and performance tests. Regression automation should follow the principle of "easy to maintain, easy to run," so we always classify automated tests by execution cost and stability (see the tag-based sketch after this list). We also keep a Performance Debt Ledger, a metrics log that records thresholds for FPS, latency, memory leak points, and power limits.
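Here's what that classification can look like in practice: a sketch of a Gradle (Kotlin DSL) configuration that runs only cheap, stable test layers on every commit. The tag names are our illustrative convention, not a standard:

```kotlin
// build.gradle.kts (sketch): assumes JUnit 5 tests carry tags such as
// "smoke", "regression", and "flaky", as in the @Tag example shown earlier.
tasks.withType<Test>().configureEach {
    useJUnitPlatform {
        // Cheap, stable tests run on every commit; expensive or unstable
        // ones are excluded here and run in a separate scheduled job.
        includeTags("smoke", "regression")
        excludeTags("flaky")
    }
}
```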
If you'd like these and many other mobile app testing best practices implemented in your project, feel free to delegate this process to us.
Integration of Testing into Development Workflow
We'd like to dedicate this section to properly integrating testing practices into a mobile app development workflow (we tried many approaches before settling on the strategies below).
So, our mobile app QA process starts with properly built CI/CD practices, and we divide testing into layers:
- lint/unit testing;
- component/integration testing;
- device/smoke testing;
- staged cloud regression testing;
- performance/security testing;
- canary rollout.
In terms of the tech stack, we most often rely on GitHub Actions or Bitrise (for accelerated builds), Fastlane (for code delivery), and ArgoCD or Spinnaker (for release orchestration). Tests run in parallel, sometimes combined with build caching. As for releases, we use staged rollouts and feature flags with automatic rollback triggers that fire when the error budget is exceeded; a purely illustrative sketch of that last idea follows below.
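To clarify the rollback mechanism, here's a Kotlin sketch of a feature-flag guard with an error-budget trigger; every name in it (`FlagProvider`, `ErrorBudget`, and so on) is hypothetical, not an API of the tools mentioned above:

```kotlin
// Purely illustrative: the idea behind "feature flags with automatic rollback".
interface FlagProvider {
    fun isEnabled(flag: String): Boolean
    fun disable(flag: String)
}

class ErrorBudget(private val maxErrorRate: Double) {
    private var requests = 0L
    private var errors = 0L

    fun record(success: Boolean) {
        requests++
        if (!success) errors++
    }

    // Exceeded once enough traffic has been observed and the error rate
    // crosses the configured threshold.
    fun exceeded(): Boolean =
        requests > 100 && errors.toDouble() / requests > maxErrorRate
}

// If the newly rolled-out code path blows its error budget, the flag is
// switched off and traffic falls back to the old, stable path.
fun guardRollout(flags: FlagProvider, budget: ErrorBudget, flag: String) {
    if (flags.isEnabled(flag) && budget.exceeded()) flags.disable(flag)
}
```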
We also integrate QA closely into Agile teams, where QA specialists take part in sprint planning and refinement. For exploratory testing, we organize mob testing and pair exploratory sessions, which yield precise UX insights that are impossible to capture through automation. Another practice we rely on is Triage Poker: once a week, a group of developers, QA engineers, and the Product Manager reviews new bugs, assesses their potential business impact, and prioritizes them.
How WEZOM Ensures Quality Through Rigorous Testing
At WEZOM, we assemble a dedicated QA team for each project so that testing runs in parallel with development rather than as a separate (and sometimes final) step. This approach allows us to find and fix the vast majority of bugs before the product is released. Our QA specialists also always work closely with analysts and developers, which helps us verify both the code quality and the business logic of the mobile solution.
Another important point: we never get stuck on the manual vs. automated testing dilemma. In most projects, we practice both approaches to a) reduce time to market and save the client's budget; b) cover as many test scenarios and input data combinations as possible; and c) catch bugs that can only be detected through manual testing.
At the same time, we avoid universal testing templates, replacing them with customized scenarios that fully match the specifics of a particular product, whether that's a sophisticated architecture, numerous complex integrations with third-party (and sometimes legacy) systems, or a non-trivial UI. This customized approach ensures thorough test coverage of critical code at the lowest possible budget, reduces the risk of unpleasant surprises in app behavior and design after updates, and significantly speeds up the release cycle.
By the way, if you're interested in delving deeper into our approach to software testing as a separate service, we recommend reviewing a case study of a project that delivered outstanding business results for our client:
- 100% test coverage of critical project functionality;
- Over 500 completed test cases;
- Regression testing time reduced by 50% through the implementation of automated tests;
- Over 100 user scenarios aimed at optimizing and improving the user experience;
- 98% successful regression tests, confirming system stability.
You can learn more about this case study here.
Conclusion
To summarize, we can confidently say that a well-thought-out testing strategy, seamlessly integrated into the mobile development process, significantly saves businesses time and money and reduces user churn. Therefore, in a reality where just one week of a low-quality app's presence in app stores can easily destroy months of marketing efforts, developing such a strategy is not a privilege but a must-have. So, if you would like to receive such a customized strategy for your project or fully delegate its implementation to seasoned experts, please contact us.