Your error monitoring is solid: 99%+ crash-free rates, catching critical issues before they spread. But users still complain about “buggy” experiences. Why? Performance degradations don’t throw exceptions. Eight-second cold starts, 20 frames per second (fps) scrolling, and memory leaks won’t trigger a crash report, but they will absolutely drive users away.
While finding and fixing errors remains important, mobile app performance becomes critical for companies serving hundreds of thousands of users. At this sort of scale, even minor performance issues can result in significant user churn, revenue loss and increased infrastructure costs. Conversely, better app performance drives traffic, improves user experience, increases revenue and decreases costs. This is where monitoring your users’ actual experience becomes crucial.
Why Is Real User Monitoring So Important?
Real user monitoring (RUM) can reveal performance bottlenecks that surface only at scale. Having this data is critical to making informed decisions about where to invest your optimization efforts and how to prioritize fixes based on actual user impact. Unlike controlled testing environments, RUM captures what’s actually happening on real devices in real-world conditions, such as slow network connections, background apps consuming memory, older hardware and diverse operating system versions.
Common Mobile Performance Problems
Before getting into solutions, however, it’s important to understand the reasons for suboptimal UX. These are the most common, real-world problems we see affecting mobile apps today, along with some advice on how to solve them.
App Startup Time Bloat
App launch times are recognized as a critical, industry-standard metric that directly affects UX.
Pay particular attention to cold startup duration. A hot launch occurs when the app is already initialized in memory and is brought to the foreground from the background. A cold start occurs after a fresh install, an app update, the first launch since a device reboot, or when the app was killed or evicted from memory in a previous session.
A cold start requires steps such as creating the first app process, initializing the main.swift or Kotlin code, making network calls to fetch real-time content, and doing the first rendering pass to populate an initial screen.
The cold startup time may start out low. However, as multiple teams work on an app, with each one adding extra network calls or CPU-blocking operations, the risk of app startup bloat grows.
Potential Solutions
Treat startup time as a guardrail metric to help prevent accumulating changes and features from pushing it beyond a set baseline. Having tools in place that detect regressions or increases in startup time can also prompt you to find workarounds, such as combining multiple network calls or looking for lazy data-loading opportunities.
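As a rough sketch of how such a guardrail measurement could work on Android, the Kotlin snippet below records an approximate cold-start duration and hands it to a reporting hook. The reportStartupMetric callback is hypothetical, a stand-in for whatever RUM or analytics pipeline you use; the goal is simply one consistent number per release that you can alert on.

```kotlin
import android.app.Application
import android.os.SystemClock

// Minimal cold-start guardrail sketch: record an approximate process-start
// timestamp in Application.onCreate and report the elapsed time once the
// first screen has drawn.
class MyApp : Application() {

    companion object {
        var appCreateMs: Long = 0L
            private set
    }

    override fun onCreate() {
        super.onCreate()
        // Approximation of process start; Process.getStartElapsedRealtime()
        // (API 24+) gives a more accurate starting point.
        appCreateMs = SystemClock.elapsedRealtime()
    }
}

// Call from the first Activity once the initial screen has rendered,
// for example from an OnPreDrawListener on the root view.
fun reportColdStart(reportStartupMetric: (name: String, valueMs: Long) -> Unit) {
    val durationMs = SystemClock.elapsedRealtime() - MyApp.appCreateMs
    reportStartupMetric("cold_start_ms", durationMs)
}
```

Alerting whenever this number drifts above your baseline gives each team a fast signal that their change added startup cost.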
Watch out for user configuration bloat. If you have a lot of information about a user, you may be able to get away with loading a few key details, such as default language and location, before rendering the first page, then loading the rest of the user info on a background thread for later use.
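One way to structure that split is sketched below in Kotlin with coroutines. The EssentialConfig, FullProfile and UserRepository types are hypothetical stand-ins for your own user service; the pattern of rendering on a small payload and fetching the rest off the critical path is what matters.

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

// Two-phase user configuration loading: fetch only what the first screen
// needs, render, then load the rest of the profile off the critical path.
data class EssentialConfig(val language: String, val region: String)
data class FullProfile(val preferences: Map<String, String>)

interface UserRepository {
    suspend fun fetchEssentialConfig(): EssentialConfig  // small, fast call
    suspend fun fetchFullProfile(): FullProfile          // larger payload
}

object ProfileCache {
    @Volatile private var profile: FullProfile? = null
    fun store(p: FullProfile) { profile = p }
    fun get(): FullProfile? = profile
}

class StartupLoader(
    private val repo: UserRepository,
    private val scope: CoroutineScope,
) {
    suspend fun loadForFirstScreen(render: (EssentialConfig) -> Unit) {
        // Block first render only on the essentials.
        val essentials = withContext(Dispatchers.IO) { repo.fetchEssentialConfig() }
        render(essentials)

        // Fetch the full profile in the background for later screens.
        scope.launch(Dispatchers.IO) {
            ProfileCache.store(repo.fetchFullProfile())
        }
    }
}
```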
Other opportunities will likely be more application-specific, although common issues arise around cache behavior. Consider what happens in an e-commerce application when someone abandons a shopping cart and returns to it later. If the cart isn’t available in a cache, the process of repopulating it can involve numerous database or microservices calls. Could those calls be combined?
Likewise, if your app uses third-party SDKs, perhaps to render a map view, do you need to load all the third-party SDKs at launch, or are there further opportunities for lazy loading? Perhaps you could load the map view SDK in the background or when the user first clicks on the map view, rather than on your main application thread.
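A sketch of that kind of deferral in Kotlin, using the standard lazy delegate, is shown below; HeavyMapSdk and MapClient are hypothetical placeholders for whatever vendor SDK you use.

```kotlin
import android.content.Context

// Defers a heavy third-party SDK until first use. The pattern (initialize on
// first access rather than in Application.onCreate) is the point.
interface MapClient {
    fun showMap()
}

object HeavyMapSdk {
    fun initialize(context: Context) { /* expensive native/asset setup */ }
    fun client(): MapClient = object : MapClient {
        override fun showMap() { /* render the map view */ }
    }
}

class MapFeature(private val appContext: Context) {

    // Kotlin's lazy delegate runs the initializer exactly once, the first time
    // mapSdk is accessed, and is thread-safe by default.
    private val mapSdk: MapClient by lazy {
        HeavyMapSdk.initialize(appContext)
        HeavyMapSdk.client()
    }

    // Called when the user first opens the map screen; app launch never pays this cost.
    fun onMapOpened() = mapSdk.showMap()
}
```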
These types of application-specific behaviors are good candidates for custom-labeled units of work, called “spans” in OpenTelemetry, so you can see multiple network calls as a single group with business relevance.
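For example, a shopping-cart restore could be wrapped in one custom span so its network calls appear as a single business-relevant group. The sketch below uses the OpenTelemetry Java/Kotlin API; the Cart type and fetch functions are hypothetical stand-ins for your own data layer.

```kotlin
import io.opentelemetry.api.GlobalOpenTelemetry

data class Cart(val items: List<String>, val prices: Map<String, Double>)

// Hypothetical data-layer calls standing in for your own repositories.
fun fetchCartItems(): List<String> = emptyList()
fun fetchPrices(items: List<String>): Map<String, Double> = emptyMap()

// Wraps the whole cart-restore flow in one custom span named "restore_cart",
// so the individual HTTP spans show up grouped under a business-level unit.
fun restoreCart(): Cart {
    val tracer = GlobalOpenTelemetry.getTracer("checkout")
    val span = tracer.spanBuilder("restore_cart").startSpan()
    val scope = span.makeCurrent()
    try {
        // Instrumented network calls made here become children of this span.
        val items = fetchCartItems()
        val prices = fetchPrices(items)
        span.setAttribute("cart.item_count", items.size.toLong())
        return Cart(items, prices)
    } finally {
        scope.close()
        span.end()
    }
}
```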
Likewise, if your application offers different user journeys or funnels, such as an onboarding tutorial for new users or deep linking to a specific page from an advertisement, it makes sense to separate them out if entirely different data is required for the different paths.
Network Issues
A common side effect of larger development teams working independently is that several components end up making simultaneous networking requests, which can hurt networking performance.
For very large teams, this may require adjustments to team structure. Consider forming a platform team that takes responsibility for the networking layer. Once established, the platform team can evaluate how concurrent requests affect performance. They might then implement a call-prioritization framework that restricts simultaneous requests, although precise network optimization strategies will vary significantly depending on the application’s functionality.
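As a rough illustration of what such a framework might look like, the Kotlin sketch below uses coroutine semaphores to give foreground requests a larger concurrency budget than background or prefetch traffic. The priorities and limits are illustrative, not recommendations for any particular app.

```kotlin
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

// Requests for the visible screen get a larger concurrency budget than
// background or prefetch traffic.
enum class RequestPriority { FOREGROUND, BACKGROUND }

class PrioritizedDispatcher(
    foregroundLimit: Int = 4,
    backgroundLimit: Int = 2,
) {
    private val foregroundPermits = Semaphore(foregroundLimit)
    private val backgroundPermits = Semaphore(backgroundLimit)

    suspend fun <T> execute(priority: RequestPriority, call: suspend () -> T): T {
        val permits = when (priority) {
            RequestPriority.FOREGROUND -> foregroundPermits
            RequestPriority.BACKGROUND -> backgroundPermits
        }
        // withPermit suspends (rather than blocking a thread) until a slot frees up.
        return permits.withPermit { call() }
    }
}
```

A platform team could route all outgoing requests through a layer like this and tune the limits based on RUM data.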
With networking, another thing to keep in mind is that while mobile infrastructure has improved greatly, it continues to lag behind mobile hardware. If your app is used by tens or hundreds of millions of users, some will experience poor network connectivity, and leveraging device metadata plays a key role in segmenting those outliers from your core metrics.
Potential Solution
One possible solution could be to switch away from HTTP/2, which can perform poorly in dynamic, lossy wireless networks, to an alternative networking stack such as Cronet or QUIC. Uber reported that switching protocols from HTTP/2 to QUIC over UDP brought a “reduction of 10% to 30% in tail-end latencies for HTTPS traffic at scale in [its] rider and driver apps.”
For a change as significant as this, you would want to do a canary deploy or use feature flags to compare the performance between the old and new networking stacks, before switching over completely. Also, keep in mind that switching to a less common networking stack may require manual instrumentation for distributed tracing purposes.
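One hedged way to structure that comparison is to hide the stack choice behind a flag-driven factory, so a canary cohort gets the new stack while everyone else stays on the baseline. In the Kotlin sketch below, FeatureFlags, Http2Stack, QuicStack and the flag name are all hypothetical; substitute your own flagging system and client implementations.

```kotlin
// Flag-driven selection between the existing HTTP/2 stack and a QUIC-capable
// stack, so the two cohorts can be compared in RUM before a full rollout.
interface HttpStack {
    suspend fun get(url: String): ByteArray
}

interface FeatureFlags {
    fun isEnabled(name: String): Boolean
}

class Http2Stack : HttpStack {
    override suspend fun get(url: String): ByteArray = ByteArray(0) // existing client
}

class QuicStack : HttpStack {
    override suspend fun get(url: String): ByteArray = ByteArray(0) // e.g. Cronet-backed
}

object NetworkStackProvider {
    @Volatile private var cached: HttpStack? = null

    // Resolve the stack once per process; the flag decides which cohort this
    // install belongs to, and feature code depends only on HttpStack.
    fun current(flags: FeatureFlags): HttpStack =
        cached ?: synchronized(this) {
            cached ?: buildStack(flags).also { cached = it }
        }

    private fun buildStack(flags: FeatureFlags): HttpStack =
        if (flags.isEnabled("quic_network_stack")) QuicStack() else Http2Stack()
}
```

Because every feature depends only on the shared interface, latency metrics can be compared per cohort without touching call sites.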
Animations and UI Rendering Performance
While the performance of modern mobile devices is astonishing, poor user interface (UI) performance remains another highly visible issue, especially with custom animations on older devices. Poor, janky UI affects how users feel about your application.
The exact approach to addressing these issues will vary. In a game, you may be less concerned with how long it takes to load a scene than with how much memory and CPU the scene consumes once loaded, as this affects rendered frames per second.
Potential Solution
Tools that help with debugging UIs include Reveal for iOS and Layout Inspector for Android. However, both add overhead, making animations and key transitions run slower and investigations harder. This is where RUM comes into play: identifying problematic areas for closer inspection in a development environment with these profiling tools.
Taking It Screen By Screen
Beyond startup time and cross-cutting concerns like networking, measuring and improving the performance of a mobile app is typically handled on a per-screen basis. Airbnb, for example, calculates its Page Performance Score to track these metrics and optimize across Android, iOS and web.
Once you see a visible performance issue, like a slow screen or a memory leak on a given page, the easiest way to debug it is to start from the mobile client. With distributed tracing, you can measure how long each function takes, then divide and conquer: see which calls are redundant on the mobile client and where you’ll need to improve the performance of the backend endpoints.
The underlying cause of a slow page is frequently not a mobile-only concern, but a combination of the way the mobile app and the backend work together. Distributed tracing shows network calls on both the client and server, so a slowdown originating in a particular microservice can be identified, the owning team contacted and the issue fixed.
One issue I’ve seen repeatedly is where user tokens are refreshed too frequently; some apps do this multiple times in a single page. This can be another good candidate for a custom span. If the service you use to get new tokens slows down by a significant percentage, say 20%, that can affect the entire application.
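A small sketch of that kind of span, again in Kotlin with the OpenTelemetry API, is shown below; the AuthClient interface, span name and attribute are illustrative. Wrapping the refresh call makes both its frequency per screen and the auth service’s latency visible in RUM.

```kotlin
import io.opentelemetry.api.GlobalOpenTelemetry

interface AuthClient {
    fun refreshToken(reason: String): String
}

// Decorator that wraps every token refresh in a custom span so RUM shows how
// often the refresh fires and how long the auth service takes.
class InstrumentedAuthClient(private val delegate: AuthClient) : AuthClient {
    private val tracer = GlobalOpenTelemetry.getTracer("auth")

    override fun refreshToken(reason: String): String {
        val span = tracer.spanBuilder("auth.refresh_token")
            .setAttribute("auth.refresh_reason", reason)
            .startSpan()
        return try {
            delegate.refreshToken(reason)
        } finally {
            span.end()
        }
    }
}
```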
Another frequent culprit for poor performance in mobile apps is ad providers. For example, we recently used a custom span to wrap the web view that Google pops its advert into, and we could see that it was dramatically slowing down multiple individual screens.
Potential Solutions
With any performance-tuning work, it is usually best to focus on the top two or three issues, then deploy the improved version and remeasure. As you address the larger issues, new areas often become the next bottleneck rather than the ones you previously observed. The key is incremental progress, release over release.
From Technical Investment to Business Advantage
There is a real competitive advantage in delivering consistently smooth user experiences that keep customers engaged and satisfied. By implementing comprehensive real user monitoring and addressing performance issues systematically — from startup optimization and network efficiency to UI rendering and screen-by-screen improvements — development teams can transform user complaints about “buggy” apps into positive reviews and increased retention.
Moreover, investing in performance monitoring tools and methodologies pays dividends — not just in user satisfaction, but in reduced infrastructure costs, improved conversion rates and the ability to scale confidently as your user base grows. Performance isn’t just a technical consideration; it directly impacts your bottom line and long-term success.
Our application observability tool, BugSnag, is specifically designed to streamline the debugging process for developers. Its dashboard delivers the information needed to quickly identify, diagnose and fix problems.
The technical implementation leverages enhanced OpenTelemetry SDKs with improved automatic instrumentation capabilities. On Android, BugSnag uses system callbacks to automatically capture screen-load events and network requests, while the iOS implementation relies on Objective-C or Swift method swizzling to wrap framework method calls at runtime. The tool is designed to minimize performance impact by performing swizzling early in the initialization phase when apps are single-threaded.
BugSnag Performance Monitoring is currently available for Android, iOS, React Native, Unity, Flutter and web apps. Pricing is calculated per span. Sign up for a free trial or read the documentation to learn more.