Overlooked testing conditions that affect mobile app quality and performance
Building a mobile app is difficult -- it takes a lot of time, money, and some luck. Success depends not only on the content and functionality of the app, but also on how it performs on real devices under real user conditions. Tools and SDKs for mobile app development have improved in recent years, but the development and testing process often lacks the rigor and attention to detail required to give your app a fighting chance.
First and foremost, you cannot rely solely on post-launch crash reporting and monitoring tools. In the web world, companies learned the hard way that bugs that caused sites to crash or payments to fail ultimately led to unsatisfied customers and lost revenue. As a result, a culture of pre-release testing is now ingrained in web development. So why shouldn’t the same apply to mobile apps?
71 percent of users have a low tolerance for unstable apps and will delete an app the instant it crashes (uSamp, 2013). By launching a buggy app, you run the risk of your first app store reviews being full of negative feedback -- and those initial reviews are critical to early success. When businesses spend anywhere from $25,000 to $100,000 or more on their apps (AnyPresence, July 2013), it’s important to invest in pre-launch testing to proactively find and fix performance issues before launch.
Another issue we see is that developers over-rely on manual testing methods. With device fragmentation producing more than 30 device and OS combinations for iOS, and an order of magnitude more for Android, manual testing is not a scalable approach for mobile developers. To ensure the performance and functionality of your app pre-launch, across all selected device and OS combinations, developers should add automation to their mobile testing toolkit.
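To make the idea concrete, here is a minimal sketch of an automated smoke test run across a device/OS matrix. The device names, OS versions, and the `launch_app` function are all illustrative stand-ins -- in a real suite, `launch_app` would drive the app through a device-automation framework or a real-device cloud rather than returning a canned result.

```python
import itertools
import unittest

# Hypothetical device/OS matrix -- names chosen for illustration only.
DEVICES = ["iPhone 4S", "iPhone 5", "iPad mini"]
OS_VERSIONS = ["6.1", "7.0"]

def launch_app(device, os_version):
    """Stand-in for launching the app on a real device.

    A real implementation would invoke a device-automation tool and
    report whether the app launched cleanly on that device/OS pair.
    """
    return {"device": device, "os": os_version, "crashed": False}

class LaunchSmokeTest(unittest.TestCase):
    def test_launch_across_matrix(self):
        # subTest reports each device/OS combination separately, so one
        # crashing configuration doesn't hide failures on the others.
        for device, os_version in itertools.product(DEVICES, OS_VERSIONS):
            with self.subTest(device=device, os=os_version):
                result = launch_app(device, os_version)
                self.assertFalse(result["crashed"])
```

The payoff of this structure is that adding a new device or OS version is a one-line change to the matrix, while a manual tester would have to repeat the entire pass by hand.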
It’s easy to forget, when testing, the wide variety of real-world conditions an app will be subjected to during an average day. Today’s mobile users expect an always-on, always-connected experience in which apps always perform. Mobile apps need to run on a variety of devices, across a range of screen sizes, processors, memory restrictions, and OS versions. Real user conditions dictate different networks, signal strengths, locations, orientations, and power constraints. Testing primarily on simulators and emulators won’t catch the issues users will experience on real devices. It’s critical to include real-world, real-device testing as part of your mobile strategy.
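One testable consequence of those real-world network conditions is how the app behaves when a request times out or the connection drops. The sketch below shows the pattern in its simplest form: a fetch with a cached fallback, exercised by injecting a failing fetcher. The function and its arguments are hypothetical; the point is that degraded-network behavior can be tested deterministically, without waiting for a flaky connection to occur naturally.

```python
def fetch_with_fallback(fetch, cached_value):
    """Try a live fetch; on a network-style failure, fall back to cache.

    `fetch` is any zero-argument callable that returns fresh data or
    raises TimeoutError/ConnectionError, mimicking a weak or lost signal.
    """
    try:
        return fetch()
    except (TimeoutError, ConnectionError):
        # Degrade gracefully instead of crashing or hanging the UI.
        return cached_value

def flaky_fetch():
    # Simulates a request that times out on a poor connection.
    raise TimeoutError("simulated weak signal")

def good_fetch():
    return "fresh data"
```

In a test suite, both paths are then one assertion each: `fetch_with_fallback(good_fetch, "stale")` returns the fresh result, while `fetch_with_fallback(flaky_fetch, "stale")` returns the cached one.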
As more companies look to roll out their next mobile app -- or build a company entirely around an app -- it’s imperative that developers optimize apps for success. Today’s consumers are impatient and fickle, quick to dismiss an app that crashes even once. Even with the most desirable user interface, a buggy app will result in negative reviews in the app store and online. Developers need to focus not just on functionality, but on performance and quality as well. The best way to ensure the performance of the app is up to par is to invest in thorough testing during the development and pre-launch stages -- under as many real user conditions as possible.
Jay Srinivasan is the CEO and co-founder at Appurify, a mobile performance optimization startup working with major brands such as Google and Macy’s, where he is responsible for driving the company’s strategy and growth. Most recently, Jay was a revenue product manager on the Farmville franchise and a member of the M&A team at Zynga. Previously, he spent more than five years at McKinsey & Co. where he specialized in B2B sales and marketing for high-tech companies. Jay has a Ph.D. in Computer Science from the University of Illinois and a B.Tech in Electrical Engineering from the Indian Institute of Technology, Madras. He holds several patents in microprocessor reliability and mobile developer tools.