October 5, 2022

A/B Testing for Mobile Apps

No matter what kind of product you're creating, there comes a time when you need to roll out changes. These changes may include new features, increased functionality, or updates to the user interface. One challenge in this process is determining whether the changes you make will yield the results you want, have no effect, or, worse, have a negative effect. You can use A/B testing in such scenarios.

A/B testing is done to compare the overall business or organizational impact between two versions of a product. Typically, version A is the one users currently see, while version B is the version to be rolled out.

A/B testing may involve design changes that range from a small edit to a button shape to an entirely new layout. It could also include things like adding an informational pop-up message, changing the sign-up process, or varying text on the user interface.

In this article, you'll learn more about A/B testing on mobile, including why it's needed and some challenges that you may encounter when you implement it.

The Importance of A/B Testing

The objective of A/B testing is to see if a change to the app elicits a change in the outcome or user behavior. Here are some key benefits:

Wide Range of Tests

A/B testing can be carried out by any department or team, for example:

  • Marketers can use A/B testing to increase user engagement, improve the likelihood of conversion, or drive upsells.
  • Designers can use A/B testing to inform design changes that improve stickiness or the overall UX.
  • Data engineers can use A/B testing to gather information about user behavior or to test the accuracy of algorithmic changes.

Increased Conversion

A/B testing allows for careful changes to the user experience while collecting data on the results. Repeating this process allows you to refine the sign-up flow of the app, ensuring that users have the best experience possible when onboarding, leading to an increase in conversions and leads.

Inability to Get Qualitative Feedback from App Users

It's common to see users being asked for feedback about their experience and what they think could be improved. However, these responses are often influenced by external forces and reflect the user's mindset at the time the question was asked rather than their specific experience of using the app. Integrating A/B tests gives a better view of user preferences and experiences, as data is collected automatically and uploaded for review without interrupting user activities.

Using Feedback on Roadmap to Reduce Costs

Building a mobile app is a nontrivial task: development on mobile is costly and resource intensive. It takes time and energy to build a feature and test it across different screen sizes, operating system versions, and phone types. Because of this, it's important to focus on the features that have the greatest effect on the user experience and are most likely to lead to increased conversions and user retention. A/B testing gives you a clear picture of what to prioritize and helps inform the roadmap.

Minimize Risk by Rolling Changes Out Gradually

When A/B testing, you can roll new changes out to a small segment of users and observe their effects. Changes that have a negative impact are removed and never make it to production.
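
To make this concrete, here's a minimal Kotlin sketch of how such a gradual rollout could work. It's an illustration only: the stable identifier (a device ID here) and the use of a CRC32 hash are assumptions, not a prescribed approach.

```kotlin
import java.util.zip.CRC32

// Hypothetical variant assignment: hash a stable identifier into a bucket from 0 to 99
// and expose the new version only to users below the rollout percentage.
enum class Variant { A, B }

fun bucketFor(stableId: String): Int {
    val crc = CRC32().apply { update(stableId.toByteArray()) }
    return (crc.value % 100).toInt() // deterministic: the same id always lands in the same bucket
}

fun assignVariant(stableId: String, rolloutPercent: Int): Variant =
    if (bucketFor(stableId) < rolloutPercent) Variant.B else Variant.A

fun main() {
    // Roll the change out to roughly 10% of users; everyone else keeps version A.
    println(assignVariant("device-1234", rolloutPercent = 10))
}
```

Because the assignment is deterministic, a user sees the same variant on every launch, and increasing the rollout percentage later only adds new users to variant B.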

A/B Testing for Mobile Specifically

You can find plenty of content online offering tips on where to start A/B testing your mobile app. The best place to begin will depend on your own business metrics, but it generally involves onboarding, activation, or retention. Here are a few good places to start looking to see if improvements need to be made:

Activation

Your funnel measures the rate at which users complete a series of steps as they use the product. First, dig into the data and see where most of your users are dropping off; it can help to compare these numbers against industry benchmarks. Look at not just the number of users who completed a process, but also the number of users who started but dropped off, and the specific step at which they dropped off. You can then start A/B testing the removal or addition of steps to see what works best for your users.
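
As an illustration, the Kotlin sketch below computes step-to-step conversion rates from event counts. The step names and numbers are invented; a real funnel would be built from your own analytics events.

```kotlin
// Hypothetical funnel analysis: given how many users reached each step of a sign-up flow,
// compute the step-to-step conversion rate to spot where users drop off.
fun stepConversion(usersPerStep: List<Pair<String, Int>>): List<Triple<String, Int, Double>> =
    usersPerStep.mapIndexed { i, (step, count) ->
        val previous = if (i == 0) count else usersPerStep[i - 1].second
        Triple(step, count, if (previous == 0) 0.0 else count.toDouble() / previous)
    }

fun main() {
    val funnel = listOf(
        "Opened app" to 10_000,
        "Started sign-up" to 4_200,
        "Entered email" to 3_900,
        "Completed sign-up" to 1_100,
    )
    stepConversion(funnel).forEach { (step, count, rate) ->
        println("$step: $count users (%.0f%% of previous step)".format(rate * 100))
    }
}
```

In this made-up example, the sharpest drop happens at the final sign-up step, which is where you'd focus your first A/B tests.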

Usability

Tab bars are a frequent point of debate in mobile app development, and developers need to decide first if one needs to be used at all, and second, how to arrange items within that bar. Keeping core functionality easily accessible is crucial for an app, but it's not always easy to do when contending with the limited screen size of a mobile device. Instead of making an uninformed decision, you can use A/B testing to do a controlled experiment and determine which layout brings more value to the user's experience.

By averaging user responses across hundreds or thousands of samples, you can determine that user session length is longer with a more muted color palette, or that you get more sign-ups when the initial screen asks only for an email address, rather than an email address and password.
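
For example, a simple comparison of one metric between variants might look like the Kotlin sketch below. The session lengths are placeholder values, and a real analysis would also apply a significance test rather than comparing averages alone.

```kotlin
// Hypothetical comparison of a single metric (session length in seconds) between variants.
data class SessionSample(val variant: Char, val sessionSeconds: Double)

fun meanByVariant(samples: List<SessionSample>): Map<Char, Double> =
    samples.groupBy { it.variant }
        .mapValues { (_, group) -> group.map { it.sessionSeconds }.average() }

fun main() {
    val samples = listOf(
        SessionSample('A', 180.0), SessionSample('A', 240.0), SessionSample('A', 150.0),
        SessionSample('B', 300.0), SessionSample('B', 280.0), SessionSample('B', 260.0),
    )
    val means = meanByVariant(samples)
    println("Mean session length, A: ${means['A']}s, B: ${means['B']}s")
}
```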

Referrals

Another major concern with mobile apps is bringing in new users, and A/B tests can be used to see how different app-sharing functionalities affect new installations. This can include tracking usage of referral codes, dynamic links, or link sharing on social media. You can also observe what happens when you prompt the user to share the app, such as by encouraging them to post their daily accomplishments to social media.

The possibility of thousands of data points is also a good illustration of why A/B testing is so crucial to mobile apps. Due to the large user bases of mobile apps, it's not always possible to get qualitative feedback on a meaningful scale. With A/B testing, you can use a subsection of your user base to test your goal metrics automatically, without needing to explicitly ask users for feedback.

The Challenges of A/B Testing

Although A/B testing is very beneficial, the implementation is challenging.

Setting the Correct Hypothesis and Areas of Testing

Before you can begin testing, you need to determine what exactly you're testing. It's best to pick a single objective, such as “increase referrals,” and research common ways of achieving it, then pick a single change to move forward with. It's important to choose an objective for which you can define clear metrics of success, and to understand the reasoning behind the testing variables that you've selected.

The changes you make should be informed by user experience data. If you're trying to increase sign-ups, is the problem getting users to complete the process, or is it getting them to start the process at all? This data is crucial to making informed changes. Without it, you may end up making aimless, random changes that don't affect your target objective, or that may not affect anything at all. If the breakpoint for most users is the third screen of the sign-up process, making the first screen more visually friendly isn't going to bring about the change you're hoping for.

Setting the Correct Sample Size

Your sample size should be large enough to provide you with meaningful data, but small enough that you're not wasting time and money processing far more data than is needed to see results. Before you can settle on a number, you need to decide which group you're interested in: all the users in your userbase, or just a specific segment that may be based on gender, age, or country. The narrower your group is, the larger your sample size needs to be to ensure that you get sufficient meaningful data from the target demographic.

Once you've decided that, you need to determine the actual size of the sample. While there are many calculators to help you with this, you'll still need to set a number of parameters yourself. You'll need to determine the approximate total number of users (or active users) of your app. Next, you need to decide on a confidence level and margin of error, which are interconnected metrics. Confidence levels represent the likelihood that, were your tests to be repeated, they would return the same results. Endeavor to have this at eighty percent or greater when your test group is all users, and lower it for smaller segmentations. The margin of error is the range of potential error, often expressed as ±5%, indicating that the data could be off by up to five percent in either direction.

For most sample size calculators, these numbers will be sufficient to determine the approximate size needed.
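
As a rough illustration, the Kotlin sketch below implements the formula most of those calculators use: a proportion-based estimate with a finite population correction. The population, confidence level, and margin of error shown are placeholder values, and p = 0.5 is the most conservative assumption about response variability.

```kotlin
import kotlin.math.ceil
import kotlin.math.pow

// Rough sample size estimate: n0 = z^2 * p * (1 - p) / e^2,
// then a finite population correction for the size of your user base.
fun sampleSize(population: Int, confidence: Double, marginOfError: Double): Int {
    val z = when (confidence) {          // common z-scores for two-sided confidence levels
        0.80 -> 1.2816
        0.90 -> 1.6449
        0.95 -> 1.96
        0.99 -> 2.5758
        else -> error("Unsupported confidence level: $confidence")
    }
    val p = 0.5
    val n0 = z.pow(2) * p * (1 - p) / marginOfError.pow(2)
    val corrected = n0 / (1 + (n0 - 1) / population)
    return ceil(corrected).toInt()
}

fun main() {
    // 50,000 active users, 95% confidence, ±5% margin of error -> roughly 382 users
    println(sampleSize(population = 50_000, confidence = 0.95, marginOfError = 0.05))
}
```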

Collecting Data Offline

Mobile applications have evolved and improved over time. One game-changing improvement is offline caching, which allows users to interact with the application normally while offline. However, this innovation poses a challenge in collecting and posting data for A/B testing.

To overcome this, you can integrate a local database to store the A/B data collected while offline. It's important not to store too much data, as it will bloat the size of the app, potentially forcing some users to uninstall it. Offline data also needs to be timestamped, which ensures that your data is appropriately sequenced and that time-sensitive analyses are being done on the correct data set. To minimize these challenges, data that's collected while the user is offline should be uploaded as soon as the user goes online again, then deleted from their device.
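
A minimal sketch of this pattern, using an in-memory buffer in place of a real local database such as SQLite or Room, might look like the following. The class and event names are illustrative only.

```kotlin
import java.time.Instant

// Hypothetical offline event buffer: timestamp events when they happen,
// flush them in order once the device is back online, and delete them only after upload succeeds.
data class AbEvent(val name: String, val variant: Char, val recordedAt: Instant = Instant.now())

class OfflineEventQueue(private val upload: (List<AbEvent>) -> Boolean) {
    private val pending = mutableListOf<AbEvent>()

    fun record(name: String, variant: Char) {
        pending += AbEvent(name, variant) // timestamped at collection time, not upload time
    }

    fun flushIfOnline(isOnline: Boolean) {
        if (!isOnline || pending.isEmpty()) return
        val batch = pending.sortedBy { it.recordedAt }
        if (upload(batch)) pending.clear() // remove local copies only after a successful upload
    }
}
```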

Edge Cases

Tracking users and their actions is a difficult task. Do you tie the data to the device or the currently signed-in user? What happens when the user is not signed in, or if they use the app without an account, and then create one halfway through your testing period?

To maximize user tracking, it's advisable to collect the A/B data and tie it to a device-scoped unique identifier (UUID). If the user creates an account over the course of the test, you can associate their previous data with the account and details they provide. If they don't create an account, you can either use the collected data without the additional information you would have gotten upon account creation, or discard the unassociated data to allow for more specific segmentation in your results.
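
One way this could look in code, assuming a randomly generated UUID that would be persisted on first launch, is sketched below. The class and method names are hypothetical, not a specific SDK's API.

```kotlin
import java.util.UUID

// Hypothetical identity handling: generate a device-scoped UUID once, tag every A/B event
// with it, and attach the account ID later if the user signs up mid-test.
class AbIdentity {
    val deviceId: String = UUID.randomUUID().toString() // persist this on first launch in practice
    var accountId: String? = null
        private set

    fun onAccountCreated(newAccountId: String) {
        accountId = newAccountId // earlier events keyed by deviceId can now be linked to this account
    }

    fun eventKey(): String = accountId?.let { "$deviceId/$it" } ?: deviceId
}
```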

One thing that's difficult to account for is when a user shares a device with another user. If each user has their own account, any data collected can be mapped to that user's unique details. In some instances, though, users may share a single account, or use the app without either of them creating an account. This can lead to data points that encompass a wide range of expected behaviors, and in some cases will result in the need to discard data from that user.

Goal Attribution

While it's generally considered best practice to test one change at a time, some changes will necessarily involve multiple components. In cases like this, it can be difficult to tell which changes led to which outcomes. Understanding what's driving user behavior may require further testing, or may be aided by specialized third-party software that's able to return more nuanced insights than can easily be gleaned from your raw data.

Modifying the A/B Test Parameters

If you're finding that one or more areas you're testing aren't yielding significant results, or if you realize belatedly that a seemingly minor change is having significant effects, it can be tempting to change the focus of your tests in the middle of the process. It's important to note, though, that changes to the test parameters at later stages of testing may not yield meaningful results. The best practice is to follow your initial hypothesis through the end of its planned testing, then determine your next steps. This also reinforces the importance of limiting the scope of each area being tested.

Slow Test Results

Integrating an A/B test is just the start of the journey. You still have to deploy the new app version to the platform's app store for distribution. App store review and release times vary depending on the type of application and the number of changes made, and can range from a few hours to several days. It also means you can only start new tests or change test parameters with a new release of your app, so iteration can be a slow process.

Conclusion

In this article, you learned about A/B testing, its importance, and the challenges you might face when running A/B tests. If you're looking for an easier way to do A/B testing, consider Unflow.

Unflow is a tool that lets you deliver in-app A/B content to your end users and collect feedback from them with no additional implementation work. A/B testing shouldn't be reserved for software developers, so Unflow was built to be simple enough for anyone to use. It allows you to run A/B tests, track user behavior in a product, and easily push new content to your app. If you're interested in using this tool or just need more information about it, you can find out more at Unflow.

Terrence Aluda
