Guide to Mobile Apps Conversion Optimization


There is a lot of information on the internet about optimizing both mobile and desktop websites, but far less about getting better conversion rates in mobile apps. That's why in this article we have gathered our knowledge about App Conversion Rate Optimization (CRO) and share it with you to guide you through the whole optimization process.

Step 1: Measure

Define your conversion funnel and visualize what it looks like. A conversion funnel displays the rate at which users complete each step of the user journey towards an overall goal. Before you start experimenting with a tool, define the user journey you want to track. What would be the ultimate user journey that ensures goal completion?
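As a minimal sketch, the step-by-step conversion rates of such a funnel can be computed like this. The funnel steps and user counts below are illustrative assumptions, not data from a real app:

```python
# Hypothetical funnel: each entry is (step name, users who reached it).
funnel = [
    ("Store page view", 10000),
    ("Install", 3200),
    ("Sign-up", 1800),
    ("First purchase", 450),
]

def funnel_rates(steps):
    """Return (step name, conversion rate from the previous step) pairs."""
    rates = []
    for (prev_name, prev_users), (name, users) in zip(steps, steps[1:]):
        rates.append((name, users / prev_users))
    return rates

for name, rate in funnel_rates(funnel):
    print(f"{name}: {rate:.1%}")
```

The steps with the lowest rates are the natural candidates for your first experiments.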


Running A/B tests without thinking about your online goals and user behaviour in general will lead to ineffective testing. So what is your final goal?

Define your target metrics

Tip: To get better conversion rates, it's really important to discover which ad channels bring the most loyal users. You can launch campaigns with identical targeting in different ad networks and compare their performance later.

Step 2: Research


Duration: 2 days to 1 week

Take enough time, up to a week, to research and analyze the market, your competitors, your audience, etc.

For that you need to:

1. study app pages of your competitors and industry leaders;

2. study trends and best practices for apps in your industry;

3. assess user feedback and chat communications (if any).

Accumulate all your studies into a document and start brainstorming your first A/B testing hypotheses.

Step 3: Develop a Hypothesis

A solid hypothesis should consist of 3 parts:

  1. What do you plan to change;
  2. The result of implementing the change;
  3. Rationale, i.e. the research or conclusions this hypothesis is based on.

For example:

  1. Changing the app icon from A to B;
  2. The app will get more installs;
  3. My target audience doesn't clearly associate the current icon with the app's content.

Define where you want to test and list all your hypotheses in an Excel file. Prioritize them and start with the highest-ranked ones. How to prioritize? Look into the funnel steps (mentioned in Step 1) that are underperforming and that matter most for your business objectives, and start from them.
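One simple way to rank the spreadsheet is a scoring formula. The sketch below assumes an ICE-style score (impact × confidence ÷ effort), which is a common framework but not something the article prescribes; the hypotheses and scores are made up for illustration:

```python
# Hypothetical hypothesis backlog with 1-10 scores for each dimension.
hypotheses = [
    {"name": "New app icon",         "impact": 8, "confidence": 6, "effort": 2},
    {"name": "Shorter onboarding",   "impact": 9, "confidence": 5, "effort": 7},
    {"name": "Reworded screenshots", "impact": 5, "confidence": 7, "effort": 1},
]

def ice_score(h):
    """ICE-style priority: higher impact and confidence, lower effort, win."""
    return h["impact"] * h["confidence"] / h["effort"]

# Highest-priority hypotheses first.
for h in sorted(hypotheses, key=ice_score, reverse=True):
    print(f"{h['name']}: {ice_score(h):.1f}")
```

Any scoring scheme works as long as it is applied consistently, so the team agrees on what to test first.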

Step 4: Run the test

Create a technical specification of the test that includes the information required for development and quality assurance, as well as prototypes of the variations.

To run the test, you can develop your own solution or use a Conversion Rate Optimization tool that supports app A/B tests. If you choose the second option, make your testing process easier by checking out our article on tools for mobile app A/B testing.

After the test is live, monitor the key metrics you defined as the success indicators of the test, along with the statistical significance and sample size.

Statistical significance indicates whether the change in the variation actually impacted the conversion rate. A/B testing tools usually show the level of statistical confidence in their analytics dashboards; if not, there are many online calculators for determining the statistical significance of A/B tests. A reliable result sits at a 95–100% confidence level.

The sample size is the volume of traffic needed per variation. The same calculators compute this number, and it depends on the statistical significance level you are aiming for.
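For a rough sense of what those calculators do, here is a minimal sketch of a two-proportion z-test, one standard way to compute significance for conversion-rate experiments. The traffic numbers are illustrative assumptions:

```python
import math

def z_test(conversions_a, users_a, conversions_b, users_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a = conversions_a / users_a
    p_b = conversions_b / users_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conversions_a + conversions_b) / (users_a + users_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical test: 4.0% vs 4.8% conversion on 10,000 users each.
p = z_test(400, 10000, 480, 10000)
print(f"p-value: {p:.4f}")  # significant at 95% confidence if p < 0.05
```

A p-value below 0.05 corresponds to the 95% confidence threshold mentioned above; real tools also account for sample-size planning and multiple looks at the data, which this sketch does not.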

Once you have reached the required statistical significance level and sample size, you can stop the test and move on to the final report of results.

Ensure that your experiment lasts for at least 7 days before finishing it. This way, you'll collect data on user behaviour across all days of the week.

Step 5: Finishing A/B Tests and Interpreting Results

Write a summary of the test results that includes:

  • test duration
  • targeting
  • the original and variation designs
  • users per variation
  • conversion rates
  • the variation's improvement (lift)
  • statistical significance of the results

Whether the variation wins or loses, write down the learnings and insights you got from the test.

If the results show that the variation wins by 1% or less, don't rush to push it to production. Only an advantage of at least 3–4% can really impact the conversion of your app.
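A minimal sketch of this check, assuming the percentages above refer to relative improvement (lift) of the variation over the original; the rates are illustrative:

```python
def lift(rate_control, rate_variation):
    """Relative improvement of the variation over the control, as a fraction."""
    return (rate_variation - rate_control) / rate_control

# Hypothetical result: control converts at 4.0%, variation at 4.2%.
improvement = lift(0.040, 0.042)
print(f"Lift: {improvement:.1%}")
if improvement >= 0.03:
    print("Above the 3-4% threshold: worth shipping")
else:
    print("Marginal win: consider re-testing before shipping")
```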


From 1 week up to 1 month after implementation, compare the conversion rate and key metrics before and after rolling out the winning variation to make sure the change didn't hurt the conversion rate of the app.

Step 6: Follow-up Tests

Create new hypotheses based on the insights you got from analyzing the previous tests, and test them. It's crucial to make A/B testing an essential part of your app's Conversion Rate Optimization strategy. The app stores constantly modify their systems, and you need to stay on top of the latest trends and changes to adapt to them. That's why it's critical to run follow-up tests to scale your results.

When it comes to mobile conversion rate optimization, there is always room for improvement, and a well-structured approach to A/B testing can drastically help you get ahead in any App Store.

Originally published at on May 28, 2019.

Written by

Fully remote team of professionals providing Digital Analytics, Conversion Rate Optimization and Business Intelligence services for clients around the world.
