A/B Testing

A/B testing is an experiment that compares two or more versions of a customer journey against each other to understand which version performs better against a specific goal. You conduct an A/B test by splitting traffic between the different versions of a customer journey and using statistical analysis to determine which version performs better for a given conversion goal.

The potential benefits of A/B testing are:

  • Improved conversion rates
  • Better user experience
  • Optimized user journeys

Vue.ai supports a frequentist approach to A/B testing, where the Vue.ai user is fully in control of executing, monitoring, and concluding A/B tests.
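
To make the statistics concrete, the sketch below shows one common frequentist comparison: a two-proportion z-test on conversion counts for a control and a variant journey. The function and the sample numbers are illustrative only and are not part of Vue.ai.

```python
# A minimal sketch of a frequentist comparison between two journey variants,
# using a two-proportion z-test on conversion counts. All names and numbers
# here are illustrative, not Vue.ai data or APIs.
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, one-sided p-value) for H0: variant B converts no better than A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, 1.0 - normal_cdf(z)

# Example: control journey A vs. variant journey B
z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"z = {z:.2f}, confidence = {(1.0 - p) * 100:.1f}%")  # compare against your chosen confidence percent
```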

A/B Testing Best Practices#

Some of the best practices for A/B testing are listed below:

  • Duration: Run the A/B test for a minimum of two weeks and until the target confidence level is reached.

  • Sample Size: Ensure that the sample size for each variation in your test is large enough to make your results valid. Several easy-to-use online sample size calculators can help you work out the required sample size from the minimum detectable effect you expect between the groups for a conversion metric (see the sketch after this list). If you do not pre-calculate the required sample size, you may end up stopping the test too early, before it has collected enough data, or running it for far too long and missing out on revenue.

  • The integrity of the test: Do not make changes to a journey while the test is running; doing so will skew the data and lead to biased results.

  • What to test: Test one feature at a time. Keeping everything else constant is the golden rule of A/B testing, and it helps you gauge the impact of the one feature you have added, modified, or removed.
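
For reference, the sketch below mirrors the arithmetic that typical sample size calculators use for a two-proportion test. The baseline conversion rate, minimum detectable effect, significance level, and power shown here are illustrative assumptions, not Vue.ai defaults.

```python
# A rough sketch of the sample-size arithmetic behind common online calculators,
# for a two-proportion test. The baseline rate, MDE, significance level, and
# power are illustrative inputs, not Vue.ai defaults.
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variation(baseline: float, mde: float,
                              alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed per variation to detect an absolute lift of `mde`."""
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * pooled * (1 - pooled))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Example: 5% baseline conversion rate, detect an absolute lift of 1 percentage point
print(sample_size_per_variation(baseline=0.05, mde=0.01))  # about 8,160 visitors per variation
```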

How to Set Up A/B Tests#

Under Experiments:

  • Start by choosing the audiences for whom you want to run the test. If you want to run the test against your entire user population, choose All Visitors.
  • Choose Journeys and set up the traffic split (see the sketch after this list).
  • Choose one of the goals from the drop-down; this defines the measurement criteria for your A/B test.
  • Choose the metric to be analyzed, based on the goal.
  • Choose your confidence percent, which defines the level of confidence required from the A/B test result, so you can make business decisions backed by statistics.
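
For intuition on how a traffic split can be applied, the sketch below hashes a visitor ID into a bucket and maps it onto configured journey weights, so each visitor consistently sees the same journey. The journey names and weights are hypothetical; Vue.ai handles the actual assignment once the split is configured.

```python
# An illustrative sketch of a deterministic traffic split between journeys.
# The journey names and weights are hypothetical; Vue.ai performs the actual
# assignment once the split is configured.
import hashlib

def assign_journey(visitor_id: str, split: dict[str, float]) -> str:
    """Hash the visitor ID into [0, 1] and map it onto the configured split."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable per visitor across sessions
    cumulative = 0.0
    for journey, weight in split.items():
        cumulative += weight
        if bucket <= cumulative:
            return journey
    return next(iter(split))  # fallback for floating-point rounding

# Example: 50/50 split between a control journey and a variant journey
print(assign_journey("visitor-123", {"journey_control": 0.5, "journey_variant": 0.5}))
```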

Other Supported Capabilities#

You can pause an A/B test for a period, when you might want to divert traffic to a particular journey, and then resume it when required. This comes in handy during a sale period or campaign: you can pause the test temporarily, target the entire population with a specific sale promotion journey, and resume the test once the sale period is over.

You can stop the A/B test once it is conclusive, or end it at any point you wish. We recommend stopping the test only when the confidence level is reached and a winner is declared.

If you wish to re-run a historic A/B test that was stopped, you can do so by duplicating it and starting a new test.

You can view the A/B test results at any time on the Monitor and Control screen, where you can see how your journeys are performing against the control group and track the metrics and the winning journey.