When A/B Testing is Not Enough

A common practice among mobile app marketers and statisticians when testing hypotheses is A/B Testing. When developing an engagement strategy for your users, knowing what kinds of push and email messages inspire a call-to-action is crucial for customer acquisition and retention, as well as for Lifetime Value (LTV).

The frequency, type, and timing of messages are variables that must be calibrated correctly to achieve maximum conversion rates, and hypotheses remain just that until backed up by concrete analytics. Those analytics give you insight into what works and what doesn't.

Things you can test include the timing of your engagement (frequency, trigger-based sends, the optimal time for each user), audience targeting, and content (tone, punctuation, wording, use of images, personalization, call-to-action).

A/B Testing, in simple terms, allows marketers to compare two variants of a single element. As an example, say you have an email template that you want to send to the following segment of your users:

  • Users that have created a new account; and
  • Have less than five friends they follow.

Presuming you have a large enough unbiased sample pool, you conduct an A/B Test using two versions of your email, each with different content, to try to onboard your segment's users to add new friends from their Facebook friends list.

You send one version to one group within the segment and the other version to the second group, and after a period of time evaluate which has the better open rate. The same approach would apply to a push-notification or text-message campaign.
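Evaluating the winner comes down to comparing open rates between the two groups. Here is a minimal sketch in Python, using hypothetical campaign numbers and a standard two-proportion z-test to check whether the observed difference is statistically meaningful:

```python
import math

def two_proportion_z(opens_a, sent_a, opens_b, sent_b):
    """Two-proportion z-test: is the open-rate difference between
    version A and version B statistically meaningful?"""
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    # Pooled open rate under the null hypothesis of no difference
    p = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p * (1 - p) * (1 / sent_a + 1 / sent_b))
    return (p_a - p_b) / se

# Hypothetical numbers: 5,000 emails sent per group
z = two_proportion_z(opens_a=1200, sent_a=5000, opens_b=1050, sent_b=5000)
print(round(z, 2))  # |z| > 1.96 is roughly significant at the 95% level
```

With these made-up figures the test comes out significant, so version A would be the winner; with a smaller sample or a smaller gap, you would keep the test running longer.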

A/B Testing is simple to devise, does not require a large sample size, and easily compares two design proposals, but the fact is, it's limited to A or B. That is, you only get to test two alternatives of a single element on a campaign or screen.

If you need information about how many different elements interact with one another, multivariate testing is the optimal approach. (Optimizely)

Multivariate Analysis takes element comparison to an entirely new and expansive dimension, allowing marketers not only to test two elements, but to determine the best-performing campaign using multiple variables simultaneously. You get to test combinations, revealing greater insight into how the variables interact with each other.

Multivariate testing is a technique for testing a hypothesis in which multiple variables are modified. The goal of multivariate testing is to determine which combination of variations performs the best out of all of the possible combinations. (Optimizely)

So rather than testing just one concept, such as the timing or tone of a message, you can test multiple variables, such as tone, timing, and frequency of the message, and correlate the results. You could test five versions of push-notification content concurrently to find the best combination. A/B Testing compares two variants of one variable, whereas multivariate testing tests multiple variables in combination.

Number of Variations on Element 1 × Number of Variations on Element 2 = Total Number of Variations.
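That multiplication can be sketched in a few lines of Python with itertools.product; the element names here (tones and send times) are purely hypothetical:

```python
from itertools import product

# Hypothetical elements: three message tones and two send times
tones = ["friendly", "urgent", "neutral"]
send_times = ["morning", "evening"]

# Every combination of the two elements is one variation to test:
# 3 variations x 2 variations = 6 total variations
variations = list(product(tones, send_times))
print(len(variations))  # 6
for tone, when in variations:
    print(f"{tone} tone, sent in the {when}")
```

Adding a third element with, say, three variations would multiply this again, to 18 combinations, which is exactly why the sample-size requirement grows so quickly.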

Whereas A/B Testing doesn't require a huge sample pool, multivariate testing does. Think of it this way: in A/B Testing, your sample is split in two, 50% for A and 50% for B, whereas with multivariate analysis you need a statistically significant, roughly equal share of users for every combination of variables.
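To see why the sample requirement grows, here is a back-of-the-envelope split, assuming a hypothetical pool of 10,000 users and six variable combinations:

```python
# Hypothetical pool of 10,000 users in the target segment
pool = 10_000

# A/B test: a 50/50 split leaves 5,000 users per variant
ab_per_group = pool // 2

# Multivariate test with six combinations (e.g. 3 tones x 2 send times):
# the same pool is split six ways, leaving ~1,666 users per cell,
# so each cell takes much longer to reach statistical significance
combinations = 6
mv_per_cell = pool // combinations

print(ab_per_group, mv_per_cell)  # 5000 1666
```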

A useful tool I use for working out sample size is Optimizely's Sample Size Calculator.

When do you use which?

A/B Testing certainly has its place, as does multivariate testing, and choosing which tool to use when is important. Suppose your team, using our first example, needs to test the wording of the email content, the use of personalization (e.g. mentioning nearby people you may know and can follow), and the time of day to send the email. That means testing multiple variables at the same time, so in this case, go for Multivariate Analysis.

If you run a multivariate test and one of the variables does not have a significant measurable effect on a conversion goal (e.g. the frequency of emails to new users) but another, such as the tone of the email content, does, an A/B test on that one variable would be more effective than multivariate testing.