5 Things You're Probably Doing Wrong with Email Testing and How to Fix Them


Testing is as essential to your email success as an up-to-date and accurate email address database. It can give you enormous insights into your audience across channels and makes you far more efficient when building campaigns.

The insights you gain from testing will help you create better marketing messages, and the increased revenue you earn will more than offset the cost – if you set up and run the tests correctly.

That's a big "if!" And it's one of the reasons more marketers don't test – or why they don't get useful results from their testing programs.

I'll share some background information about testing, the five mistakes you're probably making right now with your email testing and how to fix those mistakes.

3 steps for successful testing

Every campaign you run, no matter how simple or complex, begins with an objective. Once you know what you want your campaign to achieve, you can build a strategy for achieving that objective. Testing comes into play to help you assess how close you came to your objective.

The actual testing process boils down to three basic steps:

  1. Create a hypothesis. This is what you predict will happen under specific conditions and usually includes a cause-effect statement: your "because" clause. (Learn more about this below.)
  2. Choose a success metric. This is how you will measure whether the results support your hypothesis. This metric must relate back to your objective. Most marketers go for the open rate, but opens are not necessarily how you will judge whether your campaign succeeded or failed.
  3. Ensure the results are statistically sound. If it's an automated email program, run the test until it reaches statistical significance – stop it too soon, and you could get misleading results. If it's a business-as-usual campaign, make sure the sample size is large enough. (A quick way to check is sketched below.)
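If your testing platform doesn't report statistical confidence for you, the check is straightforward to script. Here's a minimal Python sketch of a two-proportion z-test, assuming you can export the number of recipients and conversions for each variant; the figures at the bottom are invented purely for illustration.

```python
from scipy.stats import norm

def ab_confidence(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns B's uplift over A and the confidence
    that the two conversion rates genuinely differ (1 - two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    confidence = 1 - 2 * norm.sf(abs(z))   # two-sided test
    uplift = (p_b - p_a) / p_a
    return uplift, confidence

# Invented numbers, purely for illustration
uplift, confidence = ab_confidence(conv_a=305, n_a=50_000, conv_b=605, n_b=50_000)
print(f"Uplift: {uplift:.0%}, confidence: {confidence:.1%}")
```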

5 mistakes you're probably making with email testing and how to fix them

Don't be embarrassed if you find your testing program includes one of these five mistakes. They can all be fixed!

  1. Relying only on ad hoc testing

This is one-off testing that focuses on a single aspect of a campaign, like the subject line, the images or the call to action. It usually doesn't have a strategic or objective-based reason – you're just throwing things at a wall and seeing what sticks.

Suppose your control subject line is "Our sale is on now! Get 15% off today!" while your testing variant is "Our sale is on today! Get 15% off now!" Where's the long-term learning in that?

I'm not saying you shouldn't test the subject line or CTA. Rather, don't rely on only one element to measure whether your campaign succeeded. Also, a test like this applies only to the immediate campaign. It doesn't give you insight into the long-term value.

How to fix it: Adopt a holistic approach to testing, in which you run the same test regularly across campaigns and channels to find out not only which versions of your email get the most responses, but also which ones give you the most insight into your audience.

Start with a hypothesis: "Loss aversion copy is a stronger motivator towards conversions than benefit-led copy BECAUSE people hate losing out more than they enjoy benefiting."

The control: "Our sale is on now! Get 15% off today!" The variant: "15% off today only. Don’t miss out!" You're testing urgency versus FOMO (fear of missing out), not just a simple copy change.

An ad hoc test can give you an immediate lift, but you won't know why. That "why" is essential to all your strategic and tactical planning. With the holistic approach, which includes testing the same hypothesis multiple times to ensure you're not basing your results on an anomaly, you can understand why you got that uplift and realize two other benefits:

  • Gain longitudinal insights into your customers that can be used to inform testing programs for your other channels and increase revenue across the board.
  • Make your email channel the main channel for testing and justify adding budget and resources to support it.
  2. Using the wrong metric to measure success

This often happens if you default to the open or click rate as your success metric. As I said above, your success metric must map back to your campaign or program-level objective.

As we've seen in previous research on subject-line length, emails that generate high open rates don't always drive the highest conversions or greatest revenue.

How to fix it: You're going to crunch a bunch of numbers here, so fire up that spreadsheet!

I call this the "litmus test," which compares campaign performance on different metrics.

Here's how:

  • Using six months of data, list your top 10 campaigns for open rates, the top 10 campaigns for click rates and the top 10 campaigns for conversions.
  • Compare the results. Are the campaigns with high open and click rates the same ones that gave you the highest conversion rates? I find they usually aren't.
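If you'd rather script the comparison than build the spreadsheet by hand, here's a rough pandas sketch of the same idea. The file name and column names (campaign, open_rate, click_rate, conversion_rate) are assumptions – rename them to match whatever your ESP exports.

```python
import pandas as pd

# Assumed export: one row per campaign covering the last six months,
# with columns named campaign, open_rate, click_rate and conversion_rate
df = pd.read_csv("campaign_metrics_last_6_months.csv")

top10 = {
    metric: set(df.nlargest(10, metric)["campaign"])
    for metric in ("open_rate", "click_rate", "conversion_rate")
}

# How many top-converting campaigns also topped opens or clicks?
print("Also in top 10 for opens:", len(top10["conversion_rate"] & top10["open_rate"]))
print("Also in top 10 for clicks:", len(top10["conversion_rate"] & top10["click_rate"]))
```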

Using top-of-funnel metrics like opens and clicks to measure success could lead you to optimize your emails for the wrong results.

  3. Not using a hypothesis

A hypothesis is your guide to setting up a test with the right success metrics and the best variables. Without one, you're like a hiker wandering in a forest without a map. Worse yet, as we saw above, you could end up optimizing for the wrong results.

How to fix it: Using your campaign objective, create a statement that predicts what could happen when you compare two variables, along with a "because" statement that explains why.

One of my clients wanted to optimize a welcome email to nudge new subscribers to buy for the first time. They hypothesized, "A subject line that promotes all the savings to be had with Brand X will deliver more conversions than one stating the broad benefits BECAUSE our customers are very focused on savings."

The control email: "See how Brand X will save you money!" The variant email: "See how Brand X saves you time with worry-free shopping!"

A hypothesis focused on the consumer's motivation, as in the example above, also frees you up to test multiple elements of the email – subject line, first paragraph, CTA, image and landing page – giving you more robust results because you're not basing your decision on a single element.

But what happens if the test results don't support your hypothesis? You’ve not failed by any means – instead, you’ve now learned what works better.

This table shows that my client's test results didn't support their hypothesis. But it wasn't a failure, because it showed that benefits can drive more conversions than savings.

Hypothesis | Open %  | Click % | Conv %
Savings    | 34.52%  | 8.25%   | 0.61%
Benefits   | 34.99%  | 8.72%   | 1.21%

Results: Benefits won, with a 98% uplift in conversions at 99% statistical confidence, even though the open rates were similar and the difference between them was not statistically significant.
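For context, that 98% figure is simply the relative change in conversion rate between the two variants; the confidence level then depends on the send volumes behind those rates, which aren't shown in the table. A quick check of the uplift:

```python
# Conversion rates from the table above
savings_rate, benefits_rate = 0.0061, 0.0121
uplift = (benefits_rate - savings_rate) / savings_rate
print(f"Relative uplift in conversions: {uplift:.0%}")   # ~98%
```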

  4. Focusing only on short-term wins

Most testing models that you learn in a marketing class or see presented in testing platforms and ESPs reinforce stand-alone, channel-specific tests aimed at picking a winner and not much else. But you have a chance to really make these A/B split tests count by also focusing on long-term gains.

How to fix it: Use regular, systematic testing based on a hypothesis, and build on previous insights with holistic testing. Besides showing immediate uplifts, the tests also give you valuable insights into your audience and help you understand what works best for your email program.

These long-term gains give you a solid foundation for consistent performance and incremental improvements. 

  5. Confining your learnings to email

Your email list is a built-in sample of your target audience. For example, your subscribers/prospects, first-time customers, repeat customers, loyal customers and lost/past customers will all be in your database.

Email testing is less expensive, faster, and more precise and flexible than testing in other channels. And segmenting your database by lifecycle stage for testing is far easier with your email subscribers!
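As a rough illustration of how simple that lifecycle segmentation can be, here's a minimal Python sketch that buckets subscribers into the stages listed above. The purchase and recency thresholds are assumptions, not rules – tune them to your own business.

```python
from datetime import date, timedelta

def lifecycle_stage(purchase_count, last_purchase=None, lapsed_after_days=365):
    """Bucket a subscriber into a lifecycle stage from basic purchase history.
    The thresholds here are illustrative assumptions, not industry standards."""
    if purchase_count == 0:
        return "prospect"
    if last_purchase and date.today() - last_purchase > timedelta(days=lapsed_after_days):
        return "lost/past customer"
    if purchase_count == 1:
        return "first-time customer"
    if purchase_count < 5:
        return "repeat customer"
    return "loyal customer"

print(lifecycle_stage(0))                                                    # prospect
print(lifecycle_stage(6, last_purchase=date.today() - timedelta(days=30)))  # loyal customer
```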

Because you have this ready-made target audience, you can use what you learned in your email testing to guide testing in other channels, including websites, landing pages, paid search, social media, and banner and retargeting ads.

An essential step: Write everything down! Keep a running journal of testing plans, variables, results and conclusions. This gives you something to consult quickly as you plan new campaign strategy. Plus, you can pass it on to future teams. Your successors will thank you!
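If a shared document feels too loose, even a simple structured log works – one row per test in a CSV or spreadsheet. The columns below are only a suggested starting point, not a standard.

```python
import csv

# Suggested columns for a running test journal – adapt to your own process
FIELDS = ["date", "channel", "objective", "hypothesis", "variables_tested",
          "success_metric", "sample_size", "result", "confidence", "conclusion"]

with open("test_journal.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if f.tell() == 0:          # brand-new file: write the header row first
        writer.writeheader()
    writer.writerow({
        "date": "2024-05-01",
        "channel": "email",
        "objective": "Drive first purchase from the welcome series",
        "hypothesis": "Savings-led subject line beats benefits-led BECAUSE ...",
        "variables_tested": "subject line",
        "success_metric": "conversion rate",
        "sample_size": "",     # fill in once the test has run its course
        "result": "",
        "confidence": "",
        "conclusion": "",
    })
```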

Testing doesn’t happen in a vacuum

To get the most benefit from it, you must move beyond single-channel and basic A/B testing to gain a powerful set of customer insights that continually inform, improve and direct your entire marketing program.