Measuring Success in Mobile Email
We need to talk about how we measure success when it comes to mobile email. The email experience today is miles apart from the one ten years ago – there are mobile phones, tablets (is that a mobile device or not?), watches, laptops – yet for the most part we are still using the same success metrics.
This isn’t going to be an article about design tips or how to add “the responsive code”.
Ok, it might be a bit. Here are five tips: shorter copy, more white space, nice big easy-to-tap buttons, a simpler column layout, and more contrast between colours. Now, I don’t have results to back this up; this is based on 15 years of experience designing marketing campaigns to communicate a message effectively. Shorter copy and more white space make the core message easier to understand, nice big buttons make it painless to interact, and a simpler column layout helps make all of this possible on a smaller screen. The contrast also strengthens the message and draws attention where it is needed when the user is out and likely to have distractions.
We could split test whether this sort of thing works, but our intuition as humans already tells us that making things easier to understand is a good thing to do.
And therein lies our problem – if we only measure the success of our work using opens, clicks or opens-and-clicks-in-an-impressive-equation, we only get a small view of an audience’s behaviour.
When we measure opens and clicks, that’s all we know – whether someone has opened, or if they have then gone on to click. If our sole aim as marketers is to get someone to open, that’s a pretty low bar (tip: gratuitous swear words probably help in that case). There are many things we may want to do, but top of the list should be communicating a message to the user. The trouble is, it’s very hard to measure whether we’ve been successful at that, but that doesn’t mean we can just use other data in its place.
Let’s look at an example. You’re in charge of marketing for some product. In order to tell people about your fabulous new product, you send out an email. Let’s assume that, for some reason, there is no way that the user could have found out about your new product yet. So you send an email. A user opens it. At this point, it doesn’t really matter if the design is optimised for a mobile screen, but let’s assume it is. The user is walking down the street right now, so they don’t have the time or attention to convert properly. They’re also on a mobile device, which, according to some, is hard to use to buy anything. But what does happen is the user can get the gist of what your product is about, and is intrigued enough to want to know more. Later on, the user gets to a coffee shop, pulls out a laptop and searches for your site to find out more.
What have we tracked at that point? An open from an email, and then a search that lands on the website. Both on different devices and networks, so we can’t yet tie the two events to the same user. If the user goes on to buy from the site, we could eventually use the login or email address to connect the two. We’ve made some money, but on the face of it, email’s been ineffective and search has delivered us a customer. What’s happened in reality, though, is the email has done the heavy lifting to get the user interested, and search has just facilitated their journey.
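The stitching described above – connecting an email open and a later on-site purchase once a shared identifier (a login or email address) finally appears – can be sketched roughly as follows. This is a minimal illustration, not any real analytics platform’s API; the event names, fields, and data are all made up for demonstration.

```python
# Hypothetical sketch: once a purchase reveals an email address, look for an
# earlier email open by the same identifier and attribute the conversion to
# email rather than to the last-touch web channel. All data is illustrative.

from datetime import datetime

email_events = [
    {"email": "user@example.com", "event": "open",
     "ts": datetime(2024, 5, 1, 8, 30)},
]

web_events = [
    {"email": "user@example.com", "event": "purchase",
     "ts": datetime(2024, 5, 1, 12, 15), "revenue": 49.0},
]

def first_touch_attribution(email_events, web_events):
    """Attribute each purchase to email if the same identifier
    opened an email earlier; otherwise credit the web channel."""
    opens = {}
    for e in email_events:
        if e["event"] == "open":
            # keep the earliest open per identifier
            prev = opens.get(e["email"])
            if prev is None or e["ts"] < prev:
                opens[e["email"]] = e["ts"]
    attributed = []
    for w in web_events:
        if w["event"] != "purchase":
            continue
        opened_at = opens.get(w["email"])
        channel = "email" if opened_at and opened_at < w["ts"] else "web"
        attributed.append({"email": w["email"], "channel": channel,
                           "revenue": w["revenue"]})
    return attributed

print(first_touch_attribution(email_events, web_events))
```

The point isn’t the code itself, but the dependency it exposes: until the identifier shows up, the open and the purchase look like two unrelated users, which is exactly why channel-level stats undercount email.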
So what would help there? Better attribution? A single customer view? Those buzzwords aren’t without their own issues, but that kind of data would help us measure success better. Even then, the user is out in town, so maybe they divert over a couple of streets to the department store selling your Fabulous New Product™.
Ten years ago, measuring this stuff was easy. Send an email, user opens, clicks, buys. Or user opens but doesn’t click so whatever we did in the email didn’t work (or, you know, the product sucked, but let’s not get into that here).
Now things aren’t as easy, not just because of mobile, but due to the proliferation of different subchannels too. Mobile means that users are taking their email out with them, into the world. So we have to think about a few things: Maybe it means that users are checking their email more often – so perhaps they’re seeing fewer emails at a time but checking in more often. Maybe users are more distracted – they’re checking email to save time or waste time. Maybe they’re taking emails out into places with a bad internet connection.
And that’s not even thinking about the device itself – smaller screen, touch being less precise than click, this idea that it’s harder to convert on a small screen.
When we typically think about mobile email, we think about making a design work within the constraints of a device, but we rarely think about these other factors – probably for good reason – because we can’t really predict them to any accuracy. It’s just as likely that our user could be sitting at home on a sofa, using a mobile because it was the closest device to hand. But nevertheless, the user’s environment has a massive effect on their experience of our campaigns.
So what can we do? We can certainly try to tidy up attribution so we view success at a user level, not at a channel one. We can make our emails communicate a message well on as many devices and platforms as we can. We could stop being ruled by stats and think more about doing the right thing intuitively. But we can’t go on thinking an open or a click means we’ve done our job.
Oh, and does a tablet count as a mobile device? It doesn’t matter.
Great discussion on email analytics. I like opens and clicks. I like engagement as defined by Litmus.com too. They do tell me something, but those are not my success metrics – they are journey metrics. When I worked at a bank and was trying to drive increased debit card usage, my success measure was whether we successfully changed customer behavior. Whether email was the only channel or just one of many communication channels used, the success measure was simple: did the marketing effort increase debit card usage? The marketing analysis included segments that saw one channel, two channels, three channels, and no advertising. By looking at the data this way, I was able to see how just one channel improved usage over doing nothing. Similarly, I could see how adding an additional channel improved the effectiveness of driving usage. When performing this analysis I was able to measure the debit card usage of the customers that were communicated with.
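The segment comparison the commenter describes – measuring the change in usage for each channel mix against a no-advertising control group – can be sketched like this. The segment names mirror the comment; all the numbers are invented for illustration and the function is a hypothetical helper, not anything from a real reporting tool.

```python
# Illustrative sketch of a segment lift analysis: compare each segment's
# change in average usage against the no-advertising control, so each
# channel mix is credited only with its incremental effect.
# All figures below are made up for demonstration.

def lift_over_control(segments, control="no advertising"):
    """Return each segment's change in usage minus the control's change."""
    base = segments[control]["after"] - segments[control]["before"]
    report = {}
    for name, s in segments.items():
        change = s["after"] - s["before"]
        report[name] = change - base  # incremental lift vs. doing nothing
    return report

# avg debit card transactions per customer per month, before/after campaign
segments = {
    "no advertising": {"before": 10.0, "after": 10.2},
    "one channel":    {"before": 10.1, "after": 11.0},
    "two channels":   {"before": 9.9,  "after": 11.3},
    "three channels": {"before": 10.0, "after": 11.8},
}

print(lift_over_control(segments))
```

Structuring the analysis this way answers the success question directly (did behavior change, and by how much over doing nothing?) rather than leaning on opens and clicks as a proxy.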
Again, thanks for sharing your thoughts on email analytics.