Jeanne Jennings: 3 Tips for Making "Stolen" Email Marketing Ideas Work for your Program
Last week I was honored to be included on a list of ‘The 20 Best Email Marketers You Should Follow and Steal From’ published by GetResponse. A colleague asked if that last part, “Steal From,” bothered me at all.
You know, it doesn’t. We are all constantly ‘stealing’ from one another or from things we ourselves have done in the past (I often test learnings from one client with another client). Even most great “new” ideas are iterations of things that have already been done.
So here are some tips for stealing ideas from me, from your own past experiences or from someone else.
1. Always Use Scientific Method to Test What You Steal
Scientific Method (I’m a big fan) provides a great framework for determining whether what you’re stealing is really going to optimize your performance.
Just because something worked for one of my clients or for someone else doesn’t mean it will work for you.
One of the quickest ways to slow down or reverse your performance optimization efforts is to make material changes without testing them first. It doesn’t matter who you’re stealing from or how big a lift they saw, you should test it for yourself.
Scientific method assumes a control version (what you have used previously, or what you would have sent without the information you are ‘stealing’) versus a test version, which takes this new information (stolen or otherwise) into consideration.
As tempting as it is to forgo testing and just send the new version (after all, if you didn’t believe it would beat the control, why would you be testing it?), don’t. If you do, you’ll never really know whether it improved on the control, and should someone ask about the change later, you won’t have documentation to back up the fact that it was the right move.
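A fair control-versus-test comparison starts with a random split of your list, so neither group is biased toward more or less responsive subscribers. Here’s a minimal Python sketch of that split; the subscriber addresses, group sizes, and function name are all hypothetical:

```python
import random

def split_list(subscribers, test_fraction=0.5, seed=42):
    """Randomly split a subscriber list into control and test groups."""
    shuffled = subscribers[:]          # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)  # seeded so the split is reproducible
    cut = int(len(shuffled) * test_fraction)
    # Control gets the existing message; test gets the "stolen" idea.
    return shuffled[cut:], shuffled[:cut]

subscribers = [f"user{i}@example.com" for i in range(10000)]
control, test = split_list(subscribers)
print(len(control), len(test))  # prints: 5000 5000
```

Seeding the shuffle means you can document and reproduce exactly which subscribers were in each group when someone asks about the test later.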
2. Formulate a Hypothesis
Even if you’re stealing an idea you should be able to formulate a hypothesis on why you believe it will work with your list. Formulating a hypothesis is more difficult than just coming up with an idea of what to test. A hypothesis is defined as “a supposition or proposed explanation made based on limited evidence as a starting point for further investigation.”
“I’m going to test making the header green instead of red and see if it boosts response” is not a hypothesis.
This is a hypothesis:
“I’ve been doing some research on the psychology of colors. The header on our control email message is red. Our product is a financial services advisory publication, and in the financial world ‘in the red’ is a bad thing; it means losing money. I fear that we are subconsciously undermining our key message – subscribe to get profitable stock tips – by using red in our header. My hypothesis is that making the header green, which is the color of money, will better support our message and boost subscription rates and revenue.”
This is also a hypothesis:
“I read a case study about a financial advisory publication where they saw a boost in subscriptions and revenue when they changed the header from red (since ‘in the red’ is a bad thing in financial terms) to green (which is the color of money, a good thing in financial terms). My hypothesis is that this will boost our response too, since we are a stock brokerage and the same psychology should apply here when we are approaching potential clients. I’m going to steal this idea and test it.”
To be a hypothesis there needs to be an explanation of why you believe what you’re doing will boost performance. You’ve got a better chance of reaching your goal with a sound hypothesis than you do with a guess.
3. Confirm Statistical Significance
Once you have the results of your test, you need to determine what to do about the hypothesis. The biggest mistake I see marketers making here is not checking their results for statistical significance. Statistical significance tells you whether your test really beat your control – or whether it was a statistical tie, with the results being within the margin of error.
There’s a big, hairy formula you can use to calculate the values you need to compare to determine statistical significance. Or you can use a spreadsheet that does all the math for you – here’s a blog post with one you can download:
If your results aren’t statistically significant, then the control won. You might need to reimagine the test in another way, use larger sample sizes or do something else to retest your hypothesis. Or you might just decide to move on to another hypothesis you’ve developed.
So, feel free to steal ideas – from me, from other folks in the industry, from previous work that you’ve done. Just be smart about it – test it with your program before you implement it to be sure it’s really going to boost your performance.