As your team reviews results from 2017 and plans improvements for 2018, remember that interpreting data is at the core of any well-run business analysis.
You use historical data to predict your company’s future, create budgets, build marketing plans, and more. Sales close rates, customer acquisition costs, website bounce rates: the list goes on and on.
Of course, with all these numbers flying around you must be careful. Use the wrong data set or an incorrect formula and your planning is off track before you’ve even gotten underway.
Today I’d like to share the most common calculation error I see. It’s so common, in fact, that even the most seasoned marketers in our Marketing Roundtables peer groups fall into this trap from time to time.
It’s comparing two numbers… Sounds simple, right?
Let’s look at an actual email I received from a Roundtables member; to protect the innocent we’ll call her “Betty”. At our last meeting, she made a commitment to the group to increase email open rates. This email was updating us all on the status of her commitment:
Here’s an update on my second commitment: To increase my newsletter open rates through more compelling subject lines.
I ran an A/B split test on my last email newsletter. The A version was “the control” and included the same subject line I’ve been using forever. The B version featured a subject line that focused on one specific article inside the newsletter body.
Here are my results: Email A: 29.1% open rate; Email B: 32.7%
That’s a 3.6% change.
Looks like you all were right. Thanks!
See you in a few months!
So, do you see the mistake? Okay, in all honesty, calling it a mistake might be a bit harsh – the math is correct. The mistake is the formula she chose to use.
Betty is looking at the absolute difference between the two open rates. What’s absolute difference? It gives you the real number difference between the two test emails. (This is all just a fancy way of saying subtraction: A minus B.)
So, what’s wrong with that? Nothing, if you only want the raw difference in performance. But that’s not telling you the whole story. The biggest problem with this method is that it doesn’t scale: a 3.6-point gain on a 29.1% baseline is a far bigger achievement than the same 3.6 points added to a 90% baseline, yet absolute difference treats them identically.
What she should be focused on is the relative difference between the test results. Also known as percentage change, the relative difference between two numbers lets you see the scale impact of your test.
The percentage change formula is [(B – A) / A] × 100.
So, if we plug in Betty’s numbers we get [(32.7 – 29.1) / 29.1] × 100 = 12.4%.
This means that by changing her subject line, Betty increased her email open rates by 12.4%.
That’s a significant lift in performance.
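If you’d rather let code do the arithmetic, here’s a minimal Python sketch of both calculations from this article. The function names are my own labels for the two formulas, not anything Betty used:

```python
def absolute_difference(a, b):
    """Raw point difference between two rates (simple subtraction: B minus A)."""
    return b - a

def percentage_change(a, b):
    """Relative difference: how much B grew (or shrank) versus the baseline A."""
    return (b - a) / a * 100

# Betty's A/B test results, in percent
control = 29.1   # email A: the long-standing subject line
variant = 32.7   # email B: subject line focused on one article

print(f"Absolute difference: {absolute_difference(control, variant):.1f} points")
print(f"Percentage change:   {percentage_change(control, variant):.1f}%")
```

Running it prints an absolute difference of 3.6 points but a percentage change of 12.4%, the same lift we worked out above by hand.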
Still confused? Does it sound like fuzzy math?
Maybe this statement will clear it up for you: for every email opened using the old subject line, 12.4% more emails are opened using the new subject line.
In short, relative difference works far beyond marketing, letting you analyze performance no matter which department you are evaluating.
You will always fill in these blanks:
For every [action/behavior] in method A, there is [percentage more/less] of that [action/behavior] happening in method B.
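As a quick illustration, here’s a small Python helper (my own hypothetical function, not from the article) that fills in that fill-in-the-blanks template from two test results:

```python
def describe_lift(action, a, b, method_a="method A", method_b="method B"):
    """Fill the article's template using the relative-difference formula."""
    change = (b - a) / a * 100          # percentage change versus baseline a
    direction = "more" if change >= 0 else "less"
    return (f"For every {action} in {method_a}, there is "
            f"{abs(change):.1f}% {direction} of that {action} "
            f"happening in {method_b}.")

print(describe_lift("email open", 29.1, 32.7, "email A", "email B"))
```

With Betty’s numbers, it produces the sentence: “For every email open in email A, there is 12.4% more of that email open happening in email B.”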