How to measure retention marketing
And how, using this method, we learnt that Facebook was under-attributing remarketing efforts
How do you find out how much your email marketing makes you? Go and open up Klaviyo, or Mailchimp, or whatever your email client is, and check the revenue figure, right?
But how much of that revenue would have happened anyway?
This is at the crux of almost every marketing problem. If you hadn’t done the activity, would it have happened anyway?
In acquisition, this is a much more complex problem. But in retention marketing – i.e. where we’re marketing to an existing group of people for whom we hold first-party data – it should be easy.
How often have you seen a headline like
“Email drove £420,000 of revenue last month” and just accepted it?
Or “our reactivation direct mail reactivated 12% of churned customers profitably”
Or “our retention Facebook campaigns have a £9 CPA”.
The question that immediately springs to mind is ‘but how are you measuring that?’
Today, I’m going to show you how to prove the value of any retention marketing.
And this isn’t just about protecting the downside: in the examples I’ve included below, the marketing work we did was significantly better than the platform attribution reported. In one case, we used Facebook Ads to improve retention by 36% – with huge CM3 profit.
But first, let’s explore why you should never trust the revenue figures Klaviyo or Facebook or Google or your SMS client provide you for retention.
“Show me the incentives, I’ll show you the outcome” – Charlie Munger
Email clients – like all other marketing tools you pay for – are incentivised to tell you how much money they make you.
The logic will be familiar to any fans of last-click attribution: send an email out with a link to a product, and if someone clicks it and buys, we can attribute the sale to that email.
Once you’ve done that a few times, you want to get smarter. Next time, you run an A/B test.
In the first email, you have a subject line related to the sunny bank holiday, and the products are all from the spring collection. In the second, you lean into the new collab you’re launching. The first gets you £42k of revenue, the second gets you £48k of revenue.
The A/B test increased revenue by 14% or £6,000. Right?
But it totally ignores how many people were going to buy anyway.
It ignores the fact that, because it’s the first sunny weekend of the year, people wake up realising they need a wardrobe refresh. It also ignores that people woke up and read the coverage about your new collab in Sunday Times Style, or saw the post on your Instagram page. They were going to buy that new collab anyway.
It sounds like I’m picking on email. I’m not. Email is highly valuable, and I think most brands should be doing a lot more of it – but you need to measure it right.
What I’m talking about is any form of marketing where you hold the first-party customer data. And not just customer data: anyone in your database who is yet to purchase should still fall under this banner.
If you send your existing customers some direct mail, then it applies.
If you do reactivation work using SMS, then this applies.
If you remarket on Facebook to your existing email list, this applies.
Incrementality tests show you how much that activity really made you
Incrementality testing, as I covered in The Complete Startup Guide to Measurement, is the form of measurement that includes a ‘hold out’. A hold out is a part of your audience that you deliberately exclude from the activity.
Instead of sending an email to 100% of your mailing list, you send it to a segment – and then you can measure your control segment (no exposure to the activity) against your experiment segment (exposure).
Of the three methods of measurement (attribution, incrementality testing, and modelling), it is the one that gives you the most precise view of what a piece of activity did.
What most retention marketing tools do is show you an attributed lens. They measure who clicks and then goes on to buy.
Incrementality testing, by contrast, excludes part of an audience from activity.
How to run incrementality tests for retention
At a database level, you need to segment off part of your audience for exclusion. This is something most email tools don’t have enabled by default.
The simplest way to do this for one-off tests:
Download your entire list as a CSV
Open it in Sheets, and add a new column called cohort
Add “A” in the first row and “B” in the second, then repeat that alternating pattern down your entire customer list
You’ve now got a 50/50 split of your entire customer list. You can now use the ‘B’ cohort to run your experiment.
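If you’d rather have a formula do the split than drag an alternating pattern down, a minimal sketch – assuming your header is in row 1 and the cohort column starts at row 2:

=IF(RAND() < 0.5, "A", "B")

Drag it down the column, then copy the column and paste it back as values so the assignments don’t re-randomise every time the sheet recalculates. A random split also protects you from any accidental ordering in your export, such as a list sorted by signup date.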
50/50 isn’t necessary. You may be able to measure this with a smaller percentage hold-out, but the smaller the audience, the longer the test will need to run before you can read the incrementality.
With your cohorts in place, upload the relevant one to whatever tool you’re using – the SMS platform, the email client, a Facebook Custom Audience, or your direct mail fulfilment house.
How long it takes to get statistically significant data is a bit of a chicken-and-egg situation. To understand how big a cohort you need, you need a baseline conversion rate – but you don’t have a clean baseline yet, because you’ve been running ‘on for everyone’ activity. In my experience, allowing 4–8 weeks is likely enough if you’ve got an email list in the tens of thousands.
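If you’d rather sanity-check the cohort size than guess, the standard two-proportion sample size formula gives a rough floor. As a sketch – the 40% baseline and 45% target below are made-up numbers, so swap in your own – this estimates the people needed per cohort at 95% confidence and 80% power:

=((NORMSINV(0.975) + NORMSINV(0.8))^2 * (0.4*0.6 + 0.45*0.55)) / (0.45 - 0.4)^2

With those example rates it comes out at roughly 1,500 people per cohort – which is why a list in the tens of thousands gives you comfortable headroom, and why smaller expected uplifts need bigger cohorts or longer tests.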
Measuring results in Google Sheets
Prep your data
Download your sales data for the test period with the order date, the customer’s unique identifier (the same one used in your cohort list), and the revenue.
Count sales and sum revenue for your cohorts
Bring your cohort data into a new tab, then count orders and sum revenue for each customer. Assuming the customer’s unique identifier sits in A2 of this tab, and the orders tab has identifiers in column B and revenue in column C:

=COUNTIFS(orders!B:B, A2)
=SUMIFS(orders!C:C, orders!B:B, A2)

The first counts how many orders that customer placed; the second sums their revenue. Drag both down the full list.
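It’s also worth flagging whether each customer converted at all, since the conversion rate step below is about buyers rather than order volume. Assuming the order count you just calculated sits in column C:

=IF(C2 > 0, 1, 0)

Drag that down too; summing this column per cohort gives you the number of converters.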
Analyse
Build your analysis table. A formula sketch for each step follows the list.
Build a cohort count
Sum the number of sales per cohort
Calculate a conversion rate for each cohort
Calculate the uplift between the two
Sum the revenue per cohort
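In Sheets terms, a sketch of those steps – assuming a hypothetical layout where the per-customer tab from the last step is named cohorts (cohort label in column B, order count in C, revenue in D, converted flag in E), and the analysis tab lists the cohort names in A2 (control) and A3 (experiment):

Cohort count: =COUNTIF(cohorts!B:B, A2)
Sales: =SUMIF(cohorts!B:B, A2, cohorts!C:C)
Conversion rate: =SUMIF(cohorts!B:B, A2, cohorts!E:E) / B2
Uplift: =D3/D2 - 1
Revenue: =SUMIF(cohorts!B:B, A2, cohorts!D:D)

Fill rows 2 and 3 with the same formulas, landing in columns B (count), C (sales), D (conversion rate) and E (revenue); the uplift is a single cell rather than a column. An incremental revenue cell is then =E3 - E2*(B3/B2), which scales the control cohort’s revenue for any difference in cohort size.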
Which produces something like the following example:
In this example, the default conversion rate is 40%. During this test period, 40% of people purchased with no prompting.
The exposed cohort converted at 45%, and it looks like your activity drove £460 of revenue. In fact it only actually added £82 – the difference in revenue between the baseline ‘no exposure’ cohort and the cohort that was exposed.
More importantly, there was only a 13% uplift between your control and experiment. In this made-up example, the cohort size is not significant. But say it was: you now know the activity drove £82 of incremental revenue – not £460.
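Checking significance only takes a couple more cells. Using the hypothetical analysis layout sketched above (cohort sizes in B2:B3, conversion rates in D2:D3), a two-proportion z-test looks like this:

Pooled rate (in, say, G2): =(D2*B2 + D3*B3) / (B2 + B3)
z-score (G3): =(D3 - D2) / SQRT(G2*(1-G2)*(1/B2 + 1/B3))
p-value: =2*(1 - NORMSDIST(ABS(G3)))

A p-value below 0.05 is the conventional bar: under it, the uplift is unlikely to be noise; over it, keep the test running or use bigger cohorts.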
“I’m worried we’re sending too much email”
The other big concern I hear from brands is the worry that they’re sending too much email.
The great thing about incrementality testing in this way is that you can measure all sorts of metrics.
Worried that sending a daily email leads to email list churn? Run an incrementality test.
Then you can make an informed decision: ‘7 emails per week generates us X in incremental revenue at the cost of Y lost subscribers per week.’
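The mechanics are identical to the revenue version – you’re just counting unsubscribes instead of orders. A sketch, assuming you’ve matched your ESP’s unsubscribe export back to the hypothetical cohorts tab as a 1/0 flag in column F:

=COUNTIFS(cohorts!B:B, A2, cohorts!F:F, 1) / COUNTIF(cohorts!B:B, A2)

That gives each cohort’s unsubscribe rate over the test window, and the gap between the two cohorts is the churn cost of the extra sends.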
Retention marketing is not free. Measure it.
A lot of work gets done in marketing because it’s assumed that’s the way it has to be. But incrementality testing shows what the actual value of that thing is.
No retention marketing is free.
You are choosing to spend time (and therefore money) focused on it. That time and work might well be better focused elsewhere.
Incrementality testing + Facebook remarketing = CM3 heaven
Over the last year, we’ve run multiple tests with clients to understand the value Facebook adds to retention marketing.
For most brands, Facebook marketing should be focused solely on acquisition. After all, you don’t want to pay twice to acquire the same customer – so we go to great lengths to exclude existing customers.
But there have been two scenarios where we’ve wanted to use Facebook in our retention mix:
Looking for a revenue boost at a key period
Looking for a retention boost
As an agency, we’re huge incrementality fans. We don’t like to spend a single pound without knowing it’s doing something.
So we used the above method. We segmented those audiences, added everyone to the relevant cohorts, and then exposed one cohort to ads and the other not to ads.
In all instances, the brands were running email marketing activity alongside this, so we knew we needed to see a substantial uplift to make the extra pounds spent worthwhile.
In all three examples, the activity proved:
To have a statistically significant uplift in repeat purchase / retention
To produce enough contribution margin 2 (CM2) that the marketing spend was easily covered
To deliver a large enough CM3 return that the activity became ‘always on’
In each example, had we looked only at the Facebook data, we would have stopped the activity. Facebook under-attributed the results.
Stop assuming, start measuring
Growth is not just a startup name for marketing. Growth is, above all else, a mindset and a process – one built on prioritisation and focus. It is the understanding that for the business to succeed, everything should be aimed at the things that matter most to that business.
That means saying ‘no’ to a large amount of work, and always considering what is the best use of your time.
With acquisition marketing, measurement is hard. Attribution is murky. Incrementality testing is expensive and requires certain thresholds of data. Modelling is slow and expensive.
But with retention marketing, incrementality testing is easy.
Take a look at all the activity you run to your existing email list and check when you last proved the value of doing it. Maybe it’s time to go run some new experiments on your ‘always on’ activity.