Reflections on ASC six months in.
When new features are introduced to performance marketers, they're often met with a groan. The trend over the last decade is that each new change reduces the control you, as a marketer, have in platform.
That can be frustrating. Especially when, as in the example of Performance Max, performance seems to be mixed. Facebook had to quietly u-turn on the permanent removal of adset budget optimisation (ABO), such was the discomfort of the advertising community at the time.
So when something like Advantage Shopping Campaigns (often shortened to A+ or ASC) was introduced, it was met by many with similar skepticism.
What are Advantage Shopping Campaigns? What is ASC?
ASC is a new campaign type that uses machine learning to distribute your ads on the platform. It launched in testing last year and has been slowly rolled out since. It's part of the Advantage+ suite, and isn't to be confused with Advantage+ creative or the renaming of CBO.
For those reading in detail, you'll notice “Preset settings” as core to its functionality. In many marketers' eyes, this is a red flag.
So what’s missing?
It removes bid strategy (no bid caps or cost caps), dynamic creative, placement optimisation, age targeting, and detailed audience targeting.
The big ones that have created a stir are:
For those who like to break out lookalikes and interest audiences, this will be one of the biggest changes.
For those who like to exclude various audiences, it will be a big change. You can set one existing-customers audience – at an account level – and amend the % of budget attached to it. But that's it. No website 30-day visitors or 50% video view engagers.
No cost caps or bid caps
The first one is something I was initially skeptical about, but have grown to worry about far less. After all, the majority of our accounts are heavily skewed broad over interest these days, and lookalikes are only really working in B2B. As for exclusions, we've tested removing exclusions incrementally across accounts and found it's better to run with as few as possible.
The second remains a problem. In the campaigns where they work, cost and bid caps only represent perhaps 20-40% of spend, but they are a stable bit of spend that we wouldn’t throw open to maximum volume.
These changes – and they are major – have caused many to not test ASC. That’s a mistake. Here’s why.
Six months of testing ASC
We ran our first ASC tests back in October 2022. The first three or four tests we ran hugely underperformed compared to the manual campaigns we were running otherwise.
We ran these on both app and sales campaigns; in both instances, they performed worse.
Fast forward to this year, and that’s all changed. Here’s the hero stat that makes all the difference:
33% reduction in CPA
This is averaged across all of our accounts where we've run ASC vs Manual tests. We've tested in a variety of ways, from strict A/B splits to before-and-after tests. We've run small/big budgets, low/high volumes of ads, and testing/hero setups, and we can confidently say ASC outperforms in almost every instance.
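The stat above is simple arithmetic on spend and conversions: CPA is spend divided by conversions, the reduction is measured per account, and the reductions are averaged. A minimal sketch, using made-up numbers for illustration (none of these figures come from our actual accounts):

```python
def cpa(spend, conversions):
    """Cost per acquisition: spend divided by conversions."""
    return spend / conversions

def cpa_reduction(manual, asc):
    """Fractional CPA reduction of the ASC arm vs the manual arm.

    Each arm is a (spend, conversions) pair. A positive result means
    the ASC acquired customers more cheaply.
    """
    return 1 - cpa(*asc) / cpa(*manual)

# Hypothetical accounts: (manual (spend, conversions), asc (spend, conversions)).
accounts = [
    ((10_000, 250), (30_000, 1_000)),  # ASC pushed 3x the spend at a lower CPA
    ((8_000, 160), (8_000, 200)),      # equal spend, more conversions
]

reductions = [cpa_reduction(manual, asc) for manual, asc in accounts]
avg = sum(reductions) / len(reductions)
print(f"average CPA reduction: {avg:.1%}")
```

Note the average is taken over per-account reductions rather than over pooled spend, so small accounts weigh the same as large ones; a spend-weighted average would be an equally defensible choice.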
In one example, we ran this structure for a month-long test:
A. Hero campaign – manual sales – max volume – very broad – CBO
B. ASC
Same ads in each campaign.
Throughout the month-long test, we were able to push 3x the volume through the ASC at a better CPA. The ASC was in the low-to-mid five figures of spend across the month, and the previous hero only four figures. CPA was 35% better too.
This has been reflected across other accounts too.
With one account, the ASC was running at a 40% better CPA, though on much lower spend. Three weeks ago, the Manual was at 4x the spend of the ASC. Today they're at parity, and the CPA is ~20% better.
Overall learnings, experiments and questions
As yet, I wouldn't call ASC our ‘default' structure: it's still something we test in all accounts
AOV and ROAS fluctuate. And in some cases, ROAS is lower in the ASCs than the Manuals, implying it's seeking out a higher volume of customers but at lower value
Most of our data comes from Hero campaign replacements
We have used ASCs for some creative testing. This isn’t an ideal use case for us at the moment, as we prefer to have more control over creative, but it finds true behaviour faster
The volume of creatives doesn’t seem to have too much impact. Initially it seemed ASC really needed 10+ creatives, but we’ve tested setups with 3 ads recently, and they still performed very well for us.
The CostCap/BidCap issue is a problem.
Not being able to optimise for value/ROAS is also a problem – especially with the AOV issue
Advantage Shopping Campaigns offer a much greater advantage than their app-campaign equivalents, where we've still seen mixed results.
We wouldn’t switch everything over to an ASC-only approach quite yet. But with some accounts it might not be long before we’re there.
What’s everyone else’s experience so far?