If you've been reading this newsletter for a while, you'll notice something different today.

We've renamed it. The DTC Newsletter is now Hard Margins, and alongside the rebrand, we're launching a podcast by the same name.



The newsletter isn't changing. Same framework, same frequency, same intention: the operational and strategic thinking that actually moves businesses.


What's changing is the name, and what it stands for. Hard Margins is about building businesses that are profitable by design, not by accident. That's been the thesis of every issue we've written. The name just makes it explicit.

On the podcast: Episode 1 is live today. We're also releasing 15 back-catalogue episodes covering the topics we've explored in this newsletter: entry pricing, cohort payback, return behavior, and identity resolution.

To the 10,000 founders, CEOs, marketers, and operators who've been reading, thank you. If you've found this newsletter useful, the podcast is the same idea in a different format. Subscribe wherever you listen, and if you enjoy it, leave us a review.

Now, This Week’s SOTW:

⏱ 6-minute read

Most brands connect their store to Meta, watch purchase events start flowing, and consider the signal problem solved. What they've actually done is hand the platform a feed of undifferentiated purchase data: good orders, bad orders, everything in between, with no instruction to treat any of it differently.


The Conversions API was designed to fix signal loss, and for most teams it does. The account stabilizes, ROAS becomes more consistent, and purchase volume looks acceptable. But if you aren't using your Conversions API to differentiate signal quality, the underlying problem remains unaddressed. Discounted orders, low-margin customers, buyers who converted once and never returned: the platform optimizes toward all of it, because nothing in the signal tells it to treat any of it differently.

The Problem With One Event

Most teams are asking a single conversion event to do two distinct jobs, and most don't realize those jobs have different requirements.


  1. Optimization: This event is used to identify where to surface your ad and how much to bid. It needs to be stable. Change it and the algorithm essentially starts over, relearning from scratch.


  2. Evaluation: This is the signal that tells you whether the customers you're actually acquiring are worth having. It isn't about volume; it's about whether the right people are coming through.


These are not the same job. A single purchase event used for both gives you an account that scales efficiently toward whatever it has been learning from. Without a filter, that's the full distribution of your orders, including the ones you wouldn't choose to repeat and the cohorts that will prove destructive to margin over time.

ACTION ITEMS
5 things to do tomorrow
Before you touch your optimization event.
 
01
Keep Purchase as your optimization event.
It carries delivery history and accumulated signal. What the account needs isn't a different optimization event — it's a second one running alongside the first.
02
Build a filtered Conversions API event for measurement.
Inside RetentionX, create a second event filtered to the purchase quality you want more of: new customers only, above a defined CM1 threshold, excluding heavily discounted orders. Only for measurement, for now.
03
Use both signals to diagnose cohort quality by campaign.
Which campaigns produce high purchase volume but few filtered-event conversions? Which produce a disproportionate share of filtered-event conversions regardless of ROAS? The gap between those two is where the account's actual problems are.
04
Reallocate before you restructure.
Move spend toward campaigns where cohort quality is strongest — without touching account architecture or the optimization event.
05
Only then evaluate the filtered event as an optimization signal.
Once it has sufficient volume and stability, you can decide whether it should govern bidding. Measure first, optimize second.
BOOK YOUR AUDIT  →

The Broader Opportunity

When you send a second filtered event to Meta, the breakdown surfaces directly in Meta's campaign reporting: qualified purchase volume sitting alongside standard purchase volume at the campaign and ad set level. That comparison is where the analysis becomes actionable: which campaigns are winning on volume, which are winning on customer quality, and where those two things diverge.
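The volume-versus-quality comparison can be sketched in a few lines. The campaign names and numbers below are invented purely for illustration; the point is ranking campaigns by the share of purchases that pass the filter, not by raw volume.

```python
# Hypothetical campaign rows: standard vs. filtered (qualified) conversions.
campaigns = [
    {"name": "Prospecting A", "purchases": 500, "qualified": 60},
    {"name": "Prospecting B", "purchases": 220, "qualified": 110},
    {"name": "Retargeting",   "purchases": 340, "qualified": 40},
]

# Quality rate: what share of a campaign's purchases passed the filter.
for c in campaigns:
    c["quality_rate"] = c["qualified"] / c["purchases"]

# Rank by customer quality, not raw purchase volume.
ranked = sorted(campaigns, key=lambda c: c["quality_rate"], reverse=True)
for c in ranked:
    print(f'{c["name"]}: {c["quality_rate"]:.0%} of purchases qualified')
```

In this invented example, the highest-volume campaign ranks last on quality; that divergence is the gap the section describes.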

Most teams treat the Conversions API as infrastructure: a way to recover signal that was lost when browser tracking degraded.

That framing is too narrow.

The more consequential use is defining what the platform should be learning from. When you separate the event you optimize on from the event you evaluate against, the account becomes easier to read.

Campaigns that look strong on ROAS but weak on customer quality become visible. Campaigns producing durable customers, regardless of cost, get the attention they warrant.


That distinction is where most paid accounts have the most room to improve.

If you want to see how this would look in your own setup — reply SIGNAL and I'll walk you through how we'd structure it.


-Alex