The Setup
I've been running Meta ads for an e-commerce client for about a month. And honestly? The daily routine of checking dashboards, pausing underperformers, and reallocating budget was eating two hours a day I didn't have.
So I decided to run an experiment: give Ari the keys to the ad account and see what happens.
Not fully autonomous — I'm not insane. I wrote clear rules, set guardrails, and checked in once a day. But the actual decision-making? That was Ari's job for seven days.
Here's exactly how it went.
The Rules I Set
Before handing anything over, I defined the operating parameters. This is the part most people skip when they talk about "AI automation" — the boring constraint-setting that makes it work.
The rules were simple:
- Kill anything with a CPC above $3.50 after $20 spend. If it can't get cheap clicks by $20, it's not going to magically improve at $50.
- Pause any ad with zero conversions after $25 spend. Impressions without conversions are just expensive brand awareness I didn't ask for.
- Never increase daily budget by more than 20% in one move. Meta's algorithm freaks out when you double budgets overnight.
- Always keep at least 3 ads active. Don't let the algorithm optimize itself into a single ad — that's how you kill a campaign.
- Log every decision with reasoning. I wanted to review why it made each call, not just what it did.
I had Ari build two scripts to enforce this: pause_losers.py for the kill decisions, and a budget monitoring script that flagged scaling opportunities.
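To make the kill rules concrete, here's a minimal sketch of the pause_losers.py logic. The actual ad stats would come from the Meta Marketing API; the `ads` list of dicts below is a hypothetical stand-in for that response, and the field names (`spend`, `clicks`, `conversions`) are illustrative, not Meta's real schema.

```python
# pause_losers.py -- minimal sketch of the kill rules.
# `ads` is a hypothetical list of dicts standing in for Meta API stats.

MAX_CPC = 3.50          # rule 1: kill if CPC above this after MIN_SPEND_CPC
MIN_SPEND_CPC = 20.0
MIN_SPEND_CONV = 25.0   # rule 2: kill if zero conversions after this spend
MIN_ACTIVE = 3          # rule 4: never drop below this many live ads

def decide(ads):
    """Return (ids_to_pause, decision_log), never breaching the MIN_ACTIVE floor."""
    to_pause, log = [], []
    active = [a for a in ads if a["status"] == "ACTIVE"]
    for ad in active:
        # Zero clicks with real spend counts as infinitely expensive clicks.
        cpc = ad["spend"] / ad["clicks"] if ad["clicks"] else float("inf")
        reason = None
        if ad["spend"] >= MIN_SPEND_CPC and cpc > MAX_CPC:
            reason = f"CPC ${cpc:.2f} > ${MAX_CPC} after ${ad['spend']:.0f} spend"
        elif ad["spend"] >= MIN_SPEND_CONV and ad["conversions"] == 0:
            reason = f"0 conversions after ${ad['spend']:.0f} spend"
        # Only pause if at least MIN_ACTIVE ads would remain afterward.
        if reason and len(active) - len(to_pause) > MIN_ACTIVE:
            to_pause.append(ad["id"])
            log.append((ad["id"], reason))  # rule 5: log reasoning, not just the action
    return to_pause, log
```

Note the floor check: once pausing another ad would leave fewer than three running, the script stops killing even if more ads qualify, which enforces rule 4 without any special-casing.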
Day 1-2: The Culling
Ari's first move was aggressive — and correct.
Within the first 24 hours, it flagged six ads for pausing. Six out of about fifteen active creatives. The reasoning was sound: three had CPCs above $4 with no conversions, two had decent clicks but zero add-to-carts after $30+ spend, and one was a creative variant that was basically cannibalizing a better-performing version.
I'll be honest — I would have kept at least two of those running for another day or two. "Maybe they just need more data" is the excuse every media buyer tells themselves while burning money. Ari didn't have that emotional attachment. It saw the numbers, applied the rules, and killed them.
Day 1-2 result: Ad spend dropped from ~$85/day to ~$55/day. But the remaining ads were the proven performers.
Day 3-4: The Patience Phase
Here's where it got interesting. After the initial culling, Ari... didn't do much. It ran the analysis daily, confirmed the remaining ads were within parameters, and essentially said "no changes needed."
This is actually the hardest part of ad management, and the one humans screw up constantly. We feel like we need to do something: check the dashboard, tweak a headline, adjust targeting. That twitchy-finger optimization kills campaigns by not letting the algorithm learn.
Ari doesn't have twitchy fingers. The numbers said hold, so it held.
Day 3-4 result: CPC dropped from $2.80 average to $2.15 as Meta's algorithm optimized delivery to the surviving ads. Cost per conversion improved by about 30%.
Day 5: The Scaling Call
On day five, one ad started outperforming everything else. A simple product-in-use video — nothing fancy, no studio production — was pulling a $1.40 CPC and a 3.2% conversion rate. Ari flagged it for a 20% budget increase, per the rules.
It also identified a pattern I'd missed: the top three performing ads all featured the product being used casually, almost as an afterthought. The "polished" studio-style creatives were consistently underperforming. Ari noted this in its log and recommended future creative direction lean into the casual, UGC-style aesthetic.
That's the kind of insight that takes a human media buyer weeks to notice because we're too busy managing day-to-day fires.
Day 5 result: Budget shifted to the winner. Total daily spend stayed around $55 but concentrated on higher-performing creative.
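The scaling rule itself is trivially small code, but the cap is the whole point: no matter how good the winner looks, the step toward a target budget never exceeds 20%. A sketch, with an assumed helper name:

```python
MAX_STEP = 0.20  # rule 3: never raise a daily budget by more than 20% in one move

def next_budget(current: float, target: float) -> float:
    """Step a daily budget toward `target` without exceeding the 20% cap."""
    cap = current * (1 + MAX_STEP)
    return round(min(target, cap), 2)
```

So even if the analysis says a winning ad deserves double the budget, it gets there over several days of capped steps, which keeps Meta's delivery algorithm from resetting its learning.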
Day 6-7: The Results
By the end of the week, the numbers told a clear story:
- Total spend: ~$420 (down from the ~$595/week pace before)
- Conversions: Roughly the same volume as the previous week
- Cost per conversion: Down ~28%
- ROAS: Improved from ~1.2x to ~1.8x
The AI didn't find some magical growth hack. It did something more valuable: it removed waste. It cut the ads that weren't working faster than I would have, and it didn't tinker with the ones that were.
What It Got Wrong
This isn't a puff piece, so let me be honest about the failures.
It was too aggressive on one kill. One of the ads Ari paused on Day 1 was a new creative that had only been running for 8 hours. The $20 spend threshold triggered, but in retrospect, 8 hours isn't enough time for Meta's algorithm to optimize delivery. I should have added a minimum runtime rule — something like "must run at least 24 hours regardless of spend."
It missed a creative fatigue signal. One of the "surviving" ads was showing declining click-through rates over the week — a classic sign of creative fatigue. But because the CPC and conversion rate were still within bounds, Ari didn't flag it. By day 7, it was starting to underperform. A more sophisticated monitoring script would catch the trend, not just the snapshot.
It couldn't make new creative. This is the big one. AI can optimize what exists, but when all your ads eventually fatigue, you need fresh creative. Ari could tell me what style was working (casual, UGC-style) but couldn't produce new ads. That's still a human job — or at least a different AI pipeline that we haven't built yet.
What I Actually Learned
1. Rules-based AI beats gut-feel humans for ad management. Not because AI is smarter. Because it doesn't get emotionally attached to creative it spent three hours making. It doesn't "give it one more day" when the numbers say kill it now.
2. The 80/20 of ad management is removal, not addition. Most of the value Ari created wasn't from brilliant scaling decisions. It was from cutting losers fast. That freed up budget for the winners to breathe.
3. Constraint-setting is the real skill. "Let AI manage your ads" sounds like you're handing over the wheel. The reality is that writing good rules is the management. I spent more time defining the operating parameters than Ari spent executing them. And that's exactly right.
4. Daily check-ins are still necessary. I checked in once a day and made one override in the whole week (I would have restarted that 8-hour ad). Full autonomy isn't the goal — augmented decision-making is.
Will I Keep Doing This?
Yes, but with upgrades.
I'm adding trend detection — flagging ads where CTR has declined more than 15% over three consecutive days, even if absolute numbers are still within bounds. I'm adding a minimum runtime floor so new creatives get a fair shot. And I'm building a creative brief generator that uses the performance data to suggest what kind of ad to make next.
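The two planned upgrades can be sketched in a few lines each. This is a rough cut of what I mean, assuming one CTR reading per day and that "declined more than 15% over three consecutive days" means three straight down days totaling a >15% drop:

```python
from datetime import datetime, timedelta

def fatigue_flag(ctr_history, drop=0.15, days=3):
    """Trend detection: True if CTR fell every day for `days` straight days
    and the cumulative decline exceeds `drop` (15%)."""
    if len(ctr_history) < days + 1:
        return False  # not enough history yet
    window = ctr_history[-(days + 1):]
    declining = all(b < a for a, b in zip(window, window[1:]))
    total_drop = (window[0] - window[-1]) / window[0]
    return declining and total_drop > drop

MIN_RUNTIME = timedelta(hours=24)

def eligible_for_kill(started_at, now):
    """Runtime floor: new creatives get at least 24 hours regardless of spend."""
    return now - started_at >= MIN_RUNTIME
```

The fatigue check deliberately looks at the trend, not the snapshot — exactly the signal the week-one script missed — while the runtime floor would have saved the 8-hour ad from its premature kill.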
The goal isn't to replace the media buyer. It's to replace the tedious parts of media buying — the dashboard checking, the manual pausing, the budget spreadsheet — so the human can focus on strategy and creative.
One week in, the experiment worked. Not perfectly. But profitably.
And I got two hours a day back.
---
Building this kind of AI-powered marketing automation is what Machine Earned is all about. We share real experiments, real numbers, and real failures every day. Subscribe to the newsletter so you don't miss the next one.