
The churn intervention engine

An AI retention system that picks per-user from ~18 playbooks, times messages to each user's natural rhythm, and distills winning strategies from outcomes — replacing calendar-based drip campaigns.

January 18, 2026 · 5 min read

Outcome

74% of churn, first 90 days

The exact window the engine was built to defend.

The work

Context

Every subscription business has the same painful realization at some point: most of your churn doesn't happen gradually over years. It happens in the first few months, to users who never quite formed the habit.

At Tumble, when we looked carefully at our retention curves, we found that roughly 74% of churn happens in the first 90 days. After that, users who stuck around mostly kept sticking around. But that first window was a cliff, and we were losing real money off the edge of it.

We had the usual tools. A welcome sequence. A win-back campaign. A few triggered emails based on gaps between orders. All of it ran on calendars — "send email X on day 7, email Y on day 14" — and all of it treated every new user roughly the same.

The challenge

A calendar-based retention system has two problems that compound each other.

The first is that it doesn't know anything about the user. A customer who orders every Tuesday shouldn't get the same reminder cadence as one who orders once a month. A user who placed one order and loved it is a different problem from one who placed one order and was disappointed. The calendar doesn't care.

The second is that it doesn't learn. If playbook A works better for some users and playbook B works better for others, a calendar-driven system has no way to figure that out and no way to act on it even if it did.

We wanted both: per-user decisions about which intervention to try, and a learning loop that got better at those decisions over time.

The approach

We stopped thinking about retention as a campaign and started thinking about it as a question the system answers on behalf of each user, on each tick: given what we know about this user, what's the single best thing to do for them right now — if anything?

The answer is one of roughly 18 playbooks. They range from subtle (a gentle reminder that matches the user's historical ordering rhythm) to aggressive (a bag credit, a win-back offer, a direct outreach from customer support). Each playbook has preconditions, a cost, and an expected effect size.

The system picks one playbook per user, per decision window. Sometimes the answer is "do nothing" — and "do nothing" is a real choice, not a default. When a user is clearly engaged and doesn't need intervention, sending them anything at all is a small negative.
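The selection step described above can be sketched in a few lines. This is a minimal illustration, not the production system: the `Playbook` shape and the scoring rule (expected lift minus cost, with zero as the value of doing nothing) are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Playbook:
    name: str
    precondition: Callable[[dict], bool]    # may this playbook run for this user?
    cost: float                             # credit value, messaging fatigue, etc.
    expected_lift: Callable[[dict], float]  # estimated retention lift for this user

def choose_playbook(user: dict, playbooks: list[Playbook]) -> Optional[Playbook]:
    """Pick the single best playbook for this user, or None ("do nothing").

    "Do nothing" wins by default: a playbook is only chosen if its expected
    lift exceeds its cost, i.e. if it beats a score of 0.0.
    """
    best, best_score = None, 0.0  # 0.0 = the value of doing nothing
    for pb in playbooks:
        if not pb.precondition(user):
            continue
        score = pb.expected_lift(user) - pb.cost
        if score > best_score:
            best, best_score = pb, score
    return best
```

Because the baseline score is zero rather than "whichever playbook is least bad," an engaged user for whom no playbook clears its own cost simply gets nothing.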

The timing is per-user too. If we can tell from history that a user tends to order on Sunday nights, we don't send a Monday morning reminder. We send it Sunday afternoon, when it actually matches their rhythm.
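The rhythm-matching idea can be sketched as follows. The specifics here are assumptions for illustration: a user's "usual slot" is taken to be their most common (weekday, hour) order time, and the message is scheduled a few hours ahead of it.

```python
from collections import Counter
from datetime import datetime, timedelta

def usual_order_slot(order_times: list[datetime]) -> tuple[int, int]:
    """Most common (weekday, hour) slot across a user's order history."""
    slots = Counter((t.weekday(), t.hour) for t in order_times)
    return slots.most_common(1)[0][0]

def send_time(order_times: list[datetime], now: datetime,
              lead_hours: int = 4) -> datetime:
    """Schedule the next message a few hours before the user's usual slot."""
    weekday, hour = usual_order_slot(order_times)
    # find the next occurrence of the user's usual slot
    days_ahead = (weekday - now.weekday()) % 7
    slot = (now + timedelta(days=days_ahead)).replace(
        hour=hour, minute=0, second=0, microsecond=0)
    if slot <= now:
        slot += timedelta(days=7)
    return slot - timedelta(hours=lead_hours)
```

For a user who orders Sunday evenings, this yields a Sunday-afternoon send rather than a Monday-morning one, which is exactly the behavior described above.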

What we built

A decision service that runs at the user level, integrated into our existing event stream and our existing messaging infrastructure. When a user crosses a decision threshold — a certain number of days since their last order, a drop in engagement, a support ticket resolved — the service evaluates them against all ~18 playbooks and picks the one with the highest expected lift.

The service writes its decisions into a table we can query. Every outreach the system sends is logged with the reasoning: which playbook, why this user, what we expected, what we're going to measure. That log was the single most important thing we built — it made the system legible to the people who had to trust it.
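A decision row in that log might look like the sketch below. The field names are illustrative assumptions; the point is the shape: every row records the trigger, the chosen playbook, the prediction, and a slot for the measured outcome.

```python
import json
from datetime import datetime, timezone

def log_decision(user_id: str, playbook: str,
                 expected_lift: float, trigger: str) -> str:
    """Serialize one decision row: who, what, why, and what we'll measure."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "trigger": trigger,              # e.g. "21 days since last order"
        "playbook": playbook,            # or "do_nothing"
        "expected_lift": expected_lift,  # what the system predicted
        "measured_outcome": None,        # filled in after the observation window
    }
    return json.dumps(record)
```

Keeping the prediction and the eventual outcome in the same row is what lets a human audit any single decision and lets the learning loop compare expectation against reality.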

The outcome

Playbooks

~18

Ranging from a soft nudge to a bag credit to a personal outreach.

Target window

90 days

Where 74% of our churn was happening before the engine was running.

The first 90-day retention curve stopped being a cliff. It didn't become flat — no retention system makes a bad product sticky — but it moved in the direction we needed it to move, consistently, across user cohorts. The team running retention shifted from writing copy on a schedule to tuning the system's parameters and watching the log to see what it was doing.

What's different now

A few things we'd tell anyone trying to build something similar:

  1. "Do nothing" is a playbook. Most retention systems we've looked at bias toward action. They send something because sending something is how campaigns get measured. Treating "do nothing" as a first-class option changes what the system produces.
  2. The log is the trust layer. If your team can't read the reasoning for why the system made a specific decision, they won't trust it, and if they don't trust it they'll turn it off the first time it does something surprising. Build the log before you build the policies.
  3. Per-user timing matters more than you'd guess. Matching a message to a user's existing rhythm was, by itself, a larger effect than most of the content differences between playbooks.

This was one of roughly a dozen agents we shipped at Tumble. If you're running a subscription or usage-based business and your retention is still driven by a calendar, let's talk — there's a good chance you have more leverage here than you think.


Get in touch

Let's see if there's a fit.

The first call is always free. Scott personally replies within one business day. No slides, no pitch — just a conversation about what you're trying to figure out.