The Cockpit Rule: a framework for knowing when to let AI fly
The biggest mistake I see people make with AI isn't using it wrong; it's applying the same level of involvement to every task. Some things AI should handle alone. Some need you in the loop. Some you should never delegate at all. The hard part is telling the difference, and that's what this framework is for.
This is the framework I use. It's based on ideas from researchers like Ethan Mollick, simplified into something I can actually apply every day as a PM.
Think of yourself as a pilot
When a plane is cruising at 35,000 feet, the pilot engages autopilot. During takeoff and landing, they collaborate with the systems — hands on, watching carefully. In an emergency, they take full manual control. The plane doesn't change. The situation does. And the smart pilot adjusts accordingly.
AI is the same. The tool doesn't change. Your level of involvement should.
The three modes
Autopilot: you delegate and trust
Clear instructions. Minimal review. AI handles it end to end.
Collaboration: you iterate together
Multiple rounds. You steer, AI drafts. Neither of you could get there alone.
Manual: you do it yourself
AI can't do it well, or the cost of getting it wrong is too high.
The mistake most people make: they use Collaboration mode for everything. It feels productive — you're "working with AI" — but it's often the wrong gear. Tasks that AI can handle fully end up taking twice as long. Tasks that need human judgment get diluted by AI suggestions that sound right but miss the context.
How to pick the right mode
Before you start a task, run through three quick questions:
1. How long would this take me manually?
If it's fast anyway, the overhead of prompting and reviewing might not be worth it. If it's slow and tedious, AI almost always helps.
2. How likely is AI to get it right?
Structured, factual, well-defined tasks: high probability. Nuanced judgment calls, political context, relationship dynamics: low probability. Be honest here. Most AI-related mistakes happen because people overestimate this probability.
3. How easy is it for me to check the output?
If you can verify the result in 30 seconds, delegate more freely. If checking the output requires almost as much work as doing the task — or if the error would be invisible until it's too late — stay in the loop.
The sweet spot for Autopilot: the task takes you a long time, AI is highly capable at it, and you can easily tell if the output is wrong. That's when you hand it over and move on.
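The three questions can be sketched as a tiny decision function. The thresholds and the function name are illustrative assumptions of mine, not part of the framework; the point is that the inputs are the three questions above and the output is a mode.

```python
def pick_mode(minutes_manual, p_ai_correct, easy_to_verify):
    """Illustrative sketch of the three-question check.

    minutes_manual: rough time to do the task by hand (assumed threshold: 30)
    p_ai_correct:   honest estimate (0-1) that AI gets it right
    easy_to_verify: can you check the output in ~30 seconds?
    """
    # Sweet spot for Autopilot: slow by hand, AI is capable, easy to verify.
    if minutes_manual >= 30 and p_ai_correct >= 0.9 and easy_to_verify:
        return "Autopilot"
    # Low odds of a correct answer, or errors you can't easily spot: do it yourself.
    if p_ai_correct < 0.5 or (not easy_to_verify and p_ai_correct < 0.9):
        return "Manual"
    # Everything in between: work with the AI, round by round.
    return "Collaboration"


print(pick_mode(60, 0.95, True))    # long task, capable AI, quick check
print(pick_mode(10, 0.3, False))    # judgment call, hard to verify
print(pick_mode(45, 0.7, True))     # capable-ish AI, worth iterating
```

The exact cutoffs matter less than the shape: one question filters into Autopilot, one filters into Manual, and Collaboration is the honest default for everything in between.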
A bonus framework for data tasks
When I'm staring at a spreadsheet or a messy dataset with no clear starting point, I use a simple three-step approach I call DIG:
D — Describe
Ask AI to describe what's in the data. Column names, data quality, obvious gaps. This surfaces assumptions you didn't know you were making.
I — Introspect
Ask AI what questions it could answer with this data. You'll often discover angles you hadn't thought of — or realize the data doesn't actually support what you were planning to claim.
G — Goal
Give AI a mission briefing. What decision does this analysis need to support? What does your audience care about? This step turns a generic summary into something you can actually use.
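As a sketch, the three DIG steps can be written as prompt templates. The wording here is my own illustration, not canonical; plug the strings into whatever chat interface or API you use, one step at a time.

```python
def dig_prompts(dataset_description, decision, audience):
    """Illustrative DIG prompt templates (wording is an assumption, not canonical)."""
    # D - Describe: surface what's actually in the data before analyzing it.
    describe = (
        "Describe this dataset: column names, data quality issues, and "
        f"obvious gaps.\n\n{dataset_description}"
    )
    # I - Introspect: find angles, and claims the data can't support.
    introspect = (
        "Given the dataset above, list the questions it could answer, "
        "and flag any it cannot actually support."
    )
    # G - Goal: the mission briefing that turns a summary into something usable.
    goal = (
        f"Mission briefing: this analysis must support the decision "
        f"'{decision}' for an audience that cares about {audience}. "
        "Produce the analysis with that goal in mind."
    )
    return [describe, introspect, goal]


for prompt in dig_prompts(
    "orders.csv: order_id, region, revenue, date",
    "which region to prioritize next quarter",
    "revenue growth",
):
    print(prompt, end="\n\n")
```

Running the steps in order matters: Describe and Introspect are cheap sanity checks that often change what you ask for in the Goal step.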
Why this matters now
A year ago, knowing how to prompt AI was a differentiator. Now it's the baseline. The people who stand out aren't the ones who use AI more — they're the ones who know when not to use it, and when to trust it fully without second-guessing every output.
That judgment — knowing your mode, knowing when to use AI tools and when not to — is the actual skill. The tools are just tools.