We read a lot of AI ROI decks. Most of them are wrong in the same three ways — and if you bring one to a CFO who's seen a few business cases, they'll know immediately.
Mistake 1: Counting labor savings as cash.
"This saves 40 hours a week of reviewer time" does not equal "this saves $80k a year". Labor savings are only cash savings if you actually remove the role, or if the freed time produces measurable additional revenue. Otherwise it's capacity, which is real but has to be claimed in a different column.
A defensible case says: "40 hours of reviewer capacity redirected to X, which generates Y." If you can't fill in X and Y, call it capacity, not savings.
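The column discipline is easy to keep honest in a few lines of arithmetic. Every figure below is an assumption for illustration (a $40 loaded hourly rate, 50 working weeks, chosen so the gross matches the $80k above); the structure is the point, not the numbers:

```python
# Hypothetical inputs: loaded rate and weeks are assumptions, not
# figures from any real deployment.
HOURS_SAVED_PER_WEEK = 40
LOADED_RATE = 40          # $/hour, fully loaded (assumed)
WEEKS_PER_YEAR = 50

gross_labor_value = HOURS_SAVED_PER_WEEK * LOADED_RATE * WEEKS_PER_YEAR

# Cash savings exist only to the extent a role is actually removed.
headcount_removed = 0.0   # fraction of a role eliminated (0.0 = none)
cash_savings = gross_labor_value * headcount_removed

# Everything else is capacity: real, but it needs a named target (X)
# and a measurable outcome (Y) before it belongs in a benefits column.
capacity_value = gross_labor_value - cash_savings

print(f"gross labor value: ${gross_labor_value:,}")
print(f"cash savings:      ${cash_savings:,.0f}")
print(f"capacity value:    ${capacity_value:,.0f}")
```

Until `headcount_removed` is nonzero, or the capacity line has its X and Y filled in, the whole $80k sits in the capacity column, not the savings column.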
Mistake 2: Ignoring the cost of review.
Every production AI system in a regulated environment has human review. The cost of that review — the time, the tooling, the training — is usually 20–40% of the gross productivity gain. Most ROI decks leave it out. Your CFO will not.
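The adjustment itself is one line of arithmetic. The 25% review fraction below is an assumption from inside that 20–40% band, applied to a hypothetical $100k gross gain:

```python
gross_gain = 100_000      # hypothetical gross annual productivity gain ($)
review_fraction = 0.25    # review time, tooling, training (assumed share)

# Net gain is what survives after the cost of human review.
net_gain = gross_gain * (1 - review_fraction)
print(f"net gain after review: ${net_gain:,.0f}")
```

Run it at both ends of the band (20% and 40%) before any number goes on a slide; the honest answer is a range, not a point.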
Mistake 3: Front-loading the payback.
The first three months of a new AI system are slower than the baseline, not faster. Reviewers are learning the tool, the eval set is getting refined, the prompts are being tuned. Real payback starts somewhere in months 4–6. An ROI deck that shows month-1 breakeven is lying to you, and probably to itself.
What a defensible ROI case looks like
Three numbers: baseline cost per unit of work, projected cost per unit of work net of review, and the capacity redirect (with a named target).
One curve: month-by-month, showing the slower ramp and the eventual crossover.
One sensitivity table: what happens if the model gets 10% less accurate, if review takes 20% longer, if volume grows 30%.
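A toy version of that case fits in a few lines of Python. Every input below is a made-up assumption (a $10 baseline and $7 net cost per unit, 5,000 units a month, $60k of rollout cost, a five-month ramp); the shape of the curve is the point, not the figures:

```python
def breakeven_month(baseline=10.0, net=7.0, volume=5_000,
                    upfront=60_000, months=24):
    """First month where cumulative net savings turn positive, or None.

    All defaults are hypothetical: $10 baseline cost per unit, $7
    projected cost net of review, 5,000 units/month, $60k rollout cost.
    """
    # Ramp: the first two months run at or below baseline; the full
    # gain only phases in from month 6 onward.
    ramp = [-0.10, 0.0, 0.25, 0.50, 0.75]
    total = -upfront
    for m in range(months):
        share = ramp[m] if m < len(ramp) else 1.0
        total += (baseline - net) * volume * share
        if total >= 0:
            return m + 1
    return None

# A one-line sensitivity table: base case, slower review, higher volume.
base = breakeven_month()
slower_review = breakeven_month(net=7.6)     # review eats more of the gain
more_volume = breakeven_month(volume=6_500)  # volume grows 30%
print(f"breakeven: base={base}, slower review={slower_review}, "
      f"more volume={more_volume}")
```

With these inputs the base case crosses over in month 8, slower review pushes it to month 9, and higher volume pulls it to month 7: exactly the spread a sensitivity table should make visible. Delete the ramp and the same inputs break even months earlier, which is the month-1 fantasy the deck should not be showing.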
CFOs don't actually need the case to be big. They need it to be defensible. A 25% productivity gain with a clear-eyed model beats a 60% gain with a fantasy one every time.