FOMO-driven AI adoption sucks

TL;DR: Stop treating AI adoption like a corporate email blast. Test it yourself first, let small teams experiment with metrics, then scale what works. Your FOMO-driven mandate is why half (or more) of your team is still not leveraging AI effectively.
The brutal reality of AI tool mandates
Here's what I'm hearing from developers I speak to:
CEO reads about the latest AI coding assistant over breakfast. By lunch, there's a company-wide Slack message: "Everyone must start using [insert tool] immediately!"
Six weeks later? Mixed results at best. Complete adoption failure at worst.
The uncomfortable truth: Telling developers to use an AI tool because it's trendy works about as well as telling them to switch to a new IDE because Gartner said so. (Spoiler: it doesn't.)
FOMO around using AI specifically for software development is extremely prevalent. Everyone's talking about it. Lots of companies are blaming their reductions in force (aka firing dev teams) on advances in AI. (I think it actually has more to do with the R&D tax credit rule changes, but that's a different topic.)
The multiplication principle of tool adoption
Every tool has its sweet spot. Try to make everything look like a nail because you have a hammer, and you'll end up with broken fingers and bent nails.
But here's the kicker. When you find the right tool-developer-task combination, the results are staggering. I've seen developers go from committing hundreds of lines per week to tens of thousands. Not a typo. Not an exaggeration. Actual 10-100x multipliers in output.
Remember, output != outcome, but still, it shows SOMETHING is happening.
The trusted advisor reality check
Some people recommend that software development managers should know how to code (and regularly do it). Agree or disagree, the same principle applies here, kind of.
What I've found works: Someone senior should have tested it. Doesn't have to be the CEO or CTO, but someone trusted should have actually used the tool on real code for real problems. Not watched a YouTube demo. Not read the marketing copy. Actually used it, for real work. For at least a week.
The framework I've found to work
Forget the big bang adoption. Here's what I've seen work in practice:
Step 1: Small team experimentation
Give a small team or individual:
- A budget
- Freedom to choose their tool (perhaps with strong guidance/veto by legal)
- A clear evaluation framework (outcome, not just output)
- 2-4 weeks to experiment
Let THEM come back to you with a proposal and a framework for evaluating success. Tweak it if need be, then run with it.
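To make "a clear evaluation framework" concrete, here's a minimal sketch of what a team might bring back to you. Every name, metric, and threshold in it is an illustrative assumption for you to react to, not a standard:

```python
# Minimal sketch of an AI-tool trial framework. All names and
# thresholds here are illustrative assumptions -- adapt to your team.
from dataclasses import dataclass, field

@dataclass
class AiToolTrial:
    tool: str
    team: str
    weeks: int = 4                    # the 2-4 week experiment window
    budget_usd: float = 500.0         # spend cap agreed up front

    # Output signals: cheap to collect, easy to game.
    output: dict = field(default_factory=lambda: {
        "credits_consumed": 0,        # zero credits = not using the tool
        "loc_delta_pct": 0.0,         # terrible metric, telling delta
    })
    # Outcome signals: what actually matters, but lagging.
    outcome: dict = field(default_factory=lambda: {
        "features_shipped": 0,
        "bugs_escaped": 0,
        "team_would_keep_it": False,  # the blunt question at the end
    })

    def worth_scaling(self) -> bool:
        """Crude go/no-go: outcomes moved AND the team wants to keep it."""
        return (self.outcome["features_shipped"] > 0
                and self.outcome["team_would_keep_it"])
```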
Step 2: Measure what matters
Usage metrics that have worked for me when gauging adoption of AI tools:
- Token/credit consumption (for usage-based tools)—if they're not using credits, they're not using the tool
- Lines of code committed (yes, it's a terrible metric, but the delta can be telling; see the sketch after this list)
- Actual outcomes—faster feature delivery, fewer bugs, happier customers
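For the LOC delta specifically, here's a rough Python take on the idea (the bash gist linked in the pairing section below does the same job). It assumes git is on your PATH and that you run it from inside the repo; filter out lockfiles, vendored code, and generated files before trusting the numbers:

```python
# Rough sketch: sum added+deleted lines per author over a window,
# using `git log --numstat`. Assumes git on PATH, run inside the repo.
import subprocess
from collections import defaultdict

def loc_by_author(since: str = "4 weeks ago") -> dict[str, int]:
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--numstat", "--format=@%aN"],
        capture_output=True, text=True, check=True,
    ).stdout
    totals: dict[str, int] = defaultdict(int)
    author = "unknown"
    for line in log.splitlines():
        if line.startswith("@"):      # '@' marks the author lines
            author = line[1:]
        elif line.strip():            # numstat: "added<TAB>deleted<TAB>path"
            added, deleted, _path = line.split("\t", 2)
            if added != "-":          # '-' means a binary file
                totals[author] += int(added) + int(deleted)
    return dict(totals)

if __name__ == "__main__":
    for author, loc in sorted(loc_by_author().items(), key=lambda x: -x[1]):
        print(f"{loc:>8}  {author}")
```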
Here's what Satya Nadella said about AI needing to deliver real economic impact:
"When we say: 'Oh, this is like the industrial revolution,' let's have that industrial revolution type of growth. That means to me, 10 percent, seven percent for the developed world. Inflation adjusted, growing at five percent, that's the real marker."
Outcomes are what actually matter, but they tend to be lagging indicators. Which is why we also look at output and adoption.
Step 3: The champion model
Found someone who's crushing it with AI tools? Make them visible. Show other teams:
- What they're doing differently
- Their actual workflow (not the idealised version)
- Their real metrics and improvements
Then slowly expand to 1-2 more teams. Rinse. Repeat.
The pairing hack to accelerate adoption
Here's a simple but very effective technique I've used: it's a combination of FOMO and pairing.
Pair your low-commit developers with your high-commit AI power users. Not as a performance review, but as a learning opportunity.
BONUS: Simple bash script for LOC delta by dev on a GitHub repo: https://gist.github.com/a-c-m/e65a4ab3ced328a7b136538147c4fec0
The magic happens when the person with fewer commits sees the methods and tools in action. Real workflows. Real problems. Real solutions. Chances are the person doing very high numbers of commits (as long as you've filtered the code to make sure you're not getting junk results) has found effective ways to leverage AI tools.
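If you want a starting point for picking the pairs, a naive sketch is below. The sample numbers are made up and the 50/50 split is an arbitrary assumption; in practice you'd feed in real totals (e.g. from the loc_by_author sketch above) and apply judgement:

```python
# Naive pairing sketch: match the lowest-LOC developers with the
# highest-LOC AI power users for a learning session, not a review.
# The sample data and the 50/50 split are arbitrary assumptions.
def pairing_plan(totals: dict[str, int]) -> list[tuple[str, str]]:
    ranked = sorted(totals, key=totals.get)         # lowest LOC first
    half = len(ranked) // 2
    learners, power_users = ranked[:half], ranked[half:][::-1]
    return list(zip(learners, power_users))         # (learner, power user)

if __name__ == "__main__":
    sample = {"alice": 42_000, "bob": 900, "carol": 15_000, "dave": 300}
    for learner, power_user in pairing_plan(sample):
        print(f"Pair {learner} with {power_user} for a session this week")
```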
Watch out for AI slop.
But remember, more code != better. Track your outcomes, not just the output.
Quality got cheaper (if you do it right) as a result of these tools. So while more code might be a good indication of high adoption, keep in mind that it may not be a good thing.
There's a great post on the blog of the brilliant Zed editor, where they make the case for quality software in an era where constraints on code production have been dramatically lifted.
Why documentation matters more than ever
Counter-intuitive insight: AI tools make documentation MORE important, not less. Every piece of context you write multiplies across every AI interaction. Both for the human and for the AI. I've got a post about that, if you are interested.

Your move
Stop the FOMO-driven mandates. Start with:
- Today: Work with a trusted technical advisor to test an AI tool and use it for a real task.
- This week: Identify (or have them self-identify) 2-3 developers interested in experimentation, give them that tool and a framework to evaluate it.
- This month: Run a controlled pilot with clear metrics.
- Next month: Double down on what is working, or pivot (GOTO:1) and try again.
The best time to adopt AI tools intelligently was six months ago. The second best time is today. But it's absolutely before your next all-hands where you were planning to mandate the tool you read about this morning.
Your developers will thank you.