About six months ago, my manager asked if I wanted to lead our team's adoption of AI coding assistants. I said yes before fully understanding what I was signing up for. Looking back, it was one of the more interesting things I've done at Expedia, and I learned a lot about both the technology and how to introduce new tools to skeptical engineers.
The Starting Point
Our team was like most enterprise engineering teams: a mix of experienced engineers who had seen plenty of "next big things" come and go, and newer folks who were curious but busy shipping features. Nobody was asking for AI tools. We had Copilot available but adoption was pretty low.
The company decided to pilot Claude Code through AWS Bedrock, and they needed teams to try it out and report back. I volunteered partly because I was curious, and partly because I figured someone should go first and figure out what works.
What Actually Worked
Starting with my own work
The first thing I did was just use it myself for a few weeks without telling anyone. This turned out to be important. When people eventually asked questions, I could point to real PRs I had shipped with AI assistance, not theoretical examples.
My first real win was integrating a new gRPC service. It was the kind of task I'd done before: write the client code, handle errors, add logging, write tests. It normally takes me a day or two. With Claude helping, I had a working implementation in about three hours. The code wasn't perfect out of the box, but it gave me a solid starting point.
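To give a sense of the shape of that work, here's a minimal sketch of the kind of client wrapper involved. It's illustrative only: the service, the generated `inventory_pb2` / `inventory_pb2_grpc` modules, and the `GetAvailability` RPC are hypothetical stand-ins, not the actual integration.

```python
# Minimal sketch of a gRPC client wrapper with error handling and logging.
# The generated modules (inventory_pb2, inventory_pb2_grpc) and the RPC name
# are hypothetical placeholders, not the real service from this post.
import logging

import grpc

import inventory_pb2
import inventory_pb2_grpc

logger = logging.getLogger(__name__)


class InventoryClient:
    def __init__(self, target: str, timeout_s: float = 2.0):
        self._channel = grpc.insecure_channel(target)
        self._stub = inventory_pb2_grpc.InventoryStub(self._channel)
        self._timeout_s = timeout_s

    def get_availability(self, hotel_id: str):
        request = inventory_pb2.AvailabilityRequest(hotel_id=hotel_id)
        try:
            response = self._stub.GetAvailability(request, timeout=self._timeout_s)
            logger.info("availability ok hotel_id=%s", hotel_id)
            return response
        except grpc.RpcError as err:
            # Log the status code so failures are easy to find in queries later.
            logger.error("availability failed hotel_id=%s code=%s", hotel_id, err.code())
            raise

    def close(self):
        self._channel.close()
```

Most of this is exactly the kind of boilerplate mentioned above: channel setup, a try/except around the call, and logging that surfaces the status code. The parts that still needed human judgment were the timeouts, retries, and what the caller should actually do with a failure.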
Being honest about limitations
One thing that helped build trust was being upfront about what the tools couldn't do. AI coding assistants are really good at some things: boilerplate code, test generation, explaining unfamiliar codebases, writing Splunk queries. They're not great at understanding our specific business logic or catching subtle bugs in complex distributed systems.
When I showed demos, I made sure to include examples where the AI got things wrong. Engineers can smell marketing BS from a mile away. Showing the rough edges actually made people more willing to try it, because they knew I wasn't overselling.
Making it easy to start
I wrote up a simple getting-started guide: how to get access, how to configure it, and three specific tasks to try first. The tasks were things everyone on the team does regularly:
- Write unit tests for an existing method
- Generate a Splunk query for a specific metric
- Explain what a piece of unfamiliar code does
Starting with these low-stakes tasks let people get comfortable with the tool before trying anything more ambitious.
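To make the first of those tasks concrete, here's the sort of thing I mean: point the tool at a small existing helper and ask it to draft tests. The `parse_duration` function below is a hypothetical stand-in, not code from our repo.

```python
# Hypothetical example of the "write unit tests for an existing method" task.
# parse_duration is a stand-in for whatever small helper you point the tool at.
import pytest

from utils.durations import parse_duration  # hypothetical module


def test_parse_duration_minutes():
    # "15m" should come back as 900 seconds.
    assert parse_duration("15m") == 900


def test_parse_duration_hours_and_minutes():
    assert parse_duration("1h30m") == 5400


def test_parse_duration_rejects_garbage():
    with pytest.raises(ValueError):
        parse_duration("soon")
```

Tests like these are easy to review line by line, which is exactly why they make a good first exercise.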
What Didn't Work
Trying to mandate usage
Early on, there was some pressure from above to track adoption metrics. How many people are using it? How often? This backfired. Engineers started using it performatively rather than productively. I pushed back and got us focused on outcomes instead: did it help ship things faster? Did code quality stay the same or improve?
Expecting it to replace thinking
A few people tried to use AI tools as a shortcut for understanding code. They'd paste in something complex, get an explanation, and move on without really grasping it. This led to some bugs that could have been avoided. The tools work best when you're using them to go faster at things you already understand, not to skip learning entirely.
The Interesting Philosophical Bit
Here's what surprised me: the engineers who got the most value weren't the ones who used AI the most. They were the ones who had really clear mental models of what they were building. They knew exactly what they wanted and used AI to get there faster.
It's kind of like having a really fast assistant. If you know what you want, a fast assistant is incredibly useful. If you're not sure what you want, a fast assistant just helps you do the wrong thing more efficiently.
Where We Are Now
Six months in, about 60% of the team uses AI tools regularly. That's not 100%, and I don't think it needs to be. Some people genuinely work better without them, and that's fine.
The biggest change is in how we write tests. Test generation is one of those things AI is genuinely good at, and our coverage has gone up without test-writing feeling like a chore. We've also gotten better at writing Splunk queries, which used to be something only a few people on the team felt comfortable with.
Am I worried about AI replacing engineers? Not really. The people who were good at their jobs before are still good at their jobs, just a bit faster. The hard parts of software engineering are still the hard parts: understanding requirements, designing systems, debugging production issues, working with people. AI doesn't help much with any of that.
Advice for Others
If you're thinking about introducing AI tools to your team:
- Use them yourself first. Seriously use them, on real work, for at least a few weeks.
- Be honest about limitations. Engineers will figure them out anyway, and you'll lose credibility if you oversell.
- Start with low-stakes tasks that provide obvious value (tests, queries, documentation).
- Focus on outcomes, not adoption metrics.
- Give people permission to not use them if they don't find them helpful.
AI coding tools are useful. They're not magic. The hype cycle will eventually calm down, and we'll figure out where they fit in normal engineering workflows. In the meantime, it's been a fun experiment.