ENGINEERING
We Started Using AI Coding Tools a Year Ago. Here's What Actually Changed.
In mid-2024, we started integrating AI coding tools into our daily workflow. Not as an experiment — on real client projects with real deadlines. A year later, here's an honest assessment of what changed.
The Tools We Actually Use
Let's get specific. We've tried most of the major tools:
- Claude (via API and Claude Code) — our primary tool for complex reasoning, architecture decisions, and code generation
- Cursor — daily driver IDE for most of the team
- GitHub Copilot — we used it for a year before most of the team moved to Cursor
- ChatGPT — occasionally for quick questions, but Claude handles our heavy lifting
- v0 by Vercel — useful for UI prototyping, not production code
What Actually Got Faster
1. Boilerplate and Repetitive Code (2-3× faster)
This is the obvious one, and it's real. Writing API routes, form validation schemas, database models, test scaffolding — all of this is dramatically faster. What used to take 30 minutes of tedious typing now takes 10 minutes of prompting and reviewing.
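As a concrete illustration of the kind of boilerplate we mean (a hypothetical signup form, not client code), here is a stdlib-only validation sketch of the sort an AI tool drafts in seconds and we then review:

```python
import re
from dataclasses import dataclass


@dataclass
class SignupForm:
    email: str
    password: str
    age: int


def validate_signup(form: SignupForm) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    # Loose email shape check, not full RFC 5322 validation
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", form.email):
        errors.append("invalid email")
    if len(form.password) < 8:
        errors.append("password must be at least 8 characters")
    if not 13 <= form.age <= 120:
        errors.append("age out of range")
    return errors
```

Nothing here is hard; it is just tedious to type by hand, which is exactly where the 30-minutes-to-10-minutes speedup lives.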
2. Learning New Libraries (5× faster)
This was the surprise. When we needed to integrate a library we hadn't used before — say, Stripe webhooks or Pinecone vector search — AI tools compressed days of documentation reading into hours. We describe what we want, the AI generates a working starting point, we refine from there.
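To make the "working starting point" idea concrete: for webhook integrations, the core of what comes back is usually a signature check like the stdlib sketch below. This is an illustrative pattern, not the real Stripe SDK call (the actual SDK, e.g. `stripe.Webhook.construct_event`, also validates a timestamp to block replay attacks):

```python
import hashlib
import hmac


def verify_webhook_signature(payload: bytes, signature: str, secret: str) -> bool:
    """Check an HMAC-SHA256 webhook signature, the pattern Stripe-style
    webhooks use to prove a request really came from the provider."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid leaking the signature via timing
    return hmac.compare_digest(expected, signature)
```

The value isn't that the code is clever; it's that the AI surfaces the right pattern (and gotchas like constant-time comparison) without us reading the provider's docs end to end first.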
3. Code Reviews and Bug Hunting (genuinely useful)
We paste suspicious code into Claude and ask "what could go wrong here?" It consistently catches edge cases we miss. It's not a replacement for human code review, but it's an excellent first pass.
4. Writing Tests (2× faster)
AI is surprisingly good at generating test cases, especially edge cases you wouldn't think of. We describe the function, ask for comprehensive tests, and usually get 80% of what we need.
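As an illustration of that workflow (a made-up `slugify` helper, not client code): we describe the function, ask for comprehensive tests, and the response typically covers the happy path plus edge cases like empty input, accents, and stray punctuation:

```python
import re
import unicodedata


def slugify(text: str) -> str:
    """Convert a title to a lowercase, hyphen-separated ASCII URL slug."""
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    text = re.sub(r"[^a-z0-9]+", "-", text.lower())
    return text.strip("-")


# The spread of cases an AI pass typically generates alongside the obvious one:
assert slugify("Hello World") == "hello-world"
assert slugify("  --Already--Slugged--  ") == "already-slugged"
assert slugify("Café crème") == "cafe-creme"   # accents stripped via NFKD
assert slugify("") == ""                       # empty input
assert slugify("!!!") == ""                    # nothing slug-worthy survives
```

That last 20% we still write ourselves is usually the domain-specific cases the AI can't know about.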
What Didn't Change
1. Architecture Decisions
AI tools are bad at architecture. They'll generate plausible-sounding advice about whether to use microservices or a monolith, but they don't understand your specific constraints — team size, timeline, budget, maintenance plan.
We use AI to explore options, but the decisions are still ours. This is where experience matters most.
2. Understanding Client Requirements
No AI tool can sit in a discovery call and understand what a founder actually needs versus what they say they need. The human skill of asking the right questions, pushing back on assumptions, and translating business needs into technical requirements — that's unchanged.
3. Debugging Complex Issues
When something breaks in production and the stack trace points to a race condition in your WebSocket handler, AI tools can help brainstorm causes. But the actual debugging — reproducing the issue, understanding the state, tracing the execution — still requires a human who understands the system.
4. Code Quality
Here's the uncomfortable truth: AI-generated code is consistently mediocre. It works, but it's rarely elegant. It uses verbose patterns, adds unnecessary abstractions, and doesn't understand your codebase's conventions.
Every AI-generated piece of code needs review and refinement. The developers who treat AI output as a first draft produce good code. The developers who accept AI output verbatim produce a mess.
The Real Productivity Gain
After a year, our honest estimate: we're about 30-40% faster on implementation tasks. Not 10×. Not even 2×. But 30-40% on a consulting engagement is significant — it means we can deliver MVPs in 4-5 weeks instead of 6-7.
The gains come from:
- Less time on boilerplate (big win)
- Faster library integration (big win)
- Better test coverage in less time (medium win)
- Quicker first drafts of components (medium win)
The gains do NOT come from:
- Replacing developers (not happening)
- Skipping code review (dangerous)
- Automating architecture (not reliable)
What We Tell Our Clients
We're transparent about using AI tools. They make us faster without making us sloppy. Every line of AI-generated code goes through the same review process as human-written code. Our quality bar hasn't changed — just our speed.
Some clients ask if AI means their project should cost less. Fair question. Our answer: the time savings go into better testing, more thorough documentation, and handling edge cases we might have deferred. You get a better product in less time, not a cheaper product.
What's Next
The tools are improving fast. Claude's ability to reason about complex codebases has improved noticeably in the past six months. Cursor's codebase understanding keeps getting better. We expect the 30-40% gain to grow to 50-60% within a year.
But the fundamentals won't change. Understanding problems, making good decisions, writing maintainable code — those are human skills that AI augments but doesn't replace. At least not yet.