AI and Me: A Developer’s Perspective

Many things are on my mind when it comes to AI usage—plenty of good, but also a fair share of bad. I haven’t quite made up my mind yet. I’m far removed from the usual social media extremes of ‘AI is amazing’ or ‘AI is evil’. So far, I’ve swung back and forth between admiration and frustration. But perhaps that’s a topic for another post.
Today, I want to focus on how I experience AI in my core profession—software development.
Autocomplete
Like many others, my journey began with GitHub Copilot and its AI-powered autocomplete. It was dreadful then, and to me, it still is. It has never produced anything genuinely helpful in my codebases. The suggestions were either utter nonsense or so blatantly incorrect that I ended up fixing the AI’s code alongside my own.
Still, I understand it was a necessary first step to get the ball rolling.
If I were leading a team, I’d disable it completely for juniors and rookies. It promotes antipatterns and discourages critical thinking. For me, this kind of autocomplete is a waste of time, energy, and resources.
Chat
Chat is a different story.
Having an AI-based chat that can analyse code is genuinely helpful—as long as you’re able to evaluate the answers and recognise when the AI is hallucinating. That takes experience and a solid grasp of software development.
For example, I once had to dive into a coworker’s codebase I hadn’t seen before to fix a critical bug. With my background, I could separate the good AI answers from the bad and ask the right questions. It saved me a significant amount of time.
Similarly, a senior coworker of mine had to switch teams and start working with Python and Django—technologies she’d never used before. Thanks to her experience, she could leverage Perplexity to get up to speed and deliver a feature within three days. Her implementation met our standards from the get-go.
But I’ve also seen the flip side. A junior contributor to a community project I support tried using ChatGPT to learn a new topic. He gave it his all but lacked the ability to ask good questions or spot flawed answers. He lost a lot of time and picked up some dreadful habits. It took me half a day to get him back on track.
Eventually, I suggested he work through a well-structured book first. Three weeks later, after completing the exercises and quizzes, we gave ChatGPT another shot—this time as a custom GPT equipped with the same e-book and library reference. He now asked sharper questions and developed an instinct for judging AI responses. Another three weeks on, he’d built his first small application from scratch.
So, are the resources poured into training large language models worth it for outcomes like this? I’m genuinely unsure.
A good search engine, static analysis tools, and more interactive learning material might have led to similar results. But the interactivity and ability to ask personalised questions clearly helped both my colleague and the junior developer. Chat is, after all, closer to how we naturally communicate than traditional search.
As a team lead or manager, I’d monitor closely how people use AI chat. What questions they ask. How they learn to double-check responses. Especially with juniors, mentorship is irreplaceable. You need a baseline of knowledge to use chat effectively and separate helpful advice from hallucination.
Agent-based development
Let me start with a confession: I detest the term vibe coding—so this will be the first and last time I use it here.
From my perspective, agents are the third stage in the evolution of AI-assisted development. At first, I was sceptical. I tried Cursor, Windsurf, Copilot Agents, and JetBrains’ Junie. Cursor stood out as the best of the bunch, but none of them lived up to expectations in speed, results, or experience. I was ready to give up on agents.
Then I read a post by Jeff Triplett and decided to give Claude Code another go. Once it was included in their standard plans, it became accessible to me—and it finally clicked.
Last week, I was hopelessly behind on my sprint because of other CTO duties. In just two days, I completed six feature requests, fixed five non-trivial bugs, and drafted the release notes. Would that have been possible without Claude Code? Absolutely not. Normally, that would be a week’s work—and I’d likely have delegated the release notes.
Did Claude Code do all the work? No. But it did handle the bulk of the coding, wrote commit messages, and proposed solutions. Would it have succeeded without me? Also no. I’ve known this codebase for 17 years. I asked the right questions, gave the appropriate context, and tweaked the solutions.
Claude acted as a productive junior developer. I, as a senior, made decisions and reviewed the work. While it coded, I joined meetings. Later, I reviewed everything in detail before pushing it to production.
How good were the results? Most were spot-on. Claude Code adhered closely to our coding style and used implementation strategies we already follow. About 10% of its output needed a second pass. Another 10% was rubbish and had to be redone from scratch.
Are these results worth the environmental and computational costs of training large models? I’m tempted to say yes.
That’s not the environmentalist in me speaking—but the software developer and CTO of a small European company striving to stay competitive without outsourcing. Agents could well be the edge we need.
Still, the same caveat applies: agents, like chat, require context and competence. We’re nowhere near Star Trek territory. If a manager believes an AI can handle vague tasks without clear direction, they’re misguided.
My rules for working with agents (so far)
- Only use an agent if you have some domain and tech knowledge.
- Agents need guidance, oversight, and rigorous review.
- Describe your requirements clearly.
- Ask for a plan—and review it.
- Embrace the ‘think > think hard > think harder > ultrathink’ mantra.
- Let the agent execute the plan while you grab a coffee or attend a meeting.
- Never skip the review step.
Two types of developers
Why did Claude Code click for me?
Because I’ve never defined myself by the act of writing code or using low-level tools. Success, for me, is about the concepts behind software, its maintainability, and—above all—the user experience.
I’ve always loved whiteboard discussions, sketching interfaces, and thinking through user workflows. I enjoy setting up projects and defining the architecture. Writing the code? That’s a necessary evil. I felt that way as a junior, and still do now.
So, with agents, I applied the same mindset: do the fun, conceptual stuff—let the agent handle the boilerplate. Only Claude Code met that expectation. Junie came close but is too slow right now.
Most others were either too sluggish or couldn’t explain their approach. And I don’t mean a bullet list—I want a full proposal, with pseudocode, suggested changes, and architectural reasoning. Claude’s detailed plans make me confident we’ll reach the goal.
Years ago, I had a deputy who loved tinkering—optimising code down to the last cycle. He was less interested in UX. We made a great team. That’s when I realised: there are two types of developers—the tinkerers and the builders.
We need both. But tinkerers are becoming rarer. Perhaps agents can step in?
I found Sean Grove’s The New Code video thought-provoking—at times, it felt like he was describing my own experience as a developer.

Wrapping Up
So, am I an AI fanboy now? A convert? No.
I see the benefits—but I see even more unresolved issues and potential pitfalls. That’s a discussion for another day. Right now, I’m reading The AI Con and The Empire of AI—both well worth your time, whether you’re a fan or a critic.
But the pressure to examine this technology seriously is real. For a small software company, AI can offer a competitive edge against firms that rely on offshore development. The larger challenges—ethical, environmental, and social—must be solved at a political level. Sadly, not at my desk.
If you’re a critic, don’t come at me with pitchforks. Let’s have a proper chat about how we can tackle the challenges while still making the most of what’s promising.