Senior Engineer Diaries: My No-BS AI Integration Story
AI is everywhere. Your grandma’s toaster probably runs a transformer model. And no, this isn’t new — AI has been around as a concept since the 1950s. But once OpenAI dropped ChatGPT, everything went full Michael Bay. Suddenly, everything had AI — even the things that definitely didn’t need it.
Reading the articles and listening to self-proclaimed AI experts and prompt engineers, you’d think it’s the ultimate, perfect solution to everything. Generative AI creates videos, images, and bullshit marketing materials; productivity is allegedly going through the roof.
Don’t get me wrong — I’m a fan of AI when it solves a real problem. It’s great, with a clear scope and purpose. But slapping it onto everything like glitter on a school project? That’s not innovation, that’s just lazy.
I’ve been around long enough to see tech hype come and go.
From Meh to Maybe: My AI Test Phase
I started dipping my toes in cautiously: using ChatGPT sporadically, trying out Copilot in the closed beta. It all felt like interacting with a toddler that talks back, in formal English.
In the beginning, GitHub Copilot’s code completion was acceptable at best and often plain horrible. Design patterns? Scrap that. It hallucinated method and class names out of thin air, served up outdated code samples, and sometimes even produced obviously dangerous code. Disappointed, I stopped using it, and even when it became publicly available, I refused to pick it up again.
Not long after that, JetBrains dropped full-line completion, and it was somehow even worse. I was, frankly, really disappointed. JetBrains had been almost the gold standard for everything development-related. Their IDEs are great, their project management and CI/CD tools effortless to use. Developer DNA for years. Yet their completion lagged behind even early-stage GitHub Copilot. The moment it became a plugin enabled by default, I disabled it — while coding, it got in my way more than anything else.
Copilot matured and got considerably better; the chat mode in particular turned out to be really great. It integrated well with JetBrains IDEs and had all the context from your current file, project, and so on. Responses were snappy and worked most of the time. There was still some degree of hallucination, but that improved significantly with each release.
After getting used to Copilot and its chat mode, I was curious about JetBrains Junie, an agent meant to handle development tasks autonomously from start to finish. The demos looked great, so I registered for the beta. When the invitation arrived after months of waiting, I was hyped. That didn’t stick. Junie failed at the most basic tasks after a ridiculous amount of analysis. I tried having it create contribution guidelines for a project that already had pretty good documentation. Twenty minutes of waiting later, it generated a document that was wrong in every aspect. After hours of trying to guardrail the agent, I finally resigned and removed it again.
Copilot became my casual coding companion — snappy, useful, not annoying. I asked it questions sporadically or had it generate boilerplate code, and it worked quite well in a reasonable amount of time.
When JetBrains AI launched, using the same models as Copilot under the hood, I was skeptical, having been disappointed so many times by my favorite tooling company. To my surprise, it worked really well, integrating natively into the look and feel of my familiar IDEs. It’s part of the All‑Products Pack now, which is great, since I already had that.
I tried out Junie again, and it was still crap. Same for Copilot’s agent mode: I still needed to tweak too many things after waiting far too long. Simply put, I can code faster, and with a more stable result, than any of those tools. Instead of boosting productivity, they often slowed me down. The same goes for full-line completion, which gets in my way more often than it ever helps.
Vibe Coding? Miss Me With That!
Some people code with vibes. I code with intent. If your AI tool is just throwing spaghetti at the wall, I’m not eating.
I see a shocking amount of content around vibe coding: people who just want to make some quick money while producing shady apps. The generated code is accepted as long as it works. Once it stops working and the AI is clueless, the “developers” of such apps just panic and give up — or, even worse, knowingly leave it out there.
While I see the appeal to non-tech-savvy people, this trend is horrible. And it shows: the result is always a compromise of security, code quality, and overall understanding of the code.
If your AI is just throwing spaghetti code at the wall until something sticks, you aren’t really coding anything. To me it’s clear: AI should assist decisions, not randomize them. Guessing and piling on more instructions until it somehow works is a horrible way to develop software, in every possible sense.
Okay, Fine, I’ll Admit It: Some AI Is Useful
Copilot and JetBrains AI Chat got a lot better in the past few months. I heard some great success stories from friends, fellow developers and colleagues.
As of today, I heavily use Copilot’s chat mode at work, mostly to generate test cases, help me understand code that gets thrown at me, or just provide some rubber-duck support while programming.
For my private and open-source projects, I use JetBrains AI Chat, as the code is not confidential and I would rather not buy a Copilot license when I can have it “for free” bundled with my beloved IDEs.
ChatGPT outside the IDE is the real MVP: my rubber duck, documentation proofreader, and occasional therapist. It is also an excellent choice for generating bullshit-bingo documents and proposals to convince the business side of tech-savvy decisions. It helps a lot with troubleshooting simpler problems and exploring new ideas.
What works shockingly well with AI nowadays are UIs and apps. I tried this myself: most front-end libraries are well known, with plenty of public code the models were apparently trained on, and the work often doesn’t need enterprise-level security or difficult design patterns. I can shamelessly say that most of my frontend code is either heavily generated by AI or at least assisted by it. Occasionally, I still think to myself: “Either someone fed it the entire NPM registry, or it’s secretly a front-end dev working extra hours in some dusty office.”
How I Use AI Without Selling My Soul to the Algorithm
Don’t let it write your logic; let it explain the logic, and if you do generate code, make that really apparent. The context it detects on its own is usually wrong or ambiguous, so I use it with as small a scope as possible, providing all the context and the exact requirements myself.
Usually, it is still faster for me to just implement something myself than to throw it at an AI.
Good at scaffolding, trash at architecture and complex logic. For the typical monkey-typing work — generating the obvious test cases, writing the test-setup boilerplate — it works really well. The same applies to smaller utils or really narrow parts of your logic. Give it a bit too much scope, though, and the generated code is usually riddled with antipatterns. I also still frequently encounter hallucinated code or behavior: either the method doesn’t exist, or it has side effects you don’t want.
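To make that monkey-typing claim concrete, here is a minimal sketch of the kind of repetitive test boilerplate an assistant churns out well. Everything in it is hypothetical: the slugify util and its cases are invented for this illustration, not taken from any real project.

```python
# Hypothetical example: slugify is a made-up util, defined here
# so the sketch is self-contained and runnable.

def slugify(title: str) -> str:
    """Lowercase a title and join its words with dashes."""
    return "-".join(title.lower().split())

# The monkey-typing part: one obvious case per behavior, exactly
# the boilerplate an assistant can generate from a one-line prompt.
CASES = [
    ("Hello", "hello"),                 # lowercasing
    ("hello world", "hello-world"),     # spaces become dashes
    ("  Mixed   Case ", "mixed-case"),  # surplus whitespace trimmed
]

def test_slugify() -> None:
    for raw, expected in CASES:
        assert slugify(raw) == expected, f"{raw!r} -> {slugify(raw)!r}"

test_slugify()
```

Narrow scope, exact requirements, trivially checkable output: that is the sweet spot where letting the AI type for you actually saves time.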
As a rule of thumb, pair it with your brain – “I’m the lead, it’s the junior dev that doesn’t talk back (most of the time).”
We don't need junior engineers anymore?
What makes me furious are the bullshit stories and statements from CTOs and AI enthusiasts like “We won’t need junior engineers, we can just use AI instead.”
Newsflash: senior engineers aren’t born, they’re built — with mistakes, mentorship, and monkey work. Nobody enters this world with a keyboard in hand. It takes practice, and the effort we invest in people, for them to grow to a senior level.
AI can, of course, assist with this, but assuming everyone will just be senior by default is delusional.
Final Thoughts: It’s a Tool, Not a Miracle
Are the models we use all-knowing? Definitely not. But they are all built to sound as confident as possible, even when they might be wrong. AI won’t replace developers, but it might replace the version of you that still writes for loops manually.
Heads up: yes, we’re seeing “30% of dev jobs replaced by AI” headlines from Microsoft, IBM, Amazon, you name it. The result? Quality dropping fast. Coincidence? Hardly. IBM laid off several thousand employees, only to find it now needs more people for other work.
I am fully convinced that AI, and the LLMs backing it, will take over more and more of the boring tasks as they get better. However, I don’t see good code quality or autonomous solutions coming out of that. Creating software is more than writing code and pushing it to production; it is a team sport that requires a lot of collaboration and implicit knowledge.
AI didn’t replace me — it just freed me from the stuff I hated anyway. And if vibe coding becomes the norm? I’ll be over here, coding like I mean it.