Users are frustrated that AI code generation often produces bloated, inefficient, or unreadable code, even when prompted for efficiency. The AI prioritizes "code that runs" over "code that lasts" or code that integrates well with existing functions.
If you're worried about AI taking your job, don't be. I asked an agent to slim down a piece of code it had written. It added more lines than it deleted. I had to tell it: no, that's the opposite of what I asked. It was like, "Oops." Sometimes it feels faster to just write the code myself.

This is an unfortunate consequence of training models in reinforcement learning environments, where the agent is rewarded for accomplishing a task: it wrote a piece of code that runs, and the output is correct. It's hard to reward models for the quality of that code, and if you don't reward a model for something, it's just not going to do it. In practice, models completely ignore prompting to be efficient. They won't reuse existing functions or even write in an understandable way. It's almost as if they're writing code for other agents instead of for human review.

As you use these agents, you bloat your codebase more and more. Most people who are vibe coding don't even look at the code that comes out. That's not going to work at scale. Within two weeks, your codebase will be unreadable even to the most seasoned engineers on your team. At that point, even the agent that wrote it can't make sense of what it built.

You need extremely high-caliber engineers to catch this problem. If you're an engineer who actually reads the output and can tell the difference between code that runs and code that lasts, your job isn't going anywhere... at least for now.
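To make the "won't reuse existing functions" point concrete, here's a hypothetical sketch. All names are invented for illustration: imagine a codebase with a tested validation helper, and contrast the duplicated check an agent typically inlines with the version a careful engineer would write.

```python
import re

# Existing, tested helper already in the (hypothetical) codebase.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$")

def is_valid_email(address: str) -> bool:
    """The helper an agent should reuse instead of reinventing."""
    return bool(EMAIL_RE.match(address))

# "Code that runs": the agent inlines a fresh, subtly different copy
# of the same check. It works today, but now the validation logic
# lives in two places that will drift apart.
def register_user_agent_style(address: str) -> str:
    if "@" not in address or "." not in address.split("@")[-1]:
        raise ValueError("bad email")
    return f"registered {address}"

# "Code that lasts": reuse the existing helper, so there is exactly
# one place to change when the validation rules change.
def register_user(address: str) -> str:
    if not is_valid_email(address):
        raise ValueError("bad email")
    return f"registered {address}"
```

Both versions pass a happy-path test, which is exactly why a reward signal based on "the output is correct" can't tell them apart; only a human reading the diff notices the duplication.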