I saw this Business Insider article (paywalled - unless you've, uh, heard of archive dot today) about YC and "vibe coding," and I have thoughts. tl;dr: no, that's not how this works. You - not just Garry Tan, but everyone knowledgeable in this industry who is pushing, hyping, and supporting this - should know better, which means you know what's going to happen down the road. I am really, really worried about what happens when that debt comes due.

I am a technical startup founder extensively using AI tools to write code. I'm not quite the "vibe coder" mentioned here, but I have been on a path rapidly taking me from "I have never done frontend dev work" to "I am now rapidly iterating our live product with paying customers." My background is in cybersecurity research; I don't have the muscle memory of decades of 40+ hour weeks exclusively writing code.

The only reason I am so successful doing this is that I am already a good programmer. I am a good programmer because I understand how computers work. I understand how computers work because of my education and background, including a lot of reverse engineering work for malware analysis and exploit discovery.

That means I can write accurate and precise prompts to generate code. It means I can rapidly review generated code - you do review 100% of your code before accepting it, right? - and when I see something I don't understand, I can quickly read the API specs and determine whether it's a neat trick or hallucinated nonsense. (90+% of the time, it's hallucinated nonsense.) It means I understand my codebase and know how to integrate the new code in a way that doesn't cause more problems. It means I can use the AI tool as a way of learning, and as I get better, recognize patterns, and build that muscle memory, I no longer need to rely on it for simple things.

What happens if you aren't already a good programmer?
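To make the review step concrete, here is a minimal sketch of the kind of mechanical check a reviewer can run over generated code before reading it line by line. Everything in it is hypothetical - the `review` function, `SECRET_PATTERN`, and the `KNOWN_APIS` allow-list are illustrative names, not a real tool - and it only catches two of the easy problems: hardcoded credentials and calls to APIs that aren't in the project's known surface (a common shape of hallucinated nonsense).

```python
import re

# Illustrative sketch only: a naive reviewer aid for AI-generated code.
# It flags (1) likely hardcoded credentials and (2) calls to "requests"
# methods outside an assumed project allow-list. Real review still means
# reading the code and the API specs yourself.

SECRET_PATTERN = re.compile(
    r"""(api_key|password|token)\s*=\s*["'][^"']+["']""", re.IGNORECASE
)

# Assumed allow-list of APIs the project actually uses (hypothetical).
KNOWN_APIS = {"requests.get", "requests.post"}

def review(snippet: str) -> list[str]:
    """Return human-readable findings for a generated-code snippet."""
    findings = []
    for lineno, line in enumerate(snippet.splitlines(), start=1):
        if SECRET_PATTERN.search(line):
            findings.append(f"line {lineno}: possible hardcoded credential")
        call = re.search(r"\b(requests\.\w+)\(", line)
        if call and call.group(1) not in KNOWN_APIS:
            findings.append(
                f"line {lineno}: unknown API {call.group(1)} - check the spec"
            )
    return findings

generated = 'api_key = "sk-live-1234"\nrequests.fetch_json("https://example.com")\n'
print(review(generated))
# → ['line 1: possible hardcoded credential',
#    'line 2: unknown API requests.fetch_json - check the spec']
```

A regex allow-list like this is deliberately dumb; the point is that even a dumb check beats accepting generated code unread, and that the hard part - knowing whether `fetch_json` is a neat trick or doesn't exist - still requires a human who reads specs.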
The article mentions that the "vibe coders" don't know how to modify or debug code when it doesn't work - it either works or it doesn't. They don't know what code is in there. They don't know how it's working, or why it's not working.

This is beyond horrifying. And we're encouraging it in areas like FinTech, where your money or tokens are kept; PropTech, which you're relying on for a home; and MedTech, for your health. We're laying off many knowledgeable people who write code and throwing that money after diminishing returns on tools that are effectively overpowered autocomplete ("now with more plagiarism!"™). We are exponentially increasing our technical debt in literally everything, while simultaneously ending the education and training of the people who would be able to fix it.

Some will say "oh, but AI will be able to fix it, just wait for the next release," or "our startup is making AI that will be able to fix it, just wait for our MVP," or some similar argument that the automation will fix itself. I'm not convinced.

I've hit the character limit - a conclusion and links in the comments.

#ai #vibecoding