AI coding agents are incredible tools, but blindly trusting them with your backend is a disaster waiting to happen. After working extensively with AI agents across a range of codebases, from spinning up personal projects to managing production environments, I've hit some major friction points. Tools like Claude Code and Cursor are revolutionizing our workflows, but there are strict boundaries we need to enforce.

Things your AI agent should NEVER do on autopilot:

1️⃣ Write and execute complex SQL queries
In any meaningful codebase, schemas are massive. AI models are eager to output a solution but often miss crucial relations or logical constraints. In our Kysely and Prisma stack, for instance, the AI would repeatedly use jsonObjectFrom where jsonArrayFrom was needed, which could crash the query for certain users. Double, triple, and quadruple-check every SQL query it writes.

2️⃣ Add new dependencies
Supply chain attacks in the Node/npm ecosystem are surging. Don't let your AI arbitrarily pull in new packages: it might grab a freshly compromised version or typosquat a library name. Always lock your versions, verify that the package is legitimate, and consider package manager features like pnpm's minimumReleaseAge to block brand-new, potentially sketchy releases.

3️⃣ Make core security decisions
AI can implement, but YOU must architect. Top-level decisions about rate limiting, data validation, sanitization, body size limits, and crash reporting should be centralized and completely understood by you. Don't let an AI "vibe code" your backend security architecture.

4️⃣ Access production databases directly
Never, ever give an AI agent write access to your production database. Access should be strictly read-only, and even then, enforce strict query and transaction timeouts so an unoptimized query doesn't suffocate your infrastructure. Also, keep your databases shielded behind a bastion EC2 instance or a VPN.
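One way to back up the query-timeout rule at the application layer is a hard deadline around every query promise. This is a minimal sketch, not how our stack actually does it: the `withTimeout` helper and the specific timeout values are hypothetical, and in production you'd also want database-side limits like Postgres's statement_timeout.

```typescript
// Minimal sketch: enforce a hard timeout on any database query promise so a
// runaway query can't hang the process. `withTimeout` is a hypothetical
// helper, not part of Kysely or Prisma.
function withTimeout<T>(query: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`Query exceeded ${ms}ms timeout`)),
      ms
    );
  });
  // Whichever settles first wins; clear the timer either way.
  return Promise.race([query, timeout]).finally(() => clearTimeout(timer));
}

// Usage with a slow "query" stand-in (resolves after 300ms, limit is 100ms):
const slowQuery = new Promise<string>((resolve) =>
  setTimeout(() => resolve("rows"), 300)
);
withTimeout(slowQuery, 100).catch((err: Error) => console.error(err.message));
// → logs "Query exceeded 100ms timeout"
```

Note that this only stops your process from waiting; the query may still be running on the database server, which is why server-side timeouts matter too.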
I've literally seen an AI rewrite a .gitignore'd secrets.ts file just to bypass a testing-environment restriction and access the database!

5️⃣ Run Git commands & push to master
Don't let your AI agent resolve merge conflicts or push directly to master. Generating commit messages is a great use case, but you need to be the one making the final call on which code pieces to keep and what actually ships to production.

AI is a massive multiplier, but it doesn't replace solid engineering fundamentals.
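For the dependency point, here's roughly what the pnpm guardrail looks like. This is a sketch assuming a recent pnpm (the minimumReleaseAge setting landed in pnpm 10.16); the exact keys and the 7-day window are illustrative, so check the pnpm docs for your version, and the `@yourorg/*` exclusion is a hypothetical placeholder.

```yaml
# pnpm-workspace.yaml — refuse to install versions published too recently.
# minimumReleaseAge is in minutes; 10080 minutes ≈ 7 days.
minimumReleaseAge: 10080
# Optionally exempt packages you control and trust:
minimumReleaseAgeExclude:
  - "@yourorg/*"
```

The idea is that compromised releases are usually caught and unpublished within days, so simply refusing brand-new versions filters out most supply chain attacks at install time.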