AI coding assistants should adhere more consistently and effectively to project-specific style guides and architecture documentation that users provide. Today, users must run constant feedback loops and issue explicit reminders to keep the AI following these guidelines, which suggests the rules are not robustly or persistently integrated into the AI's generation process.
Because AI coding assistants are built to generate (that is how LLMs fundamentally work), they tend to invent new, unhelpful requirements and specifications on the fly, in addition to implementing solutions that satisfy the actual initial requirements and specifications. It takes a lot of extra work to keep a coding assistant on the rails. Have you found this to be true? How do you keep coding assistants from going off the rails?
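One common way to automate those feedback loops, rather than reminding the assistant by hand, is to validate generated code against machine-checkable style rules and feed the violations back into the prompt. The sketch below is a minimal, hypothetical illustration of that pattern: the rules, the `generate_with_guardrails` helper, and the stub assistant are all assumptions for demonstration, not a real assistant API. In practice the rule check would be your project's actual linter.

```python
import re

# Hypothetical project rules standing in for a real style guide:
# each is (description, regex that must NOT match the generated code).
RULES = [
    ("no print statements in library code", re.compile(r"\bprint\(")),
    ("no wildcard imports", re.compile(r"from \w+ import \*")),
]

def check_rules(code: str) -> list[str]:
    """Return the description of every rule the code violates."""
    return [desc for desc, pattern in RULES if pattern.search(code)]

def generate_with_guardrails(prompt: str, generate, max_retries: int = 3) -> str:
    """Call a (hypothetical) assistant, re-prompting with concrete
    violations instead of relying on the model to remember the guide."""
    code = generate(prompt)
    for _ in range(max_retries):
        violations = check_rules(code)
        if not violations:
            return code
        prompt = prompt + "\nFix these style violations: " + "; ".join(violations)
        code = generate(prompt)
    return code

# Stub assistant for demonstration only: emits print-laden code at first,
# and clean code once the prompt calls out the violations.
def stub_generate(prompt: str) -> str:
    if "violations" in prompt:
        return "import logging\nlogging.info('done')\n"
    return "print('done')\n"

result = generate_with_guardrails("implement the task", stub_generate)
```

The point of the loop is that the style guide becomes an automated gate on every generation, so the "constant reminders" happen mechanically rather than falling on the user.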