Users want a layer in AI code generation tools that lets them delegate specific, well-scoped pieces of the development process, such as clearly defined function requests or tasks backed by robust PRD specs, so they can rein in non-determinism and integrate AI into their stack more effectively.
Having AI generate all the code is hype. Not the future.

When I was in a hospital once, I remember looking around at all the equipment and thinking, "this is newer and more state of the art than the equipment when I was born, but one day all of this will be the crappy old version." The entire building I was in has since been demolished and replaced with another facility.

What does this have to do with AI? So much of what we do today is going to be the "crappy old way" one of these days. One comment I started making in conversations last year is that having LLMs generate all code one token at a time is "too probabilistic." Don't get me wrong, it's a fascinating use case, and I use it a lot. But I also think it's unrealistic to expect non-engineers to ship code for a job when a probabilistic algorithm is producing that code one token at a time and the result runs in mission-critical systems.

Instead, I think the future is going to be built by people who understand that the raw flexibility you get from probabilistic systems needs a deterministic counterweight to balance out the randomness with order and reliability. 100% probabilistic output can amplify the expertise of the AI user, and it amplifies their lack of expertise just as readily. Using AI to generate 100% probabilistic output outside of your domain expertise sounds cutting edge, like it's the future. I think it's going to be retired as the "crappy old way" before we *really* get to the future.

I've been experimenting with this idea in a project I've been working on, and I can't wait to tell you more about it!
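In the meantime, here's a minimal sketch of what a "deterministic counterweight" could look like in practice. Everything in it is hypothetical: `llm_generate` is a stand-in for whatever model call you'd actually make, and `slugify` and its spec tests are invented for illustration. The structure is the point: probabilistic generation on one side, a fixed, deterministic acceptance gate on the other.

```python
import ast

def llm_generate(prompt: str) -> str:
    """Stand-in for a probabilistic code generator (hypothetical).

    Returns a canned candidate so this sketch runs end to end;
    in reality this would be a model call that varies run to run.
    """
    return "def slugify(s):\n    return '-'.join(s.lower().split())\n"

# The deterministic side: a fixed spec the generated code must satisfy.
SPEC_TESTS = [
    ("Hello World", "hello-world"),
    ("  Already  Spaced ", "already-spaced"),
]

def accept(candidate_src: str) -> bool:
    """Gate probabilistic output behind deterministic checks."""
    try:
        ast.parse(candidate_src)  # 1. must be syntactically valid Python
    except SyntaxError:
        return False
    namespace = {}
    # 2. must define the requested function
    # (in a real system you'd sandbox this exec, of course)
    exec(candidate_src, namespace)
    fn = namespace.get("slugify")
    if not callable(fn):
        return False
    # 3. must pass every test in the spec, every time
    return all(fn(arg) == want for arg, want in SPEC_TESTS)

for attempt in range(3):  # regenerate until a candidate clears the gate
    src = llm_generate("Write slugify(s): join lowercased words with '-'")
    if accept(src):
        print(f"accepted on attempt {attempt + 1}")
        break
else:
    raise RuntimeError("no candidate satisfied the deterministic spec")
```

The generator can be as random as it wants; the gate never moves. Nothing reaches the rest of the system unless it satisfied the spec, which is where the order and reliability come back in.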