Users need a dedicated layer for agent identity and authorization, distinct from human authentication. This layer should provide scoped permissions, revocable mandates, and a robust audit trail for AI agents acting on behalf of users. It is especially crucial in regulated industries, where compliance and accountability depend on knowing exactly which agent did what, under whose authority.
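To make the idea concrete, here is a minimal sketch of the three primitives such a layer would need: scoped grants, revocation, and an append-only audit trail. Every name here (`Mandate`, `MandateStore`, the scope strings) is illustrative, not a real product's API; the point is only the shape of the authorization check.

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class Mandate:
    """A revocable, time-limited grant from a human principal to an agent."""
    principal: str            # the human the agent acts on behalf of
    agent: str                # the agent's identity, distinct from the human's
    scopes: frozenset         # e.g. {"calendar:write", "payments:read"}
    expires_at: float         # epoch seconds
    id: str = field(default_factory=lambda: uuid.uuid4().hex)
    revoked: bool = False

class MandateStore:
    def __init__(self):
        self._mandates = {}
        self.audit_log = []   # append-only record of every decision

    def grant(self, principal, agent, scopes, ttl_seconds):
        m = Mandate(principal, agent, frozenset(scopes),
                    time.time() + ttl_seconds)
        self._mandates[m.id] = m
        self._audit("grant", m.id, None, True)
        return m.id

    def revoke(self, mandate_id):
        self._mandates[mandate_id].revoked = True
        self._audit("revoke", mandate_id, None, True)

    def authorize(self, mandate_id, scope):
        # A request is allowed only if the mandate exists, is unrevoked,
        # is unexpired, and explicitly includes the requested scope.
        m = self._mandates.get(mandate_id)
        ok = (m is not None and not m.revoked
              and time.time() < m.expires_at and scope in m.scopes)
        self._audit("authorize", mandate_id, scope, ok)
        return ok

    def _audit(self, action, mandate_id, scope, outcome):
        self.audit_log.append({"ts": time.time(), "action": action,
                               "mandate": mandate_id, "scope": scope,
                               "allowed": outcome})
```

In use, a grant is narrow and disposable rather than a session cookie the agent carries around:

```python
store = MandateStore()
mid = store.grant("alice", "agent-7", ["calendar:write"], ttl_seconds=3600)
store.authorize(mid, "calendar:write")   # True: in scope, unexpired
store.authorize(mid, "payments:send")    # False: never granted
store.revoke(mid)
store.authorize(mid, "calendar:write")   # False: mandate revoked
```

The audit log, not the check itself, is what regulated industries would actually buy: every allow and deny is recorded with the mandate that justified it.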
Right now a new infrastructure stack is being assembled for AI. It's not the software, and it's not the agents. It's the layer underneath both of them that actually lets agents do things in the world. And it has hundreds of millions in venture capital behind it, in a category most people still can't name.

We've seen this before. Twice, actually. The cloud transition had a version of this moment, and so did the API-first shift a few years later. Both times, the builders who understood the emerging stack early didn't just adapt faster. They built the companies that defined the next era. The ones who couldn't read it built on the wrong layers and paid for it in migration costs, lock-in, and lost time.

The new customer for infrastructure isn't a human with a browser. It's an LLM with a tool-call interface. Every assumption about how software gets provisioned, authenticated, billed, and composed is being renegotiated. As security researcher Daniel Miessler recently pointed out at [un]prompted, we are entering an era where "your company exists as an API, and if people's AIs can't use your company in that way, you kind of don't exist."

The builders who thrive won't be the ones with the best model or the cleverest prompt. They'll be the ones who can read the stack. Who know which layers to build on, which to build, and which to watch from a safe distance while someone else takes the risk.