User requests enhanced capabilities for AI agents (like those in Opencode) to handle complex, multi-step software engineering tasks. This includes scanning codebases for style, building implementation plans, implementing features, testing/linting, reviewing/refining, and automating with tools like GitHub Actions. There's a need for agents to manage tricky edge cases, make workflows more deterministic, select appropriate LLMs for each step, and ensure context is not bloated.
Hi there,

Would you be able to make a video (or instruct me here) where you tackle an issue, possibly within a full-stack app? I'd like to see a multi-step process and how to work with it. Let's say you have a vague data model from an external source, your own DB, and Figma designs; you have to implement a complicated issue and ensure that the data is validated and that the LLM correctly infers a visual representation of the data. This would surely make use of MCPs. Such a scenario would include:

- scan the codebase to learn the code style and modules, and to fit the new feature to what is already there
- build a plan for the requested requirements
- implement
- test / lint
- review / refine

Maybe with automation in GitHub Actions? How do you tackle issues that include tricky edge cases, with no straightforward requirements? A process to guide agents and make their work more deterministic and reliable would be awesome. How do you choose the right LLM for each of the steps (in terms of proficiency)? How do you make sure the context for each step is not bloated with unnecessary data? Should each of the modules have its own instructions (or an agent that holds that knowledge)? The list goes on. :D
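To make the question concrete, the step list above (scan → plan → implement → test → review) can be sketched as a small deterministic pipeline where each step gets its own model choice and a narrowly scoped context. Everything here is hypothetical — the model names, the `call_llm` stub, and the step set are invented for illustration, not an actual Opencode API:

```python
# Hypothetical sketch: each step declares which model it should use and
# which prior artifacts it is allowed to see, so no step's context is
# bloated with the full accumulated history.
from dataclasses import dataclass

@dataclass
class Step:
    name: str                # e.g. "scan", "plan", "implement"
    model: str               # model chosen for this step's difficulty
    context_keys: list[str]  # prior artifacts this step may read

def call_llm(model: str, prompt: str) -> str:
    """Stub: a real implementation would call the chosen model's API."""
    return f"[{model}] output for: {prompt[:40]}"

def run_pipeline(steps: list[Step], task: str) -> dict[str, str]:
    artifacts: dict[str, str] = {"task": task}
    for step in steps:
        # Build a minimal context from only the declared artifacts.
        context = "\n".join(artifacts[k] for k in step.context_keys if k in artifacts)
        artifacts[step.name] = call_llm(step.model, f"{step.name}:\n{context}")
    return artifacts

pipeline = [
    Step("scan", model="cheap-fast-model", context_keys=["task"]),
    Step("plan", model="strong-reasoning-model", context_keys=["task", "scan"]),
    Step("implement", model="strong-coding-model", context_keys=["plan"]),
    Step("test", model="cheap-fast-model", context_keys=["implement"]),
    Step("review", model="strong-reasoning-model", context_keys=["plan", "implement", "test"]),
]

results = run_pipeline(pipeline, "Add validated rendering for external data model")
print(sorted(results.keys()))
```

The design point is that `context_keys` is explicit rather than implicit: each step's input is fully determined by its declaration, which makes runs reproducible and makes it obvious where a step's knowledge should live (per-module instructions would simply become another named artifact a step can opt into).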