AI code review is broken. But we need to get it right.

Most tools treat your codebase like it exists in a vacuum. They scan diffs. Find syntax issues. Suggest "improvements" that ignore why the code was written that way in the first place. Then engineers get fatigued by the noise. They stop paying attention. The tools become shelfware.

We've been hearing this from teams for months. "We tried [tool X]. It just doesn't understand our code." These tools lack the context to provide meaningful feedback. They see code, not systems. They see changes, not decisions. What a crappy loop.

So we're doing something about it at Unblocked: AI code review that doesn't suck. I know, genius right?

What makes ours different: it actually knows your codebase. Not just the diff you're reviewing, but the PRs that came before, the architectural decisions in your infrastructure configs, the discussions your team had about trade-offs, the patterns you've established. Context from everywhere your team works: GitHub, Slack, Confluence, Jira. It's the same context engine that's already helping tens of thousands of engineers ship faster in their IDEs, CLI, and workflows.

The result: reviews that feel like they came from someone who's been on your team for years. Not surface-level "consider adding error handling" comments, but substantive feedback that understands your systems and decisions.

"Unblocked Code Review made me reconsider my AI fatigue." - one of our early users at Clio

We built this because code review shouldn't suck. And AI shouldn't make it worse. If you're tired of superficial AI feedback, try it (link below) and give me your feedback. I think you'll be delighted.