A user has built a runtime certification layer for multi-step AI pipelines that checks outputs against specific constraints before shipping. They are looking for feedback and stress testing from other builders.
I'm running multi-step AI pipelines where the output needs to meet specific constraints before it ships. I got tired of writing validation logic per use case, so I built a single API that handles it. One call to `POST /api/v1/certify`:

* Compiles constraints from the intent
* Executes with live data (CoinGecko and Yahoo Finance for trading, or pure AI for everything else)
* Checks the output against every constraint
* Auto-refines up to 3x on failure, and rejects if it still can't fix it

Live demo on the landing page ([aru-runtime.com](http://aru-runtime.com)): type any intent and watch it certify in real time.

Free tier: 100 calls/month. Looking for agent builders to stress test it. What would you put through it?
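For anyone curious what the check-then-refine loop looks like, here's a minimal local sketch of that control flow. All names here (`Constraint`, `certify`, the `refine` callback) are illustrative, not the actual API: the real service compiles constraints from an intent and calls a model, while this toy version uses plain predicates and a caller-supplied fix function.

```python
# Illustrative sketch of a certify loop: check output against every
# constraint, refine up to 3x on failure, reject if still failing.
# Not the actual aru-runtime API; names and schema are assumptions.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Constraint:
    name: str
    check: Callable[[Any], bool]

def certify(output, constraints, refine, max_refinements=3):
    """Return a certified result, or reject after max_refinements tries."""
    failed = []
    for attempt in range(max_refinements + 1):
        failed = [c.name for c in constraints if not c.check(output)]
        if not failed:
            return {"status": "certified", "attempts": attempt, "output": output}
        if attempt < max_refinements:
            output = refine(output, failed)  # e.g. re-prompt the model
    return {"status": "rejected", "failed": failed}

# Example constraints: output must be a positive number.
constraints = [
    Constraint("is_number", lambda o: isinstance(o, (int, float))),
    Constraint("positive", lambda o: isinstance(o, (int, float)) and o > 0),
]

# Toy refiner: try coercing a string output to a float.
def refine(output, failed):
    try:
        return float(output)
    except (TypeError, ValueError):
        return output

print(certify("42.5", constraints, refine))   # certified after one refinement
print(certify("oops", constraints, refine))   # rejected: can't be fixed
```

The interesting design question this surfaces is what `refine` gets to see; passing the list of failed constraint names back into the model is what makes the retries targeted rather than blind re-rolls.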