**Anyone else flying blind with n8n AI workflows? Building a "Datadog for n8n."**

I've been building a lot of AI agent workflows in n8n, and observability is a nightmare. Questions like:

* Is an agent stuck in a loop, burning tokens?
* Which node is causing failures?
* Are prompts quietly failing 20% of the time?

I tried LangSmith, but it's rough with n8n:

* Hard to use on **n8n Cloud** (env var issues)
* All traces go into one giant project
* Hard to map traces back to specific visual nodes
* Evals aren't integrated into workflows

So I'm building a **plug-and-play n8n Community Node for AI observability**. The idea:

* Drop the node after AI steps
* Add an API key
* Get a dashboard with **token usage, latency, errors by workflow/node**, alerts for token bleed, and automatic output evals

It works on **n8n Cloud** and requires no Docker setup.

**Question:** If this existed today, would you use it? What features would make it a must-have?
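For concreteness, here's a rough sketch (all names and thresholds are hypothetical, not a real API) of the kind of per-step event the node could emit and a simple token-bleed heuristic: flag a node that fires many times in one execution while its cumulative tokens blow past a budget.

```typescript
// Hypothetical telemetry event emitted after each AI step.
interface AiStepEvent {
  workflowId: string;
  nodeName: string;
  promptTokens: number;
  completionTokens: number;
  latencyMs: number;
  error?: string;
}

// Heuristic: a node called more than maxCallsPerNode times in a single
// execution, with cumulative tokens over tokenBudget, looks like a loop
// burning tokens. Returns the names of flagged nodes.
function detectTokenBleed(
  events: AiStepEvent[],
  maxCallsPerNode = 5,
  tokenBudget = 20_000,
): string[] {
  const calls = new Map<string, number>();
  const tokens = new Map<string, number>();
  for (const e of events) {
    calls.set(e.nodeName, (calls.get(e.nodeName) ?? 0) + 1);
    tokens.set(
      e.nodeName,
      (tokens.get(e.nodeName) ?? 0) + e.promptTokens + e.completionTokens,
    );
  }
  const flagged: string[] = [];
  for (const [node, count] of calls) {
    if (count > maxCallsPerNode && (tokens.get(node) ?? 0) > tokenBudget) {
      flagged.push(node);
    }
  }
  return flagged;
}
```

The dashboard aggregations (latency, error rate by workflow/node) would run over the same event stream server-side; the node itself just forwards events.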