Extend the toolkit's governance capabilities to cover physical AI agents (e.g., robotic/actuator systems using ROS 2, MAVLink), addressing the current gap in security coverage for systems that can cause irreversible harm.
The toolkit covers the software agent security surface well. Opening this issue to discuss a gap that the current seven packages don't address: **physical AI agents** — LLM-driven systems that actuate in the real world via ROS 2, MAVLink (drones), industrial controllers, or embedded hardware.

## Why physical agents require additional governance primitives

Software agents that go wrong can be rolled back. Physical agents that go wrong cause irreversible harm — a drone that arms without authorization, a robot arm that exceeds safe velocity near a human, a welding robot that actuates without human sign-off. The OWASP Agentic Top 10 framework covers these conceptually, but the **enforcement mechanism is different**:

| OWASP Category | Software agent | Physical agent |
|---|---|---|
| **Tool Misuse (OAT-05)** | Block the API call | Block the hardware command *before it reaches the actuator* |
| **Cascading Failures (OAT-08)** | Rate limit HTTP calls | Cap collective kinetic energy Σ(½mv²) acro |
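A minimal sketch of what such a pre-actuation enforcement layer could look like: a policy gate that rejects any hardware command exceeding a per-actuator velocity cap, or one that would push the collective kinetic energy Σ(½mv²) of all active commands over a budget. All names, types, and limit values here are hypothetical placeholders; a real implementation would sit in front of the ROS 2 or MAVLink transport rather than operate on plain dataclasses.

```python
from dataclasses import dataclass


@dataclass
class ActuatorCommand:
    """Hypothetical representation of one commanded motion."""
    actuator_id: str
    mass_kg: float       # mass of the moving assembly
    velocity_mps: float  # commanded velocity

# Assumed policy values for illustration only.
MAX_VELOCITY_MPS = 0.5       # per-actuator safe-velocity cap
MAX_KINETIC_ENERGY_J = 10.0  # collective kinetic-energy budget


def kinetic_energy(cmd: ActuatorCommand) -> float:
    """½mv² for a single commanded motion."""
    return 0.5 * cmd.mass_kg * cmd.velocity_mps ** 2


def gate(cmd: ActuatorCommand, active: list[ActuatorCommand]) -> bool:
    """Return True only if the command may be forwarded to hardware.

    Enforces the two physical-agent checks from the table above:
    a per-actuator velocity cap (Tool Misuse) and a collective
    kinetic-energy budget across all active commands (Cascading
    Failures). A failing command is blocked before it ever
    reaches the actuator.
    """
    if abs(cmd.velocity_mps) > MAX_VELOCITY_MPS:
        return False
    total = kinetic_energy(cmd) + sum(kinetic_energy(c) for c in active)
    return total <= MAX_KINETIC_ENERGY_J
```

The key design point is that the gate evaluates the *fleet's* aggregate energy, not each command in isolation, so a swarm of individually-safe commands can still be rejected collectively.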