Add an 'Action Guard' feature to provide a centralized validation layer for agent tool calls, inspecting them before execution to prevent harmful or disallowed actions and enhance security.
### Is your feature request related to a problem? Please describe.

#### Background

AI agent adoption is growing across the industry, including in critical applications. Agents with access to tools currently call them directly, with no centralized validation layer inspecting those calls before execution, so harmful or disallowed tool calls can run without oversight. In this package, an Action Guard feature would automate that validation and secure the workflow.

The [Agent-Action-Guard](https://github.com/Pro-GenAI/Agent-Action-Guard) experiments found that GPT-5.3 has a safety score of 17.33%, indicating very high vulnerability and demonstrating the need for an Action Guard. 80% of the LLMs tested executed actions on the first attempt for over 95% of the harmful prompts. Agent-Action-Guard received 962 downloads on PyPI and 247 clones on GitHub in its first week.

#### Related Work

- https://github.com/Pro-GenAI/Agent-Action-Guard

### Describe alternatives you've considered
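For illustration, here is a minimal sketch of the proposed pre-execution validation layer: a guard wraps each tool, runs every registered rule against the call, and blocks execution if any rule objects. All names here (`ActionGuard`, `ActionBlockedError`, the example rule and tool) are hypothetical and do not reflect the package's actual API.

```python
# Hypothetical sketch of a centralized Action Guard for agent tool calls.
# Names are illustrative only, not the Agent-Action-Guard API.

class ActionBlockedError(Exception):
    """Raised when a tool call fails validation."""


class ActionGuard:
    def __init__(self):
        # Each rule takes (tool_name, kwargs) and returns a reason
        # string if the call must be blocked, or None to allow it.
        self._rules = []

    def add_rule(self, rule):
        self._rules.append(rule)

    def guard(self, tool):
        """Wrap a tool so every call is validated before execution."""
        def wrapped(**kwargs):
            for rule in self._rules:
                reason = rule(tool.__name__, kwargs)
                if reason is not None:
                    raise ActionBlockedError(
                        f"Blocked call to {tool.__name__}: {reason}"
                    )
            return tool(**kwargs)
        return wrapped


# Example tool and rule (both hypothetical):
def delete_file(path):
    return f"deleted {path}"


def block_system_paths(tool_name, kwargs):
    if tool_name == "delete_file" and str(kwargs.get("path", "")).startswith("/etc"):
        return "deleting system paths is disallowed"
    return None


guard = ActionGuard()
guard.add_rule(block_system_paths)
safe_delete = guard.guard(delete_file)
```

With this wiring, `safe_delete(path="/tmp/scratch.txt")` executes normally, while `safe_delete(path="/etc/passwd")` raises `ActionBlockedError` before the tool ever runs — the key property being that validation is centralized in the guard rather than duplicated inside each tool.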