Just open-sourced **guardrails-for-ai-coders**, a GitHub repo of security prompts and checklists built specifically for AI coding workflows.

**Repo:** https://github.com/deepanshu-maliyan/guardrails-for-ai-coders

**The idea:** Developers using Copilot/ChatGPT/Claude ship code fast, but AI tools don't enforce security. This repo gives you ready-made prompts to run security reviews inside any AI chat.

**Install:**

```
curl -sSL https://raw.githubusercontent.com/deepanshu-maliyan/guardrails-for-ai-coders/main/install.sh | bash
```

Creates a `.ai-guardrails/` folder in your project with:

- 5 prompt files (PR review, secrets scan, API review, auth hardening, LLM red-team)
- 5 checklists (API, auth, secrets, LLM apps, frontend)
- Workflow guides for ChatGPT, Claude Code, Copilot Chat, Cursor

**Usage:** Drag any `.prompt` file into ChatGPT or Copilot Chat, paste your code, and get structured findings with CWE references and fix snippets.

MIT licensed. Would love feedback on the prompt structure and contributions for new stacks (Python, Go, Rust).
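If you prefer the terminal over drag-and-drop, the same workflow can be scripted: concatenate a prompt file with the code you want reviewed and paste the result into any AI chat. A minimal sketch below; the file name `pr-review.prompt` and the `prompts/` subfolder are assumptions, so check your actual `.ai-guardrails/` layout after install.

```shell
# Stand-in for the installed layout; the real prompt file names may differ.
mkdir -p .ai-guardrails/prompts
printf 'Review the following code for security issues.\n' \
  > .ai-guardrails/prompts/pr-review.prompt

# Example code to be reviewed.
printf 'password = "hunter2"\n' > snippet.py

# Prepend the prompt to your code; paste review-request.txt into the AI chat.
cat .ai-guardrails/prompts/pr-review.prompt snippet.py > review-request.txt
```

On macOS you could pipe straight to the clipboard instead (`cat … | pbcopy`), or `xclip` on Linux.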