Users running CrowdStrike want better visibility into actions taken by AI components, such as notifications or logs when automated actions occur, so they can review and manage these processes effectively.
As these platforms add more AI-driven automation (autonomous triage, auto-response, AI-based policy changes), how are you currently keeping track of what these AI components are actually doing? I'm not asking about threat detection quality; this is more about the operational side. Do you know when an AI feature took an automated action? Do you review it? Is there any process around it, or is it pretty much set and forget? Genuinely curious how teams are handling this in practice.