Users express significant concern over data privacy and sovereignty when M365 Copilot uses third-party LLMs (e.g., Anthropic's Claude), as data is processed outside Microsoft-managed environments. They request clearer controls, auditing capabilities, and transparency about how access to these models is managed in tenant settings, and advocate that the models be disabled by default until their guarantees match the rest of the M365 ecosystem.
I've deleted my post from this morning expressing excitement about Microsoft offering Claude models as part of M365 Copilot. I assumed that, like every other AI offering in the M365 / Azure ecosystem (including models from OpenAI, Meta, DeepSeek and even xAI), all processing would be hosted in Azure and the usual organisational data privacy and sovereignty guarantees would be in place.

This is NOT the case. Microsoft say on their website: "When your organization chooses to use an Anthropic model, your organization is choosing to share your data with Anthropic ... This data is processed outside all Microsoft‑managed environments and audit controls, ... In addition, Microsoft's data‑residency commitments, audit and compliance requirements, service level agreements, and Customer Copyright Commitment do not apply to your use of Anthropic services."

As a long-standing Microsoft fanboy, in no small part because they do enterprise privacy so well, this feels like a punch to the gut. M365 Copilot is the AI service for work. We accept that it doesn't offer the very best models or the broadest feature set, in exchange for the safety of being able to use AI on work data without the risk of breaching GDPR, infringing copyright, or simply leaking internal organisational information. That, along with its integration into the M365 ecosystem that dominates knowledge work in most orgs, makes it an outstanding, no-brainer value proposition.

This partnership with Anthropic is nothing like the one with OpenAI, where Microsoft literally owns its own versions of the GPT models and runs them entirely separately from OpenAI. Adding Anthropic this way effectively invites a third-party AI lab to access any and all internal work data.
As someone who ordinarily pleads with sysadmins to enable tech capabilities that improve employees' ability to do their jobs, I'm taking the opposite position here: if you're a sysadmin in a country where GDPR applies, and assuming you have a finite lawsuit budget, 𝐮𝐧𝐝𝐞𝐫 𝐧𝐨 𝐜𝐢𝐫𝐜𝐮𝐦𝐬𝐭𝐚𝐧𝐜𝐞𝐬 𝐬𝐡𝐨𝐮𝐥𝐝 𝐲𝐨𝐮 𝐞𝐧𝐚𝐛𝐥𝐞 𝐂𝐥𝐚𝐮𝐝𝐞 𝐚𝐜𝐜𝐞𝐬𝐬 𝐢𝐧 𝐌365 𝐂𝐨𝐩𝐢𝐥𝐨𝐭. It hurts me to say this because I love the Claude models, but this is a red line.