Sensitivity Labels and Copilot
How Microsoft Purview sensitivity labels work with Copilot. Learn how labels travel with the data, how they affect summaries, and what users need to know.
Overview
In government, data classification isn’t optional. You have CUI, PII, and mission-sensitive data that requires specific handling.
The question is: When Copilot reads a “Secret” document and helps you write an email, does it know the email should also be “Secret”?
This video breaks down the interaction between Microsoft Purview Sensitivity Labels and Copilot. We’ll explain how protection travels with the data, what the user experience looks like, and how to configure your policy to ensure compliance.
What You’ll Learn
- How Copilot handles encrypted and labeled content
- The “inheritance” logic for generated content
- Policy configuration tips for government agencies
- Why user training is your most important control
Script
Hook: labels are the ‘rules of the road’ for sensitive data
In government, a label isn’t just a sticker. It’s an instruction. It tells the system—and the user—how to handle the data.
So the big Copilot question is: if I have a document labeled “CUI,” does Copilot respect that? Or does it accidentally strip the protection away?
The short answer is: Copilot relies on your labels. It doesn’t replace them.
Quick refresher: what sensitivity labels do
Sensitivity labels classify and protect your content.
When you apply a label, two things happen:
- Metadata: The file is tagged so other systems know what it is.
- Protection: You can enforce encryption, watermarks, or access restrictions.
Copilot lives downstream of this. It only sees what the user is allowed to see. If a file is encrypted and the user lacks the usage rights to open it and extract its content, Copilot can't read it either.
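The permission gate above can be sketched as a toy model: retrieval runs under the signed-in user's identity, so only files that user can already open are candidates for grounding. The file names, labels, and reader lists below are illustrative assumptions, not a real Purview or Graph API.

```python
# Toy model of the "Copilot only sees what the user sees" rule.
# All data here is hypothetical; real enforcement happens in the
# Microsoft 365 service, not in client code like this.
FILES = [
    {"name": "budget.docx", "label": "CUI", "readers": {"alice"}},
    {"name": "newsletter.docx", "label": "Public", "readers": {"alice", "bob"}},
]

def readable_by(user: str) -> list[str]:
    """Return only the files the given user is authorized to open;
    anything outside this list is invisible to Copilot for that user."""
    return [f["name"] for f in FILES if user in f["readers"]]

print(readable_by("bob"))  # ['newsletter.docx']
```

The point of the sketch: access control is evaluated per user before any AI reasoning happens, so fixing over-broad permissions fixes what Copilot can surface.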
How labels show up in Copilot scenarios
But what happens when Copilot can read it?
Let’s say you ask Copilot to summarize a Word document labeled “Internal Use Only.”
Copilot reads the document (because you have access). It generates the summary.
Crucially, the user is responsible for the final output. Copilot creates a draft. If that draft contains sensitive info from the source, the user must ensure the new file (or email) is labeled correctly.
In some configurations, Copilot can inherit the label from the source context, but in government environments, you should treat Copilot as a draft accelerator. The human in the loop is the classification authority.
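The inheritance logic described above can be summarized as "highest source label wins": a draft built from labeled sources should carry at least the most sensitive label among them. The label names and their priority order below are illustrative assumptions, not Purview's actual taxonomy.

```python
# Minimal sketch of the "highest source label wins" inheritance rule.
# LABEL_PRIORITY is a hypothetical taxonomy, ordered least to most
# sensitive; a real deployment would use its own agency labels.
LABEL_PRIORITY = ["Public", "Internal Use Only", "CUI", "Secret"]

def required_label(source_labels: list[str]) -> str:
    """Return the minimum label a draft should carry: the
    highest-priority label found among its source documents."""
    if not source_labels:
        return LABEL_PRIORITY[0]  # no labeled sources: default to lowest
    return max(source_labels, key=LABEL_PRIORITY.index)

# A draft grounded in an "Internal Use Only" doc and a "CUI" doc
# should be labeled at least "CUI".
print(required_label(["Internal Use Only", "CUI"]))  # CUI
```

This is the check the human in the loop performs mentally before sending: does my output carry at least the label of the most sensitive thing Copilot read?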
Configuration: policy choices that matter
To make this work, you need to configure your label policies intentionally.
- Taxonomy: Make sure your labels match your actual data types (CUI, PII, etc.).
- Mandatory Labeling: Consider requiring users to label documents and emails before saving or sending. This forces the “human in the loop” moment.
- Justification: If a user tries to downgrade a label, require a justification code.
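The last two policy choices can be thought of as gates at send time: mandatory labeling blocks unlabeled content, and downgrades require a justification. The sketch below is a toy model of that decision logic, not the actual Purview enforcement engine, and the label taxonomy is assumed.

```python
# Toy model of two policy gates: mandatory labeling and
# downgrade justification. Real enforcement is configured in
# Microsoft Purview label policies, not implemented in client code.
LABEL_PRIORITY = ["Public", "Internal Use Only", "CUI", "Secret"]

def validate_send(label, previous_label=None, justification=None):
    """Return (allowed, reason) for an outgoing document or email."""
    if label is None:
        return (False, "Blocked: a sensitivity label is required before sending.")
    if previous_label is not None:
        if LABEL_PRIORITY.index(label) < LABEL_PRIORITY.index(previous_label):
            if not justification:
                return (False, "Blocked: downgrading a label requires a justification.")
    return (True, "OK")

print(validate_send(None))                          # blocked: no label
print(validate_send("Internal Use Only", "CUI"))    # blocked: downgrade
print(validate_send("Internal Use Only", "CUI",
                    "Reviewed: no CUI present"))    # allowed
```

Both gates force the "human in the loop" moment the script describes: the user must consciously confirm a classification before content leaves their hands.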
Operational guidance: training + enforcement loop
This brings us to the most important control: Training.
You must train your users on a simple concept: “Label before you share.”
Copilot makes it easy to create content fast. That means it’s easy to create unlabeled content fast. Your training needs to reinforce that verifying the label is part of the drafting process.
Close: what to document for compliance
For your ATO or compliance package, document this:
- Your label taxonomy.
- Your policy settings (encryption, mandatory labeling).
- The results of pilot tests verifying that Copilot respects the encryption on your sensitive files.
That is your governance story.
Next up, we’ll talk about Oversharing Risks—and why “sharing” is the root cause of most AI security headaches.
Sources & References
- Sensitivity labels and Microsoft Copilot — Guidance on how Copilot interacts with protected content within the security model
- Learn about sensitivity labels — Fundamentals of classification and protection labels in Purview
- Data, Privacy, and Security for Microsoft 365 Copilot — Service boundary and permission model context