IL2, IL4, IL5: Impact Levels and Copilot
A practical explainer of DoD Impact Levels (IL2, IL4, IL5) and how to talk about Copilot deployments in that context without over-claiming. We'll cover what Impact Levels mean, how Microsoft publishes IL-related compliance information, and what to document when deploying Copilot in GCC, GCC High, or DoD.
Overview
In DoD and defense industrial base environments, Impact Levels aren’t just jargon. They’re deployment constraints that determine which cloud environments can host which workloads and data types. When you’re evaluating Microsoft 365 Copilot for GCC, GCC High, or DoD environments, you need to be able to answer: what Impact Level context are we operating in, and where do we verify what’s authorized?
This video gives you a practical explanation of DoD Impact Levels, explains how Copilot inherits the compliance posture of your environment, and walks through the verification and documentation steps you need to align Copilot deployment with your IL requirements.
What You’ll Learn
- Impact Level Basics: What IL2, IL4, and IL5 mean in DoD cloud security requirements
- Copilot Context: How Copilot inherits the compliance posture of your Microsoft 365 environment
- Verification Process: How to verify IL alignment using Microsoft’s offering documentation
- Documentation Requirements: What to document for your SSP and ATO package
Script
Hook: the three letters that drive decisions
In DoD environments, “Impact Level” isn’t jargon. It’s a deployment constraint.
It determines which cloud you can use. It determines what workloads you can run. It determines what data you can store.
So if you’re evaluating Copilot for a DoD or defense industrial base environment, you need to be able to say: what IL context are we in, and where do we verify what’s authorized?
That’s what we’re covering today. No marketing speak. Just the practical steps to verify IL alignment and document Copilot deployment correctly.
Impact Levels in plain language
Let’s start with what Impact Levels actually are.
DoD Impact Levels are part of the Department of Defense’s Cloud Computing Security Requirements Guide, the SRG. They categorize what types of data and workloads a cloud environment can support based on the potential impact if that data were compromised.
The SRG originally defined six levels, but the current version consolidates them into four: IL2, IL4, IL5, and IL6. The ones you hear about most in government cloud discussions are IL2, IL4, and IL5.
IL2 is for unclassified DoD information approved for public release. Think public-facing content, general communications, things that wouldn’t cause harm if disclosed.
IL4 is for controlled unclassified information, CUI. This is where most DoD mission work happens. It’s not classified, but it’s sensitive. Export-controlled technical data, procurement-sensitive information, law enforcement records, things that could cause real harm if they got out.
IL5 is for CUI that requires protection beyond IL4, plus unclassified National Security Systems. It's still unclassified, but it's mission-critical, national security work. Classified information up to Secret is IL6, which requires dedicated infrastructure and sits outside this discussion.
Each Impact Level has progressively stricter requirements for the cloud provider and for you as the customer. Different security controls, different isolation requirements, different monitoring expectations.
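One way to keep the taxonomy above straight in internal tooling or documentation scripts is a plain lookup table. This is an illustrative sketch, not an authoritative mapping; the function and table names are invented, and the definitive source is always the current DISA SRG.

```python
# Illustrative sketch of the Impact Level taxonomy described above.
# This table is a documentation aid, not an authoritative source;
# always verify against the current DISA Cloud Computing SRG.

IMPACT_LEVELS = {
    "IL2": "Controlled unclassified information (CUI)" and
           "Unclassified DoD information approved for public release",
    "IL4": "Controlled unclassified information (CUI)",
    "IL5": "CUI requiring higher protection, including unclassified "
           "National Security Systems",
}

def describe(level: str) -> str:
    """Return the plain-language description for an in-scope Impact Level."""
    try:
        return IMPACT_LEVELS[level]
    except KeyError:
        # IL6 and retired levels are deliberately out of scope here.
        raise ValueError(f"Unknown or out-of-scope Impact Level: {level}")

print(describe("IL4"))
```

A table like this is only useful if someone owns keeping it in sync with the SRG; treat it as generated documentation, not policy.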
Here’s the guidance line to remember: don’t rely on hallway memory for IL mappings. Use the official offering documentation for the exact service and environment you’re deploying.
Where Copilot fits conceptually
So where does Copilot fit in this?
Copilot is a Microsoft 365 capability that grounds on your tenant’s data. It works across Teams, Outlook, Word, Excel, PowerPoint, and more. It uses Microsoft Graph to access organizational content that users already have permission to see.
Here’s the key thing: Copilot doesn’t have its own separate compliance posture. It inherits the compliance posture of the Microsoft 365 environment it’s running in and the tenant controls you configure.
If you’re in GCC, Copilot runs in GCC. If you’re in GCC High, it runs in GCC High. If you’re in the DoD environment, it runs there.
The Impact Level authorization isn’t about Copilot as a feature. It’s about the authorized Microsoft 365 offering in your environment.
You validate IL posture at the environment and offering level, not at the “feature marketing” level. You’re not approving “AI.” You’re approving specific Microsoft Online Services in an authorized cloud environment, and you’re documenting how Copilot operates within that service boundary.
That’s the framing that works for ATO packages and for auditors.
How to verify IL alignment for your deployment
Now let’s talk about the actual verification steps.
Step one: identify your environment. Are you in GCC? GCC High? The DoD environment? Each one maps to a different Impact Level context.
GCC generally aligns with IL2 requirements. It’s built for federal agencies working with unclassified data that’s not CUI.
GCC High is built for IL4 workloads. It provides the isolation, control, and documentation needed for controlled unclassified information, including export-controlled data.
The Microsoft 365 DoD environment holds its own provisional authorization from DISA at IL5, specifically for DoD mission owners.
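The environment-to-IL guidance above can be restated as a small lookup, for example in a deployment pre-check script. The mapping below is illustrative and mirrors this document's framing; the names are assumptions, and the authoritative answer is always Microsoft's published offering documentation for your specific services.

```python
# Hypothetical helper restating the environment-to-IL guidance above.
# The mapping is illustrative; verify the current provisional
# authorizations in Microsoft's official offering documentation.

ENVIRONMENT_IL_CONTEXT = {
    "GCC": "IL2",
    "GCC High": "IL4",
    "DoD": "IL5",
}

IL_ORDER = ["IL2", "IL4", "IL5"]

def candidate_environments(workload_il: str) -> list:
    """Environments whose documented IL context meets or exceeds the workload's IL."""
    needed = IL_ORDER.index(workload_il)
    return [env for env, il in ENVIRONMENT_IL_CONTEXT.items()
            if IL_ORDER.index(il) >= needed]

print(candidate_environments("IL4"))
```

Note that "meets or exceeds" is a simplification: a higher-IL environment isn't automatically the right home for a lower-IL workload, so treat the output as a starting point for the verification steps, not a decision.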
Step two: identify the specific Microsoft 365 offering and Copilot experiences in scope. Are you deploying Copilot for Microsoft 365? Just Copilot in Teams? Are you using other Copilot capabilities like Copilot Studio or Copilot for Security? Each service has its own compliance documentation.
Step three: use Microsoft’s published DoD IL offering documentation for the services in question. Microsoft publishes detailed compliance information that maps offerings to Impact Levels. That’s your authoritative source. Not slide decks, not blog posts, not third-party summaries. The official offering documentation.
Step four: capture the boundary statement and shared responsibility notes for your package. This is where your SSP and ATO language gets specific. What data is in scope? What controls are inherited from Microsoft’s authorization? What controls are customer-owned? What logs and retention policies are you enforcing?
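The four verification steps above lend themselves to a checklist record you can carry into the ATO package. This is a minimal sketch; the class and field names are invented for illustration and should be aligned with your SSP template.

```python
# Sketch of a verification checklist for the four steps above.
# Field names are invented for illustration; align them with your
# organization's SSP template.
from dataclasses import dataclass, field

@dataclass
class ILVerification:
    environment: str                                     # step 1: GCC, GCC High, or DoD
    offerings: list = field(default_factory=list)        # step 2: services in scope
    doc_references: list = field(default_factory=list)   # step 3: official offering docs
    boundary_notes: list = field(default_factory=list)   # step 4: boundary and
                                                         # shared-responsibility notes

    def is_complete(self) -> bool:
        """True when all four steps have at least one recorded artifact."""
        return bool(self.environment and self.offerings
                    and self.doc_references and self.boundary_notes)

record = ILVerification(environment="GCC High",
                        offerings=["Microsoft 365 Copilot"])
record.doc_references.append("DoD IL4 offering documentation (link captured in SSP)")
record.boundary_notes.append("Customer-owned: Conditional Access, DLP, audit retention")
```

The point of the structure is the `is_complete` check: a package that skips any of the four steps shouldn't move forward.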
For Copilot, you need to document:
Data handling. Copilot processes prompts and responses within the Microsoft 365 service boundary. Prompts, responses, and the data Copilot retrieves through Microsoft Graph aren't used to train foundation models, and your content stays in your tenant.
Identity and device controls. Conditional Access policies, multi-factor authentication, compliant device requirements. These are yours to configure and enforce.
Permission governance. Copilot can only access what users can access. If you have oversharing problems, you need to fix them. Sensitivity labels, data loss prevention policies, retention and deletion controls.
Audit logging and monitoring. You configure the logging, you define the retention, you build the investigation playbooks.
That’s the work that turns an authorized environment into a documented, defensible deployment.
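For the audit logging piece, a common starting point is filtering an exported unified audit log for Copilot interaction events. The sketch below assumes a JSON-lines export where each record carries `RecordType` and `UserId` fields; those names mirror the unified audit log schema, but verify them against your actual export before relying on this.

```python
# Sketch: filter an exported audit log for Copilot interaction events.
# Assumes a JSON-lines export with "RecordType" and "UserId" fields,
# as in unified audit log exports; confirm the exact schema of your
# own export, since field names here are an assumption.
import json

def copilot_events(path: str):
    """Yield records whose RecordType marks a Copilot interaction."""
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            if record.get("RecordType") == "CopilotInteraction":
                yield record

def users_with_copilot_activity(path: str) -> set:
    """Distinct users with Copilot activity, e.g. for a monitoring report."""
    return {r.get("UserId") for r in copilot_events(path)}
```

Filtering an export is only the reporting half; the retention window and the investigation playbook that consumes reports like this are the parts auditors actually ask about.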
Common pitfalls
Let’s talk about where people get tripped up.
First pitfall: over-simplifying. Impact Level is not a single label you slap on “Copilot.” It’s about the authorized environment and the specific services you’re operating. You can’t say “Copilot is IL4” as a blanket statement. You have to say “We’re deploying Copilot for Microsoft 365 in our GCC High environment, which holds the appropriate authorization for our IL4 workload.”
Second pitfall: mixing terms. FedRAMP, DoD Impact Levels, and DISA SRG requirements are related but not interchangeable. FedRAMP is a federal program built on NIST controls. Impact Levels are DoD’s categorization scheme. The SRG is the actual requirements document. Know which one you’re talking about in each conversation.
Third pitfall: forgetting customer responsibilities. Even in an IL5-authorized environment, you still own identity, permissions, monitoring, and governance. The cloud provider’s authorization doesn’t implement your organization’s policies for you.
And here’s a bonus pitfall: relying on outdated information. Microsoft’s compliance offerings evolve. New services get added. Boundaries get updated. Don’t copy-paste old ATO language without verifying current documentation.
Close: the safe, accurate answer
So here’s the answer you can give when leadership or auditors ask about Impact Levels and Copilot.
“We’ll align Copilot to our IL context by verifying the authorized Microsoft offering for our environment using Microsoft’s DoD IL documentation. Copilot operates within Microsoft 365’s service boundary and inherits tenant controls, so our deployment package focuses on documenting data handling, enforcing identity and device controls, fixing oversharing risk, and ensuring auditing and retention meet requirements.”
That’s accurate. That’s defensible. That’s what you put in writing.
Impact Levels aren’t scary once you understand they’re about verifying the environment, not labeling a feature. Do the verification work. Use authoritative sources. Document the customer responsibilities clearly.
That’s how you deploy Copilot in DoD environments without over-claiming or under-delivering.
Sources & References
- Microsoft DoD IL2 Compliance Offering — Microsoft documentation describing DoD IL2 compliance offering context
- Microsoft DoD IL4 Compliance Offering — Azure compliance documentation for DoD IL4 offering context
- Copilot Security Model — Copilot security model overview to frame boundary and control inheritance
- Copilot Data Handling and Privacy — Copilot data handling and service boundary language for documentation