
Field note

DLP for Copilot and Third-Party AI

The smartest AI policy in the world will not help much if sensitive content is already overshared, unlabeled and available to the wrong audience inside the tenant.

Published: 23 Mar 2026


Author: Gyorgy Bolyki

AI changes the speed of the problem, not the nature of it.

If sensitive documents are already sitting in broad SharePoint sites, if guests still have access they should have lost months ago, or if nobody can tell which files deserve stronger handling, Copilot and other AI tooling will make those weaknesses more visible. Sometimes much more visible.

That is why DLP for AI should not be framed as a magic shield. It is one important layer inside a bigger discipline: sensible permissions, clear labels, and realistic collaboration boundaries.

Microsoft now provides native DLP controls for Microsoft 365 Copilot and Copilot Chat. Those controls can block sensitive information in prompts and can restrict Copilot from using labeled files and emails in summarisation. That is useful, but only for the part of the estate those controls can actually reach. Third-party AI still pushes you back to the basics: browser policy, endpoint controls, data location, and whether users can casually copy sensitive content out in the first place.

Start with exposure, not policy slogans

Ask these questions before building a fancy AI policy.

  1. Which sites contain sensitive data?
  2. Who can access them?
  3. Are "Everyone" or broad groups used?
  4. Are guests present?
  5. Are sensitivity labels in use?
  6. Are unmanaged devices allowed to download files?
  7. Are DLP policies active?
  8. Who reviews alerts?

These questions are blunt on purpose. If you cannot answer them, your AI policy is probably getting ahead of your security reality.
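If you want to turn those blunt answers into a triage order, a few lines of scripting are enough. The sketch below is illustrative only: the data shape is hypothetical (in practice you would populate it from SharePoint admin reports or Microsoft Graph), and the weights are placeholders, not Microsoft guidance.

```python
from dataclasses import dataclass

@dataclass
class SiteExposure:
    """Checklist answers for one SharePoint site (hypothetical shape)."""
    name: str
    holds_sensitive_data: bool
    broad_groups_present: bool       # "Everyone" or similarly broad groups
    guests_present: bool
    labels_applied: bool
    unmanaged_downloads_allowed: bool
    dlp_active: bool

def exposure_score(site: SiteExposure) -> int:
    """Crude triage score: higher means tighten this site sooner.
    Weights are illustrative, not a standard."""
    if not site.holds_sensitive_data:
        return 0  # no sensitive data, no AI oversharing risk to triage
    score = 1
    score += 2 if site.broad_groups_present else 0
    score += 2 if site.guests_present else 0
    score += 1 if site.unmanaged_downloads_allowed else 0
    score += 1 if not site.labels_applied else 0
    score += 1 if not site.dlp_active else 0
    return score

def triage(sites: list[SiteExposure]) -> list[SiteExposure]:
    """Order sites so the worst-exposed sensitive locations surface first."""
    return sorted(sites, key=exposure_score, reverse=True)
```

The point is not the scoring model; it is that once you can answer the eight questions per site, prioritisation becomes mechanical instead of anecdotal.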

What DLP can do usefully here

Risk → Control

  • Sensitive data typed into prompts → DLP policies for Microsoft 365 Copilot and Copilot Chat
  • Labeled files and emails used in summarisation → Copilot DLP controls tied to sensitivity labels
  • Sensitive files overshared in collaboration spaces → Sharing cleanup and permissions review
  • Users copying data to unmanaged contexts → Endpoint, browser and Conditional Access controls
  • Unknown sensitive content spread across the tenant → Purview discovery, labeling and posture review
A sequence that makes operational sense

Do not start with a shiny AI standard document. Start with the ugliest practical gaps.

  1. Classify the obvious sensitive locations.
  2. Clean up broad SharePoint permissions.
  3. Apply labels to high-value sites and files.
  4. Enable the Purview controls that matter for Copilot use.
  5. Create DLP policies for the data types that genuinely matter.
  6. Monitor first, then enforce once false positives are understood.

That order matters. DLP is much easier to trust when the environment has already been narrowed a bit.
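The "monitor first, then enforce" step can be expressed as a simple gate: do not flip a policy to enforcement until you have seen enough matches to judge it, and until the reviewed false-positive rate is acceptably low. A minimal sketch, with placeholder thresholds that are assumptions rather than Purview defaults:

```python
def ready_to_enforce(total_matches: int,
                     confirmed_false_positives: int,
                     min_matches: int = 50,
                     max_fp_rate: float = 0.05) -> bool:
    """Illustrative gate for moving a DLP policy from monitoring to
    enforcement. Thresholds are placeholders: tune them to the data
    type and the volume your tenant actually generates."""
    if total_matches < min_matches:
        return False  # not enough signal yet; keep monitoring
    return confirmed_false_positives / total_matches <= max_fp_rate
```

In practice the match and false-positive counts come from reviewing DLP alerts during the monitoring phase; the useful discipline is writing the exit criteria down before enforcement, not the arithmetic itself.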

The uncomfortable part

Most AI risk inside Microsoft 365 is still a Microsoft 365 hygiene problem.

If a user already has valid access to an overshared file, AI can help them find it faster, summarise it faster, and use it in more places. That is a permissions problem first. DLP helps, but it should not be asked to rescue a wide-open collaboration model all on its own.

The useful metric is not how many policies exist in the portal. It is whether risky content is becoming harder to expose by accident.

What a good first month looks like

By the end of a decent first pass, you should be able to say:

  • these are our most sensitive Microsoft 365 locations
  • these are the groups and sites we tightened first
  • these are the Copilot-related DLP controls we enabled
  • these are the exceptions we accepted for now

That is not glamorous work. It is still the work that makes later AI adoption feel deliberate instead of reckless.
