Monday morning. 50 users couldn't authenticate through Okta.
Same error across all of them: "This device does not meet your organization's requirements."
This was several weeks after we'd enabled Okta Device Trust enforcement across our Windows fleet, part of the broader MDM overhaul I wrote about in Part 1. Device Trust had been live and quiet. Until Monday.
What Okta Device Trust Was Actually Enforcing
Okta's Device Trust policy for Windows had one explicit requirement: a minimum Windows build of 26200. Anything below that build was treated as non-compliant, and the Sign-On Policy rule was set to deny access for non-compliant devices.
That's the correct configuration for production. A device that hasn't received current Windows builds is a device you don't want inside your SSO perimeter.
The assumption behind the policy was that our Intune update ring had pushed build 26200 to every managed Windows machine before we enabled enforcement. That assumption was wrong.
The Intune Update Ring Gap
Intune update rings target devices through Azure AD groups. The ring had been running for weeks before enforcement went live. I took that as confirmation that every managed machine had received the build.
It hadn't.
When we pulled Intune device records against the locked-out accounts, the pattern was immediate: every affected machine was below build 26200. The ring had a targeting gap. Some devices had fallen through, and nobody had caught it because pre-enforcement, those machines were logging in normally. Nothing was blocking them, so nothing was flagging them.
The moment the deny rule went active, 50 users discovered simultaneously that their machines had never received the required build.
Finding the Common Variable
The first step in any incident like this is finding what the affected users have in common. The Intune device data gave us a clean answer fast: every locked-out machine was below build 26200, every unaffected machine was at or above it.
That clarity matters. When the root cause is unambiguous you can move to resolution immediately rather than spending time ruling out other variables. The harder question was how to restore access while fixing the underlying problem without permanently compromising the Device Trust posture we'd just enabled.
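The cross-reference itself is a small script. A minimal sketch, assuming you can pull Windows devices from Intune via Microsoft Graph with a pre-acquired bearer token (the token handling and device-record shape here are illustrative; `osVersion` is the field Intune actually reports):

```python
import json
import urllib.request

REQUIRED_BUILD = 26200

def build_number(os_version: str) -> int:
    """Extract the build from an Intune osVersion string like '10.0.26100.3194'."""
    return int(os_version.split(".")[2])

def partition_by_build(devices: list, required: int = REQUIRED_BUILD):
    """Split managed devices into below-build and at-or-above-build sets."""
    below = [d for d in devices if build_number(d["osVersion"]) < required]
    at_or_above = [d for d in devices if build_number(d["osVersion"]) >= required]
    return below, at_or_above

def fetch_managed_devices(token: str) -> list:
    """Pull Windows device records from Intune via Microsoft Graph."""
    url = ("https://graph.microsoft.com/v1.0/deviceManagement/managedDevices"
           "?$filter=operatingSystem eq 'Windows'"
           "&$select=deviceName,userPrincipalName,osVersion")
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]
```

In our case `partition_by_build` split the fleet perfectly along the locked-out/unaffected line, which is what made the root cause unambiguous.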
The Two-Track Response
We ran two workstreams in parallel.
The first was immediate access restoration. Okta's bypass group mechanism lets you exempt specific users from Device Trust enforcement at the Sign-On Policy level. Adding a user to the bypass group restores their access without changing the policy for everyone else. We identified all 50 affected accounts and had them in the bypass group within the first 30 minutes.
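Bulk-adding 50 users is a short loop against Okta's standard add-user-to-group endpoint (`PUT /api/v1/groups/{groupId}/users/{userId}`). The org URL, group ID, and token handling below are placeholders, not our production values:

```python
import urllib.request

OKTA_ORG = "https://example.okta.com"        # placeholder org URL
BYPASS_GROUP_ID = "00g_bypass_placeholder"   # placeholder bypass group ID

def bypass_url(user_id: str) -> str:
    """Okta endpoint that adds a user to the bypass group."""
    return f"{OKTA_ORG}/api/v1/groups/{BYPASS_GROUP_ID}/users/{user_id}"

def add_to_bypass(user_id: str, api_token: str) -> None:
    """PUT with an empty body; Okta returns 204 No Content on success."""
    req = urllib.request.Request(
        bypass_url(user_id),
        method="PUT",
        headers={"Authorization": f"SSWS {api_token}"},
    )
    urllib.request.urlopen(req)
```

Because the bypass is evaluated at the Sign-On Policy level, the deny rule stays intact for everyone outside the group.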
The second was root cause remediation. We built a new Intune update ring explicitly scoped to the affected devices, targeted via an Azure AD group containing machines below build 26200. Before pushing it broadly, we tested it on a small set of machines to confirm it successfully delivered the build and updated device compliance status in Intune. Once confirmed, we pushed the new ring to the full affected group via an Okta group assignment to Intune.
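Populating the remediation group programmatically looks roughly like this, assuming Microsoft Graph's standard member-add call (`POST /groups/{id}/members/$ref`); the group ID and device object IDs are placeholders:

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"
REMEDIATION_GROUP_ID = "aad-group-placeholder"  # placeholder Azure AD group ID

def member_ref_payload(device_object_id: str) -> dict:
    """Request body for POST /groups/{id}/members/$ref."""
    return {"@odata.id": f"{GRAPH}/directoryObjects/{device_object_id}"}

def add_device_to_ring_group(device_object_id: str, token: str) -> None:
    """Add one device's directory object to the group the update ring targets."""
    req = urllib.request.Request(
        f"{GRAPH}/groups/{REMEDIATION_GROUP_ID}/members/$ref",
        data=json.dumps(member_ref_payload(device_object_id)).encode(),
        method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```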
By mid-morning, machines were receiving the update and reporting compliant.
The Automation
The bypass group was a temporary fix. Leaving 50 users in a Device Trust bypass indefinitely defeats the point of having Device Trust at all. But removing them manually (checking each machine individually, then pulling each user out once their device reached the required build) is the kind of process that doesn't scale and doesn't get done consistently under time pressure.
I built an automation using the Intune and Okta APIs. The logic: poll the reported OS build version for each device associated with a user in the bypass group. When a device reached build 26200 or higher, remove the user from the bypass group and from the Intune assignment group. Log the removal.
This ran continuously until the bypass group was empty. No manual verification. No users accidentally left in bypass because the follow-up fell off someone's task list.
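The loop described above can be sketched like this. The API-facing operations are injected as callables so the decision logic is isolated and testable; the endpoints behind them, the polling interval, and the user-to-device mapping are assumptions, not the exact production script:

```python
import logging
import time

REQUIRED_BUILD = 26200
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("bypass-cleanup")

def build_number(os_version: str) -> int:
    """Extract the build from an osVersion string like '10.0.26200.1234'."""
    return int(os_version.split(".")[2])

def sweep(bypass_users, device_build_for, remove_from_bypass, remove_from_ring):
    """One pass: drop every user whose device now meets the build floor.

    bypass_users       -- iterable of user IDs currently in the bypass group
    device_build_for   -- callable(user_id) -> reported osVersion string
    remove_from_bypass -- callable(user_id), Okta bypass-group removal
    remove_from_ring   -- callable(user_id), Intune assignment-group removal
    Returns the user IDs whose devices are still below the required build.
    """
    remaining = []
    for user_id in bypass_users:
        if build_number(device_build_for(user_id)) >= REQUIRED_BUILD:
            remove_from_bypass(user_id)
            remove_from_ring(user_id)
            log.info("removed %s from bypass (device now compliant)", user_id)
        else:
            remaining.append(user_id)
    return remaining

def run_until_empty(get_bypass_users, device_build_for,
                    remove_from_bypass, remove_from_ring, interval_s=900):
    """Poll until the bypass group drains (default: every 15 minutes)."""
    while users := list(get_bypass_users()):
        left = sweep(users, device_build_for, remove_from_bypass, remove_from_ring)
        if not left:
            break
        time.sleep(interval_s)
```

Injecting the four callables also means the removal logic can be unit-tested with plain dictionaries before it ever touches a live tenant.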
Total incident window: roughly 2 to 3 hours from first lockout to full resolution with cleanup underway.
What Changed After
Two things went into the runbook.
The first was a pre-enforcement compliance check. Before enabling any Device Trust policy with a specific OS version requirement, pull a compliance report from Intune and verify that every managed device in scope actually meets the requirement. Don't trust the update ring. Verify it directly against device records. This takes 20 minutes and would have prevented the incident entirely.
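That check reduces to a go/no-go gate over the exported device records. A minimal sketch, assuming you can export (deviceName, osVersion) pairs for every in-scope device; the field names here mirror Intune's device report but are illustrative:

```python
REQUIRED_BUILD = 26200

def build_number(os_version: str) -> int:
    """Extract the build from an osVersion string like '10.0.26200.1234'."""
    return int(os_version.split(".")[2])

def enforcement_gate(devices, required=REQUIRED_BUILD):
    """Return (ok, stragglers): ok is True only if every device meets the floor."""
    stragglers = [d["deviceName"] for d in devices
                  if build_number(d["osVersion"]) < required]
    return (not stragglers, stragglers)
```

If `ok` is False, the straggler list is the remediation queue, and enforcement waits until it's empty.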
The second was staged enforcement. Configure the policy but set the Sign-On Policy action to allow rather than deny initially, and monitor which devices would have been blocked under a deny rule. That population is your remediation list. Fix it before switching to a hard deny.
Both of those steps exist in the runbook now because I skipped them the first time. The update ring had been running for weeks and I assumed that meant every machine had received the build. Assumptions don't survive contact with Intune device targeting edge cases.
The automation pattern here (poll a compliance attribute, auto-remove from bypass when the condition is met) applies to any situation where you need a temporary access exception tied to a specific remediation state. If you're building something similar or running into Intune update ring behavior that doesn't match your expectations, I'm on LinkedIn.