The White House just threw a wrench into every multi-state AI compliance program. On December 11, 2025, President Trump issued an executive order directing the Department of Justice to identify and challenge state AI laws that conflict with federal policy. The "Ensuring a National Policy Framework for Artificial Intelligence" order broadly targets state-level AI regulation, but Colorado's AI Act is the only law explicitly named, and it is the one AI Policy Director David Sacks called "probably the most excessive." A DOJ AI Litigation Task Force must be operational within 30 days.
So here's the dilemma. Companies that spent months building compliance programs around state frameworks like Colorado's now face a tough call: keep investing in state-level compliance, or pause and wait for the legal dust to settle?
The answer depends on your risk tolerance, your customer base, and your funding timeline. But waiting isn't free.
What Happened
President Trump's December 11 executive order establishes federal preemption as the official policy for AI regulation. The order does three concrete things.
It creates a DOJ AI Litigation Task Force with a mandate to identify state AI laws that "obstruct" federal policy and initiate legal challenges against them. The 30-day formation deadline means the Task Force should be operational by mid-January 2026. While the broader preemption strategy encompasses multiple states, only Colorado's AI Act has been explicitly named as a target. Other states with AI legislation may face scrutiny, but they haven't been specifically identified for litigation.
The order also conditions federal broadband funding on state policy alignment. States that maintain AI regulations deemed inconsistent with federal policy risk losing access to federal infrastructure dollars. That's immediate pressure on state legislatures to reconsider their AI laws or face budget consequences.
And the same day brought separate OMB guidance (M-26-04) implementing the July 2025 "Preventing Woke AI" executive order. This guidance establishes transparency requirements for AI vendors selling to the federal government: contractors must now provide documentation on how AI models are built, trained, and modified. It also includes optional enhanced-disclosure provisions under which AI models would cite sources for their outputs. While this guidance stems from a different executive order, it creates a parallel federal standard for AI procurement even as the administration seeks to eliminate state-level requirements.
The legal mechanism here is federal preemption doctrine. The administration is arguing that comprehensive federal AI policy should displace conflicting state regulations under the Supremacy Clause, and it is pursuing that argument through litigation rather than waiting for Congress to pass preemptive legislation.
What It Means for Multi-State Enterprises
The compliance picture just got a lot messier.
Before December 11, companies had a clear if burdensome path: map your AI applications to each state's requirements, build controls to satisfy the strictest jurisdiction, and document everything. Colorado's AI Act was emerging as a significant benchmark, requiring algorithmic impact assessments, consumer disclosures, and notice requirements for high-risk AI applications.
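To make the "strictest jurisdiction" approach concrete, here is a minimal sketch of that mapping exercise in Python. The state codes are real, but the control identifiers and per-state requirement sets are illustrative placeholders for this sketch, not statements of what any statute actually requires.

```python
# Sketch: map each state's AI requirements, then build to the union
# across deployed states (the strictest combined control set).
# Control identifiers below are illustrative, not statutory citations.

STATE_REQUIREMENTS = {
    "CO": {"impact_assessment", "consumer_disclosure", "adverse_decision_notice"},
    "CA": {"training_data_disclosure", "consumer_disclosure"},
    "MD": {"sensitive_data_sale_ban", "consumer_disclosure"},
}

def required_controls(deployed_states):
    """Union of per-state requirements: the strictest combined bar."""
    controls = set()
    for state in deployed_states:
        controls |= STATE_REQUIREMENTS.get(state, set())
    return sorted(controls)

print(required_controls(["CO", "MD"]))
# -> ['adverse_decision_notice', 'consumer_disclosure',
#     'impact_assessment', 'sensitive_data_sale_ban']
```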
Worth noting what didn't happen: California Governor Newsom vetoed SB 1047 (the AI Safety Bill) in September 2024. The comprehensive AI safety framework that bill would have established never became law. California did pass 18 AI-related laws in 2025 (from 38 bills sent to the governor), but none create the comprehensive safety requirements SB 1047 would have imposed. That leaves Colorado's AI Act as the most significant state-level AI compliance obligation currently on the books.
Now that roadmap has a question mark drawn through it. The DOJ may challenge Colorado's law in federal court, and the broader preemption policy creates uncertainty for any state AI regulation. The litigation could take years to resolve. Appeals are inevitable regardless of which side wins at the district court level. A reasonable estimate puts final resolution in 2028 or 2029 at the earliest.
This creates a genuine dilemma for compliance teams. Continue building to Colorado standards, and you may be investing in requirements that get struck down. Pause or scale back, and you risk enforcement actions in states where the laws remain on the books until courts say otherwise. Colorado's Attorney General has shown willingness to enforce consumer protection and AI statutes aggressively. There's no guarantee federal courts will issue preliminary injunctions while litigation proceeds.
But here's the counterargument to pausing. These state laws exist and are enforceable today. The executive order creates a policy direction and a litigation strategy, not an immediate legal safe harbor. Colorado's AI Act doesn't become unenforceable because the DOJ announces plans to challenge it. Until a court issues an injunction or rules the law preempted, enterprises face the same state enforcement risk they did on December 10.
There's also a pragmatic case for continuing state-level compliance work. The requirements in Colorado's AI Act, namely impact assessments, testing documentation, consumer disclosures, and incident reporting, largely overlap with what sophisticated AI buyers are demanding in commercial contracts anyway. Enterprise customers conducting vendor due diligence increasingly ask for evidence of bias testing, model governance documentation, and impact assessments. Building these capabilities positions you for commercial success regardless of how the preemption litigation resolves.
The companies most exposed? Those with significant operations or customers in Colorado and other states considering similar legislation. And those in the middle of fundraising cycles or strategic transactions where regulatory risk is a due diligence focus.
The Federal Contractor Overlay: OMB Guidance and the MD/DC Angle
For enterprises selling AI products to the federal government, December 11 added a compliance layer rather than simplifying the picture. The OMB guidance (M-26-04) issued the same day establishes new federal AI procurement requirements that apply regardless of what happens to state laws.
Per the OMB guidance, federal contractors must now provide transparency documentation for AI models, including information on how the models are built, trained, and modified. The guidance addresses enterprise-level governance controls and model evaluation processes. It also includes optional enhanced-disclosure provisions that would allow verification of source citations for AI outputs, though this source and provenance verification is not mandatory. For companies whose models don't currently maintain provenance information, the optional nature provides some flexibility.
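For teams scoping that documentation work, a structured record of a model's build, training, and modification history is one way to organize the artifacts. Below is a minimal sketch; the field names are our assumptions about what such a record might track, not the official M-26-04 documentation schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelDisclosureRecord:
    """Illustrative transparency record for an AI procurement response.

    Field names are assumptions for this sketch, not an official
    M-26-04 schema.
    """
    model_name: str
    version: str
    base_model: str                       # what the model was built from
    training_data_summary: str            # how the model was trained
    modifications: list = field(default_factory=list)  # fine-tunes, patches
    governance_controls: list = field(default_factory=list)
    cites_sources: bool = False           # optional enhanced disclosure

    def log_modification(self, when: date, description: str) -> None:
        """Append a dated entry to the modification history."""
        self.modifications.append(
            {"date": when.isoformat(), "description": description}
        )
```

Keeping this history as data rather than prose makes it easier to regenerate disclosure documents as models are retrained or patched.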
These requirements apply to direct sales to federal agencies and flow down to subcontractors providing AI capabilities as part of larger system deliverables. For the DC metro area's concentration of federal contractors and AI vendors, this is an immediate compliance obligation. The requirements don't depend on FAR amendments or rulemaking. They're effective now as procurement policy.
Maryland presents its own complication. While the state hasn't enacted comprehensive AI legislation comparable to Colorado, the Maryland Online Data Privacy Act (MODPA) takes effect for processing activities on April 1, 2026. MODPA includes unique provisions prohibiting the sale of sensitive data entirely and imposes penalties up to $10,000 per violation ($25,000 for repeat offenses). The law includes a 60-day cure period that remains available until April 1, 2027. Companies using AI systems that process Maryland consumer data will need to ensure their data practices comply with MODPA even as they figure out the federal preemption landscape for AI-specific requirements.
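The penalty arithmetic alone justifies attention. Here is a back-of-the-envelope exposure calculation using the statutory figures above; it assumes, purely for illustration, that each affected consumer record counts as a separate violation, a counting method courts may or may not adopt.

```python
# Back-of-the-envelope MODPA exposure using the statutory figures above.
# Assumes each affected record is a separate violation, which courts may
# not accept; treat the result as an upper-bound illustration.

PER_VIOLATION = 10_000         # dollars, first offense
PER_REPEAT_VIOLATION = 25_000  # dollars, repeat offense

def modpa_exposure(first_time: int, repeat: int = 0) -> int:
    return first_time * PER_VIOLATION + repeat * PER_REPEAT_VIOLATION

print(f"${modpa_exposure(500):,}")  # 500 records, first offense -> $5,000,000
```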
The practical effect for DC-area enterprises: you're building compliance programs that must satisfy federal AI procurement requirements, state privacy laws like MODPA, and potentially state AI laws unless and until courts rule them preempted. The executive order simplifies none of this.
The Investor Diligence Question
Companies in active fundraising should expect this to surface in legal due diligence. Sophisticated investors and their counsel will want to understand your exposure.
The questions will include: Which state AI laws apply to your AI products or features? What compliance investments have you made to date? What's your position on continuing that investment given the preemption uncertainty? How would adverse litigation outcomes in either direction affect your product roadmap or go-to-market timeline?
The good news is that thoughtful answers exist. The bad news is that "we're waiting to see what happens" isn't one of them.
Investors want to see that you've assessed the landscape, made a defensible decision about compliance investment, and can articulate how different scenarios affect the business. That might mean continuing full compliance investment because your enterprise customers demand it contractually. It might mean prioritizing federal procurement compliance under the new OMB guidance while monitoring state litigation. It might mean geographic focus on states without omnibus AI laws until the picture clarifies.
What investors won't accept is regulatory risk you haven't quantified. Series A and later rounds increasingly include regulatory risk as a diligence category on par with IP ownership and customer concentration. The AI preemption situation tests whether your legal and compliance functions are sophisticated enough to handle genuine ambiguity.
Implementation Burden: What Compliance Actually Takes
For companies that decide to maintain state-level AI compliance programs pending litigation resolution, the operational requirements remain substantial.
Colorado's AI Act mandates algorithmic impact assessments for high-risk AI systems, requires consumer disclosures when AI makes consequential decisions, and establishes notice requirements when AI is used in contexts affecting employment, housing, or credit decisions. These requirements take effect June 30, 2026, giving companies additional runway to prepare.
Building these capabilities from scratch typically requires 6-12 months for a mid-sized enterprise, assuming you have legal counsel familiar with the requirements and technical staff who can implement documentation and testing workflows. Companies that have invested in AI governance frameworks aligned with NIST's AI Risk Management Framework or ISO 42001 will find significant overlap with state requirements. That can potentially reduce implementation time to 3-6 months of gap analysis and supplementary controls.
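That gap analysis is mechanical enough to sketch. The control and requirement labels below are illustrative placeholders; a real exercise would map an actual NIST AI RMF or ISO 42001 control inventory to statutory text with counsel.

```python
# Sketch: gap analysis between an existing AI-governance control inventory
# and a state-law requirement list. All labels are illustrative.

existing_controls = {
    "risk_register", "model_documentation", "bias_testing",
    "incident_response", "third_party_review",
}

colorado_requirements = {
    "impact_assessment", "bias_testing", "consumer_disclosure",
    "adverse_decision_notice", "incident_response",
}

covered = colorado_requirements & existing_controls
gaps = colorado_requirements - existing_controls

print("Already covered:", sorted(covered))
# -> ['bias_testing', 'incident_response']
print("Gap items to build:", sorted(gaps))
# -> ['adverse_decision_notice', 'consumer_disclosure', 'impact_assessment']
```

The size of the gap set, not the full requirement list, is what drives the 3-6 month versus 6-12 month estimates above.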
The OMB procurement requirements add another layer. Governance documentation requirements mean building artifacts that didn't previously exist. Companies that want to sell to federal agencies need to budget for this compliance work immediately. The guidance is effective now. The optional source citation provisions, while not mandatory, may become competitive differentiators for federal contracts that value enhanced transparency.
For contract terms, expect AI compliance representations and warranties to become standard in enterprise and government sales. Customers will want contractual commitments that your AI products comply with applicable laws, and they'll want indemnification for regulatory violations. The preemption uncertainty makes these provisions harder to negotiate because neither party knows which laws will ultimately apply. Consider limiting reps to compliance with laws "as currently in effect" or including carve-outs for laws subject to pending legal challenge.
Practical Takeaways
Here's what executives should focus on:
Inventory your AI deployment footprint by state. Map which AI features or products are deployed to customers in Colorado and other states with AI legislation under consideration. This is the exposure you need to quantify for board discussions and investor conversations; a minimal sketch of this mapping appears after this list.
Assess your federal sales pipeline. If you sell or plan to sell AI products to federal agencies, the OMB transparency requirements apply now. Determine what documentation gaps exist for governance and training methodology. Note that source citation capabilities are optional enhanced disclosures, not mandatory requirements.
Establish a litigation monitoring process. The DOJ AI Litigation Task Force should be operational by mid-January 2026. Track the cases it files, the preliminary injunction motions, and any rulings. Your compliance strategy may need to shift based on early court decisions.
Review in-flight contracts for AI compliance terms. Check whether pending enterprise or government contracts contain representations about state AI law compliance. Consider whether those provisions need modification to address preemption uncertainty.
Brief your board on regulatory risk. This is a genuine two-way door decision. Continuing state-level compliance investment has costs. Pausing has different costs. The board should understand the tradeoffs and approve the chosen approach.
Coordinate with outside counsel on preemption timing. Get a legal assessment of when and whether preliminary injunctions might issue in the expected DOJ challenges. That timing affects whether state enforcement risk is a 2026 problem or a longer-term consideration.
Align product roadmap with compliance path. If you're planning AI features that trigger state law requirements, decide whether to proceed, delay, or design around the requirements. Document this decision and revisit it as litigation develops.
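As promised in the first takeaway, here is a minimal sketch of the state-footprint inventory. The product names, customer counts, and high-risk flags are hypothetical placeholders; the point is to produce a per-state exposure view suitable for a board discussion.

```python
# Sketch of the state-footprint inventory from the first takeaway.
# Deployment data below is hypothetical placeholder content.

from collections import defaultdict

DEPLOYMENTS = [
    # (product, state, customer_count, uses_high_risk_ai)
    ("resume-screener", "CO", 12, True),
    ("chat-support",    "CO", 40, False),
    ("resume-screener", "MD", 8,  True),
    ("chat-support",    "TX", 95, False),
]

AI_LAW_STATES = {"CO"}  # states with an enacted omnibus AI law

def exposure_by_state(deployments):
    """Aggregate customers and high-risk products per state."""
    report = defaultdict(lambda: {"customers": 0, "high_risk": set()})
    for product, state, customers, high_risk in deployments:
        report[state]["customers"] += customers
        if high_risk:
            report[state]["high_risk"].add(product)
    return report

for state, data in sorted(exposure_by_state(DEPLOYMENTS).items()):
    flag = "  <- AI-law state" if state in AI_LAW_STATES else ""
    print(f"{state}: {data['customers']} customers, "
          f"high-risk products {sorted(data['high_risk'])}{flag}")
```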
Watchlist
Mid-January 2026: DOJ AI Litigation Task Force formation deadline. The Task Force must be operational within 30 days of the December 11 executive order. Expect initial target identification and potentially the first legal filings shortly after.
January 14, 2026: NIST AI Cybersecurity Workshop. NIST's draft AI Cybersecurity Framework Profile (IR 8596) will be discussed. This framework is likely to inform future federal requirements.
January 30, 2026: Public comment deadline for NIST's draft AI Cybersecurity Framework Profile. Submit comments if the framework would affect your AI security practices.
April 1, 2026: Maryland MODPA application date. The law's processing requirements become effective, affecting companies with Maryland consumers. The 60-day cure period remains available until April 1, 2027.
June 30, 2026: Colorado AI Act effective date. The law's requirements for impact assessments and consumer disclosures take effect, assuming no court injunction is issued before then.
2026-2027: Anticipated federal court proceedings on state AI law preemption. District court rulings could arrive in late 2026, with appeals extending into 2027 or 2028.
Looking Ahead
The next 90 days will reveal whether the administration's preemption strategy has immediate legal traction. Early injunction rulings, or the lack thereof, will signal how much weight federal courts give to the executive order's preemption claims. For enterprises, the practical reality is a compliance environment where state laws remain on the books and enforceable while simultaneously facing federal legal challenge.
The companies that will handle this best are those treating state AI compliance work as building blocks for commercial differentiation rather than pure regulatory burden. Enterprise buyers want to know their AI vendors have governance, can document their testing, and can demonstrate accountability. Those capabilities have value regardless of which government ultimately regulates AI.
The legal uncertainty is real. But the commercial demand for responsible AI practices isn't waiting for courts to sort out federalism.