The DOJ's AI Litigation Task Force officially launches on January 10, 2026. Its mission: challenge state AI laws in federal court. The problem for companies building or deploying AI systems? Those state laws are already in effect. California's transparency requirements kicked in January 1. Texas's AI governance rules are enforceable now. And until a federal court says otherwise, you're on the hook for compliance.
This creates a genuine paradox. The Trump administration's December 11 Executive Order signals that state AI regulation is unwelcome. But signaling isn't law. State attorneys general can still bring enforcement actions today. Your next enterprise customer will still ask about your AI compliance posture in due diligence. And the federal preemption fight could take years to resolve.
The smart move isn't to pick a side. It's to build a compliance program flexible enough to handle both outcomes.
What Happened
The Executive Order
On December 11, 2025, President Trump signed an Executive Order titled "Ensuring a National Policy Framework for Artificial Intelligence." The order takes direct aim at state-level AI regulation, arguing that a patchwork of state laws burdens interstate commerce and should be preempted by federal policy.
The order does three significant things:
First, it establishes a DOJ AI Litigation Task Force, launching January 10, 2026. The task force will challenge state AI laws in federal court on grounds they unconstitutionally burden interstate commerce or conflict with federal regulations.
Second, it directs the FTC to issue a policy statement by March 11, 2026, potentially classifying state-mandated bias mitigation as a "per se deceptive trade practice" under Section 5 of the FTC Act. This is creative lawyering: the theory is that requiring AI models to adjust outputs for fairness actually makes them less truthful, and therefore deceptive.
Third, it instructs the Commerce Department to condition $42.45 billion in BEAD broadband infrastructure funding on states repealing AI regulations the administration deems "onerous." The list of targeted laws is due in March.
The State Laws Already in Effect
While the federal government prepares its legal challenge, multiple state AI laws became enforceable on January 1, 2026:
California's Transparency in Frontier AI Act (SB 53) requires large AI developers (those with over $500 million in annual revenue) to publish safety protocols before deploying frontier models and report critical safety incidents to the California Office of Emergency Services within 15 days, or within 24 hours if there is imminent risk of death or serious physical injury. Penalties run up to $1 million per violation.
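For teams operationalizing these reporting windows, the two SB 53 timelines above can be sketched as a simple deadline calculator. This is an illustrative sketch only: the function name, structure, and assumption that the clock starts at discovery are ours, not the statute's; counsel should confirm how the windows actually run.

```python
from datetime import datetime, timedelta

# Reporting windows described above (SB 53): 15 days for critical safety
# incidents, 24 hours when there is imminent risk of death or serious
# physical injury. Window constants are illustrative labels, not statutory terms.
CRITICAL_WINDOW = timedelta(days=15)
IMMINENT_RISK_WINDOW = timedelta(hours=24)

def report_deadline(discovered_at: datetime, imminent_risk: bool) -> datetime:
    """Return the latest time a report to Cal OES would be due.

    `discovered_at` is when the incident was identified; `imminent_risk`
    flags imminent risk of death or serious physical injury.
    Assumes the clock starts at discovery -- verify with counsel.
    """
    window = IMMINENT_RISK_WINDOW if imminent_risk else CRITICAL_WINDOW
    return discovered_at + window

incident = datetime(2026, 1, 15, 9, 0)
print(report_deadline(incident, imminent_risk=False))  # 2026-01-30 09:00:00
print(report_deadline(incident, imminent_risk=True))   # 2026-01-16 09:00:00
```

Wiring a calculation like this into an incident-response runbook removes the risk of someone computing a deadline by hand under pressure.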
California's AI Transparency Act (SB 942) requires businesses operating generative AI systems with over one million monthly visitors to provide free AI detection tools.
California's AI Training Data Transparency Act (AB 2013) mandates public disclosure of training data details.
Texas's Responsible AI Governance Act (TRAIGA) establishes broad prohibited practices including social scoring, unlawful discrimination via AI, and biometric capture without consent. It requires disclosure when consumers interact with AI systems. Penalties range from $10,000 to $200,000 per violation depending on severity, with continuing violations adding $2,000 to $40,000 per day.
Colorado's AI Act was delayed to June 30, 2026, but remains the broadest state AI law on the books. It requires impact assessments for high-risk AI systems and mandates "reasonable care" to prevent algorithmic discrimination. Penalties reach $20,000 per violation. The Trump EO specifically calls it out as forcing "AI models to produce false results."
What It Means
The Compliance Paradox
Here's the uncomfortable reality: regulatory uncertainty is increasing, not decreasing.
The administration frames its Executive Order as reducing compliance burden. In practice, the opposite is true. Companies now face a two-front compliance challenge. State laws are enforceable today. Federal preemption might come later. Operating as if state laws don't exist is a bet that DOJ wins its litigation quickly and completely. That's not a safe bet.
Constitutional challenges take time. The Commerce Clause arguments have merit, but federal courts don't move fast. State attorneys general, meanwhile, have every incentive to bring enforcement actions before their laws get invalidated. California's AG has been aggressive on tech enforcement. Texas has signaled similar intent. Even if preemption eventually succeeds, enforcement actions filed before a court ruling could survive.
The Counterargument
Some advisors are telling clients to pump the brakes on state compliance spending. Their logic: why invest in programs that might become legally irrelevant?
There's a kernel of truth here. If you're a smaller company not clearly covered by California's $500M revenue threshold, or if you're operating AI systems that don't fall into "high-risk" categories under these laws, aggressive early compliance spending may not be justified. The EO does create a real possibility that some state requirements get struck down or withdrawn.
But for companies deploying AI in employment, lending, healthcare, insurance, or housing decisions, the risk calculus is different. These are consequential decisions under Colorado's framework. They're exactly the use cases state regulators care about most. And they're where your enterprise customers will demand compliance assurances regardless of what happens in federal court.
The Board Question
Your board will ask about this. Probably soon.
They'll want to know whether the company is exposed to state AI enforcement, what compliance costs look like, and whether the federal preemption push changes your strategy. The right answer isn't "we're waiting to see what happens." It's: "We've built a flexible governance program that satisfies current requirements while preserving optionality as the federal picture clarifies."
That's the answer that lets you close enterprise deals, pass due diligence, and avoid being the test case for a state AG looking to make a point about AI accountability.
Maryland and DC Considerations
Neither Maryland nor DC has enacted AI legislation comparable to California, Texas, or Colorado. Companies headquartered in the DMV area don't face state-specific AI compliance requirements beyond general consumer protection and anti-discrimination laws.
However, proximity to federal regulators cuts both ways. DC-based AI companies may receive more scrutiny as the federal preemption battle plays out. And any company selling to federal agencies should expect AI governance requirements to tighten through procurement channels, regardless of what happens with state law preemption.
Maryland companies receiving BEAD broadband funding should monitor the Commerce Department's March list closely. If Maryland has enacted any AI-adjacent requirements that Commerce deems problematic, that funding could be at risk.
Practical Takeaways
Don't dismantle existing AI governance programs. State laws are enforceable now. The federal preemption fight will take years. Abandoning compliance is a bet you'll lose.
Audit your AI systems against California, Texas, and Colorado requirements. Even if you're not headquartered in these states, if you have users or operations there, you likely have obligations.
Build flexibility into your compliance infrastructure. Document your decision-making frameworks, but design them to adapt as requirements evolve. Modular beats monolithic.
Update your AI disclosure language. Texas and California both require disclosure when consumers interact with AI. Review your chatbots, customer service systems, and any consumer-facing AI for compliance.
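For consumer-facing chat, the disclosure piece can be as simple as injecting a notice at the start of each session. A minimal sketch, assuming a first-turn disclosure suffices; the wording and function are illustrative, not statutory language, and counsel should review exact phrasing, placement, and whether the notice must repeat.

```python
# Illustrative disclosure text -- not statutory language; have counsel review.
AI_DISCLOSURE = "You are chatting with an automated AI assistant, not a human."

def with_disclosure(reply: str, first_turn: bool) -> str:
    """Prepend the AI-interaction disclosure on the first turn of a session.

    Assumes disclosing once per session is enough; jurisdictions may
    require more (e.g., persistent labeling).
    """
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(with_disclosure("How can I help today?", first_turn=True))
```

The same pattern extends to voice systems and email autoresponders: centralize the disclosure text in one place so legal can update it without a code change in every channel.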
Brief your legal and compliance teams on the March 2026 deadlines. The FTC policy statement and Commerce Department list will clarify which state laws face the most pressure. Calendar those dates now.
Review your enterprise contracts for AI compliance reps. Your customers are asking about AI governance. Make sure your reps and warranties reflect what you can actually deliver, not aspirational compliance.
Document your good-faith compliance efforts. If you do face enforcement, demonstrable effort to comply matters. Create the paper trail now.
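One lightweight way to make that paper trail tamper-evident is an append-only, hash-chained log of compliance actions. This is a sketch of the general technique, not a recommendation of any specific tool; the field names and file format are our assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_compliance_action(logfile: str, action: str, detail: str) -> str:
    """Append a timestamped, hash-chained entry to a compliance log.

    Each entry records a SHA-256 hash of the file as it stood before the
    entry was written, so after-the-fact edits to earlier entries are
    detectable. Field names ("ts", "action", "detail", "prev") are
    illustrative. Returns the prior-state hash that was recorded.
    """
    try:
        with open(logfile, "rb") as f:
            prev_hash = hashlib.sha256(f.read()).hexdigest()
    except FileNotFoundError:
        prev_hash = "0" * 64  # sentinel for the first entry
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "detail": detail,
        "prev": prev_hash,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return prev_hash

log_compliance_action("compliance.jsonl", "audit", "Reviewed SB 53 scope")
log_compliance_action("compliance.jsonl", "update", "Revised chatbot disclosure")
```

Even a scheme this simple is far stronger evidence of contemporaneous, good-faith effort than a spreadsheet edited after an enforcement letter arrives.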
Watch for state AG enforcement signals. California and Texas AGs may accelerate enforcement before federal courts can act. Monitor their public statements and enforcement priorities.
What We're Watching
January 10, 2026: DOJ AI Litigation Task Force officially launches. Watch for initial target selection and litigation strategy signals.
March 11, 2026: FTC policy statement on AI and state law preemption due. This could provide the legal theory for broader preemption challenges.
March 2026: Commerce Department publishes list of state AI laws deemed "onerous" for BEAD funding purposes. This signals which laws face the most federal pressure.
June 30, 2026: Colorado AI Act effective date. By then, we'll know whether federal challenges have gained traction.
State AG activity: California and Texas attorneys general have been aggressive on tech enforcement. Expect them to move before federal courts can intervene.
The federal-state AI showdown won't resolve quickly. Constitutional litigation moves slowly. State legislatures may amend their laws. The FTC's March policy statement will add another variable. For at least the next 18 months, companies building and deploying AI systems will operate in genuine regulatory uncertainty.
That's not a reason to freeze. It's a reason to build governance programs that work under multiple scenarios. The companies that get this right will have a competitive advantage: they can tell customers and investors they're compliant today and adaptable tomorrow. The ones who bet on a single outcome, whether state law survives or federal preemption wins, are taking unnecessary risk.
The framework you build now is the framework you'll compete with for the next two years. Make it flexible.