
The Federal Government Is Coming for State AI Laws. Here's What to Do Before March.

DOJ created an AI Litigation Task Force to sue states over AI laws. With California, Texas, and Illinois laws already in effect, here's how to handle the federal-state showdown.

By Meetesh Patel

The Department of Justice created an AI Litigation Task Force on January 9, 2026. Its job: sue states whose AI laws the administration considers too burdensome for businesses. The first lawsuits are expected this month. Meanwhile, California, Texas, and Illinois AI laws are already in effect, and Colorado's law takes effect June 30.

What does this mean for you? If your company uses AI tools to make hiring decisions, screen loan applications, process customer requests, or automate any significant business function, you're caught in the middle of a fight between the federal government and the states. Both sides say they're in charge. Neither is backing down.

This isn't abstract. It affects real decisions: what your investors will ask in due diligence, what your vendors will promise in contracts, and what your customers will expect you to prove.

The Executive Order That Started This Fight

On December 11, 2025, President Trump signed Executive Order 14365, titled "Ensuring a National Policy Framework for Artificial Intelligence." Here's what the order does in plain terms.

It tells the Attorney General to create a task force of federal lawyers whose only job is to sue states over their AI laws. Attorney General Pam Bondi announced this Task Force on January 9, 2026. The team will work with David Sacks, the White House advisor on AI and cryptocurrency, to decide which state laws to challenge first.

It tells the Secretary of Commerce to publish a report by March 11, 2026, listing state AI laws the administration considers too burdensome. These laws will be referred to the Task Force for potential lawsuits.

It tells the Federal Trade Commission (the FTC, the agency that protects consumers from unfair business practices) to issue guidance by the same date explaining how federal consumer protection law applies to AI.

The order specifically calls out Colorado's AI Act as an example of state overreach, claiming it "will force AI models to produce false results." That makes Colorado the most likely first target, but California, Texas, and Illinois are also at risk.

What Are These State AI Laws, and Who Do They Affect?

Several states passed AI laws that took effect on January 1, 2026. Here's a plain-English summary.

California's Frontier AI Transparency Act: This law applies to very large AI companies, specifically those with over $500 million in revenue that build the biggest AI models (think companies like OpenAI or Anthropic). These companies must publicly document how they're managing catastrophic risks from their AI systems. If you're a startup or mid-sized company, this law probably doesn't apply to you directly, but your AI vendors may be covered.

Texas Responsible AI Governance Act: This is broader. If you do business in Texas and use AI systems, you must ensure your AI doesn't encourage self-harm, violate people's constitutional rights, discriminate illegally, or create illegal deepfakes. The Texas Attorney General can demand information about how your AI systems work, including what data you used to train them. One helpful provision: if you follow the federal government's AI risk management guidelines (called the NIST AI Risk Management Framework), you have a defense against enforcement.

Illinois HB 3773: This law focuses on employment. If you use AI to make hiring, promotion, or firing decisions, you can be held liable if the AI discriminates based on protected characteristics like race, gender, age, or disability. The key point: intent doesn't matter. If your AI system produces discriminatory results, you can face legal consequences even if you didn't mean for it to happen.

Colorado AI Act: This law was delayed until June 30, 2026. It requires companies using "high-risk" AI systems (like AI that decides who gets loans, insurance, or jobs) to take reasonable steps to prevent discrimination. The law applies to you if you have customers in Colorado, regardless of where your company is located. This is the law the executive order targets by name.

Why Is the Federal Government Challenging These Laws?

The Task Force will argue that state AI laws are illegal for a few reasons.

The interstate commerce argument: The U.S. Constitution gives the federal government power over commerce between states, and courts can strike down state laws that unduly burden businesses in other states (a doctrine known as the dormant Commerce Clause). The federal government will argue that AI companies shouldn't have to comply with 50 different state laws.

The preemption argument: When federal law and state law conflict, federal law usually wins. The administration will argue that its executive orders and agency rules represent federal AI policy, and state laws that contradict this policy should be struck down.

Here's the catch: Executive orders are not the same as laws passed by Congress. Courts generally require an actual federal statute to preempt state law. The administration hasn't identified a specific federal law that these state AI laws violate.

This means the legal fight will take years to resolve. It could reach the Supreme Court.

The States Are Not Backing Down

On December 19, 2025, attorneys general from 23 jurisdictions, including California, Colorado, Texas, and Washington, D.C., sent a letter arguing that the federal government has no authority to override state AI laws. Notably, two Republican attorneys general signed this letter, making it bipartisan.

Their argument: States have always had the power to protect their residents from harmful products and practices. AI is no different. The federal government can't just announce that it's taking over without Congress passing a law.

California's Attorney General Rob Bonta has said his office "will continue to protect Californians from the harms of AI" regardless of the executive order.

What does this mean for you? State enforcement isn't stopping. Even while the federal government prepares to sue, states are continuing to enforce their laws. Companies can't simply ignore state requirements and hope the federal challenge succeeds.

What Investors Will Ask at Your Next Board Meeting

If you're raising funding, expect your investors' lawyers to ask about AI compliance. The old question was simple: "Do you comply with AI laws?"

The new question is harder: "What's your plan for a regulatory environment where the federal government and states are actively fighting over who makes the rules?"

You need a real answer. "We're waiting to see what happens" is not a strategy.

For most companies, the practical answer is this: comply with state laws now because they're enforceable today, while staying flexible enough to adapt if the federal government wins its legal challenges. This isn't playing both sides. It's the only sensible approach when the law itself is unsettled.

Federal Enforcement Is Getting Lighter

There's evidence that federal AI enforcement under the current administration will be less aggressive than state enforcement.

In 2024, the FTC sued Rytr, a company that sells an AI writing assistant. The agency claimed Rytr's tool could be used to generate fake product reviews, and it banned the company from offering any AI service that generates reviews or testimonials.

On December 22, 2025, the FTC reversed itself. The agency vacated (canceled) the ban by a 2-0 vote. The FTC said the original complaint didn't prove that Rytr actually caused harm to consumers. The agency also said that banning a technology just because it "could be used in a problematic manner" goes too far.

What this means: The federal government is shifting toward requiring proof of actual harm before taking action against AI companies. They're moving away from banning things just because they might be misused.

But this creates a problem for companies operating in multiple states. You face strict enforcement from California, Colorado, and Texas, but relaxed enforcement from federal agencies. Until this gets sorted out, you're dealing with conflicting signals.

Three Ways to Handle This

Option 1: Follow All State Laws Now (Lower Risk)

Comply with every applicable state AI law immediately. Yes, some of these laws might get struck down in court. But lawsuits take years. Enforcement is happening today. If you're selling to large enterprises, operating in regulated industries like healthcare or finance, or preparing for investor due diligence, this is probably the right approach.

The cost: You'll spend money complying with laws that might not survive. The benefit: You're not gambling on the outcome of constitutional litigation.

Option 2: Focus on the Basics (Medium Risk)

Identify requirements that appear across multiple state laws: being transparent about how you use AI, documenting your AI decision-making processes, testing your AI for discrimination, and having humans review high-stakes AI decisions. Implement these common requirements and skip state-specific rules that go beyond the basics.
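
If you want to make the "testing for discrimination" step concrete, here is a minimal sketch in Python of one common first-pass screen: the EEOC's "four-fifths rule," which flags any group selected at less than 80% of the best-performing group's rate. The rule, the group labels, and the threshold are conventions borrowed from employment-law practice, not requirements spelled out in any of the statutes above.

```python
# Illustrative only: a quick adverse-impact screen for an AI system's
# yes/no decisions (hiring, lending, and the like). Group labels and the
# 0.8 threshold follow the EEOC's four-fifths convention, not any
# requirement in the state statutes discussed above.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs, where selected is a bool."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_flags(decisions, threshold=0.8):
    """Flag groups selected at under `threshold` times the highest
    group's rate -- the classic four-fifths screen."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Example: group A is selected 70% of the time, group B 45%.
# B's ratio is 0.45 / 0.70 = ~0.64, below 0.8, so B gets flagged.
sample = ([("A", True)] * 70 + [("A", False)] * 30 +
          [("B", True)] * 45 + [("B", False)] * 55)
print(adverse_impact_flags(sample))  # {'B': 0.642...}
```

Passing a screen like this is not a safe harbor. Under outcome-based laws like Illinois HB 3773, what matters is the results your system actually produces, so treat this as an ongoing monitor rather than a one-time checkbox.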

This works if you're not using AI for sensitive decisions (like hiring or lending) and you can move quickly if a specific state starts enforcement against your industry.

Option 3: Wait for Clarity (Higher Risk)

Hold off on compliance investments until the legal picture clears up. This only makes sense if you have very limited exposure to California, Colorado, Texas, and Illinois, and you're comfortable with the possibility of enforcement.

Most companies should avoid this approach. Important guidance drops on March 11, 2026. In about five weeks, you'll have much better information about which way this is heading.

What to Do This Week

Start by figuring out where you stand. Audit your AI systems to understand which ones might be covered by California, Colorado, Texas, or Illinois laws. Look at what decisions your AI helps make. Employment screening? Customer credit checks? Insurance underwriting? These are high-risk areas under most state laws.
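
As a sketch of what that audit can look like in practice, here is a small Python triage script that inventories AI systems and flags the ones combining a high-risk decision type with exposure to a state that has an AI law in effect or pending. The field names, decision categories, and state lists are illustrative assumptions, not terms drawn from any statute.

```python
# A minimal sketch of the audit step: inventory each AI system, then flag
# the ones that touch high-risk decisions and reach states with AI laws.
# Field names, decision categories, and state lists here are illustrative.
from dataclasses import dataclass

HIGH_RISK_DECISIONS = {"hiring", "promotion", "termination", "credit", "insurance"}
STATES_WITH_AI_LAWS = {"CA", "TX", "IL", "CO"}  # CO's law takes effect June 30, 2026

@dataclass
class AISystem:
    name: str
    vendor: str           # who built it -- check their contract promises too
    decision_types: set   # e.g., {"hiring"} or {"customer_support"}
    states_reached: set   # where the people affected by its decisions live

def triage(systems):
    """Return the systems to send to legal review first: those making
    high-risk decisions with exposure to a state that has an AI law."""
    flagged = []
    for s in systems:
        risky = s.decision_types & HIGH_RISK_DECISIONS
        exposed = s.states_reached & STATES_WITH_AI_LAWS
        if risky and exposed:
            flagged.append((s.name, sorted(risky), sorted(exposed)))
    return flagged

inventory = [
    AISystem("resume-screener", "VendorCo", {"hiring"}, {"IL", "CO"}),
    AISystem("support-chatbot", "BotCo", {"customer_support"}, {"CA"}),
]
print(triage(inventory))  # [('resume-screener', ['hiring'], ['CO', 'IL'])]
```

The output of a triage like this feeds directly into the legal-review step below: the flagged systems are the ones to hand to counsel first.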

Check your vendor contracts. Your software vendors may have made promises about AI compliance, or they may have passed that responsibility to you. Know what you've agreed to.

Flag any AI use in hiring, credit, or insurance decisions for legal review. These are the areas where state enforcement is most likely.

What to Do by March 11

Watch for two important documents: the Commerce Department's report listing "burdensome" state AI laws, and the FTC's guidance on how federal consumer protection law applies to AI. These will signal which laws the federal government plans to challenge.

Update your board or leadership team on your compliance approach. You should be able to explain your strategy in plain terms.

If you're doing business in Texas, document your AI risk management practices. Following the NIST AI Risk Management Framework gives you a legal defense under Texas law.
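
Here is what "document your practices" can look like. The NIST AI Risk Management Framework organizes its guidance around four core functions (Govern, Map, Measure, Manage), and a dated evidence log keyed to those functions gives you something concrete to produce if the Texas Attorney General demands information. The record layout below is a sketch, not anything the framework prescribes.

```python
# A sketch of a dated evidence log keyed to the four NIST AI RMF core
# functions. The JSON-lines layout is an assumption for illustration;
# the framework describes risk-management outcomes, not a file format.
import json
from datetime import date

RMF_FUNCTIONS = ("govern", "map", "measure", "manage")

def log_evidence(path, system, function, practice, artifact):
    """Append one documented practice -- e.g., a bias-test report or an
    approved AI-use policy -- tagged with the RMF function it supports."""
    if function not in RMF_FUNCTIONS:
        raise ValueError(f"unknown RMF function: {function}")
    entry = {
        "date": date.today().isoformat(),
        "system": system,        # which AI system this evidence covers
        "function": function,    # govern / map / measure / manage
        "practice": practice,    # what you actually did
        "artifact": artifact,    # where the proof lives
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage: record a quarterly bias test for a hiring tool.
log_evidence("rmf_evidence.jsonl", "resume-screener", "measure",
             "Quarterly adverse-impact test on hiring recommendations",
             "reports/2026-Q1-bias-test.pdf")
```

An append-only log like this is easy to keep current and easy to export when a regulator, investor, or enterprise customer asks for proof.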

What to Do by June 30

If you have customers in Colorado, complete your compliance work before the Colorado AI Act takes effect. The law requires "impact assessments" for high-risk AI systems. Start now so you're not rushing at the deadline.

Review your contracts with vendors and customers. Who bears the risk if an AI system causes problems? Make sure your agreements address this clearly.

Prepare simple explanations of your AI governance practices for sales conversations. Enterprise customers increasingly ask about this during procurement.

Key Dates to Watch

February 2026: The DOJ Task Force is expected to file its first lawsuits. Colorado is the likely first target.

March 11, 2026: The Commerce Department and FTC publish their guidance documents. This will show which state laws the federal government views as most problematic.

June 30, 2026: Colorado's AI Act takes effect.

August 2, 2026: California's law requiring AI content watermarking and detection tools takes effect.

2027 or later: Courts will begin issuing decisions on the federal challenges. Final resolution is years away.

The Bottom Line

Two levels of government are fighting over who gets to regulate AI. That fight won't end anytime soon.

The practical answer: comply with state laws that are enforceable today while building good documentation and governance practices. Focus on transparency, discrimination testing, human oversight, and keeping good records.

If the federal challenges succeed, you'll have built good practices that you can simplify later. If the states win, you'll already be compliant. Either way, you'll have a clear answer when your board or investors ask what you're doing about AI risk.

Disclaimer: This article is provided for informational purposes only and does not constitute legal advice. The information contained herein should not be relied upon as legal advice and readers are encouraged to seek the advice of legal counsel. The views expressed in this article are solely those of the author and do not necessarily reflect the views of Consilium Law LLC.