AI Regulation in 2025: What Businesses Need to Know
The EU AI Act is here. US states are passing laws. Here's what AI regulations actually require and how to prepare your business for compliance.
AI regulation went from "someday" to "now."
The EU AI Act is in effect. US states are passing laws. Other countries are following.
If your business uses AI, you need to understand what's required.
The Current Landscape
EU AI Act
- Status: In effect (phased implementation 2024-2027)
- Scope: AI systems placed on the EU market or whose outputs affect people in the EU
- Approach: Risk-based classification
United States
- Federal: Executive Order on Safe, Secure, and Trustworthy AI (October 2023)
- States: California, Colorado, and others passing AI laws
- Approach: Sector-specific and state-level patchwork
Other Key Jurisdictions
- UK: Pro-innovation framework, sector-specific regulation
- China: Strict rules on generative AI
- Canada: AIDA (Artificial Intelligence and Data Act) pending
EU AI Act: What You Need to Know
The Risk Framework
The EU AI Act categorizes AI systems by risk level:
Unacceptable Risk (Banned)
- Social scoring by governments
- Real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions)
- Manipulation of vulnerable groups
- Emotion recognition in workplaces and schools
High Risk (Heavy Regulation)
- AI in hiring and employment
- Credit scoring and financial access
- Educational assessment
- Immigration decisions
- Law enforcement use
- Critical infrastructure
Limited Risk (Transparency Required)
- Chatbots (must disclose AI)
- Deepfake generation (must label)
- Emotion recognition (must inform)
Minimal Risk (No Specific Rules)
- Spam filters
- AI video games
- Basic recommendations
High-Risk AI Requirements
If your AI is classified high-risk, you must:
Risk Management System
- Identify and assess risks
- Implement mitigation measures
- Document everything
Data Governance
- Training data must be relevant, representative, and as free of errors as possible
- Document data sources and processing
Technical Documentation
- Detailed system description
- How it was built and tested
- Performance metrics
Record Keeping
- Log system operations
- Maintain records for review
- Enable audit trails (a minimal logging sketch appears at the end of this section)
Transparency
- Clear instructions for users
- Explain how decisions are made
- Disclose limitations
Human Oversight
- Enable human intervention
- Don't design high-stakes decisions to run without the possibility of human review
Accuracy and Security
- Demonstrate reliability
- Protect against adversarial attacks
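To make the record-keeping and human-oversight duties concrete, here is a minimal sketch of a per-decision audit record in Python. The schema, the field names, and the `log_decision` helper are illustrative assumptions, not a format the Act prescribes.

```python
import json
import datetime
from typing import Optional

def log_decision(system_id: str, inputs: dict, output: str,
                 reviewed_by: Optional[str],
                 log_path: str = "ai_decision_log.jsonl") -> None:
    """Append one auditable record per automated decision (illustrative schema)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs": inputs,                      # what the system saw
        "output": output,                      # what it recommended or decided
        "human_reviewed": reviewed_by is not None,
        "reviewer": reviewed_by,               # who could intervene or override
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical example: a credit-scoring assistant whose output a human signs off on.
log_decision("credit-scoring-assistant", {"applicant_id": "A-1234"},
             "recommend approval", reviewed_by="loan_officer_7")
```

Even a simple append-only log like this gives auditors a trail of what the system decided and who was in a position to override it.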
General-Purpose AI (Foundation Models)
Special rules apply to large language models such as GPT-4, Claude, and Llama:
All GPAI providers must:
- Maintain technical documentation
- Provide information to downstream deployers
- Comply with copyright rules
- Publish training content summary
Systemic risk GPAI (largest models) must also:
- Conduct model evaluations
- Track and report incidents
- Ensure cybersecurity
- Report energy consumption
Penalties
- Up to €35 million or 7% of global revenue for prohibited AI
- Up to €15 million or 3% for other violations
- Up to €7.5 million or 1% for supplying incorrect information to authorities
These are maximums; actual fines will depend on the severity of the violation.
United States Regulation
Federal Level
Executive Order on AI (October 2023)
- Directs agencies to address AI risks
- Focuses on safety and security standards
- Applies mainly to government AI use
- Influences the private sector but doesn't directly bind it
Sector-Specific Rules
- Healthcare: FDA guidance on AI medical devices
- Finance: CFPB scrutiny on AI lending
- Employment: EEOC guidance on AI hiring
State Level
Colorado AI Act (Effective 2026)
- Developers must provide documentation
- Deployers must complete impact assessments
- Focus on "high-risk" decisions
- Consumer disclosure requirements
California (Multiple Bills)
- SB-1047 (vetoed but influential)
- Various AI transparency and safety bills
- Leading indicator for other states
Other States
- Utah, Virginia, Connecticut, and others considering or passing AI laws
- Expect a patchwork of requirements
Practical Impact in the US
The US lacks a comprehensive federal AI law. This means:
- Comply with sector-specific federal rules
- Monitor state laws where you operate
- Expect requirements to evolve rapidly
What Businesses Should Do Now
1. Inventory Your AI Systems
Document every AI system you use:
- What it does
- What data it uses
- What decisions it influences
- Who it affects
You can't comply if you don't know what you have. A minimal inventory record is sketched below.
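A lightweight way to start is one structured record per system. The sketch below is a minimal Python example; the `AISystemRecord` name and fields are assumptions about what's worth capturing, not fields any regulation mandates.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AISystemRecord:
    """One inventory entry per AI system in use (illustrative fields only)."""
    name: str                      # e.g. "resume screening assistant"
    purpose: str                   # what the system does
    data_sources: List[str]        # what data it uses
    decisions_influenced: str      # what decisions it informs or makes
    affected_groups: List[str]     # who it affects (employees, applicants, customers)
    vendor: str = "internal"       # third-party provider, if any
    owner: str = ""                # person or team accountable for the system

# Example entry; the values are made up for illustration.
inventory = [
    AISystemRecord(
        name="resume screening assistant",
        purpose="Ranks incoming applications before recruiter review",
        data_sources=["applicant CVs", "job descriptions"],
        decisions_influenced="Which candidates advance to interviews",
        affected_groups=["job applicants"],
        vendor="HypotheticalHRVendor",
        owner="People Operations",
    )
]
```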
2. Classify Risk Levels
For each AI system, determine (a rough screening sketch follows this list):
- Is it banned? (unlikely but check)
- Is it high-risk? (hiring, credit, critical decisions)
- Does it require transparency? (chatbots, generated content)
- Is it minimal risk? (most business AI)
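A first-pass screen can be scripted before legal review. The sketch below maps a described use case to an indicative tier mirroring the EU AI Act's categories; the keyword lists are simplified assumptions, and the output is a triage aid, not a legal determination.

```python
# Indicative EU AI Act-style tiering: a first-pass screen, not legal advice.
HIGH_RISK_USES = {"hiring", "credit scoring", "education assessment",
                  "immigration", "law enforcement", "critical infrastructure"}
TRANSPARENCY_USES = {"chatbot", "deepfake", "emotion recognition"}

def screen_risk_tier(use_case: str) -> str:
    """Return an indicative risk tier for a described use case (illustrative only)."""
    use = use_case.lower()
    if any(term in use for term in HIGH_RISK_USES):
        return "high-risk: documentation, oversight, and logging duties likely apply"
    if any(term in use for term in TRANSPARENCY_USES):
        return "limited-risk: transparency and disclosure duties likely apply"
    return "minimal-risk (pending legal review): no specific duties expected"

print(screen_risk_tier("chatbot for customer support"))
print(screen_risk_tier("AI-assisted hiring shortlist"))
```

Anything the script flags as high-risk should still go to counsel; the point is only to triage a large inventory quickly.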
3. Assess Compliance Gaps
For high-risk systems:
- Do you have technical documentation?
- Is there human oversight?
- Can you explain decisions?
- Is training data documented?
For limited-risk systems (a minimal disclosure sketch follows this list):
- Are users informed they're interacting with AI?
- Is generated content labeled?
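For limited-risk systems, the core obligations are telling people they're interacting with AI and labeling generated content. The sketch below shows one way to wire both in; the disclosure text, function names, and metadata fields are placeholders, not a prescribed format.

```python
import datetime

AI_DISCLOSURE = "You are chatting with an automated assistant, not a human agent."

def open_chat_session() -> str:
    # Surface the AI disclosure before the first model response.
    return AI_DISCLOSURE

def label_generated_content(text: str, model_name: str) -> dict:
    """Attach provenance metadata to AI-generated content (illustrative schema)."""
    return {
        "content": text,
        "ai_generated": True,
        "model": model_name,
        "generated_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

print(open_chat_session())
print(label_generated_content("Draft product description ...", "example-model"))
```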
4. Build Compliance Infrastructure
Governance:
- Assign AI responsibility (legal, tech, compliance)
- Create review processes for new AI deployments
- Establish incident reporting procedures
Documentation:
- Create templates for AI system documentation
- Implement logging and record-keeping
- Maintain audit trails
Technical:
- Add human oversight mechanisms
- Implement explainability where required
- Test for bias and accuracy (a rough check is sketched below)
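Bias testing can start with something as simple as comparing selection rates across groups, in the spirit of the EEOC's four-fifths rule of thumb. The sketch below is a minimal version of that check; the sample data are made up, and the 0.8 threshold mentioned in the comments is a screening convention, not a legal standard or a substitute for a proper fairness audit.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(outcomes):
    """Lowest group selection rate divided by the highest (four-fifths screen)."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values()), rates

# Made-up outcomes from a hypothetical AI screening tool.
sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60 +
          [("group_b", True)] * 25 + [("group_b", False)] * 75)
ratio, rates = adverse_impact_ratio(sample)
print(rates)                          # {'group_a': 0.4, 'group_b': 0.25}
print(f"impact ratio: {ratio:.2f}")   # 0.62 -- below 0.8, flag for human review
```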
5. Third-Party AI Assessment
If you use AI from vendors (the most common case):
- Review their compliance statements
- Request technical documentation
- Ensure contracts address regulatory requirements
- Understand how liability is split between you and the vendor
6. Monitor Developments
AI regulation is evolving rapidly:
- Subscribe to regulatory updates
- Join industry groups
- Consider legal counsel for high-risk applications
Common Questions
Does the EU AI Act apply to US companies?
Yes, if you:
- Offer AI systems in the EU
- Process EU residents' data
- Deploy AI whose outputs affect EU residents
Like the GDPR, its reach is extraterritorial.
What about using ChatGPT/Claude in my business?
For general business use (drafting, analysis, coding help), these are likely minimal or limited risk. To understand which model best fits your needs, see our Claude vs GPT-4 vs Gemini comparison.
If you use them for hiring decisions, credit scoring, or other high-risk applications, additional requirements apply.
The AI provider (OpenAI, Anthropic) has its own compliance obligations; you have yours as a "deployer."
Do I need an AI compliance officer?
Not explicitly required by most regulations, but:
- Someone must be responsible
- Large organizations should have dedicated oversight
- Smaller organizations can assign to existing compliance/legal
What's the timeline?
EU AI Act:
- Prohibited AI: February 2025
- GPAI rules: August 2025
- Most requirements: August 2026
- High-risk AI embedded in regulated products: August 2027
US:
- Colorado AI Act: February 2026
- Other states: Varies
- Federal: Ongoing agency guidance
The Bottom Line
AI regulation is real and enforceable. The days of "move fast and break things" with AI are ending.
Immediate actions:
- Inventory your AI systems
- Identify high-risk applications
- Start documenting
Medium-term:
- Build governance processes
- Implement technical requirements
- Train teams on compliance
Ongoing:
- Monitor regulatory changes
- Adapt as rules evolve
- Treat compliance as continuous
The cost of compliance is real but manageable. The cost of non-compliance—fines, reputation damage, legal liability—is far higher.
Start now. The deadlines are closer than they appear.
Need help navigating AI compliance for your business? Cedar Operations helps companies implement AI responsibly. Let's discuss your needs →
Related reading: