
Global AI Law Briefing 20260302

Author: 김경진 (Kim Kyung-jin)
Date: 2026-03-02 02:35




AI Legal & Regulatory Global Monitor

Global AI Law & Regulation
Daily Brief

Litigation, Investigation, Enforcement & Legislation Across Four Jurisdictions

March 2, 2026 (Mon)  |  Vol. 001


🇺🇸 US
🇨🇳 CN
🇪🇺 EU
🇰🇷 KR





Section 01

Executive Summary

Overview of Key Developments



As of March 2026, the global AI legal and regulatory landscape has entered its most complex and dynamic phase to date. In the United States, the DOJ AI Litigation Task Force is preparing federal preemption lawsuits against state AI laws, while the FTC expands antitrust and deceptive practice investigations into AI companies. China has activated a penalty framework of up to CNY 50 million (~USD 6.9M) under its amended Cybersecurity Law and published draft regulations on anthropomorphic AI services. The EU approaches its August 2 deadline for high-risk AI system enforcement under the AI Act, with Finland becoming the first member state to activate national supervision. South Korea's AI Basic Act took effect on January 22, 2026, though substantive penalties have been deferred for at least one year under a grace period.


On the copyright front, major lawsuits continue to multiply, including Korean broadcasters' suit against OpenAI, Universal Music's USD 3.1 billion lawsuit against Anthropic, and a landmark German court ruling rejecting fair use for AI training. In antitrust, the FTC's investigation into the Microsoft-OpenAI partnership has intensified significantly.



Narrative: 2026 marks the transition year from "theory to enforcement" in AI regulation. Legal frameworks established by each jurisdiction are being converted into concrete enforcement actions — lawsuits, fines, and market withdrawal orders — directly impacting AI industry business models and development practices.






Section 02

🇺🇸 United States — Legislation & Executive Orders

Federal Preemption, Task Forces & Deepfake Regulation



Legislation § DOJ AI Litigation Task Force Launched

On January 9, 2026, the Department of Justice officially launched the AI Litigation Task Force. Led by Attorney General Bondi or her designee, it includes the Civil Division, Deputy and Associate AG offices, and the Office of the Solicitor General. The Task Force will consult White House AI & Crypto Czar David Sacks to identify target state laws. Colorado is the likely first litigation target, with California, Texas, and Illinois also at risk.


Legislation § Trump Executive Order — Federal Preemption Framework

The December 11, 2025 Executive Order established a national AI policy framework asserting federal preemption over state AI laws. It directed the Commerce Department to publish an evaluation of existing state AI laws within 90 days (~early March 2026). On March 11, 2026, both the Commerce Department and FTC are scheduled to publish guidance documents. Notably, USD 42 billion in broadband infrastructure funding has been conditioned on state AI law repeals.


Legislation § Deepfake Election Regulation — Wave of State Bills

Ahead of the 2026 midterm elections, states are rushing to regulate AI deepfakes. Maryland's SB0141 passed the Senate with unanimous bipartisan support, targeting AI-generated election misinformation. SB0008 imposes penalties of up to USD 25,000 and/or 20 years imprisonment for AI deepfake impersonation. Washington State and Pennsylvania have also enacted deepfake laws. However, a California deepfake regulation was blocked by a federal court on First Amendment grounds, highlighting the tension between regulation and free speech.


Legislation § TAKE IT DOWN ACT Signed

In May 2025, President Trump signed the TAKE IT DOWN ACT, the first federal law prohibiting non-consensual intimate imagery including deepfakes, requiring online platforms to remove such content upon victim notification.



Narrative: The US AI regulatory landscape is being reshaped by an unprecedented federal-vs-state collision. The DOJ Task Force effectively serves as a federal "shield" for AI companies, potentially conflicting with the FTC's consumer protection approach. The March 11 guidance publications will be a watershed moment.


Sources: DOJ AG Memo (Jan 9, 2026); Executive Order on AI Policy Framework (Dec 11, 2025); Baker Botts, BakerHostetler, Consilium Law analyses; Maryland General Assembly SB0141, SB0008






Section 03

🇺🇸 United States — Litigation & Investigation

Copyright Wars, Antitrust Probes & FTC Enforcement



Litigation § AI Copyright Litigation — Escalating & Globalizing

Six authors, including Pulitzer Prize-winner John Carreyrou, have filed individual copyright infringement lawsuits against Anthropic, OpenAI, Google, Meta, xAI, and Perplexity AI, alleging unauthorized use of their books from pirate libraries (LibGen, Z-Library) for LLM training. Universal Music, Concord Music, and ABKCO Music filed a USD 3.1 billion lawsuit against Anthropic on January 28, 2026. Hachette and Cengage have sought to join a class action against Google. Meta's Kadrey case awaits class certification, while the consolidated OpenAI-Microsoft class action is in discovery.


Litigation § Anthropic's USD 1.5 Billion Settlement

The largest AI litigation event of 2025 was the Bartz v. Anthropic settlement at USD 1.5 billion. Anthropic faced massive statutory damages exposure for downloading millions of pirated book copies for training purposes. Settlements and licensing deals are expected to multiply throughout 2026.


Investigation § FTC "Operation AI Comply" Continues

Launched in September 2024, "Operation AI Comply" continues under the new administration, targeting AI washing. Enforcement actions have been taken against Workado, Air AI, FBA Machine (USD 15M consumer fraud), and DoNotPay ("world's first robot lawyer" deceptive claims). The FTC is also conducting a formal inquiry into children's AI services and COPPA compliance.


Investigation § FTC Escalates Microsoft-OpenAI Antitrust Probe

The FTC has escalated its antitrust investigation into Microsoft, issuing civil investigative demands to six or more competitors. The probe examines cloud licensing restrictions, AI bundling, and market dominance. The core question: whether the USD 13 billion Microsoft-OpenAI partnership enables monopolistic control of AI capabilities while maintaining the appearance of OpenAI's independence. A parallel consumer antitrust class action by 11 plaintiffs is also underway.


Litigation § Google Monopoly Ruling — Long Appeals Ahead

Judge Mehta ruled Google is a monopolist, but appeals to the Supreme Court mean no final decision is expected until 2027-2028. Whether AI Overviews reinforces monopoly power has emerged as a new antitrust front.



Narrative: AI copyright litigation is shifting from "training data" to "AI outputs." The massive Anthropic settlement (USD 1.5B) signals the materialization of litigation risk, and licensing agreements will become a primary litigation-avoidance strategy in 2026. Simultaneously, the FTC's antitrust probe represents a fundamental challenge to AI industry vertical integration.


Sources: Morrison Foerster AI Trends 2026; AI Business; Copyright Alliance 2025 Year in Review; FTC.gov; WinBuzzer; Wilson Sonsini 2026 Antitrust Preview






Section 04

🇨🇳 China — Regulation & Legislation

Cybersecurity Law Amendments & Anthropomorphic AI Rules



Legislation § Cybersecurity Law Amendments — 50x Fine Increase

Promulgated on October 28, 2025 and effective January 1, 2026, the amended Cybersecurity Law dramatically strengthened AI-related penalties. Maximum corporate fines have been raised to CNY 50 million (~USD 6.9M) or 5% of previous year's turnover. Individual liability reaches up to CNY 1 million (~USD 138K). For "particularly serious consequences," network operators and CIIOs face fines up to CNY 10 million. Critically, the mandatory prior warning requirement has been abolished, enabling immediate fines even for minor violations.


Guidelines § Draft Regulation on Anthropomorphic AI Services

On December 27, 2025, the CAC published draft "Interim Measures for the Management of Anthropomorphic AI Interactive Services." These cover AI products simulating human personality, thought patterns, and communication styles for emotional interaction. Providers must clearly notify users they are interacting with AI, and are prohibited from designing services that replace social interactions, manipulate psychology, or induce addiction. Mandatory human intervention protocols are required when users express suicidal or self-harm tendencies. Implementation is expected within 2026.


Legislation § AI Ethics Norms Codified

The amended Cybersecurity Law codifies a "development-and-regulation balance" principle, explicitly supporting AI R&D and infrastructure while simultaneously refining AI ethical norms, strengthening security risk monitoring and assessment, and promoting healthy AI development.



Narrative: China's AI regulation pursues a dual strategy of promoting development while controlling risk under its "inclusive and prudent" principle. The anthropomorphic AI service regulation is the world's first attempt to legally address AI emotional dependency and addiction — a precedent likely to influence Western regulators.


Sources: CAC Cybersecurity Law Amendment (Oct 28, 2025); CAC Anthropomorphic AI Draft (Dec 27, 2025); Hunton, Latham & Watkins, Reed Smith, China Briefing analyses; IAPP Asia-Pacific Report






Section 05

🇨🇳 China — Enforcement & Penalties

Large-Scale Crackdowns & Extraterritorial Reach



Enforcement § CAC Mass Enforcement Campaign

The CAC conducted large-scale content enforcement operations, removing over 820,000 pieces of illegal content and disabling more than 2,700 non-compliant AI agents. These figures demonstrate China's aggressive enforcement posture toward generative AI services.


Enforcement § Immediate Fine System Activated

A key change under the amended Cybersecurity Law is the abolition of the mandatory prior warning. Previously, regulators were required to issue corrective warnings first, with fines only for non-compliance. Now, immediate financial penalties can be imposed for certain cybersecurity obligation failures. However, mitigating circumstances — voluntary correction, cooperation with authorities, and good-faith compliance efforts — may reduce or waive penalties.


Regulatory Investigation § Expanded Extraterritorial Enforcement

The amendments broaden extraterritorial jurisdiction, extending regulatory authority over AI services processing Chinese citizens' data or affecting Chinese national security from abroad. Active enforcement is expected from both mainland China and Hong Kong regulators throughout 2026.



Narrative: The removal of 820,000+ content pieces and disabling of 2,700+ AI agents demonstrates China's distinctive "enforcement at scale." The elimination of prior warnings combined with immediate fine authority dramatically increases compliance pressure, particularly for foreign AI companies operating in the Chinese market.


Sources: CAC enforcement reports; Cybersecurity Law amendments; DLA Piper, A&O Shearman, Mayer Brown analyses






Section 06

🇪🇺 EU — AI Act Implementation & Regulation

High-Risk System Deadline, Finland's First-Mover & Digital Omnibus



Legislation § AI Act High-Risk Systems — D-Day August 2

On August 2, 2026, the EU AI Act's core provisions become enforceable. Annex III high-risk AI systems — covering employment/hiring, credit scoring, education, law enforcement, and biometric identification — must comply. Penalties reach up to EUR 35 million (~USD 38M) or 7% of global annual turnover for prohibited AI practices; EUR 15 million (3%) for other violations; EUR 7.5 million (1%) for false information. National market surveillance authorities hold powers to suspend or recall non-compliant AI systems from the EU market.
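The tiered penalties above follow a "greater of" rule for undertakings: under Article 99, the ceiling is whichever is higher of the fixed amount or the turnover share. A minimal sketch of that arithmetic, with tier labels of our own choosing (not official terminology):

```python
# Illustrative sketch of the AI Act's tiered fine ceilings described above.
# For undertakings, Article 99 sets the ceiling at the HIGHER of the fixed
# amount and the turnover percentage; tier keys below are our own labels.

AI_ACT_TIERS = {
    # tier label: (fixed ceiling in EUR, share of global annual turnover)
    "prohibited_practice": (35_000_000, 0.07),
    "other_violation": (15_000_000, 0.03),
    "false_information": (7_500_000, 0.01),
}

def max_fine_eur(tier: str, global_turnover_eur: float) -> float:
    """Maximum fine ceiling for an undertaking in the given tier."""
    fixed, share = AI_ACT_TIERS[tier]
    return max(fixed, share * global_turnover_eur)

# A provider with EUR 2B global turnover: 7% = EUR 140M, which exceeds
# the EUR 35M fixed ceiling.
print(max_fine_eur("prohibited_practice", 2_000_000_000))  # 140000000.0
```

For smaller firms the fixed amount dominates: at EUR 100M turnover, even the prohibited-practice tier caps out at the EUR 35M fixed ceiling.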


Guidelines § Finland — First EU Member State to Activate Supervision

On January 1, 2026, Finland became the first EU member state with fully operational AI Act enforcement powers. Finland adopted a decentralized supervision model, assigning existing market surveillance authorities in product safety, road traffic, digital infrastructure, medical devices, and financial services to oversee AI systems in their respective domains. The Data Protection Ombudsman oversees high-risk AI involving fundamental rights and personal data, with Traficom serving as the single point of contact for inter-agency coordination.


Legislation § Digital Omnibus — Potential Postponement

The European Commission's late 2025 "Digital Omnibus" package could postpone Annex III high-risk obligations to December 2027. However, the package has not been formally adopted and may not be until later in 2026, so prudent organizations continue to treat August 2, 2026 as the binding deadline.


Legislation § GDPR-Like Extraterritorial Scope

The AI Act has extraterritorial reach similar to GDPR. Any organization whose AI systems are used within the EU or produce outputs affecting EU residents must comply, regardless of headquarters location.



Narrative: The EU AI Act is the most significant digital regulation since GDPR, and preparation time before August enforcement is rapidly running out. Finland's proactive supervisory activation is a model, but most member states lag in national legislation. The Digital Omnibus postponement possibility could create a "false sense of security" and warrants caution.


Sources: EU AI Act official implementation timeline; Finnish Government Press Release (Jan 2026); European Commission Digital Omnibus Proposal; DataGuard, SecurePrivacy, DLA Piper analyses; IAPP






Section 07

🇪🇺 EU — Litigation & Enforcement

Munich Ruling, Prohibited Practices & Guidance Delays



Litigation § Munich Court — AI Training Is Not Fair Use

In November 2025, the Munich Regional Court ruled in GEMA v. OpenAI that OpenAI's use of song lyrics for training does not constitute fair use. The key evidence: ChatGPT produced outputs nearly identical to original lyrics. This is the first major European court ruling rejecting fair use defense for AI training, contrasting sharply with US judicial approaches.


Regulatory Investigation § European Commission Misses High-Risk AI Guidance Deadline

The European Commission has missed its deadline for publishing guidance on high-risk AI systems. The absence of detailed implementation guidelines adds uncertainty to corporate compliance preparations, with IAPP and other expert bodies identifying this as a major obstacle to effective enforcement.


Enforcement § Prohibited AI Practices — Already In Force

Since February 2, 2025, the AI Act's prohibited AI practices provisions have been in force. Banned practices include social scoring, emotion recognition in workplace/education, exploitation of vulnerable groups, and real-time remote biometric identification (except law enforcement). Violations carry fines up to EUR 35 million or 7% of global turnover.



Narrative: The Munich ruling suggests Europe will judge AI training fair use more strictly than the US. The Commission's guidance delay ironically may give companies a "safety margin," but simultaneously increases uncertainty costs — a double-edged sword.


Sources: GEMA v. OpenAI, Munich Regional Court (Nov 2025); IAPP AI Act Guidance Report; EU AI Act Article 5 (Prohibited Practices); artificialintelligenceact.eu






Section 08

🇰🇷 South Korea — Legislation & Policy

AI Basic Act, Watermark Requirements & Grace Period



Legislation § AI Basic Act — World's Second Comprehensive AI Regulation

Passed by the National Assembly on December 26, 2024, the "Act on Fostering AI and Establishing a Trust Foundation" took effect on January 22, 2026, making South Korea the second country after the EU to establish a comprehensive AI regulatory framework. It adopts a "permit first, regulate later" principle, balancing innovation promotion with risk management.


Guidelines § Watermark & Deepfake Labeling Requirements

Mandatory labeling for generative AI outputs has been introduced. General AI-generated content may use either human-readable labels or machine-readable watermarks. However, deepfakes (content difficult to distinguish from authentic) must use only methods clearly recognizable by users. Violations carry penalties of up to KRW 30 million (~USD 21,000).


Legislation § Regulatory Grace Period — At Least One Year

The Ministry of Science and ICT has decided to defer substantive enforcement of the AI Basic Act for at least one year to minimize corporate confusion. During the grace period, only guidance will be issued, with fact-finding investigations limited to exceptional cases involving loss of life or human rights violations. Substantive penalty enforcement is expected from 2027 onward.


Regulatory Investigation § Industry Confusion — "Are We Even Covered?"

Following the Act's implementation, companies have been inundated with inquiries, the majority asking the fundamental question: "Is our company subject to this regulation?" With enforcement decree details still unfinalized, the situation has been characterized as "law is complete, but reality is in chaos." Meanwhile, watermark removal tools are spreading on social media, raising concerns about regulatory effectiveness.



Narrative: Korea's AI Basic Act was born between the symbolism of "world's second comprehensive regulation" and the pragmatic compromise of a "one-year deferral." The maximum penalty of KRW 30 million is conspicuously low compared to the EU (EUR 35M) and China (CNY 50M), raising questions about deterrence. The spread of watermark removal tools exposes fundamental limitations of technical regulation.


Sources: AI Basic Act (Act No. 20617); Ministry of Science and ICT press releases; Korea Policy Briefing; Korea Law Times; Tech42; Aju Business Daily






Section 09

🇰🇷 South Korea — Litigation & Administrative Investigation

Broadcaster Lawsuit, Data Protection Enforcement & Record Fines



Litigation § Three Major Broadcasters Sue OpenAI

On February 23, 2026, KBS, MBC, and SBS filed a copyright infringement and damages lawsuit against OpenAI at the Seoul Central District Court. This marks the first time Korean broadcasters have filed copyright litigation against a global AI company. The claim alleges unauthorized use of their news content for ChatGPT training, joining the global wave of media copyright lawsuits against AI companies.


Regulatory Investigation § PIPC Expands AI-Focused Investigations

The Personal Information Protection Commission (PIPC) has shifted its 2026 investigative approach from post-incident enforcement to "risk-based, full-lifecycle management." AI and blockchain are now among six priority investigation areas, encompassing systematic reviews of AI service data collection, processing, and utilization. Other priority areas include large-scale data processors, high-risk personal information, excessive collection, and dark patterns.


Enforcement § PIPC Strengthens Penalties — Punitive Fines Introduced

The PIPC is strengthening its penalty framework: revenue calculation now uses a 3-year average, repeat violations carry 15-30% surcharges, and punitive fines are being introduced. In 2025, the PIPC issued a record 227 enforcement actions, totaling KRW 167.7 billion (~USD 116M) in fines across 40 cases and KRW 580 million in administrative penalties across 125 cases.
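The fine mechanics stated above (a revenue base averaged over three years, plus a 15-30% surcharge for repeat violations) reduce to simple arithmetic. The sketch below is a hypothetical simplification for illustration only; the actual PIPC calculation involves statutory caps and adjustment factors not modeled here, and the function names are our own.

```python
# Illustrative-only sketch of the penalty mechanics described in the brief:
# a revenue base taken as a 3-year average, and a 15-30% surcharge band for
# repeat violations. NOT the statutory formula; caps and mitigating factors
# are deliberately omitted.

def fine_base_krw(annual_revenues_krw: list) -> float:
    """Revenue base as the average of the most recent three years."""
    last_three = annual_revenues_krw[-3:]
    return sum(last_three) / len(last_three)

def surcharged_fine(base_fine_krw: float, repeat: bool,
                    surcharge_rate: float = 0.15) -> float:
    """Apply a repeat-violation surcharge within the stated 15-30% band."""
    if not 0.15 <= surcharge_rate <= 0.30:
        raise ValueError("surcharge rate outside the stated 15-30% band")
    return base_fine_krw * (1 + surcharge_rate) if repeat else base_fine_krw

revenues = [90e9, 100e9, 110e9]            # three years of revenue, in KRW
print(fine_base_krw(revenues))             # 100000000000.0
print(surcharged_fine(1e9, repeat=True))   # 1150000000.0
```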



Narrative: The broadcasters' lawsuit signals Korea's content industry has formally entered the global AI copyright battle. The PIPC's record KRW 167.7 billion in fines for 2025 demonstrates rapid quantitative growth in data-related enforcement for the AI era, and 2026's AI-focused investigations are expected to accelerate this trend further.


Sources: Seoul Central District Court filing (Feb 23, 2026); MBC/SBS/KBS reports; PIPC 2026 Investigation Direction announcement; Herald Business; E-Today; e-focus






Section 10

Key Figures & Timeline Summary

Regulatory Penalties, Litigation Tracker & Critical Dates



Key Regulations & Penalties

Jurisdiction | Key Law/Regulation | Max Penalty | Effective
🇺🇸 US | DOJ AI Litigation Task Force | State law preemption suits | Jan 9, 2026
🇺🇸 US | FTC Operation AI Comply | Case-by-case | Sep 2024~
🇨🇳 China | Cybersecurity Law Amendment | CNY 50M (~USD 6.9M) | Jan 1, 2026
🇨🇳 China | Anthropomorphic AI Regulation | TBD | 2026
🇪🇺 EU | AI Act (High-Risk Systems) | EUR 35M (~USD 38M) | Aug 2, 2026
🇰🇷 Korea | AI Basic Act | KRW 30M (~USD 21K) | Jan 22, 2026
🇰🇷 Korea | PIPC AI Investigations | Revenue-based punitive | 2026~

Major Litigation Tracker

Case | Defendant | Amount/Status | Jurisdiction
Music copyright suit | Anthropic | USD 3.1B | 🇺🇸
Bartz v. Anthropic settlement | Anthropic | USD 1.5B settled | 🇺🇸
Six authors copyright suit | 6 AI companies | Pending | 🇺🇸
GEMA v. OpenAI | OpenAI | Fair use denied | 🇩🇪
Three broadcasters v. OpenAI | OpenAI | Pending | 🇰🇷
MS-OpenAI antitrust | Microsoft | FTC investigating | 🇺🇸

2026 Critical Timeline

Date | Event
Jan 1 | 🇨🇳 Cybersecurity Law amendments effective / 🇫🇮 Finland AI Act supervision starts
Jan 9 | 🇺🇸 DOJ AI Litigation Task Force launched
Jan 22 | 🇰🇷 AI Basic Act takes effect
Jan 28 | 🇺🇸 Universal Music et al. file USD 3.1B suit against Anthropic
Feb 23 | 🇰🇷 Three major broadcasters sue OpenAI
Mar 11 | 🇺🇸 Commerce Dept/FTC AI guidance expected
Aug 2 | 🇪🇺 AI Act high-risk system enforcement begins
2026 | 🇨🇳 Anthropomorphic AI service regulation expected






Section 11

Key Watch Points This Week

Five Critical Developments to Monitor



1. March 11 — The US AI Regulation Watershed
The Commerce Department's evaluation of state AI laws and the FTC's guidance documents are both due. Which state laws are flagged as "problematic" will determine the DOJ Task Force's litigation targets and could fundamentally reshape the US AI regulatory landscape. Colorado's SB 21-169 (AI bias prohibition) is the likely first target.


2. AI Copyright Litigation Goes Simultaneously Global
The US (6 authors vs. 6 AI companies, Universal Music's USD 3.1B suit), Germany (GEMA vs. OpenAI), and South Korea (3 broadcasters vs. OpenAI) — AI copyright lawsuits are proceeding simultaneously worldwide. Different courts are likely to reach divergent "fair use" conclusions, requiring AI companies to develop global copyright compliance strategies.


3. China's Anthropomorphic AI Regulation — A New Regulatory Paradigm
China is pioneering the world's first legal framework addressing AI emotional dependency and addiction. With AI companion services growing rapidly, emotional manipulation and mental health risks are emerging as new regulatory concerns that will likely influence Western regulators' AI safety discussions.


4. Korea's AI Basic Act — The "Law-Reality Gap"
One month after implementation, widespread corporate confusion reveals the urgency of finalizing enforcement decree details. How companies utilize the grace period will determine their AI compliance competitiveness. The PIPC's enhanced AI-focused investigations operate on a separate track and warrant close attention.


5. 2026 Midterms — Deepfake Regulation's Stress Test
AI deepfake regulation bills are proliferating ahead of the US midterm elections, but as the California precedent shows, First Amendment conflicts are inevitable. Combined with the FEC's partisan gridlock, elections may proceed within a regulatory vacuum.







This brief is prepared for informational purposes based on publicly available sources and does not constitute legal advice.

AI Legal & Regulatory Global Monitor © 2026  |  kimkj.com

Produced with Claude  |  Last updated: 2026-03-02


