The city of Baltimore filed a landmark lawsuit on March 24 against xAI, Elon Musk's artificial intelligence company, over the platform's Grok chatbot generating non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM). It marks the first time a major American city has taken an AI company to court over deepfake pornography — a move that legal experts say could reshape how AI developers are held accountable for the content their systems produce.
The suit alleges that Grok, xAI's flagship AI assistant available on X (formerly Twitter), can be prompted into generating sexually explicit imagery of real individuals without their consent, including minors. Baltimore's city attorneys argue that xAI knew or should have known about these capabilities and failed to implement safeguards adequate to prevent abuse at scale.
"This is not about hypothetical harms," Baltimore City Solicitor Ebony Thompson said in a statement. "Children in our city have been victimized. Adults have had their images weaponized. We are holding the companies that made this possible responsible."
What Baltimore Filed
The complaint, filed in the Circuit Court for Baltimore City, names xAI as the sole defendant and brings claims under Maryland's newly enacted No Artificial Intelligence Fake Pornography Act, federal child exploitation statutes, and common law tort theories including negligence and products liability.
At the core of Baltimore's argument is a straightforward allegation: xAI built and deployed a product — Grok — that it marketed as a safe, advanced AI assistant, while knowingly or recklessly allowing that product to generate sexually explicit deepfakes. The city contends that xAI's safety testing was insufficient, its content filters were porous, and that internal red-teaming results flagging these vulnerabilities were deprioritized in favor of a faster product rollout.
The suit points to a series of documented incidents: screenshots and reports shared by Baltimore school administrators, law enforcement, and parents showing Grok-generated NCII of identifiable local residents, including images of students at Baltimore City Public Schools. While the city stops short of providing victim counts in the publicly filed complaint, the filing references an ongoing investigation by the Baltimore Police Department's Internet Crimes Against Children (ICAC) task force.
Baltimore is seeking injunctive relief — a court order requiring xAI to implement content filters that meet a legally specified minimum standard — alongside compensatory and punitive damages. The city is also asking the court to appoint an independent technical monitor to audit Grok's safety architecture on a rolling basis.
The timing is notable. The filing comes roughly six weeks after xAI released Grok 3, which the company promoted heavily for its reduced restrictions compared to competitors like OpenAI's ChatGPT and Google's Gemini. Critics argued at the time that those "reduced restrictions" included a weakened content moderation posture on explicit imagery. xAI countered that Grok 3 operates within X's platform guidelines.
Baltimore's complaint argues that distinction is legally irrelevant: the harm flows from xAI's model, and xAI cannot outsource its liability to X's terms of service.
Grok's Content Safety Failures
Grok has drawn scrutiny from safety researchers since its initial public release in late 2023. Unlike some competitors that apply blanket filters against explicit content generation, xAI has at various points marketed Grok's "unfiltered" approach as a feature differentiating it from more restrictive alternatives.
In practice, that positioning created exploitable gaps. Researchers at Stanford Internet Observatory and the Center for Countering Digital Hate documented in separate 2025 reports that Grok could be coaxed into generating sexualized imagery of named real individuals through relatively simple prompt construction — no jailbreaking required in some cases. One widely circulated finding showed Grok producing explicit fictional narratives involving public figures when users framed requests as "creative writing" or "fan fiction."
The CSAM dimension is more legally severe. Federal law under the PROTECT Act criminalizes AI-generated sexual imagery of minors regardless of whether a real child was involved in creating the source material. Baltimore's complaint alleges that Grok generated images meeting this legal definition when prompted with requests that described minors, even when those prompts did not identify a specific real child.
xAI has not disclosed its content classifier architecture in detail, but public documentation suggests Grok relies primarily on a combination of prompt-level filtering and post-generation review — a reactive rather than preventative approach. Safety researchers argue this architecture is structurally inadequate for preventing CSAM because it attempts to catch harmful outputs after the model has already generated them, rather than preventing the generation pathway from activating in the first place.
For context: OpenAI, Google, and Anthropic have all implemented what the industry calls "constitutional" or "system-level" constraints that bake refusal behaviors into the model's core training, making them harder to circumvent through prompt manipulation. xAI's approach, critics contend, treated content moderation as a layer applied on top of the model rather than baked into it.
NCII and CSAM: The Legal Framework
Non-consensual intimate imagery sits at an intersection of state, federal, and emerging AI-specific law — a patchwork that Baltimore's suit is designed to test.
At the federal level, the SHIELD Act (Stopping Harmful Image Exploitation and Limiting Distribution) was passed in 2022 and created civil remedies for victims of NCII, including deepfakes. The law allows individuals to sue distributors and platforms but historically has had limited reach over the original AI developers. Baltimore's suit stretches that reach by arguing xAI is not merely a platform but the manufacturer of a product that produces NCII by design.
Maryland state law has moved aggressively on this front. The state's AI fake pornography statute, enacted in 2025, explicitly covers AI-generated content and creates liability for companies that "knowingly or recklessly" deploy systems capable of producing NCII. The law's legislative history shows it was drafted with exactly this type of case in mind — civil enforcement by government entities, not just individual victims.
The CSAM dimension invokes even stricter federal authority. The PROTECT Act's provisions on virtual child pornography, upheld by federal courts after decades of litigation, apply to AI-generated content as long as it depicts someone who "appears to be" a minor in a sexual context. This is strict liability territory — there is no "we didn't know" defense once the government establishes the content existed and the developer had the capability to prevent its generation.
Baltimore's use of city government as plaintiff rather than individual victims is strategically significant. Individual victims of deepfake NCII face enormous practical barriers to litigation: they must identify themselves publicly, establish standing, and fund litigation against well-resourced tech companies. Cities face none of those barriers. They have legal standing as parens patriae — the legal doctrine allowing government to act on behalf of citizens who cannot effectively protect themselves — and they have access to law enforcement investigation resources that individual plaintiffs lack.
This mirrors the strategy cities used against pharmaceutical companies in opioid litigation: aggregate harm, governmental standing, and the threat of sustained legal pressure that individual suits cannot generate.
Why This Is Precedent-Setting
Baltimore's lawsuit stands apart from prior legal actions against AI companies for several compounding reasons.
First, it is the first municipal government to file directly against an AI developer over content harms. Previous lawsuits have come from individuals, advocacy organizations, or federal regulators. A city government filing puts institutional resources behind the litigation and signals that AI accountability is becoming a mainstream municipal policy concern — not a niche civil liberties issue.
Second, the suit bypasses Section 230, the federal statute that has historically shielded platforms from liability for user-generated content. Baltimore's legal theory does not argue that xAI failed to moderate user content — it argues that xAI itself is the content generator. A model that produces NCII when prompted is, under this framing, more analogous to a defective product than to a passive platform hosting third-party speech. If courts accept this framing, it would create a liability category for AI-generated harm that Section 230 was never designed to cover.
Third, the injunctive relief Baltimore is seeking — a court-mandated technical audit and minimum content safety standard — would, if granted, effectively give a federal judge oversight of an AI company's product architecture. That is a form of judicially enforced regulation that Congress has not yet achieved legislatively, and it would set a template for courts to fill the regulatory vacuum that current AI governance debates have not resolved.
Legal scholars who spoke to reporters covering the filing noted that the case closely parallels the structure of tobacco litigation in the 1990s, where cities and states began suing manufacturers directly after federal regulation stalled. That litigation ultimately produced the Master Settlement Agreement, the largest civil settlement in US history. Few observers expect a single Baltimore case to produce anything comparable, but the structural parallel — municipal government, product liability theory, injunctive relief demand, aspirations toward industry-wide behavioral change — is hard to miss.
The case also arrives as other cities are watching. City attorneys in Seattle, Chicago, Los Angeles, and Philadelphia have publicly stated they are monitoring the Baltimore filing. At least two of those cities have active ICAC task force investigations involving AI-generated imagery.
xAI's Response
xAI issued a brief statement on March 24 acknowledging the filing and disputing its central allegations.
"Grok operates within a comprehensive safety framework that complies with all applicable laws," the statement reads. "xAI takes child safety and the prevention of non-consensual content with the utmost seriousness. We believe the allegations in this complaint are factually and legally unfounded and will defend our platform vigorously."
The company did not address the specific technical claims in Baltimore's complaint — the adequacy of its content classifiers, the post-generation review architecture, or the findings of researchers who documented Grok's ability to generate NCII without jailbreaking.
Elon Musk, who regularly comments on litigation involving his companies, had not posted publicly about the lawsuit as of publication time.
Legal analysts note that xAI's initial public posture — brief denial, no specific engagement with technical allegations — is standard for early-stage litigation. The more revealing position will come in the company's formal answer to the complaint, which will need to address Baltimore's factual allegations directly or admit them by default.
xAI's lawyers will likely advance several defenses: First Amendment protections for AI-generated speech, federal preemption arguments under the PROTECT Act (which might be read to occupy the field for CSAM regulation), and challenges to the city's standing and damages calculations. The Section 230 defense may be harder to invoke given Baltimore's product liability framing, but xAI's counsel will almost certainly attempt it.
How Other Cities May Follow
Baltimore's filing has been coordinated with the National League of Cities, a Washington-based advocacy organization that represents more than 20,000 US municipalities. The organization confirmed it has been in discussions with city attorneys in multiple states about template litigation strategies for AI content harms.
The coordination matters because individual city lawsuits carry limited leverage against a well-funded technology company. But a wave of coordinated municipal litigation — similar to the coordinated state attorneys general actions against social media companies over youth mental health harms — creates a very different legal and political environment.
Several factors are accelerating that coordination. School administrators across the country have been reporting incidents involving AI-generated NCII of students since 2024. Law enforcement agencies have documented a surge in sextortion cases where perpetrators use AI tools, including Grok, to generate explicit imagery for coercion. And the political calculus for city governments has shifted: filing against an AI company over child safety is an easy political win in a way that, say, data privacy litigation is not.
States are also moving. Maryland's attorney general announced on March 25 that the state is reviewing the Baltimore complaint and considering parallel state-level action. New York's attorney general office confirmed it has an open investigation into AI-generated NCII, and Minnesota's attorney general filed a separate action against a different AI image generator earlier this month.
The emerging pattern suggests that 2026 may see AI companies face a litigation landscape resembling what social media companies encountered between 2021 and 2024 — diffuse, but cumulative pressure from governmental actors that individually lack the power to force change but collectively cannot be ignored.
The Deepfake Regulation Landscape
The Baltimore lawsuit lands in the middle of a rapidly evolving — but still incomplete — regulatory environment for AI-generated intimate imagery.
At the federal level, the DEFIANCE Act, signed into law in 2024, created federal civil remedies for victims of AI-generated NCII. But the law was designed for individual victim enforcement, not governmental action, and its damages provisions are calibrated for individual cases. Baltimore's suit will test whether the DEFIANCE Act's liability framework can be extended to municipal plaintiffs.
Congress has been debating broader AI content regulation for two years without reaching consensus. The TAKE IT DOWN Act, which would require platforms to remove AI-generated NCII within 48 hours of notice, passed the Senate in early 2025 but stalled in the House over First Amendment concerns. A comprehensive AI safety bill that would impose mandatory content standards on foundation model developers has not made it out of committee.
This legislative gridlock is precisely why Baltimore's litigation strategy is significant. Courts can move where Congress cannot, and a favorable ruling in the Baltimore case could impose content safety standards on AI developers that mirror what proposed legislation would have required.
Internationally, the regulatory picture is clearer — and moving faster. The EU AI Act explicitly classifies systems that generate CSAM as prohibited AI applications, with enforcement authority vested in member state regulators. The UK's Online Safety Act imposes affirmative content safety duties on platforms, including AI-generated content. Australia's Online Safety Act was amended in 2025 to explicitly cover AI-generated intimate imagery. The EU AI Act's treatment of NCII represents the most comprehensive international framework to date, and Baltimore's legal team is explicitly referencing EU standards in their argument about what constitutes adequate content safety measures.
This cross-border regulatory pressure matters because xAI's Grok operates globally. A finding that its safety architecture is inadequate under US law would have implications for its compliance posture across multiple jurisdictions simultaneously.
What AI Companies Should Do Now
The Baltimore filing — and the municipal litigation wave it may trigger — sends a clear signal to AI developers about the direction of legal risk.
Companies whose models can generate NCII or CSAM face exposure under a rapidly expanding set of theories: product liability, state AI-specific statutes, federal child exploitation law, and now municipal litigation backed by law enforcement investigation resources. The "we're a platform, not a publisher" defense is structurally unavailable when the AI system itself is generating the content.
Several concrete steps are emerging as industry standards that courts and regulators are beginning to treat as baseline requirements.
Training-time constraints are increasingly viewed as non-negotiable. Building refusal behaviors into model weights — rather than relying on post-generation filters — is both more effective and more legally defensible. Companies that can demonstrate their models were trained not to generate NCII, rather than filtered after the fact, are in a substantially better legal position.
Red-teaming documentation matters. Courts and regulators will ask whether AI developers tested for NCII and CSAM generation before deployment. Companies that have detailed, dated red-team reports showing they identified and addressed these vulnerabilities have a defensible record. Companies that cannot produce such documentation face a negligence exposure regardless of their current safety posture.
Independent audits are becoming table stakes. Baltimore's demand for a court-appointed technical monitor reflects an emerging expectation — from legislators, regulators, and now courts — that AI companies submit to third-party safety verification. Several industry leaders have already begun voluntary external audits; for companies in the NCII exposure zone, the voluntary option may be closing.
The Meta Ray-Ban privacy lawsuit earlier this year established that hardware-enabled surveillance by tech companies creates tort liability even absent regulatory action. The Baltimore case extends that principle to AI-generated content: the question is no longer whether AI companies can be sued for the outputs their systems produce, but how comprehensively courts will apply that liability.
California's AI governance bills pending in Sacramento would impose mandatory disclosure requirements on AI systems capable of generating synthetic intimate imagery — requirements that, if passed, would create an additional compliance layer atop the litigation risk Baltimore has now demonstrated.
What Comes Next
The Baltimore case will move through Maryland's Circuit Court with a preliminary hearing expected in May. The city will likely seek a temporary restraining order requiring xAI to implement interim content safety measures while litigation proceeds — a motion that will force the court to make an early determination about the plausibility of Baltimore's legal theory.
xAI will almost certainly seek removal to federal court, arguing that Baltimore's state-law claims are preempted by federal statute. That procedural battle could take months to resolve and will itself generate significant precedent about the proper forum for AI content harm litigation.
Whatever the outcome in Baltimore's specific case, the filing has already achieved one of its likely objectives: it has made clear to every AI company operating in the United States that municipal governments are willing and able to use their legal and investigative resources to pursue content safety accountability. The question is no longer whether AI developers face legal liability for NCII and CSAM — it is how many governments are prepared to assert it.
FAQ
Why is Baltimore the first city to sue an AI company over deepfakes?
While cities have sued tech companies before — most notably in opioid and social media youth harm litigation — the combination of AI-specific state statutes, documented local harm incidents, and coordinated legal strategy through the National League of Cities created the conditions for Baltimore to move first. The city's strong ICAC task force documentation of local cases gave prosecutors a concrete factual record to build on, and Maryland's 2025 AI fake pornography statute provided a clean state-law hook that avoided some of the federal preemption complications that might have deterred other cities.
Does Section 230 protect xAI from this lawsuit?
Baltimore's legal team has structured the complaint specifically to minimize Section 230's reach. The suit frames xAI not as a platform hosting user-generated content but as the manufacturer of a product — Grok — that generates NCII itself. Section 230 was designed to protect platforms from liability for content created by third parties; it has never been clearly applied to protect AI developers from liability for content their own systems generate. Whether courts accept this distinction is one of the central legal questions the Baltimore case will resolve.
What is the difference between NCII and CSAM in this context?
Non-consensual intimate imagery (NCII) refers to sexually explicit content depicting a real person without their consent — in the AI context, this typically means deepfakes generated using a person's likeness. CSAM (child sexual abuse material) refers specifically to sexual content depicting minors, and AI-generated CSAM is covered by the PROTECT Act's virtual imagery provisions regardless of whether a real child was involved. The two categories carry different legal frameworks: NCII is primarily a civil harm with remedies under state law and the DEFIANCE Act, while CSAM triggers federal criminal law in addition to civil remedies. Baltimore's suit addresses both categories, which significantly expands both the legal exposure and the urgency of the case.
Could this case force changes to how Grok is built?
Yes — that is the explicit goal of Baltimore's injunctive relief demand. The city is asking the court to require xAI to implement content safety measures meeting a court-specified minimum standard, and to submit to ongoing independent technical auditing. If granted, these remedies would give a Maryland circuit court ongoing jurisdiction over Grok's safety architecture — effectively functioning as judicially imposed regulation. Whether a court will grant this type of structural injunction against an AI developer is an open question, but the demand itself signals that municipal plaintiffs are prepared to pursue product redesign, not just monetary damages.
What can other AI companies learn from this lawsuit?
The primary lesson is that NCII and CSAM generation capability is now a first-order litigation risk, not a secondary compliance concern. Companies that cannot demonstrate training-time constraints, documented red-team results, and ongoing independent safety audits are exposed to the same liability theory Baltimore is asserting against xAI. The secondary lesson is that municipal governments are a new and formidable enforcement vector — one with investigative resources individual plaintiffs lack and political incentives that make AI child safety litigation an easy policy priority. AI companies that have not yet conducted comprehensive audits of their systems' NCII and CSAM generation capabilities should treat the Baltimore filing as a deadline.