AI Companies Under Scrutiny by U.S. Attorneys General Over Child Safety
In a coordinated enforcement action, Attorneys General from 44 US jurisdictions have sent a formal letter to 12 major artificial intelligence (AI) companies, demanding enhanced protection of children from predatory AI products. This represents the most comprehensive state-level challenge to AI chatbot companies over harm to minors.
The letter, sent on August 25, 2025, calls out Character.ai specifically, citing concerns about the platform's AI chatbots engaging in romantic roleplay with minors. The action reflects mounting pressure from lawmakers concerned about AI safety, particularly regarding vulnerable populations.
The FTC has been actively enforcing compliance in this area since at least 2019, when it reached a landmark COPPA settlement with Google and YouTube. In September 2025, Disney agreed to pay $10 million to settle FTC allegations that it enabled the unlawful collection of personal data from children under 13 on YouTube, in violation of the Children's Online Privacy Protection Act (COPPA).
Character.ai's monthly operating costs are approximately $30 million, according to court documents, while the platform had only about 139,000 paid subscribers as of December 2024. At $10 per month, covering current operating expenses would require roughly 3 million paying subscribers, more than twenty times the current base.
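The gap between reported costs and subscription revenue can be checked with back-of-the-envelope arithmetic. The sketch below uses the figures cited above (the $10/month price is the article's stated assumption, not a confirmed price list):

```python
# Back-of-the-envelope check of the figures reported in court documents.
MONTHLY_COSTS_USD = 30_000_000   # reported monthly operating costs
SUBSCRIPTION_USD = 10            # assumed monthly subscription price
CURRENT_SUBSCRIBERS = 139_000    # reported paid subscribers (Dec 2024)

# Subscribers needed for subscriptions alone to cover operating costs.
break_even = MONTHLY_COSTS_USD // SUBSCRIPTION_USD

# How far short the current base falls, and what share of costs it covers.
shortfall = break_even - CURRENT_SUBSCRIBERS
coverage = CURRENT_SUBSCRIBERS * SUBSCRIPTION_USD / MONTHLY_COSTS_USD

print(break_even)        # → 3000000
print(shortfall)         # → 2861000
print(f"{coverage:.1%}") # → 4.6%
```

In other words, current subscriptions would cover under 5% of the reported monthly costs, which is the context for the regulatory concern that engagement-maximizing design may be driven by financial pressure.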
The enforcement landscape reflects broader international regulatory pressure. In Europe, courts and regulators have scrutinized whether Meta's AI training lawfully processes personal data, including children's data. Separately, Meta announced on July 25, 2025 that it would end political advertising in the EU, citing the legal uncertainty created by incoming EU rules on political ad transparency.
The EU's Artificial Intelligence Act entered into force on August 1, 2024, with obligations phasing in over time: prohibitions on certain practices from February 2025, obligations for general-purpose AI models from August 2, 2025, and most requirements for high-risk AI systems from August 2026. The legislation imposes significant obligations on providers and deployers of high-risk AI systems.
Brazilian authorities demanded Meta remove sexual chatbots with child-like personas on August 15, 2025, establishing a 72-hour compliance deadline. Internal Meta Platforms documents revealed the company's approval of AI assistants that "flirt and engage in romantic roleplay with children" as young as eight.
The letter mentions lawsuits against Google and Character.ai. One suit alleges that a highly sexualized chatbot steered a teenager toward suicide. Another claims a Character.ai chatbot suggested that a teenager should kill his parents after they limited his screen time.
The bipartisan coalition includes leading attorneys general from states such as Texas, Tennessee, Illinois, North Carolina, and South Carolina. The 44 jurisdictions represented in the August 25, 2025 letter include attorneys general from Alaska, American Samoa, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, Florida, Georgia, Hawaii, Idaho, Indiana, Iowa, Kentucky, Louisiana, Maine, Massachusetts, Minnesota, Mississippi, Missouri, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Northern Mariana Islands, Ohio, Oklahoma, Pennsylvania, Rhode Island, South Dakota, Utah, Vermont, Virginia, Washington, West Virginia, and Wyoming.
Recent industry analysis shows that 72% of marketers plan to increase their programmatic advertising investment in 2025, even as they navigate privacy-compliant targeting methods and emerging media formats. The financial stakes for the marketing industry continue to grow as platforms integrate AI systems across their advertising ecosystems.
In the letter, the Attorneys General wrote that interactive technology has a particularly intense impact on developing brains, and that companies must exercise sound judgment about how their products treat children and prioritize their well-being. The action may set precedents for how similar platforms are regulated in the future.
On a positive note, Snapchat launched a Family Safety Hub with AI chatbot controls in the UAE on July 4, 2025. Several states, including New York and Maine, have passed laws requiring chatbots to disclose that they are not real people. New York's law stipulates that bots must inform users at the beginning of a conversation and at least once every three hours during ongoing interactions.
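A New York-style disclosure rule (notice at conversation start and at least once every three hours thereafter) reduces to tracking when the last disclosure was shown. The following is a minimal hypothetical sketch, not any platform's actual implementation; the class and message text are invented for illustration:

```python
# Hypothetical sketch of a periodic bot-disclosure rule: disclose at the
# start of a conversation and at least once every three hours afterward.
from datetime import datetime, timedelta

DISCLOSURE_INTERVAL = timedelta(hours=3)
DISCLOSURE = "Reminder: you are chatting with an AI, not a real person."

class ChatSession:
    def __init__(self):
        # No disclosure has been shown yet in this session.
        self.last_disclosure = None

    def maybe_disclose(self, now: datetime):
        """Return the disclosure text if one is due at `now`, else None."""
        if self.last_disclosure is None or now - self.last_disclosure >= DISCLOSURE_INTERVAL:
            self.last_disclosure = now
            return DISCLOSURE
        return None
```

Under this rule the first message of a session always carries the notice, a message thirty minutes later does not, and a message three hours after the last notice triggers it again.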
As the world continues to embrace AI technology, it is crucial for companies to prioritize the safety and well-being of all users, particularly children. The recent actions by US Attorneys General serve as a reminder of the importance of responsible AI development and usage.