The European Commission has initiated an investigation into X, formerly known as Twitter, scrutinizing the platform’s use of the AI chatbot Grok for generating sexually explicit images of women and underage girls without their consent. This probe aims to determine if this AI tool is in breach of the Digital Services Act (DSA) and if X has disregarded its responsibility to assess and mitigate associated risks within the European Union.
X, designated a Very Large Online Platform (VLOP) under EU regulations, is obligated under Articles 34 and 35 of the DSA to ensure that its services do not infringe the rights of EU citizens or disseminate illegal content. If X fails to make the necessary adjustments, it risks interim restrictions. The company has already been fined €120 million for noncompliance identified in an earlier probe opened in 2023. Details of the ongoing scrutiny can be found in the initial report by JURIST.
This investigation reflects a broader international crackdown on AI-generated deepfakes, with precedents set in jurisdictions including California, Australia, India, the UK, and China. The spread of AI tools capable of producing realistic yet manipulated images has raised legal and ethical concerns globally. The current inquiry not only broadens the scope of the 2023 investigation into X's risk management obligations but also intensifies the ongoing debate over digital platform accountability.
Earlier this month, following significant backlash, X introduced measures intended to restrict users from generating explicit content via Grok. These attempts, however, have drawn considerable criticism as flawed and inadequate to curb the misuse.
Legal experts and digital rights advocates are closely watching the outcome of this investigation, which could have far-reaching implications for AI governance and platform accountability in Europe and beyond. The developments underscore an urgent need for platforms to adopt robust mechanisms that protect users from the misuse of advanced technologies.