Meta has apparently paused plans to process significant amounts of user data to introduce new AI experiences in Europe. The decision follows pushback from data regulators, who disputed the tech giant’s claim that it had ‘legitimate interests’ in processing data from European Union (EU) and European Economic Area (EEA) users — including personal posts and pictures — to train future AI tools.
Meta’s primary European regulator, the Irish Data Protection Commission (DPC), confirmed in a statement that this decision was a result of ongoing discussions about compliance with the EU’s stringent data privacy laws, primarily the General Data Protection Regulation (GDPR). ‘The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA,’ the statement read. ‘This decision followed intensive engagement between the DPC and Meta.’
The European Center for Digital Rights, known as Noyb, had filed 11 complaints across the EU and planned to submit more to obstruct Meta’s AI initiatives. The DPC had initially given Meta AI the go-ahead but later reversed its position, according to Noyb. In a blog post, Meta had previously outlined plans for new AI features in the EU, such as customized stickers and a ‘virtual assistant’ capable of answering questions and generating images. Meta argued that training on EU users’ personal data was essential to reflect ‘the diverse cultures and languages of the European communities who will use them.’
Prior to halting its plans, Meta intended to rely on the legal basis of ‘legitimate interests’ to process data, claiming it was needed to improve AI at Meta. However, Noyb and EU data regulators contended that this legal basis did not meet GDPR standards. The Norwegian Data Protection Authority suggested that ‘the most natural thing’ would have been to seek users’ consent before using their posts and images in this manner.
Rather than seeking consent, Meta had provided EU users with an opt-out option until June 26. Noyb alleged that this approach involved ‘dark patterns’ aimed at preventing users from opting out of AI data usage, thereby collecting as much data as possible for undisclosed AI technologies. Noyb further warned that once users’ data enters the system, it becomes nearly impossible to remove it.
Noyb indicated that the most plausible reason for Meta pausing its plans was due to pressure from EU officials. However, the privacy advocacy group cautioned EU users that Meta’s privacy policy has not yet been fully updated to reflect the pause. ‘We welcome this development but will monitor this closely,’ said Max Schrems, Noyb chair, in a statement to Ars Technica. ‘So far, there is no official change of the Meta privacy policy, which would make this commitment legally binding. The cases we filed are ongoing and will need a determination.’
As of this writing, Meta has not commented on the matter.