Canada Launches Investigation Into X, Formerly Twitter, Over AI Privacy Issues

The Canadian privacy landscape is once again under the spotlight as the Privacy Commissioner, Philippe Dufresne, has launched an investigation into X, the social media platform formerly known as Twitter. The probe, announced on Thursday, seeks to determine whether X is complying with Canadian privacy legislation, particularly in how the platform collects, uses, and discloses personal data to advance its artificial intelligence (AI) projects. The inquiry marks a pivotal moment in how technology giants are scrutinized under current privacy frameworks.

The focal point of the investigation is X’s integration with the AI chatbot Grok, developed by xAI, a company founded by tech entrepreneur Elon Musk. Following Twitter’s 2023 rebrand to X, Grok was introduced as a rival to emerging AI systems such as DeepSeek and OpenAI’s ChatGPT. Grok’s latest version, Grok-3, requires large volumes of data for effective training, sparking concerns about the ethical and legal implications of leveraging users’ personal data. Concerns have also been raised that Canadians’ data could be used to influence political decisions, especially as AI capabilities continue to evolve.

The investigation is grounded in the Personal Information Protection and Electronic Documents Act (PIPEDA), which prescribes how private-sector organizations in Canada must handle personal data. Under PIPEDA, the Office of the Privacy Commissioner of Canada (OPC) is empowered to conduct independent inquiries into privacy complaints, solidifying its role in safeguarding consumer data rights in the commercial sphere. The results of such investigations can be publicly disclosed if doing so is deemed to be in the public interest.

The investigation follows a recent complaint from Canadian MP Brian Masse, who raised concerns over the potential exploitation of citizens’ data to train AI models and possibly manipulate their political preferences. Political parties such as the New Democratic Party are advocating for greater transparency and accountability in how algorithms are configured and deployed.

This probe also intersects with broader international tensions, such as the ongoing dispute over digital services tax between Canada and the United States, casting technology firms in a broader geopolitical narrative. As this investigation unfolds, it may set precedents in data protection laws concerning AI advancements, impacting how tech giants engage with privacy rights across borders.