Emerging debates around TikTok’s data collection practices are spotlighting broader concerns about consumer privacy. Albany Law School’s Professor Raymond Brescia proposes a novel solution: a privacy grading system akin to the restaurant grading system currently used in cities like New York. This system would provide consumers with clear and consistent ratings, from “A” to “F”, based on the data protection practices of digital services and apps.
The idea is to create meaningful transparency into how companies handle sensitive consumer data. The current landscape often requires users to wade through lengthy terms of service laden with complex legal jargon to fully understand a company’s practices. Many digital platforms provide cookie and privacy disclosures as required by regulations like the EU’s General Data Protection Regulation (GDPR) and California privacy laws. However, Brescia argues that these can act as “fig leaves,” offering a façade of protection while masking the full extent of data usage.
Implementing a standard grading system would not only highlight companies with poor privacy protections but could also spur competition among businesses to earn higher grades, thereby raising overall data privacy standards. The system would require companies to declare explicitly which privacy practices they employ, ideally deterring the use of invasive data-collection techniques.
Unlike more intrusive measures, such as proposals for government ownership of foreign companies like TikTok, this disclosure-based approach would potentially sidestep constitutional concerns around free speech and governmental overreach. It presents an opportunity for Congress to consider broader safeguards that apply to domestic and foreign companies alike, aiming to protect public privacy rights comprehensively.
This grading concept emphasizes the role of transparency and accountability in corporate data management, with penalties in place for non-compliance. By leveraging consumer awareness and choice, such a system could effectively mitigate risks and serve as a proactive measure against the misuse of personal data.