In an era where generative AI is increasingly integrated into business operations, regulators are setting expectations for company executives to actively manage the unique risks associated with such technologies. As businesses embark on their journey towards responsible AI adoption, ensuring transparency in AI use, particularly in how consumer data is processed, remains a significant focus.
The advent of AI-powered tools brings about an inherent conflict between the necessity to process vast amounts of data—often including personally identifiable information—and the obligations imposed by privacy laws. These laws mandate data minimization and purpose limitation, and are rooted in principles of transparency and respect for individual rights, such as data access, correction, and deletion.
To mitigate these privacy risks, legislatures have begun adopting AI-specific laws, while regulators, including the Federal Trade Commission, are broadening their interpretations of existing laws to specifically address AI-related concerns. A notable example is the FTC's complaint against Rite Aid under Section 5 of the FTC Act, addressing the misuse of AI-based facial recognition technology.
At the state level, Colorado set a precedent by enacting comprehensive AI legislation, the Colorado AI Act, which obliges developers and deployers of high-risk AI systems to prevent algorithmic discrimination. Furthermore, the California Privacy Protection Agency has issued draft regulations under the California Consumer Privacy Act covering businesses’ use of automated decision-making technology.
As AI developments continue to challenge traditional cybersecurity and privacy frameworks, business leaders must establish robust AI governance programs, supported by centralized governance bodies and documented controls, while promoting transparency, accountability, and AI literacy.
Even as AI tools bring new efficiencies to cybersecurity defenses, threat actors are exploiting the same technologies to launch sophisticated cyberattacks, such as creating deepfakes to facilitate social engineering schemes. To counter these threats, the New York Department of Financial Services has recently issued guidance on mitigating AI-specific cybersecurity risks.
The onus is on management to review and update cybersecurity policies frequently to address emerging AI risks, drawing on regulatory frameworks such as the New York guidance. An effective AI governance program should incorporate generative AI-specific risk training and establish comprehensive third-party vendor risk management protocols.
The widespread adoption of generative AI will continuously test companies’ cybersecurity and privacy risk mitigation strategies. Therefore, it is vital for business leaders to prioritize the development and implementation of a robust AI-specific governance framework.
For further insights, the original article by Cleary Gottlieb attorneys can be accessed on Bloomberg Law.