The swift ascent of artificial intelligence (AI) technologies in corporate settings, particularly over the past six months, has compelled in-house legal teams to draw up ethical and regulatory “guard rails”. But the pace of change and the pressure to innovate have left many in-house lawyers developing these new AI governance processes ‘on the fly’.
Misha Benjamin, general counsel of tech company Sama AI, has emphasized the need for lawyers to be deeply involved in AI risk and governance from the outset. He advises in-house lawyers to establish robust data practices and to participate actively in the early stages of any AI project, warning: “if you don’t have the right data practices, you are going to hit trouble later on.”
Many larger organizations have started compiling ‘AI registers’ to track the technology currently in use, while many smaller companies are only beginning to log which AI tools their employees rely on. In the absence of explicit AI regulations, in-house lawyers are expected to create AI governance structures themselves, often adjusting them in real time as new use cases emerge.
Another challenge companies face is setting the right approach to AI projects that use client data or trade secrets, or that rely on complex third-party AI systems. These projects require lawyers to understand not only how the technology functions and the data it relies on, but also the ethical and regulatory implications of its use. The UK’s Information Commissioner’s Office and the French regulator, the CNIL, have in the past levied hefty fines on companies such as Clearview AI for flouting GDPR rules.