27% of Organizations Hold Off on Generative AI, Wary of Security and Legal Risks

At a time when Artificial Intelligence (AI) adoption has become commonplace, a notable finding has emerged: approximately 27% of organizations are refraining from using generative AI out of concern over potential legal and security risks. According to findings reported by the New York Law Journal, many companies have chosen to forgo being early adopters of the technology, driven mainly by uncertainty about unknown variables that could lead to security breaches or legal entanglements.

This decision by organizations reflects a preventive posture rather than resistance to technology. They are focusing on identifying and evaluating the downsides before fully embracing these tools. As Cisco's Chief Legal Officer Dev Stahlkopf put it, "The risks of AI are real, but they are manageable when thoughtful governance practices are in place as enablers, not obstacles, to responsible innovation."

Legally savvy companies understand the need for comprehensive AI governance structures and effective risk management to avert potential issues before they emerge. When it comes to AI in business, the challenge is striking a balance between innovation and risk. As with other technologies companies have adopted throughout history, AI is no exception: managed well, it can deliver many advantages, including efficiency gains and cost savings.

This interdependence between law and AI is more vital than ever. For companies advancing in the AI domain, a sound legal foundation will play a key role in enabling and guiding their use of the technology. The fact that 27% of companies have chosen to defer generative AI may ultimately encourage more thoughtful engagement with risk evaluation and legal compliance before these groundbreaking technologies see wider implementation.