Generative artificial intelligence (AI) is increasingly becoming a focal point for pharmaceutical leaders as it promises to transform drug discovery, preclinical testing, and clinical trials. Despite its potential for significant cost savings and revenue gains, this AI-driven transformation is not without legal risks: companies must address potential issues related to product liability, intellectual property, and data privacy. Yet even as these challenges grow in importance, there is limited guidance available on managing the associated legal risks effectively.
A 2024 study by Bain & Company revealed that 75% of pharmaceutical industry leaders consider AI a top priority for C-suite and board discussions. Given this landscape, the authors suggest that pharmaceutical companies adopt a three-tiered, bottom-up corporate governance framework to mitigate legal risks associated with AI-enabled clinical development.
- First Line of Defense: Companies should form a permanent AI standing committee consisting of personnel from various functions such as corporate development, finance, legal/compliance, IT, government affairs, and investor relations. The role of this committee can be summarized by the four Rs—record, review, recommend, and report. It serves as the initial defense mechanism by maintaining comprehensive records, reviewing current AI usage, drafting recommendations, and generating quarterly reports for higher management.
- Second Line of Defense: Companies should consider appointing a Chief AI Officer and establishing a C-suite AI committee that includes key figures such as the Chief Scientific Officer and Chief Compliance Officer. Its mandate encompasses the four As—authorize, analyze, act, and advise. The committee oversees the AI standing committee, analyzes its findings, acts on its recommendations, and advises the CEO on AI-related decisions.
- Final Line of Defense: The board of directors is responsible for primary oversight of AI-related risks. The board’s function revolves around the four Cs—care, competency, consult, and control. Board members should develop AI competency, consult external advisors on regulatory matters, and ensure proper disclosure controls to prevent AI washing (overstating a company’s AI capabilities in public disclosures). Given that the SEC has prioritized clear and accurate AI disclosures, stringent oversight is crucial.
A recent Bloomberg article emphasized that AI-enabled clinical development is not immune to legal risks and urged industry leaders to implement robust governance structures to address these issues. For a deeper dive into corporate governance and AI management, read more on Bloomberg Law.