The generative artificial intelligence era is rapidly transforming the business landscape, compelling corporate boards to navigate a challenging matrix of opportunities and risks. As enterprises consider or advance their implementation of generative AI, the role of boards is evolving to ensure the technology’s strategic integration aligns with broader corporate goals.
While some organizations are in nascent stages of AI adoption, others have begun embedding generative AI in their operations. The board’s primary responsibility is to safeguard the brand and steer the organization towards future growth. Boards must, therefore, look beyond the inherent technological value to consider broader issues such as risk, reputation, and long-term value creation. An effective governance approach balances experienced business acumen with evolving strategic needs, as detailed in Deloitte’s generative AI use cases study.
A fundamental question boards must address is oversight structure: how generative AI fits within the organization’s enterprise strategy, operating plans, and risk management frameworks. Boards need to decide whether AI governance falls under the full board or under specific committees such as audit, risk, technology, or compliance. Given the dynamic nature of AI deployment, flexibility in oversight structure is crucial, and may warrant establishing AI-specific committees.
In terms of skills and qualifications, boards can contribute significantly by offering external perspectives that help management calibrate its approach to operational challenges. Establishing AI literacy among board members is essential; this can be achieved through learning modules, expert briefings, or a skills matrix that identifies and fills gaps. As business strategies evolve, boards might add expertise by creating new board seats or retiring existing members. Such deliberate approaches keep board members relevant and equipped to oversee AI deployment effectively.
Maintaining trust in AI is another crucial issue for boards, as trust remains a major barrier to large-scale AI deployment. Each AI use case must be assessed on its own merits, with a robust ethics framework guiding the strategic approach. A nuanced understanding of AI risks can support governance efforts that uphold the organization’s integrity and foster responsible AI use; Deloitte’s insights on AI risk and governance offer further guidance here.
Ultimately, boards need to consider the human element and the broader societal impact of generative AI. Positioning AI as a tool to augment rather than replace human work is key to building trust and maximizing business value. Diverse perspectives on the board can enable a more thorough consideration of AI’s outcomes, promoting an approach centered on societal benefits and ethical use. By focusing on human trust and net societal benefits, boards can help organizations harness generative AI for equitable and sustainable value creation.
For further reading, see the full article on Bloomberg Law: Boards Need to Weigh Risk and Value With Generative AI Deployment.