Recent developments by federal authorities point towards a framework for corporate enforcement that prioritizes individual accountability, incentivizes whistleblowers, focuses on national security risks, and addresses potential pitfalls of artificial intelligence (AI).
The Department of Justice (DOJ) has clearly stated its emphasis on individual accountability. Both Attorney General Merrick Garland and Deputy Attorney General Lisa Monaco have called it the agency's "first priority." Garland argues that the fear of individual prosecution is the most powerful deterrent against corporate misconduct, an approach that is likely to influence how federal prosecutors charge and resolve cases in the future.
Whistleblower programs are also garnering attention. The DOJ is implementing its own whistleblower program to "fill in the gaps" left by other rewards programs. The new initiative is expected to reward whistleblowers with substantial payments drawn from the proceeds of corporate misconduct prosecutions, and it appears aimed particularly at encouraging corporate insiders to report wrongdoing.
Moreover, two of the largest US Attorney's offices are rolling out their own whistleblower programs. The Northern District of California recently announced a new initiative to combat fraud of the kind seen in the prosecutions of Theranos and HeadSpin executives, and the Southern District of New York launched a whistleblower program last month. Under both voluntary disclosure policies, whistleblowers must be the first to disclose the information in order to qualify.
National security, particularly the protection of US intellectual property and the prevention of the transfer of sensitive technologies, continues to be a focal point for federal agencies. The recent case of a Chinese national charged with theft of AI-related trade secrets from Google, United States v. Ding, illustrates this emphasis. The prominent role of the DOJ's National Security Division (NSD) means companies must carefully weigh the implications of voluntarily disclosing violations, particularly because a disclosure made to one agency may not be credited by others.
The rapid growth and proliferation of AI have drawn increased federal attention, and companies must ensure proper safeguards against its misuse. The DOJ appointed its first chief science and technology adviser and chief AI officer, and set guidelines to seek increased penalties for certain AI-assisted crimes. As part of their compliance programs, companies are expected to assess the risks posed by disruptive technologies.
In light of these emerging enforcement priorities and compliance expectations, corporate decision-makers should reassess their internal strategies, allocate resources to meet regulatory requirements, and close any remaining compliance gaps. More insights on this topic are provided in this Professional Perspective.