AI Regulation Risks: Stifling Innovation and Undermining Global Competitiveness

The White House’s expanded oversight of artificial intelligence (AI) development could stifle both the AI marketplace and U.S. global competitiveness if the rules are not carefully crafted. The Biden administration recently issued a comprehensive executive order increasing supervision of AI technologies. The concern is that such heavy regulation could disrupt a rapidly growing AI market projected to surge to $1.3 trillion by 2032.

The policy’s absence of clear definitions and guidelines, its long list of broad concerns including privacy, safety, and discrimination, and the prospect of heavy government involvement in an emerging industry all raise questions about the effectiveness and potential impact of such regulation. The AI industry, with its widespread applications, has shown the potential to revolutionize sectors such as medicine and health care by increasing access to health care and hastening medical breakthroughs.

However, mounting pressure from AI regulatory advocates to intervene pre-emptively through policy could derail this progress. Various levels of government, including the Federal Trade Commission, and states such as Colorado, Missouri, Maryland, and Rhode Island have gradually introduced AI policies. Washington, D.C., for instance, has proposed a rule that would hold software developers responsible for potential biases in AI decision-making algorithms.

This constant influx of inconsistent regulation could do more harm than good. It could slow innovation and, instead of penalizing large corporations, obstruct small developers who lack resources, mirroring the arduous regulatory environment of the European Union’s AI Act, with its high initial and maintenance costs.

However, concerns about AI abuses aren’t unwarranted. AI tools can be used maliciously by ill-intentioned actors. It is crucial to tailor safety measures and regulations to the specific use cases of AI. Developers, who are intimately familiar with the workings of their AI and machine-learning systems, are often best positioned to identify the potential risks of their particular applications and to establish appropriate preventive measures. Interference that hinders AI innovation could curb the many societal benefits AI has the potential to offer.

Attempts to regulate AI in the name of “algorithmic fairness,” however well-intentioned, could deter the development of lifesaving technologies and cripple the U.S. economy. Given AI’s enormous promise, lawmakers should take measured steps before introducing regulations that would sacrifice U.S. technological innovation.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Article written by
Adam Thierer, Senior Fellow for the Technology & Innovation Team at the R Street Institute, and
Neil Chilson, Senior Research Fellow at the Center for Growth and Opportunity at Utah State University and former Chief Technologist for the FTC.

Full article published in Bloomberg