The health care industry has a long history of working with artificial intelligence, as Kenneth Rashbaum of Barton explains. The sector's experience with AI highlights critical decisions and challenges that are instructive for other industries contemplating the adoption of AI technologies.
For years, health care organizations have implemented rules-based AI tools to support various functions such as diagnostic decision-making, robotic surgery guidance, and radiologic image review. These tools analyze vast sets of medical records and clinical data to assist in disease analysis, provide medication safety prompts, and assess procedure precision. For instance, the clinical decision support systems in computerized provider order entry (CPOE) platforms can alert clinicians about potential medication errors based on patient data.
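The rules-based checks described above can be illustrated with a minimal sketch. This is not any real CPOE system's logic; the drug names, interaction rules, dose limits, and patient-record shape are all hypothetical, chosen only to show how a deterministic rule engine can flag a proposed order against recorded patient data.

```python
# Illustrative rules-based medication-safety check, loosely modeled on
# the kind of alerts a CPOE clinical decision support system produces.
# All rules and data below are hypothetical examples, not clinical guidance.

INTERACTION_RULES = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
}

MAX_DAILY_MG = {"acetaminophen": 4000}  # illustrative dose ceiling only

def medication_alerts(patient, new_order):
    """Return a list of alert strings for a proposed medication order."""
    alerts = []
    drug = new_order["drug"].lower()

    # Rule 1: check the order against the patient's recorded allergies.
    if drug in {a.lower() for a in patient.get("allergies", [])}:
        alerts.append(f"ALLERGY: patient is allergic to {drug}")

    # Rule 2: pairwise interaction check against active medications.
    for current in patient.get("active_meds", []):
        pair = frozenset({drug, current.lower()})
        if pair in INTERACTION_RULES:
            alerts.append(
                f"INTERACTION: {drug} + {current.lower()}: {INTERACTION_RULES[pair]}"
            )

    # Rule 3: flag orders exceeding a simple daily-dose ceiling.
    if drug in MAX_DAILY_MG and new_order.get("daily_dose_mg", 0) > MAX_DAILY_MG[drug]:
        alerts.append(f"DOSE: {drug} exceeds {MAX_DAILY_MG[drug]} mg/day")

    return alerts

patient = {"allergies": ["Penicillin"], "active_meds": ["Warfarin"]}
print(medication_alerts(patient, {"drug": "Aspirin", "daily_dose_mg": 325}))
```

The point of the sketch is the design, not the rules themselves: because every alert traces back to an explicit, auditable rule, clinicians can see why a prompt fired, which is precisely the transparency that becomes harder to guarantee with generative models.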
However, the deployment of AI in health care is not without its challenges. One critical concern is whether these tools indeed add measurable value or require so much oversight and verification that their efficiency is nullified. This creates a tension between the potential benefits of AI and the resources necessary to monitor and verify its outputs. Continuous human oversight is imperative, as any diagnostic tool is limited to the data it was trained on and may miss critical patient information not included in those records.
Moreover, generative AI presents unique issues: such tools may "hallucinate," producing inaccurate or entirely fictional information. This necessitates human review to prevent misinformation, especially in high-stakes environments like hospitals, where clinicians are already stretched thin.
Liability is another significant factor in health care AI usage. Questions arise over whether the standard of care now mandates the use of AI tools, and how much documentation is necessary to justify following or overriding an AI-suggested decision. These considerations are particularly pressing in highly regulated and litigation-prone industries.
As Rashbaum outlines, the due diligence and evaluation standards applied in health care provide a valuable roadmap for other sectors. Businesses across various fields should scrutinize their AI tools for data relevance, conduct rigorous pre-deployment testing, and maintain ongoing quality control. These measures can help mitigate the economic and legal risks associated with AI deployment in any industry.