
Inside Medical Liability

Fourth Quarter 2019

 

 

Notes on North America

Risk, Regulation and the Adoption of Artificial Intelligence in North America

Observers predict that, once it has been widely adopted, artificial intelligence (AI) will significantly improve healthcare and change the way patients receive care.

BY DOMENIC CROLLA AND MARTIN LAPNER

 
AI is being explored, for example, as a tool for increasing diagnostic accuracy, improving treatment planning, and forecasting outcomes of care. AI technologies in medical imaging have shown particular promise for clinical application.1

To date, however, the question of how to regulate AI has not been fully addressed.

Absent direction from policymakers and regulators, physicians and other healthcare providers are exposed to potential medical liability risk, particularly as the functional capacity and possible uses of AI in healthcare continue to evolve.2 In an uncertain regulatory environment, it is prudent for physicians to take a cautious approach to the adoption of AI technologies. That said, Canada and other countries are now exploring potential regulatory approaches. Once these are in place, it may be easier to fully embrace AI technologies in healthcare delivery.

Risks in an uncertain regulatory environment

One of the most significant regulatory issues for AI in healthcare relates to patient safety and the need to ensure that the AI algorithms used are of high quality. Indeed, despite AI’s promise for improving healthcare, current evidence of the effectiveness and reliability of its practical applications remains limited.

Other challenges with AI include the inability to explain how its reasoning processes work, otherwise known as the “black box” effect.3 Physicians expose themselves to risk when they rely on an AI-assisted diagnosis that does not include sufficient information to verify its reliability. In addition, a host of clinical judgments (even potentially erroneous ones) or biases could be built into or introduced into AI algorithms. For example, the implicit selection biases incorporated into a machine-learning application that assessed the risk of cancer in pulmonary nodules likely explain why the model performed better on the training dataset, which included patients from the U.S. National Lung Screening Trial, than it did when applied to patients at the Oxford University Hospital.4 Over the longer term, there may be risks associated with human “over-reliance” on the recommendations of AI technologies.5 In these circumstances, physicians bear the legal and ethical responsibility for the decisions they make using AI tools under their supervision.

While the regulation of AI remains in development, some medical regulatory authorities and professional associations in Canada have begun putting in place initiatives and interim guidelines for physicians’ use of AI. For example, the Federation of Medical Regulatory Authorities of Canada has established a working group on AI and the practice of medicine.6 One provincial medical regulatory authority has also suggested that physicians should apply a grading system to assess the quality of applications that incorporate AI technologies.7

Under these models, however, healthcare providers must assess the reliability of AI algorithms and critically review their results. This poses practical difficulties for individual practitioners, who are unlikely to have the ability or resources to comprehensively evaluate or understand AI.

In addition, AI raises important issues concerning data protection and privacy. Canada’s Standing Senate Committee on Social Affairs, Science and Technology expressed concerns about securing the vast amounts of patient data that AI relies on to operate.8 Issues could arise relating to which data can be collected and the scope of that data. To the extent that physicians incorporate these tools into their practices, and patient health information is shared with developers of AI applications for machine learning or other quality-control purposes, healthcare professionals could bear responsibility for the (mis)use of such data.9 Compliance with privacy regulations may present significant challenges for the wider adoption and development of AI technologies.

Regulatory developments

One possible solution for managing the risks associated with the use of AI in healthcare may involve a regulatory scheme similar to those used for drug approval or medical device licensing—requiring reasonable testing for patient safety before approval of an AI application for use or sale.10 Another solution may involve embedding controls directly into the AI algorithm, particularly if there are significant health risks to patients or medical ethics concerns. The regulatory controls could require that humans make the decisions when these situations arise. A certification scheme, similar to the existing programs that are in place in Canada for electronic medical record systems, might also ensure that AI tools comply with legal and regulatory requirements.

Some preliminary consultations on regulating AI use in healthcare are already underway in Canada. For example, Health Canada, a department of the Government of Canada, has published a draft guidance document proposing to use existing authorities under current legislation to regulate AI or Software as a Medical Device (SaMD). Health Canada proposes to classify SaMD based on risk, such that software meant to monitor, assess, or diagnose a condition that could result in immediate danger would be required to meet more stringent licensing and monitoring requirements.11 The Government of Canada has also launched a conceptual roadmap, broadly referenced as a new “Digital Charter,” which lays the foundation for modernizing the rules that govern the digital sphere in Canada, including the use of AI. One of the government’s initiatives has involved the establishment of an Advisory Council on AI.12

Regulatory approaches are also being considered in the U.S. For example, the FDA is carrying out a pilot program aimed at regulating SaMD. This program focuses on a voluntary “Excellence Appraisal” process, which would allow developers to demonstrate a commitment to safety and real-world performance monitoring.13 At the same time, the FDA is undertaking consultations on the development of a regulatory framework for AI, recognizing that the traditional approach to medical device regulation was not designed for adaptive AI technologies that continue to “learn” from acquired data and modify their performance after approval.14 The Algorithmic Accountability Act, introduced in the U.S. Senate this year, could also apply to some healthcare applications that utilize AI. If passed, the bill would require impact assessments of automated decision systems, including AI, to evaluate their “accuracy, fairness, bias, discrimination, privacy, and security.”15

While there are some promising regulatory developments underway in Canada and abroad, many crucial questions about the use and regulation of AI in healthcare remain undecided. Until those questions are resolved, current legal and regulatory systems will continue to be applied to developing AI technologies in a manner that exposes physicians to medico-legal risk. That risk of liability could, in turn, delay the development and adoption of AI in the healthcare sector until it has been appropriately addressed.

References
1. Naylor D. On the prospects for a (deep) learning health care system. JAMA. 2018;320(11):1099–1100. doi:10.1001/jama.2018.11103; Liu X, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digital Health. September 2019. doi:10.1016/S2589-7500(19)30123-2; Macrae C. Governing the safety of artificial intelligence in healthcare. BMJ Qual Saf. 2019;28(6):495–498. doi:10.1136/bmjqs-2019-009484.
2. Crolla D, Lapner M. A primer on law, risk and AI in healthcare. Healthcare and Life Sciences Law Committee Update. September 2018;3(1).
3. Challen R, Denny J, Pitt M, et al. Artificial intelligence, bias and clinical safety. BMJ Qual Saf. 2019;28:231–237. doi:10.1136/bmjqs-2018-008370.
4. Canadian Association of Radiologists. “White Paper on Artificial Intelligence in Radiology.” CARJ. 2018;69:120–135 at 125.
5. Macrae, supra note 1.
6. Federation of Medical Regulatory Authorities of Canada. “Artificial Intelligence (AI) and the Practice of Medicine Working Group.” Online: https://fmrac.ca/artificial-intelligence-ai-and-the-practice-of-medicine/
7. College of Physicians and Surgeons of British Columbia. “Prescribing apps—the challenge of choice.” 6(6), December 2018. Online: https://www.cpsbc.ca/for-physicians/college-connector/2018-V06-06/10
8. Government of Canada, Standing Senate Committee on Social Affairs, Science and Technology. “Challenge Ahead: Integrating Robotics, Artificial Intelligence and 3D Printing Technologies into Canada’s Healthcare Systems.” October 2017. Online: https://sencanada.ca/content/sen/committee/421/SOCI/Reports/RoboticsAI3DFinal_Web_e.pdf
9. Lambie D. “Canadian Personal Data Protection Legislation and Electronic Health Records: Transfers of Personal Health Information in IT Outsourcing Agreements.” Can J Law Technol. 2010:85.
10. Supra note 2.
11. Health Canada. Draft Guidance Document, “Software as a Medical Device (SaMD).” January 2019. Online: https://www.canada.ca/content/dam/hc-sc/documents/services/drugs-health-products/public-involvement-consultations/medical-devices/software-medical-device-draft-guidance/software-medical-device-draft-guidance-eng.pdf
12. Government of Canada. “Government of Canada creates Advisory Council on Artificial Intelligence.” May 14, 2019. Online: https://www.canada.ca/en/innovation-science-economic-development/news/2019/05/government-of-canada-creates-advisory-council-on-artificial-intelligence.html
13. U.S. FDA. “Digital Health Software Precertification (Pre-Cert) Program.” July 2019. Online: https://www.fda.gov/medical-devices/digital-health/digital-health-software-precertification-pre-cert-program
14. U.S. FDA. “Artificial Intelligence and Machine Learning in Software as a Medical Device.” April 2, 2019.
15. U.S. Senate, Algorithmic Accountability Act. Online: https://www.congress.gov/bill/116th-congress/senate-bill/1108/text

 

 


Domenic Crolla and Martin Lapner are Partners with Gowling WLG (Canada) LLP. The authors wish to acknowledge the assistance and commentary of Rebecca Porter (Associate) and Rebecca Bromwich (Manager, Diversity and Inclusion) in the preparation of this article.