Six keys to safely bringing AI to biomeds and the HTM department

April 26, 2023
by John R. Fischer, Senior Reporter
In the last few years, artificial intelligence has attracted rapidly growing interest in the healthcare sphere as it has become increasingly integrated into clinical and administrative operations, from prioritizing and interpreting radiology reports to providing guidance to nonspecialists using point-of-care ultrasound systems.

Despite that undeniable upside, there are valid concerns about AI's safe and ethical use. Critics fear that carelessness, and a lack of understanding of how the technology functions and makes decisions, will result in inaccurate diagnoses, lead to medical errors, or expose health information to malicious cyberattacks.

Even algorithm engineers who develop instructions for understanding AI-derived data may be unsure of how these solutions process information to produce these results, thereby creating a so-called “black box conundrum.” Additionally, AI-based technologies can make mistakes that humans would never make.

In its recently published report, “Artificial Intelligence, the Trust Issue,” the first in a series of Medical Device Safety in Focus reports, the Association for the Advancement of Medical Instrumentation (AAMI) delves into strategies for addressing these challenges.

Here are six of its recommendations:

Broader cooperation is essential
Tech companies, data scientists and algorithm engineers generally are not familiar with the quality, safety, and effectiveness standards that hospitals and other healthcare organizations follow, let alone their clinical environments, workflows, and practices.

It is therefore essential to include these professionals in meetings and events where they can communicate with healthcare stakeholders, including:

- standards and regulatory communities;
- patients and patient advocacy groups;
- clinicians and professional associations;
- experts on social determinants of care and underrepresented patient populations;
- researchers;
- private and public insurers;
- industry associations;
- health technology management professionals and risk managers;
- experts on the social, moral, legal, and ethical issues and policy implications of AI; and
- cybersecurity experts.

“The more manufacturers, AI developers, and organizations that have experience with AI products communicate the risks and benefits of AI technology, the more informed additional parties can be to make informed decisions,” said Mike Powers, system director of healthcare technology management at Intermountain Healthcare, and a member of AAMI’s Artificial Intelligence Committee, in the report.

Work with HTM professionals
Providers should consult with and learn from HTM professionals about what their technologies actually do and the information they transmit. They should ask how secure these data exchanges are, and if they include any patient-identifiable information. They should also understand the purpose and function of data storage and backup locations, such as the cloud, and the risks and benefits associated with using them in their practices.

Together, clinicians and HTM professionals should define objectives for using each AI-based system; evaluate datasets for systems they are considering buying to ensure the data the technology was trained on aligns with the healthcare system’s actual patient population demographics and care needs; and plan for worst-case scenarios, such as alerting clinicians to errors made by a system or when results from the system are interpreted wrong by clinicians.
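The dataset-evaluation step above can be made concrete. As a rough illustration only, here is a minimal sketch of comparing a vendor's training-data demographics against a health system's own patient population; the function name, category labels, and 10% threshold are all hypothetical choices, not anything prescribed by the AAMI report:

```python
from collections import Counter

def demographic_gap(training_labels, population_labels, threshold=0.10):
    """Flag demographic categories whose share of the training data differs
    from the local patient population by more than `threshold`.
    Inputs are lists of category labels (e.g., age bands, sex, ethnicity)."""
    def proportions(labels):
        counts = Counter(labels)
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}

    train_p = proportions(training_labels)
    pop_p = proportions(population_labels)
    flagged = {}
    for category in sorted(set(train_p) | set(pop_p)):
        gap = abs(train_p.get(category, 0.0) - pop_p.get(category, 0.0))
        if gap > threshold:
            flagged[category] = round(gap, 3)
    return flagged

# Hypothetical example: training data skews heavily toward one age band
training = ["18-40"] * 80 + ["65+"] * 20
population = ["18-40"] * 50 + ["65+"] * 50
print(demographic_gap(training, population))  # {'18-40': 0.3, '65+': 0.3}
```

A real evaluation would of course use richer statistics and clinical variables, but even a simple proportion check like this can surface the kind of training/population mismatch the report warns about.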

At the 2022 AAMI eXchange, ECRI’s Francisco Rodríguez-Campos said that providers should form multidisciplinary committees that include HTM professionals who can provide oversight and help develop policies for AI applications. They should be able to answer questions on how AI will affect the roles of employees, employee training, compliance requirements, and addressing cyberattacks.

He also recommended incorporating vendor assessments and risk management into the procurement process, according to the AAMI report.

Processing data correctly
To protect data quality and patient privacy from cyber threats, AI manufacturers need to question how they acquire, clean, label and enrich data. This includes managing risks across the full life cycle of the equipment, and consulting providers more closely on what information is most likely to be targeted.

“We can’t just grab a bunch of medical records and feed them into a system. Just like any other type of raw material, there’s some processing we have to do before it goes into our product,” wrote Pat Baird, senior regulatory specialist at Philips and co-chair of AAMI’s Artificial Intelligence Committee.

It is also important to ensure data meets quality standards, as biased information can perpetuate unfair practices and make AI solutions less effective for patient populations they were not trained to address.

Fighting AI-based cyberattacks with AI
Cyberattackers are increasingly using AI to breach hospital and healthcare networks, with technology so sophisticated that it can blend into the background and go unseen as it retrieves data or spreads viruses to corrupt systems, endangering patient safety and compromising data integrity, quality, and trust.

But providers and other healthcare stakeholders can use defensive AI to identify and respond to attacks. This includes infrastructural defenses, such as tagging each image with a signature when it is created; algorithmic defenses that prevent attacks on AI systems, healthcare systems, or both, including their applications, information, and networks; advanced malware protection technologies; biomedical security toolkits; and user behavioral analysis tools to detect anomalous activity on AI systems.
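To illustrate the image-signature defense mentioned above: one plausible interpretation is that the imaging device computes a cryptographic tag over the pixel data at creation time, and downstream systems verify it before trusting the image. The sketch below uses HMAC-SHA256 for this; the function names and key handling are hypothetical, not the report's specification:

```python
import hashlib
import hmac
import os

# Hypothetical sketch: in practice the key would come from a
# key-management system, not be generated at import time.
SECRET_KEY = os.urandom(32)

def sign_image(pixel_bytes: bytes, key: bytes = SECRET_KEY) -> str:
    """Return an HMAC-SHA256 tag computed over the raw pixel data."""
    return hmac.new(key, pixel_bytes, hashlib.sha256).hexdigest()

def verify_image(pixel_bytes: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Recompute the tag and compare in constant time; any tampering
    with the pixels (e.g., an adversarial perturbation) changes the tag."""
    return hmac.compare_digest(sign_image(pixel_bytes, key), tag)

image = b"\x00\x01\x02"        # stand-in for raw scan data
tag = sign_image(image)
assert verify_image(image, tag)                 # untouched image verifies
assert not verify_image(image + b"\xff", tag)   # tampered image fails
```

A signature like this detects tampering in transit or at rest; it does not, on its own, defend against attacks that occur before the image is signed.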

These innovations are more critical than ever, with the increase of interconnected devices and accelerated adoption of remote technologies, such as telehealth and telemedicine, partially fueled by the COVID-19 pandemic. According to Richard Staynings, chief security strategist at IoT cybersecurity and intelligence company Cylera and adjunct professor of cybersecurity and health informatics at the University of Denver, digital innovations have unfortunately outpaced security.

“Attacks against healthcare have gone up by 600% since the beginning of COVID. This is a massive, massive increase,” he said.

Regulating and standardizing use
Standards, regulations, and guidance on the safe and effective use of AI are still not where they should be, but are starting to pick up the pace: inefficient regulatory barriers have been eliminated over the last decade; legislation like the Medical Device User Fee Amendments is in effect and expected to give agencies such as the FDA more resources for regulating device use; and more white papers and draft and final guidance documents on AI-related topics are being published.

Nevertheless, healthcare professionals still need to craft and publish more regulations to address specific concerns, including guidance on change control in AI and machine learning; postmarket surveillance activities; and the ethical use of AI to protect individual rights and privacy and to prevent discrimination and bias.

Standards, which can be formed on the basis of white papers and partnerships between different committees, should focus on good machine learning practices; AI in operations; and AI at the point of care, three topics that are gaining considerable interest in the healthcare sector. They should also address vulnerabilities and risks, including data management, bias, overtrust, adaptive systems, and data storage, security, and privacy.

Training clinicians and staff
More training sessions, peer-reviewed articles, and updates are needed to make clinicians and HTM professionals more comfortable with, and capable of, using AI solutions safely and effectively. Such training should include lessons on how these technologies are developed, the risks they pose, and how they benefit patients and practices.

This will allow providers to attract and select professionals with quality skills and experience for handling AI systems, including in cybersecurity, operations and workflow, repair and maintenance, and clinical care.

While still in its infancy, AI holds enormous potential to personalize care and make it more holistic, allowing patients not only to live longer but also to have better care experiences, as clinicians gain more time to spend with them. Addressing these problems now not only builds providers' and patients' trust in AI and machine learning technologies, but is also crucial to preserving the trust they have in one another. The collaborations and tactics in the report are meant to be a first step in this direction.