Healthcare systems and generative AI: A two-edged sword

March 22, 2024
By Sanjay Bhakta

It’s hard to believe that OpenAI released ChatGPT less than two years ago. Already, healthcare systems are applying generative AI to free caregivers from administrative work that takes time away from patient care. For example, as part of a pilot program, emergency room physicians at four HCA Healthcare hospitals are using GenAI to create draft medical notes from conversations during patient visits. This and other applications of GenAI promise to make healthcare better. Healthcare systems should continue to adopt GenAI. But as they do, they need to be aware of GenAI’s cybersecurity risks.

And those risks can be catastrophic.

GenAI is an evolving technology. To be effective, a large language model must be trained on massive amounts of data, which in healthcare includes patient data. What happens if a bad actor hacks that data and corrupts the model? And on the other side of the coin, what happens if a bad actor uses GenAI to breach a hospital’s defenses? These are not hypothetical questions. Bad actors are already using GenAI to compromise emails, spread malware, and steal data. Everyone is vulnerable, including healthcare systems.

How bad actors can attack hospitals with GenAI
Bad actors can use GenAI to attack hospitals in many ways. Here are some examples:

Phishing attacks. Bad actors can use generative AI to launch hyper-personalized spear-phishing attacks against hospital staff, impersonating trusted figures with highly convincing emails, messages, or deepfakes to trick employees into clicking malicious links, giving up credentials, or taking actions that compromise the hospital’s systems. And lest you think “This could never happen to me,” consider the case of the employee at a financial services company who was duped by a sophisticated deepfake into transferring $25 million to a fraudster. Making this even easier for attackers, many tools have been built specifically to do harm. FraudGPT is a generative AI tool that can be used to craft malware and text for phishing emails. It is available on the dark web at a relatively cheap price, which makes it easily accessible even to moderately sophisticated cybercriminals.

Potent malware. Generative AI can make malware more potent by enabling adaptive code obfuscation, uncovering undiscovered software vulnerabilities (zero-day exploits), and intelligently targeting critical hospital systems, making detection and containment much more challenging.

Social engineering. Bad actors can use generative AI to create targeted disinformation campaigns (such as fake news and social media posts) and psychologically manipulative messages aimed at damaging a hospital’s reputation, undermining trust, or tricking staff into compromising security. Here again, this can happen to anyone. For example, a bad actor could launch a ransomware attack and, as part of it, use GenAI to spread highly convincing fake news stories claiming that the healthcare system’s patient care has been compromised, thus pressuring the organization to pay the ransom. It’s only a matter of time before this happens to a healthcare system.

Compromising GenAI. Bad actors can use GenAI to directly compromise GenAI-based clinical tools (leading to misdiagnosis and incorrect treatment) and to disrupt critical hospital systems (ransomware, equipment sabotage), potentially causing chaos and endangering patient lives. This is an especially alarming threat because it’s no secret that large language models can be vulnerable to security lapses.

Unfortunately, these are very real scenarios. Ransomware attacks are on the rise in healthcare, according to recent research from Barracuda Networks, which attributes the surge to an uptick in AI-generated phishing attacks and AI-assisted malware development. These attacks are costly: they endanger patients’ lives and hurt hospitals financially. All told, ransomware attacks affected at least 141 hospitals in 2023. By that year, the typical ransom demand had soared by 29,900 percent, reaching approximately $1.5 million.

How hospitals can protect themselves
So, what can hospitals do about all this? Plenty! Here are some measures they should take now:

Fight fire with fire. Effective cybersecurity defense will always come down to anticipating how bad actors work so you can stay a step ahead of them. This means thinking the way they think and fighting fire with fire. Tools such as Google Magika continue to emerge to help all businesses, including healthcare organizations. Magika uses AI to identify file types based on their content rather than their names, which helps hospitals catch disguised malicious attachments and more effectively stop phishing emails from planting ransomware.
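To make the idea concrete, here is a toy sketch of content-based file-type checking, the general technique behind tools like Magika. This is a stdlib-only illustration using a handful of magic-byte signatures, not Magika’s actual API or model; a real detector uses a trained deep-learning classifier and covers hundreds of formats.

```python
# Toy content-based file-type check: flag attachments whose leading
# "magic bytes" contradict their claimed extension (e.g., a Windows
# executable renamed to .pdf). Illustration only; real tools such as
# Google Magika do this far more robustly with a trained model.
MAGIC_BYTES = {
    ".pdf": b"%PDF",
    ".png": b"\x89PNG",
    ".exe": b"MZ",
    ".zip": b"PK\x03\x04",
}

def looks_disguised(filename: str, content: bytes) -> bool:
    """Return True if the file content contradicts the claimed extension."""
    ext = filename[filename.rfind("."):].lower() if "." in filename else ""
    expected = MAGIC_BYTES.get(ext)
    if expected is None:
        return False  # unknown extension: nothing to compare against
    return not content.startswith(expected)

# A Windows executable renamed to look like an invoice PDF is flagged:
print(looks_disguised("invoice.pdf", b"MZ\x90\x00"))   # True
# A genuine PDF passes:
print(looks_disguised("invoice.pdf", b"%PDF-1.7"))     # False
```

An email gateway applying even this crude check would catch the classic “executable disguised as a document” phishing payload before a user can open it.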

Implement a zero trust architecture (ZTA). Traditional security models often operate on the assumption that everything inside the organization’s network is trusted, creating a strong perimeter to keep threats out. But ZTA assumes that threats can exist both outside and inside the traditional network perimeter, thus necessitating rigorous verification and control measures. As a result, a business employing ZTA protects its systems with a far greater level of rigor. Companies such as NVIDIA offer tools to help businesses implement ZTA.
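The core of zero trust, verify every request on its own merits regardless of where it originates, can be sketched as a simple policy check. The sketch below is a minimal illustration with hypothetical roles and resource names, not any vendor’s API: every request must pass identity, device posture, and least-privilege checks, and being “inside” the network grants nothing.

```python
from dataclasses import dataclass

# Minimal zero-trust sketch: every request is verified on identity,
# device posture, and authorization. Network location is never checked,
# because it confers no trust. All names here are hypothetical.

@dataclass
class Request:
    user_token_valid: bool   # e.g., a verified, MFA-backed credential
    device_compliant: bool   # e.g., a patched, managed endpoint
    resource: str
    role: str

ROLE_PERMISSIONS = {
    "physician": {"patient_records"},
    "billing":   {"invoices"},
}

def authorize(req: Request) -> bool:
    """Allow only if identity, device, and least-privilege checks all pass."""
    return (
        req.user_token_valid
        and req.device_compliant
        and req.resource in ROLE_PERMISSIONS.get(req.role, set())
    )

# A clinician on a managed device reading a chart is allowed:
print(authorize(Request(True, True, "patient_records", "physician")))   # True
# The same credentials from an unmanaged device are denied:
print(authorize(Request(True, False, "patient_records", "physician")))  # False
```

Note the design choice: denial is the default, and access requires every check to pass, which is exactly the inversion of the perimeter model described above.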

Practice data loss prevention (DLP). DLP prevents the unauthorized use of sensitive information. A DLP system classifies sensitive data, monitors channels and devices for behavior that might indicate data is being shared or accessed inappropriately, and blocks data loss before it happens. Imagine a well-meaning but overworked doctor at a busy hospital who needs to quickly reference patient charts while on the move. Lacking immediate access to a secure workstation, they decide to temporarily save a few sensitive patient files to their personal cloud storage account for easier access later. The hospital’s DLP system detects this policy violation: data marked as confidential being transferred to unapproved storage. It immediately alerts IT security while potentially blocking the transfer. This swift response prevents an accidental data breach, allowing the hospital to re-educate the doctor about secure data handling practices and avoid the fines and reputational damage associated with data exposure.
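The doctor-and-cloud-storage scenario boils down to one rule: data classified as confidential may travel only to approved destinations, and anything else is blocked and reported. A minimal sketch of that rule follows; the classification labels and destination names are hypothetical, and a production DLP system would of course inspect content, not just labels.

```python
# Minimal DLP policy sketch: block transfers of confidential data to
# unapproved destinations and raise an alert. Labels and destination
# names are illustrative, not from any real product.
APPROVED_DESTINATIONS = {"hospital_ehr", "secure_workstation"}

def check_transfer(classification: str, destination: str) -> str:
    """Return the DLP action for a proposed data transfer."""
    if classification == "confidential" and destination not in APPROVED_DESTINATIONS:
        # A real system would also notify IT security here.
        return "block_and_alert"
    return "allow"

# The overworked doctor's upload is stopped:
print(check_transfer("confidential", "personal_cloud"))      # block_and_alert
# The same data moving to an approved system is fine:
print(check_transfer("confidential", "secure_workstation"))  # allow
```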

Have an incident response plan. Develop and regularly update an incident response plan to swiftly address breaches. A hospital’s incident response plan would likely contain elements such as detection and analysis (rapid identification of potential breaches through network monitoring and alerts); containment (isolation of affected systems to prevent further spread); eradication (removal of malware/threats and thorough system analysis of the root cause); recovery (restoration of systems and data from backups, ensuring data integrity); and post-incident review (detailed analysis of the incident to pinpoint vulnerabilities, improve security protocols, and update employee training to enhance future defenses).
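Those five phases are a strictly ordered checklist: restoring systems before the threat is eradicated, for instance, invites reinfection. A small sketch (illustrative only) that encodes the phases and refuses to let responders skip ahead:

```python
# Illustrative sketch: the incident-response phases above as an ordered
# checklist, so a step cannot be marked complete out of sequence
# (e.g., recovery before eradication).
PHASES = [
    "detection_and_analysis",
    "containment",
    "eradication",
    "recovery",
    "post_incident_review",
]

class IncidentResponse:
    def __init__(self) -> None:
        self.completed: list[str] = []

    def complete(self, phase: str) -> None:
        expected = PHASES[len(self.completed)]
        if phase != expected:
            raise ValueError(f"expected {expected!r}, got {phase!r}")
        self.completed.append(phase)

ir = IncidentResponse()
ir.complete("detection_and_analysis")
ir.complete("containment")
# ir.complete("recovery")  # would raise: eradication must come first
```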

Test your security with Purple Teaming. Purple Teaming is a collaborative approach that strengthens security posture by having offensive (red) and defensive (blue) teams work together and share findings in real time. This allows for more realistic and comprehensive breach simulations.

Do regular security audits. Conduct security audits on AI systems and their integration points with existing healthcare IT infrastructure.

Conduct staff training. Educate physicians and staff members on the cybersecurity risks associated with generative AI and how to identify potential threats.

Choose reputable AI vendors. Prioritize vendors with stringent security practices, data privacy standards, and clear incident response plans.

GenAI will continue to prove itself to be a transformative tool in healthcare. By proactively managing the cybersecurity risks associated with GenAI, healthcare organizations can maximize those benefits.

About the author: Sanjay Bhakta is vice president and head of solutions at Centific, which develops collaborative, innovative, and disruptive solutions that help clients protect their technology infrastructures and enhance business processes. His industry knowledge spans multiple areas of digital safety, including cybersecurity, fraud detection and prevention, and the adoption of technical frameworks and standards as best practices.