
Healthcare systems and generative AI: A two-edged sword

March 22, 2024
By Sanjay Bhakta

It’s hard to believe that OpenAI released ChatGPT less than two years ago. Already, healthcare systems are applying generative AI to free up caregivers from administrative work that takes time away from patient care. For example, as part of a pilot program, emergency room physicians at four HCA Healthcare hospitals are using GenAI to create draft medical notes from conversations during patient visits. This and other applications of GenAI promise to make healthcare better. Healthcare systems should continue to adopt GenAI. But as they do, they need to be aware of GenAI’s cybersecurity risks.

And those risks can be catastrophic.

GenAI is an evolving technology. An effective GenAI model is powered by a large language model trained on massive amounts of data, and in healthcare that data includes patient records. What happens if a bad actor hacks that data and corrupts the model? On the other side of the coin, what happens if a bad actor uses GenAI to commit a cyber breach against a hospital? This is not a hypothetical question. Bad actors are already using GenAI to compromise emails, spread malware, and steal data. Everyone is vulnerable, including healthcare systems.
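To make the data-corruption risk concrete, the toy sketch below shows how flipping even a modest share of training labels degrades a model that is then evaluated on clean data. The dataset, model, and poison rates are illustrative assumptions for the demonstration, not a depiction of any hospital system.

```python
# Toy illustration of training-data poisoning: flipping a small share of
# labels in the training set degrades a model evaluated on clean data.
# Dataset, model, and poison rates are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def accuracy_after_poisoning(poison_rate: float) -> float:
    """Train on a copy of y_tr with `poison_rate` of labels flipped."""
    rng = np.random.default_rng(0)
    y_bad = y_tr.copy()
    n_flip = int(poison_rate * len(y_bad))
    idx = rng.choice(len(y_bad), size=n_flip, replace=False)
    y_bad[idx] = 1 - y_bad[idx]           # flip the binary labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)
    return model.score(X_te, y_te)        # evaluate on clean test data

for rate in (0.0, 0.1, 0.3):
    print(f"poison rate {rate:.0%}: clean-test accuracy "
          f"{accuracy_after_poisoning(rate):.3f}")
```

Even this crude attack, with no knowledge of the model, drags accuracy down; a targeted poisoning of clinical training data could be far more damaging and far harder to spot.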

How bad actors can attack hospitals with GenAI
Bad actors can use GenAI to attack hospitals in many ways. Here are some examples:

Phishing attacks. Bad actors can use generative AI to launch hyper-personalized spear-phishing attacks against hospital staff, impersonating trusted figures with highly convincing emails, messages, or deepfakes that trick employees into clicking malicious links, giving up credentials, or taking actions that compromise the hospital’s systems. And lest you think “This could never happen to me,” consider the case of the employee at a financial services company who was duped by a sophisticated deepfake into transferring $25 million to a fraudster. Making this even easier for attackers, tools have been developed specifically for committing harm: FraudGPT, a generative AI tool for crafting malware and phishing email text, is available on the dark web for a relatively low price, putting it within reach of even moderately sophisticated cybercriminals.
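The defensive side is not hopeless, though: simple heuristics still catch a staple of spear-phishing, the lookalike sender domain. The sketch below is a minimal illustration assuming a hypothetical allowlist of trusted domains; real mail gateways would layer this with SPF, DKIM, and DMARC verification.

```python
# Minimal sketch of a lookalike-domain check for inbound email, assuming a
# hypothetical allowlist of trusted domains. The similarity threshold is an
# illustrative assumption, not a production-tuned value.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"examplehospital.org", "examplehealth.com"}  # assumed list

def is_lookalike(sender_domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that are similar to, but not exactly, a trusted domain."""
    sender_domain = sender_domain.lower()
    if sender_domain in TRUSTED_DOMAINS:
        return False                      # exact match: legitimate sender
    return any(
        SequenceMatcher(None, sender_domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(is_lookalike("examplehospital.org"))   # False: exact trusted match
print(is_lookalike("examp1ehospital.org"))   # True: one-character swap
print(is_lookalike("unrelated-vendor.net"))  # False: not similar enough
```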

Potent malware. Generative AI can make malware more potent: it can adaptively obfuscate code, hunt for undiscovered software vulnerabilities (zero-day exploits), and intelligently target critical hospital systems, making detection and containment far more challenging.
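One way defenders catch malware that mutates past signature scanners is a statistical heuristic: packed or encrypted payloads tend to look like near-random bytes. The sketch below computes Shannon entropy over a file’s contents; the threshold is an assumption for illustration, not a vetted cutoff.

```python
# Minimal sketch of an entropy heuristic for spotting packed or obfuscated
# payloads. Heavy obfuscation often leaves a statistical fingerprint:
# near-random bytes. The threshold is an illustrative assumption.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; 8.0 means indistinguishable from random."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(data).values())

def looks_packed(data: bytes, threshold: float = 7.2) -> bool:
    return shannon_entropy(data) >= threshold

print(looks_packed(b"MZ" + b"\x00" * 1000))  # False: low-entropy bytes
print(looks_packed(os.urandom(4096)))        # True: random bytes score near 8
```

Heuristics like this are no silver bullet, but they illustrate why hospital security teams need behavioral and statistical defenses, not just signature databases, as AI-assisted malware spreads.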
