Computer hackers soon could cross into a new frontier, one that would give them power to kill through insulin pumps, defibrillators, pacemakers and other personal medical devices.
With the availability of more technology for personalized health care, millions of patients are gaining access to tools connected to online networks, but the devices are vulnerable to hackers, cybersecurity experts warn. Hackers potentially could gain control of the signals the devices send and receive, perhaps to send manufacturers’ stock prices plunging, demand ransom from an individual, hospital or medical supplier, or physically harm patients.
All three U.S. manufacturers of implanted pacemakers and defibrillators – Medtronic, Boston Scientific and St. Jude Medical (now Abbott) – have had cybersecurity warnings issued for their devices.
A security issue with St. Jude’s implantable cardiac devices eventually led the company to voluntarily recall hundreds of thousands of them, following pressure from the Food and Drug Administration, according to several media reports.
Roman Lysecky, an associate professor at the University of Arizona who was among panelists at a recent Association of Health Care Journalists conference in Phoenix, said medical device hackers are a looming threat.
“My biggest concern is that we won’t see a dramatic change until we have a cataclysmic event, a device that’s actually attacked that would kill a patient,” Lysecky said. “I fear that that might be what ends up causing a real change in the community. I’d like to see it not come to that.”
Dr. Jeff Tully, an anesthesiologist at the University of California-Davis Medical Center, said the threat of device hackers represents a different level of danger – instead of just being able to steal patients’ medical records, hackers could potentially cause direct physical harm.
He said hackers could gain control of patients’ automated insulin pumps, potentially causing devices to inject lethal doses. Or, a cybercriminal might hack into pacemakers used to regulate heartbeats, changing the settings to send patients into cardiac arrest.
Tully said that as doctors depend on more and more medical devices, the face of health care is changing. To illustrate his point, he showed an image of a patient on a hospital bed hooked up to dozens of devices.
“When we are in medical school, we kind of have this platonic ideal of what it means to be a doctor,” Tully said. “We talk about everything important in medicine happens at the bedside. I do believe that’s where the human moments take place and where you can make the biggest impacts, but the bedside today looks very, very different.”
As of now, Tully said, there have been no confirmed, large-scale medical device hacks. But emerging security vulnerabilities have prompted medical software security companies such as Zingbox, where May Wang works as the chief technology officer, to help care facilities improve cybersecurity.
One major issue, Wang said, is that many hospitals don’t have an accurate inventory of the network-connected medical devices they use.
When Zingbox visits large hospitals, its experts often find two or three times as many devices as those counted in the hospital inventory, Wang said.
“Inventory is a big headache for lots of hospitals, especially dynamic, real-time inventory,” Wang said. “And not only do they need to know … what devices you have, but it’s very important for you to know what devices are connected to your network at any given time.”
If a hospital doesn’t know which of its devices are connected to the network and thus vulnerable, a critical security risk affecting only a small number of devices could force it to shut down all of them, even those that aren’t affected, Wang said. For a patient connected to a device in the operating room, that could be deadly.
Another problem with addressing these vulnerabilities, Lysecky said, is that, unlike an iPhone glitch that disappears with the newest update, cybersecurity issues in implantable devices can’t currently be fixed quite so easily.
“If you make a change to the software, you still have to make sure your software is operating correctly,” Lysecky said. “And so it’s a much longer time frame that happens between when we discover a vulnerability and when it’s actually been corrected. And that correction might actually be completely replacing the device.”
After the FDA released a statement in January 2017 warning of security vulnerabilities in implantable cardiac devices manufactured by St. Jude Medical, the company recommended that patients visit their providers for a firmware update, which takes about three minutes to complete.
Although fixing this particular issue does not require doctors to remove and replace implantable pacemakers, Lysecky said that, theoretically, a more critical security vulnerability for a device could require surgery to resolve. That would present many obstacles for doctors, manufacturers and patients.
Increasing public awareness of the dangers medical device hackers pose is key, Lysecky said, but patients should still be able to trust life-saving devices in general.
The most important thing for patients, he said, is knowing whether their devices are connected to a network.
Lysecky said patients generally trust their physicians, but “they should be aware that there are potential concerns or security concerns for any connected medical device.”
Tully stressed that manufacturers need to look closely at device vulnerabilities.
“With great connectivity comes great responsibility,” Tully said.