We can rightfully assume that everybody who uses a computer or mobile device to connect to the Internet, whether for personal or professional reasons, is aware of the impact of hackers and malware on our online experience. Even if we have not personally been hacked, we are only too aware of the defenses we have to put in place to protect our identities, our information, and our equipment: anti-malware software on our devices, spam filters for our email, firewalls to protect our networks, and so on. We have to pay for them and maintain them, and we need them to let us keep doing what we need, or enjoy, doing: interacting with social media, web surfing, communicating, shopping, studying, and more.
Yet the U.S. government just told us there is another thing to worry about. Those of us who use implanted electronic medical devices, like pacemakers, neuro-stimulators, or insulin pumps, were just told that these devices, just like our computers or smartphones, may be at risk of being attacked and broken into. Although today this is still a hypothetical, futuristic scenario, so far demonstrated only by security researchers, it appeared serious enough for the Federal government to fund an extensive investigation and issue a 68-page report titled “Medical Devices - FDA Should Expand Its Consideration of Information Security for Certain Types of Devices” (United States Government Accountability Office (GAO); Report to Congressional Requesters; August 2012; http://www.gao.gov/assets/650/647767.pdf).
As early as 2008, security researchers demonstrated that implantable medical devices can be reprogrammed with some basic electronic equipment, brain power, and persistence. Since those early experiments, further published research has shown that the wireless connections of a wide range of these devices, the same ones doctors use to adjust device settings in the first place, can be hacked and reprogrammed to potentially do bad things, such as delivering a harmful dose of insulin to a diabetic.
The GAO report reviews intentional and unintentional ways a device can be made to malfunction: intentional through hacking, unintentional through external interference, e.g., a store security system. The report concludes that the current regulatory framework in place to assure quality and safety of medical devices reduces the risk from unintentional events, for example by stipulating compliance with certain EMI (Electromagnetic Interference) standards and testing. However, according to the GAO, the current regulatory framework does not sufficiently address the risk of an intentional attack, neither in the premarket review process (including approval, verification, and validation) nor in post-market surveillance (e.g., incident reporting and recall management).
We should not underestimate the inherent complexities; this is not a trivial problem to solve. The current FDA regulatory framework is there for a reason: to assure the safety and effectiveness of medical devices sold by manufacturers, whether they are used in our homes to monitor basic vital signs, take images of us at the local imaging center, support diagnosis or treatment at a hospital or doctor’s office, or are implanted to support or substitute for a body function.
From an engineering perspective, many of these devices are severely resource-constrained. They need to be compact, reliable, and long-lived. Adding, for example, encrypted data traffic to a pacemaker requires more memory and additional CPU cycles, and risks shortening the life of a battery that can only be replaced surgically.
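To make the tradeoff concrete, here is a back-of-envelope sketch of how an added per-byte encryption cost could eat into battery life. Every figure below is an illustrative assumption for the sake of the arithmetic, not a measured value for any real device.

```python
# Illustrative estimate of the battery cost of encrypting implant
# telemetry. ALL numbers are assumptions, not real device data.

BATTERY_CAPACITY_J = 7000.0       # assumed usable battery energy (joules)
BASELINE_DRAW_W = 20e-6           # assumed average baseline draw: 20 microwatts
ENCRYPT_J_PER_BYTE = 5e-6         # assumed extra energy to encrypt one byte
TELEMETRY_BYTES_PER_DAY = 50_000  # assumed daily telemetry volume

def battery_life_years(extra_joules_per_day: float = 0.0) -> float:
    """Battery life given the baseline draw plus an extra daily energy cost."""
    baseline_joules_per_day = BASELINE_DRAW_W * 86_400  # seconds per day
    daily_total = baseline_joules_per_day + extra_joules_per_day
    return BATTERY_CAPACITY_J / daily_total / 365

plain = battery_life_years()
encrypted = battery_life_years(ENCRYPT_J_PER_BYTE * TELEMETRY_BYTES_PER_DAY)
print(f"without encryption: {plain:.2f} years")
print(f"with encryption:    {encrypted:.2f} years")
print(f"battery life lost:  {plain - encrypted:.2f} years")
```

Whether the loss is negligible or measured in years depends entirely on those assumed constants, which is exactly why the engineering tradeoff has to be evaluated per device.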
Further, any functionality added to the device complicates its design and testing, and therefore increases the risk of a fault. For example, introducing digital keys or certificates to secure communication with the device and authenticate the user does add a layer of security, but it also requires a highly reliable certificate management system to prevent accidental lock-outs caused by authentication system failures.
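As a purely hypothetical illustration, the sketch below shows a minimal challenge-response check with an explicit emergency-override path. The key handling and override policy are assumptions invented for this example, not any real device protocol, but they show why the authentication layer itself becomes a safety-critical component: without a fail-safe path, any failure in it is a lock-out.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: a clinician's programmer authenticates to the
# implant via HMAC challenge-response over a shared secret.
DEVICE_KEY = secrets.token_bytes(32)  # secret provisioned at implant time

def make_challenge() -> bytes:
    """Device issues a fresh random challenge per session."""
    return secrets.token_bytes(16)

def sign_challenge(key: bytes, challenge: bytes) -> bytes:
    """Programmer proves knowledge of the key by signing the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def authorize(response: bytes, challenge: bytes, emergency_override: bool) -> bool:
    """Grant programming access on a valid response -- or when a clinician
    asserts an emergency override, so an authentication failure can never
    block life-saving reprogramming. A real design would rate-limit, log,
    and audit this path."""
    if emergency_override:
        return True
    expected = sign_challenge(DEVICE_KEY, challenge)
    return hmac.compare_digest(response, expected)  # constant-time compare

challenge = make_challenge()
response = sign_challenge(DEVICE_KEY, challenge)
print("valid response accepted:", authorize(response, challenge, False))
print("bad response rejected:  ", not authorize(b"\x00" * 32, challenge, False))
print("emergency path open:    ", authorize(b"\x00" * 32, challenge, True))
```

Note the tension the article describes: the emergency path that keeps the device safe is also a deliberate hole in its security, and deciding how to guard that hole is the hard part.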
The engineers designing these highly sophisticated products need to thoroughly evaluate technology and partner choices to assure a reliable solution, and to weigh each additional security component's own reliability against its potential unintended negative consequences.
However, there is another set of risks to consider. As discussed above, the GAO report analyzes the security risks of these compact, resource-constrained implantable devices. This is, of course, critical given the device’s impact on patient health and, potentially, life or death. But the medical device infrastructure of healthcare systems and hospitals is far more complex. It includes a wide range of medical devices from simple to complex, from small to large, built on proprietary platforms or based on commercial off-the-shelf (COTS) software. The latter introduces its own set of risks through a “PC under the hood” design approach: the commercial operating system or other software can easily become an entry point for malware, whether it targets the specific device or generally exploits a vulnerability of the chosen platform, with the device then becoming “collateral damage”.
Unlike the scenario discussed in the GAO report, which would require a dedicated and targeted effort, we hear from many of our hospital customers that their x-ray machines, medication cabinets, and diagnostic equipment are being taken down by generic malware that simply happens to match the vulnerability profile of a common operating system or other off-the-shelf software.
The tightly regulated release process used to assure device safety and reliability can get in the way of manufacturers releasing security patches rapidly and frequently, and it also prevents hospitals from installing third-party security software. I am not suggesting that these regulations be abandoned in favor of more frequent and easier software releases; that could be detrimental to product safety, and after all, there is a reason these regulations exist in the first place. As I once learned from a security researcher who himself had an electronic implant: “I would much rather have an insecure safe device than a secure unsafe device”, which again captures the complexity of the tradeoff between security measures and a safe design.
It is a fine line, and there is no simple path forward. Short of abandoning existing regulations (and compromising device safety), they should instead be enhanced to reflect the fact that today’s medical devices are largely software based, communicate via proprietary or standard interfaces and networks, need to be protected from intentional and unintentional security threats, and could become the target of people with bad intentions.
Many manufacturers are starting to design their systems with these considerations in mind. They harden the operating system, implement security technologies like intrusion prevention, provide security guidance to the end user, and architect their devices so that software patches can be released without retesting the entire system. There is still a long way to go, though: too many legacy systems are out there (and will remain out there, given the typically long useful product life), and too many manufacturers treat security as “not my problem”.
On the end-user (hospital) side, it is critical to include the medical device infrastructure in the security risk analysis process (as provided, for example, through IEC 80001), and also to call on manufacturers to disclose the security status of their systems, such as the use (or lack) of anti-malware, encryption, and other security technologies. Security companies like Symantec™ can advise hospitals and manufacturers on how to address these issues reliably and effectively.
Lastly, I am calling on all stakeholders to seriously engage with this issue: read the GAO report, understand its implications, and make sure that, whatever your role (regulator, manufacturer, clinical user, or patient), you help to solve this issue before a major catastrophe strikes.
Several industry organizations, like the IHE PCD working group or MDISS, have started to take on the challenge of building a framework around the security requirements of our medical device infrastructure. Input from all constituents is required for these volunteer organizations to do their work. This issue cannot be owned by just a few; it needs to be owned and solved by all involved.
Otherwise, one day you may have a hacker in you…