My life depends on the functioning of a medical device: a pacemaker that generates each and every beat of my heart. I know how it feels to have my body controlled by a machine that is not working correctly, and this is why I encourage fellow security researchers to delve into these medical devices and find ways to make them more secure.
Four years ago, I woke up lying on the floor, but I had no idea how I’d gotten there or for how long I’d been out. Stunned, I went to the emergency room at the local hospital. It turned out I had fallen because my heart had taken a break---long enough to cause unconsciousness. Luckily, it started beating again by itself, but the resulting pulse was very low and irregular. To keep my pulse up and stop my heart from taking pauses, I needed to get a medical device implanted in my chest that would monitor each heartbeat and send a small electrical signal directly to my heart via an electrode to keep it beating.
I am a security researcher, and at the time I got this medical implant my day job was protecting the national critical infrastructure in Norway from cyber-attacks. Getting the pacemaker was an emergency procedure. I needed the device to stay alive, so there really was no option but to get the implant. There was, however, time to ask questions. Unlike most patients---and to the surprise of my doctors---I began asking about the potential security vulnerabilities in the software running on the pacemaker and the possibilities of hacking this life-critical device. The answers were unsatisfying, and they were beside the point. I needed the pacemaker, and so I got it.
After the surgery, I started to search for more information. I found and studied the technical manual for my pacemaker. I was quite surprised when I discovered that it has built-in functionality for wireless communication. It has a near-field interface to facilitate adjusting the configuration settings and another wireless interface for remote monitoring purposes. This means that the pacemaker can connect to a server at the vendor via an access point to transmit my device logs and patient information. I realized that my heart was now wired into the medical Internet of Things, and this was done without informing me or asking for my consent. I was alarmed. I recognized right away that this remote monitoring capability is very beneficial to a lot of patients who require frequent check-ups, but with connectivity comes vulnerability. As a security researcher I see this as an increased attack surface.
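To make that attack surface concrete, here is a small schematic sketch in Python of the remote-monitoring data path as I understand it from the manual: device logs and patient information travel from the pacemaker over a wireless link to an access point, and from there over the Internet to a server at the vendor. Every name, field, and hop in this sketch is my own illustration, not the vendor's actual protocol or data format.

    # Schematic sketch only: my own illustration of the data path described
    # in the technical manual, not the vendor's protocol or data format.
    from dataclasses import dataclass, field

    @dataclass
    class DeviceReport:
        patient_id: str        # patient information stored on the device
        battery_status: str
        event_log: list = field(default_factory=list)  # e.g. recorded episodes

    def remote_monitoring_path(report: DeviceReport) -> list:
        """List each hop the report takes; every hop is added attack surface."""
        return [
            f"pacemaker --(wireless)--> access point: {len(report.event_log)} log entries",
            "access point --(Internet)--> vendor server",
        ]

    if __name__ == "__main__":
        report = DeviceReport("patient-001", "OK", ["episode-2016-01-01"])
        for hop in remote_monitoring_path(report):
            print(hop)

Each hop in that chain is a place where the data, and potentially the device itself, can be attacked.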
After the pacemaker was implanted under my skin, it needed to be configured. It has a sensor system that needs fine-tuning so that it works seamlessly with my body, creating a heart rhythm that puts enough oxygen into my blood. When it’s working correctly, the pacemaker should recognize when I go for a run, for instance, and raise my heart rate accordingly.
Since I'm younger than most pacemaker patients, the default configuration settings were not suitable for me. It took a few months of trial-and-error tweaking before the doctors could get the tuning right, and this was complicated by a software bug in the programming device that they used to adjust the settings of the pacemaker. The bug caused the actual settings of my device to differ from those displayed on the screen that the pacemaker technician at the hospital was seeing.
The consequences greatly affected my well-being. If I tried to run after the bus or climb stairs, I would suddenly get out of breath. The pacemaker detected that my pulse had exceeded the upper heart rate limit, which had been erroneously configured to 160 beats per minute. When I reached this heart rate, a safety mechanism would suddenly cut my pulse in half, to 80 beats per minute. This was a very uncomfortable feeling. All of a sudden my body could not get enough oxygen. I compare it to the feeling you get running uphill as fast as you can until you reach the point of exhaustion, except it happened instantaneously, without any warning. Like hitting a wall.
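To show what that misconfiguration meant in practice, here is a minimal, purely illustrative sketch in Python of an upper-rate safety mechanism of the kind I just described. The numbers match my case, but the function, its name, and the simple halving rule are my own simplification for illustration, not the actual firmware logic of any pacemaker.

    # Purely illustrative: a simplified model of the behavior I experienced,
    # not real pacemaker firmware.

    UPPER_RATE_LIMIT_BPM = 160  # the erroneously low limit in my case

    def paced_rate(sensed_rate_bpm, upper_limit_bpm=UPPER_RATE_LIMIT_BPM):
        """Return the rate the device paces at, given the sensed heart rate."""
        if sensed_rate_bpm >= upper_limit_bpm:
            # Safety mechanism kicks in: suddenly pace at half the limit.
            return upper_limit_bpm // 2
        return sensed_rate_bpm

    if __name__ == "__main__":
        for rate in (120, 150, 159, 160, 170):
            print(rate, "->", paced_rate(rate))
        # 159 -> 159, but 160 -> 80: the sudden drop, like hitting a wall.

With the limit set at 160, a heart rate of 159 is fine, but one beat per minute more and the paced rate drops to 80: exactly the wall I kept running into.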
Part of the problem with doing security research in this field is that the medical devices appear as black boxes. How can I trust the machine inside my body when it is running on proprietary code and there is no transparency?
My fellow patient advocates Karen Sandler, Jay Radcliffe, and Hugo Campos have been fighting for the right to access the proprietary software and the data that their devices collect, but the medical device vendors have not given it to them. A significant battle was won, however, when the DMCA exemptions for medical device security research were granted in October of last year. I really hope that this paves the way for more research.
It is already established that pacemakers can be vulnerable to hacking. In 2008, a group of researchers led by Dr. Kevin Fu of the Archimedes Center for Medical Device Security at the University of Michigan published an article showing that it is possible to extract sensitive personal information from a pacemaker, or even to threaten the patient's life by turning off the device or changing its pacing behavior. Fortunately, such an attack required close proximity to the patient and could not be carried out remotely.
A more threatening attack scenario was developed by the hacker Barnaby Jack, who was planning to give a lecture at the Black Hat conference in 2013 on the possibility of remotely controlling pacemakers via wireless communications from a distance of 15 meters. Sadly, he died just days before the conference, and his research has not been pursued.
Hacking of pacemakers via their Internet connectivity, as you may have seen in popular TV shows, has not yet been proven possible. However, no independent research looking closely into this has been published, so as a patient I am expected to trust the vendors when they claim to have strengthened the security of their devices so that they are no longer vulnerable to the published security concerns. That's not enough for me.
As a security researcher, I want to figure out for myself how things actually work, which is why I started a hacking project together with my friend Éireann Leverett to look at the security of the wireless interfaces of my pacemaker. Since I started to promote this research I have received several offers of help, and two more security researchers, Gunnar Alendal and Tony Naggs, have joined the team, working on the project in their spare time. I have also received funding from my employer, SINTEF, to carry out this research as part of my day job. I am not tinkering with my own implanted device in this project---of course. Instead, we have purchased devices to hack on eBay and have also received donations of used pacemakers.
I encourage more security research of medical implants simply because I do not believe that proprietary “security through obscurity” will make the devices safer for patients.
The medical device industry got a wake-up call last year when researcher Billy Rios demonstrated that drug infusion pumps had vulnerabilities allowing unauthorized firmware updates that could give patients lethal medication dosages. This led to the FDA (the US Food and Drug Administration) issuing the first-ever recall of a medical device due to cyber security vulnerabilities. It was also a very rare example of an FDA recall issued without any patients having been killed by the vulnerability. Usually, drugs and medical equipment are not withdrawn from the market without evidence of harm.
The decision to implant a medical device is also a risky one. In my case the benefit of having the device clearly outweighs the risk, since I would probably not be alive without the pacemaker. As far as I know, no patient has been killed by a hacked pacemaker, but patients have been killed by medical device malfunctions, configuration errors, and software bugs. This means that security research in the form of pre-emptive hacking, followed by coordinated vulnerability disclosure and vendor fixes, can help save human lives.