The New York Times is running a story entitled “A Heart Device Is Found Vulnerable to Hacker Attacks.”

A team of computer security researchers plans to report Wednesday that it had been able to gain wireless access to a combination heart defibrillator and pacemaker.
They were able to reprogram it to shut down and to deliver jolts of electricity that would potentially be fatal — if the device had been in a person.

Wireless access to implanted medical devices is something of great value to doctors and patients alike; monitoring and adjusting the device can be performed remotely rather than requiring surgery to get access to the device itself. This attack raises a number of questions:

  • What happens to the folks who already have these Medtronic devices implanted, now that the devices have been shown to be vulnerable?
  • Are they going to get new implants?
  • Who will pay for the surgery required to replace them?
  • What happens if someone dies during the surgery to replace a vulnerable implant? Or what if the device is not replaced and is later maliciously hacked?

If I were a medical device manufacturer, I wouldn’t want to have to answer any of these questions. Fixing a vulnerable medical implant isn’t the same as patching your application or operating system—instead of “Patch Tuesday” we are talking about scalpels, anesthesia, and risk of death or disability.

This example makes it extremely clear that companies responsible for medical devices and implants must consider security throughout their product design process. The potential risk of a “hacked” medical implant is the most serious one we know of: death. Wireless implants must use strong authentication and strong encryption to prevent a catastrophe.
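To make "strong authentication" concrete, here is a minimal sketch (in Python, using only the standard library) of what authenticating a wireless command with a shared-key MAC and an anti-replay counter could look like. This is purely illustrative — it is not the actual protocol of any real device, and every name in it (`sign_command`, `verify_command`, the key provisioning) is hypothetical. A production design would also need encryption, secure key provisioning, and careful fail-safe behavior.

```python
import hmac
import hashlib
import struct

def sign_command(key: bytes, command: bytes, counter: int) -> bytes:
    # Bind a monotonically increasing counter to the command so that an
    # attacker who records a packet over the air cannot replay it later.
    msg = struct.pack(">Q", counter) + command
    tag = hmac.new(key, msg, hashlib.sha256).digest()
    return msg + tag

def verify_command(key: bytes, packet: bytes, last_counter: int):
    # Returns (counter, command) on success, or None if the packet is
    # forged, corrupted, or a replay of an earlier command.
    msg, tag = packet[:-32], packet[-32:]
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return None  # MAC mismatch: reject forged or corrupted packets
    counter = struct.unpack(">Q", msg[:8])[0]
    if counter <= last_counter:
        return None  # stale counter: reject replayed packets
    return counter, msg[8:]
```

Even this toy version shows why the reported attack worked: without a shared secret and a MAC check, the device has no way to tell a legitimate programmer's radio traffic from an attacker's.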

(via /.)