The street value of health information is 50 times greater than that of other data types. Even worse, the healthcare industry is among the weakest at protecting such information. With organized criminals trying to steal medical IDs, sloppy mistakes becoming more commonplace, mobile devices serving as single sign-on gateways to records and even bioterrorism now a factor, healthcare is ripe for a wake-up call – one that just might come in the form of a damaging "data spill."
Government Health IT Editor Tom Sullivan spoke with Larry Ponemon, chairman and founder of the Ponemon Institute, and Rick Kam, president of ID Experts, which sponsored Ponemon's second annual Benchmark Study on Patient Privacy and Data Security. He asked about that data spill assertion, why healthcare lags other industries in privacy and security, and how the $6.5 billion spent on responding to data breaches could be better invested.
Q: The study finds that breaches are up 26 percent. Are things as bad as they seem to be?
Larry Ponemon: Data loss and data breaches happen all the time. One possible reason for the increase in breach frequency is that organizations are more cognizant of breaches and are mandated by law to report them. In other words, it’s the old adage: 'If a tree falls in the middle of the forest and we don’t hear it, did it actually fall?' Organizations have a heightened sense of awareness, hopefully, about these laws, and therefore the reported frequency is increasing because of that.
There is a second, more nefarious possibility: data loss occurs because there’s simply more criminal enterprise around data theft. And there’s evidence that, not just in healthcare but generally, that number seems to be on the increase as well.
So it’s a combination of factors, but the results of our research on a matched sample basis suggest that number certainly isn’t going down. Instead of getting better, it seems to be on the increase.
Q: What, specifically, are those factors?
Rick Kam: One of the interesting things within privacy circles is growing concern about the strategic nature of the data. Take the TRICARE information that was breached: there’s concern that the data, including the vaccination and health information of our fighting forces, could be released or picked up by a nation-state like China or North Korea that might pursue a bioterrorism strategy against our country. It might seem a little out there as a concern, but just as there is nefarious use for criminal or financial gain, there is also nefarious use of other kinds for which health information can be very valuable.
Q: So, an enemy could potentially find out weaknesses in terms of vaccinations, and deduce the best way to attack our troops?
RK: Exactly. To use a bioterrorism agent that weakens the fighting forces of the U.S., knowing what they are vaccinated against and what they are not would be an important detail.
Q: Beyond the military, is the healthcare industry at large vulnerable to some sort of big data heist?
RK: Like when BP had its massive oil spill, there’s the potential for something similar to occur in data security and privacy within healthcare – which would be a wake-up call for the industry. To put this into context, healthcare information, compared to financial data or even oil, is something that cannot be put back in the box. You can get a new Social Security number or a new credit card after a financial or identity theft – think of the theft from TJX, or Sony with the email addresses and account numbers. But lose even a few hundred pieces of patient data that might surround a stigmatized illness, or some variation on that theme, and that information cannot be put back in the box. Once it’s out there, it’s out there forever.

There are a couple of issues around that. One is that the information is worth 50 times what Social Security numbers are worth, based on some of the research I’ve seen, some of which Larry has done. A Social Security number is worth, say, $1 on the street, while a health insurance number and/or health information is worth $50 on the street, which points to the value of that information for other uses, whether it’s illegally getting access to prescription drugs or to health services.
So I do think there’s going to be a giant data spill of health information and that might be tens of thousands or even millions of records that create that impact. Since you’re Government Health IT, I love this example: Imagine if the health information of the U.S. Congress was compromised ... or of the GOP candidates … or some variation on that theme.
Q: The study found that sloppy mistakes are among the most prevalent causes of data breaches. What are the most common examples?
LP: Basically, it’s hard to say what the sloppiest or worst example is, but we see billing information, administrative applications like scheduling apps, and definitely clinicians who are not paying attention to detail and unfortunately might lose a device like a handheld that contains patient information. Part of the whole ecosystem of healthcare is about collecting information. You have to do it. That’s why you’re in a hospital, right, to recover from an illness or for diagnostic purposes. Information has to be collected about you, but the handling of that information between clinicians, administration, billing, and others, including third parties, creates a kind of perfect storm for data loss.

There’s also the culture. I’m just going to jump in here – and this might sound pretty negative and damning to clinicians – but culturally we’re dealing with people who measure their efficiency in seconds. There’s pressure on healthcare organizations to be more efficient than they’ve previously been – efficiency in terms of time, the time it takes to get something done. So if it takes a little bit of time to secure your handheld device with a password, that doesn’t get done. That goes back to the culture of healthcare, where we push people to work very, very efficiently but may not give them the resources to go a little slower and be more mindful of their privacy and security responsibilities. This might also be true in other industries, but based on the research we’ve done over the years, healthcare seems to be one of the worst in terms of balancing the need for security with the mission of greater efficiency.
Q: So why is healthcare among the worst?
LP: Well, I think there are financial challenges for many healthcare providers, so it’s hard to get enough funding to have the right technology, the right people, and the right governance processes in place to deal with these requirements – which are real requirements, not just regulatory ones. That has a lot to do with it, and as I said, culturally the main vision in healthcare is to heal people. It’s not about protecting data. Some industries, like financial services, learned a long time ago that data protection is core to customer trust.
That concept does not seem to pervade the healthcare organizations that participated in our study. And, interestingly enough, patients who are the victims of data loss are going to lose trust pretty quickly if a healthcare provider loses their data. They’ll say, ‘Why do I want to go to a hospital that can’t manage my data? How can they manage my illness? How can they manage a laboratory test if I can’t trust them to manage my billing order?’ Those kinds of issues are pervasive in healthcare. Other industries experience some of them – it’s not uniquely a healthcare problem – but healthcare does seem to have more of these challenges than other industries.
RK: Widespread use of mobile devices is one of the culprits. It’s not unique to healthcare but they are causing problems.
Q: Is there a distinction between, say, laptops, smartphones and tablets? Is one more susceptible to being breached than the others?
LP: On the word mobile – a laptop computer is certainly a mobile device, but the devices of greatest risk in our areas of research are smartphones, and maybe tablets. The smaller the device, the higher the probability of it being lost. And we know that healthcare organizations, on the efficiency side, have discovered the great benefits of using a handheld device – for example, to capture patient information, to do diagnostics, to receive the consent of a patient. The ability to do that is now so efficient because of handhelds, and when you couple that with single sign-on technology, so that the user can connect to every app without having to enter separate passwords, it creates a real improvement in the working environment. But of course, as more of these small devices are used to collect data, the higher the risk that something very small can contain lots of data and lead to a massive data loss.

The other issue, too, is that as easy as it is for the device to be lost, especially with all that data residing on it, the device itself is also a form of authentication. People forget about that. If I’m using single sign-on and I’m not careful, I might have my credential visible on the device. If it’s stolen by the bad guys, that could open up access to a system of electronic medical records – thousands or millions of records. So that’s another issue with mobile devices: they’re not just a form of storage but also an authentication mechanism.
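The risk Ponemon describes can be sketched in a few lines. This is a hypothetical illustration, not any real records system: the names `cached_sso_token` and `records_api_access` are invented. The point is that a back end which trusts any bearer of a device-resident token effectively makes the lost device itself the credential.

```python
# Hypothetical sketch of the single sign-on risk described above.
# A token cached on the handheld lets the user reach every app
# without re-entering passwords -- and lets a thief do the same.

cached_sso_token = "device-resident-session-token"  # stored on the handheld

def records_api_access(presented_token: str) -> bool:
    # A back end that trusts any bearer of the token: nothing here
    # is tied to the person, only to possession of the device.
    return presented_token == cached_sso_token

# Whoever holds the lost or stolen device holds the token,
# so the check passes for them just as it does for the clinician.
print(records_api_access(cached_sso_token))   # True
print(records_api_access("guessed-token"))    # False
```

This is why mitigations such as short token lifetimes, device passcodes, and remote wipe matter: they re-attach the credential to the user rather than to the hardware.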
Q: One thing we’ve all learned from the VA, not just in its 2006 data loss, but this past October, when the agency reported the loss of an iPad, albeit an empty one, is just how easily hardware can disappear.
RK: Oh, absolutely. And in the context of healthcare, protected health information and financial information are used to drive medical identity theft, which seems to be one of the growing issues, and not only in the U.S. Medical identity theft is one of the results of healthcare data breaches being up 26 percent.
LP: The basic issue – and I say data theft rather than data loss, because it’s hard to know whether lost data ultimately ends up in the hands of a cybercriminal and all of these bad things occur – is that in identity theft the end goal has historically been to steal a person’s identity. And just like a financial record, a health record probably has your credit card, debit card, and payment information contained in it. Financial records are lucrative for the bad guy, but the health record is a much, much more valuable item, because it not only gives you the financial information but also contains the health credential, and it’s very hard to detect a medical identity theft.

What we’ve found in our studies is that medical identity theft is likely on the rise, and there’s an awareness among the healthcare organizations that participate in our study that they’re starting to see this as more of a medical identity theft crime. It’s not just about stealing credit cards and buying goodies; it’s about stealing who you are, possibly getting medical treatment and, therefore, messing up your medical record. The victim may not know about it until he or she stumbles on something that reveals their medical identity was stolen. That was definitely recognized by our respondents as an emerging threat they believed was affecting the patients in their organizations.
Q: Taking that into consideration, is there really even such a thing as a secure mobile device?
LP: I might get into trouble here. The answer is that mobile devices, by their very nature, can never be completely secure. The reason is that the whole idea is to allow the user maximum convenience in terms of communication, which means staying connected. You can use the device for so many purposes, from mobile payments to medical records to whatever else. The other part of that reality is that the security industry is spending lots of resources, and really talented companies have come up with better security solutions that don’t necessarily reduce convenience but create a much better security environment around the devices. As you probably know, those protections have to be invisible to the end user to be acceptable. We’re seeing more technologies being built with this kind of convenience in mind. Right now mobile devices are a source of great insecurity. They will remain insecure forever, but over time they’ll become much less insecure through the development of new technology.
Q: All of this makes plenty of patients and providers wary about sharing personal information. But for healthcare to really improve in this country – and I’m talking with or without federal health reform legislation – patients need to be willing to share their data in a protected fashion, so it can be compiled and analyzed for better outcomes. How can we get over this patient consent hump?
RK: I would call it practical or pragmatic tactics: providers taking very simple steps to understand where their protected health information is; having a prepared response ready so that when they do have an issue they can deal with the privacy and security fallout quickly; and, probably most important, making sure all of their business associates and their ecosystems of contractors and providers are focused on keeping the information protected and secure. Beyond that, as Larry said, it’s going to take the security and health industries making improvements in technology, training, and general awareness that this data has value both for good and for evil. As an important asset, it needs to be protected.
LP: Ditto for my response, and I’d just add a couple things. First, the issue of consent is very, very difficult. That’s because people will give their consent without really giving their consent. In other words, they understand it just well enough to check the box that everything’s okay, and then down the road, when something happens, they’ll say, ‘How did my data end up there?’ You could talk to 100 people and get 99 different definitions of what they consider consent to be. But as the public gets smarter about this whole issue – hopefully that occurs – they are going to be a bit more careful about letting providers share their data with third parties. The other issue is the emergence of large health databases. You could build in certain controls to give patients a sense of control over the data that’s collected and used about them. But it also creates a potential firestorm for cyberattack, because instead of hacking into one hospital, you’re hacking into a database of the United States government, and that has huge infrastructure implications.
Q: Is there anything that came out of the study that I didn’t ask about?
RK: The only thing I’d add in closing, Tom, is to think about the $6.5 billion figure and imagine that money being put toward good instead of being spent responding to data breaches. With that $6.5 billion you could hire 81,250 registered nurses nationwide in the system to help patients.
LP: A sobering thought, there.
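Kam’s closing figure is easy to sanity-check. The interview does not state a cost per nurse, so the roughly $80,000 implied all-in annual cost below is derived from the two numbers he gives, not a stated fact:

```python
# Back-of-the-envelope check of the closing $6.5 billion claim.
breach_response_spend = 6_500_000_000  # annual spend responding to breaches
nurses_cited = 81_250                  # registered nurses cited by Rick Kam

# The implied all-in annual cost per nurse (salary plus overhead);
# this figure is derived, not stated in the interview.
implied_cost_per_nurse = breach_response_spend / nurses_cited
print(implied_cost_per_nurse)  # 80000.0
```

At about $80,000 per nurse per year, the cited headcount is arithmetically consistent with the $6.5 billion spend.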