Mention the phrases “information leak” or “data breach” in certain healthcare circles and the conversation quite naturally turns to HIPAA regulations and large-scale spills akin to Anthem, Community Health Systems, even Sony. Yet there is another technological means by which protected health information can be inadvertently exposed: Web application vulnerabilities.
Even holes that seem small or innocuous today promise, in relatively short order, to prove more severe than they appear.
“While this hasn't been widely exploited right now, it's sure to be a big target as criminal sophistication rises,” Elliott Frantz, founder and CEO of ethical hacking firm Virtue Security, told Healthcare IT News. “We may see breaches getting much deeper.”
Which is why Frantz, who will be demonstrating application security and network penetration testing at HIMSS15, spotlights the five most common ways Web applications leak protected health information that Virtue Security encounters during vulnerability assessments.
1. PHI in URLs: Information such as a patient’s name and date of birth can be leaked to unauthorized actors when it appears in URLs, Frantz says: URLs are stored in browser history logs, can be cached by proxies and viewed by unauthorized people, and can potentially be cut and pasted, then sent to other users.
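In developer terms, the fix is to keep identifiers out of the URL entirely and carry them in a request body instead. A minimal sketch (the hostname and field names here are hypothetical, for illustration only):

```python
from urllib.parse import urlencode

# BAD: PHI embedded in the query string ends up in browser history,
# proxy caches and server access logs.
bad_url = "https://ehr.example.com/records?" + urlencode(
    {"name": "Jane Doe", "dob": "1980-04-01"}
)

# BETTER: keep the URL opaque and carry identifiers in a POST body,
# which is not recorded in history logs or typical access logs.
good_url = "https://ehr.example.com/records/search"
post_body = urlencode({"name": "Jane Doe", "dob": "1980-04-01"}).encode()

assert "Jane" in bad_url       # PHI is visible in the URL itself
assert "Jane" not in good_url  # nothing sensitive in the logged URL
```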
2. Improper cache controls: Web browsers not instructed against caching data will store that information locally, creating files that anyone with access to that computer could potentially view. So if a person visits a site about HIV/AIDS or substance abuse, that visit may be visible to a subsequent user of that machine.
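The instruction in question is a set of response headers. One way this might look, sketched as a bare-bones WSGI app (a hypothetical example, not any particular vendor's code):

```python
# Response headers that tell browsers and intermediaries not to store
# PHI-bearing pages on disk.
NO_STORE_HEADERS = [
    ("Cache-Control", "no-store, no-cache, must-revalidate"),
    ("Pragma", "no-cache"),  # for legacy HTTP/1.0 caches
    ("Expires", "0"),
]

def app(environ, start_response):
    # Every response carrying patient data gets the no-store headers.
    start_response("200 OK",
                   [("Content-Type", "text/html")] + NO_STORE_HEADERS)
    return [b"<html>patient record rendered here</html>"]
```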
3. Poor Secure Sockets Layer enforcement: Health information can be secured with proper SSL connections, but Virtue Security still finds applications available over plain HTTP – which Frantz explains as meaning that anyone with “physical access to network infrastructure between the user and server” could view or, worse, modify the data in transit. Frantz recommends that all applications redirect users to HTTPS pages.
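That redirect-to-HTTPS advice can be sketched as middleware: bounce plain-HTTP requests to the secure scheme, and set an HSTS header so browsers refuse to downgrade on later visits. A hypothetical WSGI sketch, not tied to any specific framework:

```python
def enforce_https(app):
    """Wrap a WSGI app: redirect HTTP to HTTPS, add HSTS on HTTPS."""
    def middleware(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            # Plain HTTP: send the browser to the HTTPS equivalent.
            host = environ.get("HTTP_HOST", "localhost")
            path = environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently",
                           [("Location", f"https://{host}{path}")])
            return [b""]

        def hsts_start_response(status, headers):
            # Tell the browser to use HTTPS for the next year.
            headers.append(("Strict-Transport-Security",
                            "max-age=31536000; includeSubDomains"))
            return start_response(status, headers)

        return app(environ, hsts_start_response)
    return middleware
```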
4. Excessive application timeouts: This one is admittedly a bit tricky: security experts can say straightaway that every healthcare application should expire user sessions after a period of inactivity, but it’s harder to pin down exactly how long that period ought to be. Reasonable timeouts range from 30 to 60 minutes, Frantz says, adding that applications should redirect users to a login screen on expiry to ensure PHI does not linger on the screen post-timeout.
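The idle-timeout check itself is simple; a minimal sketch, assuming the 30-minute lower bound Frantz cites (the function name and session shape are illustrative):

```python
import time

IDLE_LIMIT = 30 * 60  # seconds of inactivity before the session dies

def session_expired(last_activity, now=None):
    """Return True when the session has been idle past the limit.

    On expiry the app should discard the server-side session and
    redirect the browser to the login screen, so no PHI stays visible.
    """
    now = time.time() if now is None else now
    return (now - last_activity) > IDLE_LIMIT
```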
5. Insufficient access controls: Managing active and inactive user accounts has been an ongoing problem for decades across multiple industries, and improper validation can inadvertently enable one user to read another’s data or even take control of an application. “It is absolutely critical that every authorization check is performed by the permission granted to the session token granted to the user when their username and password were provided,” Frantz explains.
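The point of Frantz's advice is that the server must decide who may see a record from its own session store, never from an identifier the client supplies. A hypothetical sketch (the session store, record table and function are invented for illustration):

```python
# Server-side state: the session token maps to an authenticated user,
# and each record has a known owner.
SESSIONS = {"token-abc": {"user_id": 42}}  # session token -> user
RECORD_OWNERS = {1001: 42, 1002: 77}       # record id -> owning user

def fetch_record(session_token, record_id):
    """Return a record only if the session's user owns it."""
    session = SESSIONS.get(session_token)
    if session is None:
        raise PermissionError("not authenticated")
    # The check keys off the session token's user, not anything
    # the client claims about itself in the request.
    if RECORD_OWNERS.get(record_id) != session["user_id"]:
        raise PermissionError("not authorized for this record")
    return {"record_id": record_id}
```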
Making matters worse: HIPAA does not address these common software vulnerabilities at a technical level, so there’s a huge gray area that healthcare payers and providers must grasp. And perhaps the most difficult part of application security is deciding what exactly is a quantifiable risk, and what is not.
“There are so many issues that require difficult steps and at face value seem far-fetched like they’d never happen but in industries that are heavily targeted, such as finance, are taken extremely seriously,” Frantz says. “I can never tell anyone these are not going to happen or an incident won’t result in a breach.”