Responding to Sweeney

I am again discussing the privacy comments from Dr. Latanya Sweeney. She testified to Congress that both the NHIN CONNECT and NHIN Direct security models were flawed.

Figure 2(b) summarizes concerns about these two designs. The NHIN Limited Production Exchange has serious privacy issues but more utility than NHIN Direct. On the other hand, NHIN Direct has fewer privacy issues, but insufficient utility. When combined, we realize the least of each design, providing an NHIN with limited utility and privacy concerns.

You mean both projects are un-private?

I have recently posted about the assumption that NHIN Direct is less functional than NHIN CONNECT. So now I want to talk only about the privacy failings that Dr. Sweeney implies.

I summarize the statement above, and her testimony generally, to mean that Dr. Sweeney believes that "NHIN CONNECT and NHIN Direct both fail to protect privacy, but NHIN Direct is the lesser of two evils". That is a meaningless statement. It is easier to see this if you speak in terms of known Open Source applications. For instance, if I say "Apache does not support privacy" or "Firefox does not support privacy" it becomes pretty clear that the point is hog-wash. I can use Apache to set up a website that only I and my family can access. I can also use Apache to create a website that will abuse users' privacy the same way that Facebook does. Similarly, Firefox can be configured through its settings to behave in a more private or a less private way.

Moreover, both projects can be used as software platforms that allow other programs to increase or decrease privacy. mod_ssl is a perfect example with Apache, and this is even more apparent with Firefox, which already has tools that help publish browsing habits, and also has several tools to make browsing more private.

As with any software project and protocol, it is possible that there are privacy implications in the underlying protocols and there is also the possibility that through mis-implementation either Open Source project could create a security or privacy risk where none exists in a correct implementation of the protocol. There is nothing to be done about this. This is a problem with software generally, and the best we can do is to put both the protocols and the implementations of those protocols out in the open so that security researchers can look for flaws. For both NHIN CONNECT and NHIN Direct this has already been done or is happening now.

Furthermore, the underlying protocols for both NHIN Direct and CONNECT are designed to allow for different kinds of privacy policy and enforcement. From a configuration point of view, both projects will be able to support extensive consumer consent options, for instance, or they could be configured to generally ignore consumer consent. For that matter, they could both be configured not to share any information at all, or to only accept in-bound data and never send data out.

From a purely practical point of view, most doctors in the US have email accounts but rarely use them to send PHI. When they do send PHI, it is usually legal and privacy-respecting. This is not always true, but there is nothing we can do to "email" to make it more true. It is not about the technology but how you use it.

Design Patterns are a Straw Man

Dr. Sweeney suggests: "For example, a domestic violence stalker can use the system to locate victims." But each node on the current and future NHIN Exchange will have the ability to monitor for strange searches, and should be able to easily detect if a user frequently searches for 20-25 year old women who live near them. Are those policies and procedures enough? Hard to say, since I have no idea what they are. Latanya does not indicate what they are either, but she still makes this assertion.
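To make that concrete, here is a minimal sketch, in Python, of the kind of audit-trail monitoring a node could run. This is my own illustration, not anything from the NHIN specifications; the record fields and the threshold are assumptions.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class QueryRecord:
        user: str              # who ran the search
        patient_age: int       # demographics in the search criteria
        patient_sex: str
        distance_miles: float  # how far the searched-for patient lives from the user

    def flag_suspicious_users(log, threshold=10):
        """Flag users who repeatedly search for young women who live
        near them -- the stalker pattern Dr. Sweeney describes."""
        hits = Counter(
            rec.user for rec in log
            if rec.patient_sex == "F"
            and 20 <= rec.patient_age <= 25
            and rec.distance_miles < 25
        )
        return [user for user, count in hits.items() if count >= threshold]

The point is not this particular heuristic; it is that an electronic exchange leaves a query trail that can be monitored at all, something the fax network cannot do.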

Generally, assertions of privacy violations without evidence seem to be the modus operandi of the Patient Privacy Rights team, and Latanya seems to be holding to form. Ironically, the new NHIN Exchange should allow for detection of the kind of abuse that Dr. Peel and Dr. Sweeney assert is common, while the current fax-based system lets it go undetected. Dr. Sweeney gives several "for examples":

  • for example, a domestic violence stalker can use the system to locate victims.
  • for example, an insider could receive notifications of all abortions performed at other organizations.

In both of these examples, if the "black hat" in question is currently monitoring all incoming faxes at a local Planned Parenthood headquarters, and has a pen and paper handy, he can get all of this information now… and there is no way to detect that. It would not have to be a betrayal by an insider, either. A tap could be placed on the fax line, even from outside the building. Both of these attacks are undetectable today. Ironically, the NHIN Exchange would sometimes prevent these kinds of abuses, and the rest of the time it would provide precisely the evidence of information leaking that both Dr. Peel and Dr. Sweeney assert is common.

Dr. Sweeney asserts:

Corrections (see Figure 5). In the data sharing environments described so far, there is no mechanism for propagating corrections or updating patient information.

But then she also says:

In one version [10], event messaging allowed 3rd party notification of patient information outside the direct care of the patient and without the patient’s knowledge.

I do not want to straw-man her position here; she is talking about two different theoretical designs as she makes these statements. But there is a tradeoff here. The actual standards (Section 1.3) implemented in NHIN Exchange specifically state that:

In addition to “Query/Retrieve” and “Push”, the NHIN must support a publish and subscribe information exchange pattern in order to enable a wide range of transaction types. HIEM defines a generic publish and subscribe mechanism that can be used to enable various use cases. Examples include….   Support for notification of the availability of new or updated data
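The HIEM mechanism quoted above is, at its core, publish/subscribe. As a toy illustration (the class and method names are mine, not the standard's), notice that the same loop that broadcasts a correction to every subscribed provider is the loop that could notify an unwanted third party; the tradeoff lives in who may subscribe, not in the code:

    from collections import defaultdict

    class Topic:
        def __init__(self):
            self.subscribers = defaultdict(list)  # patient_id -> callbacks

        def subscribe(self, patient_id, callback):
            self.subscribers[patient_id].append(callback)

        def publish(self, patient_id, event):
            # Broadcasting a correction and leaking a notification are the
            # same operation; policy decides which one this is.
            for notify in self.subscribers[patient_id]:
                notify(event)

    corrections = Topic()
    corrections.subscribe("patient-123", lambda e: print("Clinic A received:", e))
    corrections.publish("patient-123", {"field": "allergy", "corrected_to": "penicillin"})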

So in the real world, we do have a problem with "event messaging" potentially being used as a means to violate privacy, but because of event messaging we do not have a problem with broadcasting corrections to patient data. There is a tradeoff here, exactly the kind of tradeoff that Dr. Sweeney says we do not need to make:

Figure 2(a) depicts the traditional false belief of trading privacy for utility. It also shows our 9-year experience in the Data Privacy Lab of finding sweet spots of solutions that give privacy and utility. The key to our success has been technology design.

I want to be clear: I agree that there are sweet spots of "acceptable privacy and acceptable utility". In fact, in this situation the "technology fix" is to do extensive logging and auditing of who is looking at what, so that you can detect the abuse that a comprehensive alert system makes possible. I could talk about how you might do that, but again, looking at the actual standards, you find a comprehensive description (Section 5).

Dr. Sweeney ends her testimony with the suggestion that:

Performing risk analyses on design patterns provides a clear, informative path.

But that is simply not true. In fact, her own "start" to performing a risk analysis on design patterns serves only to devalue the work that has already been done on NHIN CONNECT and is now being done on NHIN Direct. Criticizing a design pattern not used in given software, and then implying that the given software also suffers from those problems, is a straw-man process.

Dr. Peel has said to me that she feels that Dr. Sweeney is being ignored, and I can see why now that I have carefully read her testimony. I would welcome specific criticisms of the security protocol designs that we will be using on NHIN Direct. But I would suggest that Dr. Sweeney criticize either particular software or at least a specific "design" rather than a "design pattern". Currently Dr. Sweeney is over-simplifying NHIN CONNECT by talking about "design patterns" which are not used. The first consensus draft of the NHIN Direct security protocol design does not even exist as I am writing this post, while Dr. Sweeney's testimony is already more than a month old. I fail to see how she could say anything legitimate about the NHIN Direct security and privacy design at all, one way or another.

I would suggest, as I said to Dr. Peel in person not 48 hours ago, that if Dr. Peel, Dr. Sweeney and Patient Privacy Rights generally want to be included in the relevant discussions, they try to keep their discussions relevant. You are literally criticizing what we are NOT doing, and then implying that ONC is not working with you. I know that your hearts are in the right place, but I cannot code what you describe.

-FT

The Power of Push

Hi,

The NHIN Direct network has been criticized for lacking relevance for health information exchange. Specifically, Latanya Sweeney has submitted testimony to Congress which has nothing good to say about either NHIN project. The paragraph I want to highlight says:

ONC’s website also describes NHIN Direct [11] as a parallel initiative underway [3]. The idea came from comments made by representatives from Microsoft and Cerner [12]. In current practice, two providers fax patient information as needed. So, the idea is to replace the fax with email that has secure channels to combat eavesdropping. There are numerous concerns with this design also. A glaring problem is its limitation. We cannot perform all meaningful uses with this system, so we will need an additional system, which begs the question: why build this system at all? For example, this design cannot reasonably retrieve allergies and medications for an unconscious patient presenting at an out-of-state emergency room (arguably a stage 1 meaningful use). Figure 2(b) summarizes concerns about these two designs. The NHIN Limited Production Exchange has serious privacy issues but more utility than NHIN Direct. On the other hand, NHIN Direct has fewer privacy issues, but insufficient utility. When combined, we realize the least of each design, providing an NHIN with limited utility and privacy concerns.

This is not the first time that the NHIN Direct push-only model has come under attack, so I wanted to discuss it. Push-only means that A can send messages to B, but B cannot automatically get data from A (that would be pulling). Email and faxes are push models. Web pages are pull models (i.e. sent to you when your browser asks for them). The benefits of both models are constantly debated in software design.

I am working on NHIN Direct, and not so much NHIN CONNECT, although I have great admiration for that project and many of my friends are working on it. My experience with NHIN Direct, which has been excellent so far, has helped me to understand just how narrow-minded and short-sighted these kinds of criticisms are.

Both projects, in so far as such a thing is possible while building technology, are taking a "policy-neutral" stance. That means that rather than defining policy in code, we try to code so that a broad range of reasonable policy decisions can be supported in a given protocol and codebase. But even under a given policy, there will be many, many options to use these technologies in ways that are unexpected. So when anyone criticizes the "security and privacy features" of either CONNECT or Direct at this stage, it is typically by making certain poor assumptions about how the system will actually be used.

The most important poor assumption is to consider only standard uses of the technology when considering meaningful use. For instance, the NHIN Direct project concedes that mere usage of the NHIN Direct exchange will map to specific meaningful use requirements. Note the headers on that PDF to see that this map was contributed by my friend Will Ross and the Redwood MedNet team. In Open Source healthcare, as in Open Source generally, you see the same actors generating excellent contributions again and again. But these meaningful use mappings only consider the implications of mere use of the network, rather than considering anything that can be implemented on top of the network.

When people say the "Internet", what they usually mean is either email or the world wide web. In reality the Internet is a far richer technology space than that, but for most people only two of the thousands of protocols that operate over the Internet have become personally relevant: SMTP and HTTP/HTML. As I say that, many of my clinical readers might not even recognize that SMTP, and sister protocols like IMAP, are the protocols that enable email, or that HTTP/HTML enable the world wide web. Both of these protocols in turn rely on lower-level protocols, like IP/UDP/TCP/SSL/DNS, that enable the average user to surf and email.

But understand that the richness of the Internet, as we know it today, is not merely what the protocol implementations allow you to do directly (i.e. browsers let you surf the web and email clients let you read and send messages) but how those technologies are used. The web allows you to buy books on Amazon, win auctions on eBay and find dates on eHarmony. Each of those websites enables complex application functionality on top of the implementations of HTTP and HTML.

With the web, it is easy to see that there is more on offer than merely transferring hyper-linked web pages: richness available at the application level that is not implied or assumed by the lower-level implementations of the enabling protocols (that would be web browsers and web servers implementing HTTP/HTML). Sometimes it is easy to forget that we see the same thing with email. The email network does far, far more than merely send and receive messages. Like the web, higher-level functionality is enabled by the lower-level, protocol-driven functionality, in this case the ability to send and receive messages.

I wanted to highlight several things that you can do with email, that are examples of this higher-level functionality.

  • You can use an email account to prove that you are a human to a website. Have you ever signed up for a website that insisted that you give them an email address and then automatically sent you an email with something to click on to prove that you owned that email address? I have done this so many times that I have lost count. This is "email for authentication" (see the sketch after this list). Software often uses email messages to provide greater access to websites.
  • You can send messages to just one email address, which will then be sent to many other email addresses. Mailing lists can be pretty amazing software services, but fundamentally all they do is intelligently receive and re-send email messages. This makes email change from a one-to-one messaging system to a one-to-many messaging system. But it is implemented entirely with one-to-one messages.
  • If you push the mailing list even farther you can see that it can become something even more substantial, like craigslist, which pushes the envelope on email broadcasting and blurs the lines between email application and web application.
  • Programs can automatically send email messages when something changes; Google Alerts, for example, tell you when the web has changed (or at least changed according to Google).
  • You can have many email addresses and configure them to aggregate to one email viewing client, enabling separate relationships, and even identities to be managed in parallel. For instance your work email address really means your work identity, and your personal email means your personal identity, but you might forward both to the same email client and then answer and send messages as both identities at the same time.
  • You can use email to create a system for recycling things, making it easier not to buy new things and not to throw away working things. This is essentially email-enabled peer-to-peer conservationism.
  • Email clients are more than just programs we use to send and receive messages. We expect them to integrate with calendaring software. We expect them to allow us to extend them with other programs. People use powerful email clients like Gmail to run their lives; before people started to do that with Gmail, they were running their lives with Outlook or Eudora.
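Here is the sketch promised in the first bullet: a minimal model of "email for authentication", in Python. Every name and URL below is hypothetical; real sites differ in the details.

    import secrets

    pending = {}  # token -> email address awaiting confirmation

    def start_signup(email, send_message):
        token = secrets.token_urlsafe(16)
        pending[token] = email
        send_message(email, "https://example.com/confirm?token=" + token)

    def confirm(token):
        # Clicking the link proves control of the mailbox -- nothing more,
        # but that proof is enough to build higher-level features on.
        return pending.pop(token, None)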

Email is not just a method for sending messages. It is an application platform. Other applications that want to do something interesting can use email as a messaging component to achieve that greater goal.

I want to be clear: the NHIN Direct project has not settled on SMTP, or email, as its protocol choice (although an S/MIME email design is on the table). At this point we are not sure what protocol we will be choosing. But it does not matter; the point here is that NHIN Direct will at least act like private, secure, identity-assured (at least for clinicians) email for sending clinical messages. You can expect that an NHIN Direct implementation will be either tightly or loosely integrated with a doctor's EHR and a patient's PHR, in the same way that you have tight or loose integration between email clients and calendaring applications.

At this point it is best to think of NHIN Direct as a "cousin" to email, with lots of the same features and benefits, but also limitations (to protect privacy) and new features (clinical integration, meaningful message signing, etc.) that email does not have.

But the most important shared benefit between NHIN Direct and email will be the fact that you can build new interesting stuff on top of it.

Which brings us back to Latanya's first criticism. Will NHIN Direct support the "break the glass" use-case (where your information can be gotten to in case of an emergency) that Latanya mentions? No. Will software that implements NHIN Direct be able to use NHIN Direct as part of something that provides break-the-glass functionality? Yes.

Very soon after an NHIN Direct network stabilizes, you will start to see functionality that addresses this use case. PHR applications like Google Health, HealthVault and Indivo X (the most important three PHR platforms) will probably develop break-the-glass mechanisms that work something like this…

I am an emergency room doctor and a patient comes in unconscious. In his wallet I find a card that indicates his PHR is held at johndoe@healthvault.com.

I visit healthvault.com and click the "break the glass" link. HealthVault asks me to enter my NHIN Direct address, which is going to look a lot like an email address. So I enter fred.trotter@nhin.methodisthospital.com (not a real address). HealthVault will have already performed extensive public key exchange with Methodist Hospital, and will be able to cryptographically ensure that any address under that domain name (we call them health domain names, since they will be used exclusively for this purpose) is in fact someone that Methodist Hospital vouches for, and they will have pre-approved Methodist Hospital's PHI handling procedures. Given that pre-arrangement of trust, they will know that they can securely send messages to any published Methodist Hospital NHIN Direct address.

But they are not certain, at this stage, that I am in fact fred.trotter@nhin.methodisthospital.com, so they will send a message to that address with a link. I will click the link, which will confirm with HealthVault that I am in control of that address and that they should forward the contents of the johndoe@healthvault.com PHR record. Now that they are sure that this is a valid break-the-glass request from a valid user at an institution that they have a trust relationship with, they will forward the record to the address.

They will also add a record to John's PHR to indicate that I broke the glass. If this whole process was done fraudulently, John will know, and there will be hell to pay: for me personally, for abusing my credentials, and for Methodist Hospital, for giving me credentials to abuse. Current HIPAA rules and fraud statutes would be activated if I made such a fraudulent request that was not in John's best interest. People who abuse the system could be detected and sent to jail.

The whole process takes minutes and works even when the patient is unconscious.
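For the technically inclined, here is a sketch of that flow in Python. It is a thought experiment under the assumptions above (a hypothetical PHR host, a pre-arranged list of trusted health domain names), not anyone's actual implementation. The two essential steps are the domain trust check and the challenge link; everything else is plumbing.

    import secrets

    TRUSTED_HEALTH_DOMAINS = {"nhin.methodisthospital.com"}  # pre-arranged trust
    challenges = {}  # token -> (requester_address, patient_record_id)

    def request_break_glass(requester, patient_record, send_message):
        domain = requester.split("@", 1)[1]
        if domain not in TRUSTED_HEALTH_DOMAINS:
            raise PermissionError("no trust relationship with " + domain)
        token = secrets.token_urlsafe(16)
        challenges[token] = (requester, patient_record)
        send_message(requester, "https://phr.example.com/glass?token=" + token)

    def confirm_break_glass(token, send_record, audit_log):
        requester, record = challenges.pop(token)
        audit_log.append(("glass-broken-by", requester, record))  # the patient sees this
        send_record(requester, record)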

Would that particular method answer the "break the glass" components of meaningful use? It seems like it would to me. Would this be the method that we end up using? I am not sure, but it would be something similar in spirit. Most importantly, it would be something implemented on top of, and around, the messaging model provided by NHIN Direct.

All of that is to say: Push is Powerful. It is powerful because it does not need to work alone. It can be a component of a larger system that does much more. It creates the opportunity for innovation and greater functionality, similar to the opportunity created by the original Internet protocols.

This is all true of the NHIN CONNECT project as well. The difference is that NHIN Direct is much simpler and has true parallels with the current fax and email systems. It is easier to see how NHIN Direct might change things because we are so familiar with its cousins, email and fax.

NHIN CONNECT offers much more functionality at the price of far greater complexity. Like the NHIN Direct system, and email and web before it, the NHIN CONNECT architecture will allow for innovation to occur on top of it. But it is doing much more work than NHIN Direct is.

For instance, if I were fully NHIN CONNECT enabled, I would be able to conduct a search for John Doe and find out that three hospitals had information that was not contained in the HealthVault record. NHIN CONNECT might be able to provide a merged view of that data for me, which is a much richer process than mere messaging can achieve. But that means that NHIN CONNECT must tackle the complex problem of sorting out which records actually belong to John Doe and therefore deserve to be merged. It would make automated, but accurate, decisions that Jonathan Doe at hospital A was my John Doe but that Johnny Doe at hospital B was not. NHIN CONNECT should understand whether a blood pressure measurement in the data it gathered from HealthVault was or was not a duplicate of a blood pressure reading that came from the hospital C EHR with the same date but not the same time stamp. These kinds of issues, plus countless more just like them, are addressed or exposed by both the underlying NHIN protocols that CONNECT implements and by the CONNECT codebase specifically.
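To give a feel for one tiny corner of that merge problem, here is a sketch of the duplicate blood pressure decision in Python. The field names and the one-hour window are my assumptions, not CONNECT's actual rules:

    from datetime import timedelta

    def is_probable_duplicate(a, b, window=timedelta(hours=1)):
        """Same readings on the same date, with timestamps close enough
        together that they probably describe one measurement."""
        return (
            a["systolic"] == b["systolic"]
            and a["diastolic"] == b["diastolic"]
            and a["taken_at"].date() == b["taken_at"].date()
            and abs(a["taken_at"] - b["taken_at"]) <= window
        )

Multiply that single judgment call by every field, every source and every patient-identity question, and you have a sense of the work CONNECT takes on.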

CONNECT uses push and pull and all kinds of other software models to do something very complex.

NHIN Direct just does push, but leaves potential complexity to higher level yet-to-be-made systems.

Some people think the NHIN Direct model is superior. Others think that CONNECT is better. I think we probably need both, for different reasons, which is essentially the ONC position on the matter.

But I wanted to be sure everyone was clear: Push has Power.

-FT

The Burden of Trust

Hi,

I am a vocal participant on the NHIN Direct Security and Trust workgroup. It's a perfect place for me. I love Open Source healthcare, but my background was in InfoSec… and we never really forget our first love, do we? At the NHIN Direct Security and Trust workgroup, I get to wear all of my hats at once… and that is fun.

The purpose of NHIN Direct is to design an infrastructure for sending messages with clinical content between clinicians (and their patients). It is basically designed to be an email-like system for delivering health information. It is intended to eventually replace the current NHIN, which is the ad-hoc clinical fax network.

On a recent call, someone from the "Policy" department said something about our current plans to the effect of "I am not sure how putting the burden of trust decisions on individual providers will impact the ability of the project to replace the fax network." I could not talk on the call (I was in a noisy airport), but I was surprised by that characterization of our work. In retrospect I can see how she would read what we are writing and come to the conclusion that we are putting new trust burdens on doctors, but in fact we want to lighten the trust burden they currently carry.

You don’t know the devil that you know

That is probably the most important point. The fax network comes with a very heavy trust burden, but we are used to it, so we rarely pay attention to it. This is a case of "acceptable losses". It's kind of like terrorism vs. auto accidents. Many more people in the world are killed in car accidents each year than are killed by terrorism. The irony is that terrorism is much harder to fix than auto accidents. If the US government devoted the same budget to auto accidents that it does to the "War on Terror", we could probably prevent 99% of the auto accidents in the world. But we, as a society, "accept" the burden of car crashes because we are used to them. We have the same problem with medical errors… but that is another post.

So let's take a careful look at the "current trust burden" of the fax network. First, doctors do not actually deal with this problem directly. Typically they hire staff to do faxing. This isolates them from the problems that the "faxer" faces. It also means that they rarely hear of the errors.

"Faxers" fax to patients, and they fax to other clinicians. There are lots and lots of times when something that should have been faxed to Dr. Smith ends up going to Dr. Jones. We only hear about the most extreme cases. In fact, before the existence of the NPI database, there was no reliable way to determine if a fax number was valid. If Dr. Adams wanted to send a record to Dr. Smith, his staff called their staff and wrote down the numbers. The numbers get jumbled and mislabeled, and lots and lots of errors occur.

We do not hear of the cases where people were killed because information in a record was faxed to a wrong number, perhaps sent to the "main hospital" fax line instead of the ER fax line where it was needed. These types of between-institution errors are almost impossible to detect. Even the "big picture" at one large hospital is hard to sort out, and when you add another institution, there is no hope. Instead you get cases that are written off as "we did not know that X… oh well… nobody's fault… nothing could be done".

Then of course there is the assumption that fax lines are private. This is the farthest thing from the truth. Faxes, just like regular phone conversations, are digitized and sent over the Internet. If a hacker gains control of a main router at a major Internet carrier, then they can re-route phone calls and faxes to themselves, as well as normal Internet traffic. The fax network is actually going over the Internet right now; it is just "obscured" rather than "encrypted".

This is not the only problem with faxes. Another problem is that institutions rarely have a firm grasp on how many fax machines are actually in operation. You can plug a computer modem into a wall and have a nearly undetectable new fax line, allowing "insiders" to send files to themselves via fax. In fact, phone lines can generally be re-purposed into back-channel data ports in a number of ways; faxing is only one of them. Lots of my old Air Force buddies ended up at SecureLogix, which is one of the top companies for phone security. They sell a telewall that can help prevent phone lines from being re-purposed. It is just what its name implies: a firewall for telephones. No large institution that I have ever heard of that paid for a penetration test that included wardialing has ever had the wardialing effort return zero rogue fax/modem instances. Clinicians should not assume that they understand their own fax infrastructure.

Even if you are really careful about who you fax to, the current fax network is difficult to maintain. Let's say that Dr. Smith sells his practice to Dr. Sneaky. If the fax number does not change, then Dr. Sneaky is going to get all of those faxes that were intended for Dr. Smith. Not good.

The problem with comparing the devil you know with the devil that you don't know is that usually, you don't actually know the first devil that well at all. The "trust burden" of the fax network seems light because it is hopelessly broken and we all just tolerate it.

A lighter burden

Which brings me to the "trust burden" for NHIN Direct. Our goal with regard to this burden is twofold:

  • When an NHIN Direct user makes a trust decision, it should be more reliable than the equivalent decision on the fax network.
  • Typical NHIN Direct users should be able to avoid directly managing trust at scale, making fewer and therefore better trust decisions.

The first one is easy. Without knowing exactly what standards we will be selecting at the time of this writing, I can already tell you that the security of the NHIN Direct network will be an improvement over the fax network. Moreover, it will provide more and better information to the users of the network than is possible with the fax network. Without going into the gory details, this is because PKI is better than post-it notes full of names and fax numbers for maintaining a secure information transfer.

The second one is a little tricky. What I mean by "trust at scale" is the problem of managing lots of peer-to-peer trust relationships. If we have an NHIN where, say, a third of all doctors in the United States participate, that is still probably over a million people. There is no way that you are going to get a doctor to make a list of all of the doctors that he/she does/does not trust taken from a million-person list. Even trying to do peer-to-peer trust at a city level would not work. Hell, I would be surprised if it would work even between two hospitals. (If you gave doctors the option to "not trust" some doctors at their own hospital, you would probably still get headaches.) The fax trust management problem is a little simpler because you can sometimes aggregate to the organization (several clinicians share the same fax), but even that is really difficult. Having to manage thousands of trust relationships dramatically increases the probability that you will get one of them wrong.

How do we fix that? We need trust aggregation points. So far there are two of these in our model. The first is at the organization level, just like faxes. Typical NHIN Direct addresses for providers working in hospitals or clinics will look something like drsmith@nhin.localhospital.com; the "nhin.localhospital.com" part of the address is the "health domain name", and you could use it to trust all of the messages that come from that health domain name. The second way is with what we are calling Anchor CAs. For those familiar with the way CAs (Certificate Authorities) work with HTTPS, it is basically the same. The difference is that there will be no "automatically included" Certificate Authorities. When you log in at Amazon, your browser makes a secure connection automatically because the people who make your browser decided for you that you would trust VeriSign CA certificates. You can find out how your browser developer makes this trust decision for you, but they are still making the decision for you.

That model, where someone else makes your trust decisions for you, is not going to fly in healthcare. The stakes are simply too high to outsource trust in this fashion.

However, the notion of aggregating trust using Certificate Authorities is a good one. Let's imagine that my home town, Houston, decided to set up a Certificate Authority. They would decide on some reasonable policies for things like:

  1. Anti-virus (think Storm Worm, not influenza)
  2. Firewalls
  3. At-rest disk encryption
  4. Password strength
  5. Local authentication (two-factor?)
  6. Logging
  7. etc.

Then the Houston HIE would create a CA, and that CA would "vouch" for organizations and individuals on the NHIN Direct network. BobsClinic might sign up with the CA; the CA would follow a bunch of steps to verify that BobsClinic was legit and was willing and capable of following the policy, and then the CA would say: OK, we are willing to vouch for BobsClinic.

Most clinics in Houston that wanted to use NHIN Direct could "import" the public key of the local CA. That's fancy talk for: they would accept the vouches that the CA made for all of the organizations that signed up. Those of you with security backgrounds understand that we are talking about a pretty basic CA infrastructure, but we wanted a way to describe the trust decisions that clinicians would be making under this model free of unneeded technical language. So we are calling the CA, and all of the people that the CA "vouches" for, a "Trust Circle". It makes sense: if you have not imported the certificate of the CA, you are "outside the circle"; if you have imported the public cert of the CA, then you are "inside the circle".
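Here is a conceptual sketch of that decision logic in Python. It models the trust bookkeeping only; a real deployment would do X.509 chain validation rather than keep literal lists, and all names below are made up:

    class TrustCircle:
        """A CA plus everyone that CA vouches for."""
        def __init__(self, ca_name, vouched_domains):
            self.ca_name = ca_name
            self.vouched_domains = set(vouched_domains)

    imported_circles = []  # each import is one deliberate trust decision

    def trusts(address):
        domain = address.split("@", 1)[1]
        return any(domain in circle.vouched_domains for circle in imported_circles)

    houston = TrustCircle("Houston HIE CA", {"nhin.bobsclinic.com"})
    imported_circles.append(houston)  # we are now "inside the circle"
    assert trusts("drbob@nhin.bobsclinic.com")
    assert not trusts("someone@nhin.unknownclinic.com")

One import, and every organization the CA vouches for becomes reachable; that is the whole point of the aggregation.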

This "Trust Circle" notion will reduce the number of trust decisions that typical NHIN Direct users will need to make. Of course, it will be really important that clinicians are very careful when they evaluate the policies and enforcement provided by a given CA. Those policies should meet or exceed their internal standards for handling PHI. This is important because you are not just trusting one organization; you are trusting lots of organizations "through" one organization, a much bigger deal.

Trust Circles get around the thorny problem of managing peer-to-peer relationships, but they also dodge another bullet. They avoid the need for a top-down single-CA architecture. Things would be much simpler, technically, if the NHIN (which is a too-vague term, BTW) would just set up the one-ring-to-rule-them CA and make everyone in the United States follow the same policy for exchanging health information. That is a deal killer for about a hundred reasons. Here are a few:

  • You are going to try to force Catholic charity hospitals to share information with Planned Parenthood clinics… are you kidding?
  • Make psychiatric hospitals message each other in the same way that normal hospitals do?
  • Make children's hospitals message the same way that normal hospitals do? (Kids are not just short people. Think about it: does the step-dad get NHIN Direct messages for little Johnny, or does only his biological father get them? Tough issues there.)
  • Create a policy that is guaranteed to be legal in all 50 states? (think about the implications of medical marijuana in California alone)

Policy is really, really hard, even if you do not assume that you are going to get everyone to agree. Assuming that everyone will agree makes the NHIN a non-starter.

Trust Circles (plural) get you out of that problem. When organizations and clinicians can see eye to eye on policy, they can use NHIN Direct to communicate secure messages; when they can't see eye to eye, nothing in the NHIN Direct security protocols will attempt to force or even encourage them to compromise.

Another thing to note is that there is nothing in the design that prevents NHIN Direct users from managing trust relationships one at a time. You do not have to join Trust Circles to send messages with NHIN Direct. If you want to "self-sign" your certs and exchange them on floppy disks, in person, with people you trust, that works too! That is why I used the word "typical" above.

But now we come to the real problem.

The first step is…

Even though the trust burden of the NHIN Direct system will be less than the trust burden of the current fax network, it may not feel that way. The reason is that we have not actually taken responsibility for the trust we place in the fax network. We continue to pretend that everything is fine. But it's not. The fax network is irreparably broken, and the first step towards fixing it is NOT to try to design a new model without a heavy trust burden, but to recognize that we have a problem. Once we do that, we can see that indeed "the burden is light".

On Being Threatened

Express Scripts, one of the nation's largest pharmacy benefit management companies, is being blackmailed with the threatened release of private health information. The blackmailer proved that he or she has access to the data by providing information on 75 Express Scripts customers.

The company has done a fine job of swallowing this bitter pill. They have done exactly the right thing by making a public announcement. This is not their fault and by choosing not to hide it they are demonstrating strong ethics in a tough situation.

I would much rather have my PHI with a company that will tell me when something like this happens rather than one that makes me “feel safe” by telling me nothing. I am a big fan of “the devil that you know”.

It bears mentioning that this is a real threat, rather than the dubious "lost laptop" problem. I have had a laptop with patient data stolen, but thanks to GPG, I had nothing to worry about. Laptops are easy to steal and easy to fence. Thankfully, there is no way for the average criminal to even know that there is potentially valuable PHI on a laptop when they steal it out of the back of a car. It is much more likely that the operating system will be reinstalled from scratch by a fence to ensure that there is no way the laptop can be traced back to the original owner.

That means that when a laptop containing PHI is stolen, 99 times out of 100, there is nothing to worry about.

The 1 out of 100 times is when the thief already knows the PHI is on the laptop. Which is to say that a healthcare organization is the subject of a focused attack. Other security researchers are already guessing at how the blackmailer got the data. Here is my guess:

  • 65% chance this is an inside job. A rogue former or current employee is getting revenge.
  • 25% chance this is a foreign hacker. Siciliano (from the link above) correctly points out that only a foreigner would think that a US company would not go straight to the FBI after being blackmailed. A US hacker would have just sold the social security numbers to identity thieves.
  • 5% chance it's a US hacker.
  • 3% chance it was a stolen laptop.
  • 2% chance something else happened.

It will be interesting to see how this plays out. If they catch the blackmailer or otherwise discover the attack vector, it will be informative for people like me, who obsess over the best way to protect health information.

If this happened because a laptop was stolen, I will eat my shorts.

-FT

Trust but Verify and Trust but Fork

I have enjoyed participating in the National Dialogue about Health IT. One of the challenges put forward to my suggestion that decision makers should insist on FOSS in Health IT was the following comment:

 in terms of privacy, there’s nothing inherent in FOSS that makes it superior to all proprietary products.

I have discussed this issue before, mostly when discussing HealthVault, but my comments have been spread out over several articles.

There is an inherent benefit to privacy, confidentiality and security for FOSS health IT systems.

There is another idea on the National Dialogue site that I thought was useful. It separates the concepts of privacy and confidentiality. Most people blur the concepts of privacy, security and confidentiality and talk about them in the same mouthful. For now I will consider "privacy" to be the ability to control who gets to see your data, although my points apply to confidentiality and security as well.

FOSS health IT systems are an inherently better way to respect privacy because they support "trust-but-verify", while proprietary systems support only trust.

The only way to know what a program is doing is to read the most human-readable version of that program, which is typically called source code. There are countless examples of programs doing things other than what they appear to be doing: viruses, spyware, monitoring features and bugs are classic examples.

When a proprietary health IT program says it respects your privacy, there is no way for a user to know directly whether this is true; he must trust the proprietary vendor. The fact that most proprietary vendors are honest is irrelevant. The trouble with dishonest people is that you cannot tell the difference between them and honest people. We cannot know which proprietary health IT vendors are respecting privacy and which are not. Also, some of the same large organizations that you might normally "trust" in fact have a very poor privacy record, Microsoft being the best example.

So does HealthVault respect privacy? Probably. But there is no way to be sure without reading the code.

Does Dossia respect privacy? Probably. But we can check by auditing the source code of Indivo, because Dossia is based on the FOSS Indivo project. Suppose that you believe that Indivo does not do a sufficient job of respecting privacy, or you find a back door (unlikely). You can fork the code, remove or change the offending portions of Indivo, and then run your own Indivo server with the privacy features that you want.

FOSS supports both trust-but-verify and trust-but-fork, which is the only way to be absolutely certain that privacy is maintained.

Therefore FOSS does have a fundamental advantage over proprietary software with regards to privacy concerns.

-FT

Security in Medical Devices, implications

There are more and more examples of how standard hacking techniques apply in healthcare, with serious consequences. Recent issues include RFID hacking and interference issues.

Recently, a talk at BlackHat regarding hacking medical devices, including pacemakers, has begun appearing in popular blogs.

What is most dangerous about this is not actually the hack itself, but the fact that the hacks could become widespread. Think about it: there is no real benefit to a hacker in simply killing a person. It is a serious crime, and unless there is something to gain by doing it, it is unlikely to generate new interest among blackhat hackers.

Now that the information regarding the vulnerability is in normal media channels, a Cracker (another name for a blackhat hacker) can blackmail a person with a pacemaker: "Give me ten thousand dollars or I will remotely shut down your heart." Before, a victim would say "that's impossible" and not worry about it. Now they go to Google and discover that it is possible. Both Victim and Cracker are aware that the only way for the Cracker to prove to the Victim that he has the ability to stop the Victim's heart is for the Cracker to actually kill the Victim. Now the Victim is wondering, "Can I afford to take this chance?"

If this happens even once in the real world, you will see a slew of social engineering attacks with this threat as the basis. A Cracker will simply threaten a hundred people with this attack and see how many will pay up. The Cracker would not even need to know how to make the hack work. All he would need is a list of people with pacemakers.

Now we get to the real implications. Where is the information about who has a pacemaker installed and who does not? Perhaps someday they will invent “pacemaker wardriving” but for the time being, the easiest way to get a list of people with pacemakers is to hack into someone’s Electronic Health Record system.

Currently, the healthcare industry under-invests in information technology. However, with these new vulnerabilities, the value of personal health information is steadily rising. Until now, the typical cracker strategy was to use identifying information inside PHI to steal someone's identity, or to use healthcare information (like sexually transmitted diseases) to blackmail someone. These new vulnerabilities increase the potential profit of hacking into an EHR, and hospitals, even large ones, do not typically have the kind of defense systems that banks usually invest in.

Have you ever considered why "the club" works? These devices are relatively easy for a determined thief to overcome. They work because when you park your BMW in a parking lot and put the club on it, there is typically another BMW in the parking lot without the club. The thief will take the car that is easier to take. The club works because of the "low-hanging fruit" principle of security. A person who has decided to take an unethical risk by stealing or cracking is basically saying, "I can tolerate this risk, because it is easier to do this than to achieve a similar economic gain by legitimate means." Perhaps some are thrill-seekers, but typically people who break the rules for profit are lazy. The "low-hanging fruit" principle might be phrased: "A thief or cracker will always try the easiest way to profit unethically first."

As the number of ways to profit from PHI goes up, hospitals and practices will become the low-hanging fruit. This is a problem because your small country doctor is already being squeezed by third-party payers. He does not feel that he has the money to invest in proper electronic security measures, and he does not actually have the skills to tell what legitimate security measures would be in any case. Information technology mom-and-pop-ism is rampant in healthcare. The "computer guy" for many doctors is the nephew of the office manager; he might be the smartest kid in 9th grade, but he has no idea how to properly secure PHI. Healthcare institutions have always been easy to hack, but now they are becoming profitable to hack. They are becoming "low-hanging fruit".

Concern for these kinds of issues will do little but grow.

-FT

Update: Jon Bartels wrote to mention that Chinese researchers have pushed this concept further.