Health Internet

For whatever reason people simply do not get what the NHIN is and what its implications are.

This feels like a repeat of what happened to me more than a year ago.

The NHIN (which has been rebranded the "Nationwide Health Information Network," or NwHIN, from "National Health Information Network" in response to silly trademark issues) is going to be the foundation of a new Health Internet. The US Government wisely will not call it that, because of the paranoid privacy histrionics that this would induce, but nonetheless it -is- a Health Internet. The definition of the word "internet" is: any set of computer networks that communicate using the Internet Protocol. The Internet is simply the largest global internet.

The Health Internet, by extension, is the "largest internet devoted to healthcare data."

Here are the basic features of the Health Internet:

  • You will be able to ’email’ your doctor.
  • Your doctor will be able to ’email’ you.
  • Faxing health records will go away.
  • Eventually, your medical records will auto-magically follow you around the country, appearing where they are most needed at a moment's notice.
  • All of this will be done securely and in a way that fully supports people's legitimate need for privacy.
  • New innovative services will appear, that leverage the Health Internet data channel to create applications that were previously unthinkable.

How is this being accomplished? Simple as one, two, three:

  1. The EHR stimulus money will be given out in response to “meaningful use” standards which include interoperability requirements, which will require connecting and sharing data, without specifying a specific technology stack. These standards will become more and more pronounced as time moves forward.
  2. ONC is supporting the development of two Open Source projects that will serve as reference implementations of the two NHIN protocols: IHE and the newly formed Direct protocol. Those projects are the IHE projects (the CONNECT Project if you are a federal agency and the Aurion Project if you are anyone else, updated 8-19-11) and the Direct Project (Direct). I recommend you watch this OSCON video for a basic explanation of these two projects.
  3. The Federal Government will expose its considerable health data resources (i.e. DoD and the VA) using these two protocols. Agencies which accept the reporting of meaningful use measures will accept that reporting using one or both of these two protocols.

So are these protocols being mandated? No. But then neither were HTTP, SMTP, SSH, SSL, or DNS. It's just what everyone uses. The VA has the single largest pile of detailed health records in the history of mankind. They will be available using either CONNECT-compatible IHE or the Direct-compatible Direct protocol. They will probably not be available using your favorite vendor's idea of a proprietary health data exchange protocol.

This is going to happen. Hell, it already is happening. These reference implementations are entirely Open Source. They are designed to eventually handle the cases of communicating across national boundaries. This is going to be the start of an international Health Internet: first with Canada, Mexico, and nations promoting medical tourism, and then everyone else. It will take time. Adoption might be slow. But there will be a Health Internet, and it will use these protocols. The only questions are how long it will take to be adopted, and how long it will take people to stop talking in the abstract about the issues of Health Data Exchange.

This is happening. Adjust.


Open Letter to the tiger team


This is an open letter to the tiger team from the HIT Policy Committee (HITPC), as well as the committee generally. Recently a group from HITPC gave recommendations to the NHIN Direct project regarding which protocol it should choose. I realized as I heard the comments that this group was reading the NHIN Direct Security and Trust Working Group's latest consensus document. I am on that working group and I wrote a considerable portion of that document (most of the Intent section). I was both startled and flattered that the HITPC group was using that document as the basis for their evaluation of the protocol implementations. In fact, they eliminated the XMPP project from consideration because they felt that the SASL authentication that the XMPP implementation will use was incompatible with the following requirement from the consensus document:

2.1 Use of x.509 Certificates. The NHIN Direct protocol relies on agreement that possession of the private key of an x.509 certificate with a particular subject assures compliance of the bearer with a set of arbitrary policies as defined by the issuing authority of the certificate. For example, Verisign assures that bearers of their “extended validation” certificates have been validated according to their official “Certification Practice Statement.” Certificates can be used in many ways, but NHIN Direct relies on the embedded subject and issuing chain as indicated in the following points. Specific implementations may choose to go beyond these basic requirements.

The HITPC team felt that SASL, which does not typically use certs for authentication, did not meet this requirement. As it turns out, the XMPP implementation team believes that SASL can be used with x.509 certs and therefore should not be excluded from consideration. That is a simple question of fact and I do not know the answer, but in reality it should not much matter (I will get into that later).

Even more troubling was the assessment of SMTP. The HITPC reviewers considered an all-SMTP network problematic because it allowed for the use of clients that present users with opportunities to make security mistakes. They felt that simpler tools should be used that prevent these types of mistakes from being made.

None of these were unreasonable comments, given that they were reading all of the documents on the NHIN Direct site in parallel.

They also have a strong preference for simplicity. Of course, simplicity is very hard to define, and it is obvious that while everyone agrees that security issues are easier to manage with simpler systems,  we disagree about what simplicity means.

As I listened to the call, hearing for the first time how others were seeing my work, and the work of the rest of the NHIN Direct S&T working group, I realized that there were some gaps. Ironically, this is going to be primarily a discussion of what did not make it into the final proposal. Most of the difficult debates that we held in the S&T group involved two divergent goals: keep reasonable architecture options open to the implementation teams, and the consideration that security decisions that were reasonable 90% of the time were still unreasonable 10% of the time. We could not exclude end users (or implementation paths) by making technology decisions in ways that 10% of the users could not accept. 10% does not sound like much, but if you make 10 decisions and each of those decisions serves to exclude 10% of the end users… well, that could be a lot of exclusion. We went around and around, and mostly the result is that we settled on a smaller and smaller set of things we -had- to have to make a flexible trust architecture that would support lots of distributed requirements. This is not a "compromise" position, but a position of strength. Being able to define many valid sub-policies is critical for things like meeting state-level legal requirements. To quote Sean Nolan:

"we've created an infrastructure that can with configuration easily not just fit in multiple policy environs, but in multiple policy environs SIMULTANEOUSLY."

That is quite an achievement, but we should be clear about the options we are leaving open to others. I am particularly comfortable with the approach we are taking because it is strikingly similar to the model I had previously created in the HealthQuilt model. I like the term HealthQuilt because it acknowledges the basic elements of the problem: "Start out different, make connections where you can, end with a pleasing result."

But we also assumed that someone else would be answering lots of questions that we did not. Most notably we could not agree on:

How to have many CAs?

Our thinking was that you needed the tree structure offered by the CA model so that you could simplify trust decisions. We rejected notions of purely peer-to-peer trust (like gpg/pgp) because it would mean that end users would have to make frequent trust decisions, increasing the probability that they would get one wrong. Instead, if you trust the root cert of a CA, then you can trust everyone who is obeying the policies of that CA. So X509 generally gave us the ability to make aggregated trust decisions, but we did not decide on what valid CA architectures would look like. Here are some different X509 worldviews that at least some of us thought might be valid models:

  • The one-ring-to-rule-them CA model. There is one NHIN policy and one NHIN CA, and to be on the NHIN you have to have some relationship with that CA. This is really simple, but it does not support serious policy disagreements. We doubt this would be adopted. The cost of certs becomes an HHS expense item.
  • The browser model. The NHIN would choose the list of CAs commonly distributed in web browsers, and then people could import that list and get certs from any of those CAs. This gives a big market of CAs to buy from, but these CAs are frequently web-oriented. There is wide variance in the cost of browser CA certificates.
  • The no-CA-at-all model. People who knew they would only be trusting a small number of other end nodes could just choose to import their public certs directly. This would enable very limited communication, but that might be exactly what some organizations want. Note that this also supports the use of self-signed certificates. This will only work in certain small environments, but it will be critical for certain paranoid users. This solution is free.
  • The government-endorsed CAs. Some people feel that CAs already approved by the ICAM Trust Framework should be used. This gives a very NISTy feel to the process, but the requirements for ICAM might exclude some solutions. ICAM certs are cheap (around $100 a year) assuming you only need a few of them.
  • The peer-to-peer assurance CA. This is a CA that provides an unlimited number of certificates to assured individuals at no cost. Becoming assured means that other, already assured individuals must meet you face to face and check your government IDs. For full assurance, at least three people must complete that process. This allows for an unlimited number of costless certs backed by a level of assurance that is otherwise extremely expensive. The code is open source, and the processes to run it are open. This is essentially an "open" approach to the CA problem. (I like this one best personally.)
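Whatever CA architecture wins, the aggregated-trust idea underneath all of these models is the same: trust an anchor cert, and you implicitly trust everything that chains up to it. Here is a minimal sketch of that chain walk, using invented names and no real cryptography; an actual implementation would also verify x.509 signatures, expiry dates, and revocation status.

```python
# Hypothetical sketch of aggregated trust via CA chains.
# Certs are modeled as plain records; real code would use x.509.

from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    subject: str
    issuer: str   # subject of the issuing cert; equals subject if self-signed

def chains_to_anchor(cert, certs_by_subject, trusted_anchors, max_depth=10):
    """Walk the issuer chain until we reach a trusted anchor (or give up)."""
    current = cert
    for _ in range(max_depth):
        if current.subject in trusted_anchors:
            return True
        if current.issuer == current.subject:   # untrusted self-signed root
            return False
        current = certs_by_subject.get(current.issuer)
        if current is None:                     # broken chain
            return False
    return False

# One trusted anchor covers every cert it (transitively) issued.
root = Cert("NHIN Root CA", "NHIN Root CA")
state = Cert("State HIE CA", "NHIN Root CA")
clinic = Cert("Smallville Clinic", "State HIE CA")
rogue = Cert("Rogue Org", "Rogue Org")

directory = {c.subject: c for c in (root, state, clinic, rogue)}
anchors = {"NHIN Root CA"}

print(chains_to_anchor(clinic, directory, anchors))  # True
print(chains_to_anchor(rogue, directory, anchors))   # False
```

The point of the models above is simply who gets to put entries in `anchors`, and how much each entry costs.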

Individual vs group cert debate?

If you are going to go with any CA model other than "one ring to rule them," then you are saying that the trust relationships inside the CAs will need to be managed by end users. Given that, some felt that we should be providing individual certs/keys to individual people. Others suggested that we should support one cert per organization. Others said that groups like "" should be supported with a sub-group cert.

In the end we decided not to try to define this issue at all. That means that sometimes a message from a given address could be signed with a cert that makes it clear that only John Smith could have created the message, or by a cert that could have been used by anyone at the organization, or by a cert that some subgroup of people might have had access to for signing.

Many of us felt that flexibility in cert to address mappings was a good thing, since it would allow us to move towards greater accountability as implementations became better and better at the notoriously difficult cert management problem, while allowing simpler models to work initially. However if you have a CA model where certs are expensive, then it will be difficult to move towards greater accountability as organizations choose single certificates for cost reasons.

Mutual TLS vs TLS vs Protocol encryption?

We could not agree on whether and how to mandate TLS/SSL. This is what we did say:

2.6 Encryption. NHIN Direct messages sent over unsecured channels must be protected by standard encryption techniques using key material from the recipient’s valid, non-expired, non-revoked public certificate inheriting up to a configured Anchor certificate per 2.2. Normally this will mean symmetric encryption with key exchange encrypted with PKI. Implementations must also be able to ensure that source and destination endpoint addresses used for routing purposes are not disclosed in transit.

We did this to enable flexibility. The only thing we explicitly forbade was failing to use encryption to fully protect the addressing component. So no message-only encryption leaving the addresses exposed.
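To make requirement 2.6 concrete, here is a toy sketch of the envelope structure it implies: the whole message, addressing included, is encrypted under a per-message symmetric key, and that key is wrapped for the recipient. The ciphers below are loud placeholders (XOR and byte reversal), not real cryptography; an actual implementation would use S/MIME, wrapping the content key with the recipient's public key from their x.509 certificate.

```python
# Toy sketch of requirement 2.6. The "ciphers" here are placeholders,
# NOT real cryptography -- they only illustrate the envelope structure.

import json
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Placeholder for a real symmetric cipher; XOR is its own inverse.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def seal(message: dict, wrap_key_for_recipient) -> dict:
    content_key = secrets.token_bytes(16)       # per-message symmetric key
    plaintext = json.dumps(message).encode()
    return {
        # Stand-in for wrapping the key with the recipient's public key:
        "wrapped_key": wrap_key_for_recipient(content_key),
        "ciphertext": xor_cipher(plaintext, content_key),
        # Note: no "to"/"from" fields appear out here in the clear.
    }

def unseal(envelope: dict, unwrap_key) -> dict:
    content_key = unwrap_key(envelope["wrapped_key"])
    return json.loads(xor_cipher(envelope["ciphertext"], content_key))

# Toy "key wrapping": byte reversal, standing in for RSA key exchange.
msg = {"to": "drsmith@direct.example.org",
       "from": "clinic@direct.example.net",
       "body": "Referral attached"}
env = seal(msg, lambda k: k[::-1])
assert "to" not in env and "from" not in env  # addresses not disclosed in transit
print(unseal(env, lambda k: k[::-1]) == msg)
```

The structural point is the last assertion: nothing routable about sender or recipient survives outside the ciphertext.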

This is a hugely complex issue. In an ideal world, we would have liked to enforce mutual TLS, where both the system initiating the connection and the system receiving it would need to provide certs. Mutual TLS would virtually eliminate spam/DDoS attacks because to even initiate a connection you would need mutually trusted public certs.

However, there are several practical limitations to this. First, TLS does not support virtual hosting (using more than one domain with only one IP) without the TLS SNI extension. SNI is well supported in servers but poorly supported in browsers and client TLS implementations.

Further, only one cert can be presented by the server side of the connection, or at least that is what we have been led to believe; I have not been able to create a "dual-signed" public cert in my own testing. That means that in order to have multiple certs per server you have to have multiple ports open.

SRV records address both the limitation with virtual hosting and the need to present multiple certs on the server side, because SRV DNS records allow you to define a whole series of port and host combinations for any given TCP service. However, MX records, which provide the same fail-over capability for SMTP, do not allow you to specify a port. You can implement SMTP using SRV records, but that is a non-standard configuration, and the argument for that protocol is generally that it is well understood and easy to configure.
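The mechanics of why SRV records help are easy to show: one service name maps to several host:port targets, each of which can present its own certificate. Here is a sketch of target selection roughly following RFC 2782 (lowest priority wins; within a priority group, pick randomly in proportion to weight). The record values are invented.

```python
# Sketch of SRV target selection per RFC 2782. One service name can
# resolve to many host:port pairs, sidestepping the one-cert-per-port
# problem described above. Record values below are invented examples.

import random

# (priority, weight, port, target) -- the SRV record fields
srv_records = [
    (10, 60, 5269, "xmpp-a.example.org"),
    (10, 40, 5270, "xmpp-b.example.org"),
    (20, 0, 5269, "backup.example.org"),   # only used if priority 10 fails
]

def pick_target(records, rng=random):
    """Lowest priority wins; within that group, weighted random choice."""
    best = min(r[0] for r in records)
    group = [r for r in records if r[0] == best]
    total = sum(r[1] for r in group)
    if total == 0:
        return group[0][3], group[0][2]
    roll = rng.uniform(0, total)
    for priority, weight, port, target in group:
        roll -= weight
        if roll <= 0:
            return target, port
    return group[-1][3], group[-1][2]

host, port = pick_target(srv_records)
print(host, port)  # one of the two priority-10 targets
```

MX records, by contrast, carry only a priority and a hostname, which is why standard SMTP cannot express the multi-port arrangement above.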

Ironically, only the XMPP protocol supports SRV out of the box and therefore enables a much higher level of default security in a commonly understood configuration. With this high level of TLS handshaking, you can argue that only message-content encryption and message-content signing require certs beyond the TLS layer, making the debate about SASL somewhat irrelevant. From a security perspective, you actually rejected the protocol with the best combination of security + availability + simplicity.

No assumption of configuration?

You rejected SMTP-only because you assumed that end users would be able to configure their NHIN Direct mail clients directly. Ironically, we did not specifically forbid things like that, because we viewed it as a "policy" decision. But the fact that we did not cover it does not imply that SMTP should be deployed in a way that allows end users to make security configuration decisions. That is obviously a bad idea.

No one ever assumed that the right model for an SMTP end deployment would be a doctor installing a cert in his current Microsoft Outlook and then selectively using that cert to send some messages over the NHIN Direct network.

We were assuming SMTP deployments that present the user with options that exclude frequent security decisions. This might be as simple as saying, "When you click this shortcut, Outlook will open and you can send NHIN Direct messages; when you click this shortcut, Outlook will open and you can send email messages." The user might try to send NHIN Direct messages with the email client, or vice versa, but when they make that mistake (which is a mistake that -will- happen no matter what protocol or interfaces are chosen) the respective client will simply refuse to send to the wrong network.
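The "refuse to send to the wrong network" behavior can be sketched in a few lines: the client or HISP checks each recipient against the set of domains it holds trust anchors for, rather than asking the user to make a per-message security decision. The domain names here are invented.

```python
# Sketch of an outbound gate that removes the security decision from the
# user: mail either routes to the trusted network or is refused outright.
# Domain names are invented for illustration.

TRUSTED_DIRECT_DOMAINS = {"direct.example.org", "direct.example.net"}

def route_message(recipient: str) -> str:
    domain = recipient.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DIRECT_DOMAINS:
        return "send via NHIN Direct"
    return "refuse: recipient is not on the trusted network"

print(route_message("drsmith@direct.example.org"))  # send via NHIN Direct
print(route_message("drsmith@gmail.com"))           # refuse: ...
```

Whether this check lives in the mail client, the HISP, or both is exactly the kind of local policy decision the S&T group left open.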

There are 16 different ways to enforce this both from a technology and a policy perspective, but we did not try to do that, because we were leaving those decisions up to local policy makers, HHS, and you.

You assumed that there were security implications to choosing SMTP that are simply not there.

On Simplicity

Lastly, I would like to point out that your recommendation was actually problematically not simple. We in the S&T group spent lots of time looking at the problem of security architecture from the perspective of the four implementation groups. For each of them we focused only on the security of the core protocol, not on the security of the "HISP-to-user" portion. We have carefully evaluated the implications of each of these protocols from that perspective. We have been assuming that the HISP-to-user connection might use any of a wide range of reasonable authentication, encryption, and protocol combinations. Our responsibility was only to secure the connection between nodes.

With that limitation, you have chosen just "REST" as the implementation choice, precisely because you see it as a "simple" way to develop the core. The REST team has done some good work, and I think that it is a reasonable protocol option. But I am baffled that you see it as "simple."

If we choose REST, we have no message exchange protocol; we have a software development approach, and we must build a message exchange protocol out of that development tool. With SMTP, XMPP, and to a lesser extent IHE, you are configuring software that already exists to perform in an agreed-upon secure fashion. There are distinct advantages to the "build it" approach, but from a security perspective, simplicity is not one of them. I think you are underestimating the complexity of messaging generally. You have to sort out things like:

  • store and forward,
  • compatible availability schemes,
  • message validity checking (spam handling),
  • delivery status notifications,
  • character set handling,
  • bounce messages.
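To give a sense of scale, here is a minimal sketch of just one item from that list: store and forward with delivery status tracking. SMTP servers provide this (and everything else on the list) out of the box; a from-scratch REST implementation would have to build each piece.

```python
# Minimal sketch of store-and-forward with delivery status -- one of
# the many messaging features a new REST protocol would have to rebuild.

from collections import deque

class StoreAndForwardQueue:
    def __init__(self):
        self.pending = deque()
        self.status = {}            # message id -> delivery status

    def submit(self, msg_id, recipient, body):
        self.pending.append((msg_id, recipient, body))
        self.status[msg_id] = "queued"

    def attempt_delivery(self, online_recipients):
        """Deliver what we can; keep the rest queued for a later retry."""
        still_pending = deque()
        while self.pending:
            msg_id, recipient, body = self.pending.popleft()
            if recipient in online_recipients:
                self.status[msg_id] = "delivered"
            else:
                self.status[msg_id] = "deferred"
                still_pending.append((msg_id, recipient, body))
        self.pending = still_pending

q = StoreAndForwardQueue()
q.submit("m1", "drsmith", "lab results")
q.submit("m2", "drjones", "referral")
q.attempt_delivery(online_recipients={"drsmith"})
print(q.status)  # {'m1': 'delivered', 'm2': 'deferred'}
```

Even this toy omits retry backoff, bounce generation after repeated failure, and durable storage; each of those is another thing to design, build, and security-review.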

The REST implementation will have to either build all of that, or borrow it from SMTP implementations much the same way they now borrow S/MIME. I would encourage you to look at the "related RFCs" for a small taste of all the messaging-related problems that the SMTP protocol has grown to serve. XMPP was originally designed to eclipse the SMTP standard, so it is similarly broad in scope and functionality. Both SMTP and XMPP have had extensive security analysis, and multiple implementations have had vulnerabilities found and patched. IHE actually takes a more limited approach to what a message can be about and what it can do. It is not trying to be a generalized messaging protocol, and is arguably better at patient-oriented messaging and worse at generalized messaging as a result.

But in all three cases (XMPP, SMTP, and IHE) you are talking about configuring a secure messaging infrastructure instead of building one. The notion that REST is "faster to develop" with is largely irrelevant. It's like saying, "We have three options: Windows, Linux, or writing a new operating system in Python, because Python is simpler than C." When put that way, you can see the deeply problematic notion of "simplicity" that you are putting forward.

All three of the other protocols, at least from the perspective of security, are easier to account for because the platforms are known quantities. A REST implementation will be more difficult to secure because you are trying to secure totally new software implementing a totally new protocol.

I want to be clear: I am not arguing against REST as an implementation choice. The central advantage of a REST implementation is that you can focus the implementation on solving the specific use cases of meaningful use. You can have a little less focus on messaging generally, simplifying the problem of a new protocol, and focus on features that directly address meaningful use. It's a smaller target, and that could be valuable. It's like a midway point between the generalized messaging approach found in XMPP and SMTP and the too-specific, single-patient-oriented IHE messaging protocol.

But if you do choose REST, do not do so thinking that it is the “simple” protocol choice.


Beyond the security issues, there are good reasons to prefer any of the implementation protocols. I wanted to be clear that we are expecting your group to have things to say about the things we did not decide (or at least to know what it means to say nothing), and to make certain that nothing we wrote in the S&T group was biasing you for or against any particular implementation, all of which are basically compatible with what our group has done.



What protocol for NHIN Direct?

[Update 6-10-10: added new category "False Interface Potential" based on a comment from David Tao. Added "End User Perception" based on a comment from Erik Pupo. Removed controversial portions; full explanation at the end.]

Currently, the NHIN Direct project is having a debate around what protocol to use.

NHIN Direct is supposed to enable something that “feels like email” to clinical and patient users, but in fact allows for the secure transfer of PHI over the Internet between clinical and patient parties. Essentially, a secure network for PHI messages.

But how to do that? We have to choose a technology suite in order to make that happen and it is not clear which protocol we should be choosing.

Yesterday, I listened to a call where the various implementation teams made the “case” for their particular protocol.

First let me say that I have great sympathy for these implementers; all of them have put lots of work and thought into their particular implementation approach, and all of them have made a good case. However, Open Source is a meritocracy. We have to decide on one "best" approach for NHIN Direct, and not everyone will get what they want. In competitions like this, we might have at least one very happy implementation group, but we are sure to have several approaches that are abandoned. Those people who have worked hard on implementations that will not be used are in a difficult social situation. They will inevitably feel that the group has chosen poorly, and that their work was overlooked. But at the same time the group will be asking for their help in unifying the project behind a single approach (if not a single protocol). As a developer, being in that situation and being rejected really hurts. I have been in the Open Source community long enough to have been on both the "winning" side and the "losing" side of this several times. It is better to win.

It is critical that the larger group show its appreciation for the work of everyone, even as it rejects most of the approaches. Contributing to an Open Source project like this is expensive emotionally and financially, and just because some implementations will "lose" does not mean that they were valueless. In fact, as we abandon implementation approaches, the project members will do well to recognize the abandoned approaches as "templates for improvement" and/or "concessions to expediency." "Right" is not really a position anyone gets to take.

To all of the implementation groups: Thank you. Your work, whether embraced or not, is impressive. As you can see later, every approach has merits not found in other approaches. This will be a tough decision.


XMPP is basically a chat protocol.

Advantages: Chat has some fundamental advantages over email (SMTP). When a user is offline in a chat context, you can still send them messages, so chat “falls back to” a stored message system like email. But when a user is online, that information can be selectively broadcast to other users, who can then send messages knowing that the person is right there on the keyboard. XMPP was designed from the ground up to handle real-time messaging, and so it might be more appropriate if you consider situations, (i.e. surgery) where you need to transfer messages and attachments and get information back in near-real-time. In fact you might consider an XMPP system as “more compatible” with a tele-medicine approach for this reason. There are also some interesting “subscribe/broadcast” models that XMPP has built into the protocol. There are lots of solid implementations of XMPP currently in the market and the mature ones should be sufficiently configurable to handle the complex security configuration that NHIN Direct will require. So there is a big pile of existing software available and most of it is Open Source.
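The presence-based fallback described above is worth making concrete: the sender does not have to choose between real-time and stored delivery, because the protocol chooses for them. This is a hypothetical sketch with invented names; a real deployment would rely on an XMPP server's built-in presence tracking and offline-message store.

```python
# Sketch of XMPP-style presence-aware delivery: push in real time when
# the recipient is online, otherwise fall back to store-and-forward.
# Names are invented; a real XMPP server does this natively.

def deliver(message, recipient, presence, offline_store):
    if presence.get(recipient) == "online":
        return f"pushed to {recipient} in real time"
    offline_store.setdefault(recipient, []).append(message)
    return f"stored for {recipient} until they reconnect"

store = {}
print(deliver("consult request", "surgeon", {"surgeon": "online"}, store))
print(deliver("lab results", "drsmith", {}, store))
print(store)  # {'drsmith': ['lab results']}
```

This is the property that makes XMPP plausible for near-real-time cases like surgery consults while still degrading gracefully to email-like behavior.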


Disadvantages: Not many people understand XMPP and it is not as widely used as email. It is not out-of-the-box compatible with NHIN Exchange.


  • New Implementation Coding Required: almost none: 10
  • Open Source implementations available: Lots: 10
  • Existing experts: some but not many: 5
  • Compatibility with NHIN Exchange: must be built from scratch: 1
  • Future Flexibility: the protocol is the protocol, new stuff has to go on top: 3
  • Mature Standard: Yes huge chat networks already running: 10
  • End User Interface Familiarity: Lots of people use chat: 7
  • False Interface Potential: Might make users think “this is just chat”: 2
  • End User Perception: You are using chat for secure PHI exchange? Wrong perception of course, but still not good: 2
  • Cool feature bonus: +5 for enabling real-time chat

Total: 55


REST is a web-based software design philosophy.

Advantages: REST lets you build really complex things really quickly. As proof, the REST implementation basically already has a working prototype. REST is extremely powerful and will allow rapid iteration of future versions. As we have future requirements we can build them easily.

Disadvantages: You are building a messaging infrastructure from scratch. You will have to re-invent strategies for reliability, message queuing, and countless other things that you do not realize that SMTP or XMPP are doing for you. It is not out-of-the-box compatible with NHIN Exchange.


  • New Implementation Coding Required: almost everything: 1
  • Open Source implementations available: Lots of good REST libraries: 10
  • Existing experts: no experts in what does not yet exist: 1
  • Compatibility with NHIN Exchange: must be built from scratch: 1
  • Future Flexibility: Allows for very rapid iteration: 10
  • Mature Standard: While REST is mature, the application design on top is not at all; it will be totally new: 1
  • End User Interface Familiarity: Interfaces will likely mimic email but they will be untested and totally new: 3
  • False Interface Potential: Because the interfaces will be new, notions of PHI transfer can be built in: 5
  • End User Perception: Essentially none, its reputation will stand on itself, and because we will do a good job, this is not a disadvantage: 10

Total: 42


IHE is a set of profiles for exchanging health information. It is designed from the ground up to handle the complexities of health information exchange. The relevant profiles are XDR and XDM.

Advantages: Using the IHE messaging standard means that NHIN Direct users would be participating in the more advanced NHIN Exchange; they just would not be using all of its power. Mere push messaging has some fundamental limitations, and fully participating in the larger NHIN Exchange will allow providers to embrace larger portions of the more advanced health information network when those features are needed. Mirth, Open Health Tools, and MOSS all provide excellent Open Source implementations of these protocols. IHE provides a formalized mechanism, in an international environment, for improving and updating the standard. The largest EHR vendors already have support for IHE.

Disadvantages: IHE is a moving target. The NHIN CONNECT project is already handling the considerable complexities of mapping to those standards and profiles even as they are finalized. There is already tension there between what CONNECT actually does and what the standards say should be done. The whole point of NHIN Direct is to provide a much simpler model of Health Data exchange than is available with NHIN Exchange. While the big EHR vendors typically have IHE compliant implementations, it is only in the last few years that they have been successfully using them to connect to each other at the Connectathon. Given that, it is something of a stretch to hold out IHE as a fully mature standard. [Update 6-10-10 section removed]

Discussion: The NHIN Exchange will have direct messaging through IHE protocols. Unless you want an AOL/CompuServe (or Twitter/ style) messaging split between the NHIN Direct and NHIN Exchange networks, there will have to be a bridge between the messaging protocols. Given that bridge, the real question becomes: "Is there any reason not to use IHE messaging as a backbone and some other protocol at the edges?" (which is essentially what the IHE team is proposing). From what I could tell, the critical issue is how much data is required to be included as a minimum in any IHE message. Apparently, it will break down if there is not a patient id of some kind in the message (I do not mean a Social Security number, but something assigned locally by a computer program). This excludes people who do not have a list of patient ids to send from using NHIN Direct to send messages (they can still receive). This essentially makes EHR integration a requirement, excluding those without a computer system at all, or who are using an antiquated practice management system whose patient ids would be difficult to access. So the real question is: do we use IHE as the core technology or merely bridge to it?
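The patient-id constraint can be shown in miniature: a sender without a locally assigned patient id simply has nothing to put in the required field, and the message cannot be formed. The field names below are invented for illustration, not taken from the XDR/XDM specifications.

```python
# Sketch of the minimum-data constraint described above: an IHE-style
# message without a locally assigned patient id cannot be sent, which
# is what makes EHR integration a de facto requirement. Field names
# are invented for illustration.

def can_send_via_ihe(message: dict) -> bool:
    """A send is only possible when a local patient id is present."""
    return bool(message.get("patient_id"))

ehr_message = {"patient_id": "MRN-004512", "body": "discharge summary"}
paper_office_message = {"body": "discharge summary"}  # no local id exists

print(can_send_via_ihe(ehr_message))           # True
print(can_send_via_ihe(paper_office_message))  # False
```

A bridge architecture would synthesize or look up that id at the edge, which is exactly the "core vs bridge" question posed above.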

  • New Implementation Coding Required: Everyone except the large vendors will have to code or adopt Open Source libraries: 5
  • Open Source implementations available: Several very promising projects already in live use: 10
  • Existing experts: IHE is not a well-know protocol: 3
  • Compatibility with NHIN Exchange: it is identical: 10
  • Future Flexibility: There is a formal process in place, but it is hardly fast; it has been going for decades: 4
  • Mature Standard: Still in flux: 3
  • End User Interface Familiarity: Very few clinicians use IHE now, so the interfaces are immature: 3
  • False Interface Potential: Although relatively young, the existing IHE messaging clients are designed to handle PHI notions: 7
  • End User Perception: IHE is totally “legit” they might not be happy with it, but they will respect it: 10

Total: 55


SMTP is the protocol behind email.

Advantages: There is a huge expanse of people who are familiar with this protocol stack. It has already been extensively used for this purpose and in this way (with S/MIME). It is the simplest known-good solution. It is the protocol to beat. Plenty of Open Source implementations. Super mature standard that has withstood decades of security scrutiny.

Disadvantages: Email is great and email sucks. It does not have some features of XMPP and IHE, and it is such a broad protocol that there are many options, making predictions about how it will be used difficult. It is not out-of-the-box compatible with NHIN Exchange.

  • New Implementation Coding Required: Everything is done: 10
  • Open Source implementations available: old and mature projects to most anything: 10
  • Existing experts: You can’t swing a cat without hitting an SMTP expert: 10
  • Compatibility with NHIN Exchange: must be built from scratch: 1
  • Future Flexibility: the protocol is the protocol, new stuff has to go on top: 3
  • Mature Standard: Is there a more mature standard anywhere?: 10
  • End User Interface Familiarity: Many people familiar with many different mature interfaces: 10
  • False Interface Potential: Users may make the mistake of not seeing that this is -not- just email while using traditional clients: 1
  • End User Perception: they might be a little uncomfortable with “email” but if you say “secure email” that might work: 8

Total: 63
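To give a feel for how little new machinery the SMTP option needs, here is a sketch, using only the Python standard library, of the multipart/signed MIME structure that S/MIME messages use. The signature bytes are a placeholder: producing a real PKCS#7 signature requires a crypto library outside the stdlib, so this only shows the envelope shape.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

def build_smime_envelope(body_text, signature_der):
    # S/MIME signed mail is ordinary MIME: a multipart/signed container
    # holding the clinical payload plus a detached PKCS#7 signature part.
    msg = MIMEMultipart(
        "signed",
        protocol="application/pkcs7-signature",
        micalg="sha-256",
    )
    msg.attach(MIMEText(body_text, "plain"))
    sig = MIMEApplication(
        signature_der,               # placeholder; a real signature would
        "pkcs7-signature",           # be produced by a crypto library
        name="smime.p7s",
    )
    sig.add_header("Content-Disposition", "attachment", filename="smime.p7s")
    msg.attach(sig)
    return msg

envelope = build_smime_envelope("CCD attached.", b"<placeholder signature bytes>")
print(envelope.get_content_type())   # multipart/signed
```

Any existing mail client or server that can carry MIME can carry this envelope unchanged, which is exactly why SMTP scores so well on maturity and existing expertise.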


It should be noted that my scoring system is somewhat arbitrary. I think my assessments are fair, but you could easily get different “totals” by choosing different items to include in the comparison. What significant categories of comparison does this ignore? Suggest something in the comments and if it makes sense I will add it.

Update 6-10-10:

So several people have commented on this blog post, and I would encourage you to read all of the comments. They are uniformly well reasoned, even when I disagree with them. I will be updating the scoring with two new categories, and I wanted to explain them.

The first is my response to a comment from David Tao regarding the user interfaces. My previous scoring was biased in favor of older, mature user interfaces. David’s comments made me realize that if a mature interface gives a clinician or a patient the wrong idea about how the system works, then even though it is a more stable interface, it can detract from the overall impact of the UI. There is an argument to be made that a “new messaging paradigm (i.e. secure PHI) should be paired with a new interface that exposes the important differences involved”. However, I cannot give IHE a “10” in this new category because the IHE messaging systems, in the context of a national exchange, are still not mature, but they might still get some benefit from being tailored to PHI exchange. I think both the REST model and the IHE model, which favor newer interface designs, should get points here, and IHE should beat REST because they have been testing their new designs for some time. This is what I mean by the “false interface potential” category.

The second is my response to a comment from Erik Pupo. Erik believes that a secure messaging infrastructure based on a chat protocol (XMPP) will not be taken as seriously as other protocols. This is somewhat sad, since I think this is an incorrect public perception about the value of a reliable protocol, but it is unwise for us to try to change that unfair public perception even as we try to encourage adoption. It’s like saying “here, use this tool that you do not take seriously…” Not a cogent message for us. So I have added the category of “End User Perception” and dinged XMPP for it because of this. I think email, as a known protocol, will also have a slight disadvantage here.

Lastly, I have removed my original criticism of EHRA. Of course, I am still right about the issue 😉 but the project managers decided that the discussion was unproductive. Given that, it seemed wise to remove that content from this page so that it could remain a “somewhat not totally subjective” resource. This post is intended to further the legitimate and focused technical debate, and not have us going around in circles about more fundamental issues that we are unlikely to agree on.


What is NHIN Direct? (alpha)

Recently a member of the FOSS Health community wrote to me:

So I’m confused by NHIN Direct. Why not simply use S/MIME or PGP email? Why five different ways of addressing people, when really only the email-address format makes sense to the average internet user?

I have been pretty confused by NHIN Direct for quite some time. But I have finally invested enough time that I can discuss the aim of the project somewhat succinctly. Note that this is essentially my re-phrasing of the NHIN Direct FAQ item “What is NHIN Direct“. To implement code, we need very exact definitions of what we will or will not do, and often that careful phrasing, while making it easier to code, makes things harder to understand. So the site above, in its current definition, reads like this:

NHIN Direct is the set of standards, policies and services that enable simple, secure transport of health information between authorized care providers. NHIN Direct enables standards-based health information exchange in support of core Stage 1 Meaningful Use measures, including communication of summary care records, referrals, discharge summaries and other clinical documents in support of continuity of care and medication reconciliation, and communication of laboratory results to providers.

Let’s rewrite that in English.

NHIN Direct is like “email for doctors”. NHIN Direct is a way for doctors, patients and other healthcare providers to send each other messages, which will feel like email messages, but are different in two important ways. First, the messages can have smart “attachments” that are essentially patient records in standardized formats (CCR/CCD/etc), and second, unlike email, the messages will be sent over a secure network in a HIPAA-compliant way. Generally NHIN Direct should replace the current use of fax and email for the transfer of medical records in the US, and provide a stepping stone to greater interoperability with the NHIN Exchange (which is much smarter than just email).

The problem is, at this stage, that you cannot really go much deeper than this high-level thinking, because the NHIN Direct project has not yet settled on which protocol it will use to enable the messaging. The current candidates are SMTP with S/MIME for handling encryption, XMPP (also with S/MIME), REST, and the IHE direct messaging profile. I am going to follow this post with a more detailed discussion of that particular decision and its implications, but until that decision is made, it is not really possible to discuss the NHIN Direct model further. In that later post I will address the first part of my friend’s question more clearly.

The second part of the question, “Why the different ways of addressing people?”, can be answered now. The NHIN Direct group had a “how do we address” discussion before we settled on an implementation protocol. That meant that the addressing specification had to be implementable using several different protocol stacks. However, the decision was made that all of the addressing mechanisms must be “transferable” into something that looks just like email. Let’s imagine that I was going to host my own NHIN Direct node. My address might look like When my doctor wanted to send me a message, that is what he would type into his messaging system. If NHIN Direct decided to go with SMTP, then my address, as it is routed across the NHIN Direct network, would look just the way my doctor typed it. But if NHIN Direct uses REST, then it might get transformed into a URI, like this: . That might look scary, but everyone using NHIN Direct can think in terms of email addresses; because the REST implementation would convert automatically, we would never even know it was happening.
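To make the addressing idea concrete, here is a toy sketch of how a REST transport might translate an email-style Direct address into a URI behind the scenes. The URI layout and the example address are entirely invented for illustration; they are not part of any NHIN Direct specification.

```python
def direct_address_to_rest_uri(address, scheme="https"):
    """Map an email-style Direct address to a hypothetical REST endpoint.

    The user only ever types user@health-domain; a REST transport could
    translate it automatically, so nobody needs to see the URI form.
    """
    local_part, health_domain = address.split("@", 1)
    # The path below is made up purely to show the shape of a translation.
    return f"{scheme}://{health_domain}/nhin/v1/recipients/{local_part}"

print(direct_address_to_rest_uri("fred.trotter@direct.example-hospital.org"))
# https://direct.example-hospital.org/nhin/v1/recipients/fred.trotter
```

The point is only that the translation is mechanical and reversible, which is what lets every user think in terms of email addresses regardless of the transport underneath.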

Eventually, I will extend this article into a better natural language description of the NHIN Direct project, which means later versions will not discuss “if we choose X protocol” but instead focus on the protocol that is actually chosen.


The Power of Push


The NHIN Direct network has been criticized for lacking relevance for health information exchange. Specifically, Latanya Sweeney has submitted testimony to Congress that has nothing good to say about either NHIN project. The paragraph I want to highlight says:

ONC’s website also describes NHIN Direct [11] as a parallel initiative underway [3]. The idea came from comments made by representatives from Microsoft and Cerner [12]. In current practice, two providers fax patient information as needed. So, the idea is to replace the fax with email that has secure channels to combat eavesdropping. There are numerous concerns with this design also. A glaring problem is its limitation. We cannot perform all meaningful uses with this system, so we will need an additional system, which begs the question: why build this system at all? For example, this design cannot reasonably retrieve allergies and medications for an unconscious patient presenting at an out-of-state emergency room (arguably a stage 1 meaningful use). Figure 2(b) summarizes concerns about these two designs. The NHIN Limited Production Exchange has serious privacy issues but more utility than NHIN Direct. On the other hand, NHIN Direct has fewer privacy issues, but insufficient utility. When combined, we realize the least of each design, providing an NHIN with limited utility and privacy concerns.

This is not the first time that the NHIN Direct push-only model has come under attack, so I wanted to discuss this. Push-only means that A can send messages to B, but B cannot automatically get data from A (that would be pulling). Email and faxes are push models. Web pages are pull models (i.e. sent to you when your browser asks for them). The benefits of both models are constantly debated in software design.

I am working on NHIN Direct, and not so much NHIN CONNECT, although I have great admiration for that project and many of my friends are working on it. My experience with NHIN Direct, which has been excellent so far, has helped me to understand just how narrow-minded and short-sighted these kinds of criticisms are.

Both projects, in so far as such a thing is possible while building technology, are taking a “policy-neutral” stance. That means that rather than defining policy in code, we try to code so that a broad range of reasonable policy decisions can be supported in a given protocol and codebase. But even under a given policy, there will be many many options to use these technologies in ways that are unexpected. So when anyone criticizes the “security and privacy features” of either CONNECT or Direct at this stage… it is typically by making certain poor assumptions about how the system will be actually used.

The most important poor assumption is to consider only standard uses of the technology when considering meaningful use. For instance, the NHIN Direct project concedes that mere usage of the NHIN Direct exchange will map to specific meaningful use requirements. Note the headers on that PDF to see that this map was contributed by my friend Will Ross and the Redwood Mednet team. In Open Source healthcare, as in Open Source generally, you see the same actors generating excellent contributions again and again. But these meaningful use mappings only consider the implications of mere use of the network, rather than considering anything that can be implemented on top of the network.

When people say the “Internet” what they usually mean is either email or the world wide web. In reality the “Internet” is a far richer technology space than this, but for most people only two of the thousands of protocols that operate over the Internet have become personally relevant: SMTP and HTTP/HTML. In fact, as I say that, many of my clinical readers might not even recognize that SMTP, and sister protocols like IMAP, are the protocols that enable email, or that HTTP/HTML enable the world wide web. Both of these protocols in turn rely on lower-level protocols, like IP/UDP/TCP/SSL/DNS, that enable the average user to surf and email.

But understand that the richness of the Internet, as we know it today, is not merely what the protocol implementations allow you to do directly (i.e. browsers let you surf the web and email clients let you read and send messages) but how those technologies are used. The web allows you to buy books on Amazon, win auctions on eBay and find dates on eHarmony. Each of those websites enables complex application functionality on top of the implementations of HTTP and HTML.

It is easier to see how the web has more to offer than merely transferring hyper-linked web pages: there is richness available at the application level that is not implied or assumed by the lower-level implementations of the enabling protocols (that would be web browsers and web servers implementing HTTP/HTML). Sometimes it is easy to forget that we see the same thing with email. The email network does far, far more than merely send and receive messages. Like the web, higher-level functionality is enabled by the lower-level protocol-driven functionality, in this case the ability to send and receive messages.

I wanted to highlight several things that you can do with email, that are examples of this higher-level functionality.

  • You can use an email account to prove that you are a human to a website. Have you ever signed up to a website that insisted that you give them an email address and then automatically sent you an email that had something to click on to prove that you owned that email address? I have done this so many times that I have lost count. This is “email for authentication”. Software often uses email messages to provide greater access to websites.
  • You can send messages to just one email address, which will then be sent to many other email addresses. Mailing lists can be pretty amazing software services, but fundamentally all they do is intelligently receive and re-send email messages. This makes email change from a one-to-one messaging system to a one-to-many messaging system. But it is implemented entirely with one-to-one messages.
  • If you push the mailing list even farther you can see that it can become something even more substantial, like craigslist, which pushes the envelope on email broadcasting and blurs the lines between email application and web application.
  • Programs can automatically send email messages when something changes, like Google Alerts tell you when the web has changed (or at least changed as-according-to Google)
  • You can have many email addresses and configure them to aggregate to one email viewing client, enabling separate relationships, and even identities to be managed in parallel. For instance your work email address really means your work identity, and your personal email means your personal identity, but you might forward both to the same email client and then answer and send messages as both identities at the same time.
  • You can use email to create a system for recycling things. Making it easier not to buy new things, and not to throw away working things. This is essentially email-enabled peer-to-peer conservationism.
  • Email clients are more than just programs we use to send and receive messages. We expect them to integrate with calendaring software. We expect them to allow us to extend them with other programs. People use powerful email clients like Gmail to run their lives; before people started to do that with Gmail, they were running their lives with Outlook or Eudora.

Email is not just a method for sending messages. It is an application platform. Other applications that want to do something interesting can use email as a messaging component to achieve that greater goal.
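The “email for authentication” pattern from the first bullet can be sketched in a few lines. The URL and server secret here are hypothetical, and real sites add token expiry and storage, but the core idea is just a signed token mailed to the address being proved.

```python
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # held only by the website

def make_verification_link(email_address):
    # Sign the address so the link cannot be forged without SERVER_SECRET.
    # The site emails this link; clicking it proves control of the address.
    token = hmac.new(SERVER_SECRET, email_address.encode(), hashlib.sha256).hexdigest()
    return f"https://example.org/verify?email={email_address}&token={token}"

def verify(email_address, token):
    # Recompute the signature and compare in constant time.
    expected = hmac.new(SERVER_SECRET, email_address.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)
```

Nothing in the email protocol itself knows about “authentication”; the higher-level application builds it out of plain message delivery, which is exactly the layering argument being made here.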

I want to be clear. The NHIN Direct project has not settled on SMTP, or email, as its protocol choice (although an S/MIME-secured email protocol is on the table). At this point we are not sure which protocol we will be choosing. But it does not matter; the point here is that NHIN Direct will at least act like private, secure, identity-assured (at least for clinicians) email for sending clinical messages. You can expect that an NHIN Direct implementation will either be tightly or loosely integrated with a doctor’s EHR and a patient’s PHR, in the same way that you have tight or loose integration between email clients and calendaring applications.

At this point it is best to think of NHIN Direct as a “cousin” to email, with lots of the same features and benefits, but also limitations (to protect privacy) and new features (clinical integration, meaningful message signing, etc.) that email does not have.

But the most important shared benefit between NHIN Direct and email will be the fact that you can build new, interesting stuff on top of it.

Which brings us back to Latanya’s first criticism. Will NHIN Direct support the ‘break the glass’ use-case (where your information can be accessed in case of an emergency) that Latanya mentions? No. Will software that implements NHIN Direct be able to use NHIN Direct as part of something that provides break-the-glass functionality? Yes.

Very soon after an NHIN Direct network stabilizes, you will start to see functionality that addresses this use case. PHR applications like Google Health, HealthVault and Indivo X (the three most important PHR platforms) will probably develop break-the-glass mechanisms that work something like this…

I am an emergency room doctor and a patient comes in unconscious. In his wallet I find a card that indicates his PHR is held at

I visit and click the “break the glass” link. HealthVault asks me to enter my NHIN Direct address, which is going to look a lot like an email address. So I enter (not a real address). HealthVault will have already performed extensive public key exchange with Methodist Hospital, and will be able to cryptographically ensure that any address under that domain name (we call them health domain names, since they will be used exclusively for this purpose) is in fact someone that Methodist Hospital vouches for, and they will have pre-approved Methodist Hospital’s PHI handling procedures. Given that pre-arrangement of trust, they will know that they can securely send messages to any published Methodist Hospital NHIN Direct address.

But they are not certain, at this stage, that I actually control that address, so they will send a message to that address with a link. I will click the link, which will confirm to HealthVault that I am in control of that address, and that they should forward the contents of the PHR record. Now that they are sure that this is a valid break-the-glass request from a valid user at an institution that they have a trust relationship with, they will forward the record to the address.

They will also add a record to John’s PHR to indicate that I broke the glass. If this whole process was done fraudulently, John will know, and there will be hell to pay for me personally for abusing my credentials, and for Methodist Hospital for giving me credentials to abuse. Current HIPAA rules and fraud statutes would be activated if I made such a fraudulent request that was not in John’s best interest. People who abuse the system could be detected and sent to jail.

The whole process takes minutes and works even when the patient is unconscious.
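The whole flow can be caricatured in code. This toy model is my own illustration, not anything from the NHIN Direct specifications: the trusted-domain set stands in for real certificate exchange, and the challenge token stands in for the emailed link.

```python
import secrets

# Stand-in for certificate-based trust with a health domain; hypothetical name.
TRUSTED_HEALTH_DOMAINS = {"direct.methodist-hospital.example"}

class PHR:
    """Toy PHR host that grants break-the-glass access with an audit trail."""

    def __init__(self):
        self.audit_log = []
        self.pending = {}          # challenge token -> requesting address

    def request_break_glass(self, clinician_address):
        domain = clinician_address.split("@", 1)[1]
        if domain not in TRUSTED_HEALTH_DOMAINS:
            return None            # no trust relationship: refuse outright
        token = secrets.token_urlsafe(16)
        self.pending[token] = clinician_address
        return token               # sent to the address as a clickable link

    def confirm(self, token):
        clinician_address = self.pending.pop(token, None)
        if clinician_address is None:
            return None            # unknown or already-used token
        # Forward the record AND leave a visible trail for the patient.
        self.audit_log.append(f"glass broken by {clinician_address}")
        return "<patient record payload>"
```

Note that the break-the-glass logic lives entirely above the messaging layer: NHIN Direct only has to deliver the challenge and the record, which is the point of the push argument.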

Would that particular method answer the “break the glass” components of meaningful use? It seems like it would to me. Would this be the method that we end up using? I am not sure, but it would be something similar in spirit. Most importantly, it would be something implemented on top of, and around, the messaging model provided by NHIN Direct.

All of that is to say: Push is Powerful. It is powerful because it does not need to work alone. It can be a component of a larger system that does much more. It creates the opportunity for innovation and greater functionality similar to the one provided by the original Internet protocols.

This is all true of the NHIN CONNECT project as well. The difference is that NHIN Direct is much simpler and has true parallels with the current fax and email systems. It is easier to see how NHIN Direct might change things because we are so familiar with its cousins, email and fax.

NHIN CONNECT offers much more functionality at the price of far greater complexity. Like the NHIN Direct system, and email and web before it, the NHIN CONNECT architecture will allow for innovation to occur on top of it. But it is doing much more work than NHIN Direct is.

For instance, if I were fully NHIN CONNECT enabled, I would be able to conduct a search for John Doe and find out that three hospitals had information that was not contained in the HealthVault record. NHIN CONNECT might be able to provide a merged view of that data for me, which is a much richer process than mere messaging can achieve. But that means that NHIN CONNECT must tackle the complex problem of sorting out which records actually belong to John Doe and therefore deserve to be merged. It would make automated, but accurate, decisions that Jonathan Doe at hospital A was my John Doe but that Johnny Doe at hospital B was not… NHIN CONNECT should understand that a blood pressure measurement in the data it gathered from HealthVault was or was not a duplicate of a blood pressure reading that came from the hospital C EHR with the same date, but not the same time stamp. These kinds of issues, plus countless more just like them, are addressed or exposed both by the underlying NHIN protocols that CONNECT implements and by the CONNECT codebase specifically.

CONNECT uses push and pull and all kinds of other software models to do something very complex.

NHIN Direct just does push, but leaves potential complexity to higher level yet-to-be-made systems.

Some people think the NHIN Direct model is superior. Others think that CONNECT is better. I think we probably need both, for different reasons… which is essentially the ONC position on the matter.

But I wanted to be sure everyone was clear: Push has Power.


The Burden of Trust


I am a vocal participant on the NHIN Direct Security and Trust working group. It’s a perfect place for me. I love Open Source healthcare, but my background was in InfoSec… and we never really forget our first love, do we? At the NHIN Direct Security and Trust workgroup, I get to wear all of my hats at once… and that is fun.

The purpose of NHIN Direct is to design an infrastructure for sending messages with clinical content between clinicians (and their patients). It is basically designed to be an email-like system for delivering health information. It is intended to eventually replace the current NHIN… which is the ad-hoc clinical fax network.

On a recent call, someone from the “Policy” department said something about our current plans to the effect of “I am not sure how putting the burden of Trust Decisions on individual providers will impact the ability of the project to replace the Fax network.” I could not talk on the call… I was in a noisy airport… but I was surprised by that characterization of our work. In retrospect I can see how she would read what we are writing and come to the conclusion that we are putting new trust burdens on doctors… but in fact we want to lighten the trust burden they currently carry.

You don’t know the devil that you know

That is probably the most important point. The fax network comes with a very heavy trust burden. But we are used to it, so we rarely pay attention to it. This is a case of “acceptable losses”. It’s kind of like terrorism vs. auto accidents. Many more people in the world are killed in car accidents each year than are killed by terrorism. The irony is that terrorism is much harder to fix than auto accidents. If the US government devoted the same budget to auto accidents that it does to the “War on Terror”, we could probably prevent the vast majority of the auto accidents in the world. But we, as a society, “accept” the burden of car crashes… because we are used to them. We have the same problem with medical errors… but that is another post.

So let’s take a careful look at the “current trust burden” in the fax network. First, doctors do not actually deal with this problem directly. Typically they hire staff to do the faxing. This isolates them from the problems that the “faxer” faces. It also means that they rarely hear of the errors.

“Faxers” fax to patients, and they fax to other clinicians. There are lots and lots of times when something that should have been faxed to Dr. Smith ends up going to Dr. Jones. We only hear about the most extreme cases. In fact, before the existence of the NPI database, there was no reliable way to determine if a fax number was valid. If Dr. Adams wanted to send a record to Dr. Smith, his staff called their staff and wrote down the numbers. The numbers get jumbled and mislabelled, and lots and lots of errors occur.

We do not hear of the cases where people were killed because information that was in a fax record was faxed to a wrong number. Perhaps sent to the “main hospital” fax line instead of the ER fax line where it was needed. These types of between-institution errors are almost impossible to detect, even the “big picture” at one large hospital is hard to sort out, and when you add another institution… no hope. Instead you get cases that are written off as “we did not know that X… oh well… nobody’s fault… nothing could be done”.

Then of course there is the assumption that fax lines are private. This is the farthest thing from the truth. Faxes, just like regular phone conversations, are digitized and sent over the Internet. If a hacker gains control over a main router at a major Internet carrier, then they can re-route phone calls and faxes to themselves as well as normal Internet traffic. The fax network is actually going over the Internet right now… it’s just “obscured” rather than “encrypted”.

This is not the only problem with faxes; another problem is that institutions rarely have a firm grasp on how many fax machines are actually in operation. You can plug a computer modem into a wall and have a nearly undetectable new fax line… allowing “insiders” to send files to themselves via fax. In fact, phone lines can generally be re-purposed into back-channel data ports in a number of ways; faxing is only one of them. Lots of my old Air Force buddies ended up at SecureLogix, which is one of the top companies for phone security. They sell a TeleWall that can help prevent phone lines from being re-purposed. It’s just what its name implies: a firewall for telephones. No large institution that I have ever heard of that paid for a penetration test that included wardialing has ever had the wardialing effort return zero rogue fax/modem instances. Clinicians should not assume that they understand their own fax infrastructure.

Even if you are really careful about who you fax to, another problem with the current fax network is that it is difficult to maintain. Let’s say that Dr. Smith sells his practice to Dr. Sneaky. If the fax number does not change, then Dr. Sneaky is going to get all of those faxes that were intended for Dr. Smith. Not good.

The problem with comparing the devil you know with the devil that you don’t know is that usually, you don’t actually know the first devil that well at all. The “trust burden” of the fax network seems light because it is hopelessly broken and we all just tolerate it.

A lighter burden

Which brings me to the “trust burden for NHIN Direct”. Our goal with regard to this burden is two fold:

  • When an NHIN Direct user makes a trust decision, it should be more reliable than the equivalent decision on the fax network.
  • Typical NHIN Direct users should be able to avoid directly managing trust at scale, making fewer and therefore better trust decisions.

The first one is easy. Without knowing exactly which standards we will be selecting, at the time of this writing I can already tell you that the security of the NHIN Direct network will be an improvement over the fax network. Moreover, it will provide more and better information to the users of the network than is possible with the fax network. Without going into the gory details, this is because PKI is better than post-it notes full of names and fax numbers for maintaining a secure information transfer.

The second one is a little tricky. What I mean by “trust at scale” is the problem of managing lots of peer-to-peer trust relationships. If we have an NHIN where, say, a third of all doctors in the United States participate, that is still probably over a million people. There is no way that you are going to get a doctor to make a list of all of the doctors that he/she does/does not trust taken from a million-person list. Even trying to do peer-to-peer trust on a city level would not work. Hell, I would be surprised if it would work even between two hospitals. (If you gave doctors the option to “not trust” some doctors at their own hospital… you would probably still get headaches.) The fax trust management problem is a little simpler because you can sometimes aggregate to the organization (several clinicians share the same fax), but even that is really difficult. Having to manage thousands of trust relationships dramatically increases the probability that you will get one of them wrong.
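Some back-of-the-envelope arithmetic shows why peer-to-peer trust cannot scale: with a million participants, pairwise trust implies on the order of half a trillion potential relationships, while an anchor-based model needs only one decision per participant.

```python
def pairwise_relationships(n):
    # Every pair of providers is a separate trust decision to get right:
    # n choose 2 = n * (n - 1) / 2.
    return n * (n - 1) // 2

providers = 1_000_000
print(pairwise_relationships(providers))  # roughly half a trillion pairs
print(providers)                          # one decision each under an anchor model
```

The quadratic-versus-linear gap, not any particular protocol detail, is the real argument for trust aggregation points.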

How do we fix that? We need trust aggregation points. So far there are two of these in our model. The first is at the organization level, just like faxes. Typical NHIN Direct addresses for providers working in hospitals or clinics will look something like the “” part of the address is the “health domain name”, and you could use that to trust all of the messages that come from that health domain name. The second way is with what we are calling Anchor CAs. For those familiar with the way CAs (Certificate Authorities) work with https, it is basically the same. The difference is that there will be no “automatically included” Certificate Authorities. When you log in at Amazon, your browser makes a secure connection automatically because the people who make your browser decided for you that you would trust VeriSign CA certificates. You can find out how your browser developer makes this trust decision for you… but they are still making the decision for you.

That model… where someone else makes your trust decisions for you… is not going to fly in healthcare. The stakes are simply too high to outsource trust in this fashion.

However, the notion of aggregating trust using Certificate Authorities is a good one. Let’s imagine that my home town, Houston, decided to set up a Certificate Authority. They would decide on some reasonable policies for things like:

  1. Anti-virus (think Storm Worm, not influenza)
  2. Firewalls
  3. At-rest disk encryption
  4. Password strength
  5. Local authentication (two factor?)
  6. Logging
  7. etc.

Then the Houston HIE would create a CA, and that CA would “vouch” for organizations and individuals on the NHIN Direct network. BobsClinic might sign up with the CA; the CA would then follow a bunch of steps to verify that BobsClinic was legit and was willing and capable of following the policy… and then the CA would say: OK, we are willing to vouch for BobsClinic.

Most clinics in Houston that wanted to use NHIN Direct could “import” the public key of the local CA. That’s fancy talk for accepting the vouches that the CA made for all of the organizations that signed up. Those of you with security backgrounds understand that we are talking about a pretty basic CA infrastructure, but we wanted a way to describe the trust decisions that clinicians would be making under this model free of unneeded technical language. So we are calling the CA, and all of the people that the CA “vouches” for, a “Trust Circle”. It makes sense… if you have not imported the certificate of the CA, you are “outside the circle”; if you have imported the public cert of the CA, then you are “inside the circle”.

This “Trust Circle” notion will reduce the number of trust decisions that typical NHIN Direct users will need to make. Of course, it will be really important that clinicians are very careful when they evaluate the policies and enforcement provided by a given CA. Those policies should meet or exceed their internal standards for handling PHI. It is important because you are not just trusting one organization… you are trusting lots of organizations “through” one organization, a much bigger deal.
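Here is a minimal sketch of the Trust Circle idea in code. The class names and string-based “vouching” are invented for illustration (a real deployment would use X.509 certificates and signature verification), but it shows how importing one anchor extends trust to everyone that anchor vouches for.

```python
class AnchorCA:
    """Toy anchor CA: a named policy plus the health domains it has vetted."""

    def __init__(self, name, policy):
        self.name = name
        self.policy = policy
        self.vouched_for = set()   # health domain names this CA vouches for

    def vouch(self, health_domain):
        # In reality: verify the org, then sign its certificate.
        self.vouched_for.add(health_domain)

class DirectEndpoint:
    """Toy NHIN Direct endpoint that trusts via imported anchors."""

    def __init__(self):
        self.anchors = []          # the circles this endpoint has joined

    def import_anchor(self, ca):
        self.anchors.append(ca)

    def trusts(self, health_domain):
        # One import decision covers every domain the anchor vouches for.
        return any(health_domain in ca.vouched_for for ca in self.anchors)

houston = AnchorCA("Houston HIE", policy="antivirus, firewalls, disk encryption")
houston.vouch("bobsclinic.example")

clinic = DirectEndpoint()
clinic.import_anchor(houston)
print(clinic.trusts("bobsclinic.example"))  # True
print(clinic.trusts("unknown.example"))     # False
```

One `import_anchor` call replaces what would otherwise be a separate trust decision per clinic, which is exactly the reduction in decision count the Trust Circle model is after.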

Trust Circles get around the thorny problem of managing peer-to-peer relationships, but they also dodge another bullet. They avoid the need for a top-down single-CA architecture. Things would be much simpler, technically, if the NHIN (which is a too-vague term, BTW) would just set up the one-ring-to-rule-them-all CA and make everyone in the United States follow the same policy for exchanging health information. That is a deal killer for about a hundred reasons; here are a few…

  • You are going to try to force Catholic charity hospitals to share information with Planned Parenthood clinics… are you kidding?
  • Make psychiatric hospitals message each other in the same way that normal hospitals do?
  • Make children’s hospitals message the same way that normal hospitals do? (Kids are not just short people… think about it. Does the stepdad get NHIN Direct messages for little Johnny, or only his biological father? Tough issues there.)
  • Create a policy that is guaranteed to be legal in all 50 states? (Think about the implications of medical marijuana in California alone.)

Policy is really, really hard, even if you do not assume that you are going to get everyone to agree. Assuming that everyone will agree makes the NHIN a non-starter.

Trust Circles (plural) get you out of that problem. When organizations and clinicians can see eye to eye on policy, they can use NHIN Direct to exchange secure messages; when they can’t see eye to eye, nothing in the NHIN Direct security protocols will attempt to force or even encourage them to compromise.

Another thing to note is that there is nothing in the design that prevents NHIN Direct users from managing trust relationships one at a time. You do not have to join Trust Circles to send messages with NHIN Direct. If you want to “self-sign” your certs and exchange them on floppy disks, in person, with people you trust… that works too! That is why I used the word “typical” above…

But now we come to the real problem.

The first step is…

Even though the trust burden of the NHIN Direct system will be less than the trust burden of the current fax network… it may not feel that way. The reason is that we have not actually taken responsibility for the trust we place in the fax network. We continue to pretend that everything is fine. But it’s not. The fax network is irreparably broken, and the first step towards fixing it is NOT to try and design a new model without a heavy trust burden, but to recognize that we have a problem. Once we do that, we can see that indeed “the burden is light”.

NHIN-Direct leans towards HealthQuilt Security Model

My last big project, before the skunkworks project I am doing now for Cautious Patient, was as the Chief Architect at HealthQuilt.

HealthQuilt was a prototype project for a Health Information Exchange in Houston, TX, hosted by UT SHIS (which just won status as a Regional Extension Center under ARRA). My boss at HealthQuilt, project leader Dr. Kim Dunn, will be the director of the new REC. Dr. Dunn built a community of the local “interested parties” in Health IT during the HealthQuilt project. Ultimately, politics (remember, this was pre-incentive) prevented any data being transferred between organizations using our model before funding ran out. But the community that Dr. Dunn built will now be vital in her new role as REC director.

My job at HealthQuilt was to choose which technologies we would use to prototype the HIE. HealthQuilt was committed to Open Source from the beginning, so I was an obvious choice to handle the detailed technology choices. We spent a lot of time with Houston health information security professionals, along with the crews at Mirth and MOSS, designing a workable trust model.

I am happy to say that just as Dr. Dunn will be able to build on the HealthQuilt community for the Houston-based REC, the NHIN-Direct project may decide to reuse some of the concepts (and perhaps some of the code) that we developed at HealthQuilt. Here are some of the basic, core concepts of the HealthQuilt model.

  • The Health Information Network should be built using point-to-point ssl VPN or https connections.
  • The trust model should use X.509 PKI Certificates.
  • It should use many (rather than one) Certificate Authorities (CA).
  • Both endpoints of a given VPN tunnel or https connection must have certificates. This is very different from the PKI model used on the Internet, where servers are generally certified but clients are generally anonymous.
  • This “encrypted Health Internet” should run entirely underneath any healthcare protocols. That means trust is handled first at the network level. If some actor in the network is no longer trusted, CRLs or blacklists will prevent -any- communication with them, rather than relying solely on relatively young implementations of health protocols to provide adequate encryption.
  • The “relatively young implementations of health protocols” should still implement encryption, as though there were no network security in place. (This one is actually Sean Nolan’s idea… more later.)
  • This allows for a natural layering of security, which makes security wonks like me feel all warm and fuzzy.
  • The “core NHIN organization” should have a list of “typically trusted CAs”, called “anchor CAs”, that it recommends to all network participants. This is similar to the way that normal Internet CAs are “suggested” to you by automatic inclusion in your browser of choice.
  • Individual network participants can also choose to trust other CAs, like those provided by a hospital they are affiliated with.
  • The job of the CAs will be (roughly) to make sure that anyone they issue a certificate to is, in fact, a particular clinical entity (doctor, clinic, hospital, etc.) who has the right to receive and/or send PHI.
  • This means that members of the network do not need to sort out trust relationships on a peer-to-peer basis. They can assume that everyone who the CA trusts is trustworthy, and they can automatically share data with them when a clinical need justifies it.
  • If, for some reason, two members of the network do not trust each other, they can still use a blacklist to prevent communication.
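The “both endpoints must have certificates” requirement above maps onto what is usually called mutual TLS. A rough sketch of that configuration, using Python’s standard `ssl` module, might look like this; the file names are hypothetical placeholders (commented out so the sketch runs without real certificates), and a real deployment would obviously load this node’s actual cert, key, and chosen anchor CAs:

```python
import ssl

# Server side: unlike a typical web server, also demand a certificate
# from the connecting client -- both endpoints must be certified.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED
# Hypothetical paths: this node's own cert/key, plus the "anchor CA"
# certs it has chosen to import (its Trust Circles).
# server_ctx.load_cert_chain("clinic.crt", "clinic.key")
# server_ctx.load_verify_locations("anchor_cas.pem")

# Client side: verify the server against the same imported anchors,
# and present our own certificate when the server asks for one.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.check_hostname = True
client_ctx.verify_mode = ssl.CERT_REQUIRED
# client_ctx.load_cert_chain("clinic.crt", "clinic.key")
# client_ctx.load_verify_locations("anchor_cas.pem")
```

The design point is simply that `CERT_REQUIRED` is set on *both* contexts, whereas ordinary web browsing requires a certificate only from the server.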

The browser vendors determine what bar CAs must clear for automatic inclusion in each browser. That “automatic inclusion” is the foundation of the trust model of the Internet. That is what gets you a secure connection to Amazon to buy a book, even though you do not think too much about “how do I know that is really Amazon?”

So why did HealthQuilt come up with this model? We knew that each institution in the Houston area would need to make trust decisions on its own. They would never tolerate us saying “Here are the ten other hospital systems in the network, take it or leave it”. The answer would always be “leave it”. Some of our constituents were very concerned that a blanket trust policy would mean that they would trust organizations that they do not have a real-world trust relationship with, e.g. Planned Parenthood clinics vs. Catholic charity clinics. In order to participate in the network, they needed to have fine-grained control over the trust decisions. Most participants planned to trust everyone else in the network, but they did not want to trust that the network itself would remove bad actors in the future. The combination of blacklists (which is how a node can cut off communication with another node) and CRLs (which is how a CA says “I do not love you anymore”) provides both network-level and node-level control over dealing with bad actors.
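The node-level half of that control (the blacklist) can be sketched in a few lines. This is a toy illustration, not HealthQuilt code: it identifies peers by the SHA-256 fingerprint of their certificate bytes, and the “DER bytes” here are fake placeholders standing in for a real X.509 certificate:

```python
import hashlib

# Toy node-level blacklist: identify peers by the SHA-256 fingerprint
# of their certificate. A real node would fingerprint actual X.509
# DER bytes presented during the TLS handshake.
def fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

blacklist = set()

def connection_allowed(peer_cert_der: bytes) -> bool:
    """Even a CA-vouched peer can be cut off locally via the blacklist."""
    return fingerprint(peer_cert_der) not in blacklist

bad_actor_cert = b"...fake DER bytes for a misbehaving node..."
print(connection_allowed(bad_actor_cert))   # True: still allowed

blacklist.add(fingerprint(bad_actor_cert))  # this node cuts them off
print(connection_allowed(bad_actor_cert))   # False: blocked locally
```

A CRL works the same way in spirit, except the revocation list is published by the CA for the whole circle rather than maintained by one node.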

Most importantly, the network-level security model is technologically identical to the current Internet trust infrastructure. The policies and the trust decisions are very different, but the technology is basically the same: ssl + x509. That is good, because it means that the trust issues are not handled entirely at a level where new protocols are being developed. If you rely only on message security, and you discover that one implementation was “leaking” information by encrypting slightly less than what was intended in a given “message”, that could be a real problem. SSL VPNs and https, using x509 PKI, are a known quantity (not the same thing as “safe”, mind you). Using that “underneath” the new stuff that the NHIN Direct and CONNECT projects will develop will help ensure that implementation or design mistakes do not automatically imply a broad attack vector.

Moreover, when advances (e.g. quantum computing) make the current Internet trust model obsolete, it will have to be replaced with something. Whatever it is replaced with will have to play at least some of the same roles as the current x509/ssl infrastructure. That means the whole Internet will work with us to upgrade the network trust model.

I should point out that the NHIN Direct team was certainly not sitting idle until I showed up and told them what I had done with HealthQuilt. I think that something very like the basic HealthQuilt trust model would have been embraced in any case. I am just happy to be able to present a package of thought-out ideas to the NHIN Direct team. Ironically, even before I made my suggestions, Sean Nolan, of Microsoft HealthVault, was already arguing against the “single CA, top-down trust model”. Once you make the concession that you are not going to attempt to do trust entirely using CAs and proxy CAs (the top-down model), most of the HealthQuilt Trust model is just a series of incremental, obvious choices.

I will be calling this trust model the “HealthQuilt Trust Model”. This is despite the fact that the NHIN Direct trust model could justifiably also be called the “Microsoft Model”. Microsoft has some really talented technical people, and it makes me feel good to see them reaching the same conclusions that I do, in parallel. Still, I seriously doubt that the new NHIN Direct trust model will ever be called “The Microsoft Model”, since the name does not actually describe the model at all. This is good, because the phrase “Microsoft Model” generally makes the hair on the back of my neck stand up and do the polka. It should also be noted that my original ideas on the HealthQuilt model were pretty useless without adjustment from Ignacio Valdes of LinuxMedNews, my brother Rick, David Whitten of WorldVistA and the VA, several of the Mirth engineers, and Alesha Adamson of MOSS, all of whom gave me valuable feedback. It is also important to note that the model has improved substantially in response to the excellent thinking done by Brian Behlendorf and the rest of the NHIN Direct Security and Trust Workgroup.

Still, I will be using the name because it is truly indicative of how the trust model should work. It should be like a quilt, legitimately different ideas about trust and security implemented by different organizations, but despite those differences, still connected. The Internet has shown time and time again, that uniformity is not the only way to cooperate.

You can follow what’s happening on the NHIN Direct Security and Trust Workgroup forum. If you are truly a glutton for reading, you can read my posts and the responses.