NHIN and others at OSCON

I am just home from the first-ever health IT track at OSCON. The quality of the content was simply amazing, and soon you will be able to see the many talks online (thanks to the Robert Wood Johnson Foundation for paying for the videos).

As I think about what I will be blogging about, I wanted to post some quick links to people who are already writing about what was said and what it means. First, the conference organizer, at least from the health IT point of view, was Andy Oram. He already has two posts: one on the first day, and one highlighting the VistA controversies exposed at the conference.

Most of all, I wanted to point to this awesome interview with the leaders of the NHIN open source projects: NHIN CONNECT and NHIN Direct.

Tolven invited to privacy party

The Open Source Tolven project has been invited to the Privacy Technology showcase for the HITPC S&P Tiger team.

This is well-deserved recognition. Tolven has an extremely innovative architecture that dispenses with many of the bad assumptions that other EHR platforms make. The first of those assumptions is that an EHR platform should only be an EHR platform: Tolven is a combined EHR and PHR.

The second is a well-thought-out encryption-at-rest strategy.

Hopefully a recording of the presentation will be available after the meeting.

NPI data, the doctors’ social network

(Update Feb 18 2011: search.npidentify.com has moved to docnpi.com, I have adjusted links accordingly) I have been working, part-time, on a project for nearly two years to dramatically improve the quality and depth of information that is available on the Internet from the NPI database. For those not familiar, the NPI, or National Provider Identifier, is a government-issued health provider enumeration system. Anyone who bills Medicare or prescribes medication now has to have an NPI record, which basically means that the database is a comprehensive list of individual and organizational healthcare providers in the United States. You can download the entire NPI database as a CSV file under FOIA. There are a little over three million records in that download.

Each healthcare provider provides both credentialing and taxonomy data for inclusion in the database. Healthcare provider taxonomy codes are a fancy way of detailing just what type of doctor you are. Because each provider -can- provide such rich data, there is a tremendous amount of unused information in the database. NPPES does not do very much data checking, so there is a lot of fat-fingered data too. I have been working on scripts to improve the overall quality of the data as well as accelerate some obvious data mining applications. I am happy to announce that after all of that development I am ready to beta launch a dramatically improved NPI search service.
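To give a sense of what working with that raw download involves, here is a minimal Python sketch (using pandas) of the kind of quality-and-mining pass I am describing. The file name and column headers are assumptions about the NPPES dissemination file layout, so check them against your own copy before running anything.

```python
import pandas as pd

# Load the raw NPPES dissemination file (the ~3 million row CSV download).
# The file name and column headers below are assumptions; adjust them to
# match the actual headers in your copy of the download.
npi = pd.read_csv("npidata.csv", dtype=str, low_memory=False)

# Count how many providers list each primary taxonomy code. This is the
# sort of "obvious" data mining the raw download makes possible.
taxonomy_counts = npi["Healthcare Provider Taxonomy Code_1"].value_counts()
print(taxonomy_counts.head(20))

# Fat-fingered data shows up quickly: for example, rows where the practice
# state field is not a plausible two-letter code.
bad_states = npi[~npi["Provider Business Practice Location Address State Name"]
                 .str.fullmatch(r"[A-Z]{2}", na=False)]
print(len(bad_states), "records with questionable state codes")
```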

Please visit docnpi.com to try it out.

I have recorded several videos that I will attempt to embed here to show you just what it does, but for those of you who prefer to read:

  • The NPPES search engine has a limit of 150 results; docnpi has no limit.
  • NPPES does not allow you to search by type of provider or organization; docnpi allows you to search by both a single type and a group of types.
  • NPPES only lists one taxonomy per provider, and it is often over-general; docnpi lists all provider taxonomies in each result.
  • NPPES pages the results, while docnpi lists all of the results on one page (which lets you use your browser’s Ctrl+F to do quick sub-searches).
  • The results from any search you do are downloadable as JSON, XML, or Excel/CSV.
  • No search, except a completely empty search, is too general for docnpi. If you can wait for us to process the data, we will do it for you.
  • Each NPI page automatically exposes the “social network” of any provider or organization by listing all other NPI records that share addresses, phone numbers or identifiers.
  • Each NPI page displays a Google map for the practice and mailing addresses listed in the NPI record.

I have lots more features on the way, and I know I need to optimize the site. Loading a single NPI record takes too long, because I am doing several huge SQL queries across a very full database. Still, if you have some patience, you can give me some feedback on the site now. Here are the videos that demo specific searches and expose the data richness of the NPI dataset.

Videos

Implications

Most people have no idea how much information is truly available in the NPI database.

The one group of people who will probably find the site immediately helpful is medical billers and insurance company employees who want to understand the relationships between different providers. They have been frustrated by the NPPES search tool for a long, long time.

But most people have no idea that this kind of information is even there! I cannot tell you how many people have no idea, for instance, that public health offices very often have an NPI record. Just looking at the taxonomy drop-down should be very enlightening. Using this search engine, you can get very specific and detailed information about the relationships between location and healthcare provider density. You can ask questions like “How many foot doctors does Denver have per capita compared to New York?” Before docnpi.com, you had to download the data yourself and then run your own queries. But the data download is not normalized, and it is almost impossible to determine who shares an address unless you normalize the address data. Even then, without database optimizations (I have learned so much more about MySQL optimization on this project…), complex queries could take hours to complete. The site will probably feel “slow” to you because it can take a long time even to analyze the data for a single provider (30-50 seconds), but many of the matching-provider displays would have been impossible before. I hope to do more optimization and other improvements, and I would like your advice in doing so. Please click the red feedback tab and tell me how to improve the site!
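To make the address problem concrete, here is a minimal Python sketch of the kind of normalization step I am describing. The normalization rules and the record fields shown are illustrative assumptions, not the exact logic the site uses.

```python
import re
from collections import defaultdict

def normalize_address(line1, city, state, zipcode):
    """Collapse an address into a crude comparison key.

    These rules (uppercase, strip punctuation, truncate ZIP to five digits)
    are illustrative; real address normalization is considerably messier.
    """
    key = " ".join([line1 or "", city or "", state or "", (zipcode or "")[:5]])
    key = re.sub(r"[^A-Z0-9 ]", "", key.upper())
    return re.sub(r"\s+", " ", key).strip()

def shared_addresses(records):
    """Group NPI records by normalized practice address.

    `records` is assumed to be an iterable of dicts with 'npi', 'line1',
    'city', 'state' and 'zip' keys pulled from the NPI download.
    """
    groups = defaultdict(list)
    for r in records:
        key = normalize_address(r["line1"], r["city"], r["state"], r["zip"])
        groups[key].append(r["npi"])
    # Only addresses shared by more than one NPI are interesting.
    return {addr: npis for addr, npis in groups.items() if len(npis) > 1}
```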

Essentially this site will make the NPI data set far more accessible than it has been before. Stuff that is now easy to do was previously the domain of expensive data toolkits or data mining experts. This data should be usable by everyone… and now it is.

I should be frank: this site has to pay for itself. I have not decided how to charge, what I should charge for, or even if I should charge. I will probably have to think about this once the number of searches starts to bring the server to its knees. Once that happens I will have to spend “real money” on a dedicated virtual server/cluster, and that will mean the site must be monetized somehow. I will probably end up limiting the number of searches that a given user can do until they pay $20 or something like that. That will let most people use the site without paying, but people who start to overuse my CPU cycles can afford to pay a little. Until my server starts to choke, everything is free. Enjoy.

Empathy over implementations and another straw man

I think the recent work of the NHIN Direct implementation teams has been amazing. But I think that by implementing, all of the teams have succumbed, to different extents, to a common software developer error: they are implementing rather than empathizing with their users.

There are two groups of potential NHIN Direct end users (possibly three if you count patients, but I will exclude them from consideration for just a moment). But before we talk about that, I would like to talk about Facebook and Myspace.

Or, more accurately, I want to remember the controversy when the military chose to block Myspace but not Facebook. This caused quite a stir because, at the time, Myspace was very popular with the high-school-educated enlisted personnel, while Facebook, which even then was “higher technology”, was more popular with the college-educated officers. Controversy aside, it showed a digital divide between different types of users.

Ok, back to Healthcare IT.

We have something strangely similar in the United States with the level of IT adoption among doctors.

On Having

Most doctors are low-health-information-technology. Most doctors work in small practices. Most small practices have not adopted EHR technology. Even those small practices that have adopted EHR technology have often done so from EHR vendors who have not focused on implementing the tremendously complex and difficult IHE health data interchange profiles. That is one group of doctors. You can call them Luddites, late adopters, low-tech, or “the have-nots”. No matter what you call them, the HITECH portion of ARRA was designed to reach them. It was designed to get them to become meaningful users of EHR technology. (Note that “EHR technology” is kind of a misleading term, because it has essentially been redefined to mean “software that lets you achieve meaningful use”. People still have lots of different ideas about what an “EHR” is, because people still have lots of disagreements about the best way to achieve meaningful use.)

I have to admit, I am primarily sympathetic with this user group. I originally got into Open Source because my family business (which my grandfather started) was a Medical Manager VAR. Our clients loved us, and they hated the notion of spending a bucketload of money on an EHR. I started looking for an Open Source solution to our EHR problems, and when I could not find what I needed, I started contributing code. There is a small cadre of people working on the NHIN Direct project who, for their own reasons, share my basic empathy with this type of clinical user, the “have-nots”.

But the majority of the people working on NHIN Direct represent the whiz-kid doctors. These are the doctors who work in large clinics and hospitals that have found moving to an EHR system prudent. Sometimes, smaller groups of doctors are so tech-hungry that they join this group at great personal expense. These doctors, or the organizations that employ them, have invested tremendous amounts of money in software that is already IHE-aware. Often groups of these doctors have joined together to form local HIE systems. It is fair to say that if you are a doctor who has made an investment in technology that gives you IHE systems, you paid a lot for it, and you want that investment to pay off. We can call these doctors the “whiz-bang crowd”, the EHR lovers, or simply “the haves”.

Today, in the NHIN Direct protocol meeting, we had a polite skirmish (much respect for the tone everyone maintained despite the depth of feeling) between the representatives of the “have-nots” (people like me, Sean Nolan, David Kibbe and others who are thinking primarily about the “have-nots”) and the vendors of large EHR systems, HIEs and other participants in the IHE process, who tend to represent the “haves”.

To give a little background for my readers who are not involved with the NHIN Direct project:

A Little Background

NHIN Exchange is a network that anyone who speaks IHE can join. If you speak IHE, it means that you should be able to meet all of the requirements of the data exchange portions of meaningful use. It also means that you pretty much have to have some high technology: a full-featured EHR or something that acts like one. IHE has lots of features that you really, really need in order to do full health information exchange right. But it has never been deployed on a large scale and it is phenomenally complex. ONC started an Open Source project called NHIN CONNECT that implements IHE and will form the backbone of the government’s health information exchange. Beyond CONNECT, both the Mirth guys and OHT/MOSS have substantial IHE-related software available under FOSS licenses. There are lots of bolt-on proprietary implementations as well. IHE is complex, but the complexity is required to handle the numerous use cases of clinical data exchange; exchanging health data is vastly more complex than exchanging financial information. But to use IHE you have to have an EHR, and most doctors do not have an IHE-aware EHR.

ONC knew that HITECH would convince many doctors to invest in EHR technology that would ultimately allow them to connect to NHIN Exchange. However, they also knew that many doctors, possibly most doctors, might choose not to adopt EHR technology. Someone (who?) proposed that ONC start a project to allow doctors to replace their faxes with software that would let them meet most, but not all, of the meaningful use data interchange requirements, without having to “take the EHR plunge”. This new project could meet all of the requirements that can be met with a fax-like or email-like “PUSH” model. I explained some of this in the power of push post. This project was called NHIN Direct.

What’s the problem?

So what is the problem? A disproportionate number of the people who signed up to work on the NHIN Direct project are EHR vendors and other participants who represent people who have made extensive investments in IHE. In short, lots of “haves” representatives. Some of them proposed that NHIN Direct also be built with the subset of IHE standards that cover push messages. But remember, IHE is a complex set of standards, and push, in IHE, is much more complicated than the other messaging protocols under consideration. I have already posted a comparison of the protocols under consideration.

IHE is really good for the “haves”. If you “have” IHE, then all kinds of really thorny and difficult problems are solved for you. Moreover, one of the goals of meaningful use is to get more EHRs (by which I mean “meaningfully usable clinical software”) into the hands of doctors. The US, as a country, needs more people using IHE. It really is the only “right” way to do full health information exchange.

But IHE is not trivial. It is not trivial to code. It is not trivial to configure. It is not trivial to deploy or support. It is not trivial to understand. It could be simple to use for clinicians once all of the non-trivial things have been taken care of. But realistically, the number of people who understand IHE well enough to make it simple for a given clinical user is very, very small.

The other options seemed to be SMTP or REST-that-looks-and-acts-just-like-SMTP-so-why-are-we-coding-it-again (or just REST). Both of these are much, much simpler than the IHE message protocols. These would be much simpler for the “have-nots” to adopt easily and quickly. Of course, they would not get the full benefit of an EHR, but they would be on the path. They would be much better off than they are now with the fax system. It would be like the “meaningful use gateway drug”: fun and helpful to the doctors, but leaving them wanting something stronger.

The NHIN Direct project fundamentally creates a tension with the overall NHIN and meaningful use strategy. As a nation we want to push doctors into using good health IT. But does that mean pushing them towards the IHE-implementing EHRs on the current market? Or should we push them towards simple direct messaging? The answer should be something like:

“If doctors would ordinarily have chosen to do nothing, we want them to use NHIN Direct; if they can be convinced to become completely computerized, then we should push them towards IHE-aware clinical software that meets all of the meaningful use requirements.”

Given that split, the goal of NHIN Direct should be:

“For doctors who would have refused other computerization options, allow them to meaningfully exchange health information with as little effort and expense on their part as possible”

I, and others who realize just how little doctors like this will tolerate in terms of cost and effort, strongly favor super-simple messaging protocols that can be deployed in multiple super-low-cost ways. I think that means SMTP, and it clearly rules out IHE as a backbone protocol for “have-nots” that are communicating with other “have-nots”.

Empathizing with the Haves

But the danger of focusing on just the requirements of your own constituents is that you ignore the impact your design will have on the users you do not empathize with. Both the representatives of the “haves” and the “have-nots” like me have been guilty of this. After listening to the call I realized that the EHR vendors pushing IHE were not being greedy vendors who wanted to pad their wallets. Not at all! They were being greedy vendors who wanted to pad their wallets -and- protect the interests of the doctors already using their systems. (That was a joke, btw; what I really realized is that they were just empathizing with a different group of doctors than I was.)

If you are a “have” doctor, you have made a tremendous investment in IHE. Now you are in danger of getting two sources of messages. You will get IHE messages from your other “have” buddies, but you will have to use a different system to talk with all of the “have-nots” who want to talk with you. That means you have to check messages twice, and you can imagine how it might feel for one doctor to be discussing one patient across the two systems. Lots of opportunity for error there.

From the perspective of the IHE team, making the “have-nots” concede to dealing with IHE, rather than cheaper, simpler SMTP, reduces everything to one messaging infrastructure and eliminates balkanization. No longer will any user be faced with two clinical messaging systems; everyone has only one queue. Moreover, since we ultimately want “fully meaningful users”, it is a good thing that an IHE-based NHIN Direct would provide a clear path to getting onto the NHIN Exchange with a full EHR. From their perspective, making adoption harder for the “have-nots”, and the resulting loss of adoption, would be worth it because it would still get us faster to where we really need to be: full IHE health information exchange with everyone participating.

Everyone wants the same thing in the end; we just have different ideas about how to get there! I believe that we should choose protocol designs for NHIN Direct that fully work for both sets of clinical users. I think we can do this without screwing anyone or making anyone’s life more difficult.

The new empathy requirements

I would propose that we turn this around into a requirements list: we need an NHIN Direct protocol that

  • Allows the “have-nots” to use NHIN Direct as cheaply as possible; that means enabling HISPs with simple technology that can be easily deployed and maintained using lots of different delivery models (e.g. on-site consulting and server setup, ASP delivery, etc.).
  • Allows the “haves” to view the world as if it were one beautiful messaging system, based on IHE. They should not have to sacrifice the investment they have made, and they should not have to deal with two queues.

My Straw Man: Rich if you can read it

The IHE implementation group believes that all SMTP (or REST-like-SMTP) messages that leave the “have-nots” should be converted “up” into IHE messages and then, when they get close to other “have-not” doctors, converted back “down” to SMTP. That means they are suggesting that HISPs that handle communications between “have-not” doctors should have to implement IHE in one direction and SMTP in the other, even though the message gains no content or meaning along the way.

The problem with that is that the HISPs that maintain this “step-up-and-down” functionality will have to cover their costs and develop the expertise to support it. This is true even if the “edge” protocol is SMTP. The only approach that will work for this design is an ASP model, so that the HISP can afford to centralize the support and expertise needed to handle this complexity. That means low-touch support, and low-touch support plus high costs translate into low adoption. In fact, doctors would probably be better off just investing in an ASP EHR that was fully IHE-aware. So the IHE model is a crappy “middle step”.

But there is no reason that a HISP needs to handle step-down or step-up as long as it is only dealing with “have-not” doctors. If you let SMTP run the core of NHIN Direct, HISPs could leverage current expertise and software stacks (with lots of security tweaking, discussed later) to ensure that messages went into the right SMTP network. No PHI in regular email. Local consultants as well as current email ASP solutions could easily read the security requirements and deploy solutions for doctors that would send messages across the secure SMTP core. With careful network design, we could ensure that messages to NHIN Direct users would never be sent across the regular email backbone. I will describe how some other time (it’s late here), but it is not that hard.

But you might argue: “that is basically just SMTP as core! This is no different than your original approach. You are still screwing the haves!” Patience, grasshopper.

To satisfy the “haves” we have to create a new class of HISP. These HISPs are “smart”. They understand the step-up and step-down process to convert IHE messages to SMTP messages and back. When they have a new outgoing message, they first attempt to connect to the receiving HISP using IHE, perhaps on port 10000. If port 10000 is open, they know that they are dealing with another smart HISP, and they send their message using IHE profiles. Some smart HISPs will actually be connected to the NHIN Exchange, and will use that network for delivery when appropriate.

The normal or “dumb” HISP never even needs to know about the extra functionality that the smart HISPs possess. They just always use the NHIN Direct SMTP port (let’s say 9999) to send messages to any HISP they contact. While smart HISPs prefer to get messages on port 10000, when they get an SMTP message on port 9999 they know they need to step it up from SMTP to IHE before passing it to the end user’s EHR.
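To make the port convention concrete, here is a minimal Python sketch of the delivery decision a smart HISP might make. The port numbers are just the placeholders from the paragraphs above, and the two send functions are hypothetical stand-ins for real IHE and SMTP delivery code.

```python
import socket

NHIN_DIRECT_SMTP_PORT = 9999   # placeholder "dumb" SMTP port from the text
SMART_HISP_IHE_PORT = 10000    # placeholder "smart" IHE port from the text

def send_via_ihe(message, host, port):
    # Hypothetical stand-in for the real IHE (XDR) submission code.
    raise NotImplementedError

def send_via_smtp(message, host, port):
    # Hypothetical stand-in for the real S/MIME-over-SMTP delivery code.
    raise NotImplementedError

def receiving_hisp_is_smart(host, timeout=5.0):
    """Probe the receiving HISP: if the IHE port is open, it is a smart HISP."""
    try:
        with socket.create_connection((host, SMART_HISP_IHE_PORT), timeout=timeout):
            return True
    except OSError:
        return False

def deliver(message, host):
    """Prefer IHE delivery to smart HISPs, fall back to NHIN Direct SMTP."""
    if receiving_hisp_is_smart(host):
        send_via_ihe(message, host, SMART_HISP_IHE_PORT)
    else:
        send_via_smtp(message, host, NHIN_DIRECT_SMTP_PORT)
```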

From the “haves” perspective, there is one messaging network, the IHE network. They get all of their messages in the same queue. Sometimes the messages are of high quality (because they were never SMTP messages, but IHE messages sent across NHIN Exchange or simply between two smart HISPs).

Now, let’s look at the winners and losers here. For the “have-nots” the core is entirely SMTP, so they have cheap and abundant technical support. They are happy. The “haves” get to think entirely in IHE; they might be able to tell that some messages have less useful content, but that is the price of communicating with a “have-not” doctor. The “have-nots” will get rich messages from the IHE sites and will soon realize that there are benefits to moving to an EHR that can handle IHE.

Who loses? The smart HISPs. They have to handle all of the step-up and step-down. They will be much more expensive to operate -unless- there is an NHIN Direct sub-project to create a smart HISP. This is what the current IHE implementation should morph into. We should relieve this burden by creating a really solid bridge.

This model is a hybrid of the SMTP-as-core and IHE-as-core models. Essentially it builds an outer core for “have-not” users and an inner core for IHE users. From the project’s perspective, those who feel that simple messaging should be the priority for the “have-nots” (like me) can choose to work on the SMTP-related code. People who want to work in the interests of the “haves” can work on the universal SMTP-IHE bridge.

I call this straw man “Rich if you can read it”. From what I can tell it balances the core perspectives of the two interest groups on the project well, with places for collaboration and independent innovation. It’s more work, but it serves to make everyone happy, rather than leaving everyone equally unhappy with a compromise.

Footnotes and ramblings:

Don’t read this if you get bored easily.

I believe that this proposal excludes a REST implementation unless it acts so much like SMTP that SMTP experts can support it easily, which raises the question: why not just use SMTP? SMTP fully supports the basic use case. The code already works, and a change to REST would reduce the pool of qualified supporters.

I should also note that no one, ever, is suggesting that the same program be used for email and for NHIN Direct messages. I think we should posit a “working policy assumption” that any NHIN Direct SMTP user would be required to have a different program for sending NHIN Direct messages, or at least a differently colored interface. Perhaps Microsoft can release a “red and white GUI” clinical version of Outlook for this purpose… Sean can swing it… or users could use Eudora for NHIN Direct and Outlook for regular mail. Or they could be provided an ASP web-mail interface.

We might even try and enforce this policy in the network design:

We should use SRV records for the NHIN Direct SMTP network rather than MX records. There are security reasons for doing this (it enables mutual TLS) and, most importantly, it means that there will be no PHI going across regular email. When someone tries to send an email message to me@fredtrotter.com, their SMTP implementation looks up an MX record for fredtrotter.com to see where to send the message. If we use SRV records, the domain owner would also publish an intentionally broken MX record for nhin.fredtrotter.com, such that the MX for nhin.fredtrotter.com points to nothing.fredtrotter.com, and nothing.fredtrotter.com is not defined in DNS. So if you tried to send PHI to me@nhin.fredtrotter.com with a regular mail client, the lookup would cause an error in the local SMTP engine without transmitting any data. But an NHIN Direct-aware SMTP server or proxy would query for the SRV record, find the correct address, and enforce TLS, so it would be totally secure. Normal email messages to NHIN Direct addresses would break before crossing an insecure network, but the secure traffic would move right along. Obviously this is not required by the core proposal, but it is a way of ensuring that the two networks would be confused much less frequently. This plan might not work exactly as described, but something like this should be possible.
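Here is a minimal sketch of what the lookup side of that scheme might look like, in Python with the dnspython library. The SRV service label (_nhin-direct._tcp) and the domain are purely illustrative assumptions; nothing like them has been standardized.

```python
import dns.resolver  # dnspython

def find_nhin_direct_host(domain):
    """Look up the (hypothetical) _nhin-direct._tcp SRV record for a domain.

    Returns a (host, port) tuple, or None if the domain does not advertise
    an NHIN Direct SMTP endpoint. An ordinary mail client would instead ask
    for the MX record, which under this scheme points nowhere, so regular
    email to an NHIN Direct address fails before any PHI leaves the building.
    """
    try:
        answers = dns.resolver.resolve("_nhin-direct._tcp." + domain, "SRV")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    # Pick the record with the lowest priority value (SRV convention).
    best = min(answers, key=lambda r: r.priority)
    return (str(best.target).rstrip("."), best.port)

# Example (illustrative domain): find_nhin_direct_host("nhin.fredtrotter.com")
```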

This is a pretty complex email setup, but SRV is growing more common because of use with XMPP and SIP. Normal SMTP geeks should be able to figure it out.

Open Letter to the tiger team

Hi,

This is an open letter to the tiger team from the HIT Policy Committee, as well as to the committee generally. Recently a group from HITPC gave recommendations to the NHIN Direct project regarding which protocol it should choose. I realized, as I heard the comments, that this group was reading the NHIN Direct Security and Trust Working Group’s latest consensus document. I am on that working group and I wrote a considerable portion of that document (most of the Intent section). I was both startled and flattered that the HITPC group was using that document as the basis for their evaluation of the protocol implementations. In fact, they eliminated the XMPP project from consideration because they felt that the SASL authentication that the XMPP implementation will use was incompatible with the following requirement from the consensus document:

2.1 Use of x.509 Certificates. The NHIN Direct protocol relies on agreement that possession of the private key of an x.509 certificate with a particular subject assures compliance of the bearer with a set of arbitrary policies as defined by the issuing authority of the certificate. For example, Verisign assures that bearers of their “extended validation” certificates have been validated according to their official “Certification Practice Statement.” Certificates can be used in many ways, but NHIN Direct relies on the embedded subject and issuing chain as indicated in the following points. Specific implementations may choose to go beyond these basic requirements.

The HITPC team felt that SASL, which does not typically use certs for authentication, did not meet this requirement. As it turns out, the XMPP implementation team believes that SASL can be used with x.509 certs and therefore should not be excluded from consideration. That is a simple question of fact and I do not know the answer, but in reality it should not much matter. (I will get into that later.)

Even more troubling was the assessment of SMTP. The HITPC reviewers considered an all-SMTP network problematic because it allows for clients that present users with the option to make security mistakes. They felt that simpler tools, which prevent these types of mistakes from being made, should be used.

None of these were unreasonable comments given the fact that they were reading all of the documents on the NHIN Direct site in parallel.

They also have a strong preference for simplicity. Of course, simplicity is very hard to define, and it is obvious that while everyone agrees that security issues are easier to manage with simpler systems, we disagree about what simplicity means.

As I listened to the call, hearing for the first time how others were seeing my work and the work of the rest of the NHIN Direct S&T working group, I realized that there were some gaps. Ironically, this is going to be primarily a discussion of what did not make it into the final proposal. Most of the difficult debates that we held in the S&T group involved two divergent goals: keeping reasonable architecture options open to the implementation teams, and recognizing that security decisions that are reasonable 90% of the time are still unreasonable 10% of the time. We could not exclude end users (or implementation paths) by making technology decisions in ways that 10% of the users could not accept. 10% does not sound like much, but if you make 10 decisions and each of those decisions serves to exclude 10% of the end users… well, that could be a lot of exclusion. We went around and around and, mostly, the result is that we settled on a smaller and smaller set of things we -had- to have to make a flexible trust architecture that would support lots of distributed requirements. This is not a “compromise” position, but a position of strength. Being able to define many valid sub-policies is critical for things like meeting state-level legal requirements. To quote Sean Nolan:

“we’ve created an infrastructure that can with configuration easily not just fit in multiple policy environs, but in multiple policy environs SIMULTANEOUSLY.”

That is quite an achievement, but we should be clear about the options we are leaving open to others. I am particularly comfortable with the approach we are taking because it is strikingly similar to the model I had previously created, the HealthQuilt model. I like the term HealthQuilt because it acknowledges the basic elements of the problem: “Start out different, make connections where you can, end with a pleasing result.”

But we also assumed that someone else would be answering lots of questions that we did not. Most notably we could not agree on:

How to have many CAs?

Our thinking was that you needed the tree structure offered by the CA model so that you could simplify trust decisions. We rejected notions of purely peer-to-peer trust (like gpg/pgp) because it would mean that end users would have to make frequent trust decisions, increasing the probability that they would get one wrong. Instead, if you trust the root cert of a CA, you can then trust everyone who obeys the policies of that CA (there is a small verification sketch after the list below). So x.509 generally gave us the ability to make aggregated trust decisions, but we did not decide what valid CA architectures would look like. Here are some different x.509 worldviews that at least some of us thought might be valid models:

  • The one-ring-to-rule-them CA model. There is one NHIN policy and one NHIN CA, and to be on the NHIN you have to have some relationship with that CA. This is really simple, but it does not support serious policy disagreements. We doubt this would be adopted. The cost of certs becomes an HHS expense item.
  • The browser model. The NHIN would choose the list of CAs commonly distributed in web browsers, and then people could import that list and get certs from any of those CAs. This gives a big market of CAs to buy from, but these CAs are frequently web-oriented. There is wide variance in the cost of browser CA certificates.
  • The no-CA-at-all model. People who know they will only be trusting a small number of other end nodes can just choose to import those public certs directly. This enables very limited communication, but that might be exactly what some organizations want. Note that this also supports the use of self-signed certificates. It will only work in certain small environments, but it will be critical for certain paranoid users. This solution is free.
  • The government-endorsed CAs model. Some people feel that CAs already approved by the ICAM Trust Framework should be used. This gives a very NISTy feel to the process, but the requirements for ICAM might exclude some solutions (e.g. CACert.org). ICAM certs are cheap (around $100 a year) assuming you only need a few of them.
  • The CACert.org peer-to-peer assurance CA model. CACert.org is a CA that provides an unlimited number of certificates to assured individuals for no cost. Becoming assured means that other already-assured individuals must meet you face to face and check your government IDs. For full assurance, at least three people must complete that process. This allows for an unlimited number of costless certs backed by a level of assurance that is otherwise extremely expensive. The CACert.org code is open source, and the processes to run CACert.org are open. This is essentially an “open” approach to the CA problem. (I like this one best personally.)
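Whichever CA architecture wins, the aggregated trust decision itself is mechanically simple: check that a presented cert chains up to an anchor you have decided to trust. Here is a minimal sketch that just drives the stock openssl command-line tool from Python; the file names are hypothetical placeholders.

```python
import subprocess

def chains_to_trusted_anchor(cert_pem, anchor_pem="nhin_anchor.pem"):
    """Return True if cert_pem chains up to the trusted anchor certificate.

    This shells out to `openssl verify`; the file names are hypothetical
    placeholders for whatever anchor(s) your policy environment trusts.
    """
    result = subprocess.run(
        ["openssl", "verify", "-CAfile", anchor_pem, cert_pem],
        capture_output=True, text=True,
    )
    return ": OK" in result.stdout

# Example: chains_to_trusted_anchor("some_hisp_cert.pem")
```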

Individual vs group cert debate?

If you are going to go with any CA model other than “one ring to rule them”, then you are saying that the trust relationships inside the CAs will need to be managed by end users. Given that, some felt that we should be providing individual certs/keys to individual people. Others suggested that we should support one cert per organization. Still others said that groups like “emergencydepartment@nhin.example.com” should be supported with a sub-group cert.

In the end we decided not to try to define this issue at all. That means that a message from an address like johnsmith@nhin.ahospital.com could sometimes be signed with a cert that makes it clear that only John Smith could have created the message, sometimes with a cert that could have been used by anyone at ahospital.com, and sometimes with a cert whose private signing key is accessible to some subgroup of people at ahospital.com.

Many of us felt that flexibility in cert-to-address mappings was a good thing, since it would allow us to move towards greater accountability as implementations get better and better at the notoriously difficult cert management problem, while allowing simpler models to work initially. However, if you have a CA model where certs are expensive, it will be difficult to move towards greater accountability, because organizations will choose single certificates for cost reasons.

Mutual TLS vs TLS vs Protocol encryption?

What we could not agree on was whether and how to mandate TLS/SSL. This is what we did say:

2.6 Encryption. NHIN Direct messages sent over unsecured channels must be protected by standard encryption techniques using key material from the recipient’s valid, non-expired, non-revoked public certificate inheriting up to a configured Anchor certificate per 2.2. Normally this will mean symmetric encryption with key exchange encrypted with PKI. Implementations must also be able to ensure that source and destination endpoint addresses used for routing purposes are not disclosed in transit.

We did this to enable flexibility. The only thing we explicitly forbade was failing to use encryption to fully protect the addressing component. So no message-only encryption that leaves the addresses exposed.

This is a hugely complex issue. In an ideal world, we would have liked to enforce mutual TLS, where both the system initiating the connection and the system receiving it would need to provide certs. Mutual TLS would virtually eliminate spam/DDoS attacks, because even to initiate a connection you would need mutually trusted public certs.

However, there are several practical limitations to this. First, TLS does not support virtual hosting (using more than one domain with only one IP) without the TLS SNI extension. SNI is well supported in servers but poorly supported in browsers and client TLS implementations.

Further, only one cert can be presented by the server side of the connection, or at least that is what we have been led to believe, and I have not been able to create a “dual-signed” public cert in my own testing. That means that in order to have multiple certs per server, you have to have multiple ports open.

SRV records address both the limitation with virtual hosting and the need to present multiple certs on the server side. This is because SRV DNS records allow you to define a whole series of port and host combinations for any given TCP service. MX records, which provide the same fail-over capability for SMTP, do not allow you to specify a port. You can implement SMTP using SRV records, but that is a non-standard configuration, and the argument for that protocol is generally that it is well understood and easy to configure.

Ironically, only the XMPP protocol supports SRV out of the box, and it therefore enables a much higher level of default security in a commonly understood configuration. With this level of TLS handshaking, you can argue that only message-content encryption and message-content signing require certs beyond TLS, making the debate about SASL somewhat irrelevant. From a security perspective, you actually rejected the protocol with the best combination of security, availability and simplicity.
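For what it is worth, here is a minimal sketch of what mutual TLS means in code, using Python's standard ssl module. The certificate file names are hypothetical, and a real HISP deployment would obviously involve far more configuration than this.

```python
import socket
import ssl

# Server side: require the connecting HISP to present a cert that chains
# to a trusted NHIN anchor (file names are hypothetical placeholders).
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.load_cert_chain(certfile="my_hisp_cert.pem", keyfile="my_hisp_key.pem")
server_ctx.load_verify_locations(cafile="nhin_anchor.pem")
server_ctx.verify_mode = ssl.CERT_REQUIRED  # this is what makes the TLS "mutual"

# Client side: present our own cert and verify the server against the anchor.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="nhin_anchor.pem")
client_ctx.load_cert_chain(certfile="my_hisp_cert.pem", keyfile="my_hisp_key.pem")

def open_mutual_tls_connection(host, port):
    """Open a client connection that both verifies and is verified."""
    raw = socket.create_connection((host, port))
    return client_ctx.wrap_socket(raw, server_hostname=host)
```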

No assumption of configuration?

You rejected SMTP-only because you assumed that end users would be able to configure their NHIN Direct mail clients directly. Ironically, we did not specifically forbid things like that, because we viewed it as a “policy” decision. But the fact that we did not cover it does not imply that the SMTP configuration should happen in a way that allows for user security configuration. That is obviously a bad idea.

No one ever assumed that the right model for the SMTP end deployment would mean that a doctor installed a cert in his current Microsoft Outlook and then selectively used that cert to send some messages over the NHIN Direct network.

We were assuming SMTP deployments that present the user with options that exclude frequent security decisions. This might be as simple as saying “when you click this shortcut, Outlook will open and you can send NHIN Direct messages; when you click this shortcut, Outlook will open and you can send email messages”. The user might try to send NHIN Direct messages with the email client or vice versa, but when they make that mistake (which is a mistake that -will- happen no matter what protocol or interfaces are chosen) the respective client will simply refuse to send to the wrong network.

There are 16 different ways to enforce this both from a technology and a policy perspective, but we did not try to do that, because we were leaving those decisions up to local policy makers, HHS, and you.

You assumed that there were security implications to choosing SMTP that are simply not there.

On Simplicity

Lastly, I would like to point out that your recommendation was itself not actually simple. We in the S&T group spent lots of time looking at the problem of security architecture from the perspective of the four implementation groups. For each of them we focused only on the security of the core protocol, not on the security of the “HISP-to-user” portion. We have carefully evaluated the implications of each of these protocols from that perspective. We have been assuming that the HISP-to-user connection might use any number of reasonable authentication, encryption and protocol combinations. Our responsibility was only to secure the connection between nodes.

With that limitation, you have chosen just “REST” as the implementation choice, precisely because you see it as a “simple” way to develop the core. The REST team has done some good work, and I think that is a reasonable protocol option. But I am baffled that you see it as “simple”.

If we choose REST, we have no message exchange protocol; we have a software development style, and we must build a message exchange protocol out of that development tool. With SMTP, XMPP and, to a lesser extent, IHE, you are configuring software that already exists to perform in an agreed-upon secure fashion. There are distinct advantages to the “build it” approach, but from a security perspective, simplicity is not one of them. I think you are underestimating the complexity of messaging generally. You have to sort out things like:

  • store and forward,
  • compatible availability schemes,
  • message validity checking (spam handling),
  • delivery status notifications,
  • character set handling,
  • bounce messages.

The REST implementation will have to either build all of that or borrow it from SMTP implementations, much the same way they now borrow S/MIME. I would encourage you to look at the “related RFCs” for a small taste of all the messaging-related problems that the SMTP protocol has grown to serve. XMPP was originally designed to eclipse the SMTP standard, so it is similarly broad in scope and functionality. Both SMTP and XMPP have had extensive security analysis, and multiple implementations have had vulnerabilities found and patched. IHE actually takes a more limited approach to what a message can be about and what it can do. It is not trying to be a generalized messaging protocol, and it is arguably better at patient-oriented messaging and worse at generalized messaging as a result.

But in all three cases, XMPP, SMTP and IHE, you are talking about configuring a secure messaging infrastructure instead of building one. The notion that REST is “faster to develop with” is largely irrelevant. It’s like saying “We have three options: Windows, Linux, or writing a new operating system in Python because Python is simpler than C.” When put that way, you can see the deeply problematic notion of “simplicity” that you are putting forward.

All three of the other protocols, at least from the perspective of security, are easier to account for because the platforms are known quantities. A REST implementation will be more difficult to secure because you are trying to secure totally new software implementing a totally new protocol.

I want to be clear: I am not arguing against REST as an implementation choice. The central advantage of a REST implementation is that you can focus the implementation on solving the specific use cases of meaningful use. You can have a little less focus on messaging generally, simplifying the problem of a new protocol, and focus on features that directly address meaningful use. It’s a smaller target, and that could be valuable. It’s like a midway point between the generalized messaging approach found in XMPP and SMTP and the too-specific, single-patient-oriented IHE messaging protocol.

But if you do choose REST, do not do so thinking that it is the “simple” protocol choice.

Conclusion

Beyond the security issues, there are good reasons to prefer any of the implementation protocols. I wanted to be clear that we are expecting your group to have things to say about the things we did not decide (or at least to know what it means to say nothing), and to make certain that nothing we wrote in the S&T group was biasing you for or against any particular implementation, all of which are basically compatible with what our group has done.

Thanks,

-FT

What protocol for NHIN Direct?

[Update 6-10-10: added new spectrum “False Interface Potential” based on a comment from David Tao. Added “End User Perception” based on a comment from Erik Pupo. Removed controversial portions. Full explanation at the end.]

Currently, the NHIN Direct project is having a debate around what protocol to use.

NHIN Direct is supposed to enable something that “feels like email” to clinical and patient users, but in fact allows for the secure transfer of PHI over the Internet between clinical and patient parties. Essentially, a secure network for PHI messages.

But how to do that? We have to choose a technology suite in order to make that happen and it is not clear which protocol we should be choosing.

Yesterday, I listened to a call where the various implementation teams made the “case” for their particular protocol.

First, let me say that I have great sympathy for these implementers; all of them have put lots of work and thought into their particular implementation approach, and all of them have made a good case. However, Open Source is a meritocracy. We have to decide on one “best” approach for NHIN Direct, and not everyone will get what they want. In competitions like this, we might have at least one very happy implementation group, but we are sure to have several approaches that are abandoned. Those people who have worked hard on implementations that will not be used are in a difficult social situation. They will inevitably feel that the group has chosen poorly and that their work was overlooked. But at the same time the group will be asking for their help in unifying the project behind a single approach (if not a single protocol). As a developer, being in that situation and being rejected really hurts. I have been in the Open Source community long enough to have been on both the “winning” side and the “losing” side of this several times. It is better to win.

It is critical that the larger group show its appreciation for the work of everyone, even as it rejects most of the approaches. Contributing to an Open Source project like this is expensive emotionally and financially, and just because some implementations will “lose” does not mean that they were valueless. In fact, as we abandon implementation approaches, the project members will do well to recognize the abandoned approaches as “templates for improvement” and/or “concessions to expediency”. “Right” is not really a position anyone gets to take.

To all of the implementation groups: Thank you. Your work, whether embraced or not, is impressive. As you can see later, every approach has merits not found in other approaches. This will be a tough decision.

XMPP

XMPP is basically a chat protocol.

Advantages: Chat has some fundamental advantages over email (SMTP). When a user is offline in a chat context, you can still send them messages, so chat “falls back to” a stored-message system like email. But when a user is online, that information can be selectively broadcast to other users, who can then send messages knowing that the person is right there at the keyboard. XMPP was designed from the ground up to handle real-time messaging, and so it might be more appropriate if you consider situations (e.g. surgery) where you need to transfer messages and attachments and get information back in near-real-time. In fact, you might consider an XMPP system “more compatible” with a tele-medicine approach for this reason. There are also some interesting “subscribe/broadcast” models that XMPP has built into the protocol. There are lots of solid implementations of XMPP currently in the market, and the mature ones should be sufficiently configurable to handle the complex security configuration that NHIN Direct will require. So there is a big pile of existing software available, and most of it is Open Source.

Disadvantages:

Not many people understand XMPP and it is not as widely used as email. It is not out-of-the-box compatible with NHIN Exchange.

Score:

  • New Implementation Coding Required: almost none: 10
  • Open Source implementations available: Lots: 10
  • Existing experts: some but not many: 5
  • Compatibility with NHIN Exchange: must be built from scratch: 1
  • Future Flexibility: the protocol is the protocol, new stuff has to go on top: 3
  • Mature Standard: Yes huge chat networks already running: 10
  • End User Interface Familiarity: Lots of people use chat: 7
  • False Interface Potential: Might make users think “this is just chat”: 2
  • End User Perception: You are using chat for secure PHI exchange? Wrong perception of course, but still not good: 2
  • Cool feature bonus: +5 for enabling real-time chat

Total: 55

REST

REST is a web-based software design philosophy.

Advantages: REST lets you build really complex things really quickly. As proof, the REST implementation basically already has a working prototype. REST is extremely powerful and will allow rapid iteration of future versions. As we have future requirements we can build them easily.

Disadvantages: You are building a messaging infrastructure from scratch. You will have to re-invent strategies for reliability, message queuing, and countless other things that you do not realize that SMTP or XMPP are doing for you. It is not out-of-the-box compatible with NHIN Exchange.

Score:

  • New Implementation Coding Required: almost everything: 1
  • Open Source implementations available: Lots of good REST libraries: 10
  • Existing experts: no experts in what does not yet exist: 1
  • Compatibility with NHIN Exchange: must be built from scratch: 1
  • Future Flexibility: Allows for very rapid iteration: 10
  • Mature Standard: While REST is mature, the application design on top of it is not at all mature; it will be totally new: 1
  • End User Interface Familiarity: Interfaces will likely mimic email but they will be untested and totally new: 3
  • False Interface Potential: Because the interfaces will be new, notions of PHI transfer can be built in: 5
  • End User Perception: Essentially none, its reputation will stand on itself, and because we will do a good job, this is not a disadvantage: 10

Total: 42

IHE

IHE is a set of profiles for exchanging health information. It is designed from the ground up to handle the complexities of health information exchange. The relevant profiles are XDR and XDM.

Advantages: Using the IHE messaging standard means that NHIN Direct users would be participating in the more advanced NHIN Exchange; they just would not be using all of its power. Push-only messaging has some fundamental limitations, and fully participating in the larger NHIN Exchange will allow providers to embrace larger portions of the more advanced health information network when those features are needed. Mirth, Open Health Tools, and MOSS all provide excellent Open Source implementations of these protocols. IHE provides a formalized mechanism, in an international environment, for improving and updating the standard. The largest EHR vendors already have support for IHE.

Disadvantages: IHE is a moving target. The NHIN CONNECT project is already handling the considerable complexities of mapping to those standards and profiles even as they are finalized. There is already tension between what CONNECT actually does and what the standards say should be done. The whole point of NHIN Direct is to provide a much simpler model of health data exchange than is available with NHIN Exchange. While the big EHR vendors typically have IHE-compliant implementations, it is only in the last few years that they have been successfully using them to connect to each other at the Connectathon. Given that, it is something of a stretch to hold out IHE as a fully mature standard. [Update 6-10-10: section removed]

Discussion: The NHIN Exchange will have direct messaging through IHE protocols. Unless you want an AOL/Compuserve (or Twitter/identi.ca/GoogleBuzz) style messaging split between the NHIN Direct and NHIN Exchange networks, there will have to be a bridge between the messaging protocols. Given that bridge, the real question becomes “Is there any reason not to use IHE messaging as a backbone and some other protocol at the edges?” (which is essentially what the IHE team is proposing). From what I could tell, the critical issue is how much data is “required” to be included as a minimum in any IHE message. Apparently, it will break down if there is not a patient ID of some kind in the message (I do not mean a social security number, but something assigned locally by a computer program). This excludes people who do not have a list of patient IDs to send from using NHIN Direct to send messages (they can still receive). This essentially makes EHR integration a requirement, excluding those without a computer system at all, or those who are using an antiquated practice management system whose patient IDs would be difficult to access. So the real question is: do we use IHE as the core technology or merely bridge to it?

  • New Implementation Coding Required: Everyone except the large vendors will have to code or adopt Open Source libraries: 5
  • Open Source implementations available: Several very promising projects already in live use: 10
  • Existing experts: IHE is not a well-known protocol: 3
  • Compatibility with NHIN Exchange: it is identical: 10
  • Future Flexibility: There is a formal process in place, but it is hardly fast; it has been going for decades: 4
  • Mature Standard: Still in flux: 3
  • End User Interface Familiarity: Very few clinicians use IHE now, so the interfaces are immature: 3
  • False Interface Potential: Although relatively young, the existing IHE messaging clients are designed to handle PHI notions: 7
  • End User Perception: IHE is totally “legit” they might not be happy with it, but they will respect it: 10

Total: 50

SMTP

SMTP is the protocol behind email.

Advantages: There is a huge expanse of people who are familiar with this protocol stack. It has already been extensively used for this purpose and in this way (with S/MIME). It is the simplest known-good solution. It is the protocol to beat. Plenty of Open Source implementations. Super mature standard that has withstood decades of security scrutiny.

Disadvantages: Email is great and email sucks. It does not have some of the features of XMPP and IHE, and it is such a broad protocol that its many options make predictions about how it will be used difficult. It is not out-of-the-box compatible with NHIN Exchange.

  • New Implementation Coding Required: Everything is done: 10
  • Open Source implementations available: old and mature projects to most anything: 10
  • Existing experts: You can’t swing a cat without hitting an SMTP expert: 10
  • Compatibility with NHIN Exchange: must be built from scratch: 1
  • Future Flexibility: the protocol is the protocol, new stuff has to go on top: 3
  • Mature Standard: Is there a more mature standard anywhere?: 10
  • End User Interface Familiarity: Many people familiar with many different mature interfaces: 10
  • False Interface Potential: Users may make the mistake of not seeing that this is -not- just email while using traditional clients: 1
  • End User Perception: they might be a little uncomfortable with “email” but if you say “secure email” that might work: 8

Total: 63

Epilogue

It should be noted that my scoring system is somewhat arbitrary. I think my assessments are fair, but you could easily get different “totals” by choosing different items to include in the comparison. What significant categories of comparison does this ignore? Suggest something in the comments and, if it makes sense, I will add it.

Update 6-10-10:

So several people have commented on this blog post, and I would encourage you to read all of the comments. They are uniformly well reasoned, even when I disagree with them. I will be updating the scoring with two new categories, and I wanted to explain them.

The first is my response to a comment from David Tao regarding the user interfaces. My previous scoring was biased in favor of older, mature user interfaces. David’s comments made me realize that if a mature interface gives a clinician or a patient the wrong idea about the workings of the system, then even though it is a more stable interface, it can detract from the overall impact of the UI. There is an argument to be made that a new messaging paradigm (i.e. secure PHI exchange) should be paired with a new interface that exposes the important differences involved. However, I cannot give IHE a “10” in that new category, because the IHE messaging systems, in the context of a national exchange, are still not mature, but they might still get some benefit from being tailored to PHI exchange. I think both the REST model and the IHE model, which favor newer interface designs, should get points here, and IHE should beat REST because they have been testing their new designs for some time. So that is what I mean by the “false interface potential” category.

The second is my response to a comment from Erik Pupo. Erik believes that a secure messaging infrastructure based on a chat protocol (XMPP) will not be taken as seriously as the other protocols. This is somewhat sad, since I think this is an incorrect public perception about the value of a reliable protocol, but it is unwise for us to try to change that unfair public perception even as we try to encourage adoption. It’s like saying “here, use this tool that you do not take seriously…”, which is not a cogent message for us. So I have added the category of “End User Perception” and dinged XMPP because of this. I think email, as a known protocol, will also have a slight disadvantage here.

Lastly, I have removed my original criticism of EHRA. Of course, I am still right about the issue 😉 but the project managers decided that the discussion was unproductive. Given that, it seemed wise to remove that content from this page so that it can remain a “somewhat not totally subjective” resource. This post is intended to further the legitimate and focused technical debate, and not have us going around in circles about more fundamental issues that we are unlikely to agree on.

-FT

What is NHIN Direct? (alpha)

Recently a member of the FOSS Health community wrote to me:

So I’m confused by NHIN Direct. Why not simply use S/MIME or PGP email? Why five different ways of addressing people, when really only the email-address format makes sense to the average internet user?

I have been pretty confused by NHIN Direct for quite some time. But I have finally invested enough time that I can discuss the aim of the project somewhat succinctly. Note that this is essentially my re-phrasing of the NHIN Direct FAQ item “What is NHIN Direct”. To implement code, we need to have very exact definitions of what we will or will not do, and often that careful phrasing, while making it easier to code, makes things harder to understand. So the site above, in its current form, defines it like this:

NHIN Direct is the set of standards, policies and services that enable simple, secure transport of health information between authorized care providers. NHIN Direct enables standards-based health information exchange in support of core Stage 1 Meaningful Use measures, including communication of summary care records, referrals, discharge summaries and other clinical documents in support of continuity of care and medication reconciliation, and communication of laboratory results to providers.

Let’s re-write that in English.

NHIN Direct is like “email for doctors”. NHIN Direct is a way for doctors, patients and other healthcare providers to send each other messages, which will feel like email messages but are different in two important ways. First, the messages can have smart “attachments” that are essentially patient records in standardized formats (CCR/CCD/etc), and second, unlike email, the messages will be sent over a secure network in a HIPAA-compliant way. Generally, NHIN Direct should replace the current use of fax and email for the transfer of medical records in the US, and provide a stepping stone to greater interoperability with NHIN Exchange (which is much smarter than just email).

The problem is, at this stage, that you cannot really go much deeper than this high-level thinking, because the NHIN Direct project has not yet settled on which protocol it will use to enable the messaging. The current candidates are SMTP with S/MIME for handling encryption, XMPP also with S/MIME, REST, and the IHE direct messaging profile. I am going to follow this post with a more detailed discussion of that particular decision and its implications, but until that decision is made, it is not really possible to discuss the NHIN Direct model much further. In that later post I will address the first part of my friend’s question more clearly.

The second part of the question, “Why the different ways of addressing people?”, can be answered now. The NHIN Direct group had a “how do we address” discussion before we settled on an implementation protocol. That meant that the addressing specification had to be implementable using several different protocol stacks. However, the decision was made that all of the addressing mechanisms must be “transferable” into something that looks just like email. Let’s imagine that I was going to host my own NHIN Direct node. My address might look like fredtrotter@nhin.fredtrotter.com. When my doctor wanted to send me a message, that is what he would type into his messaging system. If NHIN Direct decided to go with SMTP, then my address, as it is routed across the NHIN Direct network, would look just the way my doctor typed it. But if NHIN Direct uses REST, then it might get transformed into a URI, like this: https://nhin.fredtrotter.com/nhin/v1/nhin.fredtrotter.com/fredtrotter/ . That might look scary, but everyone using NHIN Direct can keep thinking in terms of email addresses, because the REST implementation would convert fredtrotter@nhin.fredtrotter.com automatically; we would never even know it was happening.
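
To illustrate, here is a toy Python sketch of the kind of translation a REST implementation might do behind the scenes. The URI layout simply mirrors my made-up example above; the real NHIN Direct REST specification could lay things out differently.

    # Toy sketch: turning an email-style NHIN Direct address into a REST URI.
    # The URI layout mirrors the hypothetical example in the text, nothing more.
    def direct_address_to_uri(address: str, version: str = "v1") -> str:
        local_part, domain = address.split("@", 1)
        return f"https://{domain}/nhin/{version}/{domain}/{local_part}/"

    print(direct_address_to_uri("fredtrotter@nhin.fredtrotter.com"))
    # -> https://nhin.fredtrotter.com/nhin/v1/nhin.fredtrotter.com/fredtrotter/

The important part is that the translation is mechanical, so users only ever see the email-style address.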

Eventually, I will extend this article into a better natural-language description of the NHIN Direct project, which means later versions will not discuss “if we choose X protocol” but will instead focus on the protocol that is actually chosen.

-FT

NHIN Security and Privacy, who is responsible?

I do not know the answer to this question but I am trying to figure it out.

I am an active member of the Security and Trust Workgroup of the NHIN Direct project. We are making a few “rubber meets the road” security infrastructure decisions there. At the same time, we are very intentionally trying to “bubble up” security and privacy policy decisions to other policy-making organizations. But I have to admit, I am not sure who those people are.

In one sense, every healthcare provider in the US will have to make security and privacy policy decisions on their own. There are already some good laws regarding health information and one might argue that given those laws, specific policy details should be left up to providers.

Of course, HHS has an ARRA-created group called the HITPC (Health Information Technology Policy Committee) that will apparently be playing a central role in general NHIN policy making. Further, there is a sub-committee there called the Privacy & Security Policy Workgroup. Apparently, if there is a single group to which my group would “bubble up” issues… this would be it. Their charter is:

The Privacy & Security Policy Workgroup will address Privacy and Security in the health IT policy context. At a very high level, the new Privacy & Security Policy Workgroup will define and address the policy challenges related to privacy and security; discuss a set of principles around privacy and security; and various methods of ensuring privacy and security.

The term “very high level” is somewhat problematic from my perspective, because the kinds of questions I would like to see answered are pretty specific, like “What should NHIN Direct users take into consideration as they choose a provider of X.509 certificates?” That does not sound to me like a “very high level” question.
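
Just to show how specific that question gets, here is a small sketch of the sort of sanity check a Direct node might run on a certificate before trusting it, using the Python cryptography library (a recent release). Which issuers to trust, what key sizes to require, and so on are exactly the policy calls I want someone else to own; the values below are placeholders, not policy.

    # Sketch: basic sanity checks on an X.509 certificate before trusting it.
    # The trusted issuer list and minimum key size are placeholders, not policy.
    from datetime import datetime, timezone
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives.asymmetric import rsa

    TRUSTED_ISSUERS = {"Example Health CA"}  # hypothetical

    def check_certificate(pem_bytes: bytes) -> list:
        cert = x509.load_pem_x509_certificate(pem_bytes)
        problems = []
        now = datetime.now(timezone.utc)
        if not (cert.not_valid_before_utc <= now <= cert.not_valid_after_utc):
            problems.append("certificate is expired or not yet valid")
        issuer_cn = cert.issuer.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
        if issuer_cn not in TRUSTED_ISSUERS:
            problems.append("issuer %r is not on the trusted list" % issuer_cn)
        key = cert.public_key()
        if isinstance(key, rsa.RSAPublicKey) and key.key_size < 2048:
            problems.append("RSA key is shorter than 2048 bits")
        return problems

Every line of that sketch hides a policy question: who maintains the trusted issuer list, what key size is enough, how revocation is checked. Those are the answers I am looking for.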

However, there are some people in this group who have technical know-how. At least some of them should be able to speak the language that I am trying to use. Some of them I know personally. Others I have never heard of. I decided that I would share with you what little information I was able to glean about this small group…

This is exactly the type of group that should be overseeing high-level security and privacy issues. They have lots of different perspectives and lots of different skills, but they all have a very relevant role to play in the future of healthcare information privacy in the United States. But I do not think this is the group to answer the question: “What should NHIN Direct users take into consideration as they choose a provider of X.509 certificates?”

I am happy that at least some of the members of this group will know what I am talking about.

I hope this linked list of names is more helpful to you than the list at HHS, which does not really tell you much.

-FT

Responding to Sweeney

I am again discussing the privacy comments from Dr. Latanya Sweeney. She testified to Congress that both the NHIN CONNECT and NHIN Direct security models were flawed:

Figure 2(b) summarizes concerns about these two designs. The NHIN Limited Production Exchange has serious privacy issues but more utility than NHIN Direct. On the other hand, NHIN Direct has fewer privacy issues, but insufficient utility. When combined, we realize the least of each design, providing an NHIN with limited utility and privacy concerns.

You mean both projects are un-private?

I have recently posted about the assumption that NHIN Direct is less functional than NHIN CONNECT. So now I want to talk only about the privacy failings that Dr. Sweeney implies.

I summarize the statement above, and her testimony generally, to mean that Dr. Sweeney believes that “NHIN CONNECT and NHIN Direct both fail to protect privacy, but NHIN Direct is the lesser of two evils”. That is a meaningless statement. It is easier to see this if you speak in terms of known Open Source applications. For instance, if I say “Apache does not support privacy” or “Firefox does not support privacy”, it becomes pretty clear that the point is hog-wash. I can use Apache to set up a website that only I and my family can access. I can also use Apache to create a website that will abuse users’ privacy the same way that Facebook does. Similarly, Firefox can be configured to behave in a more private or less private way through settings.

Moreover both projects can be used as a software platform that can allow other programs to increase or decrease privacy. mod_ssl is a perfect example with Apache, and this is even more apparent with Firefox, which already has tools that help publish browsing habits, and also has several tools to make browsing more private. (Update Jan 9, 2018 These FireFox links will no longer work, as described here)

As with any software project and protocol, it is possible that there are privacy implications in the underlying protocols, and there is also the possibility that, through mis-implementation, either Open Source project could create a security or privacy risk where none exists in a correct implementation of the protocol. There is nothing to be done about this. This is a problem with software generally, and the best we can do is to put both the protocols and the implementations of those protocols out in the open so that security researchers can look for flaws. For both NHIN CONNECT and NHIN Direct this has already been done or is happening now.

Furthermore, the underlying protocols for both NHIN Direct and CONNECT are designed to allow for different kinds of privacy policy and enforcement. From a configuration point of view, both projects will be able to support extensive consumer consent options, for instance, or they could be configured to generally ignore consumer consent. For that matter, they could both be configured not to share any information at all, or to only accept in-bound data and not send data out.
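
Here is a toy sketch of what I mean by “configuration”: the same exchange code can be pointed at very different privacy postures. The policy fields and names are invented for illustration and are not taken from either project.

    # Toy sketch: one code path, very different privacy behavior by configuration.
    # Field names and policy options are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class NodePolicy:
        require_patient_consent: bool = True
        allow_outbound: bool = True
        allow_inbound: bool = True

    def may_disclose(policy, patient_has_consented, direction):
        if direction == "outbound" and not policy.allow_outbound:
            return False
        if direction == "inbound" and not policy.allow_inbound:
            return False
        if policy.require_patient_consent and not patient_has_consented:
            return False
        return True

    # A node configured to accept in-bound data only:
    inbound_only = NodePolicy(allow_outbound=False)
    assert not may_disclose(inbound_only, True, "outbound")

The privacy properties live in the policy object, not in the protocol, which is the whole point.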

From a merely rational point of view, most doctors in the US have email accounts but rarely use them to send PHI. When they do send PHI, it is usually legal and privacy-respecting. That is not always true, but there is nothing we can do with “email” itself to make it more true. It is not about the technology but how you use it.

Design Patterns are a Straw Man

Dr. Sweeney suggests that “For example, a domestic violence stalker can use the system to locate victims.” But each node on the current and future NHIN Exchange will have the ability to monitor for strange searches, and should be able to easily detect if a user frequently searches for 20-25 year old women who live near them. Are those policies and procedures enough? Hard to say, since I have no idea what they are. Latanya does not indicate what they are either, but still makes this assertion.

Generally, asserting privacy violations without evidence seems to be the modus operandi for the Patient Privacy Rights team, and Latanya seems to be holding to form. Ironically, the new NHIN Exchange should allow for detection of the kind of abuse that Dr. Peel and Dr. Sweeney assert is common, while the current fax-based system allows it to go undetected. Dr. Sweeney gives several “for examples”:

  • for example, a domestic violence stalker can use the system to locate victims.
  • for example, an insider could receive notifications of all abortions performed at other organizations.

In both of these examples, if the “black hat” in question is currently monitoring all incoming faxes at a local Planned Parenthood headquarters, and has a pen and paper handy, he can get all of this information now… but there is no way to detect that. It would not have to be a betrayal by an insider either. A tap could be placed on the fax line, even from outside the building. Both of these attacks are undetectable today. Ironically, the NHIN Exchange would sometimes prevent these kinds of abuses, and the rest of the time it would provide precisely the evidence of information leaking that both Dr. Peel and Dr. Sweeney assert is common.

Dr. Sweeney asserts:

Corrections (see Figure 5). In the data sharing environments described so far, there is no mechanism for propagating corrections or updating patient information.

But then she also says:

In one version [10], event messaging allowed 3rd party notification of patient information outside the direct care of the patient and without the patient’s knowledge.

I do not want to straw-man her position here; she is talking about two different theoretical designs as she makes these statements. But there is a tradeoff here. The actual standards (Section 1.3) implemented in NHIN Exchange specifically state that:

In addition to “Query/Retrieve” and “Push”, the NHIN must support a publish and subscribe information exchange pattern in order to enable a wide range of transaction types. HIEM defines a generic publish and subscribe mechanism that can be used to enable various use cases. Examples include….   Support for notification of the availability of new or updated data

So in the real world, we do have a problem with “event messaging” potentially being used as a means to violate privacy, but because of event messaging we do not have a problem with broadcasting corrections to patient data. There is a tradeoff here. Exactly the kind of tradeoff that Dr. Sweeney says we do not need to make:

Figure 2(a) depicts the traditional false belief of trading privacy for utility. It also shows our 9-year experience in the Data Privacy Lab of finding sweet spots of solutions that give privacy and utility. The key to our success has been technology design.

I want to be clear: I agree that there are sweet spots of “acceptable privacy and acceptable utility”. In fact, in this situation the “technology fix” is to do extensive logging and auditing of who is looking at what, so that you can detect the abuse that a comprehensive alert system makes possible. I could talk about how you might do that, but again, looking at the actual standards you find a comprehensive description (Section 5).
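
For the curious, here is a rough sketch of the kind of audit-log analysis I have in mind: flag any user whose queries repeatedly match a suspiciously narrow demographic pattern, like the stalker scenario above. The log format and threshold are invented; the actual audit requirements are in the specification linked above.

    # Rough sketch: flag users whose audit-logged queries keep targeting a
    # narrow demographic pattern. Log format and threshold are invented.
    from collections import Counter

    def suspicious_users(audit_log, threshold=10):
        # audit_log: iterable of dicts like
        # {"user": "jdoe", "query_sex": "F", "query_age_min": 20, "query_age_max": 25}
        counts = Counter()
        for entry in audit_log:
            narrow_age = (entry["query_age_max"] - entry["query_age_min"]) <= 5
            if entry["query_sex"] == "F" and narrow_age:
                counts[entry["user"]] += 1
        return [user for user, n in counts.items() if n >= threshold]

None of this is clever; it only works if the exchange logs who searched for what, which is precisely what the standards require.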

Dr. Sweeney ends her testimony with the suggestion that:

Performing risk analyses on design patterns provides a clear, informative path.

But that is simply not true. In fact, her own “start” at performing a risk analysis on design patterns serves only to devalue the work that has already been done on NHIN CONNECT and is now being done on NHIN Direct. Criticizing a design pattern not used in a given piece of software, and then implying that the software suffers from those problems anyway, is a straw-man argument.

Dr. Peel has said to me that she feels that Dr. Sweeney is being ignored, and I can see why now that I have carefully read her testimony. I would welcome specific criticisms of the security protocol designs that we will be using on NHIN Direct. But I would suggest that Dr. Sweeney criticize either particular software or at least a specific “design” rather than a “design pattern”. Currently Dr. Sweeney is over-simplifying NHIN CONNECT by talking about “design patterns” which are not used. The first consensus draft of the NHIN Direct security protocol design does not even exist as I am writing this post, while Dr. Sweeney’s testimony is already more than a month old. I fail to see how she could say anything legitimate about the NHIN Direct security and privacy design at all, one way or another.

I would suggest, as I said to Dr. Peel in person not 48 hours ago, that if Dr. Peel, Dr. Sweeney and Patient Privacy Rights generally want to be included in the relevant discussions, then they should try to keep their discussions relevant. You are literally criticizing what we are NOT doing, and then implying that ONC is not working with you. I know that your hearts are in the right place, but I cannot code what you describe.

-FT

Halamka on Open Source Healthcare

John Halamka is a pretty important blogger and policy maker. He is a fan of Open Source and has been positive about it for years. He even let me write a guest post on his blog about it.

John Williams sent me a link to the recorded audio and slides of his recent talk at the Open Your World Forum.

He discusses just how prevalent the notions of Open Source and open data standards already are in the coming ARRA-based Health Informatics world. He even homes in on critical problems like the proprietary CPT codebase.

He also gives compelling clues about how critical Red Hat servers are in his environment, and how Linux and other platform-level Open Source has been well adopted.

-FT