On Internet Marketing

Recently several people have asked me for advice and counsel on how to do Internet Marketing. It looks like my day job is considering taking the plunge as well. As a blogger, I know an opportunity to kill two birds with one blog post when I see one. So here are my thoughts on Internet Marketing in the age of social media.

Some context

First, a little history. It used to be that Internet Marketing was all about communicating effectively through a web site and email.

For a website, the general advice was that you wanted it to be important in Google’s eyes, essentially the process of SEO. You wanted to make sure that your domain name was easy to type and easy to spell. You wanted to make sure that your users could find what they wanted on your web site. You also wanted great analytics tools so that you could track how your web site was being used.

For email communications you had to decide if you wanted to have only an outgoing email campaign (broadcast only), or a mailing list (communication between everyone). If you wanted an email campaign you wanted to make sure that you had beautiful HTML emails that degraded gracefully into text emails. You needed to be sure that your HTML emails worked in the most common mail clients (harder than it sounds). For a mailing list you wanted to make sure that no one was added by mistake, and that no one was spamming people through your list.

Anyone who knows about these types of marketing systems can tell that I am barely scratching the surface on these issues. Moreover, it is also clear that while so-called “Social Media” has become really important, these older modes of communication are not less important, they are just… older.

So that is the backdrop, in brief, for the Social Media revolution. What is the big deal about Social Media? Way out of scope for this post, but I will inline probably the best video proving the point that I have seen. If you have not seen it, then watch it. If you have, then you probably already know why Social Media is a big deal.

The question posed at the beginning of this video is “Is it a fad or is it a revolution?” As is often the case with questions like that, the answer is “Yes”. Social media has lots and lots of people connecting meaningfully with lots and lots of people, but that does not mean that you will be able to get your message across using Social Media. All it means is that there are people there. It’s a lot like Lubbock, Texas. For whatever reason, there are hundreds of thousands of people living in what appears to be a desert. Why would you want to live there? Because there are hundreds of thousands of people there. There is probably a reason why the original settlers founded that city, but no one moves there now because of scenery, they move there because there are already people living there. This is much different than something like Las Vegas. It’s a city in the desert that was built specifically so gambling could be legal. That is why people moved there. (I also do not understand Phoenix…)

So the next question is “What is your message?”

Your Message

Social Media people often “sell” Social Media as “the new business requirement”. They say things like “You have to be on Facebook” or “You need to have a Twitter account”. But that is really not the first question. The first question is “What is your message?”. Unless you can define your message clearly, in a sentence or two, nothing else I am going to say will make sense. For fun, and because we are going to talk about Twitter soon, see if you can put your message into 140 characters. That is basically two short sentences or one really long one. It’s OK if you need to go over a little, but if you need four or five 140-character blocks, that should be a sign that you have more than one message. That is OK, but you need to recognize that you might need to follow significantly different strategies for each of your different messages.

So do you have your message(s) in your head? OK then.

Impact

To make an impact you have to learn to use the Internet Marketing tools well and then you have to apply them in a meaningful way. This is a lot like a carpenter’s toolbelt or a musician’s set of instruments. First you have to master the tool and understand the deep implications of the subtle details of each given tool. Just because a tool is a type of hammer does not mean you can use it to mount photos (imagine using a sledgehammer to tap a nail into drywall). Just because it is an instrument does not mean that you will fit in with a given band (imagine bringing a tuba into a rock band). The first level of tool mastery is understanding how to use the right tool for the right job. The second stage of mastery is knowing when to ignore what you learned in the first phase (for instance, Ska is a movement, within rock music, to embrace brass instruments).

Note that true mastery of a tool is being able to use the tool to do something else amazing. When Michelangelo was painting the ceiling… there were thousands of painters who knew how to use a paintbrush as carefully as he did. But they were probably painting signs, or the sides of barns. The ability to use the paintbrush is only the first step towards being Michelangelo.

This should sound obvious. But here is how I think this basic tool mastery is playing out in Internet marketing.

Phases of Online Marketing Tool Use

OK, so what do these phases mean? First I should admit that I was inspired to make this chart by two different sources. One is Meatball Sundae: Is Your Marketing out of Sync? by Seth Godin. The other is a blog post entitled The multiple phases of social media integration, which is where I borrowed my three phases (of course, as a computer scientist, I must count from zero).

  • Level 0 is nothing. If you read this article and ask “What is Facebook?” or “What is Twitter?” then this level is where you are. No problem, I will try to help you out with lots of good links.
  • Level 1 is using the Internet as a megaphone. This is when you treat your web site, Facebook page, Twitter account or whatever as a mass media device. You use it the same way people use radio, TV or newspapers: to send messages out to lots of people all at once.
  • Level 2 is using the Internet as a campfire. When you sit around a campfire and talk, the conversation shifts between you speaking to the group (like a megaphone) and the group speaking to you, one at a time. The group also speaks about you, in front of you. It enables public conversation in lots of different directions. In real life, campfires are a great time and place to do this, because the typical night-time acoustics allow for a large group (10 to 30 people even) to participate in a single conversation. But this does not scale. The whole point of Social Media is that you can have a campfire chat with hundreds, thousands or even millions of people all at once.
  • The third level represents full tool mastery. But this does not automatically mean that you get to paint the Sistine Chapel. It just means you know how to use the tool.
  • 3a is named after Blendtech, a company that has successfully used Social Media to create a Sistine Chapel (more in a moment).
  • 3b is someone who is using the tools well, to do OK things, but is not doing anything truly meaningful. This would be like the painters who in 1512 were painting portraits or landscapes and are today forgotten. But they made a good living and their customers were happy.
  • 3c is like someone who was drawing graffiti on the walls in 1512. No matter how pretty a picture painted at night on a barn might be, in the morning it will be whitewashed. The skill is irrelevant; it is a matter of whether message and medium match up.

Obviously, what everyone wants is to make an impact with their marketing. To leave people with a message burned into their minds, and happy that it happened. Once people see the Sistine Chapel (on my bucket list) they will never think about it the same way again, and they will never forget the experience.

Is this possible with Internet Marketing? Yes. I will give you two examples.

First, if you have bought a book recently online and you immediately typed in Amazon.com then you have experienced this effect. The word “Amazon” has nothing to do with books. Yet when you want to buy a book on the Internet you probably go there automatically. Why? Because you have had a Sistine Chapel-style experience there and you will always remember it. Note that this is a great example of a company that was able to achieve this with just a website and without any kind of Social Media. Ebay and Google are other good examples.

Rather than just talk about the second example I will show you. Blendtech is a company that makes really good blenders. That is their message. That is what they want burned into your brain. After you watch the following videos, you will always remember that Blendtech is a company that makes really good blenders. You will be unable to remember the name of any other blender manufacturer, but you will never again have to wonder where you would get a really good blender… if you needed a really good blender. Please watch the following two episodes of “Will it Blend”.

What is a Meatball Sundae?

It’s two things that are great by themselves but still do not go together: meatballs and whipped cream/chocolate. This is the worst-case scenario for Internet Marketing efforts. This is what happens when you fail to recognize that Internet Marketing and/or Social Media (two terms for the same thing nowadays) really does change things deeply in your industry, and so you are unwilling to make the fundamental changes needed to make the leap.

This is actually a fundamental mistake that happens often in Health IT, which is what I like to call “Technology as Paint”. The basic notion is that technology can be liberally applied to make any existing thing better. This is the way you use paint. My wife and I recently bought a desk for $35. It was banged up and looked awful. We painted it. Now it looks like it cost $350. Paint is awesome like that!

But technology is not paint. You cannot take something that works without technology, merely make it “online” or “computerized” and assume that it will be better than the original system. The cardinal example of that in health IT is the we-are-going-to-computerize-the-dumb-doctors plan. Here is how the plan unfolds:

Doctor: “Hey business man, I want you to computerize me.”

Business Man: “That’s great! I have my favorite coder here with me, and we can help.”

Coder: “I can easily computerize you! I just did it for a Gas Station last week! No more paper forms at the Gas Station!! All I need to do is see all of your paper forms, and then I will computerize you by making computer versions of those forms.”

Doctor: “OK, here are the ten forms I regularly use.”

Coder: “OK I will be back in a week with your computer system built!!”

Five years pass…

Coder: “The system is almost ready, I have just finally got the ontology mapping tool together, you can go live next week!!”

Doctor: “You are fired. You have been charging me for five years to code and you have nothing to show for it. I still have to use paper because your system does not even do 10% of what the paper system does. Now I have five years worth of data in both paper and electronic records, and I can no longer afford to maintain the electronic system. I am sooo screwed, but at least I am going to stop paying you!”

This happens again and again and again in Health IT because so many technologists view technology as paint: standard technology, liberally applied, solves all problems.

Seth Godin’s book is really required reading. It details, very explicitly, how Social Media is not technology paint for marketing purposes. Any good summary of his points will show that you have to figure out if, and when, your message is right for the Internet medium in question. So when you hire someone to help you with Social Media, and they fail to show you how a given Social Media platform is good for your message, then they have failed. A pretty good sign that you are getting bad advice is that they recommend you go with the usual suspects. If they say “you should be on Youtube, Twitter and Facebook” without discussing how your message will play in those environments, then you need to take a step back.

It would be much better for you to do what Blendtech did, which is to find the one medium that allows you to create a super-compelling version of your message, and make that medium the “Sun” in your marketing “Solar System”. Sure, Blendtech uses Twitter and Facebook, but they do that to funnel people to their awesome videos, which in turn funnel people into buying an awesome blender.

Social Media Strategy as a Solar System

Message and Medium as a Solar System

What follows is a little more conjecture. I am pretty darn sure about the notions I have explained above. But without dealing with a specific message, it is difficult to know what the right center-of-gravity medium might be. But still here are some guidelines that make sense to me:

  • If your idea is best communicated in pictures, try Flickr or Picasa. They have really advanced tools that allow you to view a series of constantly updated photos as a stream on another site. A big hint when using pictures is that pictures with people in them are almost always more interesting than pictures without people. You can make Twitter, Facebook, and a plain old web page follow those photo streams. You might want to use Flickr/Picasa as the center of gravity if before-and-after photos are more compelling than a video, for instance. Lots of people have made this approach work.
  • If your points make sense as really short catch-phrases or have a very important real-time component, then Twitter or Identi.ca might be for you (Identi.ca is a freer but less popular version of Twitter). Shit My Dad Says, which is now a television show and a book (pretty good planets!!), is a good example of the catch-phrase-style Twitter feed. In Portland there are some food carts that you can only find by following them on Twitter. Note that you can easily add Facebook and Google Buzz as planets merely by propagating your status updates to those platforms.
  • If you want a deeper social engagement that includes videos, text, pictures or perhaps an application that you are writing yourself, perhaps Facebook is for you. A good question to ask about Facebook is “Am I anything like Farmville?” Again you can easily make Facebook updates propagate across the other platforms.
  • If you are trying to make a series of points that require carefully constructed arguments, then you need a blog. This gives you the ability to tie in all kinds of other content (like I did with Youtube videos here) to make very specific and complex points. But if you make enough of these points, then perhaps you are really slowly writing a book, and you should consider self-publishing it on CreateSpace or Lulu.
  • If you have already written a book, perhaps you need to split it apart into a blog.
  • Videos can be tremendously engaging and personal. If you have a story to tell, a parable of some kind, then this is the right medium. Even just a camera pointed at you can be very, very compelling if your story is good enough. You should be looking into Youtube, which is the king of the space, but also perhaps screencast.com if you want to show recordings of computer programs, or some other site if you have other specific video hosting requirements. Again, you can feed your videos into Facebook, Twitter, Identi.ca, and Google Buzz.
  • If for some reason your content would work really really well next to gmail, you might look at Google Buzz.
  • If you want to create complex person-to-person engagement between lots of people around a particular topic that they have a high level of interest in, then I would consider email mailing lists, online forums, or something like Google Groups, which is a pretty good fusion of both. Getting the “full message” in your email inbox is pretty valuable.
  • If you want to have things showing up in email Inboxes, but do not want to enable communication between the recipients then you probably want an email broadcasting service like MailChimp.
  • If you want to engage with professionals of one kind or another, LinkedIn is where you should start.
  • If you want a group to generate written content together, you need a wiki.
  • Face-to-face events can now be deeply connected to the Internet. I like using Eventbrite to schedule things like conferences, I like meetup.com for regular meetings, and when a meeting is really important, it should be live-streamed with something like livestream.com.
  • Sometimes what you need is a simulated three-dimensional space. Frankly, I have trouble understanding when this is a good thing… but if you see value in it, you want to use Second Life.
  • If you want a Facebook-style social network that you control, you want Ning.
  • If you want full control, including source code for your social network, then you want one of these.
  • If you have a health IT application that needs to interface with Doctors socially, then you want to work with Sermo.com

I hope this is helpful to the people whom I am trying to counsel on Social Media. It’s not just about using it, it’s about finding a way to use it in a compelling way!

-FT

You might be a cyborg….

People often do not get why I am so convinced that only GPL Software should be used in Medicine. I can understand why. Without understanding the nature of Healthcare, people assume that I am being religious about the issue. This is the furthest thing from the truth.

It has been a while since I have blogged over at GPLMedicine.org. In fact you can see that I still have some site maintenance to do. But recently more attention has been given to the issue of Open Source and Software Freedom in medicine.

The Software Freedom Law Center has just released a paper called Killed by Code: Software Transparency in Implantable Medical Devices

Awesome title. Even more awesome paper.

The form of the argument is so simple:

  1. Hey you are putting hardware AND software in my body? yep.
  2. I cannot look at the software? nope.
  3. And the software is hackable? yep.
  4. Well that kinda sucks.

Feels kinda icky don’t it?

One thing I love about people with pacemakers or other implantable medical devices is that they know they are cyborgs. Most people living in modern countries are cyborgs, but unlike people with pacemakers, they do not see it that way, because they carry their electronics rather than implanting them. Makes no difference. In fact, let’s play a variant of “You might be a redneck“: I call it “You might be a cyborg…”:

  • If you leave your cell phone at home and you -must- leave work to go home and get it, you might be a cyborg.
  • If you will sleep through the morning unless a machine wakes you up, you might be a cyborg.
  • If your spouse is jealous of your cell phone, tablet, laptop, server or workstation, you might be a cyborg
  • If not wearing a watch makes you uneasy, you might be a cyborg
  • If you view any relationship you have with an online service as an addiction, you might be a cyborg
  • If you try to avoid walking more than 100ft in favor of a segway, bicycle, golf cart, or automobile, you might be a cyborg
  • If you try to avoid walking more than 100ft in favor of a lawn mower, you might be a cyborg and a redneck

Our relationship with technology is becoming more and more personal, and the operating system on your mobile phone, the software your medical devices use, and the EHR system that your doctor uses to track your health information turn software freedom ethical issues into personal freedom ethical issues.

Today it’s people with pacemakers, but tomorrow there will be things that people consider normal to do with their own bodies that will either use software that the user controls, or software that some random company controls.

Thanks to the Software Freedom Law Center, for helping to make this issue more personal.

-FT

Funambol in healthcare

One of the things that I love about conferences like OSCON is that you meet people who are doing really interesting things coming out of left field. I often feel like I “know everyone” in Open Source healthcare, but every time I hear about something like this I am reminded just how big the world is. People are reworking Open Source tools to work in healthcare all the time!

The most recent example is from the Funambol project. That project is made to sync cell phone data, like calendars and contacts. But the Funambol teleport project instead uses the stack to move healthcare data around. I would go into detail, but there is no need, since Simona does a much better job:

OpenStack and Software Freedom in Healthcare IT

As clinicians, doctors and other healthcare providers are the stewards of their patients’ data. But what happens when they lose control over that healthcare data? Most people focus on what happens when that private data becomes too available. But far more commonly healthcare data becomes trapped. Far too often, it becomes buried in one way or another, lost forever and useless to patients.

I am probably the most vocal proponent of the notion that software freedom, the heart and soul of the Open Source movement, is the only way to do healthcare software. Over the years I have tried to highlight the threat posed by vendor lock-in with healthcare software. But vendor lock is not the only way that healthcare data can become buried. Ignacio Valdes was the first to make this case clearly against ASP healthcare solutions with his post about how Browser Based EMR’s Threaten Software Freedom. That was written in 2007.

So you can imagine the types of concerns Ignacio and I had as we built Astronaut Shuttle (very much beta) together. Ignacio had the VistA EHR chops and I had enough cloud experience to create the first-ever cloud-based EHR offering. It’s a simple system: you use a simplified web interface to launch cloud-based instances of an EHR. The main difference between this kind of web interface and something like RightScale is that the launching system performs whole-disk encryption, allowing you to ensure that Amazon cannot access your healthcare data. As far as I know, no one else has built anything like this but us (I would love to hear otherwise in the comments).

Why are we some of the few people trying things like this? For one thing, encryption is pretty difficult to do in the cloud: there are lots of approaches and it is pretty easy to brick a cloud instance with an improper encryption configuration.

But more importantly, there is a perception that storing private healthcare data in the cloud is a bad idea, dangerous because it means putting all of your eggs in one basket.

Given how concerned Ignacio and I were about vendor lock and ASP lock, you can imagine our feelings about cloud lock. We had to be sure that our customers, doctors and other clinicians, would be able to restore Linux images containing precious EHR data from off-site backups.

When we looked out across the available cloud options, we decided to implement our service on Amazon’s EC2, specifically because of Eucalyptus, an open source implementation of the Amazon cloud hosting infrastructure.

However, we have been deeply concerned about this approach. Currently, you might say that Amazon has a “friendly” relationship with Eucalyptus, which of course means that Amazon has not crushed it like an itty-bitty bug. For Amazon, being able to point out that there were FOSS implementations available made it easier for EC2 to acquire certain customers. At the same time, by refusing to treat the EC2 and other AWS APIs as open standards, or to specifically state that they would not sue an open source implementation of their API, Amazon could always ensure that Eucalyptus would never be a threat.

“Wait a minute!” you might say… “Amazon is a Linux-friendly company! They would -never- betray the community by going after Eucalyptus…”

I think the Open Source community needs to wake up to corporations whose basic legal stance towards Open Source projects is to leave open the “smash if they succeed” option.

IBM has been a “friend” to the community for years. IBM even promised not to use specific software patents against us. They assured us that they were not a threat. But then they broke that promise. They broke it because someone in the community decided to implement software that threatened to break their monopoly on mainframe implementations. IBM turned on our community just as soon as our freedom started to threaten their bottom line. You are kidding yourself if you think Amazon will lose a billion dollars to Eucalyptus without reacting. Amazon has been very aggressive in acquiring software patents and will use them if Open Source implementations ever really get good.

I think Eucalyptus is an awesome project, but it lives at the whim of a corporation that tolerates it precisely because it is not a business threat.

It was with great trepidation that Ignacio and I built a health data infrastructure that we knew relied on the whim of a really big bookstore. (When you say it like that… you can see the problem more clearly)

With that said, I am happy to support and endorse the new OpenStack project. OpenStack is a move by Rackspace Cloud, the number one competitor to Amazon, to completely Open Source their cloud infrastructure. They will be releasing this work under the Apache license.

Open Source licenses are the only trust currency that I, as a health software developer, can rely on to ensure that no one can ever trap health data with software that I have recommended. “Probably won’t trap” or “Open Source friendly” simply do not cut it after IBM. Simply put, a full Open Source release is the most extreme thing that Rackspace can do to win my trust in their cloud infrastructure.

I have also been discussing with the Rackspace team the importance of building support for cloud-initiated encryption and cloud audit (thanks for the tip, samj) into OpenStack. These are must-have features to make healthcare data in the cloud a viable option.

As soon as we have the dev cycles available, we will be moving Astronaut Shuttle over to the Rackspace Cloud. I invite anyone who gives a damn about Software Freedom, or health information software generally, to follow us over.

-FT

NHIN and others at OSCON

I am just home from the first-ever health IT track at OSCON. The quality of the content was simply amazing, and soon you will be able to see the many talks online (thanks to Robert Wood Johnson for paying for the videos).

As I think about what I will be blogging about, I wanted to post some quick links to those who are already thinking about what was said and what it means. First, the conference organizer, at least from the health IT point of view, was Andy Oram. He already has two posts: one on the first day, and one highlighting the VistA controversies exposed at the conference.

Most of all, I wanted to point to this awesome interview with the leaders of the NHIN open source projects: NHIN CONNECT and NHIN Direct.

Tolven invited to privacy party

The Open Source Tolven project has been invited to the Privacy Technology showcase for the HITPC S&P Tiger team.

This is well-deserved recognition. Tolven has an extremely innovative architecture that dispenses with many of the bad assumptions that other EHR platforms make. The first of these is the assumption that an EHR platform should only be an EHR platform: Tolven is a combined EHR and PHR.

The second innovation is a well-thought-out encryption-at-rest strategy.

Hopefully a recording of the presentation will be available after the meeting.

NPI data, the doctors’ social network

(Update Feb 18 2011: search.npidentify.com has moved to docnpi.com; I have adjusted links accordingly.) I have been working, part time, on a project for nearly two years to dramatically improve the quality and depth of information that is available on the Internet from the NPI database. For those not familiar, the NPI, or National Provider Identifier, is a government-issued health provider enumeration system. Anyone who bills Medicare or prescribes medication now has to have an NPI record, which basically means that it is a comprehensive list of individual and organizational healthcare providers in the United States. You can download the entire NPI database as a CSV file under FOIA. There are a little over three million records in that download.

Each healthcare provider provides both credentialing and taxonomy data for inclusion in the database. Healthcare provider taxonomy codes are a fancy way of detailing just what type of doctor you are. Because each provider -can- provide such rich data, there is a tremendous amount of unused information in the database. NPPES does not do very much data checking, so there is a lot of fat-finger data too. I have been working on scripts to improve the overall quality of the data as well as accelerate some obvious data mining applications. I am happy to announce that after several years of development, I am ready to beta-launch a dramatically improved NPI search service.

Please visit docnpi.com to try it out.

I have recorded several videos that I will attempt to embed here to show you just what it does, but for those of you who prefer to read:

  • The NPPES search engine has a limit of 150 results; docnpi has no limit
  • NPPES does not allow you to search by type of provider or organization; docnpi allows you to search by both a type and a group of types.
  • NPPES only lists one taxonomy per provider and it is often over-general; docnpi lists all provider taxonomies in each result
  • NPPES pages the results, while all of the results are listed on one page at docnpi (which lets you use your browser’s Ctrl+F function to do quick sub-searches)
  • The results from any search you do are downloadable as JSON, XML, or Excel/CSV
  • No search, except a completely empty search, is too general for docnpi. If you can wait for us to process the data, we will do it for you.
  • Each NPI page automatically exposes the “social network” of any provider or organization by listing all other NPI records that share addresses, phone numbers or identifiers
  • Each NPI page displays a Google map for the practice and mailing addresses listed in the NPI record.

I have lots more features on the way, and I know I need to optimize the site. Loading a single NPI record takes too long, because I am doing several huge SQL queries across a very full database. Still, if you have some patience, you can give me some feedback on the site now. Here are the videos that demo specific searches and expose the data richness of the NPI dataset.

Videos

Implications

Most people have no idea how much information is truly available in the NPI database.

The one group of people who will probably immediately find the site helpful is medical billers, or insurance company employees who want to understand the relationships between different providers. They have been frustrated by the NPPES search tool for a long, long time.

But most people have no idea that this kind of information is even there! I cannot tell you how many people have no idea, for instance, that public health offices very often have an NPI record. Just looking at the taxonomy drop-down should be very enlightening. Using this search engine, you can get very specific and detailed information about the relationships between location and healthcare provider density. You can ask questions like “How many foot doctors does Denver have per capita compared to New York?” Before docnpi.com, you had to download the data yourself and then run your own queries. But the data download is not normalized, and it is almost impossible to determine who shares an address unless you normalize across addresses. Even then, without database optimizations (I have learned so much more about MySQL optimization on this project…), complex queries could take hours to complete. The site probably will feel “slow” to you because it can take a long time even to analyze the data for a single provider (30-50 seconds), but many of the matching provider data displays would have been impossible before. I hope to do more optimization and other improvements, and I would like to have your advice doing so. Please click the red feedback tab and tell me how to improve the site!
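
For the technically curious, here is a minimal sketch in Python of the normalization idea described above. This is emphatically not the docnpi.com code; the column names are simplified placeholders for the much longer headers in the real NPPES download.

```python
# Minimal sketch of the shared-address idea, NOT the docnpi.com implementation.
# Column names below are simplified placeholders for the much longer headers
# in the real NPPES CSV download.
import csv
import re
from collections import defaultdict

def normalize_address(line1, city, state, zip_code):
    """Crude normalization: uppercase, strip punctuation, collapse whitespace."""
    raw = f"{line1} {city} {state} {zip_code[:5]}"
    raw = re.sub(r"[^\w\s]", "", raw.upper())
    return re.sub(r"\s+", " ", raw).strip()

def shared_address_index(csv_path):
    """Map each normalized practice address to the NPIs registered there."""
    index = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            key = normalize_address(
                row["practice_address_line1"],  # placeholder column names
                row["practice_city"],
                row["practice_state"],
                row["practice_zip"],
            )
            index[key].append(row["npi"])
    return index

if __name__ == "__main__":
    index = shared_address_index("npi_download.csv")
    shared = {addr: npis for addr, npis in index.items() if len(npis) > 1}
    print(f"{len(shared)} addresses are shared by more than one provider")
```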

Essentially this site will make the NPI data set far more accessible than it has been before. Stuff that is now easy to do was previously the domain of expensive data toolkits or data mining experts. This data should be usable by everyone… and now it is.

I should be frank: this site has to pay for itself. I have not decided how to charge, what I should charge for, or even if I should charge. I will probably have to think about this once the number of searches starts to bring the server to its knees. Once that happens I will have to spend “real money” on a dedicated virtual server/cluster, and that will mean the site must be monetized somehow. I will probably end up limiting the number of searches that a given user can do until they pay $20 or something like that. That will let most people use the site without paying, but when people start to overuse my CPU cycles they can afford to pay a little. But until my server starts to choke, everything is free. Enjoy.

Empathy over implementations and another straw man

I think the recent work of the NHIN Direct implementation teams has been amazing. But I think that by implementing, all of the teams have succumbed, to different extents, to a common software developer error: they are implementing rather than empathizing with their users.

There are two groups (possibly three if you count patients, but I will exclude them from consideration for just a moment) of potential NHIN Direct end users. But before we talk about that, I would like to talk about Facebook and Myspace.

Or more accurately, I want to remember the controversy when the military chose to block users of Myspace but not Facebook. This caused quite a stir because, at the time, Myspace was very popular with the high-school-educated enlisted personnel, while Facebook, which even then was “higher technology,” was more popular with the college-educated officers. Controversy aside, it showed a digital divide between different types of users.

Ok, back to Healthcare IT.

We have something strangely similar to this in the United States with the level of IT adoption among doctors.

On Having

Most doctors are low-health-information-technology. Most doctors work in small practices. Most small practices have not adopted EHR technology. Even those small practices that have adopted EHR technology have often done so from EHR vendors who have not focused on implementing the tremendously complex and difficult IHE health data interchange profiles. That is one group of doctors. You can call them Luddites, late adopters, low-tech or “the have nots”. No matter what you call them, the HITECH portion of ARRA was designed to reach them. It was designed to get them to become meaningful users of EHR technology. (Note that “EHR technology” is kind of a misleading term because it has essentially been redefined to mean “software that lets you achieve meaningful use”. People still have lots of different ideas about what an “EHR” is, because people still have lots of disagreements about the best way to achieve meaningful use.)

I have to admit, I am primarily sympathetic with this user group. I originally got into Open Source because my family business (which my grandfather started) was a Medical Manager VAR. Our clients loved us, and they hated the notion of spending a bucket load of money on an EHR. I started looking for an Open Source solution to our EHR problems, and when I could not find what I needed, I started contributing code. There is a small cadre of people working on the NHIN Direct project who, for different reasons, share my basic empathy with this type of clinical user, the “have nots”.

But the majority of the people working on NHIN Direct represent the whiz-kid doctors. These are the doctors who work in large clinics and hospitals that have found moving to an EHR system prudent. Sometimes, smaller groups of doctors are so tech-hungry that they join this group at great personal expense. These doctors, or the organizations that employ them, have invested tremendous amounts of money in software that is already IHE-aware. Often groups of these doctors have joined together to form local HIE systems. It is fair to say that if you are a doctor who has made an investment in technology that gives you IHE systems, you paid a lot for it, and you want that investment to pay off. We can call these doctors the “whiz-bang crowd”, the EHR lovers, or simply “the haves”.

Today, in the NHIN Direct protocol meeting, we had a polite skirmish (much respect for the tone everyone maintained despite the depth of feeling) between the representatives of the “have nots” (people like me, Sean Nolan, David Kibbe and others who are thinking primarily about the “have nots”) and the vendors of large EHR systems, HIEs and other participants in the IHE processes, who tend to represent the “haves”.

To give a little background for my readers who are not involved with the NHIN Direct project:

A Little Background

NHIN Exchange is a network that anyone who speaks IHE can join. If you speak IHE, it means that you should be able to meet all of the requirements of the data exchange portions of meaningful use. It also means that you pretty much have to have some high technology: a full-featured EHR or something that acts like one. IHE has lots of features that you really, really need in order to do full Health Information Exchange right. But it has never been deployed on a large scale and it is phenomenally complex. ONC started an Open Source project called NHIN CONNECT that implements IHE and will form the backbone of the government’s IHE infrastructure. Beyond CONNECT, both the Mirth guys and OHT/MOSS have substantial IHE-related software available under FOSS licenses. There are lots of bolt-on proprietary implementations as well. IHE is complex, but the complexity is required to handle the numerous use cases of clinical data exchange. Exchanging health data is vastly more complex than exchanging financial information, etc. But to use IHE you have to have an EHR. Most doctors do not have an IHE-aware EHR.

ONC knew that HITECH would convince many doctors to invest in EHR technology that would ultimately allow them to connect to NHIN Exchange. However they also knew that many doctors, possibly most doctors, might choose not to adopt EHR technology. Someone (who?) proposed that ONC start a project to allow doctors to replace their faxes with software that would allow them to meet most, but not all, of the meaningful use data interchange requirements, without having to “take the EHR plunge”. This new project could meet all of the requirements that could be met with a fax-like or email-like “PUSH” model. I explained some of this in the power of push post. This project was called NHIN Direct.

What’s the problem?

So what is the problem? A disproportionate number of the people who signed up to work on the NHIN Direct project are EHR vendors and other participants who represent lots of people who have made extensive investments in IHE. In short, lots of “haves” representatives. Some of the “haves” representatives proposed that NHIN Direct also be built with the subset of IHE standards that cover push messages. But remember, IHE is a complex set of standards. Push, in IHE, is much more complicated than the other messaging protocols that were under consideration. I have already made a comparison of the protocols under consideration.

IHE is really good for the “haves”. If you “have” IHE, then all kinds of really thorny and difficult problems are solved for you. Moreover, one of the goals of meaningful use is to get more EHRs (by which I mean “meaningfully usable clinical software”) into the hands of doctors. The US, as a country, needs more people using IHE. It really is the only “right” way to do full health information exchange.

But IHE is not trivial. It is not trivial to code. It is not trivial to configure. It is not trivial to deploy or support. It is not trivial to understand. It could be simple to use for clinicians once all of the non-trivial things had been taken care of. But realistically, the number of people who understand IHE well enough to make it simple for a given clinical user is very very few.

The other options seemed to be SMTP or REST-that-looks-and-acts-just-like-SMTP-so-why-are-we-coding-it-again (or just REST). Both of these are much, much simpler than the IHE message protocols. They would be much easier for the “have nots” to adopt quickly and cheaply. Of course, they would not get the full benefit of an EHR, but they would be on the path. They would be much better off than they are now with the fax system. It would be like the “meaningful use gateway drug”. It would be fun and helpful to the doctors, but leave them wanting something stronger.

The NHIN Direct project fundamentally creates a tension with the overall NHIN and meaningful use strategy. As a nation we want to push doctors into using good health IT. But does that mean pushing them towards the IHE-implementing EHRs on the current market, or should we push them towards simple direct messaging? The answer should be something like:

“If doctors would ordinarily have chosen to do nothing, we want them to use NHIN Direct; if they can be convinced to be completely computerized, then we should push them towards IHE-aware clinical software that meets all of the meaningful use requirements.”

Given that split, the goal of NHIN Direct should be:

“For doctors who would have refused other computerization options, allow them to meaningfully exchange health information with as little effort and expense on their part as possible”

I, and others who realize just how little doctors like this will tolerate in terms of cost and effort, strongly favor super simple messaging protocols that can be easily deployed in multiple super-low cost fashions. I think that means SMTP and clearly rules out IHE as a backbone protocol for “have-nots” that are communicating with other “have-nots”.

Empathizing with the Haves

But the danger of focusing on just the requirements of your own constituents is that you ignore the impact that your design will have on the users you do not empathize with. Both the representatives of the “haves” and the “have nots” like me have been guilty of this. After listening to the call I realized that the EHR vendors pushing IHE were not being greedy vendors who wanted to pad their wallets. Not at all! They were being greedy vendors who wanted to pad their wallets -and- protect the interests of the doctors already using their systems. (That was a joke, btw; I really did just realize that they were just empathizing with a different group of doctors than I was.)

If you are a “have” doctor, you have made a tremendous investment in IHE. Now you are in danger of getting two sources of messages. You will get IHE messages from your other “have” buddies, but you will have to use a different system to talk with all of the “have-nots” who want to talk with you. That means you have to check messages in two places, and you can imagine how it might feel for one doctor to be discussing one patient across the two systems. Lots of opportunity for error there.

From the perspective of the IHE team, making the “have nots” accept the concession of dealing with IHE, rather than cheaper, simpler SMTP, reduces everything to one messaging infrastructure and eliminates balkanization. No longer will any user be faced with using two clinical messaging systems; instead they can have only one queue. Moreover, since we ultimately want “fully meaningful users”, it is a good thing that the IHE-based NHIN Direct system would provide a clear path to getting onto the NHIN Exchange with a full EHR. From their perspective, more difficult adoption for the “have nots”, and the resulting loss of adoption, would be worth it because it would still get us faster to where we really need to be, which is doing full IHE Health Information Exchange with everyone participating.

Everyone wants the same thing in the end, we just have different ideas about how to get there! I believe that we should choose protocol designs for NHIN Direct that fully work for both sets of clinical users. I think we can do this without screwing anyone, or making anyone’s life more difficult.

The new empathy requirements

I would propose that we turn this around into a requirements list: we need an NHIN Direct protocol that

  • Allows the “have nots” to use NHIN Direct as cheaply as possible. That means enabling HISPs with simple technology that can be easily deployed and maintained using lots of different delivery models (e.g. on-site consulting and server setup, ASP delivery, etc.).
  • Allows the “haves” to view the world as if it were one beautiful messaging system, based on IHE. They should not have to sacrifice the investment they have made, and they should not have to deal with two queues.

My Straw Man: Rich if you can read it

The IHE implementation group believes that all SMTP (or REST-like-SMTP) messages that leave the “have-nots” should be converted “up” into IHE messages and then, when they get close to other “have-not” doctors, converted back “down” to SMTP. This means they are suggesting that HISPs that handle communications between “have-not” doctors should have to implement IHE in one direction and SMTP in the other, even though the message gains no additional content or meaning along the way.

The problem with that is that the HISPs that maintain this “step-up-and-down” functionality will have to cover their costs and develop the expertise to support this. This is true even if the “edge” protocol is SMTP. The only approach that will work for this design is an ASP model, so that the HISP can afford to centralize the support and expertise needed to handle this complexity. That means low-touch support, and low-touch support and high costs translate to low adoption. In fact, doctors would probably be better off just investing in an ASP EHR that was fully IHE-aware. So the IHE model is a crappy “middle step”.

But there is no reason that the HISP needs to handle step-down or step-up as long as it is only dealing with “have-not” doctors. If you allowed SMTP to run the core of NHIN Direct, HISPs could leverage current expertise and software stacks (with lots of security tweaking, discussed later) to ensure that messages go into the right SMTP network. No PHI in regular email. Local consultants as well as current email ASP solutions could easily read the security requirements and deploy solutions for doctors that would send messages across the secure SMTP core. With careful network design, we could ensure that messages to NHIN Direct users would never be sent across the regular email backbone. I will describe how some other time (it’s late here), but it is not that hard.

But you might argue: “that is basically just SMTP as core! This is no different than your original approach. You are still screwing the haves!” Patience grasshopper.

To satisfy the “haves” we have to create a new class of HISP. These HISPs are “smart”. They understand the step-up and step-down process to convert IHE messages to SMTP messages. When they have a new outgoing message, they first attempt to connect to the receiving HISP using IHE, perhaps on port 10000. If port 10000 is open, they know that they are dealing with another smart HISP, and they send their message using IHE profiles. Some smart HISPs will actually be connected to the NHIN Exchange, and will use that network for delivery when appropriate.

The normal or “dumb” HISP never even needs to know about the extra functionality that the smart HISPs possess. They just always use the NHIN Direct SMTP port (let’s say 9999) to send messages to any HISP they contact. While smart HISPs prefer to get messages on port 10000, when they get an SMTP message on port 9999 they know they need to step up from SMTP to IHE before passing it to the EHR of the end user.
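
To make the routing rule concrete, here is a toy sketch of the decision logic. It is not a real HISP: the port numbers just follow the hypothetical convention above (10000 for IHE between smart HISPs, 9999 for NHIN Direct SMTP), and the helper functions are stubs standing in for the real step-up/step-down machinery.

```python
# Toy sketch of the smart-HISP routing rule described above; not a real HISP.
# Ports follow the hypothetical convention in the post: 10000 = IHE between
# smart HISPs, 9999 = NHIN Direct SMTP. The helpers below are stubs.
import socket

IHE_PORT = 10000        # smart HISP <-> smart HISP
NHIN_SMTP_PORT = 9999   # NHIN Direct SMTP core

def send_via_ihe(message, host):
    print(f"sending via IHE profiles to {host}:{IHE_PORT}")

def send_via_smtp(message, host):
    print(f"sending via NHIN Direct SMTP to {host}:{NHIN_SMTP_PORT}")

def step_down_to_smtp(message):
    return message  # stub: strip IHE wrapping, keep the payload

def step_up_to_ihe(message):
    return message  # stub: wrap the SMTP payload in IHE metadata

def hand_to_ehr(message):
    print("delivered to the IHE-aware EHR queue")

def receiving_hisp_is_smart(host, timeout=5):
    """A smart HISP advertises itself simply by listening on the IHE port."""
    try:
        with socket.create_connection((host, IHE_PORT), timeout=timeout):
            return True
    except OSError:
        return False

def deliver(message, host):
    """Outbound logic for a smart HISP."""
    if receiving_hisp_is_smart(host):
        send_via_ihe(message, host)                      # rich IHE end to end
    else:
        send_via_smtp(step_down_to_smtp(message), host)  # talk to a dumb HISP

def receive(message, arrival_port):
    """Inbound logic for a smart HISP fronting "have" doctors."""
    if arrival_port == NHIN_SMTP_PORT:
        message = step_up_to_ihe(message)  # message came from a dumb HISP
    hand_to_ehr(message)                   # one queue for the "haves"
```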

From the “haves” perspective, there is one messaging network, the IHE network. They get all of their messages in the same queue. Sometimes the messages are of higher quality (because they were never SMTP messages, but IHE messages sent across NHIN Exchange or simply between two smart HISPs).

Now, let’s look at the winners and losers here. For the “have nots” the core is entirely SMTP; as a result they have cheap and abundant technical support. They are happy. The “haves” get to think entirely in IHE; they might be able to tell that some messages have less useful content, but that is the price to pay for communicating with a “have not” doctor. The “have nots” will get rich messages from the IHE sites and will soon realize that there are benefits to moving to an EHR that can handle IHE.

Who loses? The smart HISPs. They have to handle all of the step-up and step-down. They will be much more expensive to operate -unless- there is an NHIN Direct sub-project to create a smart HISP. This is what the current IHE implementation should morph into. We should relieve this burden by creating a really solid bridge.

This model is a hybrid of the SMTP-as-core and IHE-as-core models. Essentially it builds an outer core for “have not” users and an inner core for IHE users. From the project’s perspective, those who feel that simple messaging should be a priority for the “have nots” (like me) can choose to work with the SMTP-related code. People who want to work in the interests of the “haves” can work on the universal SMTP-IHE bridge.

I call this straw man “Rich if you can read it”. From what I can tell it balances the core perspectives of the two interest groups on the project well, with places for collaboration and independent innovation. It’s more work, but it does serve to make everyone happy, rather than everyone equally unhappy with a compromise.

Footnotes and ramblings:

Don’t read this if you get bored easily.

I believe that this proposal excludes a REST implementation unless it acts so much like SMTP that SMTP experts can support it easily. Which raises the question: why not just use SMTP? SMTP fully supports the basic use case. The code already works, and a change to REST would reduce the pool of qualified supporters.

I should also note that no one, ever, is suggesting that the same program be used for email as for NHIN Direct messages. I think we should posit a “working policy assumption” that any NHIN Direct SMTP user would be required to have a different program for sending NHIN Direct messages. Or at least a different color interface. Perhaps Microsoft can release a “red and white GUI” clinical version of Outlook for this purpose… Sean can swing it… or users could use Eudora for NHIN Direct and Outlook for regular mail. Or they could be provided an ASP web-mail interface.

We might even try and enforce this policy in the network design:

We should use SRV records for the NHIN Direct SMTP network rather than MX records. There are security reasons for doing this (it enables mutual TLS) and, most importantly, it means that there will be no PHI going across regular email. When someone tries to send an email message to me@fredtrotter.com, their SMTP implementation looks up an MX record for fredtrotter.com to see where to send the message. If we use SRV for the DNS records, then for an address like me@nhin.fredtrotter.com you would publish a deliberately invalid MX record, such that the MX for nhin.fredtrotter.com points to nothing.fredtrotter.com, which is not defined in DNS. If someone tried to send PHI there over regular email, this would cause an error in the local SMTP engine without transmitting any data. But an NHIN Direct-aware SMTP server or proxy would query for the SRV record at the correct address, enforce TLS, and be totally secure. Normal email messages to NHIN Direct addresses would break before transfer across an insecure network, but the secure traffic would move right along. Obviously this is not required by the core proposal, but it is a way of ensuring that the two networks would be confused much less frequently. This plan might not work, but something like this should be possible.
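
Here is a rough illustration of the lookup side of that idea. It assumes the dnspython package (version 2.x for resolve()), and the "_nhin-direct._tcp" service label is something I made up for the example; nothing here is a published standard.

```python
# Rough illustration of SRV-instead-of-MX routing. The "_nhin-direct._tcp"
# service label is invented for this example. Requires the dnspython package
# (dnspython 2.x for resolve(); older versions use query()).
import dns.resolver

def nhin_direct_host(domain):
    """An NHIN Direct-aware SMTP proxy resolves SRV for the secure hop."""
    answers = dns.resolver.resolve(f"_nhin-direct._tcp.{domain}", "SRV")
    best = sorted(answers, key=lambda r: (r.priority, -r.weight))[0]
    return str(best.target).rstrip("."), best.port

def ordinary_mail_host(domain):
    """An ordinary mail server resolves MX. Publishing a deliberately broken
    MX for the nhin. subdomain makes this path fail before any PHI is sent."""
    answers = dns.resolver.resolve(domain, "MX")
    best = sorted(answers, key=lambda r: r.preference)[0]
    return str(best.exchange).rstrip(".")

if __name__ == "__main__":
    # nhin.fredtrotter.com is the hypothetical domain from the post.
    print(nhin_direct_host("nhin.fredtrotter.com"))
```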

This is a pretty complex email setup, but SRV is growing more common because of use with XMPP and SIP. Normal SMTP geeks should be able to figure it out.

Open Letter to the tiger team

Hi,

This is an open letter to the tiger team from the HIT Policy Committee, as well as to the committee generally. Recently a group from HITPC gave recommendations to the NHIN Direct project regarding which protocol it should choose. I realized as I heard the comments there that this group was reading the NHIN Direct Security and Trust Working Group’s latest consensus document. I am on that working group and I wrote a considerable portion of that document (most of the Intent section). I was both startled and flattered that the HITPC group was using that document as the basis for their evaluation of the protocol implementations. In fact, they eliminated the XMPP project from consideration because they felt that the SASL authentication that the XMPP implementation will use was incompatible with the following requirement from the consensus document:

2.1 Use of x.509 Certificates. The NHIN Direct protocol relies on agreement that possession of the private key of an x.509 certificate with a particular subject assures compliance of the bearer with a set of arbitrary policies as defined by the issuing authority of the certificate. For example, Verisign assures that bearers of their “extended validation” certificates have been validated according to their official “Certification Practice Statement.” Certificates can be used in many ways, but NHIN Direct relies on the embedded subject and issuing chain as indicated in the following points. Specific implementations may choose to go beyond these basic requirements.

The HITPC team felt that SASL, which does not typically use certs for authentication, did not meet this requirement. As it turns out, the XMPP implementation team believes that SASL can be used with x.509 certs and therefore should not be excluded from consideration. That is a simple question of fact and I do not know the answer, but in reality it should not matter much. (I will get into that later.)

Even more troubling was the assessment of SMTP. The HITPC reviewers considered an all-SMTP network problematic because it allowed for the use of clients which presented users with the option to make security mistakes. They felt that simpler tools should be used that prevented these types of mistakes from being made.

None of these were unreasonable comments given the fact that they were reading all of the documents on the NHIN Direct site in parallel.

They also have a strong preference for simplicity. Of course, simplicity is very hard to define, and it is obvious that while everyone agrees that security issues are easier to manage with simpler systems,  we disagree about what simplicity means.

As I listened to the call, hearing for the first time how others were seeing my work, and the work of the rest of the NHIN Direct S&T working group, I realized that there were some gaps. Ironically, this is going to be primarily a discussion of what did not make it into the final proposal. Most of the difficult debates that we held in the S&T group involved two divergent goals: keeping reasonable architecture options open to the implementation teams, and the consideration that security decisions that were reasonable 90% of the time were still unreasonable 10% of the time. We could not exclude end users (or implementation paths) by making technology decisions in ways that 10% of the users could not accept. 10% does not sound like much, but if you make 10 decisions and each of those decisions serves to exclude 10% of the end users… well, that could be a lot of exclusion. We went around and around, and mostly the result is that we settled on a smaller and smaller set of things we -had- to have to make a flexible trust architecture that would support lots of distributed requirements. This is not a “compromise” position, but a position of strength. Being able to define many valid sub-policies is critical for things like meeting state-level legal requirements. To quote Sean Nolan:

“we’ve created an infrastructure that can with configuration easily not just fit in multiple policy environs, but in multiple policy environs SIMULTANEOUSLY.”

That is quite an achievement, but we should be clear about the options we are leaving open to others. I am particularly comfortable with the approach we are taking because it is strikingly similar to the HealthQuilt model I had previously created. I like the term HealthQuilt because it acknowledges the basic elements of the problem: “Start out different, make connections where you can, end with a pleasing result.”

But we also assumed that someone else would be answering lots of questions that we did not answer. Most notably, we could not agree on:

How to have many CA’s?

Our thinking was that you needed the tree structure offered by the CA model so that you could simplify trust decisions. We rejected notions of purely peer-to-peer trust (like gpg/pgp) because it would mean that end users would have to make frequent trust decisions, increasing the probability that they would get one wrong. Instead, if you trust the root cert of a CA, then you can trust everyone who is obeying the policies of that CA. So X509 generally gave us the ability to make aggregated trust decisions, but we did not decide on what “valid CA architectures” would look like. Here are some different X509 worldviews that at least some of us thought might be valid models (a small configuration sketch follows the list):

  • The one-ring-to-rule-them CA model. There is one NHIN policy and one NHIN CA, and to be on the NHIN you have to have some relationship with that CA. This is really simple, but it does not support serious policy disagreements. We doubt this would be adopted. The cost of certs becomes an HHS expense item.
  • The browser model. The NHIN would choose the list of CA’s commonly distributed in web browsers, and then people could import that list and get certs from any of those CA’s. This gives a big market of CA’s to buy from, but these CA’s are frequently web-oriented. There is wide variance in the costs of browser CA certificates.
  • The no-CA-at-all model. People who knew they would be trusting only a small number of other end nodes could just choose to import those nodes’ public certs directly. This would enable very limited communication, but that might be exactly what some organizations want. Note that this also supports the use of self-signed certificates. This will only work in certain small environments, but it will be critical for certain paranoid users. This solution is free.
  • The government-endorsed CA’s model. Some people feel that CA’s already approved by the ICAM Trust Framework should be used. This gives a very NISTy feel to the process, but the requirements for ICAM might exclude some solutions (i.e. CACert.org). ICAM certs are cheap (around $100 a year) assuming you only need a few of them.
  • The CACert.org peer-to-peer assurance CA model. CACert.org is a CA that provides an unlimited number of certificates to assured individuals for no cost. Becoming assured means that other already-assured individuals must meet you face to face and check your government IDs. For full assurance at least three people must complete that process. This allows for an unlimited number of costless certs backed by a level of assurance that is otherwise extremely expensive. The CACert.org code is open source, and the processes to run CACert.org are open. This is essentially an “open” approach to the CA problem. (I like this one best personally.)
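
To make the aggregated-trust idea concrete, here is a minimal sketch, using Python’s standard ssl module, of what “trust the anchor, trust everyone issued under it” looks like in configuration. The file names and hostname are hypothetical, and this is only an illustration of the trust-anchor concept, not anything the S&T consensus document mandates:

```python
# Hedged sketch: whatever chains up to a loaded anchor is trusted during the
# TLS handshake; everything else is rejected. File names and hosts are made up.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# "One ring" style: a single NHIN-wide anchor certificate.
context.load_verify_locations(cafile="nhin_root_ca.pem")

# "No CA at all" style: also trust one peer's self-signed cert directly.
context.load_verify_locations(cafile="partner_selfsigned.pem")

context.verify_mode = ssl.CERT_REQUIRED   # refuse peers outside the anchor set
context.check_hostname = True

with socket.create_connection(("direct.example.org", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="direct.example.org") as tls:
        print(tls.getpeercert()["subject"])
```

The point is that every one of the worldviews above reduces, at the node level, to the question “which anchor certs do I load?”, and that is exactly the flexibility we tried to preserve.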

Individual vs group cert debate?

If you are going to go with any CA model other than “one ring to rule them,” then you are saying that the trust relationships inside the CA’s will need to be managed by end users. Given that, some felt that we should be providing individual certs/keys to individual people. Others suggested that we should support one cert per organization. Still others said that groups like “emergencydepartment@nhin.example.com” should be supported with a sub-group cert.

In the end we decided not to try to define this issue at all. That means that a message from an address like johnsmith@nhin.ahospital.com could be signed with a cert that makes it clear that only John Smith could have created the message, or with a cert whose private key could have been used by anyone at ahospital.com, or by some subgroup of people at ahospital.com.

Many of us felt that flexibility in cert-to-address mappings was a good thing, since it would allow us to move towards greater accountability as implementations became better and better at the notoriously difficult cert management problem, while allowing simpler models to work initially. However, if you have a CA model where certs are expensive, then it will be difficult to move towards greater accountability, because organizations will choose single certificates for cost reasons.
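
To illustrate what that flexibility looks like in practice, here is a hedged sketch (using the Python “cryptography” package) of how a receiving system might classify a signing cert as individual or organizational by looking at its subjectAltName entries. The convention assumed here, rfc822Name for an individual address and dNSName for an organization, is an assumption for illustration only, not something the consensus document specifies, and the file and address names are hypothetical:

```python
# Hedged sketch: classify a signing cert as bound to an individual address
# or only to the sending organization. The SAN conventions assumed here are
# illustrative, not mandated by the S&T consensus document.
from cryptography import x509


def cert_binding(cert_pem: bytes, sender: str) -> str:
    cert = x509.load_pem_x509_certificate(cert_pem)
    try:
        san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
        emails = [e.lower() for e in san.get_values_for_type(x509.RFC822Name)]
        domains = [d.lower() for d in san.get_values_for_type(x509.DNSName)]
    except x509.ExtensionNotFound:
        emails, domains = [], []

    sender = sender.lower()
    sender_domain = sender.split("@", 1)[1]
    if sender in emails:
        return "individual"      # only this address holder could have signed
    if sender_domain in domains:
        return "organizational"  # anyone holding the org/group key could have signed
    return "unmatched"


with open("signer_cert.pem", "rb") as f:
    print(cert_binding(f.read(), "johnsmith@nhin.ahospital.com"))
```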

Mutual TLS vs TLS vs Protocol encryption?

What we could not agree on was whether and how to mandate TLS/SSL. This is what we did say:

2.6 Encryption. NHIN Direct messages sent over unsecured channels must be protected by standard encryption techniques using key material from the recipient’s valid, non-expired, non-revoked public certificate inheriting up to a configured Anchor certificate per 2.2. Normally this will mean symmetric encryption with key exchange encrypted with PKI. Implementations must also be able to ensure that source and destination endpoint addresses used for routing purposes are not disclosed in transit.

We did this to enable flexibility. The only thing we explicitly forbade was failing to use encryption to fully protect the addressing component. So no message-only encryption that leaves the addresses exposed.
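
For readers who want to see what 2.6 implies mechanically, here is a hedged sketch of “symmetric encryption with key exchange encrypted with PKI” using the Python “cryptography” package. A real implementation would use S/MIME (CMS) rather than hand-rolled framing, this assumes the recipient’s cert carries an RSA key, and the file and address names are hypothetical:

```python
# Hedged sketch of 2.6's shape: encrypt the whole message with a fresh
# symmetric key, then encrypt that key to the recipient's public cert.
import os

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Whole message (headers and body), so routing addresses are not exposed in
# transit, only whatever outer envelope the HISPs agree on.
message = b"To: drjones@nhin.example.com\r\nSubject: referral\r\n\r\n..."

with open("recipient_cert.pem", "rb") as f:
    recipient_cert = x509.load_pem_x509_certificate(f.read())

content_key = AESGCM.generate_key(bit_length=256)   # fresh symmetric key
nonce = os.urandom(12)
ciphertext = AESGCM(content_key).encrypt(nonce, message, None)

# Key exchange: the content key is encrypted to the recipient's public key,
# so only the holder of the matching private key can recover it.
wrapped_key = recipient_cert.public_key().encrypt(
    content_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
# Transmit (wrapped_key, nonce, ciphertext) however the implementation likes.
```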

This is a hugely complex issue. In an ideal world, we would have liked to enforce mutual TLS, where both the system initiating the connection and the system receiving it would need to provide certs. Mutual TLS would virtually eliminate spam/DDoS attacks, because to even initiate a connection you would need mutually trusted public certs.
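
As a hedged sketch of why mutual TLS is so attractive, here is roughly what the receiving side would look like with Python’s standard ssl module: unless the initiating system presents a cert that chains to a configured anchor, the handshake itself fails, so an untrusted sender never gets far enough to deliver anything. The file names and port are hypothetical:

```python
# Hedged sketch: a server that requires a trusted client cert to complete
# the TLS handshake at all. File names and the port number are made up.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server_cert.pem", keyfile="server_key.pem")
context.load_verify_locations(cafile="trusted_anchors.pem")
context.verify_mode = ssl.CERT_REQUIRED   # this is what makes the TLS *mutual*

with socket.create_server(("0.0.0.0", 8443)) as server:
    conn, _ = server.accept()
    with context.wrap_socket(conn, server_side=True) as tls:
        # Only reached if the client presented a cert chaining to an anchor.
        print("client cert subject:", tls.getpeercert()["subject"])
```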

However, there are several practical limitations to this. First, TLS does not support virtual hosting (using more than one domain with only one IP) without the TLS SNI extension. SNI is well supported in servers but poorly supported in browsers and client TLS implementations.

Further, only one cert can be presented by the server side of the connection, or at least that is what we have been led to believe; I have not been able to create a “dual signed” public cert in my own testing. That means that in order to have multiple certs per server you have to have multiple ports open.

SRV records address both the limitation with virtual hosting and the need to present multiple certs on the server side, because SRV DNS records allow you to define a whole series of port and host combinations for any given TCP service. MX records, which provide the same fail-over capability for SMTP, do not allow you to specify a port. You can implement SMTP using SRV records, but that is a non-standard configuration, and the argument for that protocol is generally that it is well understood and easier to configure.
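
Here is a hedged sketch, using the dnspython package (2.x), of the practical difference: an SRV answer tells you a host and a port, while an MX answer can only name a host. The domain is hypothetical:

```python
# Hedged sketch: SRV lookups yield host *and* port combinations, while MX
# lookups yield only hosts, so SMTP peers are stuck with the standard port.
import dns.resolver

# XMPP server-to-server discovery: each answer carries a target and a port.
for rr in dns.resolver.resolve("_xmpp-server._tcp.nhin.example.com", "SRV"):
    print("xmpp:", rr.target, rr.port, "priority", rr.priority)

# SMTP discovery: MX answers name a mail host but cannot specify a port.
for rr in dns.resolver.resolve("nhin.example.com", "MX"):
    print("smtp:", rr.exchange, "preference", rr.preference)
```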

Ironically, only the XMPP protocol supports SRV out of the box and therefore enables a much higher level of default security in a commonly understood configuration. With this level of TLS handshaking, you can argue that only message-content encryption and message-content signing require certs beyond the TLS layer, making the debate about SASL somewhat irrelevant. From a security perspective, you actually rejected the protocol with the best combination of security + availability + simplicity.

No assumption of configuration?

You rejected SMTP-only because you assumed that end users would be able to configure their NHIN Direct mail clients directly. Ironically, we did not specifically forbid things like that, because we viewed it as a “policy” decision. But the fact that we did not cover it does not imply that SMTP configuration should happen in a way that allows for user security configuration. That is obviously a bad idea.

No one ever assumed that the right model for an SMTP end deployment would mean that a doctor installs a cert in his current Microsoft Outlook and then selectively uses that cert to send some messages over the NHIN Direct network.

We were assuming SMTP deployments that present the user with options that exclude frequent security decisions. This might be as simple as saying “when you click this shortcut, Outlook will open and you can send NHIN Direct messages; when you click this shortcut, Outlook will open and you can send regular email messages.” The user might try to send NHIN Direct messages with the email client or vice versa, but when they make that mistake (which is a mistake that -will- happen no matter what protocol or interfaces are chosen) the respective client will simply refuse to send to the wrong network.
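
A hedged sketch of the kind of check we had in mind follows; the point is that a mis-addressed message is refused outright rather than becoming a security decision for the user. The domain list is hypothetical, and how it gets populated (anchors, policy, HHS guidance) is exactly the sort of thing we left to others:

```python
# Hedged sketch: the NHIN Direct client only relays to domains it has been
# configured to trust; everything else is rejected without prompting the user.
NHIN_DIRECT_DOMAINS = {"nhin.ahospital.com", "nhin.alabclinic.com"}  # hypothetical


def check_recipients(recipients):
    bad = [r for r in recipients
           if r.split("@", 1)[-1].lower() not in NHIN_DIRECT_DOMAINS]
    if bad:
        raise ValueError(f"Refusing to send: {bad} are not NHIN Direct addresses")


check_recipients(["drjones@nhin.ahospital.com"])   # sends normally
check_recipients(["drjones@gmail.com"])            # refused, no user prompt
```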

There are 16 different ways to enforce this both from a technology and a policy perspective, but we did not try to do that, because we were leaving those decisions up to local policy makers, HHS, and you.

You assumed that there were security implications to choosing SMTP that are simply not there.

On Simplicity

Lastly, I would like to point out that your recommendation is not actually simple. We in the S&T group spent lots of time looking at the problem of security architecture from the perspective of the four implementation groups. For each of them we focused only on the security of the core protocol, not on the security of the “HISP-to-user” portion. We have carefully evaluated the implications of each of these protocols from that perspective. We have been assuming that the HISP-to-user connection might use any of a large number of reasonable authentication, encryption, and protocol combinations. Our responsibility was only to secure the connection between nodes.

With that limitation, you have chosen just “REST” as the implementation choice, precisely because you see it as a “simple” way to develop the core. The REST team has done some good work, and I think that is a reasonable protocol option. But I am baffled that you see it as “simple.”

If we choose REST, we have no message exchange protocol; we have a software development approach, and we must build a message exchange protocol out of that development tool. With SMTP, XMPP, and to a lesser extent IHE, you are configuring software that already exists to perform in an agreed-upon secure fashion. There are distinct advantages to the “build it” approach, but from a security perspective, simplicity is not one of them. I think you are underestimating the complexity of messaging generally. You have to sort out things like:

  • store and forward,
  • compatible availability schemes,
  • message validity checking (spam handling),
  • delivery status notifications,
  • character set handling,
  • bounce messages.

The REST implementation will have to either build all of that, or borrow it from SMTP implementations much the same way it now borrows S/MIME. I would encourage you to look at the related RFCs for a small taste of all the messaging-related problems that the SMTP protocol has grown to solve. XMPP was originally designed to eclipse the SMTP standard, so it is similarly broad in scope and functionality. Both SMTP and XMPP have had extensive security analysis, and multiple implementations have had vulnerabilities found and patched. IHE takes a more limited approach to what a message can be about and what it can do. It is not trying to be a generalized messaging protocol, and it is arguably better at patient-oriented messaging and worse at generalized messaging as a result.

But in all three cases, XMPP, SMTP and IHE, you are talking about configuring a secure messaging infrastructure instead of building one. The notion that REST is “faster to develop” with is largely irrelevant. It is like saying “We have three options: Windows, Linux, or writing a new operating system in Python, because Python is simpler than C.” When put that way, you can see the deeply problematic notion of “simplicity” that you are putting forward.

All three of the other protocols, at least from the perspective of security, are easier to account for because the platforms are known quantities. A REST implementation will be more difficult to secure because you are trying to secure totally new software implementing a totally new protocol.

I want to be clear: I am not arguing against REST as an implementation choice. The central advantage of a REST implementation is that you can focus the implementation on solving the specific use cases of meaningful use. You can have a little less focus on messaging generally, simplifying the problem of a new protocol, and focus on features that directly address meaningful use. It is a smaller target, and that could be valuable. It is like a midway point between the generalized messaging approach found in XMPP and SMTP and the too-specific, single-patient-oriented IHE messaging protocol.

But if you do choose REST, do not do so thinking that it is the “simple” protocol choice.

Conclusion

Beyond the security issues, there are good reasons to prefer any of the implementation protocols. I wanted to be clear that we are expecting your group to have things to say about the things we did not decide (or at least to know what it means to say nothing), and to make certain that nothing we wrote in the S&T group was biasing you for or against any particular implementation. All of them are basically compatible with what our group has done.

Thanks,

-FT