My Health 2.0 submission goes to eleven

I decided to enter the Health 2.0 Developer Challenge.

My submission is

My goal was to find a way to meet one of the challenges using some kind of integration with our new Audio PHR system, Your Doctors Advice. (At the time of this writing it is in a closed beta… you can sign in, but cannot use it yet.) This has been the project that I have been working on for almost a year with Cautious Patient. But I let time get away from me, and I could not find a way to get anything interesting done in time that applied to any of the Challenge categories. Next year we will consider writing our own challenge.

But I discovered that a part-time project that I have been working on for the last month or so actually applied to a contest to re-implement some of the original Project HealthDesign applications. One of those applications was, specifically, an application designed to track Observations of Daily Living (ODLs) for people with chronic pain. I have friends and family with chronic pain, and I actually wrote this application to help one of them keep a pain/food journal more easily.

The contest had two requirements: re-implement one of the original designs from one of the videos, and use a “commercially available PHR service that can securely store the data”. What is a PHR? It is a personal (or “personally controlled”) health record. What the contest organizers meant was to try to get new functionality available in Google Health, HealthVault, Dossia or the like.

Well, until very recently Google Health was incapable of storing ODLs, because it insisted on the limited data that a CCR can encompass. Even with the new update, Google Health has limited abilities to store arbitrary data. I’m not sure if I can easily include pictures of food in a Google Health app (this might have changed recently). HealthVault has famously supported the recording of arbitrary data, but is so flexible in this regard that there is little need to force the data into any standard at all. If I want to participate in either approach, I have to go through a very extensive integration and approval process. Neither platform is truly open. Perhaps that makes them “safer” for patients, but perhaps that just makes them walled gardens with a useless, inscrutable, iPhone-app-store-like approval process that stifles true innovation… you know… one or the other….

I much prefer Twitter and Facebook as application platforms, which have far more open application approval processes. These platforms have realized that their success is tied to the openness of their networks. They are acting much more like “Internets with new protocols”.

But Twitter is not appropriate for health data… because it is essentially a public broadcast medium? Right? That is the way it is typically used, and it is certainly a broadcast technology. But it is not necessarily “public” broadcasting. You can change any Twitter account into a protected account, which ensures that only the people you approve can see it. So Twitter is a “commercially available” service that can “securely store the data”, but is it a PHR? I think it is if you use it like one. In fact, it is probably one of the most popular platforms for logging health, wellness and fitness information on the planet.

I think an application that relies on Google Health or HealthVault faces an uphill battle, because that is not where people are tracking ODLs. People actually use Twitter and Facebook to track what is happening in their lives. They use them to record their mood changes, their stress levels and details about how their bowels are moving. Most importantly for my purposes, people are already using Twitter as a food diary. If you want to save time you can just take pictures of your food instead of trying to write anything down. If you are interested in finding out what food might cause pain, you are more interested in ingredients than calories, and so food diaries that focus on accurately tracking calorie intake are overkill.

So I wanted a way to simply and easily track ODLs of pain, using Twitter, right beside the already smooth process of tracking food intake.

But I wanted to do this in a way that would be easy to data mine, so that I could take the food data, or pictures, and overlay that in a data-mining friendly way with the pain data. Obviously, I needed a good syntax for recording the pain information, and so I did some research on micro-blogging ODL syntax options. I ultimately settled on the grafitter syntax, because its site was actually up and doing cool data mining stuff. But I did not want my friend to need to actually learn any kind of “twitter syntax” to log her pain. Instead I wanted her to have a simple web form that she could use on her smart phone, that would allow her to quickly and accurately describe her pain. Just like the video from Project Health Design on Helena.
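To make the idea concrete, here is a minimal sketch of what the form-to-tweet step could look like: take the web form’s values and compose a short, machine-parseable tweet. The `key:value` convention and the field names here are my own illustration, not the exact Grafitter syntax or my application’s real fields (the application itself is PHP; Python is used here only for brevity).

```python
# Sketch: turn web-form values into a machine-parseable "pain log" tweet.
# The key:value syntax and field names are illustrative assumptions,
# not the real Grafitter syntax or the application's actual form fields.

def compose_pain_tweet(fields):
    """Build a tweet body like 'pain:7 location:lower-back #painlog'."""
    parts = ["%s:%s" % (key, str(value).replace(" ", "-"))
             for key, value in fields.items()]
    tweet = " ".join(parts) + " #painlog"
    if len(tweet) > 140:  # Twitter's per-tweet character limit
        raise ValueError("tweet too long: %d chars" % len(tweet))
    return tweet

print(compose_pain_tweet({"pain": 7, "location": "lower back", "trigger": "dairy"}))
```

Because the heavy lifting is just string formatting, the same handler works equally well from a phone’s browser as from a desktop; the form does the remembering, and the tweet stream does the storing.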

So what is Helena doing? In the video, she is recording which medications she is on. But that is one of the few things that doctors have (or should have) accurate data on. Other than recording that medication data, she is simply creating an ODL system that is customized to her world, allowing her to track -when- she takes the medications, which in her world is just “the yellow pill” or “the big pill”. Moreover, she wants to log three specific things that seem to impact her pain: sleep, yoga and the local weather. But the Twitter ecosystem already takes care of all of that. There are devices that track sleep, that can log the sleep quality data to Twitter. Helena could use a Fitbit, which could log her movement during yoga workouts to Twitter. She can even use Twitter to track the local weather.

Helena and my friend both have the ability to log very different kinds of data to Twitter that are difficult and/or time-consuming to acquire in any other way. All they need is a method to create and record their own “Pain Tracking Interface” that could be used to describe what they were going through. One of the Project HealthDesign teams described one such interface as part of the many things that the teams released.

So I built a new Twitter application to do that. It’s easier to understand if you see it in action once, so here it is:

You can try the application yourself at

So this is not only a method for creating a “Pain Tracking Interface”, but for tracking anything on which you want to perform careful, date-stamped quantitative analysis. There is little that you could not track using the system, and it will interface perfectly with any other Twitter data stream, so that you can easily perform data analysis on yourself using Grafitter, or something else just like it.
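Because the logging convention is regular, mining the stream back out is straightforward. The sketch below assumes an illustrative `key:value` tweet convention and already-fetched `(timestamp, text)` pairs; the actual Twitter API calls to retrieve a protected account’s timeline are omitted.

```python
import re
from datetime import datetime

# Matches illustrative key:value pairs such as "pain:7" or "trigger:dairy".
PAIR = re.compile(r"(\w+):([\w.-]+)")

def parse_log_tweets(tweets):
    """Turn (timestamp, text) tweet pairs into date-stamped records.

    Tweets with no key:value pairs (ordinary chatter) are skipped, so the
    log can live right beside normal tweets in the same stream.
    """
    records = []
    for created_at, text in tweets:
        fields = dict(PAIR.findall(text))
        if fields:
            records.append({"when": created_at, **fields})
    return records

sample = [
    (datetime(2010, 9, 1, 8, 30), "pain:7 trigger:dairy #painlog"),
    (datetime(2010, 9, 1, 12, 0), "just had a sandwich"),
]
print(parse_log_tweets(sample))
```

Once the stream is reduced to timestamped records like these, overlaying the pain entries against food entries (or photos, by their timestamps) is ordinary data analysis.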

I want to be clear. I did not write this application to win the contest. I wrote this application to help my friend. It is far enough along that my friend can do what she needs to figure out her pain. This idea can easily go farther, but this is not my main priority. I am going to talk about where this should go, but do not assume that I am going to be the one to do this. Competition and collaboration welcome.

I am releasing all of the PHP source code as Open Source as soon as I have the cycles. I think the following work needs to be done on the system:

  1. Create a replacement data analysis tool for Grafitter, with more functionality, specific to ODLs
  2. Improve the iPhone- and Android-specific interfaces to the ODL forms
  3. Build in Facebook integration for those that do not want to link Twitter to Facebook
  4. Create much better form-builders that make it more obvious how to build forms for different things, especially targeting HTML5 (I feel doing fancy work in the HTML4 world is a waste of time at this stage)
  5. Allow users to share their ODL forms, so that “standardized” ODL forms might become popular based on some kind of crowd-sourcing approach
  6. Improve the interface, which is a little sparse; it could be improved a lot with some good design work
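Sharing ODL forms (item 5 above) becomes easy if each form is a small declarative description that anyone’s form-builder can import, rate and re-export. The schema below is purely a guess at what such a format could look like; it is not the format the application actually uses.

```python
import json

# A guessed-at declarative format for a shareable ODL form definition.
# Field names, types and the overall shape are illustrative assumptions.
pain_form = {
    "title": "Daily Pain Log",
    "fields": [
        {"name": "pain", "type": "scale", "min": 0, "max": 10},
        {"name": "location", "type": "choice",
         "options": ["head", "back", "joints"]},
        {"name": "trigger", "type": "text"},
    ],
}

# Serialized like this, a form could be crowd-sourced: posted, copied,
# tweaked and re-imported by other users' form-builders.
print(json.dumps(pain_form, indent=2))
```

A popular shared form would then be a de facto “standardized” ODL, arrived at by use rather than by committee.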

I also did not submit the application to win the contest, as much as to try to reframe the problem. I want to make a difference in the world of Personal Health Information, and I know I am not alone in this. But we need to stop trying to force people into behaviors that they will never commit to. The largest single “addressable” healthcare issue is compliance. If we all did what we know we are supposed to do, then many of the difficult healthcare problems, like diabetes, heart disease and even HIV, would become rare events, and manageable for our society. I am overweight. So I have a compliance problem. I need to focus on losing the weight, which will protect my heart in the long run, rather than interfacing with some software that I have to comply with. Our goal with the Audio PHR is to create an application that helps people do the right thing more than it creates new user burdens. I am not convinced that our PHR philosophies are simple enough. I am not convinced that our Audio PHR is simple enough. But they are simpler. That is a step in the right direction.

Normally, I would also talk about how we need to be working together on open systems using open source software at this point. But Project HealthDesign and Robert Wood Johnson are absolutely the choir when it comes to that sermon. They understand the potential of Open Source. My goal with submitting this application is not to win, although that would be nice. My goal is to completely reframe the problem. I want them to see that their notion of a PHR is trying to force people to move against the current. Facebook and Twitter applications, whatever else you want to say about them… are “with” the current. People are there, using those systems; that is where the action is. We need to bring the behavior-changing healthcare innovations to the people, not the people to the innovations. This is a big paradigm shift, but it is at the heart of Open Source. Essentially I am a developer on one of the Health Design projects, and I have made a pretty significant problem into what Torvalds calls a “shallow bug”. The bug is “How do we get people to sign up to use this stuff?” I have solved half of that problem: people are already signed up to use Twitter; they just have to use Twitter in a new way… as a PHR.

Eric Raymond experienced this kind of paradigm-shift from a contribution with his fetchmail project. I have quoted this before and I will quote it again:

The real turning point in the project was when Harry Hochheiser sent me his scratch code for forwarding mail to the client machine’s SMTP port. I realized almost immediately that a reliable implementation of this feature would make all the other delivery modes next to obsolete.

For many weeks I had been tweaking fetchmail rather incrementally while feeling like the interface design was serviceable but grubby – inelegant and with too many exiguous options hanging out all over. The options to dump fetched mail to a mailbox file or standard output particularly bothered me, but I couldn’t figure out why.

What I saw when I thought about SMTP forwarding was that popclient had been trying to do too many things. It had been designed to be both a mail transport agent (MTA) and a local delivery agent (MDA). With SMTP forwarding, it could get out of the MDA business and be a pure MTA, handing off mail to other programs for local delivery just as sendmail does.

Why mess with all the complexity of configuring a mail delivery agent or setting up lock-and-append on a mailbox when port 25 is almost guaranteed to be there on any platform with TCP/IP support in the first place? Especially when this means retrieved mail is guaranteed to look like normal sender-initiated SMTP mail, which is really what we want anyway.

There are several lessons here. First, this SMTP-forwarding idea was the biggest single payoff I got from consciously trying to emulate Linus’ methods. A user gave me this terrific idea – all I had to do was understand the implications.

Emphasis mine.

I really think the Twitter-centric design is fundamentally more effective than the whole Common Platform effort. While I think both the Common Framework and its top competitor, the Indivo X modular architecture, have some value, they are fundamentally fighting a losing fight, trying to get the masses to use PHR systems to log data. Forcing each application developer to write code to a separate API which does exactly the same thing in a different way is a non-starter. The only time that happens is when developers have a big motivation. An API is only useful when it has users behind it, and let’s be honest, HealthVault and Google Health do not have users. Neither does Indivo X (which is still alpha/beta), nor any implementation of the Common Platform. The only PHR systems that actually have any users are the MyHealtheVet application from the VA and the Kaiser PHR. Both of those applications are gateways into deep connections with those respective integrated healthcare delivery systems, something Google Health and HealthVault are not (although they might be someday soon).

My grandfather once advised me “Never play a man at his own game”. I have taken that to heart. I now live by a modified form of that advice:

If I am failing and cannot see why: change the game or change the rules.

That is why I work with Open Source. It allows me to change the rules. I want to thank Robert Wood Johnson for their commitment to Open Source. The information that their project released made the application that I built for my friend more capable. It allowed me to flesh out the design and prevented me from building a special purpose application. Thanks to Michael Botsko for making a good jQuery Form builder. People like him build tools that let people like me try and make a difference.


Why I am supporting my father for Congress

My father, Clayton Trotter, is running for Congress in San Antonio, TX.

He has won the Republican nomination to run against incumbent Democrat Gonzalez. He has the support of the local Tea Party.

My father is a true conservative both socially and fiscally. A federalist in the tradition of Ron Paul. My personal opinions often go against my father. Like the majority of Americans I tend to be financially conservative but socially liberal.

I doubt the rest of my immediate or extended family will be following my lead. They do not support my father’s complete conservative bent. They seemed shocked to learn that I would be supporting my father. I am also very close to many liberal friends who might be baffled by my decision. How could I support my father when we diverge on so many issues?

The answer is simple. My father, whatever his political stance, is far more qualified to represent San Antonio and Texas in the United States Congress. It is not because he is a legal scholar of the first rank, though he is, and it is not because of his conservative politics (which I can assure you are sincere).

I am supporting him because he knows the price of war. My brother John Trotter (Byron to his family) was killed in Iraq during Fallujah II while fighting in nearby Ramadi.

Losing him broke my heart. It broke my father’s heart. It devastated my family.

I will support my father’s campaign for Congress because his broken heart qualifies him to decide whether the United States should go to war. His broken heart qualifies him to monitor and approve defense spending. My father is qualified to prevent a defense spending program that allowed defense contractors to make billions, even as national guardsmen were self-armoring vehicles in Iraq. I believe that he will read those bills a little more carefully, that he will pay closer attention when generals and admirals testify, and that he will work harder than the thousands of congressmen who have no concept of the real consequences of even their smallest decisions. I know that he will do this in peacetime as well as during war, so that our soldiers, sailors, marines and airmen have what they need before they are sent to fight for us. I know that he will not tolerate the Washington bullshit that puts our service members at risk. I know that he will not put politics, even Republican politics, before the military. I know that he would happily give up his political career to save the life of even one soldier; someone else’s Byron.

Our minor political differences pale in comparison to this single issue.

My father did not know it, but he prompted me to write this post by leaving the following message on my Google Voice account. He decided to visit my brother’s grave at Arlington National Cemetery today (the anniversary of 9/11) on his visit to Washington. While he was there he left this message for me. I am publishing this without his permission, because I want to give you insight into the man he truly is. I want you to see him as I see him: formidable, but also deeply vulnerable. His own words are the deepest endorsement I can make for him. I hope he does not mind too much.

Clayton Trotter 9-11 Call to Fred Trotter

Like all politicians, he needs contributions to win, please consider helping him.



Personal Science

Recently I have been approached by a clinic in Austin that operates under the assumption that there is a causal relationship between childhood vaccinations and autism.

This is not my first foray into the thick of bio-ethical debates. I have, in the past, advised both Planned Parenthood and Catholic clinics on how to use open source healthcare software.

My policy for organizations like this is simple: I almost always help them. My software or software ideas can improve the experience of patients at any clinic, even if that clinic is taking a position on an ethical issue that I am unsure about or opposed to. Most importantly, there is nothing that I can do to change the position of the clinics in question, one way or another.

I hope that in the end, open source software will help to resolve some of these ethical debates by providing a cheaper means to get better quality data. While my opinions cannot change policy, better data can.

So will I help this clinic? Probably. Will I allow this clinic to advertise my help as any kind of endorsement? Definitely not. Which is basically the same position I take on any reasonably complex bioethical issue where I can see both sides of an issue. Of course many in my community would say that members of the anti-vaccine community do not deserve this kind of benefit of the doubt.

Before climate-gate I might have agreed. But now I am much more sympathetic to arguments that run contrary to modern scientific consensus. I feel very betrayed that centrally referenced climate data was fudged by respected scientists in order to support a very specific conclusion. Because of the pharmaceutical corporate interest, I am afraid that vaccine safety data might have been similarly fudged.

It is my hope that by providing a clinic with a dramatically different agenda than the mainstream medical community with cheap and effective tools to do advanced data gathering and analysis, I might provide them with a kind of truth-over-pressure. If vaccines can cause autism, then they should be able to generate some reproducible data that shows that. If vaccines do not cause autism, then by giving this clinic better data tools I hope that I might be able to create a kind of ideological implosion within the organization. I hope that I am not enabling an organization that is torturing kids with invasive, traumatizing procedures for nothing. But unfortunately I am not in a position to make that determination.

I hope, truly, that I am right to take this approach. I hope that the assumption that transparent code plus transparent data can create pressure to find the truth for other difficult issues is right. It feels, more and more, like I am betting more than my career on this idea of open source software in healthcare… I am betting my conscience too.

I would like your comments about my approach generally and about this situation specifically.

VistA modernization must use MUMPS

Once more, the debate about “whether” to migrate VistA from MUMPS has come up again in the mainstream press.

This always makes me sad because it shows just how fundamentally ignorant people are of what VistA is.

So let’s get something straight. If you are not using MUMPS, in some form or fashion, it is not VistA. It is a new software project. New software projects to develop comprehensive EHR solutions do not work. Ever. That is called “Big Bang Development” and it is utterly doomed to fail.

In order to create an EHR system you have to grow one. You start with a system that is not comprehensive, you use it anyway, and then it grows into something that is a comprehensive EHR system. You cannot take a comprehensive EHR and assume that you can re-write it from scratch in another language and that it will work. That is just unfathomable.

This has been tried, several times, and consistently failed.

So the reason that it is not “an option” is that it will fail. Thinking about it as an option is simply madness.

It is very much like saying, “We need the Linux kernel to improve, so we will recode it in Java… not enough people are trained in low-level C programming.” The folly and hubris should seem clearer now, perhaps?

You might try drastically reinventing what MUMPS is, like ClearHealth, but you cannot simply “get away from it”.

Another example might be “New York has proven that the street + subway system is effective, the city planners of Venice should adopt that in place of the canal system that they currently use. Obviously New York shows that these modern features are capable of moving far more people…” The reason Venice does not consider a subway system is that it -cannot- work. The city planners there know that, so they never try.

It is ironic that people who say “we should move away from MUMPS” consistently regard those of us who actually understand the architecture and design of the system, and who insist that we continue with MUMPS, as a kind of “particularly obstinate political faction”. From the link above:

“Is MUMPS the right entity? I think the obvious answer is ‘no,’” Meagher said. “It just happens to have a bunch of very committed people who want to stay in that environment.”

When an engineer says “I can think of no way to achieve near light speeds in our lifetime”, he is not taking an obstinate political position. It is not “pro-light-speed” vs “against-light-speed”. The engineer is taking a position based on what he understands to be achievable.

When I say “VistA must stay with MUMPS”, I do so based on the only relevant evidence: efforts to move away from MUMPS have consistently, and expensively, failed. I do not like MUMPS at all, but I have a pretty solid understanding of software engineering, and you do not simply migrate to a new language for a codebase as large as this.


On Internet Marketing

Recently I have had several people ask me for advice and counsel on how to do Internet Marketing. It looks like my day job is considering taking the plunge as well. As a blogger, I know an opportunity to kill two birds with one blog post when I see one. So here are my thoughts on Internet Marketing in the age of social media.

Some context

First, a little history. It used to be that Internet Marketing was all about communicating effectively with a web site and email communications.

For a website, the general advice was that you wanted it to be important in Google’s eyes, essentially the process of SEO. You wanted to make sure that your domain name was easy to type and easy to spell. You wanted to make sure that your users could find what they wanted on your web site. You wanted great analytics tools so that you could track how your web site was being used.

For email communications you had to decide if you wanted to have only an outgoing email campaign (broadcast only), or a mailing list (communication between everyone). If you wanted an email campaign, you wanted to make sure that you had beautiful HTML emails that degraded gracefully into text emails. You needed to be sure that your HTML emails worked in the most common mail clients (harder than it sounds). For a mailing list you wanted to make sure that no one was added by mistake, and that no one was spamming people through your list.
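“Degrading gracefully” comes down to sending a `multipart/alternative` message carrying both a plain-text part and an HTML part, so that clients that cannot (or will not) render HTML fall back to the text. A minimal sketch using Python’s standard library; the addresses and copy are placeholders:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# In a multipart/alternative message, mail clients prefer the LAST part
# they can render, so the plain-text fallback is attached first.
msg = MIMEMultipart("alternative")
msg["Subject"] = "September Newsletter"
msg["From"] = "news@example.com"      # placeholder addresses
msg["To"] = "reader@example.com"

text = "Our September news: the beta is open.\n"
html = "<html><body><h1>September News</h1><p>The beta is open.</p></body></html>"

msg.attach(MIMEText(text, "plain"))
msg.attach(MIMEText(html, "html"))

print(msg.as_string())
```

The hard part the sketch skips is the “works in the most common mail clients” requirement: each client renders HTML differently, which is why campaign HTML tends toward tables and inline styles.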

Anyone who knows about these types of marketing systems can tell that I am barely scratching the surface on these issues. Moreover, it is also clear that while so-called “Social Media” has become really important, these older modes of communication are not less important, they are just… older.

So that is the backdrop, in brief, for the Social Media revolution. What is the big deal about Social Media? Way out of scope for this post, but I will inline probably the best video proving the point that I have seen. If you have not seen it, then watch it. If you have, then you probably already know why Social Media is a big deal.

The question posed at the beginning of this video is “Is it a fad or is it a revolution?” As is often the case with questions like that, the answer is “Yes”. Social media has lots and lots of people connecting meaningfully with lots and lots of people, but that does not mean that you will be able to get your message across using Social Media. All it means is that there are people there. It’s a lot like Lubbock, Texas. For whatever reason, there are hundreds of thousands of people living in what appears to be a desert. Why would you want to live there? Because there are hundreds of thousands of people there. There is probably a reason why the original settlers founded that city, but no one moves there now for the scenery; they move there because there are already people living there. This is much different from something like Las Vegas. It’s a city in the desert that was built specifically so gambling could be legal. That is why people moved there. (I also do not understand Phoenix…)

So the next question is “What is your message?”

Your Message

Social Media people often “sell” Social Media as “the new business requirement”. They say things like “You have to be on Facebook” or “You need to have a Twitter account”. But that is really not the first question. The first question is “What is your message?”. Unless you can define your message, clearly, in a sentence or two, nothing else I am going to say is going to make sense. For fun, and because we are going to talk about Twitter soon, see if you can put your message into 140 characters. That is basically two short sentences or one really long one. It’s OK if you need to go over a little, but if you need four or five 140-character blocks, then that should be insight that you have more than one message. That is OK, but you need to recognize that you might need to follow significantly different strategies for each of your different messages.

So do you have your message(s) in your head? OK then.


To make an impact you have to learn to use the Internet Marketing tools well, and then you have to apply them in a meaningful way. This is a lot like a carpenter’s toolbelt or a musician’s set of instruments. First you have to master the tool and understand the deep implications of the subtle details of each given tool. Just because a tool is a type of hammer does not mean you can use it to mount photos (imagine using a sledgehammer to tap a nail into drywall). Just because it is an instrument does not mean that you will fit in with a given band (imagine bringing a tuba into a rock band). The first level of tool mastery is understanding how to use the right tool for the right job. The second stage of mastery is knowing when to ignore what you learned in the first phase (for instance, Ska is a movement within rock music to embrace brass instruments).

Note that true mastery of a tool is being able to use the tool to do something else amazing. When Michelangelo was painting the ceiling… there were thousands of painters who knew how to use a paintbrush as carefully as he did. But they were probably painting signs, or the sides of barns. The ability to use the paintbrush is only the first step towards being Michelangelo.

This should sound obvious. But here is how I think this basic tool mastery is playing out in Internet marketing.

Phases of Online Marketing Tool Use

OK, so what do these phases mean? First I should admit that I was inspired to make this chart by two different sources. One is Meatball Sundae: Is Your Marketing out of Sync? by Seth Godin. The other is a blog post entitled The multiple phases of social media integration, which is where I borrowed my three phases (of course, as a computer scientist, I must count from zero).

  • Level 0 is nothing. If you read this article and go “What is Facebook?” or “What is Twitter?” then this level is where you are. No problem, I will try and help you out with lots of good links.
  • Level 1 is using the Internet as a megaphone. This is when you treat your web site, Facebook page, Twitter account or whatever as a mass media device. You use it the same way people use radio, TV or newspapers: to send messages out to lots of people all at once.
  • Level 2 is using the Internet as a campfire. When you sit around a campfire and talk, it shifts between you speaking to the group (like a megaphone) and the group speaking to you, one at a time. The group also speaks about you, in front of you. It enables public conversation in lots of different directions. In real life, campfires are a great time and place to do this, because the typical night-time acoustics allow for a large group (10 to 30 people, even) to participate in a single conversation. But this does not scale. The whole point of Social Media is that you can have a campfire chat with hundreds, thousands or even millions of people all at once.
  • Level 3 represents full tool mastery. But this does not automatically mean that you get to paint the Sistine Chapel. It just means you know how to use the tool.
  • 3a is named after Blendtech, a company which has successfully used Social Media to create a Sistine Chapel (more in a moment).
  • 3b is someone who is using the tools well, to do OK things, but is not doing anything truly meaningful. This would be like the painters who, in 1512, were painting portraits or landscapes and are today forgotten. But they made a good living and their customers were happy.
  • 3c is like someone who was drawing graffiti on walls in 1512. No matter how pretty a picture painted at night on a barn might be, in the morning it will be whitewashed. The skill is irrelevant; it is a matter of message and medium match-up.

Obviously, what everyone wants is to make an impact with their marketing: to leave people with a message burned into their minds, and happy that it happened. Once people see the Sistine Chapel (on my bucket list) they will never think about it the same way, and they will never forget the experience.

Is this possible with Internet Marketing? Yes. I will give you two examples.

First, if you have bought a book recently online and you immediately typed in then you have experienced this effect. The word “Amazon” has nothing to do with books. Yet when you want to buy a book on the Internet you probably go there automatically. Why? Because you have had a Sistine Chapel-style experience there and you will always remember it. Note that this is a great example of a company that was able to achieve this with just a website and without any kind of Social Media. eBay and Google are other good examples.

Rather than just talk about the second example I will show you. Blendtech is a company that makes really good blenders. That is their message. That is what they want burned into your brain. After you watch the following videos, you will always remember that Blendtech is a company that makes really good blenders. You will be unable to remember the name of any other blender manufacturer, but you will never again think about where you would get a really good blender…   if you needed a really good blender.  Please watch the following two episodes of “Will it Blend”.

What is a Meatball Sundae?

It’s two things that are great by themselves but still do not go together: meatballs and whipped cream/chocolate. This is the worst-case scenario for Internet Marketing efforts. This is what happens when you fail to recognize that Internet Marketing and/or Social Media (two terms for the same things nowadays) really does change things deeply in your industry, but you are unwilling to make the fundamental changes needed to make the leap.

This is actually a fundamental mistake that happens often in Health IT, which I like to call “Technology as Paint”. The basic notion is that technology can be liberally applied to make any existing thing better. This is the way you use paint. My wife and I recently bought a desk for $35. It was banged up and looked awful. We painted it. Now it looks like it cost $350. Paint is awesome like that!

But technology is not paint. You cannot take something that works without technology, merely make it “online” or “computerized”, and assume that it will be better than the original system. The cardinal example of this in health IT is the we-are-going-to-computerize-the-dumb-doctors plan. Here is how the plan unfolds:

Doctor: “Hey business man, I want you to computerize me.”

Business Man: “That’s great! I have my favorite coder here with me, and we can help.”

Coder: “I can easily computerize you! I just did it for a Gas Station last week! No more paper forms at the Gas Station!! All I need to do is see all of your paper forms, and then I will computerize you by making computer versions of those forms.”

Doctor: “OK, here are the ten forms I regularly use.”

Coder: “OK I will be back in a week with your computer system built!!”

Five years pass…

Coder: “The system is almost ready, I have just finally got the ontology mapping tool together, you can go live next week!!”

Doctor: “You are fired. You have been charging me for five years to code and you have nothing to show for it. I still have to use paper because your system does not even do 10% of what the paper system does. Now I have five years worth of data in both paper and electronic records, and I can no longer afford to maintain the electronic system. I am sooo screwed, but at least I am going to stop paying you!”

This happens again and again and again in Health IT because so many technologists view technology as paint: standard technology, liberally applied, solves all problems.

Seth Godin’s book is really required reading. It details, very explicitly, how Social Media is not technology paint for marketing purposes. Any good summary of his points will show that you have to figure out if, and when, your message is right for the Internet medium in question. So when you hire someone to help you with Social Media, and they fail to show you how a given Social Media platform is good for your message, then they have failed. A pretty good sign that you are getting bad advice here is that they recommend the usual suspects. If they say “you should be on Youtube, Twitter and Facebook” without discussing how your message will play in those environments, then you need to take a step back.

It would be much better for you to do what Blendtech did, which is to find the one medium that allows you to create a super-compelling version of your message, and make that medium into the “Sun” in your marketing “Solar System”. Sure Blendtech uses Twitter, and Facebook, but they do that to funnel people to their awesome videos, which in turn funnel people into buying an awesome blender.

Social Media Strategy as a Solar System

Message and Medium as a Solar System

What follows is a little more conjecture. I am pretty darn sure about the notions I have explained above, but without dealing with a specific message, it is difficult to know what the right center-of-gravity medium might be. Still, here are some guidelines that make sense to me:

  • If your idea is best communicated in pictures, try Flickr or Picasa. They have really advanced tools that allow you to view a series of constantly updated photos as a stream on another site. A big hint when using pictures is that pictures with people in them are almost always more interesting than pictures without people. You can make Twitter, Facebook, and a plain old web page all follow those photo streams. You might want to use Flickr/Picasa as the center of gravity if before-and-after photos are more compelling than a video, for instance. Lots of people have made this approach work.
  • If your points make sense as really short catch-phrases or have a very important real-time component, then Twitter (or a less popular alternative with more freedom) might be for you. Shit My Dad Says, which is now a television show and a book (pretty good planets!!) is a good example of the catch-phrase style Twitter feed. In Portland there are some food carts that you can only find by following them on Twitter. Note that you can easily add Facebook and Google Buzz as planets merely by propagating your status updates to those platforms.
  • If you want a deeper social engagement that includes videos, text, pictures or perhaps an application that you are writing yourself, perhaps Facebook is for you. A good question to ask about Facebook is “Am I anything like Farmville?” Again you can easily make Facebook updates propagate across the other platforms.
  • If you are trying to make a series of points that require carefully constructed arguments, then you need a blog. This gives you the ability to tie in all kinds of other content (like I did with Youtube videos here) to make very specific and complex points. But if you make enough of these points, then perhaps you are really slowly writing a book, and you should consider self-publishing it on CreateSpace or Lulu.
  • If you have already written a book, perhaps you need to split it apart into a blog.
  • Videos can be tremendously engaging and personal. If you have a story to tell, a parable of some kind, then this is the right medium. Even just a camera pointed at you can be very, very compelling if your story is good enough. You should be looking into Youtube, which is the king of the space, but perhaps also a more specialized site if you want to show films of computer programs or have other specific video hosting requirements. Again, you can turn your video feed into Facebook, Twitter, and Google Buzz integration.
  • If for some reason your content would work really really well next to gmail, you might look at Google Buzz.
  • If you want to create complex person-to-person engagement between lots of people around a particular topic that they have a high level of interest in, then I would consider either email mailing lists, or online forums, or something like Google Groups, which is a pretty good fusion of both. Getting the “full message” in your email Inbox is pretty valuable.
  • If you want to have things showing up in email Inboxes, but do not want to enable communication between the recipients then you probably want an email broadcasting service like MailChimp.
  • If you want to engage with professionals of one kind or another, LinkedIn is where you should start.
  • If you want to collaboratively generate written content, you need a Wiki.
  • Face-to-face events can now be deeply connected to the Internet. I like using Eventbrite to schedule things like conferences and a meetup service for regular meetings, and when a meeting is really important, it should be live streamed.
  • Sometimes, what you need is a simulated three-dimensional space. Frankly, I have trouble understanding when this is a good thing… but if you see value in it, you want to use Second Life.
  • If you want a Facebook style social network that you control you want Ning.
  • If you want full control, including source code for your social network, then you want one of these.
  • If you have a health IT application that needs to interface with Doctors socially, then you want to work with

I hope this is helpful to the people that I am trying to counsel on Social Media. It’s not just about using it, it’s about finding a way to use it in a compelling way!


Funambol in healthcare

One of the things that I love about conferences like OSCON is that you meet people who are doing really interesting things coming out of left field. I often feel like I “know everyone” in Open Source healthcare, but every time I hear about something like this I am reminded just how big the world is. People are reworking Open Source tools to work in healthcare all the time!

The most recent example is from the Funambol project, which was built to sync cell phone data, like calendars and contacts. But the Funambol teleport project instead uses the stack to move healthcare data around. I would go into detail, but there is no need, since Simona does a much better job:

OpenStack and Software Freedom in Healthcare IT

Doctors and other healthcare providers are the stewards of their patients’ data. But what happens when they lose control over that healthcare data? Most people focus on what happens when that private data becomes too available. But far more commonly, healthcare data becomes trapped. Far too often, it becomes buried in one way or another, lost forever and useless to patients.

I am probably the most vocal proponent of the notion that software freedom, the heart and soul of the Open Source movement, is the only way to do healthcare software. For years I have tried to highlight the threat posed by vendor lock-in with healthcare software. But vendor lock is not the only way that healthcare data can become buried. Ignacio Valdes was the first to make this case clearly against ASP healthcare solutions with his post about how Browser Based EMR’s Threaten Software Freedom. That was written in 2007.

So you can imagine the types of concerns Ignacio and I had as we built Astronaut Shuttle (very much beta) together. Ignacio had the VistA EHR chops and I had enough cloud experience to create the first-ever cloud-based EHR offering. It’s a simple system: you use a simplified web interface to launch cloud-based instances of an EHR. The main difference between this kind of web interface and something like RightScale is that the launching system performs whole-disk encryption, allowing you to ensure that Amazon cannot access your healthcare data. As far as I know, no one else has built anything like this but us (love to hear otherwise in comments).

Why are we some of the few people trying things like this? For one thing, encryption is pretty difficult to do in the cloud: there are lots of approaches, and it is pretty easy to brick a cloud instance with an improper encryption configuration.

But more importantly, there is a perception that storing private healthcare data in the cloud is a bad idea, dangerous because it means putting all of your eggs in one basket.

Given how concerned Ignacio and I were about vendor lock, and ASP lock, you can imagine our feelings about cloud lock. We had to be sure that our customers, doctors and other clinicians, would be able to restore Linux images containing precious EHR data using off-site backups.

When we looked out across the available cloud options we decided to implement our service using Amazon’s EC2 service, specifically because of Eucalyptus, an open source implementation of the Amazon cloud hosting infrastructure.

However, we have been deeply concerned about this approach. Currently, you might say that Amazon has a “friendly” relationship with Eucalyptus, which of course means that Amazon has not crushed it like an itty-bitty bug. For Amazon, being able to point out that there were FOSS implementations available made it easier for EC2 to acquire certain customers. At the same time, by refusing to treat the EC2 and other AWS APIs as open standards, or to specifically state that it would not sue an open source implementation of its API, Amazon could always ensure that Eucalyptus would never be a threat.

“Wait a minute!” you might say… “Amazon is a Linux-friendly company! They would -never- betray the community by going after Eucalyptus…”

I think the Open Source community needs to wake up to corporations whose basic legal stance towards Open Source projects is to leave open the “smash if they succeed” option.

IBM has been a “friend” to the community for years. IBM even promised not to use specific software patents against us. They assured us that they are not a threat. But then they broke that promise. They broke it because someone in the community decided to implement software that threatened to break their monopoly on mainframe implementations. IBM turned on our community just as soon as our freedom started to threaten their bottom line. You are kidding yourself if you think Amazon will lose a billion dollars to Eucalyptus without reacting. Amazon has been very aggressive in acquiring software patents and will use them if Open Source implementations ever really get good.

I think Eucalyptus is an awesome project, but it lives at the whim of a corporation that tolerates it precisely because it is not a business threat.

It was with great trepidation that Ignacio and I built a health data infrastructure that we knew relied on the whim of a really big bookstore. (When you say it like that… you can see the problem more clearly)

With that said, I am happy to support and endorse the new OpenStack project. OpenStack is a move by Rackspace Cloud, the number one competitor to Amazon, to completely Open Source their cloud infrastructure. They will be releasing this work under the Apache license.

Open Source licenses are the only currency of trust that I, as a health software developer, can rely on to ensure that no one can ever trap health data with software that I have recommended. “Probably won’t trap” or “Open Source friendly” simply do not cut it after IBM. Simply put, a full Open Source release is the most extreme thing that Rackspace can do to win my trust in their cloud infrastructure.

I have also been discussing with the Rackspace team the importance of building support for cloud-initiated encryption and cloud audit (thanks for the tip samj) into OpenStack. These are must-have features to make healthcare data in the cloud a viable option.

As soon as we have the dev cycles available, we will be moving Astronaut Shuttle over to the Rackspace Cloud. I invite anyone who gives a damn about Software Freedom, or health information software generally, to follow us over.


NHIN and others at OSCON

I am just home from the first-ever health IT track at OSCON. The quality of the content was simply amazing, and soon you will be able to see the many talks online (thanks to Robert Wood Johnson for paying for the videos).

As I think about what I will be blogging about, I wanted to post some quick links to those who are already thinking about what was said and what it means. First, the conference organizer, at least from the health IT point of view, was Andy Oram. He already has two posts: one on the first day, and one highlighting the VistA controversies exposed at the conference.

Most of all, I wanted to point to this awesome interview with the leaders of the NHIN open source projects: NHIN CONNECT and NHIN Direct.

Tolven invited to privacy party

The Open Source Tolven project has been invited to the Privacy Technology showcase for the HITPC S&P Tiger team.

This is well-deserved recognition. Tolven has an extremely innovative architecture that dispenses with many of the bad assumptions that other EHR platforms make. The first is the assumption that an EHR platform should be only an EHR platform: Tolven is a combined EHR and PHR.

The second innovation is a well-thought-out encryption-at-rest strategy.

Hopefully a recording of the presentation will be available after the meeting.

Empathy over implementations and another straw man

I think the recent work of the NHIN Direct implementation teams has been amazing. But I think that, by implementing, all of the teams have succumbed, to different extents, to a common software developer error: they are implementing rather than empathizing with their users.

There are two groups (possibly three if you count patients, but I will exclude them from consideration for just a moment) of potential NHIN Direct end users. But before we talk about that, I would like to talk about Facebook and Myspace.

Or, more accurately, I want to remember the controversy when the military chose to block users of Myspace but not Facebook. This caused quite a stir because, at the time, Myspace was very popular with the high-school-educated enlisted personnel, while Facebook, which even then was “higher technology”, was more popular with the college-educated officers. Controversy aside, it showed a digital divide between different types of users.

Ok, back to Healthcare IT.

We have something strangely similar to this in the United States in the level of IT adoption among doctors.

On Having

Most doctors are low-health-information-technology. Most doctors work in small practices. Most small practices have not adopted EHR technology. Even those small practices that have adopted EHR technology have often done so from EHR vendors who have not focused on implementing the tremendously complex and difficult IHE health data interchange profiles. That is one group of doctors. You can call them Luddites, late adopters, low-tech or “the have-nots”. No matter what you call them, the HITECH portion of ARRA was designed to reach them. It was designed to get them to become meaningful users of EHR technology. (Note that “EHR technology” is kind of a misleading term, because what it has essentially been redefined to mean is “software that lets you achieve meaningful use”. People still have lots of different ideas about what an “EHR” is, because people still have lots of disagreements about the best way to achieve meaningful use.)

I have to admit, I am primarily sympathetic with this user group. I originally got into Open Source because my family business (which my grandfather started) was a Medical Manager VAR. Our clients loved us, and they hated the notion of spending a bucket load of money on an EHR. I started looking for an Open Source solution to our EHR problems, and when I could not find what I needed, I started contributing code. There is a small cadre of people working on the NHIN Direct project who, for their own reasons, share my basic empathy with this type of clinical user, the “have-nots”.

But the majority of the people working on NHIN Direct represent the whiz-kid doctors. These are the doctors who work in large clinics and hospitals that have found moving to an EHR system prudent. Sometimes, smaller groups of doctors are so tech-hungry that they join this group at great personal expense. These doctors, or the organizations that employ them, have invested tremendous amounts of money in software that is already IHE-aware. Often groups of these doctors have joined together to form local HIE systems. It is fair to say that if you are a doctor who has made an investment in technology that gives you IHE systems, you paid a lot for it, and you want that investment to pay off. We can call these doctors the “whiz-bang crowd”, the EHR lovers, or simply “the haves”.

Today, in the NHIN Direct protocol meeting, we had a polite skirmish (much respect for the tone everyone maintained despite the depth of feeling) between the representatives of the “have-nots”, like me, Sean Nolan, David Kibbe and others who are thinking primarily about the “have-nots”, and the vendors of large EHR systems, HIEs and other participants in the IHE processes, who tend to represent the “haves”.

To give a little background for my readers who are not involved with the NHIN Direct project:

A Little Background

NHIN Exchange is a network that anyone who speaks IHE can join. If you speak IHE, it means that you should be able to meet all of the requirements of the data exchange portions of meaningful use. It also means that you pretty much have to have some high technology: a full-featured EHR or something that acts like one. IHE has lots of features that you really, really need in order to do full Health Information Exchange right. But it has never been deployed on a large scale and it is phenomenally complex. ONC started an Open Source project called NHIN CONNECT that implements IHE and will form the backbone of the government’s IHE infrastructure. Beyond CONNECT, both the Mirth guys and OHT/MOSS have substantial IHE-related software available under FOSS licenses. There are lots of bolt-on proprietary implementations as well. IHE is complex, but the complexity is required to handle the numerous use cases of clinical data exchange. Exchanging health data is vastly more complex than exchanging financial information. But to use IHE you have to have an EHR. Most doctors do not have an IHE-aware EHR.

ONC knew that HITECH would convince many doctors to invest in EHR technology that would ultimately allow them to connect to NHIN Exchange. However, they also knew that many doctors, possibly most doctors, might choose not to adopt EHR technology. Someone (who?) proposed that ONC start a project to allow doctors to replace their faxes with software that would allow them to meet most, but not all, of the meaningful use data interchange requirements without having to “take the EHR plunge”. This new project could meet all of the requirements that can be met with a fax-like or email-like “PUSH” model. I explained some of this in the power of push post. This project was called NHIN Direct.

What’s the problem?

So what is the problem? A disproportionate number of the people who signed up to work on the NHIN Direct project are EHR vendors and other participants who represent lots of people who have made extensive investments in IHE. In short, lots of “haves” representatives. Some of the “haves” representatives proposed that NHIN Direct also be built with the subset of IHE standards that cover push messages. But remember, IHE is a complex set of standards. Push, in IHE, is much more complicated than the other messaging protocols that were under consideration. I have already made a comparison of the protocols under consideration.

IHE is really good for the “haves”. If you “have” IHE, then all kinds of really thorny and difficult problems are solved for you. Moreover, one of the goals of meaningful use is to get more EHRs (by which I mean “meaningfully usable clinical software”) into the hands of doctors. The US, as a country, needs more people using IHE. It really is the only “right” way to do full health information exchange.

But IHE is not trivial. It is not trivial to code. It is not trivial to configure. It is not trivial to deploy or support. It is not trivial to understand. It could be simple to use for clinicians once all of the non-trivial things had been taken care of. But realistically, the number of people who understand IHE well enough to make it simple for a given clinical user is very very few.

The other options seemed to be SMTP or REST-that-looks-and-acts-just-like-SMTP-so-why-are-we-coding-it-again (or just REST). Both of these are much, much simpler than the IHE message protocols. These would be much simpler for the “have-nots” to adopt easily and quickly. Of course, they would not get the full benefit of an EHR, but they would be on the path. They would be much better off than they are now with the fax system. It would be like the “meaningful use gateway drug”. It would be fun and helpful to the doctors, but leave them wanting something stronger.

The NHIN Direct project fundamentally creates a tension with the overall NHIN and meaningful use strategy. As a nation we want to push doctors into using good health IT technology. But does that mean pushing them towards the IHE-implementing EHRs on the current market, or should we push them towards simple direct messaging? The answer should be something like:

“if doctors would have ordinarily chosen to do nothing, we would want them to use NHIN Direct, if they could be convinced to be completely computerized, then we should push them towards IHE aware clinical software that meets all of the meaningful use requirements”.

Given that split, the goal of NHIN Direct should be:

“For doctors who would have refused other computerization options, allow them to meaningfully exchange health information with as little effort and expense on their part as possible”

I, and others who realize just how little doctors like this will tolerate in terms of cost and effort, strongly favor super simple messaging protocols that can be easily deployed in multiple super-low cost fashions. I think that means SMTP and clearly rules out IHE as a backbone protocol for “have-nots” that are communicating with other “have-nots”.

Empathizing with the Haves

But the danger of focusing on just the requirements of your own constituents is that you ignore the impact that your design will have on the users you do not empathize with. Both the representatives of the “haves” and the “have-nots” like me have been guilty of this. After listening to the call I realized that the EHR vendors pushing IHE were not being greedy vendors who wanted to pad their wallets. Not at all! They were being greedy vendors who wanted to pad their wallets -and- protect the interests of the doctors already using their systems. (That was a joke, btw; I really did just realize that they were empathizing with a different group of doctors than I was.)

If you are a “have” doctor, you have made a tremendous investment in IHE. Now you are in danger of getting two sources of messages. You will get IHE messages from your other “have” buddies, but you will have to use a different system to talk with all of the “have-nots” who want to talk with you. That means you have to check messages twice, and you can imagine how it might feel for one doctor to be discussing one patient across the two systems. Lots of opportunity for error there.

From the perspective of the IHE team, making the “have-nots” accept the concession of dealing with IHE, rather than cheaper, simpler, easier SMTP, reduces everything to one messaging infrastructure and eliminates balkanization. No longer will any user be faced with using two clinical messaging systems; instead they can have only one queue. Moreover, since we ultimately want “fully meaningful users”, it is a good thing that the IHE-based NHIN Direct system would provide a clear path to getting onto the NHIN Exchange with a full EHR. From their perspective, more difficult adoption for the “have-nots”, and the resulting loss of adoption, would be worth it because it would still get us faster to where we really need to be, which is doing full IHE Health Information Exchange with everyone participating.

Everyone wants the same thing in the end, we just have different ideas about how to get there! I believe that we should choose protocol designs for NHIN Direct that fully work for both sets of clinical users. I think we can do this without screwing anyone, or making anyone’s life more difficult.

The new empathy requirements

I would propose that we turn this around into a requirements list. We need an NHIN Direct protocol that:

  • Allows the “have-nots” to use NHIN Direct as cheaply as possible. That means enabling HISPs with simple technology that can be easily deployed and maintained using lots of different delivery models (i.e. on-site consulting and server setup, ASP delivery, etc.).
  • Allows the “haves” to view the world as one beautiful messaging system based on IHE. They should not have to sacrifice the investment they have made, and they should not have to deal with two queues.

My Straw Man: Rich if you can read it

The IHE implementation group believes that all SMTP (or REST-like-SMTP) messages that leave the “have-nots” should be converted “up” into IHE messages and then, when they get close to other “have-not” doctors, the messages should be converted back “down” to SMTP. Which means that they are suggesting that HISPs that handle communications between “have-not” doctors should have to implement IHE in one direction and SMTP in another direction even though the message will have no more content or meaning after being sent.

The problem with that is that the HISPs that maintain this “step-up-and-down” functionality will have to cover their costs and develop the expertise to support it. This is true even if the “edge” protocol is SMTP. The only approach that will work for this design is an ASP model, so that the HISP can afford to centralize the support and expertise needed to handle this complexity. That means low-touch support, and low-touch support plus high costs translates to low adoption. In fact, doctors would probably be better off just investing in an ASP EHR that was fully IHE-aware. So the IHE model is a crappy “middle step”.

But there is no reason that the HISP needs to handle step-down or step-up as long as it is only dealing with “have-not” doctors. If you allowed SMTP to run the core of NHIN Direct, HISPs could leverage current expertise and software stacks (with lots of security tweaking, discussed later) to ensure that messages went into the right SMTP network. No PHI in regular email. Local consultants as well as current email ASP solutions could easily read the security requirements and deploy solutions for doctors that would send messages across the secure SMTP core. With careful network design, we could ensure that messages to NHIN Direct users would never be sent across the regular email backbone. I will describe how some other time (it’s late here), but it is not that hard.

But you might argue: “that is basically just SMTP as core! This is no different than your original approach. You are still screwing the haves!” Patience, grasshopper.

To satisfy the “haves” we have to create a new class of HISP. These HISPs are “smart”. They understand the step-up and step-down process to convert IHE messages to SMTP messages. When they have a new outgoing message, they first attempt to connect to the receiving HISP using IHE, perhaps on port 10000. If port 10000 is open, they know that they are dealing with another smart HISP, and they send their message using IHE profiles. Some smart HISPs will actually be connected to the NHIN Exchange, and will use that network for delivery when appropriate.

The normal or “dumb” HISP never needs to even know about the extra functionality that the smart HISP possesses. They just always use the NHIN Direct SMTP port (let’s say 9999) to send messages to any HISP they contact. While smart HISPs prefer to get messages on port 10000, when they get an SMTP message on port 9999 they know they need to step up from SMTP to IHE before passing it to the EHR of the end user.
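The negotiation above can be sketched in a few lines. This is only an illustration of the idea, not a real implementation: the port numbers are the made-up ones from this post, the hostnames are invented, and the port probe is injected as a function so the logic can run without a live network (a real HISP would attempt an actual TCP connection, e.g. with socket.create_connection).

```python
# Hypothetical sketch of smart-vs-dumb HISP transport negotiation.
# Ports 9999 and 10000 are the illustrative numbers from the post,
# not real assignments; hostnames are made up.

IHE_PORT = 10000   # open only on "smart" (IHE-capable) HISPs
SMTP_PORT = 9999   # the NHIN Direct SMTP port every HISP supports

def negotiate_transport(receiving_hisp, port_is_open):
    """Pick a protocol for an outgoing message.

    `port_is_open(host, port)` is injected so the logic can be
    exercised without touching the network.
    """
    if port_is_open(receiving_hisp, IHE_PORT):
        # Another smart HISP: send the full-fidelity IHE message.
        return ("IHE", IHE_PORT)
    # A dumb HISP: step down to plain NHIN Direct SMTP.
    return ("SMTP", SMTP_PORT)

# Simulated network: only smart-hisp.example has the IHE port open.
open_ports = {("smart-hisp.example", IHE_PORT),
              ("dumb-hisp.example", SMTP_PORT)}
probe = lambda host, port: (host, port) in open_ports

print(negotiate_transport("smart-hisp.example", probe))  # ('IHE', 10000)
print(negotiate_transport("dumb-hisp.example", probe))   # ('SMTP', 9999)
```

The key design point is that the dumb HISP never has to run this logic at all; only the smart HISP carries the cost of probing and stepping up or down.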

From the “haves” perspective, there is one messaging network: the IHE network. They get all of their messages in the same queue. Sometimes the messages are of high quality (because they were never SMTP messages, but IHE messages sent across NHIN Exchange or simply between two smart HISPs).

Now, let’s look at the winners and losers here. For the “have-nots”, the core is entirely SMTP. As a result they have cheap and abundant technical support. They are happy. The “haves” get to think entirely in IHE; they might be able to tell that some messages have less useful content, but that is the price to pay for communicating with a “have-not” doctor. The “have-nots” will get rich messages from the IHE sites and will soon realize that there are benefits to moving to an EHR that can handle IHE.

Who loses? The smart HISPs. They have to handle all of the step-up and step-down. They will be much more expensive to operate -unless- there is an NHIN Direct sub-project to create a smart HISP. This is what the current IHE implementation should morph into. We should relieve this burden by creating a really solid bridge.

This model is a hybrid of the SMTP-as-core and IHE-as-core models. Essentially, it builds an outer core for the “have not” users and an inner core for the IHE users. From the project’s perspective, those who feel that simple messaging should be a priority for the “have nots” (like me) can choose to work on the SMTP-related code. People who want to work in the interests of the “haves” can work on the universal SMTP-IHE bridge.

I call this straw man “Rich if you can read it”. From what I can tell it balances the core perspectives of the two interest groups on the project well, with places for collaboration and independent innovation. It’s more work, but it does serve to make everyone happy, rather than everyone equally unhappy with a compromise.

Footnotes and ramblings:

Don’t read this if you get bored easily.

I believe that this proposal excludes a REST implementation unless it acts so much like SMTP that SMTP experts can support it easily, which raises the question: why not just use SMTP? SMTP fully supports the basic use case, the code already works, and a change to REST would shrink the pool of qualified supporters.

I should also note that no one, ever, is suggesting that the same program be used for regular email and for NHIN Direct messages. I think we should posit a “working policy assumption” that any NHIN Direct SMTP user would be required to have a different program for sending NHIN Direct messages, or at least a different-colored interface. Perhaps Microsoft can release a “red and white GUI” clinical version of Outlook for this purpose… Sean can swing it… or users could use Eudora for NHIN Direct and Outlook for regular mail. Or they could be provided an ASP web-mail interface.

We might even try and enforce this policy in the network design:

We should use SRV records for the NHIN Direct SMTP network rather than MX records. There are security reasons for doing this (SRV lookups let us require mutual TLS) and, most importantly, it means that no PHI will go across regular email. When someone tries to send an ordinary email message to an NHIN Direct address, their SMTP implementation looks up an MX record for the domain to see where to send the message. If we publish only SRV records in DNS for NHIN Direct domains (or publish an MX record that deliberately points at a host that is not defined in DNS), that lookup fails, which causes an error in the local SMTP engine without transmitting any data. But an NHIN Direct-aware SMTP server or proxy would query for the SRV record, find the correct address, and enforce TLS, so it would be totally secure. Normal email messages to NHIN Direct addresses would break before transfer across an insecure network, but the secure traffic would move right along. Obviously this is not required by the core proposal, but it is a way of ensuring that the two networks would be confused much less frequently. This plan might not work, but something like this should be possible.
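To make the DNS side of this concrete: per RFC 2782, SRV records are published under a `_service._proto` label, so the NHIN Direct-aware proxy's lookup name could be built like this. The `nhindirect` service label and the example address are my inventions for illustration; the post does not name a service label.

```python
def srv_query_name(address, service="nhindirect", proto="tcp"):
    """Map an NHIN Direct address to its RFC 2782 SRV lookup name.

    SRV records live at _service._proto.domain, so mail for
    someone@direct.example.org triggers a lookup such as
    _nhindirect._tcp.direct.example.org. The "nhindirect" service
    label is an assumption for this sketch, not a published standard.
    """
    domain = address.rsplit("@", 1)[-1].lower()
    return "_{0}._{1}.{2}".format(service, proto, domain)
```

An ordinary mail server, by contrast, asks only for an MX record at the bare domain. If no valid MX record exists there, delivery fails locally before any PHI leaves the building, which is exactly the containment described above.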

This is a pretty complex email setup, but SRV is growing more common because of its use with XMPP and SIP. Normal SMTP geeks should be able to figure it out.