NHIN-Direct leans towards HealthQuilt Security Model

My last big project, before the skunkworks project I am doing now for Cautious Patient, was as the Chief Architect at HealthQuilt.

HealthQuilt was a prototype project for a Health Information Exchange in Houston, TX, hosted by UT SHIS (which just won the status of Regional Extension Center under ARRA). My boss at HealthQuilt, project leader Dr. Kim Dunn, will be the director of the new REC. Dr. Dunn built a community of the local “interested parties” in Health IT during the HealthQuilt project. Ultimately, politics (remember, this was pre-incentive) prevented any data from being transferred between organizations using our model before funding ran out. But now the community that Dr. Dunn built will be vital in her new role as REC director.

My job at HealthQuilt was to choose which technologies we would use to prototype the HIE. HealthQuilt was committed to Open Source from the beginning, so I was an obvious choice to handle the detailed technology choices. We spent a lot of time with Houston health information security professionals, along with the crews at Mirth and MOSS, designing a workable trust model.

I am happy to say that just as Dr. Dunn will be able to build on the HealthQuilt community for the Houston-based REC, the NHIN-Direct project may decide to reuse some of the concepts (and perhaps some of the code) that we developed at HealthQuilt. Here are some of the basic, core concepts of the HealthQuilt model.

  • The Health Information Network should be built using point-to-point SSL VPN or HTTPS connections.
  • The trust model should use X.509 PKI Certificates.
  • It should use many (rather than one) Certificate Authorities (CA).
  • Both endpoints of a given VPN tunnel or HTTPS connection must have certificates. This is very different from the PKI model used on the Internet, where servers are generally certified but clients are generally anonymous.
  • This “encrypted Health Internet” should run entirely underneath any healthcare protocols. That means trust is handled first at the network level. If some actor in the network is no longer trusted, CRLs or blacklists will prevent -any- communication with them, rather than relying solely on relatively young implementations of health protocols to provide adequate encryption.
  • The “relatively young implementations of health protocols” should still implement encryption, as though there were no network security in place. (This one is actually Sean Nolan’s idea… more later.)
  • This allows for a natural layering of security, which makes security wonks like me feel all warm and fuzzy.
  • The “core NHIN organization” should have a list of “typically trusted CAs” called “anchor CAs” that it recommends to all network participants. This is similar to the way that normal Internet CAs are “suggested” to you by automatic inclusion in your browser of choice.
  • Individual network participants can also choose to trust other CAs, like those provided by a hospital they are affiliated with.
  • The job of the CAs will be (roughly) to make sure that anyone they issue a certificate to is, in fact, a particular clinical entity (doctor, clinic, hospital, etc.) that has the right to receive and/or send PHI.
  • This means that members of the network do not need to sort out trust relationships on a peer-to-peer basis. They can assume that everyone who the CA trusts is trustworthy, and they can automatically share data with them when a clinical need justifies it.
  • If, for some reason, two members of the network do not trust each other, they can still use a blacklist to prevent communication.
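As a sketch, the node-level rules above map naturally onto standard TLS machinery. Here is a minimal, hypothetical Python illustration (the function names and blacklist contents are mine, not HealthQuilt code) of a node that requires certificates from both sides and refuses blacklisted peers before any connection is even attempted:

```python
import ssl

# Hypothetical node-level blacklist: peers this node refuses to talk to,
# regardless of whether a CA still vouches for them.
BLACKLIST = {"untrusted.example.org"}

def make_node_context(anchor_ca_bundle=None):
    """Build a TLS context for a network node: both endpoints must present
    certificates, unlike the usual web model where clients are anonymous."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject peers with no certificate
    if anchor_ca_bundle:
        # The "anchor CAs" (plus any locally trusted CAs) this node accepts.
        ctx.load_verify_locations(cafile=anchor_ca_bundle)
    return ctx

def may_connect(peer_name):
    """Node-level blacklist check, applied before any TLS negotiation."""
    return peer_name not in BLACKLIST
```

A real deployment would also load CRLs into the context (Python exposes this via `ctx.verify_flags`, e.g. `ssl.VERIFY_CRL_CHECK_LEAF`) so that a CA saying “I do not love you anymore” takes effect network-wide, not just node-by-node.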

The browser providers determine what bar CAs must reach for automatic inclusion in each browser. That “automatic inclusion” is the foundation of the trust model of the Internet. That is what gets you a secure connection to Amazon to buy a book, even though you do not think too much about “how do I know that is really Amazon?”

So why did HealthQuilt come up with this model? We knew that each institution in the Houston area would need to make trust decisions on its own. They would never tolerate us saying “Here are the ten other hospital systems in the network, take it or leave it”. The answer would always be “leave it”. Some of our constituents were very concerned that a blanket trust policy would mean trusting organizations that they do not have a real-world trust relationship with, e.g. Planned Parenthood clinics vs. Catholic charity clinics. In order to participate in the network, they needed fine-grained control over the trust decisions. Most participants planned to trust everyone else in the network, but they did not want to trust that the network itself would remove bad actors in the future. The combination of blacklists (which is how a node can cut off communication with another node) and CRLs (which is how a CA says “I do not love you anymore”) provides both network- and node-level control over dealing with “bad actors”.

Most importantly, the network-level security model is technologically identical to the current Internet trust technology. The policies and the trust decisions are very different, but the technology is basically the same: SSL + X.509. That is good, because it means that the trust issues are not entirely handled at a level where new protocols are being developed. If you rely only on message security, and you discover that one implementation was “leaking” information by encrypting slightly less than what was intended in a given “message”, that could be a real problem. SSL VPNs and HTTPS, using X.509 PKI, are a known quantity (not the same thing as “safe”, mind you). Using that “underneath” the new stuff that the NHIN Direct and CONNECT projects will develop will help ensure that implementation or design mistakes do not automatically imply a broad attack vector.

Moreover, when some advance (e.g. quantum computing) makes the current Internet trust model obsolete, it will have to be replaced with something. Whatever it is replaced with will have to play at least some of the same roles as the current X.509/SSL infrastructure. That means the whole Internet will work with us to upgrade the network trust model.

I should point out that the NHIN Direct team was certainly not sitting idle until I showed up and told them what I had done with HealthQuilt. I think that something very like the basic HealthQuilt trust model would have been embraced in any case. I am just happy to be able to present a package of thought-out ideas to the NHIN Direct team. Ironically, even before I made my suggestions, Sean Nolan, of Microsoft HealthVault, was already arguing against the “single CA, top-down trust model”. Once you make the concession that you are not going to attempt to do trust entirely using CAs and proxy CAs (the top-down model), most of the HealthQuilt trust model is just a series of incremental, obvious choices.

I will be calling this trust model the “HealthQuilt Trust Model”. This is despite the fact that the NHIN Direct trust model seems like it could justifiably also be called the “Microsoft Model”. Microsoft has some really talented technical people, and it makes me feel good to see them reaching the same conclusions that I do, in parallel. Still, I seriously doubt that the new NHIN Direct trust model will ever be called “The Microsoft Model”, since the name does not actually describe the model at all. This is good, because the phrase “Microsoft Model” generally makes the hair on the back of my neck stand up and do the polka. It should also be noted that my original ideas on the HealthQuilt model were pretty useless without adjustment from Ignacio Valdes of LinuxMedNews, my brother Rick, David Whitten of WorldVistA and the VA, several of the Mirth engineers, and Alesha Adamson of MOSS. All of them gave me valuable feedback. It is also important to note that the model has improved substantially in response to the excellent thinking done by Brian Behlendorf and the rest of the NHIN Direct Security and Trust Workgroup.

Still, I will be using the name because it is truly indicative of how the trust model should work. It should be like a quilt: legitimately different ideas about trust and security implemented by different organizations, but despite those differences, still connected. The Internet has shown time and time again that uniformity is not the only way to cooperate.

You can follow what’s happening on the NHIN Direct Security and Trust Workgroup forum. If you are truly a glutton for reading, you can read my posts and the responses.

-FT

OSCON includes Healthcare

Update: I am speaking at the 2010 OSCON.

I am happy to spread the news that OSCON, probably the most important Open Source conference in the country, will have a healthcare track in 2010.

Andy Oram has explained the decision to add a healthcare track to OSCON.

They have asked me to help promote the conference and I want to be sure that our community offers up the very best in talks and technical content. This is a really good way to access the developer mind-share in the broader Open Source community and we need to jump all over it.

I can honestly say that this conference will be vastly more important than the little shindig I am putting on in Houston. If you had to attend just one of the two, then you should probably go to OSCON… God bless you if you can go to both!!

With a healthcare track at OSCON, and a healthcare track at SCALE (DOHCS) we are finally moving towards general Open Source healthcare meetups.

I should take a moment to promote OpenMRS, CONNECT and WorldVistA all of which have great project-focused meetings already.

Happy days!

-FT

Technology vs Policy for privacy

I have long been an advocate of reasonable and measured reaction to “privacy scare tactics”. I have argued, for instance, that it was a good thing that HIPAA does not cover PHR systems. But that does not mean that I do not think privacy is important. In fact there has been something nagging at the back of my brain for several years now:

We typically use technology to provide security, but we typically use policy to protect privacy.

That is deeply problematic. To see why, we should carefully consider privacy and security in a simple context.

Imagine that you have purchased a safe-deposit box at a bank. Safe-deposit boxes are a great example, because if the bank accidentally gives away the money from your account, it’s not really a big deal… after all, it’s a bank; if they lose your money, they probably have some lying around somewhere else. The “lost money” could be restored to you from bank profits. But if you have baby pictures, your medical records, your will, Document X or your family jewels (not a metaphor) in your box, the bank cannot replace them.

Now to “protect the contents” really subdivides into two issues, security and privacy.

Security is the degree of protection against danger, loss, and criminals… according to wikipedia.. at least for today 😉

In the bank context, “Security” means that no one can easily break into the bank vault and grab your “Document X” from your box and run off. Of course there are many movies about how bank security can be overrun, but, thankfully, it is not a typical event for a bank to have its safe-deposit boxes robbed. Banks use both technology (the vault, the alarm system, the video system) and policy (only the bank manager can open the vault, the vault can only be open during the day, etc.) to protect the security of the bank boxes. Note that security is a spectrum from more secure to less secure; there is no such thing as just “secure”. Security can almost always be improved, but eventually improving security begins to interfere with the usefulness of something. If the bank vault could only open once a year, it would be more secure, but not very useful.

In the world of health information, “Security” means that it is difficult for a hacker to break in and get access to someone’s private health record. That is an oversimplification, but a useful one for this discussion.

Privacy is something else. The source for all knowledge (at least until someone changes it) says: Privacy is the ability of an individual or group to seclude themselves or information about themselves and thereby reveal themselves selectively.

In the bank example, Privacy is all about who the bank lets into the deposit box. For instance, if they decide suddenly that all blood cousins will now get automatic access to each others safe-deposit boxes, that would be a violation of your privacy. If your cousin could get access to your Document X because the bank let him, then the problem is not that the vault walls are not thick enough or that the combination is not hard enough. The problem is that the bank changed the basic relationship with you in terms of privacy. This is the first of several principles: Security Technology does not necessarily help with privacy.

Banks do not typically change the deal like that; sadly, modern websites regularly do. Two recent examples are the launch of Google Buzz and the Facebook privacy problems, including the most recent expansion. The problem with modern information companies is that they usually take the corporate upper hand with their privacy policies. Sometimes this is really egregious, such as the original HealthVault privacy policy, which gave permission for Microsoft to host your health data in China. But almost always it includes the key phrase “We reserve the right to change this privacy policy at any time”. If we sum up the problems with using privacy policies to protect privacy, we find another principle: Privacy policies often provide little privacy as written, often give permission to change the policy in the future (which negates any notion of commitment), and even then policies are often ignored or misinterpreted. Privacy policies should not be the only thing protecting our privacy.

Perhaps you want your spouse to be able to get access to your box. Perhaps you even want your cousin to have access. But the idea that the bank can just “change the deal” from what you had explicitly allowed is pretty strange. Thankfully banks rarely do that. But you could use technology to ensure that your privacy was protected, even if the bank arbitrarily changed its policy.

If you wanted, you could keep a safe inside your safe-deposit box, and keep your Document X inside the safe. Then you could give your combination to your spouse, so she/he could also open the safe (as long as you had also told the bank to give your spouse access to the safe-deposit box). Even if the bank decided that your cousin should have access to your box too, it would not matter, since your cousin could not open your safe. (We will pretend, for the sake of this analogy, that explosives and other means of circumventing the security of the safe would not work and that the safe could not be removed from the safe-deposit box.) Our next principle: It is possible to use technology to help protect privacy.
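To make the “safe inside the safe-deposit box” idea concrete: the point is that only the people who hold the combination can open the inner safe, no matter what the box provider later decides. Here is a toy sketch in Python (deliberately simplified, stdlib-only, and emphatically NOT real cryptography; a real system would use a vetted encryption library):

```python
import hashlib
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream built from repeated SHA-256 hashing. Illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(document: bytes, combination: str):
    """Lock 'Document X' in the inner safe: only the combination opens it,
    regardless of who the bank (the storage provider) lets into the box."""
    nonce = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", combination.encode(), nonce, 100_000)
    stream = _keystream(key, nonce, len(document))
    cipher = bytes(a ^ b for a, b in zip(document, stream))
    return nonce, cipher

def unseal(nonce: bytes, cipher: bytes, combination: str) -> bytes:
    """Open the inner safe; the wrong combination yields garbage."""
    key = hashlib.pbkdf2_hmac("sha256", combination.encode(), nonce, 100_000)
    stream = _keystream(key, nonce, len(cipher))
    return bytes(a ^ b for a, b in zip(cipher, stream))
```

Give the combination to your spouse and she/he can `unseal`; the bank, your cousin, and the storage provider cannot, because they never had the key.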

Note that the safe also protects your Document X against access by bank employees. This is important because it does not matter what the privacy policy is if it is not enforced by the employees of an institution, or if the policy is arbitrarily changed in a way that you feel violates your privacy. It is also important to note that government employees could not open the safe either. Of course we all know that governments are not inclined to violate the privacy of their citizens, but if the government did get a look inside the safe-deposit box, all they would see would be the safe. Here we have another principle: Privacy technologies should prevent unwanted access from insiders and authority figures as well as from “bad guys”.

Which brings me to my point. We need to have more technologies available for protecting health information privacy. We have lots of technologies available for protecting security, but these do not protect privacy at all. These technologies, if they are going to work, need to give people the power to ensure that their health information is protected even from the people who provide a given technology service.

So far we have several principles:

  1. Security technology does not necessarily help with privacy.
  2. Privacy policies should not be the only thing protecting our privacy.
  3. It is possible to use technology to help protect privacy.
  4. Privacy technologies should prevent unwanted access from insiders and authority figures as well as from “bad” guys.

To these I would like to add some implied corollaries:

Encryption, by itself, is not a privacy technology. It is a security technology; it is only a privacy technology depending on who has the keys. This was the problem with the infamous Clipper chip. The first issue with information-system privacy is, and will always be, “who has the keys?”. So when a service says, in response to a privacy challenge, “oh don’t worry, it’s all encrypted”, that’s like saying: “You are afraid that I will fax Document X to your mother? Don’t worry, I keep Document X at home in my safe!” If you are afraid that I am going to fax Document X to someone, the fact that it is now in my safe should not make you feel more comfortable. I can still get into my safe and I can still fax Document X wherever I want. Document X being in a safe is only helpful if you trust everyone with the keys to the safe.

The other thing is that you simply cannot trust proprietary software to provide privacy. If you cannot read the sourcecode to see what the software does, then it does not matter what kinds of privacy features it advertises or even who has the keys. At any time the developers could change the code and make it violate your privacy. To further extend and abuse our example, it does not help you to have a safe inside the safe-deposit box if the bank can change the safe’s combination whenever they want, without you being the wiser. Only Open Source systems can be trusted to provide privacy features.  I do not argue that this is enough, but it is an important starting point.

I have been working on a relatively complicated system for achieving these goals. My initial application is blessedly simple, and so I have been able to avoid some of the issues that make these kinds of systems commonly intractable. My service is still in skunkworks and will continue to be very hush-hush until I make the beta launch. But I will be announcing my designs soon and I have already submitted those designs to some Internet Security professionals that I respect to make sure I am moving in a sane direction. This kind of thing is really technically difficult, so I am certainly not promising that I can deliver this kind of technology, but perhaps I can deliver something working that would give other people a place to start. As you might expect, the sourcecode for this project will eventually be made available under a FOSS license.

I do not want to get into the design details yet (so please don’t ask) or even talk about my new application (which is just entering closed beta) but I wanted to start talking about why these issues are important. Please feel free to comment on the features that privacy protecting technology should have…

-FT

Welcome NPR

Apparently, NPR mentioned my blog. They contacted me for a comment on Medsphere’s ROI calculator.

So if you are from NPR I thought I should actually give you the entire email I sent to Mr. Weaver:

My impression of Medsphere’s prices is that they are a “premium” VistA vendor. If you look you can find cheaper VistA support. However, there are only a handful of VistA support companies with real experience and all of them charge like Medsphere does. On the other hand, even “premium” support for VistA is cheaper than proprietary solutions by 30% to 80%. This is simple evidence of the economic pressures of Open Source. Medsphere can and does charge as much as it can. But it is competing against other groups using the same software that it developed, and against hospitals doing the work internally. There is objective value in going with a vendor who has succeeded with an Open Source project in the past. The balance between open competition (which lowers the price) and proven track record (which raises it) means that Medsphere’s price is objectively fair, and subject to market forces both before and after the installation of the software.

This is the real story about Medsphere and other Open Source Health IT vendors. It’s not that they are inexpensive, although they are much cheaper. It’s that their prices actually reflect the market’s response to their ongoing performance. That not only makes things cheaper now, but ensures long-term cost savings and improvements in performance.

Proprietary lock-in is a form of monopoly where the vendor has an economic incentive to provide poor support. Therefore the support services from proprietary vendors, at least, are not subject to market forces. That is why they are so poor, and why Senator Grassley seems to think they warrant investigation.

Medsphere’s Open Price-Stimulus Calculator

Medsphere marketing has been making some pretty savvy moves lately. First the bus, and now an open ROI calculator.

This is really cool. Medsphere is being transparent about its prices, and showing you exactly how much it is putting its neck on the line for its customers. This tool will let any Hospital IT person calculate just what they can expect from the stimulus package, and how much Medsphere will cost.

I do not know of anyone else, proprietary or otherwise, that has anything like this.

It would be neat to run through several hundred test cases to see where Medsphere’s “sweet spot” is… I am betting small-to-medium hospitals.

-FT

Announcing the Patient Participation Conference

With pleasure I announce the 2010 Patient Participation Conference. This is a project of my employer, Cautious Patient.

The subject of the conference is simple “How to be an e-patient or e-patient caregiver”. This should be broad enough subject matter to cover any current discussion in the e-patient community. The basic principles of the conference will be:

  • An unconference: content generated as much by attendees as conference organizers http://en.wikipedia.org/wiki/Unconference
  • Low cost: preferably under $200 for the average attendee; we want real patients and caregivers to be able to attend out-of-pocket. That means we also need to be low-glitz; you might get a t-shirt as a handout, but no iPod.
  • Scholarships for some anchor and keynote speakers: we would like to find sponsors for (and are willing to sponsor ourselves) attendees who would be important speakers and leaders in the e-patient community. We cannot afford to pay everyone, but we want to do what we can.
  • Everything video recorded, and published for free in near real time. This makes the conference for everyone, not just those who can afford to pay to attend.
  • Small, short and intimate.

While most of the conference will be unplanned and free-form, generated in real time by people attending the conference, I want to have a few pre-planned anchor talks and keynotes that will serve to ensure that attendees are guaranteed to get at least a few things that are useful to them. With that in mind, I would like to find basically two types of talks: talks from leaders in the community who are also strong speakers/teachers, or talks with amazing content delivered by people who are just OK speakers. I would rather hear a great story than a great speaker. Here are my personal biases as far as e-patient speaker selection goes:

  • The User over the CEO – I would rather hear about a user who used HealthVault to improve his health than the guy who runs HealthVault at Microsoft.
  • Tactics over Strategy – I would rather hear “How to be a great diabetes mom” than “how to improve diabetes compliance in the U.S.”
  • Technology, but in the back seat – I think “How to really use Google Health” and “How to use gmail to manage your health” would be equally relevant. I would -love- a talk entitled “My health notebook kicks your PHR’s a**”
  • Hard stuff over the easy stuff – losing weight and lowering cholesterol is a good goal for almost half the country, but it is not the same thing as living with Diabetes or a failed kidney, and that is not the same thing as cancer or anything with “terminal” in the name. This also applies to controversial issues.
  • Evidence over Anecdote, but both are best – I want to make sure “the science is on our side”. I think talks that emphasize how patients can embrace and understand research are critical. I think a “tale of two e-patients” is a great example of a talk that nails this issue.

What other “biases” should an e-patient conference have in its anchor speaker selection? What specific speakers would you like to hear? If I am going to invite the “CEO” what companies are really enabling the e-patient movement?

Please help me out by contacting me with the answers, but you can also just leave a comment!

The Super Silo

I wanted to republish a post that I made to emrupdate here.

This was in response to a person who wants to help a doctor with what amounts to a super-simple note-keeping system for her nursing home patients. I should note that I think the mashup of technologies that this person is suggesting is pretty clever. This is obviously a bright guy who is responding to a doctor’s sense of the record-keeping problem. This is the dangerous first step of healthcare informatics, where a geek feels like he understands the requirements but really has only scratched the surface. What feels like a good idea to this pair will create problems later that neither of them could effectively handle.

Here is the summary of the  original proposal:

Essentially I am using File Explorer (and optionally, Launchy) as the EMR system, and Notepad (with its .LOG function) as the note-taking application. Since she is the only physician in her practice and nobody else needs access to these files, I see no need to complicate things further. This system should also be extremely small in size, and trivial to back-up.

Is this sufficient, and is it HIPAA-compliant?

My response (which references more of the message)

> Is this sufficient?

No. For the love of all that is good. No.

What you are talking about is a system that is designed to make -your- doctor more effective, at the express cost of the ability to have information sent to or from other healthcare providers. The notion that this is acceptable is tragic. Your doctor is simply considering rendering her notes into a format that would have maximal use for herself, at minimal cost, both in dollars and effort. Because she has so few patients, there will never be enough financial incentive to justify porting this data into a form that can be merged with other patient data. In short, you are considering creating a kind of super-silo. What happens to this data when your doctor retires? What happens to this data if you are not around to support the solution? Neither of you has thought this through, or even taken the time to consider that you do not even have the needed tools to properly think it through. I would encourage you to play a long game of “what-if” sprinkled with some “what-happens-when?” so that you can fully appreciate that you have not thought this through.

In twenty years people will talk about projects like this in the same terms as people do now about doctors participating in blood-letting and refusing to wash their hands because they were “gentlemen”. Your doctor is considering excusing herself from the responsibility to participate in Science and you are enabling her. The informatics community does not have that much figured out (sadly) but we know enough to say that this plan is a bad one.

Do not feel too bad. Yours is just an especially bad version of the bad decision that most doctors are making to use any proprietary EHR system. The problems that you will face will happen to most doctors in America, just a little later.

Please reconsider going down this path. I would recommend ClearHealth or OpenMRS as other simple and cheap EHR systems that you can use in your environment. But really you should look into VistA, either Astronaut or OpenVistA (which both have good licenses and OK installers), because the VA runs many nursing homes and VistA has a reputation of handling that use-case well.

Open Source at HIMSS 2010

Hey,
I wanted to create a post for those interested in Open Source at HIMSS. I am out of the country (Finland is so much warmer today at 0 degrees Celsius), so I cannot make it.
So far I know that:

Alesha Adamson (MOSS) and Skip McGaughey (OHT) are speaking at the DSS educational session, and

Brian Behlendorf is speaking about CONNECT.

Brian wrote to also remind me…

Tell folks to come to the “interoperability showcase” in exhibit hall C, in particular to the NHIN and Connect area, where we are presenting on Connect with 60 other partners (most non-Fed) who have piloted or set up exchanges with NHIN standards, most of them with Connect, many of them by integrating with other Open Source med software.

CDC has a talk on the Biosense project.

I wish I could see those talks.

You should stop by booth 233 at the Interoperability Showcase and see MOSS demonstrate OpenPIXPDQ, OpenXDS and OpenATNA. MOSS also has a regular vendor booth numbered 7470.  DSS is at booth 2521. PatientOS is at booth 4124 with orange shirts and lunch bags…

Medsphere and ClearHealth abstain this year (I think). I know the Mirth guys are around too.. If you are there and want to get ahold of them, send me a mail and I will do my best to get an introduction…

There was a code drop for a population health tool.

If you are at HIMSS 2010 and care about Open Source, let me know… I would be happy to add you to this post..

-FT

How far we have not come

I met Dr. Koppel at the last Indivo X meeting at Harvard. After hearing about his work, I realized… this man must be one of the saddest people in the industry; it is basically his job to study the interface between people and EHR systems. He is the guy who documents just exactly how EHR systems fail. e-patient Dave is blogging about a talk that Dr. Koppel gave.

EHR software famously under-performs.

As Dr. Koppel points out, from the perspective of the clinicians, the designs of the EHR systems are pretty bone-headed. The standard answer from vendors is “we cannot fix that” until they have a financial incentive (like a lost sale or contract) to get something done.

For instance, Dr. Koppel points out that there is a problem with the sorting of the drug-dosage drop-down in an EHR. For fun, I can show you how it would actually look:

Hard to imagine why the options would be in that order, why doesn’t it look like this?

The reason it looks like this is that the software is sorting alphabetically by written name… you can see it more easily if you write it like this:

But why is this happening at all? Why wouldn’t the software automatically do the right thing? I could imagine several possibilities. Perhaps the dosages are stored as strings and then converted to numerals for display. Perhaps this decision was made because there is a mix of numerical and text data in the doses field of the underlying database; something like “seven milligrams” and then “seven milligrams time release” or some such. But I do not know. More importantly, Dr. Koppel likely does not know, and the clinicians who have to carefully choose a dose using choosers like this every day… do not know why the system is designed like this. I could be wrong, Dr. Koppel could know… but if he does, it is because he has been told by someone who can read the sourcecode.
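The lexicographic-versus-numeric sort bug described above is easy to reproduce. A quick sketch (the dose strings are my own made-up examples, not Dr. Koppel’s screenshot):

```python
doses = ["10 mg", "100 mg", "20 mg", "5 mg"]

# What the EHR drop-down appears to do: sort the dose *strings* alphabetically.
alphabetical = sorted(doses)
print(alphabetical)  # ['10 mg', '100 mg', '20 mg', '5 mg']

# What the clinician expects: sort by the *numeric* value of the dose.
numeric = sorted(doses, key=lambda d: float(d.split()[0]))
print(numeric)       # ['5 mg', '10 mg', '20 mg', '100 mg']
```

The fix is one `key=` argument, but only someone who can read the sourcecode can tell which of these two lines the vendor actually wrote.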

I think we should pause at the irony here. If we could name the chief endeavor that modern medicine is undertaking one might say “crack the DNA code”. Our cells are “programmed” with a code that until recently we could not read and that we still cannot comprehend. We are seeing “through the glass darkly” into our own cells for a thousand natural reasons.

However, we tolerate a situation where clinicians see “through the glass darkly” into their own health software. Why? Because they have decided to use proprietary software. Why do they make that choice? It seems so illogical that it makes me dizzy. I wanted to respond specifically to some of the things that were said in the presentation:

Dr. Koppel: Customization is a sales gimmick and not meaningful.

The only way to make customization meaningful is to have full source code access with the right to modify the code running in the hospital.

Problems could be fixed by smart 14 year old.

If and only if they have access to the sourcecode. The insight here is that this is a question of “access” and not “complexity”.

Let me take a close look at how open source licensing impacts each of these.

  • Open error reporting… and dissemination – The license gives you permission to publish bug fixes, and by implication the bugs that the fixes… fix.
  • Rapid repairs e-hazard tracking – Same story.
  • Meaningful “meaningful use” standards – Meeting standards of any kind can only happen when you try, fail and recode.
  • Meaningful evaluation – can only happen when you try different versions of the sourcecode, and perform studies on which version works best.
  • Focus on clinical needs 1st and back office 2nd – Ha! Cannot say that the software license will change your basic motivation.
  • Interoperability for a clinical setting – Interoperability can only be achieved by coding against another codebase; this is the point of the Laika project.
  • Certification as more than a sales strategy and sinecure – Certification is a poor workaround to not getting at the sourcecode. If you do not have access, certification is making promises it cannot really verify.
The simple reality is that the funding from ARRA will go towards installing software that will stagnate and rot in the hospitals and clinics across America precisely because clinicians do not understand the implications of software licensing. Dr. Koppel focuses on “the software contract”, which is mostly an irrelevant afterthought. Unless the software license allows you to fire the software vendor and get one that will reorder your lists correctly, the contents of the software contract are irrelevant. The right to fire in an Open Source software license has teeth. It changes the power dynamic in ways that the contract cannot.

My grandfather once told me never to play another man’s game. “If his game is pool, play him in chess. If his game is chess… play him in pool.” Clinicians are losing the software game again and again; they need to stop playing the game that has been set up by the proprietary software vendors.

-FT