Lack of Transparency in Houston, TX

I am quite happy to say that your insane business practices will soon be coming to an end.

Do you honestly think that any other business could get away with not providing up-front pricing? You actually expect me to visit in person to determine whether you have a fair price for services? Try calling Walmart or IHOP or any other business that sells standardized products, and you will find that they will be happy to publish those prices. Most of those businesses publish their prices on the Internet.

Prepare to be blogged about as a fine example of what cannot continue.
Do let me know if you decide to change your policy so that I can
update my blog post.


On Mon, Nov 2, 2009 at 3:31 PM, Midtown Dentistry
<> wrote:
> Good afternoon Mr. Trotter,  Thank you for contacting Midtown Dentistry.  We
> don’t give quotes over the phone or emails.  I will be happy to give you an
> appointment for and exam and consultation.  At this appointment you will
> have a full exam, consultation & x-rays.  Dr. Penchas will go over all of
> your needs and you will be given a treatment plan will all costs involved.
> Please call me at the phone number below so that I may assist you with an
> appointment.  Thank you again,
> Glenda Cornell
> Midtown Dentistry
> 315 Westheimer
> Houston, Texas 77006
> censored
> -----Original Message-----
> From: Fred Trotter [mailto:censored]

> Sent: Monday, November 02, 2009 11:13 AM
> To: censored
> Subject: Contact us message
> Contact Us Message
> Name : Fred Trotter
> Email : censored
> Phone :  censored
> Message:
> Hi, I need to have a extensive cleaning done (according to my previous
> dentist) and I would prefer to have it done under general anesthesia. I will
> be paying cash, so I would like to know the cost of an “extensive” or “deep”
> cleaning under general anesthesia factoring in any cash discounts you may
> offer
> Regards,
> -FT

Who owns the data

Who owns the health information?

  • the patient to whom it refers?
  • the health provider who created it?
  • the IT specialist who has the greatest control over it?
  • the researcher who aggregates it?
  • the health 2.0 company that harvested it?

The notion of ownership is inadequate for health information. No one has an absolute right to destroy health information. But we all understand what it means to own an automobile: you can drive the car you own into a tree or into the ocean if you want to. No one has the right to do things like that to a “master copy” of health information.

All of the groups above have a complex series of rights and responsibilities relating to health information that should never be trivialized into ownership.

But asking the question at all is a hash argument.

What is a hash argument?

Come to think of it, there’s a certain class of rhetoric I’m going to call the “one way hash” argument. Most modern cryptographic systems in wide use are based on a certain mathematical asymmetry: You can multiply a couple of large prime numbers much (much, much, much, much) more quickly than you can factor the product back into primes. A one-way hash is a kind of “fingerprint” for messages based on the same mathematical idea: It’s really easy to run the algorithm in one direction, but much harder and more time consuming to undo.  Certain bad arguments work the same way—skim online debates between biologists and earnest ID (Intelligent Design) aficionados armed with talking points if you want a few examples: The talking point on one side is just complex enough that it’s both intelligible—even somewhat intuitive—to the layman and sounds as though it might qualify as some kind of insight… The rebuttal, by contrast, may require explaining a whole series of preliminary concepts before it’s really possible to explain why the talking point is wrong.
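The asymmetry the analogy leans on is easy to demonstrate. Here is a minimal Python sketch (the function names are mine; only the standard library's hashlib is real): computing a fingerprint is one fast call, while "undoing" it can only be done by guessing inputs and re-running the forward direction, which is hopeless for any realistic input space.

```python
import hashlib

def fingerprint(message):
    """Forward direction: one fast call to a one-way hash."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

def invert_by_guessing(digest, candidates):
    """Reverse direction: there is no shortcut, so all we can do is
    guess inputs and re-run the forward function on each one."""
    for guess in candidates:
        if fingerprint(guess) == digest:
            return guess
    return None

d = fingerprint("hello")   # cheap: microseconds
print(d)
# Recovering "hello" from d requires trying candidates one by one:
print(invert_by_guessing(d, ["hi", "hey", "hello"]))
```

The rhetorical point maps directly: stating the talking point is the cheap forward direction, while the rebuttal is the brute-force search.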

At some point I will modify this article to actually do the rebuttal. For now it is enough to say that even asking the question "who owns the data?" is creating a hash argument. The question presumes that the notion of ownership is valid and jettisons those foolish enough to try to answer it into needless circular debate. Once you mistakenly assume that the question is answerable, you cannot help but back an unintelligible position.

People asking this question at conferences is a pet peeve of mine.

(update 2012: fleshing out this post, for reposting to radar)

So the reason that "ownership" does not apply well to health data is that "ownership" means a little too much to apply well for anyone. Here is a quick breakdown of what is possible depending on a given role.

Sourcing provider

  • Delete their copy of the data: No. HIPAA mandates that the provider who creates HIPAA-covered data must ensure that a copy of the record is available. Mere deletion is not a privilege that a provider has with their copies of patient records.
  • Arbitrarily (without logs) edit their copy of the data: No. While the provider can change the contents of the EHR, they are not allowed to change the contents without a log of those changes being maintained. Many EHRs contain the concept of "signing" EHR data, which translates to "the patient data has entered the state where it can no longer be changed without logging."
  • Correct the provider's copy of the data: Yes. The provider can correct their copy of the EHR data, provided they maintain a copy of the incorrect version of the data.
  • Append to the provider's copy of the data: Yes. The provider can merely add to data, without changing the "correctness" of previous instances of the data.
  • Acquire copies of HIPAA-covered data: Sometimes. Depending on the provider's ongoing "treatment" status with the patient, they typically have the right to acquire copies of treatment data from other treating providers. If they are "fired" they can lose this right.

Patient

  • Delete their copy of the data: Yes, patients can delete their own copies of their patient records, but requests to providers that their charts be deleted will be denied.
  • Arbitrarily (without logs) edit their copy of the data: No. Patients cannot change the "canonical" version of a patient record.
  • Correct the provider's copy of the data: No. While a patient has the right to comment on and amend the file, they can merely suggest that the "canonical" version of the patient record be updated.
  • Append to the provider's copy of the data: Yes. The patient has the right to append to EHR records under HIPAA. HIPAA does not require that this amendment impact the "canonical" version of the patient record, but these additions must be present somewhere, and there is likely to be substantial civil liability for providers who fail to act in a clinically responsible manner on the amended data. The relationship between "patient amendments" and the "canonical version" is a complex procedural and technical issue that will see lots of attention in the years to come.
  • Acquire copies of HIPAA-covered data: Usually. A patient typically has the right to access the contents of an EHR system, assuming they pay a copying cost. EHRs frequently make this copying cost unreasonable, and the results are so dense that they are not useful. There are also exceptions to this "right to read," which include psychiatric notes and legal investigations.

True copyright ownership (i.e. the relationship you have with a paper you have written or a photo you have taken)

  • Delete your copy: Yes. You can destroy things you own.
  • Arbitrarily (without logs) edit your copy: Yes. You can change things you own without recording what changes you made.
  • Correct another person's copy: No. If you hold copyright to material and someone has purchased a copy of that material, you cannot make them change it, even if you make "corrections." Sometimes people use licensing rather than mere "copy sales" to enforce this right (i.e. Microsoft might have the right to change your copy of Windows, etc.).
  • Append to another person's copy: No. Again, you have no right to change another person's copy of something you own the copyright to. Again, some people use licensing as a means to gain this power rather than just "sale of a copy."
  • Acquire copies: No. You do not have an automatic right to copies of other people's copyrighted works, even if they depict you somehow (this is why your family photographer can gouge you on reprints).

Ergo: neither a patient nor a doctor has an "ownership" relationship with patient data. So asking "who owns the data?" is a meaningless, time-wasting, and shallow conceptualization of the issue at hand.

The real issue is: "What rights do patients have regarding healthcare data that refers to them?" This is a deep question, because patient rights to data vary depending on how the data was acquired. For instance, a PHR record is primarily governed by the EULA between you and the PHR provider (which can give you wildly varying rights), while rights to your doctor's EHR data are dictated by both HIPAA and Meaningful Use standards.

Usually, what people really mean when they say "the patient owns the data" is "the patient's needs and desires regarding data should be respected." That is a great instinct, but unless we are going to talk about very specific privileges enabled by regulation or law, it really means "whatever the provider holding the data thinks it means."

For instance, while current Meaningful Use does require providers to give patients digital access to summary documents, there is no requirement for "complete" and "instant" access. And while HIPAA mandates "complete" access, printing out previously digitized patient data renders it essentially useless. The devil is in the details here, and when people start going on about "the patient owning the data," what they are really doing is encouraging a mental shortcut that cannot readily be undone.


The Health Internet

For whatever reason, people still do not get the basics of the Health Internet. Part of the problem is that, until recently, the project was marketed as the National Health Information Network, or NHIN. The Feds recently decided to start calling the project the Health Internet, because that gives a much better idea of what they are trying to achieve.

Please do not be the guy/gal who writes in my comments "but the Internet is not secure; that means my privacy will be violated." That is pure FUD and is not how the Health Internet will work. It is a relatively simple process to make the Health Internet into a zone that is more secure and private than the current health information infrastructure. Notice that I did not say "secure"; I said "more secure." Your bank is not "secure," your doctor's paper records are not "secure," the CIA is not "secure." As an adjective, "secure" is like the human attribute of "tall." I am typically considered a tall person, but in college I was a student athletic trainer for my school's basketball team. In that crowd, I was short. While there is one and only one person who can be considered universally "tall," it is well understood that this is a relative term. Similarly, the Health Internet is relatively more secure than current systems. I personally am far more comfortable having my private data in the Health Internet than I am with having my paper records locked in my doctor's office. You should be too.

So you should not be worrying about security or privacy in the Health Internet… Really… It is as close to a solved problem as it gets. There are obviously always ways to make things more secure… but taller is not always better.

So what does the Health Internet buy you as an individual living in the US? To put it simply, you and your doctors should eventually be able to get to all of your health information as easily as you now get access to your financial information. It's a big promise, but the design of the Health Internet should eventually make that kind of convenience and access a reality.

Given that, it becomes obvious why rebranding to the Health Internet is a good idea, for several basic reasons:

  • the original Internet started life as a government network (ARPANET)… And that has turned out pretty well.
  • the reason that the original Internet was such a hit was that people built neat stuff on top of it. Similarly, the Feds are hoping that people will use the Health Internet as the platform for further innovation.

So the Health Internet is a good thing and everyone should embrace it.

So how do you jump start a Health Internet? You do it by providing Open Source Software that enables people to participate in the new network.

Most people do not really understand the relationship between Open Source networking projects and the success of the original Internet. Here is how this breaks down:

Most of the Internet servers that provide X do it using Open Source project Y. With that as a template, look at the following chart:

Web server: Apache

Of course, you -can- use proprietary software for these components, but the Internet as we know it would not exist without these very low-cost tools that provide a substantially large portion of our Internet infrastructure. So what's the plan for the Health Internet? Simple.

Health Internet: CONNECT

The CONNECT project is an Open Source project that -will- run the core Health Internet. The core will connect major government health data sources, including the VA and the DoD, into the initial Health Internet core. Most importantly, the CONNECT software is available for local exchanges to connect into the core Health Internet.

Overall, the strategy of creating an Open Source project that can be used fractally to create other, connected networks is a proven one. It's a smart move, and it is going to change Health Informatics in a fashion very similar to the way the Internet has already changed computing generally.

Open Source Health Software Conference

So I have two small news items.

First, I am renaming the yearly Houston Open Source conference from fosshealth to OSHealthCon, which just stands for Open Source Health Software Conference. Why the name change? Well, it comes from my need to distance myself from the term "free". I know what "free" means when you are talking about software, but again and again the term is abused by people with a proprietary agenda.

People would talk about the differences between "free software" vs. "commercial software," implicitly insulting any professional who wants to use freedom-respecting licenses. So I am throwing in the towel. I am not going to fight this battle any more. At some point, I have to decide if I am going to advocate for freedom, or for one particular way of talking about freedom.

The other important news item is that I have started posting the 09 Videos up to

This is our first stab at videoing our own conference, and the results are just as amateurish as you might expect. Still, if you can tolerate the sound, there is a tremendous amount of insight available there.

I will be posting new videos there as I sort out how to make transcoding work on GNU/Linux.


Enabling open core

What license should you consider for your new Health IT platform? As you consider that, you should think carefully about your user audience. You want people in the Open Source community to develop against your code. You want people to add value to your core. To achieve this, you have to recognize that our community does not share universal motivations. The most important detail that you need to understand about our community is the ways in which we relate to proprietary software.

There are two general ways of thinking about how to relate to proprietary software within the FOSS movement.

There are those who believe that the most important potential feature of software is the ability to change and share it without restriction: this is software freedom.

Others in the FOSS community feel that the important issue is that we have a good method for collaboratively developing good software, and that if people want to make money selling software that restricts freedom (the definition of proprietary software), that's fine.

I am solidly in the first camp. However, for the purposes of this article I will treat them as equally valid perspectives. This respect for an opposing opinion is crucial for the FOSS community because we want to be able to develop software together!

People in the first group we might call freedom sticklers, and those in the second group we will call pragmatic openers.

Before we move on we should discuss the basics of licensing. I have written on licensing before, but you will find my freedom stickler bias in those writings. I will try to avoid that here.

The most important thing to understand about licensing (for this discussion) is the perspective of the person who accepts a license with the intention of redistributing the source code with other software.

Imagine that Ozzie the Originator has some valuable software called coreware. He decides to release the code as open source! He must consider several perspectives as he chooses a license.

Freedom-loving Fredi 😉 wants to ensure that, whenever possible, software that he writes will not be used to allow someone to control another person. Fredi appreciates the value of coreware and writes a module for it called Fredi's freely scanning module.

However, Proprietary Pat also has a scanning application that has far more functionality than Fredi's module. She likes the idea of open source but, for whatever reason, is not in a position to release her own software under a FOSS license. It is important to note that if Pat did not have a functionally better scanning module than Fredi, there would be no reason for Ozzie to consider her interests. Ozzie knows that when an open option is available, functional, and stable, end users will always prefer it. This can be called the Open Source Sets the Floor effect.

Pat has software patents and proprietary software that she feels must be protected from the full GPL (a license popular with Fredi and his ilk). Certain provisions of the GPL can have the effect of devaluing software patents, or at least that is how patent owners often feel about it.

Then there is Indifferent Ingrid, who writes a printing application. She has no particular position on proprietary vs. FOSS. She just wants her printing software to be as useful to as many people as possible.

Ingrid, Fredi, and Pat would all be willing to help Ozzie improve coreware, assuming they are happy with the license. Ozzie knows that if everyone is not happy, someone will start a competing project with a license more to their liking. That would dilute the talent pool available to work on coreware!

Ozzie the Originator is in a bind. He knows that he can choose a proprietary-friendly license like the Mozilla Public License or the Eclipse Public License, which will make Pat happy. But Fredi will never agree to a license that is incompatible with the licenses that ensure he can keep his own software freedom-respecting. For people like Fredi there is no substitute for the two very popular keep-it-free licenses: the GPLv3 and the AGPL. The Free Software Foundation keeps a list of licenses that are and are not compatible with the GPL.

What is Ozzie to do? How does he keep both Fredi and Pat happy? The first place to look is the LGPL, which stands for the Lesser General Public License. The important thing this license does is let both Pat and Fredi use coreware as the basis for coreware + someothermodules under their preferred licenses. You can think of coreware + somemodules as a "rollup."

From a licensing perspective, some open source rollups are loosely coupled (like GNU/Linux distros) while other rollups are more tightly coupled (like the Linux kernel itself). Tightly coupled rollups must have identical or fully compatible licenses. Most thinking says that if one software package locally calls the functions exposed in another software package, then they are tightly coupled. (Any VA VistA -server- rollup is likely to be considered a tightly coupled rollup, while the relationship between VistA clients and VistA servers would probably be considered loosely coupled.) It should be noted that these ideas generally flow from a consensus understanding among Open Source community lawyers of the copyright rules of derivative works; not all of them look at it this way.

Ingrid can release her printing component under the LGPL too, essentially adding it to the core… Both Pat and Fredi will then benefit from Ingrid's code. Of course, end users will have to choose between Pat's code and Fredi's code, because their chosen licenses are incompatible. Each of them is creating a new rollup of coreware with a different family of licenses. While coreware can be included in each rollup, the two rollups are license-incompatible.

Both Fredi and Pat can collaborate on coreware with an LGPL codebase, because they know that in the end the license of their own module will determine how the LGPL acts for their users. For Fredi's users the LGPL upgrades to the GPL and the AGPL, but for Pat, the LGPL does not interfere with her proprietary license.

Everyone is happy (or close to it).

Is the LGPL the only license that is intended to work in this way? No, but it is the license that was specifically designed to solve this problem. Another license that attempts to be compatible with GPL/AGPL projects is the recent iteration of the Apache license. Apache is generally considered more proprietary-friendly than the LGPL. If Ozzie uses the Apache license, Proprietary Pat can make changes to the internals of coreware that she does not need to redistribute. Both Apache and the LGPL give her the right to "hoard" or "protect" (depending on your perspective on the matter 😉) her module. But Apache also allows her to hoard/protect her changes to coreware itself.

The reality of licensing is that at least two parties must be satisfied with the license: the end user and the most significant contributor. The GPLv2 made Torvalds happy, and his end users tolerate it. Everyone else in the Linux universe tolerates the GPL for Linux because of the value of Torvalds' original contribution and the contributions he was able to amass around it. Together these are too valuable to try to replicate. Companies that hate the GPL and everything it stands for, like Microsoft, contribute GPL code to the Linux kernel because Linux is too important for them to ignore. (P.S. If you hear someone talking about these issues in terms of viral or non-viral, you can bet that freedom is not a priority for them.)

For VA VistA we have a conundrum: the originator of the code, the US government, has left the code basically licenseless. I believe this means that the choice of preferred license should be up to the most substantial third-party developers. I believe that the most substantial way to make VistA better is to make contributions that make further development easier. MUMPS is a great language, but it makes VA VistA inaccessible to most programmers. Given that, I believe the most significant third-party contributions to VA VistA are (in no particular order):

  • Medsphere’s OVID – because it lets you code for VistA in Java. (AGPLv3)
  • EWD from M/Gateway – because if you already code in MUMPS you should still be able to write web interfaces. (AGPLv3)
  • Astronaut VistA – because you want to be able to install… With all of the above development environments, in seconds…. Not months… (AGPLv3)
  • TMG-CPRS – because adding patients and correcting demographics should be easy. (GPL v2 or later as per the core WorldVistA EHR license)
  • OpenVistA CIS – because we want to be able to run VistA without Windows. (AGPLv3)
  • Timson's Fileman – VistA Fileman is an important core VistA component that has had many improvements since George Timson left the VA. (LGPL)

-all- of these applications do not just make VistA better; they are Platform Improvements. These improvements are designed to spur new innovation by making hard things easy or previously impossible things tractable.

-all- of these innovations (as far as I can tell) are available under either the GPL or AGPL.

I hope that it is now obvious why most of the VistA community believes that if there is to be collaboration between the Fredis and Pats of the VistA community, it must be around an LGPL VistA core.

Soon DSS will be releasing a version of vxVistA under the Eclipse Public License. That license is not compatible with the GPL. If vxVistA is released under the EPL none of the above platform improvements would be available to vxVistA. However all of them are available to users of OpenVistA, WorldVistA and Astronaut VistA, all of which use GPL variants.

I have lauded the release of vxVistA, but I fear that as a FOSS project it will be stillborn because of the EPL. Users will be forced to choose between vxVistA, with its considerable menu of proprietary partners whose patent and proprietary interests are satisfied by the EPL, and the projects where VA VistA is being improved -as a platform-.

If we were talking about one or two minor improvements that might be available under the GPL variants, then I would not take this position. But practically, the most important member of any open core community is not Fredi or Pat but Indifferent Ingrid. Ingrid wants to work with the best platform and contributes in a way that makes the platform itself better. Whoever wins the attention of Ingrid wins.

These lessons are applied in the specific context of VistA, but I hope it is clear that these issues generalize to any Health Information Technology (HIT) platform.

(Update 10-13-09 Medsphere has released its server project under the LGPL)

(Update 10-16-09 Ben from Medsphere has responded to my post)

(Update 10-18-09 Thanks to Theodore Ruegsegger, who pointed out several serious errors… fixed)


The wrong conversation, missing CONNECT

Today I heard a session at the National IT Forum at Harvard entitled "Business-Government Interactions to Support a Platform".

I felt like I was Alice in Wonderland. Behind me sat two of the top leaders of the Open Source CONNECT project, which is, frankly, the single largest contribution to Health IT interoperability to come from the Federal Government… ever. Even now, that project will ensure that there will be a National Health Information Network, with local exchanges that will allow the transfer of health information about individuals from coast to coast. Or at least this is so likely to happen that other outcomes are too random to plan for in any case. Yet the CONNECT project was hardly mentioned during the session about "What we want from the Government."

The session waxed long on what to expect from the Government: what it should do and should not do. Lots of talk about laws and rules and Google. How should we do health information exchange? Some of it was pretty interesting, but basically it was the wrong conversation.

The right conversation starts with this: we can assume that CONNECT -will- unify health information transfer in the US. It will serve as the basis for the core NHIN, and regional networks will have the option of implementing it. That means that CONNECT sets the bar for health exchange. Software must be as good as CONNECT to be considered for a local Health Information Exchange; otherwise, why not use CONNECT?

So -given- that the US government will (sooner or later) solve the problem of health information exchange using CONNECT, the question is how we as platform developers will -leverage- CONNECT to make new and improved patient- and clinician-facing tools.

While the first talk was better, and the contacts I have already made here are invaluable, so far there is too much fluff and not enough of the dirty details required to make a platform. I really wish Ben Adida could have made it, because as it stands I feel somewhat ungrounded. The conversation should really have been "what does CONNECT mean for us?" but instead it was just circular nonsense. After almost everyone finishes talking, I really want to ask, "so… you will therefore code what… exactly?"

For this post I want to make it clear: CONNECT is not perfect; it has warts both as a codebase and as a project. But the project is rapidly fixing itself, and it will change everything. This seems so obvious to me… and yet apparently not everyone gets it.


Away from iphone and towards a better platform analogy

As many of you know, the CHIP/Indivo/Harvard guys (whom I guess I should call the ITdotHealth guys) wrote an article in the NEJM saying that we needed something like the iPhone app store in Healthcare IT.

I wrote a rebuttal saying that, among other platforms, the Google Android platform was a better fit. Frankly, I thought that would be the end of it. Most of the time I write a blog post, I get some hits, and maybe a comment if I am lucky. But mentioning the iphone is great for getting attention. Apparently, just saying the word iphone brought the readers out of the woodwork. iphone iphone iphone <- (just to be sure…).

Beyond getting some good comments, I have just realized that Ben Adida (check out my blogroll) wrote a Knol that touched on my criticisms and argues convincingly that there needs to be some balance between openness and safety.

Though it is clear that Apple’s regulation of the iPhone apps market has gone far beyond malware prevention, the goal of malware prevention is certainly reasonable.

I think he is right on, and I look forward to talking about it with him in person tomorrow. I think now, the night before the conference, might be a good time to drop my thoughts about what platform analogy would really be the best to reference as we move forward. I also take a moment towards the end of the post to concede some of the things that Apple really got right, since I do try to be fair.

If I had to pick one thing that best embodies the 10 principles that are being targeted here, I would pick yum. Yum is the update manager for Red Hat based operating systems. Here’s why:

  1. Like the iphone app store, it is "substitutable" (the first of the ten points). You can download like 10 different web browsers on the current Fedora.
  2. It built its own protocol: RPM was a lower-level standard, and yum was born as a meta-tool on top of that standard.
  3. Yum allows for multiple platforms. It forms the basis for software packaging for just about every Red Hat/Fedora-based operating system, of which there are several.
  4. The API for yum is open, which is what lets things like yumex happen.
  5. The programs installed by yum never have direct control over yum (unless that is the point of the program, and that is what the user wants to do).
  6. Application install is as pointy-clicky and as user friendly as it gets BUT you do not lose the power of command line script-ability. Talk about walking the fine line!!
  7. Separation between the copyright/patent/trademark of applications and the platform is totally there! You can point your yum to a proprietary repository, for instance to download Adobe flash… no problem.
  8. Unfortunately it does not make any sense to say that you can remove everything from yum and still have a platform. So I guess it strikes out on that one. Of course, I am not sure why the platform itself should -not- be considered a package on the platform… I'll have to ask about that tomorrow…
  9. Yum is really really efficient. You can update applications very quickly, and you can even install a special yum module that will find the fastest download servers, ensuring the best experience for downloads.
  10. The certification is as minimal as can be. Packages -can- be (but are not required to be) signed by the people who set up a repository, and you simply do or do not trust that signature.
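Points 7 and 10 above are visible right in yum's configuration. A third-party repository, proprietary or otherwise, is just a small text file dropped into /etc/yum.repos.d/. The repository name and URLs below are hypothetical, invented for illustration, but the fields are the standard ones:

```ini
# /etc/yum.repos.d/example-thirdparty.repo  (hypothetical third-party repository)
[example-thirdparty]
name=Example Third-Party Packages
baseurl=http://repo.example.com/fedora/$releasever/$basearch/
enabled=1
# gpgcheck=1 means packages must carry a signature from a key the user has
# explicitly imported: trust is the user's decision, not the platform's.
gpgcheck=1
gpgkey=http://repo.example.com/RPM-GPG-KEY-example
```

Once that file exists, packages from the new repository show up alongside the default ones, which is the "substitutability" point in practice.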

Someone will point out someday in the comments that apt-get is just as good and does all the same things. To that future commenter: I fully admit that you are 100% correct. I am a long-time Red Hat guy and I am letting my colors show; for the record, I am trying Ubuntu on my desktop for now…

Now let me point out a couple of cool things about yum that are not on the “big ten” but that I think are worth emulating:

  1. Yum is actually an upgrade to a previous platform, Yup. Yup was good, but users forked it and made it much better… then the original yup developers adopted yum. That’s the virtuous cycle of Open Source in action if I have ever seen it.
  2. Yum handles “trust” in the system by getting out of the way. A “default” repository is trusted to get the system off the ground. But you can “trust” other repositories to get upgraded versions of the software you are currently using, to get substitutions for the programs you are currently using, or to get new software that is found nowhere else. It automatically finds the balance between openness and security. Users make the decision about how to trust, and the system does not auto-branch beyond those decisions.
  3. Although yum violates principle 8,  you get the benefits of being able to use the platform to upgrade the platform. You can upgrade a late-generation yum operating system while it is running.
  4. The yum platform was central to a larger community effort. Remember when Red Hat stopped doing Red Hat Linux, instead creating the Fedora project and RHEL? Fedora existed before that, as a high-quality repository of Red Hat packages! yum was an important new feature of Fedora Core 1. The yum platform helped move the whole community forward.

So I think the yum project, and the way that Red Hat made it into a software distribution network, is a pretty good model to follow.

Even I, however, get why the original authors chose to use the iPhone as an analogy. While I do not claim these points are original, I want to point out some things that Apple did right that other systems have failed at.

  1. Apple enforced simplicity. They refused to allow programs to run in the background. They refused to allow many other things that a developer for Windows CE might have expected. They made the core interface as simple as possible. They even excluded cut and paste initially to make the system simpler. Apple put these constraints in place because by making the applications simpler, they made the user experience vastly more intuitive. I have used countless “modular” or “substitutable” platforms that miss this. It is the platform's responsibility to protect the overall user experience, -not- the application developers'. That means knowing when to say no. Ignore this one at your peril.
  2. Apple built a meritocracy at the level of the end user. When you see an application on the iPhone that has been used by 5000 users, and they have all rated it 5 stars, you can be pretty sure it is good. That rating stands front and center in the platform, and more importantly, the platform itself constantly promotes and rewards its star performers. On other modular systems, I usually spend a lot of time trying to sort out which modules are reliable. The Firefox module system has also done a good job of this.
  3. Despite its habit of blessing particular development groups with special privileges, Apple also made it easy for the individual developer to become a super star on the platform. It did that by giving people pretty substantial development tools and a robust development environment.  If you want to get rock star developers you have to give them their version of the red carpet. That means awesome documentation, video tutorials and lots and lots of working examples.

I figured I would jot down these thoughts before the conference, so that I can have the most fun while there. Apparently, some of these people are actually reading this… so it's a very efficient way of making points, as opposed to taking the whole conference to dinner with a Fred-monologue.


Surescripts agrees to modify NDA to be compatible with Open Source licenses

As many of you know, I am often asked to represent the FOSS Health IT community in negotiations with various organizations. My first opportunity to do this was with CCHIT, and that negotiation has turned out pretty well. Then I represented the FOSS community at the NCVHS hearings on Meaningful Use.

Most recently, I have had requests from the community regarding Surescripts (who appropriately use a .net domain name… because they are a network!!).

For those that do not know, Surescripts (after the merger with RxHub) is essentially the only way to communicate electronic prescription messages in the United States. However, many in the FOSS community felt that the Surescripts Non-Disclosure Agreement prevented FOSS implementations of the Surescripts interface.

I just got off the phone with Rick Ratliff and Paul Uhrig from Surescripts, and they agreed to modify the NDA to explicitly allow the release of Surescripts implementations under Open Source and Freedom Respecting Software Licenses. In fact, from their perspective, this was implicitly allowed under the current NDA.

To move forward I have asked representatives from Medsphere and ClearHealth (Open Source vendors who already have a working relationship with Surescripts) to work with Surescripts to produce a short modification to the Surescripts NDA which will explicitly allow for a FOSS release. Once they have finished that language, we will present the resulting changes to the community at large to make sure it works for everyone. After this, Surescripts has agreed to add the changes to the default NDA.

While this issue will not be resolved until we have FOSS-available implementations that can access the Surescripts network, this is a huge step forward. I would like to thank Paul and Rick for making time for me in what must be a tremendously busy schedule.



Network Effect vs Open Source

Something I have been thinking a lot about lately is the issue of Software as a Service and how that model works with the network effect and open source software.

My thinking is prompted by a service that I am thinking of launching. The code behind the service is very simple, and while I have a predilection to release everything I do under FOSS licenses, I am thinking of not releasing the code for this. Notice that I am not talking about making a proprietary software product; that would be unethical. I am talking about offering a service over the Internet, using code that is kept private. Private code is ethical, proprietary code is not. It is a matter of control: proprietary software has a user run software that they have no control over. Private software running a network service is often called the ASP loophole of freedom-respecting software licenses like the GPL (but not the AGPL), but basically it is ethical because the user is not actually running the software at all; they are just accepting a benefit from that software.

The moral issue gets convoluted when you have a service that maintains user data on the foreign site, rather than just providing a take-it-or-leave-it service. Google, for instance, is in a very different position of responsibility when it chooses to offer an email service rather than a search service. If Google stopped providing search, that would suck, but if Gmail went down and took years of my correspondence with it… that would -really- suck.

For certain kinds of critical data, I think it is unethical even to use private code. This should seem especially obvious for health information.

Before we get to my issue, I wanted to point out another organization that is in essentially the same position:

StackOverflow is a site that supports the ability to ask very specific technical questions and then rank the answers that result. You see, if StackOverflow released its code as open source, then you could have hundreds of separate question-answering sites start, all of which would have only trivial numbers of users. Joel (as in Joel on Software) discusses this issue in a podcast (transcript):

Spolsky: Well, but they will suck away some the audience that might have come to us, thus reducing the network effect, and thus reducing the value to the entire community.

As long as StackOverflow is in -one- place, all of its users go to one place to ask and answer questions. There is a network effect to all of those users going to the same location: it means more questions and more answers. More questions and more answers mean better questions and answers, since the whole point of the StackOverflow architecture is that “more” becomes “better” through user voting. Better answers mean that more people will go there to search, which means more users, which means more questions and answers, which means better answers, and you have the loop: the critical upward spiral of community collaboration, where the more users you have, the more valuable the central resource is.
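The intuition behind Spolsky's worry can be sketched as a toy calculation. Everything here is made up for illustration; modeling a site's value as users × posts is a crude Metcalfe-style assumption, not a measured fact:

```python
def community_value(users, posts_per_user=3):
    """Toy estimate: a searcher's odds of finding a good, well-voted answer
    grow with the pool of posts, so total value is modeled as users * posts.
    The constants are invented for illustration."""
    posts = users * posts_per_user
    return users * posts

# One central site versus the same community split across two clone sites:
one_site = community_value(1000)      # 1000 users * 3000 posts = 3,000,000
two_sites = 2 * community_value(500)  # 2 * (500 * 1500)        = 1,500,000
print(one_site, two_sites)
```

Under this (admittedly crude) model, splitting the community in half cuts the total value in half, because each user now only interacts with half the content pool. That is the network-effect argument against forking the site into many small instances.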

What does this sound like? It sounds like open source software development and the way Wikipedia works. In fact, there is a whole book about this upward spiral-through-open-collaboration effect called Wikinomics.

But the upward spiral of the -content- on StackOverflow is hindered by attempting to open source the code. The code would obviously improve if it were open sourced, but the content would degrade. (Aside: it might be possible to find a way to turn the StackOverflow model into a protocol too, so that you could have multiple instances that would create a large distributed system of StackOverflow instances. So that when you searched for bird watching on you might get results from or whatever. This is what Google is trying to do with Google Wave.)

It should be noted that StackOverflow actually already open sources the content that it produces, using a creative commons license for the questions and answers posted there. They also provide a data dump of the content, so that you can get it for programmatic use without bothering to screen scrape. So they really are making an open source contribution.

Back to my idea. I have a service that I will be launching soon that will also greatly benefit from the network effect on the content, but which would be damaged by having multiple instances. I am inclined not to release the source code for this reason, but I have not yet made up my mind…


This got several good comments very quickly. Thanks for that, I really have not made up my mind on this issue and your comments have been very helpful.

Probably the most important information that I got is that there are several Open Source Stack Overflow clones in various stages of development.

I had searched for Open Source implementations of Stack Overflow and had only found Stacked. Personally, reimplementing something so that it will not be proprietary anymore, and then doing it in a proprietary language (no offense to Mono), just seems pointless. Of course, I really wish there was something in PHP, since that is my current crutch language of choice. Hopefully people looking for a GPL or BSD implementation of Stack Overflow might be able to find one now. Drop a comment if you have a good implementation in PHP!!


e-prescribing prior art

Whenever I hear that someone was doing Health IT a long, long time ago, I always suggest that they find copies of their old code and post them online, so that we can have a strong source of prior art to fight software patents with.

Recently, Bob Paddock took me seriously and dug up some invaluable prior art on automated prescribing. Today he sent me the results, including scans of printouts of both the printed prescriptions and the source code that made them. All of it with a date so long ago that it would invalidate any still active patent covering those subjects.