VistA Custodial Agent Launches, and it doesn’t suck (much)

[Update 2011-10-19: Many of the issues here applied only to the initial launch of OSEHRA and began to be addressed almost immediately. Do not assume everything you read below still applies.]

As is typical, I was alerted to the fact that Tiag, the winner of the Open Source VistA custodial agent competition, has launched a website and a new non-profit foundation called the Open Source Electronic Health Record Agent (osehra.org).

The good news? The site and the foundation both seem to be moving in the right direction. The site appears to be using a Drupal backend, which is a reasonable choice. The foundation has staked out pretty strong initial positions on technology and licensing, and those initial positions are not stupid. They have done -some- stupid things, but for the most part these appear to be relatively minor errors that can hopefully be fixed.

Strongly pushing the Apache License

Something that OSEHRA is doing right is to strongly push the Apache 2.0 license, without insisting on it. The whole licensing document makes a compelling case for Apache, and for the right reasons. Apache has a reputation as a strong non-copyleft Open Source license because it is GPLv3 compatible, it is more explicit, and it handles patent issues well. It is a strong license, and almost every informed person I know in the Health IT community prefers it when non-copyleft is required (which it obviously is in this case). But they consider their licensing document a draft, which I assume means they are leaving some room for debate on the subject.

Strongly pushing git

Another smart thing they have done is to set up a git repository. VistA, in order to survive, must have a strongly distributed version control system. Git is by far the best option for this, given how much it has pulled ahead of the other Open Source distributed version control systems. Git has some fundamental advantages for projects that are as large and expansive as the Linux kernel, and VistA is just such a project. OSEHRA has nothing in their VistA git repository so far, but the fact that they already have a repository is a very strong move.

I seriously doubt that many of the Tiag contractors actually understand how difficult running any kind of software version control system is for VistA. To actually pull it off, it is likely that someone is either going to have to build some KIDS-to-git software bridges and/or develop some very methodical development processes that allow an individual instance of VistA, in a KIDS environment, to function under version control. It is hard to know if Tiag is being smart here or just really lucky. We may never know unless Joseph Conn’s FOIA efforts reveal more from their initial proposal. (BTW, given that the VA is stonewalling, it would be smart for Tiag just to hand over their proposal to Conn, for the sake of transparency.)
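
To make the idea of a KIDS-to-git bridge concrete, here is a minimal sketch of the “export, then commit” half of such a bridge. This is my sketch, not anything OSEHRA has published, and it assumes the MUMPS routines have already been dumped to plain .m files on the filesystem (for example with the standard %RO routine export utility, or from a GT.M-style routine directory); the paths are hypothetical placeholders.

```python
#!/usr/bin/env python3
"""Naive 'export, then commit' half of a hypothetical KIDS-to-git bridge.

Assumes VistA routines have already been exported from the MUMPS database
to plain .m files. All paths are illustrative placeholders.
"""
import shutil
import subprocess
from pathlib import Path

ROUTINE_EXPORT_DIR = Path("/var/vista/routines")   # hypothetical export location
GIT_WORKTREE = Path("/var/vista/vista-git")        # hypothetical git working copy


def sync_routines() -> int:
    """Copy exported .m routine files into the git working copy."""
    copied = 0
    for routine in sorted(ROUTINE_EXPORT_DIR.glob("*.m")):
        shutil.copy2(routine, GIT_WORKTREE / routine.name)
        copied += 1
    return copied


def commit_snapshot(message: str) -> None:
    """Stage everything and commit, but only if something actually changed."""
    subprocess.run(["git", "add", "-A"], cwd=GIT_WORKTREE, check=True)
    status = subprocess.run(
        ["git", "status", "--porcelain"],
        cwd=GIT_WORKTREE, capture_output=True, text=True, check=True,
    )
    if status.stdout.strip():
        subprocess.run(["git", "commit", "-m", message], cwd=GIT_WORKTREE, check=True)


if __name__ == "__main__":
    count = sync_routines()
    commit_snapshot(f"Snapshot of {count} exported VistA routines")
```

The hard part, of course, is everything this sketch skips: FileMan data dictionaries, templates, and the other KIDS build components that live in globals rather than in routines.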

OSEHRA is putting all of its chips on long-term VistA-git compatibility. Specifically, their testing and certification strategy centers on using Gerrit (a git-based source-code review tool) to ensure that new contributions to VistA are safe and compliant. It is unclear to me how this system squares with the KIDS-based Class I, Class II and Class III software evaluation and deployment system that the VA currently employs.
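
From the contributor’s side, the Gerrit part of that workflow is simple; the gatekeeping happens on the server. Instead of pushing directly to a branch, a change is pushed to Gerrit’s review ref, where it waits for automated checks and human sign-off. A tiny sketch of that push, continuing the hypothetical working copy from above (the remote name and target branch are my assumptions):

```python
import subprocess

# Push the local commit to Gerrit's review ref instead of the branch itself.
# Gerrit then holds the change for verification and code review before it
# can be merged onto the real "master" branch.
subprocess.run(
    ["git", "push", "origin", "HEAD:refs/for/master"],
    cwd="/var/vista/vista-git",  # the same hypothetical working copy as above
    check=True,
)
```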

Perhaps Tiag was lucky to choose git, perhaps they were smart to choose it. Either way, I’ll take it. Git is one of very few systems that has the brawn to potentially solve the most vexing KIDS-vs-VCS problems. It is a smart choice. Pretty much the only choice that might have been stronger would have been to endorse Medsphere’s preexisting work on Bazaar. Since the decision by Medsphere to back Bazaar, git and Mercurial have surged ahead as more popular distributed version control systems. Specifically, the git community, centered around GitHub, has become the standard for enabling Open Source development. Take a look at the short whitepaper covering the OSEHRA repository strategy. It is obvious that Tiag gets the potential of git, but it is difficult to see if they fully understand the complexities of living at the nexus of the git+KIDS+GT.M+InterSystems technology stacks. Still, who gives a damn if they understand what they are getting themselves into if they are still taking the right strategy? There really is no other rational way forward on this technology issue. I am pretty happy about this.

Long-term, the foundation board is community elected

According to my initial skim of the current OSEHRA by-laws, the DOD and the VA, along with Tiag, initially appointed three board members. However, when the terms of these board members expire, their seats will be filled by a vote of the membership. I would love it if someone with either some legal expertise and/or more free time could read the by-laws carefully and see if my understanding on this is correct.

If it is, then there is a basic mechanism in place for OSEHRA to actually represent the community, something that has been missing in WorldVistA for years. This is pretty good news.

NDA in the Membership Agreement

The largest of these problems is the Membership Agreement. At this time, no Open Source developer or company should consider joining the OSEHRA organization, or even registering for the website. The current OSEHRA membership agreement contains a tremendously broad non-disclosure agreement. Here it is:

Confidentiality.  Member acknowledges and agrees that all information disclosed by such Member or received by such Member relating to the Corporation, including information discussed in any meetings of the Corporation, shall be presumed to be confidential (the “Confidential Information”) unless otherwise stated by the disclosing party.  Member hereby agrees to keep the Confidential Information confidential and not disclose the Confidential Information to any party other than its Representatives, whom it will cause to observe the terms of this Agreement, using the same degree of care that it exercises with respect to its own information of like importance but in no event less than reasonable care.  Confidential Information does not include information which:  (a) is rightfully obtained by a recipient Member without breach of any obligation to maintain its confidentiality; (b) is or becomes known to the public through no act or omission of the recipient Member; (c) the recipient Member develops independently without using Confidential Information of the Corporation or any other Member; (d) is disclosed in response to a valid court or governmental order, if the recipient Member has given the disclosing party prior written notice and provides reasonable assistance so as to afford it the opportunity to object; (e) information that is known to the receiving Member prior to disclosure by the disclosing party; (f) information that is approved for release in advance by written authorization of the disclosing party; or (g) information that is rightfully disclosed to the receiving Member by a third party not bound by a confidentiality restriction.  Member understands that the Corporation does not contemplate the exchange of competitively sensitive information.

No Open Source software developer I know would be willing to agree to this. It basically allows other ‘Members’ of the corporation to limit what can be coded by simply communicating ideas through OSEHRA. The agreement presumes that information from OSEHRA is confidential -by default-, which is so deeply contrary to the values of Open Source it is not funny. As an Open Source developer, whenever someone starts to tell me “This is confidential information…” I stop them immediately and tell them not to tell me any more. The only reason that I do not consider this a big deal is that it is obvious that OSEHRA simply did not think this through and allowed boiler-plate legalese to slip into its document process. They will fix this because I, and countless other Open Source developers, will not be able to participate under these terms. I know this because the other important document that they have created is their draft Software Licensing document. In that document they get the basic idea of openness right.

Non-disclosure agreements, like the one in the paragraph above, serve to create fodder for “trade secret” litigation. Trade secrets are a murky aspect of “intellectual property” law (I use the term ironically), because in order to have an action under trade secrets law, you basically need to show that you were keeping a secret, and that someone else did something wrong in obtaining and taking advantage of that secret. (IANAL, but this is not just my understanding.) Non-disclosure agreements are a big part of the process for ‘protecting’ trade secrets. Trade secrets have little use in the Open Source community, and elsewhere OSEHRA recognizes this. From its consideration of “intellectual property” law in the draft software licensing agreement:

The final bullet point, trade secrets, is mostly irrelevant and not applicable in the context of open source projects given that openness and transparency are some of the required characteristics of open source and are enforced by the best practices. We exclude trade secrets for the subsequent discussion in this document.

Obviously, the fact that the membership agreement at OSEHRA fundamentally endorses and protects trade secrets, which they know is not a part of the Open Source ethos, must simply be an example of the left hand not knowing what the right hand is doing.

Still, no Open Source developer or company should consider joining OSEHRA while the click-through membership agreement contains these terms.

Commercial vs Proprietary

A pet peeve of those of us who make a living writing and supporting Open Source software is when people say “Commercial” when they mean “Proprietary”. The word “commercial” simply implies that someone is being paid for working on software, and it implies a certain amount of professionalism. There are plenty of commercial Open Source companies out there. Red Hat is a great example of a general commercial Open Source company, and Medsphere certainly considers itself a commercial Open Source Health IT company.

Sometimes, OSEHRA uses “commercial” correctly, referring to both proprietary and Open Source individuals and companies:

In the context of open source software development, it is very important to prevent patented methods from being implemented into the code, given that they restrict the use of the software and undermine the emergence of a commercial marketplace in the ecosystem. Patents undermine the environment of trust that is required for a collaborative ecosystem to be successful. Commercial companies will be unlikely to invest and build upon the common resources of the ecosystem if there is a suspicion that this will result later in costly patent litigation.

The underlines are mine. This use of “commercial” is appropriate. But the following one is not.

The OSEHRA should not adopt GPL, a restrictive “free (libre) software” license that limits commercial interests.

This is a false statement. Red Hat lives and dies by the GPL in the Linux kernel. It is a commercial enterprise. What the GPL limits is “Proprietary business models”. It means that commercial companies have to share improvements. The GPL aligns corporate “interests” with community interests. It is not the “commercialness” of companies that is limited by the GPL, it is their “proprietary-ness”. This problem becomes more apparent with:

The OSEHRA will continuously encourage members of the ecosystem, both commercial and non-commercial, to contribute back to the code base any improvements and modifications that they made to the code.

There will be almost no “non-commercial” members of the ecosystem. There will, however, be a continuing debate between proprietary vendors, hybrid proprietary/Open Source vendors, and fully Open Source vendors. Labeling this as a distinction between “those who wish to make money and those who do not” is to be intellectually dishonest about the core debate.

Any frequent reader will know that I favor Open Source processes and copyleft licenses. But the argument against my positions is legitimate: proprietary software licenses can help ensure that developers are paid fairly for their work and contribute to the long-term sustainability of software development. I get that. I wish there were a way to ensure that developers could more safely invest in Open Source code development, and I have been financially taken advantage of by free-loaders in the Open Source community more than once. I have the intellectual honesty to admit that there is a real debate here, with valid points on both sides. But the divide is Open Source vs. Proprietary, not Open Source vs. Commercial, or Non-Commercial vs. Commercial.

This is insulting terminology that artificially endorses the positions taken by proprietary vendors and glosses over the consequences of their positions. (i.e. the word ‘proprietary’ gives some hint of the tremendous amount of control that a proprietary vendor holds over a user, whereas ‘commercial’ gives no such hint.)

Ambiguous Ethics Terms

I have a degree in Philosophy and a minor obsession with ethics. I call people out in Health IT frequently and, some might say, harshly.

I am convinced that discrediting those who deserve to be discredited is an important part of what it means to be ethical in Health IT.

But the OSEHRA Code of Conduct says:

Working to instill public and consumer confidence relating to electronic medical records, its member companies, and its professionals, avoiding any action conducive to discrediting members of OSEHRA, Inc.

Being a member of OSEHRA will never exempt someone from the need to be “discredited”. Hell, I am not sure I could even define what “discredited” means here, but I am pretty sure that if I feel that someone who happens to join OSEHRA deserves to be discredited, OSEHRA should not have anything to say about that. For instance, would this famous post from ONC regarding the use of the term EHR over the term EMR serve to “discredit” EMR technology? This is the only place that I can find on the OSEHRA website where the term ‘electronic medical record’ is preferred over ‘electronic health record’. Would ONC qualify for OSEHRA membership?

I am also unsure of how scheduling an untimely party could be unethical:

Refraining from scheduling general attendance meetings, receptions or other events at times that conflict with substantive programming or social events at OSEHRA, Inc. meetings without the express prior written permission of OSEHRA, Inc.

This raises the strange possibility that I might use my deeply nuanced understanding of the basic motivational constructs of Open Source software developers (they like free beer) to create an alternative social event (a free beer party) that would compete with “substantive programming” (a boring meeting), which would put me in violation of the “ethics” of OSEHRA. Seriously?

Again, I think this is an example of boilerplate language slipping through without careful examination.

Every Open Source project has unwritten policies regarding basic courtesy among software developers. Generally, it is frowned upon to criticize individuals directly, but ideas can be, and frequently are, judged very harshly. When a person strongly associates with a bad idea, they can expect some unkind words. But these unkind words always focus on the quality of the ideas.

Any Open Source community code of conduct is either a paper tiger or has some acknowledgement that heated technology discussions are the norm in the Open Source world.

No automated document forge

Putting aside the complexities of git+KIDS, software development is not the most important set of tasks facing OSEHRA.

OSEHRA is intended to be the middleman on a lot of really thorny policy and governance issues. The documents that they have created, including their white papers and corporate documents, are far more important than any software issues they might attempt to address at this stage. But OSEHRA is not yet using an effective document forging process.

There is an art to Open Source document forging, and everyone is familiar with the wiki concept. But wikis are not ideal for gathering commentary on formal documents. Community commenting was advanced most substantially by stet, which was used to enable thousands of comments on specific sections of the GPLv3. Stet is a dead project, but the heir apparent is a project and service called co-ment. Co-ment enables a community to build consensus around the contents of a document in the same way that a bug reporting engine allows the Open Source community to improve code. All of the documents that OSEHRA is developing, and even those that it considers “done”, should be submitted to a community comment process enabled by co-ment or software like it.

Apparently, lots of people read this blog. I have the server logs to prove it. I even had Roger Baker comment on my last blog post on Tiag. I remain surprised by things like that, since I write primarily about things that amuse me, in ways that amuse me. I always assume that my readership is limited to other geeky ideological information revolutionaries like myself. But apparently, important decision makers and thought leaders are reading this (Hi Allison).

Which is a problem: it means that my voice, as an abnormally high-profile hacker/blogger, is automatically weighted more heavily than others in my community. In fact, my proclivity to spend hours typing up an analysis like this makes me a worse technologist, not a better one. It is critically important that OSEHRA develop processes that embrace the insights of the members of my community who consistently prefer hacking over writing. They need to hear from the guy who is going to read this article and then spend the next 4 hours hacking to see if he can write a working KIDS-to-git translation script. Systems like co-ment dramatically lower the barrier for developers like that to participate in a document creation process. They do not have much time to contribute to commenting on governance issues, but if the OSEHRA governance process does not work for these hard-core developers, then it will cripple OSEHRA in the long term.

Overall

I am very pleased with the first pass that OSEHRA and Tiag have taken here. They are making some smart moves and most of the mistakes that I point out here are minor infractions that can either be quickly fixed (the NDA problem) or do not matter that much (the code of ethics issues).


Unethical Blue Button Contest

This contest is deeply problematic.

The Blue Button initiative was a good initiative because it allowed greater access. It made that possible by ensuring that access to patient data did not have to wait for the VA/DOD/Whatever to create a download that conformed to the still-forming XML standards that make true interoperability possible.

But now the VA is promoting the Blue Button format as something that should be integrated into other EHR systems. That is unacceptable.

Meaningful use requires that doctors allow patients access to their health record summaries using CCR or CCD. Both of those standards can easily be translated to printable reports that are very readable. Meaningful use -also- requires that patients be provided a summary in printed form. The simplest way to do this is just to have onsite printing of the CCR or CCD reports.

The Blue Button data format is not as readable as a summary printout, and it is not as parseable as XML. It is the worst compromise between human and computer readability. No clinician should ever make a clinical decision based on the content of a Blue Button download. The data is simply not rich enough to transfer into another health record or PHR. Essentially, that makes the Blue Button data format standard an FYI-only use case.
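
To make the “parseable as XML” point concrete: a CCR or CCD is structured XML, so a machine can walk straight to the field it needs, while a Blue Button download is free-form text that has to be scraped. Here is a toy sketch; the XML below is a drastically simplified, made-up stand-in for a CCD-style header (real CCDs are CDA documents and far richer), so treat it as an illustration rather than a sample of the real standard.

```python
import xml.etree.ElementTree as ET

# A drastically simplified, made-up stand-in for a CCD/CDA patient header.
TOY_CCD = """
<ClinicalDocument xmlns="urn:hl7-org:v3">
  <recordTarget>
    <patientRole>
      <patient>
        <name><given>Jane</given><family>Doe</family></name>
      </patient>
    </patientRole>
  </recordTarget>
</ClinicalDocument>
"""

ns = {"hl7": "urn:hl7-org:v3"}
root = ET.fromstring(TOY_CCD)

# Structured data: an unambiguous path straight to the fields we want.
given = root.findtext(".//hl7:patient/hl7:name/hl7:given", namespaces=ns)
family = root.findtext(".//hl7:patient/hl7:name/hl7:family", namespaces=ns)
print(f"Patient: {given} {family}")

# A Blue Button download, by contrast, is plain text; reusing it means guessing
# at labels and layouts, e.g. scraping for a line that looks like "Name: Jane Doe".
```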

The innovation of Blue Button is to not let standards compliance get in the way of access. For this reason the DOD and VA use of the Blue Button format should be applauded! The Blue Button initiative was fully give-me-my-damn-data compliant! But promoting it as an alternative to a process that supports fundamental data reuse and is -already- required by meaningful use is unethical.

I formally request that the VA withdraw this contest, or make it clear that a CCR/CCD/printable report download meets the requirements of the contest.

Either you are promoting the notion of patient access, which is wonderful, or you are promoting shitty health data standards, which is unethical.

Cross-posted on the VA challenge forums here.

Here is an example of the file format in question.

-FT

Contract to create VistA shepherd goes to newbie

I just learned (from a Modern Healthcare article, of course) that the contract has been awarded to Tiag.

At first blush, this news was not concerning. I am only peripherally involved in the VistA community, and there are lots of solid VistA-related contracting companies that I do not know of. It was a little surprising, I must admit; I was expecting this to go to Perot Systems (now Dell) or DSS, both of which have deep VistA pedigrees. My second guess would have been a big contractor like IBM or CSC.

But I have done a little analysis and now I am pretty concerned about this.

First I used Google to do a site search for the term “VistA” on the Tiag website. The syntax for that is “site:http://tiag.net VistA”.

Results.. nothing..

doh!

Ok, but even though their website does not list anything about VistA, maybe the leadership is invested in the VistA community. It is pretty easy to sort out participation in the VistA community; it all happens on the Hardhats mailing list. So I did a search on the Hardhats mailing list for the proper names of each of the people listed under the Leadership Bios on the Tiag website. Here is a sample for Tiag CEO Dalita Harmon. Nothing.

doh!

Ok, the Tiag leadership is not participating in the VistA development community, but perhaps they have “underling involvement”. A search on the Hardhats mailing list for anything coming from tiag.net returns nothing. (It might return a thread I started about them now…)

doh!

Maybe they prefer to participate in person, attending WorldVistA meetings? Nope.

doh!

Maybe the organization is just deeply skilled in health IT. It looks like they have some military health IT experience, which is not at all the same as VA health IT. The resume of the Tiag CEO shows that she is not a computer scientist or a self-taught developer. This is pretty important, because Tiag is registered as a small business. The CEO will probably be making significant decisions about this project, and she is not a software developer. Moreover, her resume speaks to industry-hopping with “leadership experience” as the result. Given that background, there is a very real danger that she might be confident without being competent regarding VistA.

doh!

Perhaps, they have experience with Open Source? Nope.

doh!

How about experience with MUMPS? Nope.

doh!

Are these Google searches working at all? Perhaps Google has not indexed tiag.net. Nope: a search for the CEO’s name returns lots of pages, including this one, which details the history and philosophy of the company. From that page:

one of our core differences is going against the industry standard of treating people like commodities.  tiag hires the best talent out there and treats them like talent

This is kind of troubling, because VistA, in my experience, is one of the most complex and difficult technical arenas in health IT. The system is amazing, but making VistA go is a dark art, and experience really matters here. From what I can tell, they have no VistA experience to speak of. This, and the generally buzzword-compliant and beautiful tone of the website, lead to a dangerous potential conclusion: this is an organization that has expertise in writing beautiful proposals, rather than any kind of industry experience. What if this is yet another “beltway bandit” with a limited, “across the fence” understanding of the VistA community inside the VA and no concept of the VistA community outside the VA?

doh!

At this point, I am going to put out soft feelers to the VistA community, for indications that my research is wrong. But at first blush, it appears that the VA has chosen VistA-outsiders for this role. There are several ways that this analysis could be wrong; for instance, if they had just hired George Timson, or perhaps partnered with someone like Open Health Tools, and they have not yet updated their website. So these concerns could be simply irrelevant. So first, I hope I am wrong in this analysis. Second, I can only hope that choosing a VistA-outsider was intentional on the part of the VA.

-if- it was intentional, it might not be a bad thing.

It appears, at first blush, that these guys are all going to be VistA newbies. The first thing they need to understand is that they are in fact newbies. I know a lot about Health IT, but knowing VistA… that’s something else. Understanding what VistA appears to be, and understanding it at the level of a CAC or VistA programmer… that’s something else entirely. It also appears that they are newbies to Open Source generally. I would have loved to see some Linux Foundation/Apache Foundation/Mozilla Foundation type credentials. I do not see that here either.

While I was initially investigating VistA, I wrote the WorldVistA wiki page “What is VistA really?”. It’s a chronicle of a health IT outsider becoming a VistA insider (remember I said “insider”, not “expert”). Nothing written in that article would come as a surprise to a VistA insider, but if you read it… and you are surprised by anything there, then that is a pretty good indicator that you are a VistA newbie.

Being a VistA newbie is fine, as long as you understand that you are a VistA newbie.

If this “Open Source VistA” thing is going to work, then the people leading that effort are going to have to be deeply aware of what “Open Source” and “VistA” really mean, -or- they are going to have to have a lot of humility.

In Open Source, reputation and leadership are the same thing. If Tiag is as unknown to the rest of the VistA community as they are to me, they have a long way to go. This does not mean they will do a bad job; Harris Corp was pretty inexperienced with Open Source, but they ended up doing a pretty good job on CONNECT. In the end, they earned a good reputation. I was pretty freaked out when that contract went to Open Source novices, but it turned out O.K. in the end.

Of course, Harris had a stellar technical team. They really understood what they were getting into from a technology standpoint. Does Tiag? One of the critical issues around an Open Source VistA process is that normal version control does not work on VistA. This has to do with the quasi-versioning capability of the KIDS system. Updates in VistA often come in the form of KIDS packages that inject code directly into the VistA database. That code, living in the database and not on the filesystem, makes tracking VistA with traditional version control impossible. Can you imagine trying to create an Open Source governance structure without the presumption of an underlying version control system in place? The governance of most projects translates to “the process by which we decide who gets access to subversion/git/whatever”. This is just one example of how VistA context is going to be critical for any kind of workable governance. My proposals for VistA governance are some of the oldest and most complete writing on the subject. They date from 2008, which is something like 21 years ago in Internet years (which are, as everyone knows, roughly comparable to dog years). So I am something of an “expert” on the subject of VistA governance.

There are three working definitions of ‘expert’:

  1. The student: Person who understands the problem well, and might recognize the solution.
  2. The amateur in experts clothing: Person who advertises that they know the solution, but in fact does not understand the problem at all.
  3. The real expert: Person who has solved the problem.

I am, at least, solidly in the first category.

If “What is VistA really?” serves as a good introduction to VistA, this post will serve as a good introduction to Open Source values. What matters in Open Source is “being right”, and being right comes from evidence. The evidence in this case (a bunch of Google searches) suggests that Tiag is inexperienced and in over their heads on this one. The difference between Open Source developers and other developers is that we talk openly about these types of issues, and criticism, when backed by evidence, needs rebutting. Participating in community discussion and responding to community criticism is what “participating” in an Open Source community means. We have lots of heated arguments, and it’s never personal. It’s always about what the right thing is for the sake of the project in question. If Tiag thinks this post is critical, they are in for a whirlwind.

At this point I am pretty nervous. The future of VistA depends greatly on Tiag not screwing this up, and I see no evidence that they have any experience in Open Source, licensing, governance, or VistA and its unique development process. There are a thousand ways to make a train-wreck here and only a few ways to do this right.

I will try to update this article with more information regarding Tiag’s qualifications.

I really hope Tiag has what it takes.

-FT

Medsphere OpenVistA meaningful use certified

I am happy to mention that Medsphere has received certification for their OpenVistA EHR system.

As far as I know, this is the only meaningful use certification for Open Source software, for the in-patient setting. All of the other certifications that I have mentioned so far have been ambulatory.

Congratulations to the Medsphere team. This is an important accomplishment, and a milestone for the Open Source health software community.

-FT

VistA License debate: it’s about proprietarization

It looks like WorldVistA is, for now, holding fast to the GPL and AGPL for VistA licensing. I have been a vocal advocate for compromising with DSS and Open Health Tools around the LGPL. The LGPL would allow some innovations to be licensed under the GPL, while others, in the core of VistA, would remain compatible for bundling with proprietary software.

Recently, Skip McGaughey was quoted in Modern Healthcare as saying:

“I believe it’s all about community-building,” McGaughey said. “I believe people have focused too much on technology and licenses and they need to focus on the care of individuals. If we can switch the focus from licensing and technology—the VistA community has a tremendous opportunity to fundamentally alter care throughout the world.”

“They’re starting from a base that has a tremendous knowledge base, built by care providers, tested and modified over a long period of time,” McGaughey said. “So, the opportunity is tremendous. So what we have to do is change the focus and quit worrying about the individual ‘me’ and talk about the ‘we’ together,” he said.

“If we enable an environment for people to collaborate in building infrastructure that everybody can use, to share the expense, what we can do is build the integration and interoperability and build a collaborative spirit,” McGaughey said. “Then people can climb the value stack to provide added value that can make money.”

It should be noted that I was not at the talk and did not hear exactly what Skip said. I know Skip and I know that he is a good guy; I think he intended to bring a message of reconciliation regarding licensing, which is very good. I may actually agree with Skip’s position, but I cannot agree with this quote. While I am in favor of compromising with Open Health Tools, the WorldVistA position of insisting on the full GPL is not unreasonable, and it is certainly not anti-people.

Let’s be clear: when you talk about proprietary-friendly licenses in medicine, you are not talking about a way for people to “make money” or “earn a living”; you are talking about a mechanism that traps software consumers into a monopoly relationship with a software provider. Proprietary software in healthcare is so famous for abusing this monopoly position to the detriment of its clients that the issue is being investigated by Congress and is even the subject of in-depth lampooning.

To trivialize licensing and indicate that it is about “people” is typical and insincere. The software license defines the basic power structure of the relationship between software developer and software consumer. Full copyleft ensures that the developer and the consumer are always equals. Proprietary licenses ensure that the software vendor is in control. Open Source licenses that allow for proprietarization are a grey area. If software consumers are careful only to use Open Source components, they can maintain a balance of power, but if they ever allow a proprietary module into their ecosystem, then the license for that module puts some vendor back in the driver’s seat.

If there were an “open” movement in the prisons around the world so that all prisoners were limited to just one shackle, they would still remain prisoners. Similarly, as long as one software vendor can dictate terms to a clinic or hospital, they have a problem. Proprietary vendors who do not abuse their clients are like kind wardens. Just because they are nice to a prisoner does not change the fundamental power dynamic in the relationship.

The LGPL is a compromise precisely because it allows people who value freedom to work with people who are willing to compromise with proprietary vendors.

When you start hearing people saying things like “value stack” and “let people make money”, you are hearing the argument that being trapped is sometimes OK, if what you get for it is worth it.

This kind of power dynamic is precisely what prevents communities from trusting each other and cooperating. If you want to create community, you better not ignore licensing concerns.

-FT

VistA moratorium

Lots of people have been asking for copies of the VistA moratorium, and rather than email it to ten people, I thought I should make my copy available in public. Obviously this is bad news if your job was to develop Class III code for the VA.

The VA has recently released a moratorium on the development of Class III code. For the uninitiated, Class III code is code developed at local VA hospitals. The VA has made the decision that VistA will no longer be developed by a group of collaborators; rather, it will be centrally developed by a single controlling entity.

-FT

NCVHS Testimony on Meaningful Use

(Update 08-13-09 I have already presented this to NCVHS)

Introduction

I represent a community of health software developers and clinical users that respect software freedom. This community operates in the legacy of the VA VistA underground railroad. There are several important commercial EHR vendors that respect software freedom; they are an important part of our community, but they are only a part, and certainly not a majority. I hope to convince you today that the notion of ‘vendors’ is too small for the task of computerizing healthcare. I hope to convince you that we need an open software community to solve this problem. I believe this because it is the only thing that has substantially worked so far and, given the magnitude of this problem, the only thing that has any chance of working in the future. Before we can move into that future, we need to have a candid dialog about the failure that typifies current health IT.

Relevant Biography

I appreciate the opportunity to testify today. More importantly, I am thankful that you have chosen to invite a person from our community to speak to you. I must discuss why I am qualified to be a representative of my community. This is important because legitimacy here is not earned with degrees or certifications. The FOSS community respects me because I have made substantial code and documentation contributions under FOSS licenses. I am the original author of FreeB (http://freeb.org), which is the first billing engine available under the GPL that supports both paper formats and X12. Several projects adopted FreeB, and although few projects still use the original version, almost all of the second-generation FOSS billing systems have been built in response to FreeB. Because of FreeB, at one time code that I had written was accepted into more projects than that of any other single FOSS contributor (although that accolade now goes to members of the Mirth project). I won the 2004 LinuxMedNews innovator of the year award for my work on FreeB. I am also the primary author of the WorldVistA ‘What is VistA Really’ vistapedia article. I developed mutual respect for many different FOSS EHR projects through my FreeB work. Since that time I have tried to remain neutral toward any given project or codebase, instead working to further the community as a whole. I no longer have a leadership role on any given project, though I professionally support several codebases. Currently I am the Chief Architect of the HealthQuilt project hosted at the University of Texas School of Health Information Sciences. HealthQuilt is focused on the use of FOSS interoperability software for the purposes of Health Information Exchange in Houston, TX, and is funded by the Cullen Trust for Healthcare. This testimony would have been impossible to arrange without the help of the UT system folks here in Washington D.C. and the UT SHIS people back in Houston. Thanks!

I say these things not to impress the executive subcommittee; you already think I know something or you would not have invited me. I say this to communicate to the larger FOSS community why I am qualified to speak for the clinicians and programmers in the health FOSS movement. I must do this to offset the hubris of merely presuming that one can represent an entire community whose opinions and priorities differ significantly. I hope my testimony is representative of the many emails and conversations I have had about this over the previous few days. I should note that Dr. Ignacio Valdes (of LinuxMedNews) and Will Ross (of Mendocino Informatics) have both maintained similar project neutrality, and I have relied on their counsel heavily.

I am a Hacktivist, which is in itself a controversial title and deserves explanation. The ‘Hacker’ part of what I do has nothing to do with breaking into computers. That is called Cracking, and true Hackers have little respect for it. The difference between a Hacker and a Cracker is similar to the difference between a graffiti artist and a simple vandal. A Hacker (for our purposes) is a person who solves complex software problems with panache. (http://www.catb.org/~esr/faqs/hacker-howto.html#what_is)

A Hacktivist is a person who Hacks for social change. I am also an entrepreneur, and I charge for services and support of FOSS health software, including EHR systems. I see no contradiction between having a purpose of promoting social change and a profession of software consulting. One way to formalize this pairing is with the concept of a not-only-for-profit business.

Why are we here?

Of course I do not mean this question in the global sense, but rather: why is the United States government proposing that we fund EHR systems in this manner? Every other major industry computerized itself decades ago. Why didn’t healthcare follow suit? It might be helpful to consider for a moment what was -not- the problem. Technology was not the problem. Thirty years ago, we had SQL database systems driving complex OOP applications. We had the ability to do thin clients and thick clients, distributed software architectures, and just about every other technology that drives modern EHR applications. Do not get me wrong, I love web 2.0 technologies, super-thin browser clients, Aspect Oriented Programming and Service Oriented Architectures, but those kinds of recent innovations make the development and deployment of EHR applications ‘easier’, rather than ‘possible’. Technologically, we had everything we needed to computerize this industry thirty years ago. Why didn’t it happen?

The problems in Health IT are political, not technical. By political I mean that the problem lies in the relationships between organizations and individuals involved in the delivery of healthcare.

Doctors, so far, have been reluctant buyers of EHR software. For good reason. EHRs slow doctors down, and doctors are incented to see as many patients in a day as possible. EHRs get in the way of that, and so doctors have hesitated to adopt them. Generally, the way doctors are paid discourages them from using technology to improve the quality of their care. This funding should change that, temporarily at least, which is a good thing.

The other side of that coin is that proprietary Healthcare IT vendors have been unsuccessful at selling doctors anything that is not directly related to improving coding. Many modern proprietary EHR systems are little more than ‘coding justifiers’; they are not designed to improve the quality of care but to substantiate the increased code complexity of a given procedure. Even these EHR systems are woefully under-adopted.

We are funding EHRs because we have experienced massive and unprecedented market failure. No proprietary EHR company has approached the market dominance that is typical in other software industries. Microsoft has about 80% market share of the desktop computing space. All of the other desktop operating system vendors combined split the remaining 20% market share. Why is there no Microsoft in medicine? Heck, even Microsoft would like to be “the Microsoft of medicine” and regularly fails in this endeavor. The largest Health IT vendors barely make it to a double-digit market footprint, and those few that do achieve that status do so through mergers and acquisitions rather than dynamic growth. In the EHR space, 1% of the market makes you a big player.

Why is there no ‘Microsoft of Medicine’?

The reason that there is no Microsoft of Medicine is that, generally, healthcare does not have the same dynamics as the operating system market. When Microsoft software developers code operating systems, they are essentially negotiating the requirements with other technologists, the engineers who designed the microchips in computers. There is a lot of complexity in making an operating system, but there is also a very specific set of requirements that is totally and accurately defined.

Most proprietary EHR development follows a tragic pattern of ‘spec-seeking bloat’. The basic development process typically looks like this:

  1. Develop software at one clinic/hospital to solve that organization’s problem
  2. Start selling it in other places
  3. Start meeting new requirements from new customers
  4. Recognize that the current codebase is becoming unmanageable spaghetti
  5. Have a big meeting where we all agree that now we really ‘know what the software should look like’
  6. Write a new spec based on lessons learned from the previous version
  7. Go code to the new spec
  8. Release the 2.0 version.
  9. Users hate it; they all want the previous version back.
  10. Developers scramble to make the latest version work as well as the previous version
  11. Return to step 3 and repeat until the spec is so large that it is not possible to even consider implementing it

This is the reason that ‘mature’ proprietary EHR systems seem so bloated. An EHR is impossible to architect top-down. The product must be modified so much ‘on the ground’ that the higher-level organization becomes meaningless.

It is impossible to create a ‘spec’ for a system as complex as an EHR. Trying to do so is a constant, ongoing attempt to make healthcare work the way a microchip does. This is the reason why the current CCHIT ‘feature bucket’ certification is met with such resistance within the FOSS community; it is simply the wrong way to approach the problem.

About Medical Manager wasting away

Most of the following information is pulled from a page that I maintain on the History of Medical Manager. http://docs.mirrormed.org/index.php/Medical_Manager_History

In the 1980s it was estimated that 80% of all doctors using practice management systems used Medical Manager. If there was a company that had the opportunity to become the Microsoft of Medicine, it was Medical Manager. During the dot-com boom and bust, Medical Manager was sold several times. Each time the software ownership was sold, Medical Manager support staff were laid off to increase profits. Medical Manager is now a ghost. Its market share has been gutted. Its dominance has been relegated to a footnote in Health IT history. Medical Manager is important because it shows the basic temptation with proprietary systems.

First a proprietary company releases a well-designed software system. The proprietary company supports their customers with passion. Soon their user base and adoption grow. Then the company is sold. The new management must take the same asset and now make more money on it. How do they do that? They decrease the number of support staff and attempt to force expensive upgrades on their customers. This process, or variations on it, is the inevitable result of vendor lock-in, and this pattern is generally predicted by the economic models of vendor lock-in. (If anyone from SAGE is listening, please consider releasing Medical Manager under the GPL; it would breathe new life into the product. If you would like to try this, let me know and I will do what I can to help.)

Sometimes proprietary software companies waste away, and sometimes they die of a stroke.

About AcerMed’s massive stroke

When someone discusses the process for selecting a proprietary EHR vendor, they typically recommend a product similar to AcerMed. AcerMed was CCHIT certified and rated Best in KLAS. They had enthusiastic users and capable software engineers. I have no evidence that AcerMed was anything other than an honest company. However, they were sued out of existence. Hundreds of AcerMed customers had to scramble to find new software.

Kept Honest: ClearHealth vs. MirrorMed

I support the ClearHealth codebase under the trademark MirrorMed. The codebase is 95% the same, but I can support a ClearHealth instance, and ClearHealth Inc. can easily support a MirrorMed instance.

ClearHealth Inc cannot charge too much for its software, or I would undercut them. I cannot charge too much, or they would undercut me. If I am not responsive in my support, my clients will go to ClearHealth Inc. If ClearHealth is not responsive… you get the idea.

Because ClearHealth and I are sharing code under the GPL, neither of us can get away with the shenanigans that are common in the proprietary industry. Hard things are expensive, but easy things are cheap. If either I or ClearHealth Inc. go out of business, our clients would not be trapped or abandoned. The GPL insulates clients from the kind of corporate failure that AcerMed experienced. If Medsphere went out of business tomorrow, OpenVistA would be just fine.

If Medical Manager had been available under the GPL, Medical Manager would never have tried any of the shenanigans that they did. Ironically, if Medical Manager had been available under the GPL, SAGE would currently have deeper market penetration than it does now.

About VA VistA

Most of the following information is pulled from the pages that I maintain: What is VistA Really? and Why is VistA Good?

Almost everyone can admit that VistA is an excellent EHR system. Recent research shows that VA VistA operating at VA hospitals accounts for more than half of the advanced hospital EHR systems deployed in the United States.

What is not commonly understood is ‘why’ it is so good. How did it happen that a system developed by federal employees leads the way as the most widely deployed advanced EHR system in the United States? The reason is that VistA, unlike proprietary EHR systems, evolves.

From Evolutionary Dynamics: Exploring the Equations of Life by Martin A. Nowak http://www.ped.fas.harvard.edu/people/faculty/

The main ingredients of evolutionary dynamics are reproduction, mutation, selection, random drift, and spatial movement. Always keep in mind that the population is the fundamental basis of any evolution. Individuals, genes, or ideas can change over time, but only populations evolve.

Each VA hospital employs its own VistA programmers to solve the problems of the local hospital. That makes each hospital’s instance of VistA an ‘individual’. The VistA instances at all of the hospitals combine to form the required population. The mechanism for reproduction is the ability to copy source code. The mechanism for positive mutations is the ability of each local VistA programmer/clinician pair to improve the VistA source code. The process of selection is the ability of VA clinicians and administrators to recognize superior work at another hospital and ‘kill’ the local programming effort in that area in favor of adopting the foreign code.

This is not some abstract plan. Medication barcoding was developed at a local VA hospital and then taken nationwide. This is a high-profile example of a process that is constantly repeated across the VA institutions (or it used to be).

Competing, decentralized, collaborative software development is both a hallmark of FOSS development and a requirement for software to evolve. This stands in stark contrast to the recent decision to integrate a proprietary lab system into VistA. That proprietary system is incapable of evolving precisely because it cannot be freely copied. This is the reason that the VistA community was upset about this: http://www.modernhealthcare.com/article/20090203/REG/302039997

I was surprised to hear that the bureaucrat who decided to go with a proprietary, static lab component defended the decision by saying ‘we are taking an evolutionary approach’. People like this are very dangerous to the VA VistA community. They have learned to speak our language without respecting our values. They are pharisees who embrace the form, but not the spirit, of our community. Because of these kinds of decisions, the majority of major VistA innovations over the last two years have been outside the VA.

There is a small chance that someone from a congressman or senator’s office might read this. You should know that unless you are getting VistA advice from a card-carrying member of the underground railroad, you are getting bad VistA advice. Personally I am a VistA novice, but I now know enough to know that when VA bureaucrats make good decisions it is because A. they are listening to railroad card holders, or B. they -are- card-carrying underground railroad members, or C. sheer dumb luck. It is painfully obvious to those of us in the community that most VA bureaucrats typically have no idea what they are babbling about. Read my VistA articles above and then find yourself someone who actually codes in MUMPS to advise you directly. Once you get it, never ever let go of that direct programmer contact. Remember that VistA happened as a rebellion by clinicians and programmers against the VA bureaucracy. I also find that Roger Maduro and the board members of WorldVistA tend to be informed and right-headed when it comes to VA VistA.

Seven Generations

My grandmother took a drug while she was pregnant with my mother that predisposed my mother to ovarian cancer. My mother died from ovarian cancer. I will pass my mother’s genes to my daughters and granddaughters. As my granddaughters consider their predisposition to ovarian cancer, they will need to consider the contents of my grandmother’s medical record as well as their genetic inheritance. The content of my grandmother’s medical record could easily be relevant for a period of over 150 years.

The notion that a proprietary software vendor can be trusted with the responsibility of upgrading my grandmother’s paper medical records into an electronic format that will be relevant to my grandchildren is like pre-selecting the East India Trading Company to provide the technology for the Apollo space missions. It should immediately strike anyone who considers the problem as a farce. Companies simply do not last for 150 years.

Making a laundry list of what an EHR should do is a little silly. It is equivalent to saying ‘We should encode both modern and future medical principles and make the computer do that’. We have only a vague idea what an EHR should do now, much less what it will need to do in the future.

I have covered this extensively in my article on the concept of the seven generations test, some of which I covered here. http://www.fredtrotter.com/2007/10/19/healthvault-failing-the-seven-generations-test/

Get out of the way. please.

I hope I have argued effectively that the proprietary vendor model will never deliver true EHR requirements. I hope I have encouraged you to take a very long-term perspective on this problem. I hope I have shown you how dangerous proprietary licenses are to clinicians. But I do not need you to agree with me on these issues. (Or, for that matter, to publicly admit that, secretly, you agree with me.)

What I do need you to do is not create barriers to the commercialization of the evolutionary ‘VistA’ development model and ideals with your funding systems, or with your definitions of ‘meaningful use’. Please do not allow yourselves to get caught up in proprietary thinking. Here are some general rules.

Tolls are not OK. To quote ClearHealth CEO David Uhlman:

If “Meaningfule Use” ends up requiring the American Medical Association’s Current Procedure Terminology (CPT), proprietary editions of ICD9/ICD10 codes, direct electronic transmittal of prescriptions (after the RXHUB/SureScripts merger only one company provides this), then they are precluding a completely Open Source offering for healthcare.

These kinds of proprietary systems cannot be freely redistributed, and free redistribution is a requirement under FOSS licenses.

Expensive, feature-bucket certifications are designed for black-box systems and will not work for the FOSS community. VistA is waaaay beyond CCHIT standards and has to be ‘dumbed down’ to meet the certification requirements. The FOSS community has been working with CCHIT, and they have been very responsive over the last two months. But they were unresponsive for the two years before that. If you make CCHIT the only way to get certified, they will have no motivation to work with us. Give us the option to create an alternative certification body. If you give the FOSS community that option, I fully expect that CCHIT will work with the community to create a separate-but-equal certification method that works for FOSS but is still ‘fair’ to proprietary vendors.

Answers to specific questions

What is the “time to market” cycle from adoption of standards to installation across the client base?

This is impossible to answer. It depends on the standard and depends on the individual who is in control of an instance of a FOSS EHR. The vendors cannot control our clients the way proprietary vendors do. I can tell you that bad standards will be adopted more slowly than good ones.

How does that enable or constrain criteria for 2011 for eligible professionals?

I have no idea.

Hospitals?

I have no idea.

Later years?

Only FOSS EHR systems are going to be able to adapt to far-future standards. Not sure I can say more than that.

What are vendors’ expectations with respect to increased product demand in 2011 and after, and how do they expect to meet it?

This is actually a question for the community and not the vendors. The existing vendors would say that they will scale their operations, and they will be able to do that as well as or better than any proprietary vendor. However, if the current vendors are unable to meet demand, the community will spawn new vendors to support existing projects. This is only possible with FOSS EHR systems.

From a technical perspective, all of the FOSS EHR vendors I know of can scale with the ‘cloud’. (The ‘cloud’ is another technology that is impossible without FOSS.) Using that technology our vendors can easily provide an EHR instance for every provider in the country. So the technology scales all the way to a national level smoothly. If our community were exclusively chosen to deploy every EHR in the country, we would need to scale our support staff for things like phone support, and that would take a year or two. Even this is 10 times faster than proprietary alternatives.

What are potential risks (for example, need for additional technical support to assure successful implementations) and how can they be mitigated?

With freedom comes responsibility. FOSS EHR users have the right to shoot themselves in the foot. We cannot give our clients true freedom and, at the same time, ensure that they will always do the right thing. The best FOSS EHR vendors will be those that develop a collaborative relationship with clients that makes good decisions more likely. But no vendor can control a client. That’s against the rules.

I think there is a danger that the single or small-group practitioner will be unfairly hurt by these technology requirements. I hope that our community will be able to address the specific cost and technology requirements of this user group. I am afraid that technology requirements will force small practices into larger groups, which may not be the best way for those clinicians to provide care. I am advocating for ‘simple as a toaster’ sub-projects within the FOSS community to help prevent this.

How will vendors need to adapt their product development and upgrade cycles to synchronize with progress toward increasingly robust requirements for meaningful use, information exchange, and quality reporting?

VistA is already way ahead of everyone. Other projects like ClearHealth/MirrorMed, OpenEMR, OpenMRS, Tolven, etc. will have to catch up. Even with the other projects playing catch-up, the limiting factor here will be how much technology a client can implement in a short period of time. Please read David Uhlman’s blog post (in the references below) for more insights on this issue.

What changes are anticipated in the vendor marketplace between now and 2016 as a result of the incentives?

The incentives are going to create a ‘political environment’ that could replicate the focus on quality that is already found in the VA. This will replace the procedure farming that is currently the most profitable clinical business tactic. Once that happens, the basic ‘evolvability’ of FOSS will cause a blossoming of different systems designed to increase quality. Essentially, the VistA programmer/clinician pair programming model will spread like wildfire outside the VA, even as it continues to be killed off inside the VA.

Vendors that currently have investments in VistA or other mature projects like ClearHealth, OpenEMR, OpenMRS, or Tolven will have a considerable advantage over newer FOSS vendors and proprietary vendors.

Over the next 50 years, it will become increasingly difficult to compete as a proprietary vendor. Only those who can achieve and sustain Microsoft-like development savvy will be able to compete. FOSS EHRs will provide a floor, and without substantial advantages no one will consider using proprietary systems.

The value will move away from the code itself and into higher-level processes like data mining for clinical rules. This will be just in time, however, because without this kind of adaptability it will be impossible to cope with the coming deluge of genetics and proteomics information.

References

I have used information and ideas from the following resources extensively.

Free and Open Source Software in Healthcare AMIA Open Source Working Group White Paper (Dr. Ignacio Valdes) http://www.amia.org/files/Final-OS-WG%20White%20Paper_11_19_08.pdf

David Uhlman’s (ClearHealth CEO) blog post on Meaningful Use http://health365.wordpress.com/2009/04/26/idea-67-for-april-26th-2009-not-shooting-ourselves-in-the-foot-or-the-meaning-of-meaningful-use/

Dr. Edmund Billings’ (Medsphere CMO) posts to the Open eHealth Collaborative http://groups.google.com/group/open-ehealth-collaborative/web

OpenHealth Mailing List http://tech.groups.yahoo.com/group/openhealth/

LinuxMedNews.com http://linuxmednews.com/

Webinars and papers from Mendocino Informatics http://www.minformatics.com/

HardHats (VistA support list) http://groups.google.com/group/Hardhats

I would also like to thank the folks from MOSS, Dr. David Kibbe, WorldVistA and the folks from U.T. SHIS for their help and advice.


Computer Science should be required for Medical School

Hi,

Currently, in Texas, one is required to take Physics I and II and Calculus I (or an equivalent statistics class) to apply to Medical School. Those are not the only requirements, of course, but they are requirements.

So far, I have never met a Medical Doctor who needed to use calculus. In fact, the only ones that might need to really understand the subject are those who are doing high-level mathematical modeling for Bio-Informatics, and for those researchers calculus is not enough. Your average primary care doctor *never* uses calculus. They also *never* use Physics! Of course they deal with all kinds of systems that obey the laws of physics, including blood pressure, syringes, etc. But they never treat a patient and think “hmm… what was the relationship between the volume and temperature of a gas…”. They think in higher-level abstractions like “When blood pressure is high, that could mean X, or Y, or Z, depending on…”

The real benefit of Physics and Calculus is that they introduce you to new ways of thinking. That new way of thinking makes it easier to understand higher level concepts that you *will* use everyday as a doctor.

I recently had a conversation with a clinician about some work that I am doing on the NPI database, which lists every doctor (that prescribes medicine) in the United States. The conversation went like this:

Fred: “I need to munge the data. I need to process the data in a different way than it is listed (a flat CSV file); I need to turn it into a real, normalized database before I am going to be able to use it effectively.”

Clinician: “Wow… OK… How long will that take you?”

Fred: “Well, you do not want me to work on this full-time, do you? I only have about 5 hours a week I can work on this in my current schedule…”

Clinician: “Yea.. but your other projects are pretty important… given only 5 hours a week, how long would this take?”

Fred: “three months.”

Clinician: “Boy… that’s a long time… I know! Why don’t you just create a database with Texas doctors, instead of all the doctors in the United States! How long will that take?”

Fred: “three months.”

Clinician: “That makes no sense at all. How can that be possible?”

Fred: “Well, making the database smaller does not really help me at all; that is the part of the problem that the computer takes care of, not me.”

Clinician: “C’mon. A smaller database should mean a shorter time; this seems almost obvious to me. You should be able to do it faster than that.”

Fred: “Ok.”

Clinician: “OK, so how long for just a Texas-only database?”

Fred: “three months”

and so on…

Now let’s point out, first of all, that this clinician is not stupid. He was ignorant of how computers work, and more importantly, of the process of programming them. This issue is worth discussing in detail because it really illustrates the thought gap between someone who knows how programming works and someone who does not.

There are many NPI records. Specifically, in a recent (May 2008) release of the database there were 2,557,650 lines in the comma-delimited file, as revealed by “wc -l” (subtracting one to account for the first line in the file, which is full of labels and not a real NPI record).
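For the programmers following along, the same count can be done without leaving PHP. This is just a minimal sketch under my own assumptions: the filename “npidata.csv” is made up for illustration and is not what the real NPI download is called.

<?php
// Hypothetical sketch: count the data rows in the NPI file, skipping the label row.
// The filename "npidata.csv" is an assumption for illustration.
$handle = fopen('npidata.csv', 'r');
$lines = 0;
while (fgets($handle) !== false) {
    $lines = $lines + 1;
}
fclose($handle);
echo ($lines - 1) . " NPI records\n";   // subtract one for the first line of labels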

The changes that I need to make to fix the NPI database are pretty complex (fodder for another post), but for now I will just say that it is a “Complex Reordering of each Record”. Here is how my process for approaching this problem looks:

First I need to work out how to process a single record. So I write a function to do that. For the sake of prose, I will call that function

complexReorderingOfEachRecord( $Record )

I will look at one record in the NPI database and then try to pass that record into the function and see if it does the right thing. complexReorderingOfEachRecord is a long function; it does lots of really complex things. So complex, in fact, that I really cannot keep all of its functionality in my head at one time. I use various ways of abstracting the problem so that I can think about it in useful chunks and figure out whether each chunk is working.

I am going to actually include some pseudocode in this post. Pseudocode is code that is not exact enough for a computer to execute, but is clear enough that a human can read it and understand what it does. Programs are like recipes: they are simply exact instructions that the computer will follow. I will use some basic programming elements in my examples, and a tiny runnable example tying them together follows the list below. (Note to programmers: this blog is also for clinicians… so you can safely skip this…)

  • Sequential execution – Each line of code is read and executed by the computer before moving down to the next line.
  • Variables – A variable is a changing placeholder for information. Each time a program is run, it is possible that the variables will contain different values. I use PHP, so I mark my variables with dollar signs “$”. This ends up working a lot like the “x” in an algebra problem: it can have different values depending…
  • Functions – It is often useful to merge many simple lines of code into a single function. Later you can execute all of the code inside the function by calling the function’s name and passing data to it by putting that data inside the parentheses “()” after the function name. It is basically a way to group useful bits of commands together.
  • If/Else statements – When the computer reaches an IF statement it looks at the contents of the parentheses “()” beside the if statement. If the contents are “true”, then the code inside the bracket symbols “{}” following the if statement is run. If the statement in the parentheses “()” is “false”, then the code in brackets “{}” following the ELSE statement is run instead.
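Here is that promised tiny, runnable PHP illustration of those four elements working together. The names in it (isWeekend and so on) are invented purely for this example and have nothing to do with the NPI code.

<?php
// A variable: its value can be different every time the program runs.
$dayOfWeek = 'Saturday';

// A function: several simple lines of code grouped under one name.
function isWeekend($day) {
    if ($day == 'Saturday' || $day == 'Sunday') {
        return true;    // the IF branch runs when the contents of () are true
    } else {
        return false;   // the ELSE branch runs otherwise
    }
}

// Sequential execution: the computer runs this line, then the next, in order.
if (isWeekend($dayOfWeek)) {
    echo "No clinic today.\n";
} else {
    echo "Clinic is open.\n";
}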

Now, back to the NPI database. The inside of complexReorderingOfEachRecord looks like this:

function complexReorderingOfEachRecord( $Record ){
    reorderingStepOne($Record);
    reorderingStepTwo($Record);
    reorderingStepThree($Record);
}

(Note to Programmers: I am actually using an OOP design for my project, so in reality these would be function calls on objects, but I want to keep this on a procedural level to make my point)

complexReorderingOfEachRecord, reorderingStepOne, reorderingStepTwo and reorderingStepThree are all functions. The contents of reorderingStepOne are not shown, but these are custom functions, meaning that I wrote them. $Record is a variable. There are no IF statements yet.

OK… I write this code, test it, and debug it about 15 times before it works to import one record. But then I run the code on the first 10 records and my system blows up! Some NPI records are not for doctors at all; they are for organizations that provide healthcare. D’oh! I need to run the program differently for people vs. organizations!

So I modify my function to look like this:

function complexReorderingOfEachRecord( $Record ){
    reorderingStepOne($Record);
    reorderingStepTwo($Record);
    $isAPerson = reorderingStepThree($Record);
    if($isAPerson){
        doOneThing($Record);
    }else{
        doAnother($Record);
    }
}

Now the function has one IF statement that looks at the variable $isAPerson and then executes either doOneThing or doAnother based on the contents of $isAPerson.

I have to code, test and debug this another 30 times to get it working. I have to test it more times because the new function calls doOneThing and doAnother do not work without modifications to reorderingStepOne and reorderingStepTwo. I have to switch between thinking about different parts of the problem very quickly to make sure it all works. To start, everything breaks, but as I discover why, by running the program again and again, I make small changes that eventually make the whole process work correctly. The shorthand for this process is code, test, debug, repeat.

As I am working, I start to run the program on the first 100 records. I notice that often the person in the record is not an M.D.; there are also dentists and other clinicians in the database! But my work is focused only on M.D.s. So I modify the code again:

function complexReorderingOfEachRecord( $Record ){
    reorderingStepOne($Record);
    reorderingStepTwo($Record);
    $isAPerson = reorderingStepThree($Record);
    if($isAPerson){
        $isAnMD = doOneThing($Record);
        if($isAnMD){
            processMD($Record);
        }else{
            processNonMD($Record);
        }
    }else{
        doAnother($Record);
    }
}

Now I have a “nested” IF statement, an IF statement that exists inside another IF statement.

As before, all of the other functions must be modified to make my two new functions, processMD and processNonMD, work correctly. This requires 50 repetitions of code, test and debug. Sometimes one code, test and debug cycle takes 30 seconds. Usually it takes about 5 minutes. Sometimes it takes as much as 15 hours.

Now I am testing against 1000 rows of the NPI database, and it works perfectly! I have put in about 40-50 hours (or about 3 months at 5 hours a week).

But now what? I have imported only 1000 rows of the database. Before going further, I should explain how I ran the code on one row, 100 rows and then 1000 rows. To do that, I will introduce the WHILE statement to my simple pseudocode.

$i = 1;
while($i <= 1000){
    $Record = getANewRecord();
    complexReorderingOfEachRecord($Record);
    $i = $i + 1;
}

The “while” is just like an if statement, except that when the contents of the curly brackets “{}” are done, the contents of the parentheses “()” are re-evaluated. If they are still true, then the contents of the “{}” are run again. The $i variable starts at 1, and then grows by one every time the contents of the curly braces “{}” are run.

So how do I import the whole NPI database? I change the code to look like this:

$i = 1;
while($i <= 2557650){
    $Record = getANewRecord();
    complexReorderingOfEachRecord($Record);
    $i = $i + 1;
}

Then I start the program and go to sleep. In the morning, all 2,557,650 records are correctly processed.

Once I had done the work to determine how to change an NPI record, the computer simply repeated that process for as long as I wanted. Computers are so fast now that even very, very complex processes can be repeated very quickly.
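For completeness, here is roughly what the overnight run looks like once the pseudocode’s getANewRecord() is replaced with real file reading. This is a hedged sketch, not my actual import script: the filename is an assumption, and it assumes the complexReorderingOfEachRecord function described above is defined in the same file and accepts an array of CSV fields.

<?php
// Sketch: feed every row of the NPI CSV through the per-record function.
$handle = fopen('npidata.csv', 'r');             // assumed filename
fgetcsv($handle);                                // throw away the first line of labels
while (($Record = fgetcsv($handle)) !== false) {
    complexReorderingOfEachRecord($Record);      // the per-record function from earlier
}
fclose($handle);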

You see, *I* never import any data. The computer does that part. *I*, the programmer, tell it *how* to import that data. Like doctors, when programmers have a simple concept with big implications, we create an important-sounding word for it. The important word for *how* to do an information task is “algorithm”.

If you get an algorithm right, computers do just what you want. If you get the algorithm wrong… computers do other things. If you get the algorithm badly wrong… God help you. This is why computers often seem to have a “mind of their own”: when programmers tell them to do the right thing, they do exactly that. When programmers tell the computer to do the wrong thing… they do exactly that, too.

Any programmer reading this is likely going blind with boredom, and someone who has never programmed is probably asking “Wait… what’s a function?” This is actually a pretty terrible introduction to programming. For something more real, I suggest you start here.

My point is this: computers make some types of tasks really easy. Getting them to do those tasks, without making a serious mistake, is pretty difficult and time-consuming work. If you, as a clinician, do not understand which tasks are hard and which tasks are easy, then it is almost impossible to evaluate the software you are using. I cannot tell you how many times a clinician has requested a “simple change” that has taken me three weeks of programming. On the other hand, I cannot tell you how many times I have seen clinicians (or, more often, clinicians’ staff members) subject themselves to terrible software designs that would be trivial to fix.

To create an algorithm you need to understand two things:

  • What the computer can do
  • What the computer should do

There are some people, like my friend Ignacio Valdes, who have been extensively trained in both Computer Science and Medicine. These people are amazing, because you can watch them switching back and forth between one part of healthcare IT (clinical know-how) and the other (computer science know-how). But even these few gems (rare as hens’ teeth) cannot actually hold the complexity of even a single clinical IT problem in their head at one time. That is just not the way that programming, clinical care, or anything truly complex works! Programmers must ignore parts of a program to improve on any given part. Clinicians must ignore parts of a patient’s body to address a problem with one part. (Most heart surgeons, for instance, remain unconcerned about a flaky skin problem while their patients are in open heart surgery.) Knowing what to ignore, and what deserves attention, is often the true test of expertise.

The only way to deal with healthcare IT is to create teams of people to manage the complexity together. The problem with that is that, for any given problem domain, there is a danger that the communication cost will grow exponentially with the number of participants. It is common for communication costs to totally destroy all productivity in a given group. But at the same time, it is simply not possible for a single person to correctly navigate the complexity of even a simple health IT software project.

The solution to this problem is found in the VA VistA development model. Here are the rules:

  1. You do not work on “the system”, you work on part of the system. VistA is actually hundreds of programs that work together.
  2. Whenever possible you work in pairs. Any more gets unmanageable.
  3. One person must understand everything they need to about the programming of the clinical issue. We can call this person the Programmer. (In the VA this is a Programmer or a CAC)
  4. It helps if the Programmer has a basic healthcare vocabulary.
  5. Another person must understand everything they need to about the clinical problem itself. We can call this person the Clinician.
  6. It helps if the Clinician understands, basically, what is easy, what is hard, what is possible and what is impossible with computers.
  7. You rely on other pairs to address other clinical problems.
  8. You intentionally have redundant “programming pairs” so that you are forced to compete to make better solutions.
  9. When another pair makes a better solution to your problem, you celebrate that and adopt their code as the new starting point.

It’s number 6 that this article is focused on. It would be really helpful if physicians in particular were required to know what a “for loop” is. Just like calculus and physics, they will rarely, if ever, use that information. But for the time being, the fundamental lack of understanding of computer science among clinicians is holding healthcare back. Can you imagine speaking to your doctor if he or she had no idea what the word “pressure” meant in the phrase “blood pressure”? As it stands, most doctors do not really understand the implications of the word “Information” in the phrase “Health Information Systems”.

What scares the hell out of me is not that the clinician above did not know how the programming process worked. Ignorance has a simple cure: learning. What scares me is that he was willing to pressure me to speed up the schedule, even after I explained how things worked. Trying to force a programmer to take shortcuts to make a deadline is a very, very bad idea (see point number 4 here). Doctors, like military officers, often fail to recognize that “being in charge” is contextual. It does not matter if a doctor is right about a clinical issue if they are wrong about a software design issue; the resulting software will fail to perform, despite its clinical correctness. Doctors cannot “be in charge” of software design the way they can be in an operating room or in clinical practice. That does not mean they are not vital; it just means they should not be in charge. The programmers should not be “in charge” either. The “Clinical Pair Programming” that I am describing above is a description of the peer thinking that is required to solve these problems. When someone is “the boss” (meaning they actively back only their own priorities), the system breaks.

The irony is that the few doctors I know who are my peers with regard to computer science education are often more hesitant to challenge my opinions about information systems. Do not get me wrong; they often disagree with me, but no more than any programmer would disagree with any other programmer.

This is why I support an undergraduate computer science prerequisite for medical school.

-FT

DSS frees vxVistA, changes the VistA game

According to this press release on LinuxMedNews, DSS will be releasing vxVistA under the EPL in association with the Open Health Tools group.

This is huge news. DSS has been a proprietary VistA company for years. They have a tremendous amount of respect in the VistA community for technical competence and they have been slowly building important extensions to VistA for a long time.

vxVistA is a culmination of many of those improvements. DSS has many proprietary components, and not all of them will be released with vxVistA. I understand that DSS will soon have information published through its website that clarifies what is being released and what is not. They have already said that the version that will be released will not be CCHIT certified, although the codebase will largely be the same.

Still I have it on good authority that the release will be substantial. This is important because there are many missing components of FOIA VistA that vxVistA could address. It is not unreasonable to speculate that vxVistA could be the most technically advanced variant of VistA available under any license.

If they know what is good for them, Medsphere and ClearHealth will be paying careful attention. The moment this release is realized, it is not unreasonable to say that DSS is the top company for open source VistA. They have more customers than either ClearHealth or Medsphere. They have extensive functionality in vxVistA that is not found in WebVistA (ClearHealth), OpenVistA (Medsphere) or WorldVistA. DSS has a much deeper pool of VistA talent than any other single company that I know of. Do not get me wrong, Medsphere and ClearHealth have very experienced developers, but DSS has focused on MUMPS and VistA for years longer than either company has even been in existence.

It remains to be seen if DSS knows how to be an Open Source company. But they have always been straightforward, honest and open about their opinions and business strategies, and that is probably the most difficult lesson to learn. If they can create a community portal that can compete with Medsphere.org (which is the best community site in the health FOSS industry), there may be no stopping them.

This is a game-changing announcement. At least I will have some fresh material for my next “State of the Source” talk.

The EPL is a solid license, approved by both the FSF and the OSI. That makes it both “free” and “open source”. It is the license of choice for OHT, which will be hosting the code base. It is specifically designed to handle a project with a FOSS core that will not be a threat to proprietary modules. Since DSS will be a hybrid proprietary/FOSS company for the foreseeable future, the reason for choosing the EPL should be obvious. So far, OHT has done little of substance, given the caliber of partners and resources that it has. Many of us have been wondering when OHT would do something significant.

The fact that DSS has chosen to release its code through OHT brings a new relevance to OHT. There should be no confusion however; OHT is relevant because it is working to release DSS code, not the other way around. The code that DSS is releasing has the potential to be vastly more valuable than anything OHT has even attempted.

I want to point out that the devil is in the details. I have been assured by DSS President Mark Byers that the release will be significant, but I am not enough of a VistA expert to determine to what degree this is true, even once DSS clarifies what they are releasing. Because so much is already available in FOIA VistA, it might be difficult for a novice like myself to determine the real value of what DSS is releasing. Thankfully, the Hardhats community will quickly assess the value of the DSS release and let MUMPS outsiders like myself in on the evaluation in short order.

No matter what, this marks the entrance of DSS as a serious FOSS health IT vendor. To which I can only say “Welcome!”

-FT