Speaking at OSCON


I am honored to announce that I will be speaking at OSCON 2010 on the healthcare track.

This talk is my “Health of the Source” talk. My intention in this talk is to cover both the “spirit” of Open Source in Healthcare as well as the “letter” of what is specifically going on right now. If you are unfamiliar with Open Source in Healthcare, and you can only attend one single talk on the subject, you should attend my talk: you will learn the most about the widest range of topics. If you want to attend more than one talk, you should probably read Andy Oram’s summary of the OSCON healthcare talks.

As always, I am asking my readers and followers to tweet me about things that are happening in Open Source Healthcare that I should mention. If you could not get a spot at the conference but you are doing something wonderful, let me know and I will try and mention your work to the right people.


Science Conjecture in Science Policy

Science-based policy is pretty difficult to do, but I support the notion.

But when is science conjecture taking the place of science? I know that this guy is not being quoted directly, but paraphrased by a reporter. That has been done to me enough times that I know it strips what a person actually says of any nuance… but still, something about the following statement makes me very uncomfortable.

The response of the doctor in charge was paraphrased as:

He acknowledged that it was impossible to specify just how many cancers were environmentally caused, because not enough research had been done, but he said he was confident that when the research was done, it would confirm the panel’s assertion that the problem had been grossly underestimated.

Does this scare the heebie-jeebies out of anyone else?

A person in the role of scientist making policy recommendations based on what science will “soon” find? What does that even mean? Don’t get me wrong, I think we should totally be on carcinogen patrol, but when do good intentions begin to betray the scientific process?


Doctor impatient with NHIN security dithering

Recently, the members of the NHIN Direct Security and Trust working group (Brian and I, at least) were criticized for dithering:

> Fred and Brian,  I love what you’re trying to do for FOSS and data
> interchangeability.  You’re dedicated, smart, expert programmers and
> systems experts. You want to be socially responsible, and protect
> people from HIPAA violations.  I think your design principles as
> expressed at http://nhindirect.org/Design+Principles  are exactly
> right, but I think NHIN has been led astray as to what its mission
> ought to be, and this matters very much to me as an eventual user of
> health data interchange.

leading to…

> Okay, let’s get this straight. Under the HIPAA law,  Covered Entities,
> such as doctors offices, are responsible for protecting PHI, not the
> manufacturer of the fax machine the office uses to send information to
> another office, or the folks who write specifications for fax
> transmission.  Does this still make sense?
> OK.  Your job at NHIN is to design a fax machine.  “Just the fax,
> ma’am.”  Covered Entities, such as doctors offices and hospitals, are
> responsible for what information will be sent, and are responsible for
> protecting it (at BOTH ends).  Another government agency (hhs.gov/
> ocr)  is responsible for enforcing HIPAA, not NHIN.    You just have
> to provide the highway to send the information, and make sure it gets
> to an actual covered entity. That’s ALL you have to do!
> I think you’re suffering from ‘Mission Creep’. You’re trying to make
> sure no one ever violates HIPAA with their fax machine.  Dang near
> impossible.  Thankfully, not your job.

Yes, it makes perfect sense. But I am not arguing that with you.

Doctors trust fax machines to provide private point-to-point communications.
They do not even wonder whether fax machines “actually” provide private point-to-point communications.
In fact, fax connections can be sniffed, which is why tele-walls (firewalls for telephones) now exist.

And even that does not prevent the “wrong number” problem.

But even if we accept your premise, that we need to make NHIN Direct “just like a fax machine”: someone, somewhere, dithered about how faxing should be implemented. They went in circles with different ideas. They negotiated between different vendors of fax technology and came up with a standard that could be implemented by a hardware company to build a fax machine. Here are some of the results of that.

To make something “simple like a fax machine” we need to have a set of instructions that people like EPIC, Medsphere, Google, Microsoft and Indivo can all implement identically. Those instructions might feel like they are far too complex for the simple task at hand. However, we are already following Einstein’s edict: “as simple as possible, but no simpler”. I think you will find that almost every discussion on the forums -has- a point, a valid concern that really should be addressed.

The main difference between a fax machine and NHIN Direct is that, for whatever reason, doctors do not distrust the security implications of a fax.
They -already have- faxes, which they already trust. When NHIN Direct and CONNECT come out, the “trust models” and “trust implementations” will be subject to intense scrutiny by security researchers who are decidedly better at thinking about these issues than I am. If a strong majority of those security researchers are not generally satisfied with our decisions, then they could act to cause doctors not to trust our design. This is a potential problem even if we design it right.

You are absolutely right that the law makes doctors responsible for HIPAA violations, but you forget the extension: they may also be held responsible for trusting the wrong technology. If a doctor downloaded Kazaa and accidentally published patient files on the Internet, do you think that a judge would be patient with his argument that “it was a flaw in the technology!”? It is the doctor’s responsibility to be reasonably sure that a technology does not do something that would be illegal. Currently doctors trust the “known-good” fax network. But will they trust NHIN Direct? The answer is “of course they will”. But only because the security community will look at what we are proposing and say “well, that looks reasonable”, and they will only do that because we are getting our proverbial shit proverbially together.

I would suggest that you please be patient with our security sausage-making, I believe you will like the final result.


NHIN-Direct leans towards HealthQuilt Security Model

My last big project, before the skunkworks project I am doing now for Cautious Patient, was as the Chief Architect at HealthQuilt.

HealthQuilt was a prototype project for a Health Information Exchange in Houston, TX, hosted by UT SHIS (which just won the status of Regional Extension Center under ARRA). My boss at HealthQuilt, project leader Dr. Kim Dunn, will be the director of the new REC. Dr. Dunn built a community of the local “interested parties” in Health IT during the HealthQuilt project. Ultimately, politics (remember, this was pre-incentive) prevented any data being transferred between organizations using our model before funding ran out. But now the community that Dr. Dunn built will be vital in her new role as REC director.

My job at HealthQuilt was to choose which technologies we would use to prototype the HIE. HealthQuilt was committed to Open Source from the beginning, so I was an obvious choice to handle the detailed technology choices. We spent a lot of time with Houston health information security professionals, along with the crews at Mirth and MOSS, designing a workable trust model.

I am happy to say that just as Dr. Dunn will be able to build on the HealthQuilt community for the Houston-based REC, the NHIN-Direct project may decide to reuse some of the concepts (and perhaps some of the code) that we developed at HealthQuilt. Here are some of the basic, core concepts of the HealthQuilt model.

  • The Health Information Network should be built using point-to-point ssl VPN or https connections.
  • The trust model should use X.509 PKI Certificates.
  • It should use many (rather than one) Certificate Authorities (CA).
  • Both ends of a given VPN tunnel or https connection must have certificates. This is very different from the PKI model used on the Internet, where servers are generally certified but clients are generally anonymous.
  • This “encrypted Health Internet” should run entirely underneath any healthcare protocols. That means trust is handled first at the network level. If some actor in the network is no longer trusted, CRL or blacklists will prevent -any- communication with them, rather than relying solely on relatively young implementations of health protocols to provide adequate encryption.
  • The “relatively young implementations of health protocols” should still implement encryption, as though there were no network security in place. (This one is actually Sean Nolan’s idea… more later.)
  • This allows for a natural layering of security, which makes security wonks like me feel all warm and fuzzy.
  • The “core NHIN organization” should have a list of “typically trusted CAs” called “anchor CAs” that it recommends to all network participants. This is similar to the way that normal Internet CAs are “suggested” to you by automatic inclusion in your browser of choice.
  • Individual network participants can also choose to trust other CAs, like those provided by a hospital they are affiliated with.
  • The job of the CAs will be (roughly) to make sure that anyone they issue a certificate to is, in fact, a particular clinical entity (doctor, clinic, hospital, etc.) who has the right to receive and/or send PHI.
  • This means that members of the network do not need to sort out trust relationships on a peer-to-peer basis. They can assume that everyone who the CA trusts is trustworthy, and they can automatically share data with them when a clinical need justifies it.
  • If, for some reason, two members of the network do not trust each other, they can still use a blacklist to prevent communication.
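In TLS terms, the “both ends must have certificates” requirement above is what is usually called mutual authentication. As a rough sketch of how a node might configure that with Python’s standard ssl module (the PEM file names are hypothetical placeholders, not anything from the actual HealthQuilt or NHIN Direct work):

```python
import ssl

# Sketch: a server-side TLS context for a HealthQuilt-style node.
# The PEM file names below are invented placeholders for this example.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.verify_mode = ssl.CERT_REQUIRED  # the connecting client must ALSO present a certificate

# In a real deployment you would load this node's identity and the
# list of "anchor CAs" that the network recommends:
# context.load_cert_chain("node_cert.pem", "node_key.pem")
# context.load_verify_locations("anchor_cas.pem")
```

With verify_mode set to CERT_REQUIRED, the handshake fails unless the peer presents a certificate chaining up to a loaded CA, which is exactly the inversion of the usual web model where clients stay anonymous.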

The browser providers determine what bar CAs must reach for automatic inclusion in each browser. That “automatic inclusion” is the foundation of the trust model of the Internet. That is what gets you a secure connection to Amazon to buy a book, even though you do not think too much about “how do I know that is really Amazon?”

So why did HealthQuilt come up with this model? We knew that each institution in the Houston area would need to make trust decisions on its own. They would never tolerate us saying “Here are the ten other hospital systems in the network, take it or leave it”. The answer would always be “leave it”. Some of our constituents were very concerned that a blanket trust policy would mean that they would trust organizations that they do not have a real-world trust relationship with, e.g. Planned Parenthood clinics vs. Catholic charity clinics. In order to participate in the network, they needed to have fine-grained control over the trust decisions. Most participants planned to trust everyone else in the network, but they did not want to trust that the network itself would remove bad actors in the future. The combination of blacklists (which is how a node can cut off communication with another node) and CRLs (which is how a CA says “I do not love you anymore”) provides both network-level and node-level control over dealing with bad actors.
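The interplay between CRLs and blacklists can be sketched as the check a node would run before opening a connection. Everything here (the peer identifiers, the set-based lookups) is an illustrative assumption, not actual HealthQuilt or NHIN Direct code:

```python
# Illustrative sketch: combining CA-level revocation with node-level blacklists.
# The identifiers and data structures are invented for this example.
revoked_by_ca = {"clinic-042"}        # CRL: the CA says "I do not love you anymore"
local_blacklist = {"hospital-007"}    # this node's own refusal list

def may_communicate(peer_id: str) -> bool:
    """A peer is reachable only if no CA has revoked it AND this node has not blacklisted it."""
    return peer_id not in revoked_by_ca and peer_id not in local_blacklist

assert may_communicate("clinic-001")          # trusted by default
assert not may_communicate("clinic-042")      # cut off network-wide by a CA
assert not may_communicate("hospital-007")    # cut off by this node alone
```

The point of the two layers is that either a CA or an individual node can stop communication on its own, without waiting for the other.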

Most importantly, the network-level security model is technologically identical to the current Internet trust technology. The policies and the trust decisions are very different, but the technology is basically the same: ssl + x509. That is good, because it means that the trust issues are not entirely handled at a level where new protocols are being developed. If you rely only on message security, and you discover that one implementation was “leaking” information by encrypting slightly less than what was intended in a given “message”, that could be a real problem. SSL VPNs and https, using x509 PKI, are a known quantity (not the same thing as “safe”, mind you). Using that “underneath” the new stuff that the NHIN Direct and CONNECT projects will develop will help ensure that implementation or design mistakes do not automatically imply a broad attack vector.

Moreover, when advances (e.g. quantum computing) make the current Internet trust model obsolete, it will have to be replaced with something. Whatever it is replaced with will have to play at least some of the same roles as the current x509/ssl infrastructure. That means the whole Internet will work with us to upgrade the network trust model.

I should point out that the NHIN Direct team was certainly not sitting idle until I showed up and told them what I had done with HealthQuilt. I think that something very like the basic HealthQuilt trust model would have been embraced in any case. I am just happy to be able to present a package of thought-out ideas to the NHIN Direct team. Ironically, even before I made my suggestions, Sean Nolan of Microsoft HealthVault was already arguing against the “single CA, top-down trust model”. Once you make the concession that you are not going to attempt to do trust entirely using CAs and proxy CAs (the top-down model), then most of the HealthQuilt trust model is just incremental, obvious choices.

I will be calling this trust model the “HealthQuilt Trust Model”. This is despite the fact that the NHIN Direct trust model seems like it could justifiably also be called the “Microsoft Model”. Microsoft has some really talented technical people and it makes me feel good to see them reaching the same conclusions that I do, in parallel. Still, I seriously doubt that the new NHIN Direct trust model will ever be called “The Microsoft Model”, since the name does not actually describe the model at all. This is good, because the phrase “Microsoft Model” generally makes the hair on the back of my neck stand up and do the polka. It should also be noted that my original ideas on the HealthQuilt model were pretty useless without adjustment from Ignacio Valdes of LinuxMedNews, my brother Rick, David Whitten of WorldVistA and the VA, several of the Mirth engineers, and Alesha Adamson of MOSS, all of whom gave me valuable feedback. It is also important to note that the model has improved substantially in response to the excellent thinking done by Brian Behlendorf and the rest of the NHIN Direct Security and Trust Workgroup.

Still, I will be using the name because it is truly indicative of how the trust model should work. It should be like a quilt: legitimately different ideas about trust and security implemented by different organizations, but despite those differences, still connected. The Internet has shown time and time again that uniformity is not the only way to cooperate.

You can follow what’s happening on the NHIN Direct Security and Trust Workgroup forum. If you are truly a glutton for reading, you can read my posts and the responses.


OSCON includes Healthcare

Update: I am speaking at the 2010 OSCON.

I am happy to spread the news that OSCON, probably the most important Open Source conference in the country, will have a healthcare track in 2010.

Andy Oram has explained the decision to add a healthcare track to OSCON.

They have asked me to help promote the conference and I want to be sure that our community offers up the very best in talks and technical content. This is a really good way to access the developer mind-share in the broader Open Source community and we need to jump all over it.

I can honestly say that this conference will be vastly more important than the little shindig I am putting on in Houston. If you had to attend just one of the two, then you should probably go to OSCON… God bless you if you can go to both!!

With a healthcare track at OSCON, and a healthcare track at SCALE (DOHCS) we are finally moving towards general Open Source healthcare meetups.

I should take a moment to promote OpenMRS, CONNECT and WorldVistA all of which have great project-focused meetings already.

Happy days!


Technology vs Policy for privacy

I have long been an advocate of reasonable and measured reaction to “privacy scare tactics”. I have argued, for instance, that it was a good thing that HIPAA does not cover PHR systems. But that does not mean that I do not think privacy is important. In fact there has been something nagging at the back of my brain for several years now:

We typically use technology to provide security, but we typically use policy to protect privacy.

That is deeply problematic. To see why, we should carefully consider privacy and security in a simple context.

Imagine that you have purchased a safe-deposit box at a bank. Safe-deposit boxes are a great example, because if the bank accidentally gives away the money from your account, it’s not really a big deal… after all, it’s a bank; if they lose your money, they probably have some lying around somewhere else. The “lost money” could be restored to you from bank profits. But if you have baby pictures, your medical records, your will, Document X, or your family jewels (not a metaphor) in your box, the bank cannot replace them.

Now, “protecting the contents” really subdivides into two issues: security and privacy.

Security is the degree of protection against danger, loss, and criminals… according to Wikipedia… at least for today 😉

In the bank context, “security” means that no one can easily break into the bank vault, grab your “Document X” from your box, and run off. Of course there are many movies about how bank security can be overrun, but, thankfully, it is not a typical event for a bank to have its safe-deposit boxes robbed. Banks use both technology (the vault, the alarm system, the video system) and policy (only the bank manager can open the vault, the vault can only be opened during the day, etc.) to protect the security of the bank boxes. Note that security is a spectrum from more secure to less secure; there is no such thing as just “secure”. Security can almost always be improved, but eventually improving security begins to interfere with the usefulness of something. If the bank vault could only be opened once a year, it would be more secure, but not very useful.

In the world of health information, “security” means that it is difficult for a hacker to break in and get access to someone’s private health record. That is an oversimplification, but a useful one for this discussion.

Privacy is something else. The source for all knowledge (at least until someone changes it) says: Privacy is the ability of an individual or group to seclude themselves or information about themselves and thereby reveal themselves selectively.

In the bank example, privacy is all about who the bank lets into the deposit box. For instance, if they decide suddenly that all blood cousins will now get automatic access to each other’s safe-deposit boxes, that would be a violation of your privacy. If your cousin could get access to your Document X because the bank let him, then the problem is not that the vault walls are not thick enough or that the combination is not hard enough. The problem is that the bank changed its basic relationship with you in terms of privacy. This is the first of several principles: Security technology does not necessarily help with privacy.

Banks do not typically change the deal like that; sadly, modern websites regularly do. Two recent examples are the launch of Google Buzz and the Facebook privacy problems, including the most recent expansion. The problem with modern information companies is that they usually take the corporate upper hand with their privacy policies. Sometimes this is really egregious, such as the original HealthVault privacy policy, which gave Microsoft permission to host your health data in China. But almost always it includes the key phrase “We reserve the right to change this privacy policy at any time”. If we sum up the problems with using privacy policies to protect privacy, we find another principle: Privacy policies often provide little privacy as written, often give permission to change the policy in the future (which negates any notion of commitment), and even then policies are often ignored or misinterpreted. Privacy policies should not be the only thing protecting our privacy.

Perhaps you want your spouse to be able to get access to your box. Perhaps you even want your cousin to have access. But the idea that the bank can just “change the deal” from what you had explicitly allowed is pretty strange. Thankfully banks rarely do that. But you could use technology to ensure that your privacy was protected, even if the bank arbitrarily changed its policy.

If you wanted, you could keep a safe inside your safe-deposit box, and keep your Document X inside the safe. Then you could give your combination to your spouse, so he or she could also open the safe (as long as you had also told the bank to give your spouse access to the safe-deposit box). Even if the bank decided that your cousin should have access to your box too, it would not matter, since your cousin could not open your safe. (We will pretend, for the sake of this analogy, that explosives and other means of circumventing the security of the safe would not work and that the safe could not be removed from the safe-deposit box.) Our next principle: It is possible to use technology to help protect privacy.

Note that the safe also protects your Document X against access by bank employees. This is important, because it does not matter what the privacy policy is if it is not enforced by the employees of an institution, or if the policy is arbitrarily changed in a way that you feel violates your privacy. It is also important to note that government employees could not open the safe either. Of course we all know that governments are not inclined to violate the privacy of their citizens, but if the government did get a look inside the safe-deposit box, all they would see would be the safe. Here we have another principle: Privacy technologies should prevent unwanted access from insiders and authority figures as well as from “bad” guys.

Which brings me to my point. We need to have more technologies available for protecting health information privacy. We have lots of technologies available for protecting security, but these do not protect privacy at all. These technologies, if they are going to work, need to give people the power to ensure that their health information is protected even from the people who provide a given technology service.

So far we have several principles:

  1. Security technology does not necessarily help with privacy.
  2. Privacy policies should not be the only thing protecting our privacy.
  3. It is possible to use technology to help protect privacy.
  4. Privacy technologies should prevent unwanted access from insiders and authority figures as well as from “bad” guys.

To these I would like to add some implied corollaries:

Encryption, by itself, is not a privacy technology. It is a security technology; it is only a privacy technology depending on who has the keys. This was the problem with the infamous Clipper chip. The first issue with information system privacy is, and will always be, “who has the keys?”. So when a service says in response to a privacy challenge “oh, don’t worry, it’s all encrypted”, that’s like saying: “You are afraid that I will fax Document X to your mother? Don’t worry, I keep Document X at home in my safe!” If you are afraid that I am going to fax Document X to someone, the fact that it is now in my safe should not make you feel more comfortable. I can still get into my safe and I can still fax Document X wherever I want. Document X being in a safe is only helpful if you trust everyone with the keys to the safe.
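The “who has the keys?” point can be made concrete with a deliberately toy example: if only you hold the key, the service that stores the ciphertext cannot read your Document X, no matter what its privacy policy says. (This XOR one-time pad is for illustrating key custody only; it is not production cryptography, and the names here are invented.)

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy one-time-pad XOR; for illustrating key custody, not for real use.
    return bytes(d ^ k for d, k in zip(data, key))

document = b"Document X"
key = secrets.token_bytes(len(document))   # the key never leaves the owner

ciphertext = xor_bytes(document, key)      # this is all the service ever stores

# An insider or authority figure holding only the ciphertext learns nothing
# useful; the owner, holding the key, recovers the document exactly:
assert xor_bytes(ciphertext, key) == document
```

The privacy property lives entirely in who holds `key`. Hand the service a copy of the key and the exact same encryption becomes a security technology only, not a privacy one.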

The other thing is that you simply cannot trust proprietary software to provide privacy. If you cannot read the source code to see what the software does, then it does not matter what kinds of privacy features it advertises or even who has the keys. At any time the developers could change the code and make it violate your privacy. To further extend and abuse our example, it does not help you to have a safe inside the safe-deposit box if the bank can change the safe’s combination whenever it wants, without you being the wiser. Only Open Source systems can be trusted to provide privacy features. I do not argue that this is enough, but it is an important starting point.

I have been working on a relatively complicated system for achieving these goals. My initial application is blessedly simple, and so I have been able to avoid some of the issues that commonly make these kinds of systems intractable. My service is still in skunkworks mode and will continue to be very hush-hush until I make the beta launch. But I will be announcing my designs soon, and I have already submitted those designs to some Internet security professionals whom I respect, to make sure I am moving in a sane direction. This kind of thing is really technically difficult, so I am certainly not promising that I can deliver this kind of technology, but perhaps I can deliver something working that would give other people a place to start. As you might expect, the source code for this project will eventually be made available under a FOSS license.

I do not want to get into the design details yet (so please don’t ask) or even talk about my new application (which is just entering closed beta) but I wanted to start talking about why these issues are important. Please feel free to comment on the features that privacy protecting technology should have…


Welcome NPR

Apparently, NPR mentioned my blog. They contacted me for a comment on Medsphere’s ROI calculator.

So if you are from NPR I thought I should actually give you the entire email I sent to Mr. Weaver:

My impression of Medsphere’s prices is that they are a “premium” VistA vendor. If you look you can find cheaper VistA support. However, there are only a handful of VistA support companies with real experience and all of them charge like Medsphere does. On the other hand, even “premium” support for VistA is cheaper than proprietary solutions by 30% to 80%. This is simple evidence of the economic pressures of Open Source. Medsphere can and does charge as much as it can. But it is competing against other groups using the same software that it developed, and against hospitals doing the work internally. There is objective value in going with a vendor who has succeeded with an Open Source project in the past. The balance between open competition (which lowers the price) and a proven track record (which raises it) means that Medsphere’s price is objectively fair, and subject to market forces both before and after the installation of the software.

This is the real story about Medsphere and other Open Source Health IT vendors. It’s not that they are inexpensive, although they are much cheaper. It’s that their prices actually reflect the market’s response to their ongoing performance. That not only makes things cheaper now, but ensures long-term cost savings and improvements in performance.

Proprietary lock-in is a form of monopoly where the vendor has an economic incentive to provide poor support. Therefore, at least the support services from proprietary vendors are not subject to market forces. That is why they are so poor, and why Senator Grassley seems to think they warrant investigation.

Medsphere’s Open Price-Stimulus Calculator

Medsphere marketing has been making some pretty savvy moves lately. First the bus, and now an open ROI calculator.

This is really cool. Medsphere is being transparent about its prices, and showing you exactly how much it is putting its neck on the line for its customers. This tool will let any Hospital IT person calculate just what they can expect from the stimulus package, and how much Medsphere will cost.

I do not know of anyone else, proprietary or otherwise, that has anything like this.

It would be neat to run through several hundred test cases to see where Medsphere’s “sweet spot” is… I am betting small-to-medium hospitals.


Announcing the Patient Participation Conference

With pleasure I announce the 2010 Patient Participation Conference. This is a project of my employer, Cautious Patient.

The subject of the conference is simple “How to be an e-patient or e-patient caregiver”. This should be broad enough subject matter to cover any current discussion in the e-patient community. The basic principles of the conference will be:

  • An unconference: content generated as much by attendees as conference organizers http://en.wikipedia.org/wiki/Unconference
  • Low cost: preferably under $200 for the average attendee; we want real patients and caregivers to be able to attend out-of-pocket. That means we also need to be low-glitz; you might get a t-shirt as a handout, but no iPod.
  • Scholarships for some anchor and keynote speakers: we would like to find sponsors for (and are willing to sponsor ourselves) attendees who would be important speakers and leaders in the e-patient community. We cannot afford to pay everyone, but we want to do what we can.
  • Everything video recorded and published for free in near real time. This makes the conference for everyone, not just those who can afford to pay to attend.
  • Small, short and intimate.

While most of the conference will be unplanned and free-form, generated in real time by people attending the conference, I want to have a few pre-planned anchor talks and keynotes that will serve to ensure that attendees are guaranteed to get at least a few things that are useful to them. With that in mind, I would like to find basically two types of talks: either leaders in the community who are also strong speakers/teachers, or talks with amazing content delivered by people who are just OK speakers. I would rather hear a great story than a great speaker. Here are my personal biases as far as e-patient speaker selection goes:

  • The User over the CEO – I would rather hear about a user who used HealthVault to improve his health than the guy who runs HealthVault at Microsoft.
  • Tactics over Strategy – I would rather hear “How to be a great diabetes mom” than “how to improve diabetes compliance in the U.S.”
  • Technology, but in the back seat – I think “How to really use Google Health” and “How to use gmail to manage your health” would be equally relevant. I would -love- a talk entitled “My health notebook kicks your PHR’s a**”
  • Hard stuff over the easy stuff – losing weight and lowering cholesterol is a good goal for almost half the country, but it is not the same thing as living with Diabetes or a failed kidney, and that is not the same thing as cancer or anything with “terminal” in the name. This also applies to controversial issues.
  • Evidence over Anecdote, but both are best – I want to make sure “the science is on our side”. I think talks that emphasize how patients can embrace and understand research are critical. I think a “tale of two e-patients” is a great example of a talk that nails this issue.

What other “biases” should an e-patient conference have in its anchor speaker selection? What specific speakers would you like to hear? If I am going to invite the “CEO” what companies are really enabling the e-patient movement?

Please help me out by contacting me with the answers, but you can also just leave a comment!

The Super Silo

I wanted to republish here a post that I made to emrupdate.

This was in response to a person who wants to help a doctor with what amounts to a super-simple note-keeping system for her nursing home patients. I should note that I think the mashup of technologies that this person is suggesting is pretty clever. This is obviously a bright guy who is responding to a doctor’s sense of the record-keeping problem. This is the dangerous first step of healthcare informatics, where a geek feels like he understands the requirements but really has only scratched the surface. What feels like a good idea to this pair will create problems later that neither of them could effectively handle.

Here is the summary of the original proposal:

Essentially I am using File Explorer (and optionally, Launchy) as the EMR system, and Notepad (with its .LOG function) as the note-taking application. Since she is the only physician in her practice and nobody else needs access to these files, I see no need to complicate things further. This system should also be extremely small in size, and trivial to back-up.

Is this sufficient, and is it HIPAA-compliant?

My response (which references more of the message)

> Is this sufficient?

No. For the love of all that is good. No.

What you are talking about is a system that is designed to make -your- doctor more effective, at the express cost of the ability to have information sent to or from other healthcare providers. The notion that this is acceptable is tragic. Your doctor is simply considering rendering her notes into a format that would have maximal use for herself, at minimal cost, both in dollars and effort. Because she has so few patients, there will never be enough financial incentive to justify porting this data into a form that can be merged with other patient data. In short, you are considering creating a kind of super-silo. What happens to this data when your doctor retires? What happens to this data if you are not around to support the solution? Neither of you has thought this through, or even taken the time to consider that you do not even have the needed tools to properly think it through. I would encourage you to play a long game of “what-if” sprinkled with some “what-happens-when?” so that you can fully appreciate that you have not thought this through.

In twenty years people will talk about projects like this in the same terms as people do now about doctors participating in blood-letting and refusing to wash their hands because they were “gentlemen”. Your doctor is considering excusing herself from the responsibility to participate in Science and you are enabling her. The informatics community does not have that much figured out (sadly) but we know enough to say that this plan is a bad one.

Do not feel too bad. Yours is just an especially bad version of the bad decision that most doctors are making to use any proprietary EHR system. The problems that you will face will happen to most doctors in America, just a little later.

Please reconsider going down this path. I would recommend ClearHealth or OpenMRS as other simple and cheap EHR systems that you can use in your environment. But really you should look into VistA, either Astronaut or OpenVistA (both of which have good licenses and OK installers), because the VA runs many nursing homes and VistA has a reputation for handling that use-case well.