My next article on HealthVault does not belong here on FredTrotter.com. I am a guest blogger over on Free Software Magazine, and when an issue touches on software freedom, I like to post it there (where it will be read more…)
An article I wrote about vendor lock-in in health software has been published in the Fall ’07 edition of EHR Scope. From the article:
You can fire your paint store, your dentist, your lawyer, your mechanic and even your
doctor. You can fire them for any reason. Yet you cannot fire your proprietary EHR software vendor. Or at least, not without also changing the software that you use. So I guess you could fire your proprietary software vendor, but only in the sense that you could “fire” your mechanic, if it meant you were forced to buy a new car.
I will give EHR Scope the exclusive on the article for a few months, and then I will republish it on GPLMedicine.org.
Dr. Deborah Peel has endorsed Microsoft’s HealthVault PHR. From the Patient Privacy Rights press release:
PatientPrivacyRights.Org founder, Dr. Deborah C. Peel, will stand with Microsoft in Washington, D.C today at a press conference to announce the launch of HealthVault.
Is Dr. Peel qualified to make this recommendation?
Please take a look at Dr. Deborah Peel’s bio; she has an impressive record of medical practice and privacy activism. At least in this bio, she lists no formal computer science training. On the same page we find the bios of the other Patient Privacy Rights board members. Please note especially the bio of Tina Williamson (use this link, as the one on the bio page is broken), who was formerly the Vice President of Marketing for a dot-com company. That work should count as negative experience for determining the validity of marketing claims about source code. Perhaps the computer science expertise upon which Dr. Peel relies is on staff? Nope, no computer-science-trained staff there.
According to the Patient Privacy Rights website, there is no competent electronic security or privacy expert with actual computer science training associated with the Patient Privacy Rights organization. But remember, the Privacy Coalition is much more than just the Patient Privacy Rights group! It is made up of 45 different organizations with interests in patient privacy. Perhaps some of these organizations are informing Deborah Peel’s recommendation of the most abusive, monopolistic software company on the planet as the “leading” caretaker of the American consumer’s healthcare record.
Of the 45 organizations (which are probably great organizations…), only three are technology oriented. One of them is a meta-blog site called NewsBull. For the moment I will assume that blogging expertise does not necessarily translate into informed insights into the complexities of protecting patient information, and I will exclude the possibility that informed recommendations came from NewsBull.
From what I can tell, the most technically impressive person at EPIC is Simon Davies; the rest of the staff appear to be well-meaning policy types. I contacted him to see if he was informing Dr. Peel’s recommendation. His reply:
“I’m still looking into this technology and am hoping to find out more details on the security aspects fairly soon…”
Not exactly a glowing endorsement; instead, it sounds like the typical statement of someone who recognizes the depth of complexity involved. I doubt that Dr. Peel’s technical assessment was informed by Simon Davies.
CPSR, on the other hand, is clearly the home of some very serious tech talent. CPSR was one of the organizations that fought the Clipper chip nonsense. It is currently led by Annalee Newitz and Fyodor Vaskovich, of nmap fame. These people obviously have enough technical muscle to make definitive statements regarding the security of Microsoft. I am still talking to them, but so far it does not seem like they were consulted. Fyodor’s first response to me began:
“Hi Fred. I wouldn’t trust Microsoft with my health records either….”
Somehow, I doubt that Deborah Peel asked the author of nmap what he thought about a PHR from Microsoft before delivering her unqualified recommendations. The fact that she might have had access to that level of expertise and did not insist on consultation is pretty shocking. Of course, it takes some insight to know just how important nmap and Fyodor are in security circles.
Why would someone make a recommendation like that without possessing a tremendous amount of technical savvy, or without consulting someone who had it? Only someone who assumed that this was merely a legal/medical/ethical issue rather than a legal/medical/ethical/technical issue. I have a degree in psychology, and it would be the utmost hubris for me to question a prescription that Dr. Peel gave to one of her patients. It would be totally unethical for me to recommend specific drugs to a mental health patient, despite the fact that I have some informal on-the-job experience with mental health drugs.
The problem with psychoactive drugs, and with medical information privacy, is that the devil is in the details. If I were forced to choose an antidepressant for someone, I would probably choose one that I had worked around a lot, something with a big name that made me and my patients feel more comfortable. Eight times out of ten my prescription might work fine, but I would have no idea why it did not work the other two times, no idea how to determine whether it was working, and no idea what to do to fix it. I have a four-year degree in mental health… what would it take for me to get that last 20% of prescribing potential? I would need two years of undergraduate courses in hard life sciences, followed by four years of medical school and then four years of residency. In short, to move from 80% accuracy in understanding drug impact to something like 98% accuracy takes about a decade (not to mention the time required for board certification). Hardly seems worth it… until you think about how easy it is to kill someone with drugs. Would you want to see someone who was 80% sure that the drugs you were given would not kill you?
Psychiatrists are qualified to make recommendations for mental health drugs, but their medical training does not qualify them to examine source code and determine whether it matches high-level privacy guidelines. Based on my personal experience, it takes at least 7 years to really have a clue about a specific technology area like this. I have been studying this for 13 years now, and I am often humbled when I discover just how little I know about this stuff. Even with over a decade of training, I often feel overwhelmed about what I should do, just concerning the technical issues involved. I would never presume to move outside of my area of expertise to make any clinical decisions.
Dr. Peel should have the same humility when it comes to technical issues. Despite this, Dr. Peel has said “Microsoft is setting an industry standard for privacy.” I am not the only one who thinks that is ridiculous.
But wait: having expertise in medicine does not exclude expertise in computer science generally or electronic privacy specifically. It is possible to have both skill sets in one person.
What happens when a board-certified psychiatrist also has a master’s in Computer Science? What happens when the same person spent a decade studying the way information moves in a computer system AND a decade studying medicine? Then they write posts like this one from Dr. Valdes of LinuxMedNews. Granted, I tend to agree with Dr. Valdes on issues like software freedom and ethics in medical computing. Granted, there are experts at Microsoft who would be able to speak intelligently regarding the technical concerns that I am raising. Many of Microsoft’s experts have experience that is equivalent to Dr. Valdes’ training. But those experts are not speaking for 45 different organizations with legitimate interests in patient privacy, endorsing a company with arguably the worst security and privacy track record ever. In short, Dr. Peel is guilty of hubris. While she may have good intentions and clearly has a sincere desire to protect patient privacy, she appears to be very much past her technical depth.
Of course I could be wrong. I have not seen Dr. Peel’s vita. If Dr. Peel will publish her full resume and it contains solid computer-science-based privacy training and experience that she has left off of her online biography, I will be happy to retract some of these criticisms. The only other thing that could justify Dr. Peel’s endorsement is a full source-code review by a professional electronic privacy expert. If Dr. Peel can show that she had access to such a review, then I would be happy to retract some of these criticisms. Finding this article unchanged and unamended implies that my assumptions about Dr. Peel are, in fact, correct.
If HealthVault were to be successful, it would be good for Microsoft’s bottom line but terrible for our culture. Indeed, Dr. Peel is right about one thing: Microsoft would be “leading” us. Those wearing shackles are often led by others.
Somehow I doubt that Microsoft will respond to my criticisms. Here is what the Free and Open Source community needs to attempt if Microsoft refuses to budge.
We need to write a tool, using the Microsoft HealthVault API to export the data from HealthVault. Preferably this should export to a standard format, but comma delimited is better than nothing.
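The real HealthVault SDK is .NET-based, but the export half of such a tool is mechanically simple. Here is a minimal sketch in Python; the record field names and the `sample` data are illustrative placeholders, not the actual HealthVault schema, and the step of actually fetching records from the API is assumed to have already happened:

```python
import csv

def export_records_csv(records, path):
    """Write a list of PHR record dicts to a comma-delimited file.

    `records` stands in for whatever the HealthVault API would return;
    the field names here are illustrative, not the real schema.
    """
    fieldnames = ["date", "type", "value", "units"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for rec in records:
            # Missing fields become empty cells rather than errors.
            writer.writerow({k: rec.get(k, "") for k in fieldnames})

# Two hypothetical lab results, for illustration only.
sample = [
    {"date": "2007-09-14", "type": "LDL cholesterol", "value": "128",
     "units": "mg/dL"},
    {"date": "2007-10-01", "type": "blood pressure", "value": "120/80",
     "units": "mmHg"},
]
```

Calling `export_records_csv(sample, "healthvault_export.csv")` would produce a file any spreadsheet can open. Crude, but it keeps the data out of the vault.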
If you are an investor who wants to make a business around this tool, please contact me and I will put you in touch with a technical team (not me… I have no interest in this just now). If you are a programmer and you would like to work on this, contact me to be part of the technical team.
When confronted with proprietary software and no alternatives, hack around the problem.
Microsoft often does the wrong thing. But that does not mean they have to. There are three requirements for behaving ethically as a PHR host.
Pretty simple. By releasing the source code, Microsoft would ensure that the software could be run without Microsoft’s help. That means that Microsoft might go away in two hundred years or so, but the HealthVault software would not. By allowing consumers to download their data in a standard format, Microsoft would ensure that the data would not be trapped in a proprietary format.
Recently Microsoft has released two licenses that were approved as open source licenses. These would be ideal for use in this environment.
Will this happen? I think it has a snowball’s chance, but perhaps, if Microsoft does not listen, Google might.
I hope I have made my case that “patient privacy” is complex enough that merely saying “Recognize that patients have the right to medical privacy” is the ethical equivalent of saying “When considering the medical ethical issue of abortion, you must recognize that often women want to get pregnant and have a child.” This is a great example of a statement that sounds good, is completely true, and yet gets us nowhere.
Generally, all of the “Patient Privacy Principles” have this problem. They are great principles, but when you get deeper, to the level that is required when implementing software, it is obvious that they are only useful in spirit. For instance:
“Deny employers access to employees’ medical records before informed consent has been obtained”
Sounds good, right? But does that mean that you will require consent to inform the employer of a worker’s compensation injury status? Doesn’t the employer have the right to know the ongoing status of a workplace injury without repeated informed consent? What, exactly, does informed consent mean? When was the last time you started a new job and did not sign all fifteen of the CYA forms that your employer put in front of you? Does that count as informed consent? Again, the spirit of the law here is obviously good, something like “employers should not be able to discriminate against employees based on health information,” but that does not cut it when making software; we have to determine exactly what the system will do and what it will not do in order to write software.
So are the Privacy Principles flawed? Only when their interpretation is left to a private company with no possible way for patients to review how the code actually works!
Deborah Peel’s endorsement of Microsoft’s HealthVault is the equivalent of a food critic looking at a magazine food ad to make a recommendation for a restaurant. Have you ever looked at those ads when you were really hungry? You see the roasted turkey browned to perfection with a pat of butter slowly melting on it. Looks delicious! It is impossible to make that photograph with food that also tastes good. Food photographers work all day on food photographs; they cannot afford food that changes in appearance over the course of an hour. Can you imagine trying to include a fresh bowl of guacamole in a picture with ten other foods? Long before the picture was ready, the guacamole would look disgusting. That beautiful turkey browned to perfection is actually a frozen turkey that has had the skin “browned” with a paint remover gun. The pat of butter… well, let’s just say it’s not butter. I know this might seem obvious, but in order to judge the quality of food, a food critic must actually taste the dish.
There is no way that Dr. Peel can verify one way or another that HealthVault works the way Microsoft says it does. For instance, it would be trivial for every new piece of data for every patient to be automatically emailed to Bill Gates, or to Fred Trotter. That “email the record” functionality would change nothing in the appearance of the user interface that Dr. Peel evaluated (I assume she looked at the interface). The only way to sort this out is to examine the source code. Any competent computer scientist would acknowledge that this is trivially true: obviously it is not what Microsoft says that matters, nor is it what the software appears to do! What matters is what the software actually does, and the only way to determine this, one way or another, is to read the source code. There is a long and glorious tradition in the software industry of, shall we say, “fudging” what the software actually does for marketing purposes. Is Dr. Peel qualified to examine this gap between source code and marketing material? More on this issue later.
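A deliberately simplified sketch of the point: the two functions below behave identically from the user’s perspective, yet one quietly keeps a second copy of every record. Everything here is made up for illustration, but notice that no amount of clicking around the interface would distinguish them; only reading the source does:

```python
stored = {}
hidden_copies = []  # stands in for "silently emailed to a third party"

def save_record_honest(patient, data):
    """Stores the record and nothing else."""
    stored[patient] = data
    return "Record saved."  # what the user sees

def save_record_backdoored(patient, data):
    """Stores the record AND keeps a hidden copy."""
    stored[patient] = data
    hidden_copies.append((patient, data))  # invisible in the UI
    return "Record saved."  # identical message; the UI cannot tell them apart
```

Both calls return the same confirmation; the backdoor lives only in the code path the user never sees.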
I try to keep up with other coverage of HealthVault, and doing so led me to watch a video produced by mndoci on YouTube. I contacted the author, Deepak Singh, and asked him if he would do a YouTube video in response to my HealthVault articles. I have been thinking about trying a video post but have been afraid to take the plunge; now I can see how videos work in this blog by posting Deepak’s video directly here.
The video is a great help to me. I often feel that my writing is clear and concise. I usually feel this way until I actually talk to a reader, whose questions and criticisms make it clear that I was not so clear after all. Clarity, apparently, requires hindsight, and sometimes humiliation. I will be posting more about the blogosphere’s reaction to my HealthVault articles soon. I will be making some changes to make my articles better afterward, so if you are interested in reading version 0.1 of my thinking on this, you had better get it now.
I have passed my CISSP certification, marking me as an Information Security Expert. I had to pass a complex test and demonstrate that I had three years of full-time security experience to become CISSP certified. I have a four-year degree in Computer Science, and I have been trained in Information Warfare by the United States Air Force at the Air Force Information Warfare Center in San Antonio. I have been trained in physical security by the United States Marine Corps (Hoorah). I have worked in Healthcare IT Security for over 5 years now. Frankly, I find the issue of Health Information Security to be extremely complex. Here are examples of the thorny issues that I face as a professional. (This article was originally written about HealthVault, but it applies so broadly that I removed HealthVault from the title 10-04-11.)
There are various state and national laws that govern the disclosure of HIV or AIDS status. These often mean that portions of medical records must operate under different disclosure rules based on whether they reveal a person’s HIV status. For instance, imagine a physician discussing a patient with AIDS in the notes section for that patient:
“It would be good if Patient X could maintain their exercise regime. However, given his level of immune function, Patient X should stay away from public gymnasiums, which can be unsanitary. I recommend any kind of constant aerobic activity, three times a week for at least 30 minutes each.”
Normally a message like this would be ideal for a PHR to pass to a personal trainer; however, the middle sentence arguably reveals the HIV status of the patient. There is no mention of the terms “HIV” or “AIDS,” so a simple text search of the document could not easily determine that it was associated with HIV status. Yet this piece of patient information should be treated differently. The level of awareness that a PHR would need in order to determine that the note above is related to HIV status is equivalent to human intelligence. The PHR would need to understand English to such a high degree that it would be very close to passing the Turing Test.
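The failure of naive keyword filtering is easy to demonstrate. A simple scan of the note above for the obvious terms (the term list is my own, for illustration) finds nothing, even though the middle sentence gives the status away to any human reader:

```python
note = (
    "It would be good if Patient X could maintain their exercise regime. "
    "However, given his level of immune function, Patient X should stay "
    "away from public gymnasiums, which can be unsanitary. I recommend "
    "any kind of constant aerobic activity, three times a week for at "
    "least 30 minutes each."
)

# An illustrative term list; a real filter would be longer, but the
# problem remains: the note reveals status without using any keyword.
SENSITIVE_TERMS = ["hiv", "aids", "antiretroviral"]

def flags_hiv_status(text):
    """Naive keyword filter, the kind of check a PHR could automate.

    Crude substring matching, which both misses indirect disclosures
    (like the note above) and would false-positive on words like
    'braids'; it is a sketch of the problem, not a solution.
    """
    lower = text.lower()
    return any(term in lower for term in SENSITIVE_TERMS)

print(flags_hiv_status(note))  # False: the filter misses the disclosure
```

The filter catches only explicit mentions; the clinically obvious inference in the middle sentence sails right through.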
The alternative, of course, is to have a person validate every piece of data to see if they reveal HIV status for patients whose PHR records are tagged with HIV positive status. But how many records could such a custodian hope to manage? What level of human-error would be acceptable from such a custodian? Assuming all the records were correctly tagged, how could a human accurately review thousands of medical data points in a given record?
But even those issues ignore the problem of who tags a record with HIV status. Perhaps the patient should be in charge of tagging the account with HIV status, so that automated systems could attempt to handle the rest. But what if a patient wants to withhold that status from the PHR?
What about family planning and pregnancy status? Physicians must be very careful to follow local laws to know to what extent a patient’s parents can be informed about their under-aged daughter’s reproductive condition. However, any other medical condition would obviously be under the purview of the child’s parents or guardians.
There are also cases where patients themselves cannot access their own records. Many psychiatric records must be protected in this manner.
Can a patient remove the information that they have diabetes from their own record? Can they remove their allergy to penicillin? What if they removed it by accident? If patients can accidentally remove data, or can remove a diagnosis or allergy that they do not like, how can a physician or other healthcare provider rely on the contents of the PHR? If a physician knows that they cannot rely on the contents of the PHR, why would they bother to add information themselves? If physicians do not add information to the PHR, why should its contents be trusted? Electronic trust is tricky.
If the patient cannot totally control every aspect of the record, does the patient really own the record? Does the healthcare provider own the record, even though the law often compels providers to produce and distribute a patient’s record?
How much information should payers (insurance companies, etc.) be able to see? Payers certainly must be made aware of the procedures that they will be paying for, but they should not be given so much information that they can discriminate inappropriately.
Let’s sum up. Medical records belong to the patient, except when they don’t. They should be accessible to the patient, except when they shouldn’t be. The records of minors are always open to their guardians, except when they are closed. Segmenting data in order to protect portions of health information is currently an intractable problem of free-text analysis. Tagging patient records with critical information is difficult. Trust is far more complex than it first seems. Finally, patients should be allowed to “control” their own record, except when that control would allow them to do something that would invalidate the record.
This is just a taste of the kinds of problems that I have run across during a career as a health information privacy professional. Notice that a deep understanding of several of these problems requires enough Computer Science know-how to understand why free-text analysis is a difficult problem. The other problems require at least a shallow understanding of medico-legal issues, which seem simple until you consider how you are going to design a PHR or EHR to meet these requirements.
How do you design a PHR so that “control” can be so finely parsed? How do you put the doctor in charge sometimes, the patient in charge other times (except to undo what the doctor did), and the teenage daughter in charge of only one of her medical issues, in such a way that her parents are not informed about that one medical issue but are in charge of everything else?
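To see how quickly “control” stops being a single switch, consider this toy access check. The roles, flags, and rules are all made up for illustration, and even so it has to special-case the minor’s protected item, and it still ignores audit trails, emergencies, court orders, and everything else a real PHR must handle:

```python
def can_view(viewer_role, item, patient_is_minor):
    """Toy access rule set, illustrative only, not a real policy.

    `item` is a dict with a "category" and an optional "protected"
    flag (e.g. a minor's reproductive-health entry shielded from
    guardians under local law).
    """
    if viewer_role == "patient":
        # Even this is wrong in general: some psychiatric records
        # are closed to the patient themselves.
        return True
    if viewer_role == "guardian":
        # Guardians see a minor's record, EXCEPT legally protected items.
        return patient_is_minor and not item.get("protected", False)
    if viewer_role == "employer":
        # Only with informed consent, itself a slippery concept.
        return item.get("consented_to_employer", False)
    return False  # default deny

item_protected = {"category": "reproductive-health", "protected": True}
item_plain = {"category": "immunization"}
```

Four roles, two flags, and the comments are already apologizing for the cases the code gets wrong. That is the shape of the problem.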
In short, “patient privacy” is a very, very complex problem that requires some pretty high-level thinking and is pretty easy to mess up. When you see someone pretending that there is a simple solution to these problems, you should be very suspicious.
Which means that when the commitments that Microsoft has made regarding HealthVault become inconvenient, they will simply change them.
Will the data that you enter into HealthVault be secure? Would my HealthVault data be studied by my insurance company? Would access be limited to those whom I choose? Thank goodness Microsoft’s answer to this question was not simply “Trust me”! Instead it is “Trust my auditor.” This is, apparently, enough to satisfy the Patient Privacy Rights Foundation and the Coalition for Patient Privacy. In a recent Patient Privacy Rights press release, Dr. Deborah Peel is quoted:
“Corporate claims to offer privacy mean nothing unless they are willing to take the same steps Microsoft has taken in building HealthVault,” says Peel. Microsoft has committed to independent third party audits to verify their pledge to protect privacy. “Audits are essential,” says Peel. “Technology companies have got to do better than telling consumers to just ‘trust us.’ Consumers shouldn’t trust anyone but themselves to decide who can see and use their sensitive health information.”
Apparently, this means “trust the auditors.” Of course, we all know how well audits serve to protect the public from unethical corporate behavior. The alternative, which is obviously not being discussed, is the ability to inspect the code for yourself. A top GPL-licensed PHR is IndivoHealth. Let’s do a quick comparison.
Question: PHR Covered by HIPAA?
IndivoHealth: When it is used by a covered entity, yes.
HealthVault: No. Microsoft is not a covered entity.
Question: How is this verifiable? How can you trust that the user really has control? How can you trust that there is no proprietary back door built in to the software?
IndivoHealth: Read the IndivoHealth source code yourself. Hire an auditor of your choice to review the sourcecode. Verify that the auditor you hired is telling you the truth by hiring another auditor, again of your choice. Verify that both auditors you chose and hired are not full of… smoke… by reading the source code yourself.
HealthVault: Trust Microsoft. Trust the auditor that Microsoft pays millions of dollars a year to whistle blow on Microsoft.
I think you get the idea. Nonetheless, Deborah Peel is pretty impressed with HealthVault, from a HealthcareITNews article:
“Their model is that consumers truly should control the information and that’s the direction they want to take as a company,” said Peel. “We really think that because they are the industry leader that the rest of industry will have to follow or be left behind.”
“Microsoft has agreed to adhere to all of the privacy principles that the coalition developed in 2007, ” Peel said. “Not only adhere to them in terms of contracts but to be audited on these principles. We think they’re setting a new amazingly high bar and frankly, we think what they’re doing is really the best practice that the entire industry needs to follow.”
Well, this is good! Microsoft has agreed to follow the privacy principles! Principles are good. What are the principles? We find them at the Patient Privacy Rights website; let’s go through them one at a time.
Actually, Microsoft deserves credit for generally working hard in this area. Give credit where it is due. However, no commitment is made in the privacy document regarding accessibility.
Microsoft’s policy indicates that users can quit the system and Microsoft will then delete the data after 90 days. So much for seven generations of custodianship, but I guess deleting meets the “opt-out” requirement.
No commitment to segmenting information in the privacy statement.
In short, Microsoft’s commitment to follow the policy is a commitment that they have NOT made in their policy. Microsoft is basically saying “Trust us, this is secure and private.” Everything about Microsoft’s history indicates that such commitments to privacy and security are bogus. What exactly made Dr. Peel conclude they are the market leader in health record security and privacy? What made her conclude that Microsoft has “committed” to third-party audits?
Perhaps Dr. Peel is discussing a subject as though she were an expert, when in fact she has had little relevant training on the subject.
Microsoft, of all the companies that might consider creating a PHR, is especially problematic. Microsoft has a long history of standards abuse.
Let’s consider a parallel issue to the “personal health record”: personal email. I use Gmail and have used Yahoo Mail in the past, but for this example let’s pretend that I used Hotmail, a Microsoft product. Hotmail users trust Microsoft to protect and store potentially sensitive personal email data. I currently have at least a gigabyte of personal messages in my mail account. At this rate I will have at least 100 gigs of messages, assuming I die of old age. What if my wife (who will likely outlive me, given that she is younger and averse to simple sugars, cholesterol, sodium, and saturated fats in a way that I am not) wanted to ensure that my emails survived Microsoft’s eventual demise? After inheriting my password, my wife could download everything via Hotmail’s POP3 service. She could download my emails into a proprietary package like Outlook or, better yet, a GPL email application. She could transfer them to another service that she trusted, like Gmail.
By leveraging Hotmail’s POP3 interface, she would be taking responsibility for the continued storage of my emails, and ensuring that my great-grandkids could know for certain exactly how many times the Nigerians contacted me with a special offer because they trust me so much.
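The email half of this comparison really is that mechanical. Here is a sketch using Python’s standard `poplib`-style interface; the server name and credentials in the comment are placeholders, and `conn` can be anything that answers `list()` and `retr()`, which is exactly what makes the protocol a strong, exportable standard:

```python
def download_all(conn):
    """Pull every message off a POP3-style connection as raw bytes.

    `conn` is any object with poplib's list()/retr() interface, so the
    same code works against a real POP3 server or a test stub.
    """
    # list() returns (response, [b"msgnum octets", ...], total_octets)
    count = len(conn.list()[1])
    messages = []
    for i in range(1, count + 1):  # POP3 message numbers start at 1
        _resp, lines, _octets = conn.retr(i)
        messages.append(b"\r\n".join(lines))
    return messages

# Against a real server it would start like this (placeholders only):
# import poplib
# conn = poplib.POP3_SSL("pop3.example.com")
# conn.user("fred@example.com")
# conn.pass_("inherited-password")
# saved = download_all(conn)
```

Twenty lines, because POP3 is a strong standard: every server and every client agrees on what `list()` and `retr()` mean.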
But what about my “HealthVault” account? How could my wife ensure that my great-grandkids know about last month’s cholesterol results? Knowing my cholesterol history is going to be vastly more relevant to them than the time and place of last week’s LAN party. To make this possible, Microsoft would have to export the data in a format vastly more complex than POP3, perhaps something like the Continuity of Care Record (CCR).
The problem with formats like CCR is that they are not strong standards and suffer greatly from the dialect problem. The dialect problem arises when the “implementations” of a “standard” differ enough to make them incompatible. When people from Australia, England, and the US speak English to each other, they typically understand each other, because the dialects of English are close enough to be compatible. Alternatively, French, Spanish, and Italian technically could be considered “dialects” of Latin, yet obviously speakers of these languages cannot, without translation, understand each other completely. CCR and the other electronic medical languages are currently suffering from the dialect problem. Show me two HL7 implementations and I will show you two systems that cannot communicate without “translation” work. (BTW, the FOSS way to solve this problem is with Mirth, which is an HL7 router.) Protocols that suffer from the dialect problem so much that they typically cannot communicate effectively without extensive configuration can be thought of as “weak standards.” Protocols that are not impacted negatively by the dialect problem are “strong standards” (good examples of strong standards are the TCP/IP protocol and the FAX protocols).
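The dialect problem is easy to see in miniature. The two fragments below are made-up, CCR-flavored encodings of the same blood-pressure reading (neither is real CCR or HL7 syntax); both are perfectly valid XML, yet a consumer written for one dialect finds nothing at all in the other:

```python
import xml.etree.ElementTree as ET

# Two hypothetical "dialects" encoding the same observation.
dialect_a = "<Result><Test>BP</Test><Value>120/80</Value></Result>"
dialect_b = "<Observation code='BP' value='120/80'/>"

def read_bp_dialect_a(doc):
    """A consumer written against dialect A's element layout."""
    root = ET.fromstring(doc)
    if root.findtext("Test") == "BP":
        return root.findtext("Value")
    return None  # the layout it expects simply is not there

print(read_bp_dialect_a(dialect_a))  # 120/80
print(read_bp_dialect_a(dialect_b))  # None: same fact, unreadable without translation
```

Both documents parse cleanly; the failure is not syntax but vocabulary, which is exactly why every pair of “standard” implementations ends up needing translation work.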
Not only has Microsoft not committed to implementing, and not abusing, a standard import and export format, it is making moves to create a proprietary standard in place of CCR. HealthVault already has an MSDN page where you can learn how to “interface” with the Microsoft PHR. Microsoft intends to create a community of “Programs” within HealthVault by which third parties can further process medical data. Those programs will interface with HealthVault in a fashion that will create a “de facto standard” that Microsoft will abuse. (For more on this, research the history of the Microsoft Word format, which is a good example of a Microsoft format that became a de facto standard which Microsoft subsequently abused.)