Hacking on the Wikipedia APIs for Health Tech

Recently I wrote about my work hacking on the PubMed API, which I hope is helpful to people. Now I will cover some of the revelations I have had working with DocGraph on the Wikipedia APIs.

This article will presume some knowledge of the basic structure of open medical data sets, but we have recently released a pretty good tool for browsing the relationships between the various data sets: DocGraph Linea (that project was specifically backed by Merck, both financially and with coding resources, and they deserve a ton of credit for it working as smoothly as it does).

Ok, here are some basics to remember when hacking on the Wikipedia APIs if you are doing so from a clinical angle. Some of this will apply to Wikipedia hacking in general, but much of it is specifically geared towards understanding the considerable clinical content that Wikipedia and its sister projects possess.

First, there is a whole group of editors that might be interested in collaborating with you at Wikiproject Medicine. (There is also a Wikiproject Anatomy, which ends up being strongly linked to clinical topics for obvious reasons.) In general you should think of a Wikiproject as a group of editors with a shared interest in a topic, who collectively adopt a group of articles on that topic. Lots of behind-the-scenes things on Wikipedia take place on talk pages, and the connection between Wikiprojects and specific wiki articles is one of them. You can see the connection between Wikiproject Medicine and the Diabetes article, for instance, on the Diabetes Talk page.
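If you want to detect that connection programmatically, the standard MediaWiki API can list the templates on a talk page, and the Wikiproject banners show up there. Here is a minimal sketch in Python (it assumes the banner template names contain “WikiProject”, which is the usual convention but worth verifying):

    import requests

    # List the templates used on Talk:Diabetes and keep the ones that
    # look like Wikiproject banners.
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "parse",
            "page": "Talk:Diabetes",
            "prop": "templates",
            "format": "json",
        },
    )
    for template in resp.json()["parse"]["templates"]:
        if "WikiProject" in template["*"]:
            print(template["*"])  # e.g. Template:WikiProject Medicine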

Wikiproject Medicine maintains an internal work list that is the best place to understand the fundamental quality levels of all of the articles that it oversees. You can see the summary of this report embedded in the project page and also here. There is a quasi-API for this data using the quality search page; for instance, you can get articles that are listed as “C quality” but are also “High Priority”.

Once a clinical article on Wikipedia has reached a state where the Wikipedian community (Wikipedian is the nickname for Wikipedia contributors and editors) regards it as either a “good” article or a “featured” article, it can generally be considered to be highly reliable. To prove this, several prominent healthcare Wikipedians converted the “dengue fever” Wikipedia article into a proper medical review article, and then got that article published in a peer-reviewed journal.

All of which is to say: the relative importance and quality of Wikipedia articles is something that is mostly known and can be accessed programmatically if needed. For now “programmatically” means parsing the HTML results of the quality search engine above. I have a request in for a “get json” flag… which I am sure will be added “real soon now”.
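Until that flag shows up, screen-scraping is the way to go. Here is a rough sketch; the tool URL and the parameter names are my best guesses from poking at the quality search page, so treat them as assumptions and copy the real ones out of your browser after running a search:

    import requests
    from bs4 import BeautifulSoup

    # Scrape the quality search results for C-Class, High-importance
    # articles in Wikiproject Medicine. URL and parameters are guesses;
    # lift the real ones from the quality search page itself.
    URL = "https://tools.wmflabs.org/enwp10/cgi-bin/list2.fcgi"
    params = {
        "run": "yes",
        "projecta": "Medicine",
        "quality": "C-Class",
        "importance": "High-Class",
    }
    html = requests.get(URL, params=params).text
    soup = BeautifulSoup(html, "html.parser")
    # Article titles come back as links in the results table.
    for link in soup.select("table a"):
        print(link.get_text())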

The next thing I wish I had understood about Wikipedia articles is the degree to which they have been pre-datamined. Most of the data linking for Wikipedia articles started life as “infoboxes”, which are typically found at the top right of clinically relevant articles. They look like this:

(Screenshots: the ethanol infobox and the diabetes infobox.)

The Diabetes infobox contains links to ICD9 and ICD10 as well as MeSH. Others will have links to SNOMED or CPT as appropriate. The ethanol article has tons of stuff in it, but for now we can focus just on the ATC code entry. Not only does the infobox have the codes, it correctly links to the relevant page on the WHO website.

An infobox is a template on Wikipedia, which means it is a special kind of markup that can be found inside the wikitext for a given article. Later we will show how we can download the wikitext. But for now, I want to assure you that the right way to access this data is through Wikidata; parsing wikitext is not something you need to do in order to get at it. (This sentence would have saved me about a month of development time, if I had been able to read it.)

For instance, here is how we get the ATC codes for ethanol via the Wikidata API:
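(A minimal sketch; Q153 is the Wikidata item for ethanol and P267 is the ATC-code property. Both identifiers are easy to double-check on wikidata.org, and you should.)

    import requests

    # Ask Wikidata for the ATC code claims on the ethanol item (Q153).
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbgetclaims",
            "entity": "Q153",    # ethanol
            "property": "P267",  # ATC code
            "format": "json",
        },
    )
    for claim in resp.json()["claims"].get("P267", []):
        print(claim["mainsnak"]["datavalue"]["value"])  # e.g. D08AX08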

Most of this data mining is found in the Wikidata project. Let’s have a brief 10,000-foot tour of the resources that it offers. First, there are several clinically related data points that it tracks. This includes ATC codes, which are the WHO-maintained codes for medications. (It should be noted that recent versions of RxNorm can link ATC codes to NDC codes, which are maintained by the US FDA, and are being newly exposed by the Open FDA API project.)

I pulled all of the tweets I made from Wikimania about this into a Storify.

Other things you will want to do, in no particular order:

Once you have wikitext, it’s pretty easy to mine it for PMIDs so that you can use the PubMed API. I used regular expressions to do this, which does occasionally miss some PMIDs. I think there is an API way to do this perfectly but I cannot remember what it is…
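For what it is worth, here is roughly what that looks like, using the standard MediaWiki API to pull the wikitext first. The pmid regex only matches the pmid= parameter of citation templates, which is exactly why it misses PMIDs written any other way:

    import re
    import requests

    # Pull the raw wikitext for an article via the MediaWiki API.
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "revisions",
            "rvprop": "content",
            "rvslots": "main",
            "titles": "Diabetes",
            "format": "json",
            "formatversion": "2",
        },
    )
    page = resp.json()["query"]["pages"][0]
    wikitext = page["revisions"][0]["slots"]["main"]["content"]

    # Mine the citation templates for PMIDs.
    pmids = sorted(set(re.findall(r"pmid\s*=\s*(\d+)", wikitext, re.IGNORECASE)))
    print(pmids)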

That’s a pretty good start. Let me know if you have any questions. I will likely expand on this article when I am not sleepy…


Susannah Fox is the new CTO of HHS

I am not actually sure that anybody reads this blog. I suppose they must; that is really the magic of RSS… letting you know when your friends blog… right?

Still, if you wanted to actually follow what I am doing you should probably be reading the DocGraph Blog, or the CareSet Blog, or the OpenSourceHealth News Page. I just don’t tend this blog the way I should… But when I have news that I think is deeply interesting and I cannot find a category for it anywhere, this is the perfect place.

Discovering that my dear friend Susannah Fox is now the CTO of HHS is just that kind of category-defying important news.

I cannot think of a better person for a role like this. I mean that literally. I tried. I failed.

Susannah is a geek, enough of one that she cannot easily be snowed by other technologists (I should be explicit: I am talking about government contractors), even where she does not have direct technical expertise. Not every geek I know can do that. On the other hand she is not so much of a geek that people find her arrogant, or incomprehensible. (I have problems with both.) Most of that job will not be directly geeky stuff. That sounds contradictory, but HHS is just too large to have any one technical strategy. There is no way that a reasonable technical vision for the FDA would apply at CMS or at NLM. Being the CTO at HHS is about seeing the connections, understanding how things fit together, and then having a vision that is not actually technology-centric, but patient-centric.

As technology-savvy as Susannah is, it is her capacity to hold a huge vision, and to keep patients at the center of that vision, that makes her so deeply qualified for this job. Nobody asked me who the next CTO of the government was going to be, and frankly I was a little worried about who would be next. Bryan Sivak and Todd Park (her predecessors in this role) leave pretty damn big shoes to fill. Someone in the White House/HHS is casting a net wide enough to know who the really transformational thinkers in our industry are.

I have to admit, I am still reeling from this news. I am usually pretty good at figuring out what the implications of something are… at calculating where the hockey puck is going… But I really have no idea what the implications of this are going to be… other than to say:

This is going to matter, in precisely the way that most things in healthcare reform don’t.


Does Epic resist or support interoperability? Hell if I know.

I just realized that my somewhat infamous question at the ONC annual meeting is recorded on video!

The background on my question, which made me very popular at the meeting afterwards, was that I had heard that Epic hired a lobbyist to convince Congress that it is an interoperable company.

That lobbyist and others at Epic have been heard saying things like “Interoperability is Epic’s strength” and “Epic is the most open system I know”, etc. This makes me think “what planet am I on?”

I have actually heard of hospitals being told “no at any price” by Epic, and I have never heard that regarding another vendor… although there are lots of rumors like that about Epic, I would prefer to be fair. How would I know if Judy et al. had really turned a corner on interoperability? Epic has been a faithful participant in the Direct Project, which is the only direct (see what I did there?) experience I have had with them.

But I want data… and here is what happened when I asked for it at the annual ONC meeting. Click through to see the video… it auto-plays, so I did not want it on my main site.


Libel and Discourse in the Digital Age

Libel, like copyright, is one of the central legal frameworks for governing online activities. It sets the bounds for what can and cannot be said about people in the new media era. Like copyright law, libel law is a legal framework designed in a pre-digital era, and it is somewhat strained in this new digital media age.

I write this with some trepidation. This blog post touches on gender issues on Twitter, and that is a heated and, at least on Twitter, mostly broken discussion.

Any discussion on sensitive issues online, especially on Twitter, can devolve into a core of reasonable people trying to have reasonable discussions, surrounded by a much larger group of people (or at least a large number of Twitter accounts) who say completely ridiculous and incendiary things. Jimmy Wales’s response to a GamerGate email regarding the policies for Wikipedia’s GamerGate article is required reading here.

The wonderful thing about Twitter is that it facilitates open-to-the-public conversations about anything at all. These conversations usually involve only people who are genuinely interested in a particular topic, which means that the Twitter conversation is usually representative of the topic as it exists in the real world. But a given hashtag is useful and productive only to the degree that people all agree on what the topic under discussion is, and also fundamentally agree on the appropriate means to have that conversation.

Many times, both of those constraints fail, and this is when you get a single hashtag, like #GamerGate, being used in multiple conflicting ways. One way is to have a discussion about “Ethics in Game Journalism”, the second is to launch a coordinated attack on female game journalists and other feminists, and the third is the feminist community using the hashtag to refer to those attacks. In the sense that all three things are happening at once using the same hashtag on Twitter, all of them are equally valid and equally invalid uses of the hashtag. But all three discussions regularly lament that the other two discussions are trying to “redefine” what “GamerGate” “is”. The letter from Jimmy Wales helped me realize that there is an inherent difference between a movement and a hashtag. Before reading that I was deeply confused about how to think about “GamerGate”, a word whose definition changes dramatically depending on who you listen to.

Generally I think the power of Twitter lies in its capacity to have public conversations that serve only as “signals”, with larger discussions on topics left to forums that are better suited for comprehensive discussion, like blogs. Twitter is ill-designed to handle contentious issues, in part because tweets are necessarily atomic in nature. It is too easy to take a single tweet, and then lambast that single tweet as the entirety of someone’s position. This is not strictly a straw-man tactic, because it actually takes a little work to get Twitter to contextualize any discussion. Twitter presents tweets as atoms, and not as threads on a topic.

On Twitter, there is a lot of “What I said was X, but what I meant was Y”. As an informaticist, I would call Twitter something like a “Communication Platform with Low Semantic Fidelity”. Which is not an insult to the platform… this is both a “feature” and a “bug”, depending.

So it is with great irony that I found myself having a discussion about libel, on the very platform that makes the issues around libel so complex.

For those who have been living under a rock: on Twitter lately there has been a drama unfolding regarding the role Vivek Wadhwa plays regarding women’s gender issues in technology. The play continues to unfold, but here is the outline of the opening scenes:

  • Wadhwa makes a statement onstage referring to “floozies”. (I have not been able to find video of this.)
  • Mary Trigani writes a post entitled Captains and Floozies criticizing Wadhwa’s comment.
  • Wadhwa comments on the blog post.
  • Trigani reposts Wadhwa’s comment with the title Vivek Wadhwa explains.
  • Amelia Greenhall writes QUIET, LADIES. @WADHWA IS SPEAKING NOW, which sternly criticized the role that Wadhwa plays and how he plays it.
  • This blog post caused enough of a stir that Amelia was subsequently interviewed by Meredith Haggerty on NPR’s TLDR series. This podcast (which is still available here) is essentially a retelling of Amelia’s blog post in audio form, with no dissenting voice from Wadhwa or elsewhere.
  • Wadhwa reacts on Twitter, saying that the podcast is “libel and slander”.
  • NPR removes the podcast from their page, although as per normal it will be remembered forever on the Internet somewhere…
  • Twitter presumes that the post was removed because of Wadhwa’s “threats”.
  • Wadhwa insists that he wants the post itself restored, and merely wants to have the opportunity to blog in the same space.
  • Apparently, his interactions with NPR make him believe that he will be able to publish a retort on the NPR site.
  • For whatever reason, Wadhwa’s defense is not published on NPR, so he manages to have it published on VentureBeat instead.

Which brings us to the present. (I will try and update the timeline if things change.)

Obviously it’s interesting stuff in its own right, but I am mostly interested in the issues around libel. Specifically, I am interested to understand whether it was in fact libel, and I am interested to know whether Wadhwa labeling it as libel was a “threat”.

Let’s deal with the first issue. Was it libel? Well, it turns out that this is not a clear legal question, especially for Wadhwa. You see, in the US, the legal test for libel typically has three components (IANAL and I am quoting Wikipedia, so you would be foolish to take this as legal advice):

  • statement was false,
  • caused harm,
  • and was made without adequate research into the truthfulness of the statement.

(from Wikipedia)

Unless you are a public figure, in which case libel also requires “proving malice”. Again quoting Wikipedia:

For a celebrity or a public official, the person must prove the first three steps and that the statement was made with the intent to do harm or with reckless disregard for the truth, which is usually specifically referred to as “proving malice”

Listening to the podcast, there are several statements that stand out specifically as false:

  • “Has he really been this spokesman for women in tech for years while he is believing that women can’t be nerds because thats because thats like super misogynist”
  • (on the website for Wadhwa’s book) “I can get to a photo grid of women it doesn’t list their names…” (Wadhwa points out that such a list lives here)
  • “Wadhwa was barely acknowledging the women he was working with”
  • Wadhwa was “Gaslighting minimizing marginalizing people who disagree with (him)”
  • The story implies that Wadhwa titled his response to Trigani’s post “Vivek Wadhwa explains” when in fact Trigani had chosen that title.
  • The DMs that Wadhwa sent were “non-consensual”.

If you listen to the podcast, and you read Wadhwa’s rebuttal, it is pretty easy to understand how Wadhwa, at least, would view these statements as false, harmful, and inadequately researched. Wadhwa is painted as a pretender, a person who is taking on the role of “real” expertise. The implication here is that there is something essential to the experience of being a woman in technology that is required to acquire legitimate expertise about women in tech. At the same time, there is the implication that the experiences of women in tech are so vastly distinct that no one person could possibly make useful statements about them as a class.

This is an interesting issue with civil rights in general. There was a time when the racial civil rights movement chose to exclude white supporters from leadership positions. This makes sense when you are dealing with a pervasive attitude that presumes that a particular class is fundamentally incapable of self-representation and/or leadership.

But there is a difference between requesting that someone bow out of a leadership role, in order to further the aims of a social justice movement, and attacking the qualifications and intentions of that same person in the most public way possible (i.e. on the radio and Internet at the same time).

On the other hand, if there is a person claiming leadership in a social movement, while saying or doing things that hamper that movement, it is a natural reaction (after back-channel discussions have failed) to out that person in public.

So which is it? Is this a necessary exposure in defense of an important social movement, or is it petty dramatics within a movement that should be above such theatrics?

What the hell do I know? Although I am at least a little interested in anything that qualifies as social justice, I am hardly an expert in this area. I don’t know any of the parties involved and I have no familiarity with the book and research body in question.

What I am interested in is how libel works in the Internet age. What is fascinating specifically to me is the degree to which Wadhwa is being criticized for calling the podcast “libel”. It is fairly clear to me that IF the contents of the podcast are misrepresentations, then Wadhwa is just being publicly attacked. The whole podcast was about him, not about “men speaking for women generally”, but just about him and what he was specifically doing wrong. The podcast implied that he was a lecherous, misogynist, manipulative plagiarist. IF those things are not true about him… then does he have the right to say “This thing that is happening is slander and libel” without inappropriately using that language to squelch criticism?

According to Wadhwa, he has made no legal threat; he did not ask for the article to be taken down and, in fact, he has asked for it to be restored. That does not generally sound like the acts of someone who is seeking to muzzle critics.

What I find fascinating, is the apparent consensus that merely labeling the podcast as libel IS itself a legal threat.

Here are some reactions from two lawyers who work for the EFF (an organization I admire and donate to)

And then here..

Lastly, here is one specific quote from someone who has been on the other side of this.

However, I did find this gem from @DanielleMorrill, who was obviously researching this earlier than I was. She found places where Wikipedia policies cover these issues…

For my part, I cannot help but empathize with Wadhwa. My family has had some pretty nasty run-ins with people willing to publish false things about us. If someone in traditional media decides to smear you, it’s nearly impossible to undo the damage. At least Wadhwa had the opportunity to tell his side of the story, an opportunity my family never got.

Apparently, the consensus on the Internet, and what I would advise people to do on this, is to just say, “Hey, that stuff you wrote/said about me is not true, and it’s pretty hurtful and you really should have researched that better” instead of actually coming out and saying “That’s libel”. It’s pretty clear that Wadhwa tried to take a position of “You have libeled me, but I am not planning on suing you, I just want to achieve balance”, and from what I can tell, that has blown up in his face, and possibly made things worse for him.

I have certainly learned several things from this incident that will make me slightly less likely to put my foot in my mouth. Specifically:

  • I should be careful not to speak over other people on panels. I am frequently the most vocal and opinionated person on a panel. Audiences frequently ask questions specifically to/for me, and moderators will frequently favor me because I can be entertaining. But apparently when Wadhwa does the same thing he is perceived as “taking the air out of the room”, etc. I would never want my fellow panelists to feel they don’t have a voice b/c of me. I will have to work on that.
  • Apparently there is a whole contingent of women who have been so completely harassed by DMs that saying something like “a non-consensual DM” actually makes sense to them. I had no idea that Twitter harassment had reached that level for women. I mean, you have to be brave or crazy to let someone know you are a female user on Reddit (which is sad), but I thought Twitter was a “safe place”. I was wrong.
  • When someone labels themselves as rude or mean, or otherwise thinks that it is a good idea to explicitly admit in their Twitter profile that they are difficult to deal with… believe them. They are not kidding. It’s one of those things. Look up the Far Side cartoon that says: How Nature says “Do not touch”. It’s just like that.
  • I need to be careful to explicitly not speak “for” the people I personally advocate for (which in my case is usually patients) b/c this can be disempowering. I need to find ways to advocate without being presumptuous, which is harder than it sounds.

Thanks for reading, I may well update this post based on reactions from Twitter and elsewhere.








Hacking on the PubMed API

The PubMed API is pretty convoluted. Every time I try to use it, I have to relearn it from scratch.

Generally, I want to get JSON data about an article using its PubMed ID, and I want to do searches programmatically… These are pretty basic and pretty common goals…

The PubMed API is an old-school RESTish API that has hundreds of different purposes and options. Technically the PubMed API is called the Entrez database, and instructions for using it begin and end with the Entrez Programming Utilities Help document. Here are the things you probably really wanted to know…

How to search for articles using the PubMed API

To search PubMed you need to use the eSearch API.

Here is the example they give…

https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&term=science[journal]+AND+breast+cancer+AND+2008[pdat]

The first thing we want to do is not have this thing return XML, but JSON instead. We do that by adding a GET variable called retmode=json. The new url:

https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&retmode=json&term=science[journal]+AND+breast+cancer+AND+2008[pdat]

Ahh… that’s better… Now let’s get more ids in each batch of the results…

https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&retmode=json&retmax=1000&term=science[journal]+AND+breast+cancer+AND+2008[pdat]
Breaking this down…

https://eutils.ncbi.nlm.nih.gov/entrez/eutils/

is kind of the entry point for the whole system…

esearch.fcgi

is the actual function that you will be using…

db=pubmed

This tells the API that you want to search pubmed.

retmode=json

Next you want to set the “return mode” so that JSON is returned.

retmax=1000

And then you want to add the retmax to get at least 1000 results at a time… The documentation says that you can get 100,000, but I get a 404 if I go over 1000.

The term argument

term=YOUR+SEARCH+TERMS+HERE

db and term are separated by the classic GET variable layout (starts with a ? and is then separated by a &). If that sounds strange to you, I suggest you learn a little more about how GET variables work in practice.

Now, about the “YOUR SEARCH TERMS HERE” part: that is a url_encoded string of arguments making up the search string for PubMed. URL encoding is (something of a trivialized explanation) how you make sure that there are no spaces or other strangeness in a URL. Here is a handy way to get data into and out of URL encoding if you do not know what that is…
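If you would rather do the encoding in code than in a web form, Python’s standard library handles it (a tiny sketch; the search expression is just an example):

    from urllib.parse import quote_plus, unquote_plus

    # URL-encode a PubMed search expression so it is safe to use as
    # the term= GET variable, then decode it again as a sanity check.
    term = '"breast cancer"[All Fields] AND Review[ptyp]'
    encoded = quote_plus(term)
    print(encoded)  # %22breast+cancer%22%5BAll+Fields%5D+AND+Review%5Bptyp%5D
    print(unquote_plus(encoded) == term)  # True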

Thankfully the search terms are well defined, but not anywhere near the documentation for the API. The simplest way to understand the very advanced search functionality on PubMed is to use the PubMed advanced query builder, or you can do a simple search and then pay close attention to the box labeled “search details” on the right sidebar. For instance, I did a simple search for “Breast Cancer” and then enabled filters for Article Type of Review Articles and Journal Categories of “Core Clinical Journals”… which results in a search text that looks like this:

("breast neoplasms"[MeSH Terms] OR ("breast"[All Fields] AND "neoplasms"[All Fields]) OR "breast neoplasms"[All Fields] OR ("breast"[All Fields] AND "cancer"[All Fields]) OR "breast cancer"[All Fields]) AND (Review[ptyp] AND jsubsetaim[text])

Let’s break that apart into a readable syntax display…

("breast neoplasms"[MeSH Terms] 
  OR ("breast"[All Fields] 
        AND "neoplasms"[All Fields]) 
  OR "breast neoplasms"[All Fields] 
  OR ("breast"[All Fields] 
        AND "cancer"[All Fields]) 
  OR "breast cancer"[All Fields]) 
AND (Review[ptyp] 
  AND jsubsetaim[text])

How did I get this from such a simple search? PubMed is using MeSH terms to map my search to what I “really wanted”. MeSH stands for “Medical Subject Headings”; it is an ontology built specifically to make this task easier.

After that, it just tacked on the filter constraints that I manually set.

Now all I have to do is use my handy URL encoder to get the following url-encoded version of my search parameters:

%28%22breast+neoplasms%22%5BMeSH+Terms%5D+OR+%28%22breast%22%5BAll+Fields%5D+AND+%22neoplasms%22%5BAll+Fields%5D%29+OR+%22breast+neoplasms%22%5BAll+Fields%5D+OR+%28%22breast%22%5BAll+Fields%5D+AND+%22cancer%22%5BAll+Fields%5D%29+OR+%22breast+cancer%22%5BAll+Fields%5D%29+AND+%28Review%5Bptyp%5D+AND+jsubsetaim%5Btext%5D%29

Let’s put the retmode=json ahead of the term= so that we can easily just paste this onto the back of the url… we get the following result:

https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&retmode=json&retmax=1000&term=%28%22breast+neoplasms%22%5BMeSH+Terms%5D+OR+%28%22breast%22%5BAll+Fields%5D+AND+%22neoplasms%22%5BAll+Fields%5D%29+OR+%22breast+neoplasms%22%5BAll+Fields%5D+OR+%28%22breast%22%5BAll+Fields%5D+AND+%22cancer%22%5BAll+Fields%5D%29+OR+%22breast+cancer%22%5BAll+Fields%5D%29+AND+%28Review%5Bptyp%5D+AND+jsubsetaim%5Btext%5D%29

I wish that my css could handle these really long links better… but oh well. I know it looks silly, let’s move on.

To save you (well, mostly me at some future date) the trouble of cutting and pasting, here is the trunk of the url that is just missing the url-encoded search term:

https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&retmode=json&retmax=1000&term=
At the time of this writing, the PubMed GUI returns 2622 results for this search, and so does the API call… which is consistent, and a good check to indicate that I am on the right track. Very satisfying.

The JSON that I get back has a section that looks like this:

    "esearchresult": {
        "count": "2622",
        "retmax": "20",
        "retstart": "0",
        "idlist": [

With this result it is easy to see why you want to set retmax… getting 20 at a time is pretty slow… But how do you page through the results to get the next 1000 results? Add the retstart variable:

https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&retmode=json&retmax=1000&retstart=1000&term=YOUR+SEARCH+TERMS+HERE

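Put together, paging through a whole result set looks something like this (a sketch; note that requests will url-encode the term for you):

    import requests

    # Walk through all of the eSearch results, 1000 PMIDs at a time.
    BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    term = '"breast cancer"[All Fields] AND Review[ptyp] AND jsubsetaim[text]'

    pmids = []
    retstart = 0
    while True:
        result = requests.get(BASE, params={
            "db": "pubmed",
            "retmode": "json",
            "retmax": 1000,
            "retstart": retstart,
            "term": term,
        }).json()["esearchresult"]
        batch = result["idlist"]
        if not batch:
            break
        pmids.extend(batch)
        retstart += len(batch)
        if retstart >= int(result["count"]):
            break
    print(len(pmids), "PMIDs fetched")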
If you need more help, here is the link to the full documentation for eSearch API again…


How to download data about specific articles using the PubMed API

There are two stages to downloading the specific articles. First, to get article meta-data you want to use the eSummary API… using the ids from the idlist json element above… you can call it like this:

https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi?db=pubmed&retmode=json&id=YOUR_PMID_HERE
This will return a lovely json summary of the article. Technically, you can get more than one id at a time, by separating them with commas like so…

https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi?db=pubmed&retmode=json&id=YOUR_PMID_HERE,ANOTHER_PMID_HERE
This summary is great, but it will not get the abstracts, if and when they are available. (It will tell you if there is an abstract available, however…) In order to get the abstracts you need to use the eFetch API:

https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&rettype=abstract&retmode=text&id=YOUR_PMID_HERE
Unlike the other APIs, there is no json retmode; the default is XML, but you can get plaintext using retmode=text. So if you want structured data here, you must use XML. Why? Because. That’s why. This API will take a comma-separated id list too, but I cannot see how to separate the plaintext results easily, so if you are using the plaintext (which is fine for my current purposes) it is better to call it a single id at a time.
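Here is a sketch that stitches the two calls together for a single PMID. The "title" field comes out of the eSummary JSON; everything else is just the URLs from above:

    import requests

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

    def fetch_article(pmid):
        # eSummary: JSON metadata about the article.
        summary = requests.get(EUTILS + "/esummary.fcgi", params={
            "db": "pubmed",
            "retmode": "json",
            "id": pmid,
        }).json()["result"][str(pmid)]
        # eFetch: the abstract as plaintext (no JSON retmode here).
        abstract = requests.get(EUTILS + "/efetch.fcgi", params={
            "db": "pubmed",
            "rettype": "abstract",
            "retmode": "text",
            "id": pmid,
        }).text
        return summary["title"], abstract

    # usage: title, abstract = fetch_article(some_pmid_from_the_idlist)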




Healthcare IT reading list

My Programmable Self Behavior Change Reading list has been one of my most popular posts.

I still think any Health IT expert should be well-versed in behavior change science, since so many healthcare issues boil down to behavior change problems… either for patients or providers or both.

But the other day, I was having drinks during HIMSS with Keith Toussaint, Matt Burton (both Health IT rock stars at Mayo Clinic) and Sulie Anna Tay (a rising star at Cisco). Soon talk turned to “have you read this, have you read that” (you know how those conversations usually play out) and we started creating a “Required Reading List for Health IT”. I forgot about it until today, when I needed to find some references in one of the books… and realized I had left the project undone. So here is my required reading list for Health IT and healthcare reform, in no particular order:


I think it’s important to listen to Alex Drane on end-of-life issues, and to read Atul Gawande’s Letting Go on the same topics.




I hate to humblebrag so I will just be plain: David Uhlman and I wrote what is probably the most popular book on Health IT, Hacking Healthcare.


EHR Vulnerability Reporting issues

For those who actually bother to read to the bottom of my bio: I was in Internet security before going into Health IT. I spoke at DefCon and everything.

During my career in Health IT I have had to report a security vulnerability to an EHR developer once, and it was such a painful process that I basically just gave up.

My poor friend Josh Mandel and his group at SMART found an XSLT vulnerability in an HL7-provided file that is a part of essentially every modern EHR system (the standard, if not the file itself, is mandated by Meaningful Use).

They have had a horrible time trying to get the attention of the major EHR vendors, with less than 10% paying any real attention.

I am saddened, but not at all surprised. I will write more later…


How to submit prior art on the Medicity Direct Patent

Recently Medicity has tried to patent the concept of a HISP. Please join me in submitting prior art to prevent this undermining of everything that the Direct Project stands for.

Groklaw shows the way

Here is a specific page that I had some trouble with and the right answers for it…

The Patent number in question is 61/443,549

The confirmation number is: 9529

The first named inventor is: Alok Mathur, Alpharetta, GA (US)

The date of filing is: 02-16-2011

The strange string they are going to ask you about in the middle appears to be: 201161443549

Read Groklaw carefully, because the form is massively, unnecessarily complex. (Because that is how the government rolls.)

The following prior art exists for their claims:

* Conversion of encrypted payload content, perhaps CCDs, into HL7 2.3 transactions sent to an EMR over TCP/IP ports

Of course, converting to HL7 v2 is not actually a good idea in 99% of cases, but it was always part of the original vision of the Direct Project.



Just search this page for HL7 to find Arien discussing the need for HL7 2.x interoperability

or you can read about how we dithered over 2.x versions of HL7


I will not dignify the fact that they note that this happens over TCP/IP with a comment. Really, you are going to use the network’s protocol for that?

Are you sure you do not want to use UDP? Or perhaps IPX? Wow. Innovation. <- (sarcasm, see note for USPTO employees below)

* Conversion of encrypted payload content, perhaps HL7 v3, into rendered PDF formatted reports that are automatically printed to a local printer device per the provider’s workflow preferences.

* Construct of a standard Direct compliant outbound S/MIME transaction with CCD attachments by converting native PDF or HL7 v2.x formats and contents.

This of course makes Direct look like a fax machine, which is a -huge- step backwards. But generally, converting between different healthcare interop standards has been done for quite some time.

A main goal of the HISP is to convert between various formats. We spent months talking about the particularly difficult conversions, e.g. Direct to IHE.


As far as I know the central advantage of a PDF is that you can print with it.

Here is Keith Boone discussing the issue on his blog




This is 2 months too late, but it shows that we were including printers as possible devices to send Direct messages to.

The second set of claims is particularly annoying to me, because I got involved in Direct specifically because it was not possible to do coordination of care without an underlying point-to-point messaging infrastructure.

  • Sharing of virtual care team records across disparate networks

  • Dynamic updates to disparate patient records using encrypted serialized patient objects across disparate networks

  • Sharing of application context within applications across disparate networks

  • Sharing of user context within applications across disparate networks

  • Establishing long-term patient and provider object-level communication across disparate networks.

It’s late, so my patience for this is wearing thin. Email handles “sharing PHI across disparate networks”. The whole fucking point of Direct is that it is -just- email.

So everywhere that Medicity is saying “share (PHI Type here) across disparate networks” they are full of shit. This is the problem that Direct itself solves.

Then the question becomes. “Hey, now that we have this amazing capacity to share PHI across disparate networks, what specifically should we share?”

Hmm… perhaps we should use this to keep patient records in sync… no shit.

(In case you cannot tell: the preceding text is sarcasm. I am saying this because someone from the USPTO might be reading this, and I am not sure you would have picked up on that. Working at the USPTO might be the kind of job where you lose your sense of humor. I am just saying.)

The whole concept of a HISP is that it sits on the edge of the Direct network and integrates the local environment into Direct.

Medicity has a HISP product. It does things that HISPs do.

They do not deserve a patent for concepts that are -both- obvious and well described by the Direct community during the -entire- process of developing Direct. The fact that the US government did not dictate what a HISP should do does not mean that it was not discussed carefully, completely and commonly by everyone working on this project.

The “HISP as a bridge” concept is something that I had a hand in creating. I do not appreciate my own work being co-opted and abused in this fashion. I am requesting that Medicity withdraw this patent application, and consider… I don’t know… competing for Direct HISP business, instead of applying for bullshit patents on ideas that were created as part of an Open Source project.










About to have a call with the National Health Service

I am about to have a call with a group of people who work with the UK National Health Service.
I know for a fact that the people on the call are doing serious, thoughtful work on behalf of their government.

In contrast, my government just started paying the electricity bill again.

It is fairly hard to describe accurately how I feel about going into a call like this. Happily I have Reddit/Imgur to help!!