Recently, Shawn Hernan wrote a piece on Microsoft’s security blog arguing that the “many eyeballs” effect does not ensure that open source software is secure.
His argument is excellent, and while I disagree somewhat with his conclusions, his point is undeniable. His argument, if it can be boiled down to a quote, is:
The key word in Raymond’s argument is can. I’ll concede that open source can be reviewed by more people than proprietary software, but I don’t think it is reviewed by more people than proprietary software.
The simple reality is that -most- Open Source software projects are -not- popular enough to create a sub-culture of project developers who devote themselves to fixing bugs and ensuring quality code. Note that just because Open Source software is popular with its users does not always ensure that the size and makeup of its developer community is large enough to encourage such a subculture.
This is where Shawn’s piece breaks down. He does not bother to segment the various “Open Source” projects, nor does he segment proprietary software companies. One of the most important lessons to learn about software development is that very few proprietary software companies are capable of operating at the level that Microsoft does.
Let’s imagine for a second what methods would produce the most bug-free, and therefore most secure, software.
The most important factor in determining how buggy software will be is the “game” its developers are playing. “The game” is the basic motivational structure that each individual developer is subject to when writing the given software. This assumes that there is some variance in the rate at which developers debug their own code, and review other developers’ code, based on some kind of reputational, financial, or pleasure incentive.
What happens if you make bug-fixing the primary focus of the game?
The primary culture of the development team would be to develop highly secure and bug-free code; simple competition and social pressure inside your core development team would then help ensure that you have secure code. If a new developer joined the project or company seeking to establish credibility, he or she would know that finding and fixing a bug would increase his or her own prominence, probably at the expense of the developer who wrote the bug. The original developer would know that and make every effort to slow down and write good code in order not to lose credibility.
The problem with this “game” is that any Open Source project that took this approach would slow to a crawl. New features would be added very slowly, as developers spent lots of time reviewing, testing, and generally obsessing over bugs present in any portion of the code. In fact, if secure code and bug-free software were truly the goal, then such a project would intentionally continue to slow down as long as going any slower produced better, more stable code.
Frankly, no reasonable business plan for a proprietary software company could ever tolerate such an intentionally metered pace. However, Open Source software presents the opportunity for any community culture to grow and flourish. The OpenBSD project has a focus almost identical to the one I have described here. By several measures, such as its record of remote vulnerabilities in the default install, the OpenBSD operating system is the most secure operating system in the world. You might imagine that a project so obsessed with security and bug-fixing would have a web page devoted to its thoughts and practices on the subject, and you would be right.
Shawn Hernan neglects to acknowledge that projects like OpenBSD are possible in Open Source but impossible for proprietary software companies. But it is important that we not straw-man Shawn’s argument. Proprietary software companies have one substantial advantage over Open Source projects: they can pay developers to follow procedures that ensure high-quality code, and they can pay some developers to do nothing but professionally audit code. Microsoft is very good about this. But then, Microsoft is in a fairly unique position regarding its software. You could restate Microsoft’s business plan as “protect the billions that we already make selling operating systems.” The profits from anything the company is doing “to compete with Google,” for instance, pale in comparison to the profits from selling operating systems. But Microsoft recognizes that if it does not compete with Google now, and at least sometimes on Google’s terms, Google will eventually hit it where it lives, by subverting the all-important operating system profits.
Microsoft has developed strict code-review procedures, like the ones Shawn Hernan mentions, because every time a vulnerability comes out for Microsoft software, alternatives look better and better. At this stage in Microsoft’s history, it has decided that security is a financial priority, but only after years of ignoring it in favor of other, more profitable business priorities. Just as most Open Source projects are not focused on being fundamentally secure, most proprietary software companies have no financial incentive to invest in programmers and procedures that produce fewer bugs and more secure code. Especially in healthcare software, bug-free code is an afterthought. This is just the nature of for-profit endeavors: the focus is always on what makes the most money. Microsoft is now in a position where secure code will protect profits. Proprietary EHR companies are not in that position. They write code that helps them sell to doctors. Since doctors are typically irrational purchasers, the feature sets and priorities of typical EHR companies are similarly irrational.
It should also be noted that just because Microsoft has a financial incentive to produce secure code in one product line does not mean that this extends to all of its product lines.
There is another way in which Open Source projects can be more secure and bug-free than code developed by a proprietary software company with a financial incentive to produce solid code. It’s pretty simple really, and Shawn has already acknowledged it:
But could “enough” code review, which might happen for popular projects like Linux and the Apache Web Server, compensate for the more comprehensive approach of the SDL?…
Shawn is acknowledging here that there are differences between Open Source projects. Ironically, both the Apache project and the Linux kernel are often subjected to distributed attempts at systematic bug detection similar in scope to the whole of the SDL. A great example of this is the Security-Enhanced Linux (SELinux) project run by the NSA. The problem with software bugs is that they are often unimportant for the average user, but a cracker can still use them to break into the system. This point is made most eloquently in the classic paper by Anderson and Needham, “Programming Satan’s Computer.” Projects like SELinux attempt to make other bugs less prone to becoming security weaknesses, an important step. This is exactly the kind of comprehensive security improvement that, according to Shawn, auditing alone is not supposed to provide but Microsoft’s SDL does. But the many-eyes principle is not limited to mere audits: the whole point is that anyone can shoehorn bug-detecting methods onto Open Source projects. I will use the term “audit” to cover all of the bug-detection and bug-fixing techniques that Shawn mentioned, since purely human auditing is often the most onerous and unlikely task for all but the most motivated developers.
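To make the “shoehorn bug-detecting methods onto Open Source projects” idea concrete, here is a minimal sketch of what an automated audit pass can look like. The list of risky functions and the sample snippet are my own illustrative assumptions, not anything from Shawn’s post or any particular project; real tools use far richer rule sets:

```python
import re

# Historically risky C string functions that an automated audit pass
# might flag. (Illustrative list only; tools like flawfinder go much deeper.)
RISKY_CALLS = ["gets", "strcpy", "strcat", "sprintf"]

def audit_source(text):
    """Return (line_number, function_name) pairs for each risky call site."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for func in RISKY_CALLS:
            # Match the bare function name followed by '(' as a call site.
            if re.search(r"\b%s\s*\(" % func, line):
                findings.append((lineno, func))
    return findings

sample = """#include <string.h>
void copy(char *dst, const char *src) {
    strcpy(dst, src);   /* no bounds check */
}
"""
print(audit_source(sample))  # prints [(3, 'strcpy')]
```

The point is not that this toy scanner is good, but that nothing stops anyone from running a pass like this, or a serious tool such as flawfinder, over any Open Source codebase without asking permission. That is something no outsider can do to proprietary source.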
So we are now able to make a little ranking: who has the most secure and bug-free development practices, and what kind of “game” do they use to achieve that security? I think it looks like this:
1. Best: Security-Obsessed Open Source: Open Source projects that build in bug-fixing at the expense of everything else, like OpenBSD.
2. Really Good: Popular Open Source with audit teams: Open Source projects that are so popular -with developers- that they can afford to create a security-obsessed sub-culture within the development team. Projects like Apache, Linux, and MySQL (before… well, you know…).
3. Really Good: Proprietary vendors who pay for audits: proprietary software vendors who have a financial incentive (Microsoft) or cultural imperative (Fog Creek) to create a development structure that reduces bugs.
4. OK: Audited Open Source: Open Source projects that are sufficiently popular with developers to actually get some kind of formal code audit, preferably both automated and by a programmer trained in testing.
5. Crappy: Unaudited Open Source: Open Source projects that are essentially one-man shows, whose code is rarely used and even more rarely looked at. (The vast majority of Open Source software projects in existence fall into this category.)
6. Worst: Unaudited Proprietary: proprietary companies without a financial incentive to pay for expensive testing and bug-detection essentially have a financial incentive to -ignore- bugs. For most proprietary companies, a software problem is only actually a bug if it prevents a sale or loses a client.
Shawn’s post correctly points out that Microsoft, which is typically in group #3, is much better than Open Source generally, which is typically in group #5. Of course, his argument will be embraced and touted by companies in group #6.
Which brings me back to my subject of passion: Open Source medical software. I believe, tragically, that most Open Source health software projects are in category #5 (Unaudited Open Source). I think this is changing: as companies like Medsphere and ClearHealth become more mature, they can afford to sponsor auditors for their projects. Others, like MOSS and Indivo, are starting out with a security focus. I think #2 is the right target for us, since it is not clear that OpenBSD is a -better project- just because it is -more secure-. Linux is still more secure than Microsoft Windows, and it is developed at a much greater pace.
Still, I think the lesson here is that the best security happens when people focus on the boring and mundane, starting with reading someone else’s code and moving all the way up to using software to make software more secure. We simply do not have enough security focus in our corner of the Open Source world, and I have to admit it is partly my fault. Before coming to the world of Open Source health software, I was trained in information security at the Air Force Information Warfare Center in San Antonio. That paid off with later security contracting for Rackspace, Verisign, and Nokia. I have never been 100% devoted to code auditing in any of my previous roles, but I know enough from rubbing shoulders to know when I should be nervous. As I look around the Open Source health software arena, there is a lot to be nervous about. There are also upsides. Our community has embraced at least two other serious security people: Alesha Adamson with MOSS was trained as a security geek at one of the few NSA National Centers of Academic Excellence, and Ben Adida with Indivo X has a PhD from MIT, earned in the Cryptography and Information Security group.
Both of these people are fully cognizant of security research, and they are in leadership roles in their respective projects and in the community as a whole. But it is important that the whole community learn from their strong kung fu. We need to develop a security-focused sub-group within the Open Source health information movement. We need people willing to do security audits on not one but several important Open Source codebases. Perhaps we should be trying to get members of the larger security community involved with auditing Open Source healthcare software. I will be thinking more about this, but I would love to hear my readers’ thoughts on the subject.