An article from InfoWorld reports that tests by Reasoning (a software inspection firm) indicate that recent Apache code (the 2.x version) has about the same number of defects per thousand lines of code as commercial equivalents.
I'm not entirely sure what this means, but I do find it an interesting data point.
The important part is that this is being picked up by major IS magazines (does it get any more mainstream than InfoWorld?) and is being used to argue that concerns about the quality of some open source code are unfounded.
Previous studies by Reasoning also found the TCP code in Linux to be very robust compared with competing implementations, making open source look even better.
However, I'm neither confident that the methodology used by these folks makes sense, nor that you can compare two products on the basis of "defects per thousand lines." In particular, you need to make sure that the software you are using does what you want and not much more. As you can imagine, if you have a simple set of web requirements that can be satisfied by a 10,000-line web server, you wouldn't want to use a 100,000-line web server, even if the number of defects per thousand lines were the same: you would still have ten times the number of bugs and no additional benefit.
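The arithmetic behind that point can be sketched in a few lines of Python; the defect density here is a made-up illustrative number, not a figure from the Reasoning study:

```python
# Back-of-the-envelope sketch (hypothetical numbers): at equal defect
# density, total expected defects scale linearly with code size.
def expected_defects(lines_of_code, defects_per_kloc):
    """Estimate total defects given code size and defect density."""
    return lines_of_code / 1000 * defects_per_kloc

density = 0.5  # hypothetical defects per thousand lines, same for both servers

small_server = expected_defects(10_000, density)   # 10 KLOC server
large_server = expected_defects(100_000, density)  # 100 KLOC server

# Ten times the code means ten times the expected defects,
# even though defects-per-KLOC is identical.
assert large_server == 10 * small_server
```

Which is exactly why "defects per thousand lines" alone can't settle which product is the better choice.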
Of course, the opposite is also true: using software that doesn't do what you need isn't very beneficial either, no matter how few bugs it has.