Who looks through a proprietary product for security flaws? Maybe one or two paid reviewers. The rest are "black hats": outsiders who disassemble the code or try various types of invalid input in search of a flaw they can exploit. Only a black hat would disassemble proprietary code to look for security flaws; you won't get any "white hats" doing this for the purpose of closing the flaws.
So, if you don't publish your source, expect that only black hats, plus the few people inside your company who work on the product, will look at your code. Apparently, the black hats are very successful at finding security flaws this way, and the folks on the inside aren't very effective at stopping them.
Many companies are strapped for cash and omit the security review entirely. They figure that security is something they can go back and fix later, once they have enough customers for it to matter and people start reporting problems.
In contrast, open source has a lot of "white hats" looking at the source. They often find security bugs while working on other aspects of the code, and those bugs are reported and closed. However, open source can still benefit from a formal security review, just as proprietary code can, and there is an accelerating trend toward formal security reviews of open-source projects.
But code review is no cure-all. You can't find all bugs by looking for them, because trouble often comes in an unexpected form. Thus, those white hats who stumble upon bugs while working on the source are still essential companions to the code reviewers.
Borland's Interbase database server is a great example in this regard: it has been both proprietary and open source, and it carried an undisclosed security problem through its transition from one to the other.
Interbase is an enterprise-class database product that ran airline reservation systems and other mission-critical applications at large companies. Certainly Borland had the funds to do security reviews on the product. But sometime between 1992 and 1994, an employee at Borland inserted an intentional back door into the database. The back door completely circumvented the security of both the database and the operating system hosting it; in some cases, it would have allowed an outsider to gain a system administrator login. The back door was not well hidden. I assume that it was inserted maliciously, and not on orders of Borland's executives.
Anyone could have found this back door by running an ASCII dump of the Interbase executable, for example with the "strings" command on Unix or Linux. But if anybody found it, they kept it to themselves, and perhaps used the exploit for their own gain. The back door remained in the product for at least six years. At least one person knew of it, and could have exploited it, for this entire time. How many friends did he tell?
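To illustrate how little effort such a discovery takes, here is a minimal sketch of the technique. It plants a "secret" inside a stand-in binary file rather than the actual Interbase executable; the file name and the embedded string are invented for illustration:

```shell
# Build a stand-in "binary": a readable secret surrounded by
# non-printable bytes, much like credentials compiled into a program.
printf '\x00\x01secret_backdoor_password\x02\x03' > /tmp/demo.bin

# strings(1) extracts runs of printable characters from any file,
# exposing the embedded text without any disassembly at all.
strings /tmp/demo.bin
# prints: secret_backdoor_password
```

Against a real server binary, one would typically pipe the output through grep for words like "password" or "login" to narrow thousands of strings down to a few suspicious ones.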
Borland released Interbase to open source in July 2000. An open-source programmer who wasn't even looking for security flaws discovered the back door by December 2000 and reported it to CERT.