Random number generation is a key component of secure encryption. Subvert it — by making the random numbers somehow predictable — and you make cracking the encryption easier. So that's what the NSA sought to do.
If you're an open source developer (or really any developer, but even more so an open source developer) this means you have to be absolutely certain of the integrity of any random number generator you use. This is why the developers of the FreeBSD operating system have decided to stop taking the raw output of hardware random number generators, chiefly Intel's RDRAND and VIA's Padlock, and simply passing it on to applications as random. (Thanks for the tip, Ars Technica.)
FreeBSD will continue to use these generators, but will pass their results through another algorithm to add extra "entropy" (i.e. randomness).
The FreeBSD developers may have been inspired by a dispute a couple of months ago over the same issue between a petitioner on change.org and Linus Torvalds, creator and final arbiter of what goes into Linux. Kyle Condon (no idea who he is) started a petition to Torvalds: "Remove RdRand from /dev/random". (/dev/random is the facility in UNIX and UNIX-like operating systems for programs to gather random numbers.) In his usual dismissive and impolitic way ("you're ignorant"), Torvalds said that there was no problem and no change was necessary.
Turns out Linux already does something similar to what is planned for FreeBSD. Torvalds: "...we use rdrand as _one_ of many inputs into the random pool, and we use it as a way to _improve_ that random pool."
This does seem like the right way to do it. As Torvalds adds, "...even if rdrand were to be back-doored by the NSA, our use of rdrand actually improves the quality of the random numbers you get from /dev/random."
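The mixing that both FreeBSD and Torvalds describe can be sketched in a few lines of Python. This is a toy illustration, not either kernel's actual pool code; the function name and the choice of SHA-256 are mine:

```python
import hashlib
import os

def mix_into_pool(pool: bytes, hw_output: bytes) -> bytes:
    """Fold a hardware RNG's output into an entropy pool by hashing the
    two together. Even if hw_output is predictable to an attacker, the
    result is no easier to guess than the original pool was."""
    return hashlib.sha256(pool + hw_output).digest()

# Pool seeded from OS entropy, mixed with a worst-case hardware value
# that an attacker could predict perfectly.
pool = os.urandom(32)
suspect_hw = b"\x00" * 32
new_pool = mix_into_pool(pool, suspect_hw)
```

Because the hash output depends on the unpredictable pool as well as on the suspect input, a backdoored RDRAND can at worst contribute nothing; it cannot subtract the entropy the pool already holds.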
It seems highly unlikely to me that Intel would knowingly put a backdoor into their processors. The exposure of such a backdoor would be ruinous to the company's reputation. But... perhaps you can't be too careful with these things. The algorithms Intel and VIA use may be excellent but, being in hardware, they are completely opaque. I'm not sure there's any way to verify randomness purely from a generator's output.
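There's a reason to doubt that output alone can settle the question. Statistical tests, like this simplified frequency test in the spirit of the NIST SP 800-22 suite (the function name and thresholds are mine), check only that output looks random, and a stream that is fully predictable to whoever holds the seed can still look perfectly flat:

```python
import hashlib
import math

def monobit_pass(data: bytes, alpha: float = 0.01) -> bool:
    """Frequency (monobit) test: do ones and zeros appear in roughly
    equal numbers? Passing says nothing about predictability."""
    n = len(data) * 8
    ones = sum(bin(b).count("1") for b in data)
    statistic = abs(2 * ones - n) / math.sqrt(n)
    p_value = math.erfc(statistic / math.sqrt(2))
    return p_value >= alpha

# A stream anyone with the seed can reproduce exactly: a hash of
# "backdoor" plus a counter. It is statistically well-behaved, so
# output-based tests cannot expose the backdoor.
seed = b"backdoor"
stream = b"".join(hashlib.sha256(seed + i.to_bytes(4, "big")).digest()
                  for i in range(256))
print(monobit_pass(stream))
```

Only grossly biased output fails such tests; a well-whitened but attacker-known generator sails through, which is exactly the worry with an opaque hardware design.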
The downside to not simply trusting the hardware is a performance penalty. I guess if I were writing an operating system I would make that trade-off. I might put in an optional mode to use only the hardware so that the developer can choose to make the other trade-off.
I have no definitive information on what Microsoft does for random number generation in the Windows CryptoAPI. The CryptGenRandom function, which appears to be the call at issue, allows the caller to specify a cryptographic service provider. Absent that (a 0 value for the parameter), it uses (according to the documentation) "...the AES counter-mode based PRNG specified in NIST Special Publication 800-90". But when Windows calls it, do they specify a cryptographic service provider, which could include a hardware random number generator? I have asked Microsoft and will pass the information on when I get it.
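To make "counter-mode based PRNG" concrete, here is a toy sketch of the shape of such a generator. The class name is mine, SHA-256 stands in for AES (Python's standard library has no AES), and this omits the reseeding and derivation-function machinery that the real SP 800-90 CTR_DRBG requires:

```python
import hashlib

class CtrDrbgSketch:
    """Toy counter-mode deterministic RNG: encrypt (here, hash) an
    incrementing counter under a secret key to produce output blocks."""

    def __init__(self, seed: bytes):
        self.key = hashlib.sha256(seed).digest()
        self.counter = 0

    def generate(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            block = hashlib.sha256(
                self.key + self.counter.to_bytes(16, "big")).digest()
            out += block
            self.counter += 1
        # Roll the key forward so earlier output can't be reconstructed
        # if the internal state later leaks (backtracking resistance).
        self.key = hashlib.sha256(self.key + b"update").digest()
        return out[:n]
```

The point of the construction is that output quality rests entirely on the secrecy of the key material seeded into it, which is why the source of that seed entropy matters so much.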
[UPDATE: A Microsoft spokesperson: "Windows adds-in additional entropy, even when a hardware random number generator is present."]
It's surprising, but it shouldn't be, how often we discover new truths that turn out to have been discovered by the earlier giants of the industry. One I have cited often is Ken Thompson's acceptance speech for the Turing Award in 1984, entitled "Reflections on Trusting Trust." Thompson is one of the early Bell Labs developers who built UNIX and many other things we now take for granted.
The moral of Thompson's essay is that software is so complicated and multi-layered that it's virtually impossible to write something without trusting someone else's code at some point. His example was a compiler deliberately subverted to plant an invisible backdoor, so the parallel to today's concern is exact. The moral of that? Maybe you don't trust Intel's random number generator anymore, but why should you trust the add or move instructions either?