The Google Project Zero team said that roughly 95.8% of the security bugs it finds in other companies' software and reports to the respective vendors are fixed before the 90-day public disclosure deadline expires.
That's quite the batting average for one of the world's most infamous cybersecurity programs.
In a statistic shared on Wednesday, Google's elite security team said that over its entire history, from July 17, 2014, when Project Zero was created, until July 30 this week, its researchers found and reported a total of 1,585 vulnerabilities to a wide range of hardware and software vendors.
Of these, Google said, vendors failed to deliver a patch before the final deadline expired for only 66 reports. As a result, its researchers were forced to make the vulnerabilities' technical details public before a fix was made available to users.
For the first few months of Project Zero's history, this standard deadline was a strict 90 days.
However, starting on February 13, 2015, Google added a 14-day grace period that could extend the deadline under certain conditions.
Google said the introduction of this grace period improved its bug-reporting process. Companies had more time to deliver patches, and the extra 14 days also covered fixes that were technically ready but had been slotted into strict monthly rollouts, which would otherwise have pushed them just past the 90-day deadline even though the bugs were effectively fixed.
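To make the mechanics concrete, here's a minimal sketch of how such a deadline calculation could work. It is an illustration only: the function name and the assumption that the grace period applies when a vendor has a fix scheduled within 14 days of the base deadline are ours, not a description of Project Zero's actual tooling or policy details.

```python
from datetime import date, timedelta
from typing import Optional

STANDARD_DEADLINE = timedelta(days=90)
GRACE_PERIOD = timedelta(days=14)

def disclosure_deadline(report_date: date, fix_scheduled: Optional[date] = None) -> date:
    """Return the date on which a report could be made public.

    The base deadline is 90 days after the report. If the vendor has a fix
    scheduled at most 14 days after that, the grace period kicks in and the
    deadline moves out to the scheduled fix date.
    """
    base = report_date + STANDARD_DEADLINE
    if fix_scheduled is not None and base < fix_scheduled <= base + GRACE_PERIOD:
        return fix_scheduled
    return base

# Example: a bug reported on Feb 13, 2015 whose patch lands in a monthly
# rollout 95 days later still stays under the (extended) deadline.
reported = date(2015, 2, 13)
print(disclosure_deadline(reported, reported + timedelta(days=95)))  # 2015-05-19
```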
This deadline adjustment also had an effect on the program's overall efficiency and statistics.
"If we limit the analysis to the time period where grace extensions were an option (Feb 13, 2015 to July 30, 2019) then we have 1434 fixed issues," the Google Project Zero team said.
"Of these, 1224 were fixed within 90 days, and a further 174 issues were fixed within the 14-day grace period. That leaves 36 vulnerabilities that were disclosed without a patch being available to users, or in other words 97.5% of our issues are fixed under deadline."
The Project Zero program, which recently celebrated its fifth birthday, was put together as a way to audit hardware and software used internally at Google, and then report bugs to vendors.
Any bugs Google security researchers find are documented on the project's bug tracker and then reported to vendors.
Information about these bug reports, sometimes including highly technical details and proof-of-concept code to reproduce the bugs, is made public after a vendor releases a fix, or when the deadline passes without a patch.
Over the past few years, Project Zero researchers have come under criticism for releasing these highly detailed bug descriptions and proof-of-concept (PoC) exploit code, even after a bug had been fixed. Many security experts have argued that these reports help attackers create exploits and launch attacks on users.
But in an FAQ page published this week, the Project Zero team defended its approach, arguing that the bug reports help defenders rather than attackers.
"Attackers have a clear incentive to spend time analyzing security patches in order to learn about vulnerabilities (both through source code review and binary reverse engineering), and they'll quickly establish the full details even if the vendor and researcher attempt to withhold technical data," Project Zero researchers said.
"Since the utility of information about vulnerabilities is very different for defenders vs attackers, we don't expect that defenders can typically afford to do the same depth of analysis as attackers," they added. "The feedback that we get from defenders is that they want more information about the risks that they and their users face."
So, from Google's point of view, releasing these details doesn't help attackers, since many of them would be probing changelogs and app binaries anyway, but it definitely helps companies and system administrators who want to deploy mitigations or detection rules.
"It's a tricky balance, but in essence, we want to even the playing field," Project Zero researchers said.
Furthermore, the Project Zero team clarified that when bug reports go public, whether or not the bug has been patched, the PoC code included in the reports isn't a full exploit chain.
Instead, they only release "one part of an exploit chain" and attackers would "need to perform substantial additional research and development to complete the exploit and make it reliable."
"Any attacker with the resources and technical skills to turn a bug report into a reliable exploit chain would usually be able to build a similar exploit chain even if we had never disclosed the bug," the Project Zero team said.
The Project Zero team also took this occasion to recommend that other security researchers follow its lead and start using a fixed disclosure deadline for bug reports.
"We think that industry practices will improve as more researchers start to include timeline expectations in their bug reports," Project Zero said.
"There are many good reasons why a security researcher might choose not to adopt a disclosure deadline policy on their bug reports, but overall we've seen many positive outcomes from adopting disclosure deadlines and we can certainly recommend it to other security researchers."
However, these aren't the only golden nuggets that Google researchers shared. Other details and main talking points from Project Zero's recently published FAQ page are below: