Facebook has become embroiled in a fight with a security researcher who may have overstepped the bounds of bug bounty disclosure -- and now the escalating drama has resulted in a divided security community.
At the heart of the drama is Wes Wineberg, a researcher who recently submitted a remote code execution (RCE) flaw discovered within the image-sharing service Instagram, now owned by Facebook.
The security expert discovered a set of flaws and weaknesses in October -- including a severe RCE vulnerability -- reporting the issues to the Facebook security team as part of the company's bug bounty program.
In a blog post, the contractor for security company Synack says he was able to tap into Instagram weaknesses to pick up recent source code for the Instagram backend system, SSL certificates and private keys, keys used to sign authentication keys for Instagram, email server credentials, iOS and Android app signing keys, and API keys for Twitter, Facebook, Flickr, Foursquare and Tumblr. The researcher was also able to access employee accounts.
"To say that I had gained access to basically all of Instagram's secret key material would probably be a fair statement. With the keys I obtained, I could now easily impersonate Instagram, or impersonate any valid user or staff member.
"While out of scope, I would have easily been able to gain full access to any user's account, private pictures and data. It is unclear how easy it would be to use the information I gained to then compromise the underlying servers, but it definitely opened up a lot of opportunities."
Wineberg was able to grab this data by exploiting three flaws, all of which were reported to the social networking giant. The first was accepted without fanfare, and Wineberg was offered $2,500 for his work. However, the other two reports did not go down so well with Facebook, which accused the researcher of overstepping the bounds of ethical behavior in his exploration of Instagram's closet skeletons.
In a statement posted on Facebook, the social network's chief security officer Alex Stamos alleges that the vulnerability Wineberg submitted -- which earned him the $2,500 -- had already been reported, but was rewarded anyway. Everything up to the point of reporting was ethical, Stamos says, but the researcher's behavior afterwards forced him to become personally involved.
Stamos says Wineberg used the RCE flaw to find AWS API keys, which were then used to download non-user Instagram data, including technical and system information through an S3 bucket.
"The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself. Intentional exfiltration of data is not authorized by our bug bounty program, is not useful in understanding and addressing the core issue, and was not ethical behavior by Wes," Stamos says.
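Stamos' point that AWS keys grant S3 access by design follows from how AWS authenticates requests: Signature Version 4 derives each request's signature entirely from the secret access key plus public request metadata, so whoever holds the key can sign valid requests. Below is a minimal sketch of the documented SigV4 key-derivation chain; the credential and string-to-sign are made-up placeholders, not material from the incident:

```python
# Sketch of AWS Signature Version 4 key derivation (per AWS's public
# signing-process documentation). Possession of the secret access key is
# all that is needed to produce valid request signatures for a service
# such as S3 -- which is why leaked keys imply full data access.
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key from the secret key and request scope."""
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Placeholder (not real) secret key and request scope:
key = sigv4_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                        "20151201", "us-east-1", "s3")
# Sign a placeholder string-to-sign -- no secret beyond the key is required.
signature = hmac.new(key, b"example-string-to-sign", hashlib.sha256).hexdigest()
print(signature)
```

The derivation is deterministic, so any party holding the same secret key produces the same signatures the legitimate owner would.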
Stamos also alleges that Wineberg was "not happy" with the bug bounty payment offered, and responded by saying he planned to write about the data download.
This is where things get interesting. Stamos admits to contacting the CEO of Synack because Facebook "assumed" the researcher was operating on behalf of the company, due to his use of a synack.com email address and the affiliation listed on his Facebook account (which, it is important to mention, a researcher must have in order to submit reports to the bug bounty program).
Stamos says that while speaking to CEO Jay Kaplan, he explained that Facebook believed Wineberg acted "unethically," and that while Facebook was fine with the researcher writing up his report on the vulnerability itself, it would not accept him discussing his access to S3 or releasing the Instagram data he had taken.
"I told Jay that we couldn't allow Wes to set a precedent that anybody can exfiltrate unnecessary amounts of data and call it a part of legitimate bug research, and that I wanted to keep this out of the hands of the lawyers on both sides. I did not threaten legal action against Synack or Wes nor did I ask for Wes to be fired," Stamos says.
The implied threat of lawyers aside, the original bug has now been fixed, and Stamos admitted that Facebook did not triage the report on the security flaw quickly enough.
The drama may have unfolded as the result of miscommunication in its first stage, but it has left a mark on a now divided security community. Many researchers have commented on Stamos' post, some of whom agree with Facebook -- but many do not, implying the mistakes made have left a bitter taste in their mouths.
Some have noted that as the RCE flaw was already known, it should have been fixed immediately, and others are angry at the assumptions made about the researcher's motives for submitting the flaw -- especially as those assumptions led to the drama spreading to his workplace without due cause.
A main bone of contention is that Facebook's bug bounty terms, unlike Microsoft's, do not specifically tell researchers not to go poking around for additional problems once they have found a flaw.
As one user noted, clarity is key:
"Facebook didn't publish such a restriction, and still hasn't, so exploring to see how deep the rabbit hole goes is, for all practical purposes, 'in scope' provided you don't violate any of the other restrictions.
"If that's not what they want, all they have to do is change a few words to make their intention clear and there wouldn't be any problem!"