Google is one of the biggest technology and software companies in the world, with over 150,000 employees and billions of users – and that status means it's a high-profile target for cyberattacks.
Google knows it is a target – and has a team of security experts who are tasked with conducting their own attempts to break in, with the aim of helping to secure the company and its users from the latest cyber threats that might come from malicious hackers.
"The team essentially has the mission to make Google safer and more secure for our users, for Google itself and our infrastructure – and we do that by simulating threats and attacks that Google is actually exposed to," says Stefan Friedli, red team tech lead and manager at Google.
The term 'red team' originated in the military, where a designated team would take on the role of adversaries in a scenario designed to test how the defenders react. The term has found its way into the information security world and represents an authorized team using offensive hacking techniques to test the defences of a 'blue team' – the cybersecurity defenders.
"What we're trying to do and why we're trying to do it is to give our detection and response team a sparring partner to see how quickly we can stop something," explains Friedli.
To do that, the red team uses tactics and techniques used by real hacking groups and tries to simulate how they'd act as accurately as possible. But it can also mean getting creative: according to Google's own mini documentary about the team, in one attempt to break into Google's network, the team sent USB plasma globes to some Google workers who were celebrating a work anniversary. If a recipient connected the toy to their work PC, the machine would be compromised, allowing the team to continue its mission – testing the security around plans for the Google Glass device.
Getting into the mindset of an adversary also means that the red team needs to act with only the knowledge an attacker would have – there's no point in the red team exploiting its insider knowledge of how systems at Google work, because that's unlikely to be something a real hacker trying to compromise the network would know in advance.
"Sometimes these can be hard to separate because, as an attacker, you think really results-driven – but you also need to fade out all the things that you know as a security engineer at Google that an attacker wouldn't know. This distinction is tricky with regards to how to assume the perspective," says Friedli.
"It would probably be very bold to say that we can a hundred percent emulate the way that a crime threat actor acts, but with the reports that we have from TAG [Google's Threat Analysis Group] and with the experience we have across a diverse team, we can get together and figure out what would an attacker do with a certain intent and motivation, and then model our actions after that," he adds.
With the sheer number of potential cyber threats targeting Google, one of the key challenges facing the red team is deciding what needs testing as a priority.
While cyberattacks like phishing emails pose a threat to people, and the red team does occasionally run phishing simulations, testing out a new or novel threat that the defenders may not have encountered before is often the priority – because Google needs to ensure its networks are protected.
"Our objective is to make things better; and if we find something that we haven't thought about before, then I would rather have us find it than anyone else. Because then we're hopefully the first to know and then we can fix it before anyone else gets to that point. We really try to keep the home advantage," Friedli explains.
The red team's role is to perform offensive cyberattacks against the rest of Google. But that doesn't mean that success is only measured by successfully hacking the network. The aim of the red team is to help improve security, so if their attacks are being detected, that's also a successful outcome.
"There is a misconception that the red team is only useful if it manages to reach the goals they're simulating, like breaking into something. But we see this a bit differently – if we get caught early on, that's a good thing that's working as intended," says Friedli.
For example, if the red team uses techniques they've used previously and gets detected earlier in the attack process, that's a success: it means the improvements identified in the previous exercise have been acted on, helping defenders spot unusual activity and protect the network and its users.
That work could include tests that result in more users deploying multi-factor authentication (MFA), or that make it easier and more intuitive to use. And if it's making security better for Google's internal staff, who use Google products, it also ultimately helps enterprise and general users stay protected, too.
Sometimes the red team could come up with what they feel is an innovative attack, only to find it doesn't work, while on the other hand, something unexpected could yield new and interesting results.
"This is a very surprising role in many ways. Often things that you would expect to have a low likelihood of working out might yield more interesting insights than you bargained for," says Friedli.
"And then sometimes you try to engineer something really complex, and you feel like that's going to be relevant, just to find out that this is not even a feasible option," he adds.
Friedli and his team are there to think – and in some ways act – like cyber criminals. But unlike malicious hackers, the red team needs to ensure that it's acting ethically and responsibly, so as not to endanger Google staff or users.
"We have a very solid track record within Google of acting professionally and ethically, and as part of that we have rules of engagement that essentially dictate what is acceptable, and under which circumstances and where we draw the line," Friedli explains.
One of the limitations is that the red team never targets actual Google user data – if an exercise involves a need to simulate doing this, it's only done on specially designated test accounts not owned by actual users.
In addition, while cyber criminals may try to target the personal accounts, smartphones and computers of Google employees, the Google red team considers this a step too far and explores these risks via alternative means.
"We need to respect our colleagues, and so we pivot to other means of exploring these threat vectors by again using test accounts or using our own devices," says Friedli.
Sometimes attempting to break in can involve taking advantage of humans making honest mistakes while trying to do their job, like someone clicking a phishing link or being duped by any number of methods real attackers would use. But victim blaming isn't part of the process.
"We certainly won't go around and point fingers at somebody who clicks the link," explains Friedli, who says information on who did click isn't even recorded: "This is not relevant to us at all in any way."
What's important is examining red team attacks and using them to drive security improvements, whether that's implementing technical controls, removing part of an attack cycle that cyber criminals could exploit, or any number of other measures – with the goal of improving cybersecurity for Google, its employees and its users in an ever-changing threat landscape.
Google has run a red team since 2010, and the team is experienced in what it does, in how it interacts with the blue team defenders, and in knowing which boundaries shouldn't be crossed while performing exercises – all of which helps Google keep networks, employees and users secure.
And Friedli thinks other organisations and their cybersecurity teams can benefit from performing red team exercises.
"I think there is an opportunity to benefit from it, to have the second viewpoint; this adversarial set of eyes that challenges things," he says.
However, he also warns that establishing a red team is not something to be rushed into. Before thinking about any sort of red team exercise, it's important for organisations to have their house in order – there's little reward in running adversarial exercises if an organisation isn't already getting the basics right.
"But once you have a certain maturity, and you want to see where you can tweak and improve, that's where I think red teaming can come in really handy," says Friedli. "It's not trivial but it does pay off."