Backdoors are most often defined in terms of their creators’ motives.
For starters, a backdoor is said to be a piece of code intentionally added to a program to grant remote control of the program, or of the host that runs it, to its author, while at the same time remaining difficult for anyone else to detect.
But this last aspect of the definition actually limits its usefulness, as it makes the backdoor's very existence contingent upon the victim's failure to detect it. It provides no clue at all as to how to create or detect a backdoor successfully.
Backdoors are of course as old as security audits themselves, and date at least to the days of the security audit report by Karger and Schell on Multics (c. 1974). While some backdoors have been spotted since then, we still know relatively little about how to hide them, and just as little about how to detect them.
It is not surprising, then, that one of the seminal publications in the field, by Turing award winner K. Thompson (“Reflections on Trusting Trust,” his 1983 Turing lecture), is a fictional story with a moral and does not include any methodology for successfully hiding or finding backdoors. But Myer's cry was clear enough 30 years ago: do not neglect attacks originating from (intentionally embedded) backdoors!
How, then, can one be trained to audit code for backdoors? As with security bugs, or bugs in general, the problem of detecting backdoors is undecidable: no algorithm can solve it in general, and it resists systematic treatment even in the simplest practical cases. Moreover, since backdoors are inserted intentionally, they are typically harder to find than ordinary bugs. As things stand today, intention appears to be the main difference between a bug and a backdoor, right?
On the other hand, how can one learn to hide backdoors? A simple recipe would be: learn how all detection procedures work and craft a backdoor that fools every one of them. Yet the software development lifecycle, and security auditing in particular, remains a highly manual task, so detection techniques cannot be enumerated; only the manual practices of code inspection can be learned.
Successful backdoor hiding or finding cannot be done analytically (e.g., through algorithms or formal procedures). It can be learned, as an art or a craft. Disappointed scientists can only perform experiments: Do. Gather data. Analyze.
However, engaging in experiments in the form of games may provide this learning experience, and it definitely sheds some light on this obscure art. After this brief prolegomenon I state my purpose: I want to help improve our collective skill-set and our ability to prevent backdoor threats. I want you to play a game and learn, with me, how to hide and detect backdoors.
A few years ago, the CoreTex team ran an internal experiment at Core and designed the Backdoor Hiding Game, which mimics the old parlor game Dictionary. In this new game, the game master provides a description of a program's functionality, together with the setting where it runs, and the players must each develop a program that fulfills this functionality and contains a backdoor. The game master then mixes all these programs with one he developed himself that has no backdoor, and hands the whole set back to the players. The players must then audit all the programs and pick the benign one.
With each round played, a few new hiding tricks and techniques are introduced. When a backdoor goes undiscovered, its developer knows that his technique has passed the test; when it is detected by the other players, they have confirmation that their detection techniques are good. Likewise, players who fail to detect backdoors learn about their limitations, and developers whose backdoors are discovered learn which techniques do not pass even a simple test.
This fun-to-play game invites participants to experiment and learn about backdoors. When we played this game, every player used a different hiding technique. During the game and in its aftermath, we discovered many new hiding techniques.
The contest will be conducted live over the Internet and all the programs will also be published after the event closes. With this open effort, we can start collecting data. Analysis and understanding will follow.
* Ariel Waissbein is Director of Research and Development of CoreLabs, the research and development center of Core Security Technologies. He is responsible for driving all day-to-day research and publishing activities.