The truth about Goobuntu: Google's in-house desktop Ubuntu Linux

Summary: For the first time, Google reveals some details about its desktop of choice: Ubuntu.

Google's desktop operating system of choice is Ubuntu Linux.

San Diego, CA: Most Linux people know that Google uses Linux on its desktops as well as its servers. Some know that Ubuntu Linux is Google's desktop of choice and that it's called Goobuntu. But almost no one outside of Google knew exactly what was in it or what roles Ubuntu Linux plays on Google's campus, until now.

Today, August 29th, Thomas Bushnell, the tech lead of the group that manages and distributes Linux to Google's corporate desktops, unveiled Goobuntu from behind Google's curtain at LinuxCon, the Linux Foundation's annual North American technical conference. First things first: can you download Goobuntu to run it yourself? Well, yes and no.

Bushnell explained that “Goobuntu is simply a light skin over standard Ubuntu.” In particular, Google uses the latest long-term support (LTS) release of Ubuntu. That means that if you download a copy of the latest version of Ubuntu, 12.04.1, you will, for most practical purposes, be running Goobuntu.

Google uses the LTS versions because the two years between releases is much more workable than the six-month cycle of ordinary Ubuntu releases. Besides, Google also tries to update and replace its hardware every two years, so that syncs nicely as well.

Why Ubuntu, rather than, say, Macs or Windows? Well, you can run those too. Bushnell said, “Googlers [Google employees] are invited to use the tools that work for them. If Gmail doesn't work for them, they can use pine [an early Unix character-based e-mail client], that's fine. People aren't required to use Ubuntu.” But Goobuntu use is encouraged, and “All our development tools are for Ubuntu.”

Googlers must ask to use Windows because “Windows is harder because it has 'special' security problems so it requires high-level permission before someone can use it.” In addition, “Windows tools tend to be heavy and inflexible.”

That said, Bushnell was asked why Ubuntu instead of, say, Fedora or openSUSE. He replied, “We chose Debian because packages and apt [Debian's basic software package programs] are light-years ahead of RPM [Red Hat and SUSE's default package management system].” And why Ubuntu over the other Debian-based Linux distributions? “Because its release cadence is awesome and Canonical [Ubuntu's parent company] offers good support.”

Yes, that's right. Google doesn't just use Ubuntu and contribute to its development; Google is a paying customer for Canonical's Ubuntu Advantage support program. Chris Kenyon, who is Canonical's VP of Sales and Business Development and was present for Bushnell's talk, confirmed this and added that “Google is not our largest business desktop customer.”

So, what about the desktop itself? Is everyone required to use Unity, Ubuntu's popular but controversial desktop? Nope.

When asked about Unity use, Bushnell said, “Unity? Haters gonna hate. Our desktop users are all over the map when it comes to their interfaces. Some use GNOME, some use KDE, some use X-Window and X-Terms. Some want Unity because it reminds them of the Mac. We see Mac lovers moving to Unity.” There is no default Goobuntu interface.

What there is, though, is “tens of thousands” of Goobuntu users. “This includes graphic designers, engineers, management, and sales people. It's a very diverse community. Some, like Ken Thompson, helped create Unix and some don't know anything about computers except how to use their application.”

To manage all these Goobuntu desktops, Google uses apt and Puppet desktop administration tools. This gives the Google desktop management team the power to quickly control and manage their PCs. That's important because, “A single reboot can cost us a million dollars per instance.”
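The apt-plus-Puppet combination works because both tools are idempotent: they describe a desired state and only act when the machine differs from it, so no change forces the reboot the quote warns about. Here is a minimal shell sketch of that declarative idea; the file path and NTP setting are invented for the demo, and a real deployment would use Puppet resources rather than a hand-rolled function.

```shell
# Hypothetical sketch of the idempotent model behind tools like Puppet:
# declare the desired state, change the machine only if it differs.
# The config path and setting below are made up for illustration.
rm -f /tmp/ntp.conf

ensure_line() {   # ensure_line <file> <exact-line>: append only if absent
  grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

CONF=/tmp/ntp.conf
ensure_line "$CONF" "server time.corp.example iburst"
ensure_line "$CONF" "server time.corp.example iburst"  # second run is a no-op

wc -l < "$CONF"   # still one line: applying the same state twice changes nothing
```

Running the same declaration repeatedly leaves the file untouched, which is exactly what lets a management tool run on a schedule across tens of thousands of desktops without disruption.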

That said, desktop problems, even on Linux, will happen. As Bushnell said, “Hope is not a strategy. Most people hope that things won't fail. Hoping computers won't fail is bad. You will die someday. Your PC will crash someday. You have to design for failure.”

This is where Goobuntu's 'special sauce' appears. On Google's desktops, “Active monitoring is absolutely critical. At Google we have challenging demands, we're always pushing workstations to their limits, and we work with rapidly moving development cycles.”

On top of this, Google has very strict security requirements. As Bushnell observes, “Google is a target. Everyone wants to hack us.” So some programs that are part of the Ubuntu distribution are banned as potential security risks. These include any program “that calls home” to an outside server. On top of that, Google uses its own proprietary in-house PC network authentication that Bushnell says is “pushing the state of the art in network authentication, because we're such a high profile security target.”

Put it all together: the need for top-of-the-line security, high-end PC performance, and the flexibility to meet the desktop needs of both genius developers and newly-hired sales representatives, and it's no wonder that Google uses Ubuntu as its desktop operating system of choice. To quote Bushnell, “You'd be a fool to use anything but Linux.”





Log in or register to join the discussion
  • Odd

    “Windows is harder because it has 'special' security problems so it requires high-level permission before someone can use it.” In addition, “Windows tools tend to be heavy and inflexible.”

    Wow. Odd, as that statement isn't in keeping with the norm.
    William Farrel
    • Ah yeah it really is.

      Linux and Unix environments can keep the home folder isolated from the rest of the system, and flat files are much easier to read than a registry, which makes defense easier.
      • Not sure if that's valid

        "The nature of Linux and Unix environments can have the Home folder isolated from the rest of the system"
        Perhaps I misunderstand what you mean. How is this different from a Roaming/Network Profile? How is it different from the standard Users folder which is basically the same thing as /home anyway?

        " and flat files are much easier to read than a registry so it makes it easier for defense."
        I'm sorry, but configuring hundreds of flat text files with thousands of parameters each is not easier than searching the registry. Neither is great, but while in principle I would pick text file configs, in practice Linux programs can have these stored in any of a hundred different folders, making config a nightmare. I would pick the registry, as convoluted as it may be, over the Linux scenario any day of the week. It is FAR easier to get to the bottom of things in Windows.
        • You've never used Linux in a large scale deployment have you?

          This isn't a dig on you as an individual, but your comment indicates it's quite possible you've never managed a large scale deployment of Linux - desktop or server.

          Most configuration files are not managed by hand. When you have text configuration files, simple tools like sed, awk, and grep, coupled with a working knowledge of regex and some level of scripting (perl, ksh, etc.), mean that you can manage hundreds (thousands) of flat files without difficulty. Add ssh, keys, and a `for` loop and you can manage hundreds of configuration files across hundreds or thousands of desktops easily.
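The workflow the comment describes can be sketched in a few lines. The host names and the sshd-style setting below are invented for the demo, and the files live under /tmp rather than on real machines; the commented ssh loop shows how the same edit would fan out across a fleet.

```shell
# Hypothetical demo: one sed pass fixes a setting in every config at once.
mkdir -p /tmp/cfgdemo
for host in web01 web02 db01; do
  printf 'PermitRootLogin yes\nPort 22\n' > "/tmp/cfgdemo/$host.conf"
done

# Tighten the setting everywhere in one shot (GNU sed in-place edit).
sed -i 's/^PermitRootLogin yes$/PermitRootLogin no/' /tmp/cfgdemo/*.conf

grep -l 'PermitRootLogin no' /tmp/cfgdemo/*.conf   # lists all three files

# Across real machines you would wrap the same edit in ssh, e.g.:
#   for h in web01 web02 db01; do
#     ssh "$h" sudo sed -i 's/.../.../' /etc/ssh/sshd_config
#   done
```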

          Now... add a config management tool like Puppet or KSB's msrc tool set and you can see how Google easily manages tens of thousands of desktops and possibly hundreds of thousands of Linux servers with the number of admins they have.

          I've managed both Windows and Linux and I'd much rather have text configs + ssh than RDP and right-click properties any day.
          • spelling sucks

            excuse my spelling, I'm a Linux admin.
          • Traxxion meant a different 'complexity' perhaps?

            Per this statement:

            "Linux programs can have these stored in any of a hundred different folders making config a nightmare."

            ...I believe the point is that to the admin, the registry is "one place," while the Linux configs are 'all over the place.'

            That's a valid point, but IMO the balance still comes out in Linux's favor.

            It's not just that almost all configs are under /etc; rather, I have to manage fewer than 10 files in total on the average server, and fewer on the desktop. You only touch configs when A) you are using the service, and B) the default configs are unacceptable.

            Where defaults are entirely unacceptable, with the likes of samba, ssh, nfs, mail, etc., of course you will be handling their config files. But on a typical task, say a backup server, I have a very limited number of configs to touch, and they are all pretty much in plain English.

            A backup server sitting beside me here runs ssh, cron, shorewall, and has unique host mapping and routing tables. That's it. I can back up half a dozen tiny text files and, if need be, use them to restore this server from bare metal as fast as I can get the OS on the new hard drive.
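That bare-metal story is easy to sketch: archive the handful of text files, restore them with one command. The paths under /tmp/demo below are stand-ins for the real /etc on such a server, and the file contents are invented.

```shell
# Sketch: back up a server's "identity" as a handful of text files.
# /tmp/demo stands in for the real /etc; contents are made up.
rm -rf /tmp/demo /tmp/restore
mkdir -p /tmp/demo/etc/ssh /tmp/demo/etc/shorewall
echo 'Port 22'   > /tmp/demo/etc/ssh/sshd_config
echo 'STARTUP=1' > /tmp/demo/etc/shorewall/shorewall.conf

# One archive captures every config worth keeping.
tar -C /tmp/demo -czf /tmp/demo-configs.tgz etc

# Bare-metal restore is the same command in reverse.
mkdir -p /tmp/restore
tar -C /tmp/restore -xzf /tmp/demo-configs.tgz
cat /tmp/restore/etc/ssh/sshd_config   # prints: Port 22
```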

            They're not in one place, but close enough, and easy to find, e.g. /etc/ssh/sshd_config, /etc/samba/smb.conf, /etc/cups/cupsd.conf, etc., etc., etc... ( ;) )

            You can't hose the system like you can with a mistake in the Windows registry. On the other hand, the registry makes rolling back to a known good config possible, a huge plus for the registry model. (Linux is working on a "system restore"-type deal atm.)

            I don't have any problem dealing with the registry, but when dinking around with Linux configs I definitely feel more relaxed. Working with the registry feels serious, like being at work in a suit and tie suffering through some meeting... in a word, uptight. Handling Linux is shorts, a t-shirt, and a fishing pole along the bank of the river, waving at the canoes floating by... relaxed.
          • Well

            "You can't hose the system like you can with a mistake in the windows registry. On the other hand, the registry make rolling back to a known good config possible, a huge plus for the registry model. (Linux is working on a "system restore" type deal atm)"

            The registry system _needs_ that, while a text configuration system doesn't: if one text file is corrupted, you only restore that one from backup. And text editors can be set to save a backup on every save.
            It is much easier to track back the problem and fix it than to roll back _all_ configs. How does the admin/root know no other config changed in the registry before rolling it back? Of course you can store specific subtrees of the registry and restore them, but that is nearly the same thing as with text configs, except you have to merge and hope the other parts of the affected subtrees haven't changed since the backup.

            GNOME used a registry-like system vs. KDE's text configs, and text configs have always been much better. You can even have a GUI to edit those text configs (you don't need to use a text editor).

            When data is in plain text form, without special formatting (XML etc.), it is the safest, fastest, and easiest to maintain, edit, and restore.

            It doesn't matter if the files are in different directories, as they can be searched in seconds. They can be backed up easily and restored to the same locations in one task.
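The "searched in seconds" claim is just recursive grep: it finds a setting no matter which directory its file landed in. The directory layout and settings below are invented for the demo.

```shell
# Sketch: find every config mentioning a setting, wherever it lives.
# /tmp/etcdemo and its contents are made up for illustration.
rm -rf /tmp/etcdemo
mkdir -p /tmp/etcdemo/ssh /tmp/etcdemo/cups
echo 'MaxAuthTries 3' > /tmp/etcdemo/ssh/sshd_config
echo 'LogLevel warn'  > /tmp/etcdemo/cups/cupsd.conf

grep -rl 'LogLevel' /tmp/etcdemo
# → /tmp/etcdemo/cups/cupsd.conf
```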

            And you can even edit the files after backup on other devices (even on a smartphone, or even through an email client) before storing or restoring them.
          • All you need is one set of working config files.

            Keep that backed up, and when a system is hosed, just replace its config files with yours. If something breaks in Linux you can swap it out with something that works, and there is no hassle or complexity to the matter. It is as simple as copying a file.
          • I work within medium scale Windows

            Yes, you are correct - I am a Windows pro and a mere Linux tinkerer. I hear what you are saying about being able to manage large scale Linux deployments, but I'm still curious how that works when configuring an individual desktop - let's say your master image? If it is so easy to configure all these diverse pieces of software with simple tools, then why is this not more publicised for configuring the desktops I have at home, in walkthroughs for example? It seems to me that almost without exception, each bit of software and each service needs to be individually configured in a very convoluted way.

            For example, one that is important to me is autofs for mounting per-user shares - a piece of cake if you have configured it the way you want before and have the configs lying around, but change your distro - bang! doesn't work. Upgrade? - bang! doesn't work. And after trawling the forums for hours you finally find some little nuance in a config file, or some little version variation in a module, or something similar. In Windows, definitely since NT, any version, it has been something along the lines of:

            net use \\server\users\%username%\loginscript.cmd

            and job done. Or just map the connections within the user profile and Windows will remember the connection until you tell it otherwise - again job done.

            Linux always feels like this to configure (I primarily use Linux at home btw...)

            Configuring repositories - find url and add to some text file

            Configuring daemons - configure some text file, add a startup script to init.d, chmod +x, launch it, blah blah - might work

            Configuring network & open ssh port - open some text file, add IP and ssh port entries to it, save

            In Windows most of these things either do not require manual intervention, or are an absolute snap to configure. The settings can then be deployed with ease via GPO's.

            Like I said, if you have your configs ready to go in Linux - sweet, but if you are configuring a desktop from scratch or trying to fix a problem, then the winner surely has to be Windows?
          • Sorry - I was sleeptyping

            net use \\server\users\%username%\loginscript.cmd

            should have been [RETURN]
            net use \\server\users\%username%

            with \\server\users\%username%\loginscript.cmd being optional obviously....
          • @Traxxion

            What software are you running on Linux? What distro are you using?
          • Quite a lot

            I set up a Linux server at home to replace my Windows box and found it very difficult to replace all the requirements. Eventually I ended up running Ubuntu Server 11.10 and configured LVM rather than mdraid to replace my spanned dynamic disk; ps3mediaserver for DLNA, because none of the others seemed to work; samba; VMware Server 2, intended to host at the very least pfSense (which ceased to work as soon as I changed the video card - yes, the video card - and vmware had a fit); I then configured VirtualBox with phpVirtualBox and it worked, but not very well; ssh; apache; mysql; and probably a few other things that evade me right now. I kid you not when I say every single service seemed to require hours of forum trawling, and then when I changed the graphics card the whole lot came undone and I think I had to start again to get the system running properly.

            I've also tried to build a custom Amiga-like distro, selecting PCLinuxOS, and achieved much of what I wanted, but hit a stumbling block around editing the themes because I found the theme images and configs in dozens of places (as mentioned before, this starts to feel fairly typical in Linux for some reason) and none of the GUI frontends worked.

            I quite liked Slitaz for a while, but had issues

            For my laptops/desktops I have eventually settled on Linux Mint 10 and now Linux Mint Debian Edition, but LMDE seems to be a little flaky in some respects, so I'm still undecided where to go from here. Configuring autofs on Linux Mint was easy. Configuring it on some other distros (maybe PCLinuxOS?) seemed impossible. Honestly, I have tried dozens of distros to find what I am happy with, and each one seems to have its own little irritations.

            My point is that any of the above are SIMPLE things in Windows (yes, even changing a video card) and would usually cause no hassle whatsoever. The only hassle that springs to mind is CIFS file sharing, but the LanMan server version for Vista+ is well documented and is a one-off registry tweak - easily fixed, and instantly fixed if you have the .reg file to hand.
          • Yes and No

            I've been setting up Ubuntu networks for a while and worked with windows since 3.11. Something on the order of tens and dozens respectively.

            They are both just about the same as far as being a trip into minutiae land. Once you know what you are doing they get easier. I still use both and have an integrated network that I use in the shop. The really big difference between the two is cost of ownership and viruses. My Ubuntu freezes from time to time, but nothing way serious, and no more than Win 7 does. I have big stuff on my machines and work 'em.

            I always make a backup when I change a config file on an Ubuntu machine. On Windows you can use the restore function. You can also use a config file from another Ubuntu machine to restore a machine with a problem. If you have to completely remove an app, both OSes can be a pain getting rid of all the associated files. Yes, there are command line switches in Ubuntu that are supposed to work, but guess what? They don't always. Of course you can then... blah, blah, blah.

            Etc., etc., etc, X 100

            Bottom line: same story, different day. I'm over worrying about what is somehow one billionth of a percent "better". If it makes you happy, use it. If you get to choose, make a list, check out what is important to you, and go from there.
          • It Can Vary

            I am not a large scale sysadmin, just an Engineer.

            However, on my Linux systems, there are only a couple of places where the config files are saved. If the program is a system resource, its configuration is in the system directory. If it is a personal preference, it is saved in my home directory. There is no need to look at more than a few directories.

            On my Windows systems, the registry is in one location, which ought to simplify things; however, every program that is installed changes the registry, and any buggy program can totally hose the system.

            There have been a number of times when it was easier to reinstall everything than to wade through with a registry editor to fix a broken registry. I have not had this experience with Linux. Erase the config file in question, and the configuration that caused the problems is just gone.

            I find it easier to install a single program than the entire system. This is particularly true as most programs in Linux are installed with a single selection, and don't require any rebooting.
          • Sorry I can't agree with that

            "If the program is a system resource, it's configuration is in the System directory."
            No, it can be

            and quite possibly you will need to find and configure any number of parameters separately and manually. This is by no means an exceptional circumstance either.

            "If it is a personal preference, it is saved in my Home directory. "
            that is to say /home/

            my point being that you have to know what it is called, where it is, and what you are ultimately going to change, and it is not always the case that such things are in the home folder.
          • Not quite right....

            Settings under Unix/Linux are in one of only a few places.

            Global settings are in any of the following:

            That's it. That's not to say a program might not store some data somewhere else, e.g. standard website location is /var/www; but that's data not settings.

            All user-specific settings are in their home directory:


            This can be a little more confusing. Some programs use a file (e.g. .vimrc); others have a directory (e.g. .kde, .config). But they are all there. This is generally well documented on a per-app basis; but again, it is typically only updated for non-standard settings to customize it to the specific user.

            If a user's configuration is screwed up, then you just remove their directory and let it reset itself to the defaults. Remember - users cannot modify global settings - only root can do that (or any other user assigned root privileges).
        • Registry

          How do you verify that a registry is compliant? Text files are definitely easier to manage. It's very easy to diff text files. It's very easy to write CFEngine promises (as much as I hate CFEngine) or Puppet manifests to manage the machines. I imagine you can do that with SCCM, but for ad hoc comparisons and maintenance, a bunch of different text files works better. The registry represents a single point of failure: make one mistake and it can have unintended consequences. There's no way I can hose PostgreSQL by editing a Tomcat configuration file.
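The compliance check described above is a one-liner with text configs: diff a host's file against a golden copy. The file names and contents below are invented for the demo.

```shell
# Sketch: audit config drift against a known-good "golden" copy.
# Both files are made up for illustration.
printf 'Port 22\nPermitRootLogin no\n'   > /tmp/golden.conf
printf 'Port 2222\nPermitRootLogin no\n' > /tmp/host.conf

diff /tmp/golden.conf /tmp/host.conf || true
# → 1c1
#   < Port 22
#   ---
#   > Port 2222
```

The `|| true` only swallows diff's nonzero exit status when the files differ; the drift itself is printed line by line, something a binary registry hive cannot offer without first being exported to text.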
          • The registry is organized...

            If you modify the registry entries associated with one application, it's likely not going to impact another unrelated application.

            Microsoft warns people heavily about editing the registry with the doom and gloom message because its mainstream audience is not the same as the average Linux user base in terms of technical capability.

            Those warnings of dire consequences, even if overblown, save them a LOT of tech support calls.

            Registries can be compared just as easily as text files. Regedit exports/imports registry trees in plain text that looks suspiciously like your average Linux config file.

            In practice, the registry really isn't much different than Linux config files. Or in other words, default configurations rarely have to be touched directly, and if they do, the impact isn't nearly as system-wide as people think. Like Linux, if you mess with a config associated with a shared lib, the impact is going to be on any app that depends on it. Windows is the same way.
          • Plus

            Since System Restore was introduced, you can easily restore a corrupt registry entry from any day of the week, whether using safe mode or the recovery console. The registry is also organised into logical hive files:
            SYSTEM for computer system settings, SOFTWARE for software settings, NTUSER.DAT per profile for user-specific settings, etc.

            It is so easy to work with and keep track of things.
          • Registry isn't intuitive

            I don't agree. There are so many undocumented registry entries. Nothing is there that explains any of it - no text which could tell you whether a value of "1" turns something on or off, or whether that is done with "0" or even "2". And then many programs additionally have Linux-style "ini" files that can hold further config data - why this double approach?

            Usually the configuration files in Linux come with a lot of explanations and even examples within the files themselves, which normally makes configuration a lot easier and more straightforward, provided you take the time to read the files before changing settings.