Few would dispute the importance of network security--the barrage
of horror stories about viruses, defaced Web sites, and denial
of service (DoS) attacks has made network vulnerability all
too apparent. But how do you go about securing your network?
Don't start by thinking about products and services. You'll
want to develop a strategy first, based on a thorough understanding
of security technologies. In this article, we'll examine the
fundamental goals of a security framework: authentication, protection
of privacy and system integrity, vulnerability analysis, intrusion
detection, and protection against DoS attacks.
Who goes there?
Authentication is the process of validating a user: are you
who you say you are? Solutions range from traditional user name/password
regimens to the use of complex devices such as tokens, smart
cards, and biometric scanners. A system can authenticate you
by examining three things: what you know, what you have, and
what you are. Not all solutions use all three, though. Tokens
and smart cards (what you have) must be paired with passwords
(what you know) or biometric technology (what you are) to produce
a stronger solution. This helps prevent stolen smart cards or
tokens from being used.
One popular token design, used in the RSA SecurID card, displays
a constantly changing numeric identifier on a tiny LCD screen;
the number is synchronized with server software. A user logs
on by entering a user name, a password, and the identifier currently
displayed on the token. The server-side software computes the
correct identifier for that token at that moment. Although such
tokens improve security, they can be expensive, running $50
to $100 per user.
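RSA's exact algorithm is proprietary, but the general idea can be sketched with an HMAC-based time code; the shared secret, 60-second interval, and six-digit format below are illustrative assumptions only.
    # Illustrative sketch only: RSA SecurID's algorithm is proprietary. This
    # shows the general idea with an HMAC-based time code; the shared secret,
    # 60-second interval, and six-digit format are assumptions.
    import hashlib, hmac, struct, time

    def current_code(secret: bytes, interval: int = 60) -> str:
        counter = int(time.time()) // interval            # token and server share the clock
        digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation
        value = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
        return f"{value % 1_000_000:06d}"                 # the number shown on the LCD

    # The server recomputes current_code() with the same secret (allowing a
    # small clock-drift window) and compares it with the value the user typed.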
A smart card contains an embedded chip that can be programmed
to send and receive data and perform computations. The underlying
electronics are small and can be shaped into a wide range of
physical packages. Most smart cards are driver's license- or
credit card-shaped. There are three categories of smart cards:
* Memory-only: Capable of storing and returning information
but no more. Such devices have limited use in network security
and are generally relegated to applications such as phone cards,
gift cards, and the like.
* CPU-based: Capable of processing information.
* CPU- and crypto-coprocessor-based: Typically tied to a public-key
infrastructure (PKI) and sometimes called PKI-enabled smart
cards. PKI is a combination of software, services, and encryption
technologies that facilitate secure communications and transactions.
The only way to get a card to perform private-key operations
is to provide a password or biometric information.
A multiplatform, crypto-aware, driver-level API called PKCS
#11 (Public-Key Cryptography Standard 11) has been developed
by a consortium headed by RSA Security (www.rsasecurity.com/rsalabs/pkcs/pkcs-11).
PKCS #11 facilitates the use of removable devices that work
with cryptography and is well suited to smart-card devices and
to cryptographic accelerators, such as those used to speed up
Secure Sockets Layer (SSL) or IP Security protocol (IPSec) processing.
PKCS #11 is a multiplatform standard available on Apple,
Linux, Unix, Windows, and other platforms; it is implemented
in Netscape clients and servers. PC/SC (Personal Computer Smart
Card), a proprietary standard from the PC/SC Workgroup (www.pcscworkgroup.com),
was originally designed for Windows. PC/SC ties into Microsoft's
Cryptographic API (CAPI).
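To give a feel for how an application talks to such a token, here is a rough sketch of a PIN-gated signing operation using the third-party python-pkcs11 package; the module path, token label, PIN, and key label are placeholders, not part of the standard.
    # Rough sketch using the third-party python-pkcs11 package (an assumption);
    # the module path, token label, PIN, and key label below are placeholders.
    import pkcs11

    lib = pkcs11.lib("/usr/lib/opensc-pkcs11.so")          # vendor-supplied PKCS #11 module
    token = lib.get_token(token_label="My Smart Card")

    # Private-key operations become available only after the PIN (or biometric)
    # unlocks the session, as described above.
    with token.open(user_pin="1234") as session:
        key = session.get_key(object_class=pkcs11.ObjectClass.PRIVATE_KEY,
                              label="signing-key")
        signature = key.sign(b"challenge data",
                             mechanism=pkcs11.Mechanism.SHA256_RSA_PKCS)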
Smart cards offer many benefits but require smart-card readers
or some other way to interface with your computer. As interfaces
like USB continue to proliferate, the challenges of deployment
will decrease; manufacturers are already integrating smart
cards and USB interfaces into single units and providing simple
USB-compatible smart-card readers. The Aladdin eToken, for example,
is a smart card that plugs directly into a USB port for reading and writing.
Biometric authentication systems capture and store physiological
traits such as those of the finger, hand, face, iris, or retina,
or behavioral characteristics such as voice patterns, signature
style, or keystroke dynamics. To gain access to a system, a
user provides a new sample, which is then compared with the
stored biometric sample. Biometric systems offer great promise
in user validation but are expensive and complicated to administer;
this deters many companies from deploying them.
How do they do it?
With increased reliance on public networks and the growing
use of wireless technologies, data is more at risk than ever.
Many people believe their applications provide adequate protection
for passwords transmitted over the network, but most passwords
are sent either unencrypted or very weakly protected. A hacker
who breaks into any device that comes in contact with your traffic
and then sets the LAN interface into promiscuous mode (sniffing
mode) can read your passwords and anything else you haven't
adequately encrypted. Vulnerability scanners and intrusion detection
systems, discussed later, can check for weaknesses like Ethernet
interfaces in promiscuous mode on your network. But you can
rarely control every interface your data will travel through.
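As a hedged illustration of how little effort this takes, the sketch below uses the third-party scapy package (an assumption) to watch cleartext telnet and FTP traffic; anything unencrypted, passwords included, shows up in the captured payloads.
    # Hedged illustration using the third-party scapy package (an assumption):
    # once an interface is sniffing, cleartext telnet/FTP traffic is readable.
    from scapy.all import sniff

    def show(pkt):
        print(pkt.summary())        # payloads of unencrypted sessions appear here

    # Capture telnet (port 23) and FTP control (port 21) traffic on this interface.
    sniff(filter="tcp port 23 or tcp port 21", prn=show, store=False)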
TCP/IP protocol applications such as FTP, HTTP, SNMP (Simple
Network Management Protocol), telnet, and others offer little
or no protection for passwords. To protect passwords and sensitive
data used with these applications, you must implement a secondary
security protocol such as SSH (Secure Shell), SSL, or IPSec,
or take other restrictive measures. When you are administering
routers and servers remotely, telnet and FTP should never be
enabled without a protocol such as SSH.
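As a sketch of what that looks like in practice, the snippet below uses the third-party paramiko SSH library; the host name, user, key path, command, and file names are placeholders.
    # Sketch of remote administration over SSH instead of telnet/FTP, using the
    # third-party paramiko library; host, user, key, and paths are placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.RejectPolicy())   # refuse unknown hosts
    client.connect("router-admin.example.com", username="admin",
                   key_filename="/home/admin/.ssh/id_rsa")

    # Run a command over the encrypted channel instead of a cleartext telnet session.
    stdin, stdout, stderr = client.exec_command("show running-config")
    print(stdout.read().decode())

    # Transfer a file over SFTP instead of cleartext FTP.
    sftp = client.open_sftp()
    sftp.put("new-config.txt", "/tmp/new-config.txt")
    sftp.close()
    client.close()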
SNMP is used to configure routers and servers and to gather
statistics from them, but it offers little password protection.
Many argue against doing any form of configuration using SNMP
because of this. Upload and download files using a secure protocol
such as FTP with SSH. As for gathering statistics, configure
your router or server to accept SNMP queries from the IP address
of your network management server only and limit your SNMP data
to nonsensitive material.
Most of us now know to use SSL with HTTP (HTTPS) to transport
information securely, but a common programming error often exposes
passwords. HTTP basic authentication is the most common method
of authenticating Web site visitors, but alone it provides inadequate
password protection. When using SSL to secure a page for which
HTTP basic authentication is configured, you must be sure to
gather the password after you activate SSL in your HTML code,
not before. Otherwise, passwords will be sent in plain text
and not through the protected SSL session.
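Keep in mind that basic authentication encodes credentials rather than encrypting them; if they travel outside the SSL session, anyone on the path can decode them, as this small sketch shows (the header value is a made-up example).
    # Basic authentication only base64-encodes credentials; without SSL they
    # are trivially recovered. The header below is a made-up example.
    import base64

    header = "Authorization: Basic YWxpY2U6czNjcjN0"
    print(base64.b64decode(header.split()[-1]).decode())   # -> alice:s3cr3t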
Microsoft Windows servers based on Win NT 4.0 and earlier use
NT LAN Manager (NTLM) to password-protect access to Windows
resources. NTLM's inadequate password security requires an adjunct
protocol such as IPSec when you are transmitting over public
networks. Windows 2000 uses Kerberos, a significantly improved
password mechanism in which the password itself never travels
across the network during authentication. Despite this improvement,
other well-publicized and recurring vulnerabilities make activating
Windows 2000 authentication over a public network without IPSec
a bad idea.
Both SSH and SSL are strictly two-party, point-to-point protocols.
They do not engage a third party, such as a firewall, as part
of the overall security scheme. To SSL and SSH, the firewall
is something to tunnel through, not interact with. In fact,
from the perspective of these two protocols, the firewall might
be an attacker. IPSec, by contrast, is designed to accommodate
more than two parties.
IPSec authentication works via security associations (SAs)--agreements
between two entities on methods for secure communication. A
single IPSec SA can exist between two endpoints, with intermediate
firewalls establishing their own encryption and authentication
SAs to apply corporate firewall policies. The encryption of
a connection is broken at the firewall, allowing the firewall
to inspect the session's contents. The contents can then be
reencrypted for transmission to the destination. In this way,
two endpoints can securely authenticate themselves, but intermediate
firewalls can also inspect contents of the session and perform
their own authentications.
Don't ever change
Suppose you've configured your server operating system in a
way that seems secure. You've disabled services you don't need,
disabled access on all TCP and UDP ports except those you absolutely
require, and so on. Your next concern is to maintain the integrity
of your configuration; if an intruder modifies anything, you'd
like to find out quickly and easily. Server administration software
tools, such as Tripwire, have proved useful in detecting such changes.
Integrity checking is implemented through a hash function,
which produces a short, effectively unique number from the data
supplied to it. With a properly designed function, the probability
that two different inputs yield the same hash is vanishingly small.
Tools such as Tripwire compute the hash of various key system
files and can securely compare the hash of the original system
files with the hash of ones that may have been altered. Of course
you must store the hashes (in a hash snapshot) on a separate,
secure system--or else hackers can replace your original hash
with one that corresponds to the files they've altered. You'll
also need a new hash snapshot any time you change your system configuration.
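A minimal sketch of this kind of check, assuming a handful of illustrative file paths and SHA-256 as the hash (Tripwire's actual implementation differs), might look like this:
    # Minimal sketch of Tripwire-style integrity checking (not Tripwire's actual
    # implementation): hash key files, save a snapshot, compare on later runs.
    import hashlib, json, pathlib

    WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config", "/usr/sbin/sshd"]  # example paths

    def snapshot(paths):
        return {p: hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest() for p in paths}

    def compare(baseline_file, paths):
        baseline = json.loads(pathlib.Path(baseline_file).read_text())
        current = snapshot(paths)
        return [p for p in paths if baseline.get(p) != current.get(p)]  # changed files

    # Store the baseline on separate, secure media, for example:
    #   pathlib.Path("/mnt/usb/baseline.json").write_text(json.dumps(snapshot(WATCHED)))
    # Later, compare("/mnt/usb/baseline.json", WATCHED) lists anything that changed.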
For smaller installations, the hash snapshot can be stored
on removable media and compared with a real-time computed hash
of the system configuration on a regularly scheduled basis.
Note that the Tripwire (or other tool's) executable itself must
also be protected, because a hacker won't hesitate to install
a modified version of Tripwire over the original.
Hacker was here
Intrusion detection is a real-time analysis of the behavior
and interactions of a computing entity to determine whether
penetrations have occurred or are likely. An intrusion detection
system (IDS)--typically a server running IDS application software--probes
servers, workstations, firewalls, and routers and analyzes them
for symptoms of security breaches. The IDS monitors for known
attack patterns, analyzes system logs (audit trails), and issues
alerts based on violations of security policy. The amount of
logging you do depends on storage space and processing power,
because intensive logging can consume significant resources
and cause system instability. Anyone who tells you to log everything
conceivable is not helping you. In the real world, such an approach is unworkable.
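As a toy illustration of signature-based log analysis (real IDS products are far more sophisticated, and the log format below is an assumption), a monitor might flag sources that rack up repeated failed logins:
    # Toy illustration of signature-based log analysis; real IDS products go
    # far beyond this. The log format and threshold are assumptions.
    import re
    from collections import Counter

    FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

    def suspicious_sources(logfile, threshold=10):
        counts = Counter()
        with open(logfile) as fh:
            for line in fh:
                match = FAILED.search(line)
                if match:
                    counts[match.group(1)] += 1
        return {ip: n for ip, n in counts.items() if n >= threshold}

    # e.g. suspicious_sources("/var/log/auth.log") -> {"203.0.113.9": 42, ...}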
A vulnerability audit is an analysis of system weaknesses.
A vulnerability scanner (typically a server running vulnerability
analysis software) may appear, from the perspective of the IDS,
as a device attempting an intrusion. The vulnerability scanner
tests the system by poking around as a hacker would and by checking
system configurations the way an experienced administrator might
when looking for errors and weak spots. Some scanners are aggressive
enough to crash the systems they scan; test carefully before
deploying one on your live network.
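One small piece of what a scanner does can be sketched as a simple TCP connect probe; the target address and port list below are placeholders, and real scanners go far beyond this:
    # Very small sketch of one thing a vulnerability scanner does: probing for
    # open TCP ports. The target and port list are placeholders.
    import socket

    def open_ports(host, ports, timeout=0.5):
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                    found.append(port)
        return found

    # e.g. open_ports("192.0.2.10", [21, 22, 23, 25, 80, 161, 443])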
Vulnerability analysis and intrusion detection should be focused
on components at all levels: the network, the server, the desktop,
and the applications. System administrators may argue that analysis
and detection are not needed behind the firewall, because that
area is safe. This is a very dangerous assumption. An IDS and
vulnerability scanner should be implemented both behind your
firewall and for those devices exposed to the open Internet.
Note that your IDS and vulnerability analysis tools can be
configured to monitor components managed on your behalf by a
carrier. If you do this, coordinate your analysis with the service
provider, because your systems may appear as intruders to the
provider's detection systems. If you are not doing vulnerability
or IDS analysis on the managed systems, be sure their owners
are. Third-party systems, if not properly secured, can prove
an ideal jumping-off spot for hackers, who can exploit the trust
arrangements you have established with those managed systems.
It's a good idea to create a security incident team up front,
before you actually experience an intrusion. The team should
be ready to perform forensic analysis, carry out surveillance
and hacker baiting, shut down affected services, carry out recoveries
from backups, coordinate any public relations interactions should
the attack become public knowledge, and of course, address the
vulnerability. In most cases, you should rebuild systems that
have been hacked, and do so off-line, so as not to be vulnerable
to hacker activities during the system build.
Repairing instead of rebuilding a hacked system is dangerous
and a nearly impossible task; you don't know exactly what the
hacker did. Obviously, the replacement system should incorporate
a defense against the successful attack and should be built
with your organization's latest tested and approved system patches
and in accordance with any configuration advisories. Clearly,
backup and recovery procedures are an important part of a quality security program.
Storming the gates
DoS attacks take advantage of the fact that without adequate
filtering, routers will deliver traffic wherever a hacker wishes,
regardless of source IP address, destination address, or traffic
type. Systems can thus be overloaded and brought to a standstill.
We all know we need to filter, but many of us think this should
occur only at the firewall. That isn't the case: routers that
have the computing power to handle filtering should be configured
to do so as well. And follow this essential principle: Disable
any components you don't absolutely need, and cut off the traffic
at the earliest possible point of entry.
Establish a solid security escalation path with your ISP that
lets you quickly notify its engineers to filter DoS-based traffic
upstream, within the ISP's network. Ask your ISP about its procedures
for coordinating filtering with its peering partners in response
to DoS attacks. You don't want to be caught by surprise and
find yourself on hold with a customer service rep during a DoS attack.
Running systems close to the capacity of the CPU, the memory,
the available storage, and the network bandwidth maximizes vulnerability
to a DoS attack. Monitor resource usage within your system,
look for suspicious increases in usage, and allocate sufficient
spare capacity to accommodate sudden unexpected increases in
load. Though you may not be able to protect against the largest
distributed DoS attacks this way, a hacker accessing a few computers
and bombarding your system shouldn't be able to overwhelm your servers.
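A minimal sketch of that kind of watchdog, assuming the third-party psutil package and some purely illustrative thresholds, could look like this:
    # Hedged sketch of resource watching using the third-party psutil package
    # (an assumption); the thresholds and polling interval are illustrative.
    import psutil, time

    THRESHOLDS = {"cpu": 80.0, "memory": 85.0, "disk": 90.0}   # percent

    def check_once():
        readings = {
            "cpu": psutil.cpu_percent(interval=1),
            "memory": psutil.virtual_memory().percent,
            "disk": psutil.disk_usage("/").percent,
        }
        return {k: v for k, v in readings.items() if v >= THRESHOLDS[k]}

    while True:
        alerts = check_once()
        if alerts:
            print("Suspicious resource usage:", alerts)   # hook in paging/alerting here
        time.sleep(60)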
Choosing the optimal configuration for risk reduction is a
combination of art and science. Understanding the drawbacks
and benefits of the security technologies you employ is fundamental
to keeping your systems safe.
Eric Greenberg is a security consultant and author of the
book Network Application Frameworks published by Addison Wesley
Longman. He can be reached at firstname.lastname@example.org. Carmin
McLaughlin is a consultant and can be reached at email@example.com.