Why build your own Linux server? It's a good question. The answer is simple: to save money. Instead of forking out for appliances such as the D-Link DFL-700, you can build one that does exactly what you want, and gain useful experience in the process. Given the learning curve involved, what it won’t save you is time -- especially if you have little or no experience of Linux. If you need a quick solution, frankly, rolling your own Linux server to look after your home/small office network won't be top of your priorities. On the other hand, Linux expertise is both expensive and hard to come by (the two are not unrelated) so, if you plan to move any of your IT systems to Linux any time in the near future, this may be a relatively simple and -- in the longer term -- worthwhile project to undertake.
Modern PCs shrug off the kinds of loads generated by a small business network, so the same machine can perform a number of roles, such as file serving for Windows clients, Web caching and firewalling. We decided to adopt a typical strategy and redeploy a redundant PC -- it is, in fact, our original, 950MHz Athlon-based 'quiet PC', first seen here.
Two disks provide storage, one 40GB and the other 10GB. We used Partition Magic 8 to repartition the 10GB disk into three: 4GB bootable for the OS; a 700MB swap partition, which is a bit bigger than the memory size and should be sufficient; and the remainder as a share for administrative purposes. The larger disk we left as a single partition, with the aim of opening it up for user sharing. However, 40GB looks a little meagre these days, so it's first in line for a future upgrade -- probably to 160GB.

Selecting the right Linux distribution is the first major task. We don't need big enterprise features, but we do want high reliability, driver availability and online support from forums and newsgroups. Eliminate distros aimed more at desktop use, and you're left with Novell's SuSE Linux and Red Hat Linux (RHL), both of which have plenty of open source drivers and other software support. So with some prior experience of Red Hat Linux 9 running on the 2.4.20-30.9 kernel, we downloaded the entire 2.24GB of ISO images and burned them onto CDs -- clearly, you'll need a broadband connection for this.
Usually, you would operate a Linux machine from a user account rather than the super-user root account, which provides privileges that allow you to render the OS inoperable. When setting things up for the first time, though, overall privileges are appropriate. Just be careful. The first job was to select the services that could safely be switched off, using the principle that only software whose purpose you know -- or at least have an idea ought to be running -- should be left live. The first candidate for switch-off was telnet -- there are few justifications these days for telnetting into a server, since it's very insecure and there are better alternatives such as SSH or the cross-platform VNC. We switched off cups and other print serving services, since we shan't be using those, along with NFS, Unix's native but insecure file sharing system. Other services disabled included SNMP, PCMCIA and, for the moment, httpd -- the Apache Web server. Items we ensured were running included Squid, the Web caching service; FTP, so we could access files from elsewhere on the network; and of course SMB, the Samba server that shares files for Windows networks. Once it looked more like a server, we switched on the required features. First we created user accounts with RHL's user manager and assigned their home directories to folders on the 40GB disk. On this occasion, there was no need to rummage around in Linux's text-based configuration files. With that basic task done, it was time to start building the server proper.
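On Red Hat Linux 9 the usual command-line tools for this are chkconfig and service, whether or not you use the graphical services module. A sketch of the steps above, run as root, might look like this (the service names are RHL9 defaults and may differ on other installations -- RHL9's FTP server, for instance, is vsftpd):

```shell
# Switch off services we don't need
chkconfig telnet off        # insecure remote login (xinetd-managed)
chkconfig cups off          # print serving
chkconfig nfs off           # Unix-native file sharing
chkconfig snmpd off
chkconfig pcmcia off
chkconfig httpd off         # Apache -- off for the moment

# Make sure the services we do want start at boot, then start them now
for svc in squid vsftpd smb; do
    chkconfig $svc on
    service $svc start
done
```

Running `chkconfig --list` afterwards confirms which services are set to start in each runlevel.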
Web caching speeds up browsing for all users and saves download volume by storing frequently used objects so they can be delivered quickly from local storage. Squid is part of RHL and comes with a sensible default configuration file (/etc/squid/squid.conf -- you can recognise it as a configuration file from its .conf extension). What's more, the file, although large, is well commented, which makes the task much easier.
Squid's default settings work fine as a basic proxy and caching server, so the main issue is to define who can have access to the cache and who can't. You'll want to provide access to nodes on the local network while denying access to anyone from the outside. In Squid, an http_access rule refers to a named access control list, so, assuming your LAN uses the subnet 192.168.1.0, you would define the list and then allow it as follows:

acl lan src 192.168.1.0/255.255.255.0
http_access allow lan

Addresses outside that subnet will then be refused. However, making the defaults explicit by adding:

http_access allow localhost
http_access deny all

is good practice. We also changed the default IP port of 3128 to the more memorable 8080:

http_port 8080
Improving performance is the next step. We increased the amount of memory that Squid devotes to caching objects -- the server won't have a huge amount else to do, and we've stuffed it with memory for this reason. So we changed the default cache_mem setting as follows:

cache_mem 50 MB

Then we increased the maximum size of object Squid will cache on disk to 32MB:

maximum_object_size 32768 KB

That's probably all you need to get started.
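Pulled together, the edits amount to a short fragment of /etc/squid/squid.conf. Note that Squid's http_access rules refer to named ACLs and are evaluated top to bottom, so order matters; the subnet here is an assumption:

```
# /etc/squid/squid.conf -- relevant edits (assumes a 192.168.1.0/24 LAN)
acl lan src 192.168.1.0/255.255.255.0

http_access allow localhost
http_access allow lan
http_access deny all          # explicit default deny

http_port 8080                # instead of the default 3128

cache_mem 50 MB               # memory devoted to hot objects
maximum_object_size 32768 KB  # largest object cached to disk
```

Run `service squid restart` (or `squid -k reconfigure`) for the changes to take effect.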
Known as Samba, the SMB service ships free with RHL and with most Linux distributions. Firing it up using the services module then allows you to get stuck into configuring the system. The first task is to create users in Samba, which can then be mapped onto Linux users -- there must be an equivalent Linux user account for each Samba account. You can then create Samba-specific access permissions on top of the Linux privileges. It's convenient for each user to have a private area, plus a public share for all. If you're having problems creating the right access privileges in the file system, use the chmod command. This lucid description provides a good handholding guide. Note that our shares were created on a separate disk from the OS, which simplifies backing up. Once that's done, Samba's graphical interface makes configuration pretty straightforward. If you want to fine-tune your access permissions, though, you're best advised to edit the smb.conf file (/etc/samba/smb.conf). For instance, you can restrict access to IP addresses in your local subnet and, better still, ensure that the shares for those in one department are completely invisible to those in another. Separating out the accounts workers' shares would be a typical example. You can also fine-tune your password strategy -- whether or not to use encryption, for example, since some older Windows clients don't encrypt:

encrypt passwords = yes

You can also control how strictly Samba matches password case:

password level = 8

decide whether the machine should be the browser master:

local master = yes

and select which machine (if any) validates passwords on Samba's behalf:

password server = betelgeuse

Samba is fairly simple to set up and, once up and running, can usually be left to its own devices except when users change.
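Samba users are typically added with `smbpasswd -a <username>` once the matching Linux account exists. As a minimal sketch of the layout described -- a private area per user plus a public share -- the relevant sections of /etc/samba/smb.conf might look like this (the share path, workgroup name and subnet are assumptions for illustration):

```
# /etc/samba/smb.conf -- minimal sketch
[global]
   workgroup = WORKGROUP
   encrypt passwords = yes
   hosts allow = 192.168.1. 127.    # restrict access to the LAN
   local master = yes

[homes]                             # each user's private area
   browseable = no
   writable = yes
   valid users = %S

[public]                            # shared area for everyone
   path = /mnt/share/public
   public = yes
   writable = yes
```

Running `testparm` checks the file for syntax errors before you restart the smb service.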
Although you can scour the Internet, or even download packages such as Smoothwall that will convert your machine into a dedicated high-security firewall (dedicated means it will wipe any data already present), the built-in firewall, ipchains, is plenty good enough for our purposes. Unfortunately, it can also be hard to get to grips with. Help is at hand. There's plenty of good documentation on the Web but, essentially, the ipchains tool tells the kernel which packets to filter by inserting and deleting rules in the Linux kernel's packet filtering section. The way it works is that packets fall through a list, or chain, of rules, each of which can affect a packet's fate depending on what type of packet it is. There are three chains -- input, output and forward. When a packet comes in, the kernel uses the input chain to decide its fate. If it survives, the kernel decides where to send the packet next. If it's destined for another machine, the kernel consults the forward chain. Finally, just before a packet goes out, the kernel consults the output chain. If a packet falls through all the rules without being accepted, a well-configured firewall will block or reject it. So the first task is to work out which traffic you plan to allow, and which to block. A simple firewall will allow access to external Web sites (HTTP), to email servers (SMTP) and to domain name servers (DNS), and not much else. For example, we used the security level applet to allow FTP (for file access), SSH (for remote control) and DHCP (for automatic IP addressing) traffic but no others. The command ipchains --list will show the rules that are currently configured and, with a little study of the output, you'll be able to see what each rule or chain is doing.
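As an illustration of the chains described above -- not a ruleset to rely on without testing -- a minimal default-deny configuration for a dual-homed machine might run along these lines. The interface names and LAN subnet are assumptions:

```shell
# Flush any existing rules, then set default-deny policies
ipchains -F
ipchains -P input   DENY
ipchains -P forward DENY
ipchains -P output  ACCEPT

# Trust the loopback interface and the internal LAN side (eth1)
ipchains -A input -i lo -j ACCEPT
ipchains -A input -i eth1 -s 192.168.1.0/24 -j ACCEPT

# Masquerade LAN traffic forwarded out to the Internet
ipchains -A forward -s 192.168.1.0/24 -j MASQ
echo 1 > /proc/sys/net/ipv4/ip_forward
```

With policies set to DENY, anything that falls through the chains without matching a rule is dropped, which is the behaviour the paragraph above recommends.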
A detailed description of how to create a firewall and set it up in the dual-homed configuration described would, on its own, double the size of this feature. Although the task is not particularly difficult, explanations and caveats take time so, instead, we suggest you read the Linux ipchains HOWTO and this firewall and proxy server HOWTO.
Although it's not the latest version of the OS, once all the available updates have been installed, the combination of Red Hat and the KDE 3.2 desktop has proved very stable. We found setting up a server to perform basic tasks fairly simple, although you have to be prepared to read a lot of online documentation. As ever, common sense and a willingness to google for answers yield results. Linux is ideal for the kinds of tasks we've described, and the experience you gain will pay dividends in the future. What's more, the availability and enthusiasm of the open source community for answering questions, plus the fact that the software is free, neatly blend the economic and personal justifications for the task.