Build your own Linux server

Summary: Want to give an old PC a new lease of life? Why not transform it into a Linux server for your home/small business network?

Why build your own Linux server? It's a good question. The answer is simple: to save money. Instead of forking out for appliances such as the D-Link DFL-700, you can build one that does exactly what you want, and gain useful experience in the process. Given the learning curve involved, what it won’t save you is time -- especially if you have little or no experience of Linux. If you need a quick solution, frankly, rolling your own Linux server to look after your home/small office network won't be top of your priorities. On the other hand, Linux expertise is both expensive and hard to come by (the two are not unrelated) so, if you plan to move any of your IT systems to Linux any time in the near future, this may be a relatively simple and -- in the longer term -- worthwhile project to undertake.

Hardware requirements
Modern PCs shrug off the kinds of loads generated by a small business network, so the same machine can perform a number of roles, such as file serving for Windows clients, Web caching and firewalling. We decided to adopt a typical strategy and redeploy a redundant PC -- it is, in fact, our original, 950MHz Athlon-based 'quiet PC', first seen here.


We removed unnecessary adapters, including the graphics card, which was replaced by the oldest (and therefore coolest-running) equivalent we could find in the parts bin. We swapped the 10/100Mbps network card for a gigabit (1,000Mbps) Ethernet device, providing a performance advantage that no off-the-shelf appliance at this level will deliver. We also removed an older and potentially suspect 128MB DIMM with the aim of pre-empting hardware faults -- the remaining 512MB is plenty for a moderately loaded Linux server, and reliability is paramount.

OS installation
Two disks provide storage: one 40GB, the other 10GB. We used Partition Magic 8 to repartition the 10GB disk into three: a bootable 4GB partition for the OS; a 700MB swap partition, which is a bit bigger than the memory size and should be sufficient; and the remainder as a share for administrative purposes. The larger disk we left as a single partition, with the aim of opening it up for user sharing. However, 40GB looks a little meagre these days, so it's first in line for a future upgrade -- probably to 160GB.

Selecting the right Linux distribution is the first major task. We don't need big enterprise features, but we do want high reliability, driver availability and online support from forums and newsgroups. Eliminate distros aimed more at desktop use, and you're left with Novell's SuSE Linux and Red Hat Linux (RHL), both of which have plenty of open-source drivers and other software support. So, with some prior experience of Red Hat Linux 9 running on the 2.4.20-30.9 kernel, we downloaded the entire 2.24GB of ISO images and burned them onto CDs -- clearly, you'll need a broadband connection for this.
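A side note on the partitioning above: if Partition Magic isn't to hand, the same three-way split can be produced with Linux's own tools, either graphically with Red Hat's Disk Druid during installation or from a rescue shell. The rough sketch below assumes the 10GB drive appears as /dev/hdb (check with fdisk -l); the device name and exact sizes are illustrative only.

  # Create three partitions on the 10GB disk interactively with fdisk:
  fdisk /dev/hdb
  #   /dev/hdb1  about 4GB,   type 83 (Linux), flagged bootable: the OS
  #   /dev/hdb2  about 700MB, type 82 (Linux swap)
  #   /dev/hdb3  the rest,    type 83 (Linux): the administrative share
  mkfs.ext3 /dev/hdb1   # format the OS partition
  mkswap /dev/hdb2      # initialise the swap partition
  mkfs.ext3 /dev/hdb3   # format the administrative share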


Installation itself went fairly smoothly, with only a couple of glitches. We could not persuade the installer to work from another machine on the network over a tested and working FTP connection, and it also occasionally refused to accept that one of the install CDs was readable, even after we re-burned the disc and swapped in another CD drive. Eventually, though, Red Hat Linux 9 with the KDE 3.2 desktop environment was up and running, with all hardware recognised first time.
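With hindsight, verifying the downloaded images before burning would probably have saved some of that head-scratching. A quick sketch, assuming the standard Red Hat 9 ISO file names and a burner at SCSI address 0,0,0 (cdrecord -scanbus will report yours):

  # Compare against the MD5SUM file published on the download mirror
  md5sum shrike-i386-disc1.iso shrike-i386-disc2.iso shrike-i386-disc3.iso
  # Burn at a modest speed; marginal media is less likely to misbehave
  cdrecord -v speed=8 dev=0,0,0 shrike-i386-disc1.iso

The Red Hat installer also offers to test each disc before installation begins, which is worth the extra few minutes.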

Initial configuration
Usually, you would operate a Linux machine with an ordinary user account rather than the super-user root account, whose privileges are sufficient to render the OS inoperable. When setting things up for the first time, though, those overall privileges are appropriate. Just be careful.

The first job was to decide which services could safely be switched off, on the principle that only software whose purpose you know -- or at least believe ought to be running -- should be left live. The first candidate for switch-off was telnet: there are few justifications these days for telnetting into a server, since it's very insecure and there are better alternatives such as the cross-platform VNC. We switched off cups and other print-serving services, since we shan't be using those, along with NFS, Unix's native but insecure file sharing system. Other services disabled included SNMP, PCMCIA and, for the moment, httpd -- the Apache Web server. Services we ensured were running included Squid, the Web caching service; FTP, so we could access files from elsewhere on the network; and of course SMB, the Samba server that shares files for Windows networks.

Once it looked more like a server, we switched on the required features. First we created user accounts with RHL's user manager and assigned their home directories to folders on the 40GB disk; on this occasion there was no need to rummage around in Linux's text-based configuration files. With that basic task done, it was time to start building the server proper.
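For those who prefer the command line, Red Hat Linux 9 controls start-up services with chkconfig and the service command, and accounts can be created with useradd just as easily as with the graphical user manager. The sketch below reflects the choices described above; the service names (vsftpd for FTP, for example) are those on our installation and may differ on yours, and the username and mount point are purely illustrative.

  # Run as root. Stop the unwanted services starting at boot...
  chkconfig telnet off      # insecure remote logins (managed by xinetd)
  chkconfig cups off        # no print serving required
  chkconfig nfs off         # Unix file sharing, not needed here
  chkconfig snmpd off
  chkconfig pcmcia off
  chkconfig httpd off       # Apache stays off for now
  # ...and make sure the ones we do want are enabled and running
  chkconfig squid on        # Web caching
  chkconfig vsftpd on       # FTP access from elsewhere on the network
  chkconfig smb on          # Samba for the Windows clients
  service squid start
  service smb start
  # Create a user whose home directory lives on the 40GB data disk
  # ('alice' and the /data mount point are examples only)
  useradd -d /data/alice -m alice
  passwd alice

Running chkconfig --list afterwards gives a quick summary of what is set to start in each runlevel, which makes it easy to double-check the result.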
