Learn how network load balancing can jump-start performance

Written by Micheal Mullen, Contributor

Load balancing a network is a task that many administrators often overlook—to the detriment of their networks—even in the smallest of environments. Generally, poor network performance is treated by supplying more bandwidth to the clients (whether intranet, extranet, or Internet), along with faster backbones and more memory in the servers. However, the path that clients take to get to your network services sometimes needs to be optimized as well. You can achieve this through network load balancing.

Network load balancing allows system processing resources to be multiplexed, removing the traffic jam that results when, for example, a million packets flying across a fast network connection all try to squeeze into a single Web server. Let's look at the various types of network load balancing and see how they can improve the performance of your network.

Different generations of load balancers
Several generations of network load balancers are on the market, and they run the gamut from very simplistic to incredibly robust and comprehensive in their performance enhancements.

First-generation load balancers are basically round-robin DNS machines that enable HTTP sessions to be distributed among several IP hosts. These systems use basic pinging capabilities to ensure that session requests aren’t sent to failed servers and introduce an element of fault tolerance to multiserver installations.
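A minimal sketch of that first-generation behavior, with hypothetical host addresses: cycle through the pool round-robin style and skip any host that fails a basic ping check (the ping flags shown assume Linux).

# First-generation style: round-robin across a pool of Web hosts,
# skipping any host that fails a basic ping check.
# The addresses are hypothetical; "ping -c 1 -W 1" assumes Linux.
import itertools
import subprocess

SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
_rotation = itertools.cycle(SERVERS)

def is_alive(host):
    """Send a single ICMP echo request and report whether it came back."""
    return subprocess.call(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ) == 0

def next_server():
    """Hand each new session to the next host in rotation that responds."""
    for _ in range(len(SERVERS)):
        host = next(_rotation)
        if is_alive(host):
            return host
    raise RuntimeError("no servers responding")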

Second-generation load balancers not only check to see if the servers are still alive but also determine how well they are performing. That way, if a server becomes overburdened, incoming requests can be sent to other machines to ensure that the overall load is well balanced across all available resources.
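To make the difference concrete, here is a small sketch of a second-generation decision: pick the healthy server reporting the lowest load rather than simply the next one in line. The load readings are made up, since the real figures would come from the servers themselves.

# Second-generation style: route to the least-loaded healthy server.
# The load readings below are hypothetical stand-ins for figures the
# servers would report about themselves.

def pick_server(servers, is_alive, load_of):
    """Choose the healthy server reporting the lowest load."""
    healthy = [h for h in servers if is_alive(h)]
    if not healthy:
        raise RuntimeError("no healthy servers")
    return min(healthy, key=load_of)

loads = {"10.0.0.11": 0.82, "10.0.0.12": 0.35, "10.0.0.13": 0.57}
print(pick_server(loads, lambda host: True, loads.get))  # -> 10.0.0.12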

Finally, third-generation load balancers are designed to span the entire content delivery chain. As Web and network services have become more sophisticated, monitoring a single tier of Web servers is no longer sufficient. It no longer makes sense to send requests to a healthy first-tier Web host that might have defective back-end servers and/or applications behind it. New services, such as online sales, have started to use multiple tiers of servers for content, databases, and transaction processing engines. The fact that these electronic businesses now involve consumers’ money also makes it crucial to guarantee the maximum possible performance and reliability for customers. Thus, load-balancing vendors developed the third-generation solution for ensuring the health of all the resources in the content delivery chain.
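One way to picture the third-generation approach: map each Web host to the back-end servers it depends on, and only count the host as available when everything behind it checks out. The host names and health readings below are, of course, hypothetical.

# Third-generation style: a front-end Web host only receives traffic
# if its back-end dependencies are healthy too. All names and health
# readings here are hypothetical.

DELIVERY_CHAIN = {
    "web1": ["app1", "db1"],
    "web2": ["app2", "db2"],
}

HEALTH = {"web1": True, "app1": True, "db1": False,
          "web2": True, "app2": True, "db2": True}

def chain_healthy(front_end):
    """True only if the Web host and everything behind it respond."""
    return HEALTH[front_end] and all(HEALTH[dep] for dep in DELIVERY_CHAIN[front_end])

print(chain_healthy("web1"))  # False: db1 is down, so web1 gets no traffic
print(chain_healthy("web2"))  # True: the whole chain is healthy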

Two types of load balancing
Load balancers can basically be categorized into two groups:

  • Software-based load balancers
    The only problem I see with software-based load balancing is that a process done in software is inherently slower than a process done in hardware. The reason you needed a load balancer in the first place is that your network operates at wire speed and your server operates at the speed of the application or applications delivering the content. On the other hand, if your network requires something different or something special, software can easily be updated to handle your needs.
  • Hardware-based load balancers
    Hardware-based load balancing is usually done by routers and switches. These devices use application-specific integrated circuits (ASICs) and operate in hardware at wire speed. These are the fastest devices on the market. However, since they operate in hardware only, if your network business processes change, you have to wait for vendor development and enough of a business case for the vendor of your load balancer to produce what you need. Or you simply have to buy new hardware.

How does it work?
Load balancers modify their delivery based on the information they gather about your back-end servers via custom “agents” (designed by the load balancer vendor) or using some form of systems management tools. Each has its own strengths and weaknesses. Agents tend to be application- and/or hardware-specific and can closely monitor application processes. However, you usually end up being tied to a particular vendor.

Load balancers that use existing system management tools can monitor a wider range of applications and hardware through APIs and common protocols. Of course, using these methods to manage your business systems has some obvious security concerns that need to be addressed up front.
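For instance, a balancer that leans on existing management interfaces might simply poll a status URL on each server over HTTP. The endpoint below is hypothetical, and, as noted, this kind of traffic should be locked down and authenticated in practice.

# Polling a hypothetical /status endpoint for a JSON health report.
# A real deployment would authenticate and restrict this traffic.
import json
import urllib.request

def fetch_status(host, timeout=2.0):
    """Fetch a JSON health/load report from a server's status endpoint."""
    url = "http://%s/status" % host  # hypothetical management endpoint
    with urllib.request.urlopen(url, timeout=timeout) as response:
        return json.load(response)

# The balancer can then weight its decisions with whatever the report
# contains, such as an active-connection count:
# busy = fetch_status("10.0.0.11").get("active_connections", 0)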

As mentioned above, new-generation load balancers not only handle network and server performance problems, they can also direct traffic based on front-end requests to back-end content. In this scenario, the load balancer acknowledges the request and holds the session until the content is ready for delivery—a procedure called “delayed binding.”
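Stripped down to its routing decision, delayed binding looks something like the sketch below: accept the connection, read enough of the HTTP request to see what is being asked for, and only then bind the session to a back-end pool. The pools and path prefixes are hypothetical, and a real product would also relay the response and track session state.

# Content-aware routing with delayed binding: inspect the request
# first, then choose a back-end pool. Pools and prefixes are hypothetical.

POOLS = {
    "/cgi-bin/": ["10.0.1.11", "10.0.1.12"],   # CGI cluster
    "/video/":   ["10.0.2.11"],                # streaming cluster
    "default":   ["10.0.3.11", "10.0.3.12"],   # static pages
}

def choose_pool(request_line):
    """Map the requested path (e.g. 'GET /video/x.mpg HTTP/1.1') to a pool."""
    try:
        _, path, _ = request_line.split(" ", 2)
    except ValueError:
        return POOLS["default"]
    for prefix, pool in POOLS.items():
        if prefix != "default" and path.startswith(prefix):
            return pool
    return POOLS["default"]

def handle(client):
    """Given an accepted client socket, read the request line, then bind it."""
    request_line = client.recv(4096).split(b"\r\n", 1)[0].decode("latin-1")
    pool = choose_pool(request_line)
    # ...connect to a server in `pool` and relay traffic in both directions...
    return pool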

This kind of content-aware routing has enormous benefits, in that server clusters can be tuned for specific applications (CGI, streaming video, static page serving, cookie serving, etc.), and the load balancer will handle and deliver all the requests to the client while maintaining connection state. This is extremely important, for example, when it comes to Web shopping carts that use Secure Sockets Layer (SSL) transactions because SSL connections are extremely processor-intensive and must be persistently maintained across several Web processes and transactions.
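A common way to get that persistence is to hash on something stable about the client, such as its address, so the same client always lands on the same member of the SSL-terminating pool. A minimal sketch, with the pool addresses assumed:

# Session persistence ("stickiness"): hash the client address so a
# given client keeps hitting the same server for the life of its SSL
# session. The pool addresses are hypothetical.
import hashlib

SSL_POOL = ["10.0.4.11", "10.0.4.12", "10.0.4.13"]

def sticky_server(client_ip, pool=SSL_POOL):
    """Deterministically map a client address to one pool member."""
    digest = hashlib.md5(client_ip.encode("ascii")).hexdigest()
    return pool[int(digest, 16) % len(pool)]

print(sticky_server("192.0.2.10"))  # same input, same server, every time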

Final word
The type of network load balancing you choose will depend on how much traffic you need to balance and the complexity of the equation. Clearly, load balancing a popular—but simple—corporate intranet site will require a less involved solution than load balancing an e-commerce site, which will definitely benefit from the new generation of load balancers.

TechRepublic originally published this article on 15 January 2002.
