For a long time, expanding storage meant buying more servers and
disks. This has proved to be a costly, and often inefficient, way of
increasing storage.
We are now instead using storage area networks (SANs). This term
generally refers to a bank of storage devices serving multiple servers
and/or networks that may be accessed over a local area network (LAN) or
wide area network (WAN).
The key benefits here are increased disk utilisation, improved data
availability, improved performance, and protection of data. SANs also
reduce the loads placed on servers.
Fibre channel (FC) has long been the main way of interconnecting
servers and storage devices in SANs. However, it is very expensive. IP
storage, and in particular iSCSI, is a fairly new technology offering
lower costs, easier deployment, and greater connectivity than fibre,
as iSCSI uses standard Ethernet technology to create storage area
networks.
If you want to get more technical, iSCSI, or Internet SCSI, is an IETF
standard that maps SCSI blocks into Ethernet packets. The iSCSI
protocol is a method for transporting low-latency SCSI blocks across IP
networks.
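To make the encapsulation concrete, here is a minimal Python sketch of the fixed 48-byte Basic Header Segment (BHS) that starts every iSCSI PDU, as defined in RFC 3720. The field offsets come from the RFC; everything else about the PDU (digests, the data segment, the TCP session) is deliberately omitted, so treat this as an illustration rather than a working initiator.

```python
def build_bhs(opcode: int, data_segment_len: int, task_tag: int) -> bytes:
    """Minimal sketch of an iSCSI Basic Header Segment (RFC 3720).

    Every iSCSI PDU starts with this fixed 48-byte header; only a few
    fields are filled in here and the rest are left zeroed.
    """
    bhs = bytearray(48)
    bhs[0] = opcode & 0x3F                           # opcode: low 6 bits of byte 0
    bhs[5:8] = data_segment_len.to_bytes(3, "big")   # 24-bit DataSegmentLength
    bhs[16:20] = task_tag.to_bytes(4, "big")         # Initiator Task Tag
    return bytes(bhs)

# 0x01 is the SCSI Command opcode an initiator sends
pdu_header = build_bhs(opcode=0x01, data_segment_len=512, task_tag=1)
```

The point to take away is that the SCSI command and its data simply become another payload riding inside ordinary TCP/IP traffic, which is why no special cabling is needed.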
There are a few other terms we should clear up before we go on. You may
have come across the terms Direct Attached Storage (DAS) and Network
Attached Storage (NAS).
The differences here are quite simple. DAS units connect directly to a
server, typically via SCSI -- some DAS arrays allow multiple servers to
be connected directly to them and share their resources, and can also
provide server redundancy if required.
NAS boxes are specialised file servers for Ethernet-based networks.
Administrators can connect them anywhere on the local area or wide
area network for remote access. By simply attaching a DAS or NAS
device directly to a server or to your network, you can instantly
increase your storage capacity by 1TB or more.
Deploying iSCSI
There are three ways you can deploy iSCSI. The most obvious way
is natively: an all-iSCSI SAN in which every device has an Ethernet
interface, connects to an Ethernet LAN, and uses the iSCSI protocol
without any sort of translation or specialised equipment. This is the
simplest way for a small to mid-sized company to get into SAN
technology.
Then there is bridging, which joins Ethernet-based devices using the
iSCSI protocol to an existing FC SAN. This deployment is typically
carried out by organisations that have existing FC SANs and would like
to migrate to an Ethernet-based SAN. Specialised equipment has to be
used here; we talk more about this later on.
The third way of using iSCSI, extension, is essentially linking SANs across large distances.
iSCSI is limited to current gigabit Ethernet speeds, while fibre
channel can run at 2Gbps, with 4Gbps only recently made available.
10Gbps fibre is on the horizon, but so is 10Gbps Ethernet. For the time
being, if you have already invested heavily in fibre think carefully
before making an investment in iSCSI. At the moment the technology is
slower, and over larger distances the delays will be even greater.
We know that FC works, and it would be a safe bet to run mission
critical systems over fibre. For now, we can't say the same about
iSCSI. Vendors, in our minds, are having some doubts over large
implementations, particularly because of the large delays that can
occur. When 10Gbps Ethernet becomes easily available, we expect to see
deployments of iSCSI becoming more common.
The Storage Networking Industry Association (SNIA), which announced the
iSCSI standard a few years back, says small to medium businesses will
welcome this technology as a means to maximise their return on
investment. Despite the big hype, vendors have been somewhat slow in
certifying their products. We are seeing this technology move slowly,
but it is being used successfully by those keen to take it up.
iSCSI Cards, TOEs and Standard NICs
You are probably wondering how your servers will connect using iSCSI.
There are three ways: with a standard network interface card (NIC)
using an iSCSI driver, with a TCP offload engine (TOE) NIC with an
iSCSI driver, or with a host bus adaptor (HBA) designed for iSCSI.
iSCSI places quite a large overhead on server CPUs. Because iSCSI wraps
additional information around standard SCSI commands, it creates a huge
amount of TCP/IP work, which with a standard NIC falls to the host CPU.
TOEs and HBAs can look after the overheads associated with iSCSI
themselves, without burdening the main CPU. The main difference between
these two types of card is that iSCSI HBAs only handle iSCSI traffic,
whereas TOE cards handle all Ethernet traffic.
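To give a feel for the kind of per-packet work involved, here is a small Python sketch of the one's-complement checksum used throughout the TCP/IP family (RFC 1071). With a standard NIC this sort of arithmetic, along with segmentation and reassembly, runs on the host CPU for every packet; it is exactly the work a TOE or iSCSI HBA moves into silicon.

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement 16-bit checksum (RFC 1071), as used by IP/TCP/UDP.

    Folds the data into 16-bit words, adds with end-around carry,
    then returns the complement.
    """
    if len(data) % 2:                      # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF
```

Multiply this by every packet in a sustained gigabit storage stream and the appeal of offloading it becomes obvious.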
What will it cost?
Traditional SCSI DAS deployments have the lowest acquisition costs;
however, they bear much higher long-term costs, or total cost of
ownership (TCO), than FC SANs and iSCSI SANs. Furthermore, FC SANs have
a higher upfront cost than iSCSI SANs.
For this technology overview we take a first-hand look at the Snap
Appliance Server 4500 and how it implements iSCSI. We also look at some
products from Adaptec, HP, and Cisco.
Snap Appliance Server 4500
The Snap Appliance Server 4500 is a network attached storage (NAS)
device. It comes in a one rack unit (1RU) chassis and is marginally
deeper than it is wide. On the back of the unit are a serial port, a
video port, a pair of PS/2 connectors, a pair of USB ports, and a pair
of gigabit Ethernet ports. There is also an Ultra SCSI 160 port
provided by an internal card.
Inside was a 2.4GHz Intel Pentium 4 processor with 512MB of RAM and
provision for two extra memory modules. To gain access to the hard
disks you have to remove the front fascia then pull out the drive
cradles. There were four drive cradles in total, each holding a 250GB
Western Digital drive, giving a combined storage capacity of 1TB.
The Snap Server doesn't take long to install and configure.
All we did was use a network cable to connect it up to one of our
switches. From there we installed the Snap Server Manager on a Windows
2003 Server machine. The Snap Server Manager basically finds the Snap
box on your network and lets you assign an IP address to it. By default
it looks for a DHCP server to get an IP address. Then by launching an
Internet browser and typing in its IP address, you can configure the
NAS. On the downside, the Snap Server doesn't make use of access
control lists, which means anyone who manages to crack the password
can take control of the Snap Server.
Once we had the basics up and running it was time to configure iSCSI.
On Snap Servers, an iSCSI Disk is based on an expandable,
RAID-protected volume. To client machines it appears as a local SCSI
drive. Unlike standard Snap Server volumes, Snap Server iSCSI disks can
be formatted by the iSCSI client to accommodate different application
requirements.
Using the administration tools, you can create an iSCSI disk on an
existing volume. However, the iSCSI disk is formatted, managed, and
backed up from a client machine running iSCSI initiator software or
hardware.
Currently, the Snap Server supports the Microsoft Windows initiator,
which runs on Windows 2000, 2003, and XP and is available for download
from Microsoft at no cost. (Support for UNIX/Linux initiators will be
available in a future release.)
iSCSI disks are isolated from other resources on the Snap Server,
because the file system of an iSCSI disk is different from the Snap
Server's native file system, and also because they're managed by the
client. We suggest you either dedicate an entire Snap Server to iSCSI
disks or place the iSCSI disk on a separate volume, which is what we
did.
Backing up an iSCSI disk should be left to the clients. When attempting
to back up the iSCSI disk from a server, you may capture inconsistent
data; this can happen if clients connected to the iSCSI disk are
modifying data at the time. The client is the only party that can
maintain the consistent state required for data integrity. It is also
best not to run snapshots on volumes containing an iSCSI disk, as a
snapshot will disconnect clients that are writing to the iSCSI disk and
could itself contain inconsistent data.
We downloaded the latest Microsoft iSCSI Software Initiator version
1.04a from Microsoft's Web site. After installing the Initiator
software we set up the target portal which was basically the IP address
of the Snap Server. The default socket was set to 3260. Once we added
the target, we were able to view the two available disks we had
previously created on the Snap Server. Then by logging in, we initiated
an active session between our host machine and target NAS box.
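Under the hood, the discovery step the initiator performs works through iSCSI text requests: ASCII key=value pairs, each terminated by a NUL byte, sent to the target portal on port 3260. Here is a hedged Python sketch of just that encoding (the TCP session and PDU framing around it are omitted, and the example key values are our own):

```python
def encode_text_keys(pairs: dict) -> bytes:
    """iSCSI text-request keys are ASCII key=value pairs,
    each terminated by a NUL byte (RFC 3720)."""
    return b"".join(f"{k}={v}".encode("ascii") + b"\x00" for k, v in pairs.items())

# A SendTargets discovery request asks the portal to list its targets;
# this is what happens behind the "add target portal" step in the GUI.
discovery_payload = encode_text_keys({"SendTargets": "All"})
```

The target answers with the same key=value format, listing the target names the initiator can then log in to.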
Under My Computer, the NAS box appeared just like any other hard disk.
We were then able to transfer files to and from this disk. Microsoft's
initiator software would also automatically restore the connection on
reboot.
Phone: 1300 301 053
Price: AU$8054 for 1TB model
Adaptec ASA-7211C iSCSI Card and Adaptec iSA 1500 Storage Array
You may be just as curious as we were to find out what the performance
difference is between the software-based Microsoft iSCSI initiator and
a hardware-based iSCSI card. In theory, an iSCSI card should avoid
excessive CPU utilisation, and we set out to test this. Adaptec sent us
a copper-based Ethernet iSCSI card, which uses an onboard processor to
handle iSCSI functions for a complete TCP/IP offload.
We installed the card and its drivers, which was easy enough to do. The
drivers place an additional icon in the Windows Control Panel so you
can go in, configure the card, and add targets. We set the Snap Server
as a target by entering its IP address. (The Adaptec card
can also scan for targets if you don't know the IP address of your
target.) From here we expected to be able to login to the Snap Server.
After thinking we had done something wrong during the install, we
discovered that the Snap Server will only work with software initiators
and TOE cards and not with true iSCSI cards. We really didn't have
enough time to get a TOE card and run some tests, so unfortunately we
couldn't do much with the Adaptec iSCSI card.
An engineer at Adaptec informed us of the company's most recent
arrival, the iSA-1500 external storage array. It acts as a true iSCSI
target. The product has been available in the US for some time now and
has only just hit our shores in the last few weeks. It comes in a 1RU
form factor with four SATA drives (1TB) and is designed for small
businesses and remote offices looking for a cost effective shared
storage solution. It's a little more expensive than the Snap Appliance
Server, but it's a true iSCSI target; together with iSCSI cards it
should prove to be a fast and cost-efficient storage system. Hopefully
we will get a look at one of these and compare it with the Snap
Appliance Server using TOE cards.
Phone: 02 8875 7874
Price: interface card AU$999.90; disk array approx AU$15,000
We can make things a little more complex now. As we mentioned earlier,
iSCSI SANs can be connected over a wide area network with standard
Ethernet equipment. However, when connecting to fibre channel SANs, an
IP storage device is needed to convert the FC protocol to iSCSI.
Both IP storage routers and switches allow users to extend the reach of
the FC SAN and bridge FC SANs to iSCSI SANs. Having this functionality
allows you to then perform FC-to-FC switching, FC-to-iSCSI switching,
or FC-to-gigabit Ethernet switching.
HP StorageWorks IP Storage Router 2122-2
While visiting HP's Storage Centre in Rhodes, Sydney we had a look at
HP's StorageWorks IP Storage Router 2122-2. This is HP's second
generation IP storage router for small to medium-sized businesses and
enterprise workgroups. This device helps migrate servers from a DAS to
a SAN environment and consolidate storage.
This IP storage router supports iSCSI bridging: it can bring storage
resources to servers in the IP network. It also supports FCIP for SAN
extension, extending the SAN infrastructure across WANs.
The SR2122-2 iSCSI router has a list price of AU$17,045.
Cisco MDS 9000 IP Services Module
Another impressive device is the MDS 9000 IP Services Module from
Cisco. It supports the interconnection of remote SANs and extends SAN
connectivity to IP-enabled servers using FCIP and iSCSI.
It supports the full range of services available on other MDS 9000
family Switching Modules including virtual SANs, security, and traffic
management. It includes eight hot-swappable, small-form-factor-pluggable
(SFP) LC gigabit Ethernet interfaces, and all ports are configurable
for either FCIP or iSCSI operation on a port-by-port basis.
The MDS 9000 IP Services Module has a list price of AU$74,545.
This article was first published in Technology & Business magazine.
About RMIT Test Labs
RMIT IT Test Labs is an independent testing institution based in
Melbourne, Victoria, performing IT product testing for clients such as
IBM, Coles-Myer, and a wide variety of government bodies. In the Labs'
testing for T&B, they are in direct contact with the clients
supplying products and the magazine is responsible for the full cost of
the testing. The findings are the Labs' own -- only the specifications
of the products to be tested are provided by the magazine. For more
information on RMIT, please contact the Lab Manager, Steven Turvey.