Lieberman redux and raising your Web presence: You get the hosting you pay for

Written by David Berlind

Last month, I posted a blog regarding an accusation coming out of Senator Joe Lieberman's camp that his campaign's Web site had been hobbled by supporters of his competitor in the then-upcoming Congressional election, Ned Lamont. Lamont went on to win the bitterly fought Democratic primary, but not before it came to light that it wasn't a denial-of-service attack that brought Lieberman's site down. Rather, the problem turned out to be a poor choice by one of Lieberman's IT people when it came to Web site hosting. Lieberman's campaign apparently got what it paid for when it chose to host its site on a server that was shared with 73 other Web sites.

There's a very important lesson to be learned from the fact that Lieberman's site crashed at the moment that his campaign least needed such a failure. More importantly, based on my personal experiences so far with selecting Web hosting service providers, I have a recommended provider.

The lesson, of course, is that no matter what it is you do -- whether you're a political candidate, a small business, or a corporate behemoth -- you have to take stock of your information technology, decide which of it is critical to your mission (aka, "mission critical") and then decide, within your range of affordability of course, what must be done to make sure that your technology doesn't fail you when you most need it to be operational.

In establishing a Web presence, which practically every for-profit or not-for-profit entity (including political candidates) must do, there are a lot of decisions that need to be made. But the one on which most others depend is the hosting decision. When it comes to hosting, there are a lot of options. What is it you're going to host? If it's just a blog, then perhaps you're better served by an outfit like SixApart that specializes in blog hosting (with its TypePad service). Last year, I helped a local mayoral candidate establish a TypePad-based blog as his only Web presence and he was more than happy with how that choice helped him get his message out.

If it's a Web site that requires more flexibility -- in other words, it may serve a mixture of page types ranging from static HTML to forms to blogs to e-commerce related pages -- then, you're clearly shopping for something more along the lines of a Web server along with some associated disk space that can be more freely populated with the content, designs, and layouts of your choosing. 

For Web hosting, Lieberman was apparently relying on a site called MyHostCamp, where the monthly fee for such hosting (along with 10GB of network bandwidth usage) is a paltry $15. To the extent that any political candidate deems his or her Web site critical to campaign success or failure (obviously a debatable item, since many might argue that Lieberman's Web site ultimately had no bearing on the primary's outcome), the next question is what measures have been taken to make sure that site is up and running when it needs to be (like the days leading up to and including election day).

When it comes to Web hosters, there are a lot of different tiers of providers, and one of those tiers -- the one at the very bottom -- is the one where you have what looks like your own dedicated Web server but where, in reality, everything about it is shared. The physical system and all that goes with it (the network connections, power supplies, disk drives, etc.) are shared. The underlying software (the operating system, the Web serving software, etc.) is shared.

If the server is based on Linux or some kind of Unix, you can gain terminal access to it and even load third-party software onto it. But root-level access is unavailable, and the limitations placed on the amount of drive space and network bandwidth per month that you're allowed to use can sometimes produce undesirable effects. For example, you may be allowed 5GB of storage. But if you eat up a couple of gigs of that with some downloadable audio and/or video that you'd like to make available on your Web site (as a political candidate might), it would take only a few downloads before your monthly allocation of bandwidth was totally consumed.
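A quick back-of-the-envelope calculation shows just how fast a bandwidth cap evaporates. The numbers below are hypothetical, loosely based on the figures in this article (a 10GB monthly transfer allowance and a 2GB media file):

```python
# Back-of-the-envelope math: how many complete downloads of a media
# file fit inside a monthly bandwidth cap? All numbers here are
# hypothetical examples, not measurements from any real hosting plan.
monthly_bandwidth_gb = 10   # e.g., the $15/month plan's transfer allowance
media_file_gb = 2           # one downloadable audio/video file

# Each full download moves the entire file over the wire.
downloads_before_cap = monthly_bandwidth_gb // media_file_gb
print(downloads_before_cap)  # -> 5: just five downloads exhaust the month
```

Five visitors grabbing a single campaign video, and the site is effectively offline (or racking up overage charges) for the rest of the month.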

This rock-bottom-priced tier of hosting is also the one that's most risky to use for mission-critical applications. There are typically no failover or fault-tolerance options in the event that something goes wrong, and without root-level access to both the system and the configurations of the underlying "shared" applications, you're pretty limited in how you can overcome that problem through commonly used brute-force methods.

None of this is meant to disparage the low-level tier of Web hosting. For certain types of applications where high availability -- particularly at certain times -- isn't a requirement, it can be perfect. In fact, when it came to starting mashupcamp.com and establishing its Web presence (which included other applications like MediaWiki), I was pretty much goo-goo ga-ga over one such provider -- Jumpline -- and wrote as much.


But where and when some more resilience and flexibility is in order, Web hosting is not the place to cut corners, and despite how well Jumpline's virtual server architecture has served us in running mashupcamp.com, I've come to realize that we need something a bit more commercial-grade.

As Mashup Camp continues to grow in popularity, one question haunted me regarding how deeply intertwined the Web site -- the wiki in particular -- is with the event. Like other unconferences, the way Mashup Camp works is that the attendees not only lead the various discussions that take place at the event, they also document the content of those discussions by posting summaries to the wiki. You can see many such notes that were posted to the Mashup Camp wiki by going to the last Camp's schedule & notes grid and clicking through any one of the listings.

So, here's the question that haunted me: what if the Web site, including the wiki, became unavailable during the event and none of the attendees could access it to read the summaries as they were being posted, or to update them? Bummer, right? Enough to ruin the event? Perhaps. Perhaps not. Worth taking a chance on? Probably not.

So, I started to do some homework to find out what the options were. For starters, I did not want to be sharing a server with other sites, any of which could suddenly get a rush of traffic the way Lieberman's apparently did, overwhelming the entire server and bringing down all its sites with it. Some people I spoke to posited that when multiple customers are sharing one server, the hosting provider is more likely to respond on a timely basis, given how many customers' business could be at risk. Yeah, sure. Tell that to the 72 other sites that went down the tubes with Joe Lieberman's site. Not to mention that Joe Lieberman's site wasn't exactly up and running again in a reasonable amount of time.

With mashupcamp.com, we have some requirements that aren't at all unlike the requirements a startup might have. Because the open-source MediaWiki was our wiki of choice, vBulletin was our threaded discussion forum technology of choice, and the open-source WordPress was our blogging solution of choice -- and all of those run on Linux -- we needed the server to run Linux. No specific distribution of Linux, but Linux nonetheless. Some hosters specialize in running Windows-based servers. Our choice of applications immediately ruled them out.

One advantage of the shared hosters is that they're in complete control of physical server, operating system, and shared application maintenance. In other words, if the hardware has problems, the operating system needs to be patched, or the applications need to be upgraded, the hoster takes care of that for you. These are great headaches not to have, and in true outsourcing fashion, the special value or differentiation that mashupcamp.com could derive from insourcing such work is so insignificant that it makes the most sense to outsource it, even if it means spending a few extra bucks.

So, now we're zeroing in on the requirements. It's got to be Linux. We don't want a shared system, but we'd like to outsource some of the more routine server, OS, and application management tasks. High availability, especially during an event, is a requirement. If the site goes down, it needs to be back up in minutes, not hours.

That last availability requirement -- enumerated in very measurable terms -- raises all sorts of other questions, the answers to which further helped to dictate the final solution. The first of these is, of course: what could go wrong that would lead to an enduring site outage, the kind of outage that could spoil an event? Here's the short list:

  • At the hardware level, the storage could fail.
  • At the hardware level, the network interface card could fail.
  • At the hardware level, the power supply could fail.
  • At the hardware level, the motherboard (including the microprocessor) could fail.
  • At the network level, the local area network could fail.
  • At the network level, the Internet connectivity could fail (or be disrupted).
  • At the software level, the operating system or an application could crash (or be disrupted).

Looking at that list, it became pretty clear that one of the options -- buying a physical server, locating it in the place of my choosing (anywhere from my house to a professionally run co-location center), and outsourcing the routine systems management to an expert -- was overly complicated. It involved far too many parties when a single managed hosting provider could do the trick.

In the sweet spot of most managed hosting providers is a type of hosting service where a physical server owned by them and located in one of their datacenters is dedicated to each customer, and their job is to keep that server running in tip-top shape for that customer. For example, it's their job to worry about fending off denial-of-service attacks (a network disruption). If a hard drive crashes, it's their job to discover it on a timely basis and replace it. If there's a security patch issued for the operating system or one of the basic applications, they take care of applying it and making sure it works.

Determining which of the many managed hosting providers out there are best capable of meeting these basic reliability requirements takes some research. Some of the reliability can be baked into a managed hosting provider's choice of datacenter and system design. How redundant, for example, is the datacenter's underlying infrastructure? If the local utility fails, where does the datacenter get the power for its servers and cooling systems? If the datacenter's backup power generation system fails, what's the backstop? A second power generation system? An uninterruptible power supply (UPS) that gives all the servers in the facility a few minutes to gracefully power down (so data isn't lost)? What about Internet connectivity? How fast are the datacenter's connections to the Internet, and how many different ones does it have in the event that one fails?

Above and beyond that, what's done at the server level to account for all the problems that can go wrong there?  What if a network interface card (NIC), power supply, or disk drive fails? Or what about the motherboard or microprocessor?

During my summer vacation, I researched managed hosting providers. I talked to lots of salespeople, explained my situation to them, and made clear that I wasn't very comfortable with any single points of failure. Feeling as though the salesperson's level of comfort in discussing my high-availability needs was an indicator of whether the managed hoster he or she worked for was really capable of delivering what I needed, I explained my concerns and listened carefully as they attempted to talk me through their solutions. Without naming names, most failed the test. Some even asked why I would want to do a thing like that -- "that" being the establishment of a failover server in the event that something so awful happened to the primary server that it was incapable of recovering on its own (despite the fault-tolerant technologies built into it).

But one provider -- Appsite.com -- stood head and shoulders above the others. In the presales discussion, the salesperson -- Ed Buck -- knew immediately when the technical discussion required the input of someone more technical than he was, and it wasn't long before one of the engineers from Appsite's parent company Vericenter (provider of super commercial-grade hosting to large businesses like Yahoo, Microsoft, and IBM) was on the line with me, talking me through not just how two servers with some load balancers in front of them could do the trick (and how Appsite would take care of all of that too), but what we'd need to do at the software level to essentially program automatic failover in the highly unlikely event that our primary server failed.
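The application-level failover idea -- try the primary server first, fall back to the secondary if it doesn't respond -- can be sketched in a few lines. This is an illustrative sketch in Python, not Appsite's or Vericenter's actual implementation; the host names and the `connect()` stub are hypothetical stand-ins for a real database driver call:

```python
# Illustrative sketch of application-level failover: attempt the
# primary database host, and if it doesn't respond, fall back to the
# secondary replica. Host names and connect() are hypothetical; a
# real application would call its database driver here instead.

PRIMARY = "db-primary.example.com"      # hypothetical primary server
SECONDARY = "db-secondary.example.com"  # hypothetical replicated standby

def connect(host):
    """Stand-in for a real database connect call. For demonstration,
    we pretend the primary is down so the fallback path runs."""
    if host == PRIMARY:
        raise ConnectionError(f"{host} is not responding")
    return f"connection to {host}"

def connect_with_failover():
    try:
        return connect(PRIMARY)
    except ConnectionError:
        # Primary unreachable: fail over to the replicated secondary.
        return connect(SECONDARY)

print(connect_with_failover())  # -> connection to db-secondary.example.com
```

The design point is that the failover decision lives in the application's connection logic, which is why the engineers' advice (below) involved fixing the PHP connection strings rather than anything at the network layer alone.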

I say unlikely because Appsite's recommended server configuration includes redundant power supplies, redundant NICs, and mirrored storage in case any one of those components experiences a failure. Although Vericenter has datacenters all over the country, the one in Atlanta where Appsite runs its customers' servers has redundant power generation systems and Internet connectivity (here's Appsite's impressive one-pager on the facility).

On the software front, Vericenter's engineers talked me through how we'd need to, first, take advantage of MySQL's replication capabilities (replicating any database changes from the primary server to a secondary server); second, fix any PHP connection strings so that the applications look for the secondary database in the event that the primary one isn't responding; and third, update PHP to use the mysqli extension, which is required to support the more advanced PHP connection strings. Going back to some of the conversations I had with other hosters, they were telling me that rsync was all I needed. Yes, we'll be using rsync to make sure static HTML pages and other files are "mirrored" from the primary server to the secondary one. But you apparently can't just mirror open databases with rsync. For that, you need to engage the database's native replication facilities and then tune your apps (e.g., PHP) appropriately.

Compared to the other hosters I talked to, the "engineering discussion" scored major points with me. Sure, Appsite's Web site advertised 24x7 support (where was Lieberman's hosting service when his folks needed to make that call?), but you never know what you're going to get. Some ape watching the WWF who takes the call and scribbles something onto a sticky note? Or real, quality attention? The presales call with Vericenter's engineers is what established my comfort level that Appsite had the right people, capable of providing the support we needed.


So, I closed the deal, and the servers were turned on last week. Soon enough, Appsite's promised 24x7 support would be put to the test. The cost is substantially more than the $15 per month that Lieberman paid (more like multiple hundreds of dollars). But trust me, it is well within the means of a tenured Congressman's campaign budget. You get what you pay for.

As we went to install MediaWiki, we discovered that the Red Hat Enterprise Linux (RHEL) installation (Appsite's standard Linux configuration) on our server was missing MySQL support in its PHP installation. I know. It seems crazy. But we weren't going to touch the PHP installation, since that was one of the parts of the RHEL installation that Appsite supported.

<Sidebar>: In fact, in a bit of support arbitrage (and in an interesting peek into how the managed hosting business works), Appsite's support of RHEL is a mirror image of that which is available from Red Hat itself, because Appsite's promise to keep its Linux customers up and running is backed by its own support contract with Red Hat. Although it's not a dealbreaker for mashupcamp.com, the downside of this arrangement is that Appsite won't support us at all if we replace an officially Red Hat-supported component with one that's unsupported. The impact of this bit of inflexibility in our case is that we can't use the most recent version of MediaWiki, since it relies on PHP5 and, currently, for RHEL, Red Hat only supports PHP4 (here's a threaded discussion I found on the issue). This is expected to change with RHEL 5, the release of which has been delayed until the end of 2006, according to stories that surfaced at last month's LinuxWorld.

For businesses that absolutely require unsupported components, the hosting choices narrow even further. You can fall back to strict co-location (where it's your hardware sitting in a datacenter and you assume total responsibility for maintaining the system), unmanaged hosting (where it's the hoster's hardware, but you absolve the hoster of any OS/application support obligations), or negotiated managed hosting (where you negotiate with the hoster to provide support for normally unsupported components). Or maybe there's a hoster out there that, as part of its services, offers support for certain components that might not be supported by the OS provider. Back in early 2005, for example, in recognition of the appetite for PHP5, Chicago-based shared hosting provider Hostway announced support for it.</Sidebar>

So, I logged into my Appsite management console, filled out a trouble ticket, said we needed the problem fixed immediately, and left multiple phone numbers where I could be reached. Barely 10 minutes later, the phone rang, and it was someone from Appsite double-checking the date I put in my message. He was right; I had entered the wrong date. He said thanks and told me someone would get back to me. About another 15 minutes passed, and "Chris the engineer" called to say the problem was fixed.

So what, you say? You pay the big bucks and you should get this kind of support. Fair enough. I agree. It's just that I filed the original ticket sometime around 7pm on the Sunday night of Labor Day weekend, and the episode was over in about half an hour. That, my friends, is service. Yes, you should get what you pay for. But these days, when you actually get it, it feels pretty good -- a lot better than that sour feeling you get in your stomach when you buy something only to realize later that it was a bad decision (something that's happened to all of us). OK. So, we're only a couple of weeks into the contract with Appsite, and I may feel differently in a year. But right now, I feel pretty good about the decision, and at this point, this is the best "out of the box" experience I've had with technology and the support that goes with it in a long, long time. So, if you need hosting and your needs are a bit like those of mashupcamp.com, perhaps Appsite should be on your short list.
