It could be argued that very little has actually changed in the tools of the trade and the methodologies I work with, as both make heavy use of virtualization and systems automation, but the reality is that I haven't done much work in the datacenter itself since shifting my focus.
In my professional life, I help hosters, service providers and ISVs build new cloud offerings, whether they are deployed as IaaS, desktops as a service or hosted/subscriber applications via SaaS. Whether it's public cloud or private cloud, I'm all over it.
Until recently, though, I hadn't practiced what I preached. Vestiges of my earlier life still existed in the form of my own private server lab, which I used for software testing, storing files and a multitude of other things.
I'm a server guy. I've always been a server guy.
Over the last seven years I invested in 1U and 2U x86 servers, building them new from parts or buying them second-hand. I needed systems that could run virtualized instances of Windows and Linux under various hypervisor platforms, because I had little room for on-the-job experimentation.
Over time my testing demands increased and I needed more storage. So I bought more hard disks. I needed to speed up performance, so I upgraded CPUs and installed SSDs. I needed to test and run larger workloads. So I bought more memory.
When you are an enterprise, there are cycles for upgrades and replacement. Systems are assets that are depreciated and serve ongoing business functions, and IT budgets justify their existence as long as they serve the needs of the business.
But as an independent systems professional this was a tough pill to swallow. How much money should I spend per year on maintaining server equipment and my PCs? $2000? $5000? I made a very good living, but it was hard to justify the expense.
I continued to do it though, because I wanted to further my education on competing technologies, whether it was operating system, virtualization, networking or storage-related.
It wasn't just the cost of the equipment that was a burden, however. I had an entire baker's rack in a spare bedroom dedicated to the servers, and they made a lot of noise, generated a lot of heat, and consumed a lot of electricity.
I couldn't keep them running all the time because my electric bill would be outrageous, and they made such a racket that I had to turn them off at night or we couldn't get any sleep.
It finally came to a head about three or four months ago, when I was going to buy two new servers to replace the aging systems I own now. With my oldest boxes going on six years old, they are at the point where running modern hypervisors on them, whether Hyper-V, VMware or KVM-based, is a bit of a challenge.
They'll still run most stuff bare metal perfectly fine, but that's a huge waste of resources and makes it that much harder to test the things I want to test.
I did the math, and it probably would have run me a good $5000-$7000 to get what I wanted in terms of CPU horsepower, memory, networking and storage. That kind of capital investment for what amounted to a testing/lab rig for someone who no longer owns their own consulting firm didn't make any sense anymore.
I looked at the pricing on Microsoft Azure and Amazon Web Services, as well as a number of private cloud offerings, and the numbers frankly astounded me. I'd be out of my mind to buy new equipment.
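The math behind that conclusion can be sketched as a rough break-even comparison. The ~$6,000 hardware figure comes from my own estimate above; the hourly VM rate, electricity cost and usage hours below are illustrative assumptions, not quoted prices:

```python
# Back-of-the-envelope comparison: owning lab servers vs. renting cloud VMs.
# All rates here are assumptions for illustration -- check current price sheets.
hardware_cost = 6_000.0        # up-front server purchase (midpoint of my estimate, USD)
lifespan_years = 3             # assumed refresh cycle before the boxes age out
power_per_month = 60.0         # assumed electricity cost for a rack of servers (USD)

# Amortize the purchase over its useful life, then add running costs.
owned_monthly = hardware_cost / (lifespan_years * 12) + power_per_month

vm_hourly = 0.20               # assumed rate for a midsize IaaS VM (USD/hour)
hours_per_month = 80           # lab VMs only run when I actually need them

cloud_monthly = vm_hourly * hours_per_month

print(f"owned: ${owned_monthly:.2f}/mo  cloud: ${cloud_monthly:.2f}/mo")
```

The key asymmetry is that owned hardware costs the same whether it is busy or idle, while pay-per-hour VMs cost nothing when deprovisioned, which is exactly how a part-time lab gets used.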
Let's start with storage. Enterprise-grade, locally-redundant cloud "blob" storage is ridiculously cheap now, to the tune of about two cents a gigabyte per month for the first terabyte, and it gets cheaper after about the first 10 terabytes. If you want georedundancy, you pay a little more.
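That tiered pricing can be sketched as a small calculation. The tier boundaries and rates below are illustrative assumptions patterned on the figures in the text, not any provider's actual price sheet:

```python
# Illustrative tiered blob-storage pricing (USD per GB-month).
# Real rates vary by provider, region and redundancy level.
TIERS = [
    (1_000, 0.020),          # first ~1 TB at ~2 cents/GB
    (9_000, 0.019),          # next ~9 TB (assumed slightly cheaper)
    (float("inf"), 0.018),   # beyond ~10 TB (assumed cheaper still)
]

def monthly_storage_cost(gb: float) -> float:
    """Sum the cost of `gb` gigabytes across the pricing tiers."""
    cost, remaining = 0.0, gb
    for tier_size, rate in TIERS:
        used = min(remaining, tier_size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(f"2 TB: ${monthly_storage_cost(2_000):.2f}/month")
```

Even a multi-terabyte archive comes out to tens of dollars a month, which is hard to beat with self-managed disks once you factor in redundancy and replacement.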
I could use that blob storage accessed by virtual machines I could run in IaaS, or I could use it as direct storage over the Internet, and build my own secure backup vault with it.
If I weren't as concerned about RTOs or potential restore costs, I might be enticed to use something like Amazon Glacier, which is priced at about one cent per gigabyte per month and functions more like a tape drive than a random-access device.
My multi-terabyte photo collection would have been a good candidate for that, but I chose to put it all on Azure blob, so I could access it directly from my PC as a hard drive and backup target using software like Cloudberry and GoodSync.
For personal data storage? OneDrive is awesome. I can access that from every device I own, from anywhere. Admittedly, I don't use many Google services anymore, but I have to give serious props to Google Drive for driving consumer storage prices way, way down.
The VMs? Look, messing around with hypervisors is fun at work, but let's face it, I really don't want to run my own infrastructure anymore. I only want to run workloads and operating systems when they need to run, and I want to provision and deprovision them as necessary.
Azure gives me 10 different types of VM configurations for general purpose, compute intensive or memory intensive testing. Amazon and Google have similar offerings as do any number of independent cloud providers.
Cloud storage and VM compute is a commodity, period.
Provisioning a multi-tier application environment is as easy as clicking a few links in a web browser. It was never that effortless when I ran my own systems. And if I need something as simple as a website or a hosted database without firing up a whole VM? I can do that with my cloud provider as well.
Sure, I used to love firing up the servers and making hardware hum. But that's not me anymore. Now I just want to make the applications work, I don't want it to break the bank and I want it to run more reliably than I can possibly build it myself.
And when you get down to it, that's what the Cloud is ultimately all about.
Have you put your servers out to pasture yet? Talk Back and Let Me Know.