Special Feature
Part of a ZDNet Special Feature: A Guide to Data Center Automation

Eight signs you could be automating more of your data center

Organizations with existing data centers can save time and money by adopting automation tools for data center management. Here are eight signs you can do more to lighten your day-to-day workload.



As commodity server hardware becomes more powerful, infrastructure cost (in raw performance terms, such as IOPS per dollar) continues to plummet. As a result, it has become substantially cheaper to largely automate the software side of data center administration. In essence, the era of coffee-fueled IT staff spending their days pushing around electrons in order to keep the lights on at a given organization has ended, as data centers can be automated to manage computational, storage, and networking resources, as well as programmatically handle software lifecycle management and security patches.

However, organizations vary in how far they have taken this automation. A sparkling-new data center built with an automation-first mindset is likely to follow best practices. In practice, organizations adopting automation on mid-lifecycle hardware most likely have a hodgepodge of practices in place for a variety of reasons: legacy business-critical software often does not work well with automation tools, and IT financial constraints prevent applying new device licenses to older hardware.

Much of this automation can, however, be achieved largely with resources already available in Linux distributions and with the assistance of open-source software. Here are eight signs you can do more to automate your data center, without necessarily breaking the bank to do so.

1. Servers require manual security updates

If your servers are not automatically applying security updates on a scheduled basis, your environment is easily susceptible to attacks by malicious actors weaponizing publicly known security vulnerabilities. For RHEL and CentOS, the yum-cron package automates OS update tasks. As these updates are tested by Red Hat prior to being pushed, there is little to no risk of an automated update wreaking havoc on a known good deployment. For those more on the bleeding edge, dnf-automatic provides the same functionality on Fedora. Likewise, for those on the Debian side of the fence, the unattended-upgrades package automates security updates for Debian, Ubuntu, and derivatives of those distributions.
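As a minimal sketch of the Debian/Ubuntu and RHEL/CentOS paths described above (package and file names are current as of recent stable releases; verify against your distribution's documentation), enabling automatic security updates takes little more than a package install and a short configuration file:

```
# Debian/Ubuntu: install and enable unattended security upgrades
apt-get install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades

# /etc/apt/apt.conf.d/20auto-upgrades -- check for and apply updates daily
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

# RHEL/CentOS 7: install yum-cron and start its scheduled-update service
yum install yum-cron
systemctl enable --now yum-cron
```

On Fedora, substituting dnf-automatic for yum-cron follows the same pattern: install the package, then enable its systemd timer.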

2. Maintenance downtimes bring business operations to a standstill

For organizations running everything from a centralized data center -- including hosted (soft) PBX systems -- even a few minutes of downtime for IT staff to perform administrative tasks can be exceedingly costly for the business. If data center maintenance requires frequent human intervention to the detriment of other IT operations, and negatively impacts the organization at large, it is time to increase the level of automation.

3. Heterogeneous hardware and software require extensive shimming to interoperate

Business requirements necessitate different environments for different applications, particularly in the case of proprietary solutions. However, forcing mutually incompatible software to interact with the same data set often requires extensive human intervention to make the output of one program comprehensible as input to another program (an issue which can be exacerbated if one or both programs depend on deprecated technologies, such as Python 2.x or Java EE 6). Likewise, automating tasks requires thoughtful timing to prevent data corruption events resulting from two programs attempting to use the same data set simultaneously.
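The timing concern above can be handled with advisory file locking, so that two programs cooperating on the same data set serialize their access instead of corrupting it. Below is a minimal sketch (the file names and the JSON data set are invented for illustration; `fcntl` is Linux/Unix-only):

```python
import fcntl
import json
import os

# Hypothetical shared data set that two programs must not touch simultaneously.
DATA_FILE = "shared_dataset.json"
LOCK_FILE = DATA_FILE + ".lock"

def update_dataset(key, value):
    """Acquire an exclusive advisory lock before touching the shared file.

    Any cooperating process that locks the same lock file blocks here until
    the current writer finishes, preventing interleaved partial writes.
    """
    with open(LOCK_FILE, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)   # blocks until the lock is free
        try:
            if os.path.exists(DATA_FILE):
                with open(DATA_FILE) as f:
                    data = json.load(f)
            else:
                data = {}
            data[key] = value
            with open(DATA_FILE, "w") as f:
                json.dump(data, f)
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)

update_dataset("records_processed", 1000)
update_dataset("status", "complete")
```

Because the lock is advisory, it only protects against programs that also take the lock; wrapping each tool's access in a shim like this is exactly the kind of glue work the section describes.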

4. Adding new hardware to the network takes hours or days

IT decision makers may be tempted to deploy self-built racks in order to reduce expenses, in purely hardware terms, compared to plugging in a prequalified system from a vendor. However, the labor cost of having IT staff configure new hardware during deployments can absorb the supposed savings of building custom racks, as the attention of IT staff is pulled away from the day-to-day operations they would handle under normal circumstances.

5. You are not using containers or virtualization solutions

Containerizing your deployed apps makes it substantially easier to programmatically create, modify, and destroy instances without the risk of adversely affecting other parts of the system in the process, and allows multiple instances of the same application to run on one system without conflicts. Application containers such as Docker may satisfy your requirements, depending on regulatory requirements for isolating information and the nature of the workloads on your server. By comparison, OS-level containers such as OpenVZ or VServer have somewhat more overhead, as those solutions replicate an entire operating system userland for each instance.
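As an illustrative sketch of this flexibility (assuming Docker is installed, and using the public nginx image purely as a stand-in for your own application), running two isolated instances of the same app side by side takes one command each:

```
# Start two instances of the same image on different host ports; each
# container gets its own filesystem, process table, and network namespace.
docker run -d --name app-a -p 8081:80 nginx:stable
docker run -d --name app-b -p 8082:80 nginx:stable

# Destroy one instance without affecting the other.
docker rm -f app-a
```

The same commands lend themselves to scripting, which is what makes programmatic instance lifecycle management practical.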

Going a step further, full hardware virtualization, such as the open-source kernel-based virtual machine (KVM), proprietary hypervisors from VMware's portfolio, or Microsoft Hyper-V, provides the highest level of software isolation, though this comes with considerably more resource overhead than application or OS-level containers.

6. You are not using software-defined storage

Software-defined storage is an increasingly popular technology trend in many organizations, and can greatly ease the task of managing data storage in use cases where enough data is generated inside the organization to warrant this level of storage management. In an interview with Evan Koblentz at TechRepublic, ClearSky Data CTO and cofounder Lazarus Vekiarides points out that, "for most folks, no, [software-defined storage] is probably not going to work for them because it's too much trouble," adding that the quantities of data generated by hyperscale companies such as Google, Amazon, and Facebook are not representative of most companies.

It is challenging to give clear-cut guidance on the threshold at which software-defined storage becomes advantageous. One approach focuses on the number of individual objects stored, while another focuses on the capacity those objects occupy. A third approach looks at the availability requirements for data within your organization rather than at object counts or capacity, though this requires measuring other business metrics, such as workflows.

Storage management solutions that are easier to implement (though less comprehensive) are explained in greater depth in TechRepublic's cheat sheet for storage management software.

7. You are not using software-defined networking (SDN)

Manual switch configuration can be a significant time sink for IT staff working to bring new racks online, and post-install configuration changes can require nearly surgical precision to avoid accidentally disconnecting critical production systems. Adopting software-defined networking eases the process of bringing new hardware online and greatly reduces the complexity of changing network configurations post-installation.

Yasuhiro Yamazaki, director of Japanese software firm axsh -- developers of the open-source LiquidMetal SDN and VM orchestration project -- noted that using Ansible to automate server configuration changes is easier with software-defined networking, as Ansible scripts working in tandem with LiquidMetal can easily be verified as correctly changing the network structure.
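A hypothetical Ansible playbook fragment (the host group, file paths, and handler names here are invented for illustration) shows the declarative style that makes such configuration changes easy to review, repeat, and verify:

```
# playbook.yml -- illustrative example; group names and paths are assumptions
- hosts: web_servers
  become: true
  tasks:
    - name: Ensure nginx is installed
      package:
        name: nginx
        state: present

    - name: Deploy network interface configuration from a template
      template:
        src: templates/ifcfg-eth1.j2
        dest: /etc/sysconfig/network-scripts/ifcfg-eth1
      notify: restart network

  handlers:
    - name: restart network
      service:
        name: network
        state: restarted
```

Because the playbook describes the desired end state rather than a sequence of manual steps, rerunning it is safe, and its effect on the network can be checked against the SDN controller's view of the topology.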

8. You do not have a software-defined data center (SDDC)

A software-defined data center (SDDC) combines software-defined computing (containers or virtualization technologies), storage, and networking with an additional layer of software for system orchestration and management. While the individual software-defined components provide benefits on their own, the management software -- which unifies the three -- can greatly reduce the amount of active attention IT staff must devote to day-to-day operational management of servers.

As a trade term, SDDC is used primarily by VMware and hardware vendors reselling VMware software with their servers. The same concept is found in Cisco's Unified Computing System offering, while the open source project OpenStack can be configured to act as an SDDC orchestrator. Practically speaking, all enterprise hardware vendors offer SDDC solutions aligned with either OpenStack, VMware EVO SDDC, or Cisco Unified Computing platforms.

TechRepublic's cheat sheet for software-defined data centers provides a more in-depth view into what benefits SDDC platforms offer for enterprises, and how SMBs can benefit from partially software-defined data center deployments, as their needs require.

MORE ON IT AUTOMATION

Enterprise automation starts in IT departments, but gradually pays off elsewhere
A survey of 705 executives released by Capgemini finds enterprise-class automation is still a rarity, but taking root within IT functions.

Looking to compare all different types of automation? Now you can
Organizations must vet and compare different types of automation based on business needs, but there was no way to do this -- until now. Forrester's Chris Gardner shares how.

What Kubernetes really is, and how orchestration redefines the data center
In a little over four years' time, the project born from Google's internal container management efforts has upended the best-laid plans of VMware, Microsoft, Oracle, and every other would-be king of the data center. So just what is it that changed everything?

True private cloud isn't dead: Here are the companies leading the charge (TechRepublic)
Companies like VMware and Dell EMC are heading up the private cloud market, according to Wikibon research.