
Building a private cloud with System Center 2012: Part 3

The third stage in our investigation of Microsoft's Private Cloud Evaluation Software bundle completes the lengthy but necessary configuration work, prior to actually building our virtualised datacentres.
Written by Alan Stevens, Contributor

Patience is a virtue and, as we've discovered, essential for anyone looking to evaluate Microsoft's private cloud technology. Those following our evaluation will know that in part 1 we spent several days building a set of Hyper-V virtual machines, each running Windows Server 2008 R2, onto which we've deployed a selection of Microsoft System Center 2012 components. In part 2 we started to deploy and configure the integration packs needed to connect those System Center 2012 components together. In this instalment we complete the configuration work, starting with the creation of connectors in Service Manager.

Getting connected
Service Manager 2012 is a key enabler when it comes to delivering the IT-as-a-service technology that underpins the Microsoft private cloud solution. In particular, it provides a platform for task automation as well as a portal to give users self-service access to private cloud resources. To do this, Service Manager builds and maintains a configuration management database (CMDB), employing connectors to import configuration items (CIs), alerts and other useful information from both Active Directory and the other System Center 2012 components.

This sounds like it should be complicated to set up, but connectors are very easy to create. All we had to do was log on to our Service Manager VM, open the Service Manager console and run a set of wizards. With just a few minor differences these are all much the same, with useful options to test each connector as it's built, helping to avoid silly mistakes.

A set of connectors must be created to link Service Manager to other System Center components and build its configuration management database (CMDB).

Actually, we did run into a few problems, mostly caused by errors in the evaluation guide, but nothing serious: overall, our Service Manager connectors took under an hour to set up.
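
For anyone who'd rather script the checking, the admin cmdlets bundled with Service Manager 2012 can list and manually synchronise connectors once the wizards have created them. Here's a minimal sketch, assuming the module path of a default Service Manager 2012 installation; your connector names and states will obviously differ.

```powershell
# Load the Service Manager 2012 admin module (default install path assumed).
Import-Module 'C:\Program Files\Microsoft System Center 2012\Service Manager\Powershell\System.Center.Service.Manager.psd1'

# List every connector and whether it's enabled, to confirm the wizards
# created what was expected.
Get-SCSMConnector | Format-Table DisplayName, Enabled -AutoSize

# Kick off an immediate synchronisation rather than waiting for the schedule.
Get-SCSMConnector | Where-Object { $_.Enabled } | Start-SCSMConnector
```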

Reporting
Next we needed to register our Service Manager installation with our Data Warehouse to support reporting. A Data Warehouse Registration wizard, kicked off from the Service Manager console, handles this task, stepping you through the registration process and deploying the associated management packs from Service Manager to the Data Warehouse server. All of this happens in the background, and the guide includes instructions for checking when the synchronisation job started by the wizard has finished.

Management packs are synchronised from Service Manager to the Data Warehouse server in the background.

According to the evaluation guide this can take several hours, but on our setup it was all over in around 20 minutes.
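
If you'd rather not keep refreshing the console while you wait, the Data Warehouse cmdlets can report on that job directly. A minimal sketch, run from the Service Manager Shell where the Get-SCDWJob cmdlet is available; 'SCDW01' is a stand-in for your Data Warehouse management server's name.

```powershell
# MPSyncJob is the synchronisation job the registration wizard starts;
# once its status drops back to 'Not Started' the management pack
# deployment has completed.
Get-SCDWJob -ComputerName 'SCDW01' -JobName 'MPSyncJob' |
    Format-Table Name, Status -AutoSize
```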

Virtual connections
Virtual Machine Manager (VMM) is another crucial component of the Microsoft private cloud solution, so a further necessary task was to connect our VMM server to Operations Manager, enabling us to monitor our virtual machines and hosts and see diagram views of the virtualised environment from the Operations console.

Before we could do that, however, we were directed to Appendix C of the evaluation guide, which provides instructions on deploying monitoring management packs for use by Operations Manager. These packs contain the rules and knowledge needed to monitor the Windows Server operating system, IIS web server and SQL Server installations we were using, so each had to be downloaded and installed onto our Operations Manager server and then imported using a wizard started from the management console.

Before Virtual Machine Manager can be connected to Operations Manager, a number of monitoring management packs must be deployed on the Operations Manager system.
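
Appendix C has you click through the import wizard, but the same job can be scripted with the Operations Manager 2012 PowerShell module. A minimal sketch; the server name 'SCOM01' and the C:\MPs folder are our own stand-ins for wherever you unpacked the downloaded packs.

```powershell
Import-Module OperationsManager

# Connect to the Operations Manager management server ('SCOM01' here).
New-SCOMManagementGroupConnection -ComputerName 'SCOM01'

# Import every downloaded management pack file in one pass, instead of
# clicking through the wizard for each one.
Get-ChildItem 'C:\MPs' -Filter *.mp |
    ForEach-Object { Import-SCOMManagementPack -Fullname $_.FullName }
```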

That done, we were finally able to switch over to the VMM console and configure its connection to Operations Manager using yet another wizard. And that was it — at least as far as connecting everything together was concerned. However, we still had to jump through a few more hoops before we could actually start building our clouds.
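
For reference, that last connection can also be made from the VMM command shell. This is a sketch based on the VMM 2012 cmdlets as we understand them, using the service and action accounts rather than dedicated Run As accounts; treat the switch names as assumptions to verify against your own installation.

```powershell
Import-Module virtualmachinemanager

# Connect VMM to our Operations Manager server ('SCOM01'), enabling PRO
# tips and maintenance-mode integration as the wizard offers to do.
New-SCOpsMgrConnection -OpsMgrServer 'SCOM01' `
    -UseVMMServerServiceAccount -UseOpsMgrServerActionAccount `
    -EnablePRO $true -EnableMaintenanceModeIntegration $true
```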

Components not hardware
The last few steps are all about abstracting the hardware because, when working with a private cloud, you don't want users to have to get to grips with servers, Ethernet switches and other complicated bits of 'real' equipment. Far better to provide a simple shopping list of things like networks, storage and compute (CPU, memory) components that they can use to meet their business needs without requiring any real knowledge or understanding of what might lie behind them.

A little extra work is needed to do this, but it's not that hard: using Virtual Machine Manager we had little difficulty configuring the infrastructure components we wanted, starting with host groups. Here we wanted to virtualise two physical datacentres, one in London and one in Brussels, so we created two host groups called (surprise, surprise) London and Brussels.

To reflect our physical datacentres we created separate London and Brussels host groups, each with their own settings.

Using these host groups we could identify the best virtualisation hosts on which to place our VMs and specify the minimum CPU, memory, storage and networking resources required.
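
The console makes this step trivial, but for completeness here's the equivalent from the VMM command shell: a minimal sketch, with 'SCVMM01' standing in for our VMM server's name.

```powershell
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName 'SCVMM01' | Out-Null   # connect the session

# One host group per physical datacentre.
New-SCVMHostGroup -Name 'London'
New-SCVMHostGroup -Name 'Brussels'
```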

Turning our attention to networking, we could then define a network for each of our host groups, complete with assigned addresses, gateways and other settings, including a load balancer should we decide to use one later on.

Networking resources can also be abstracted for use in Virtual Machine Manager.
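
Scripted, the same network definition looks something like the sketch below, built with the VMM 2012 networking cmdlets. The names and the 10.1.0.0/24 addressing are purely illustrative values of our own.

```powershell
# A logical network scoped to the London host group.
$ln   = New-SCLogicalNetwork -Name 'London LAN'
$vlan = New-SCSubnetVLan -Subnet '10.1.0.0/24' -VLanID 0
$site = New-SCLogicalNetworkDefinition -Name 'London Site' -LogicalNetwork $ln `
          -VMHostGroup (Get-SCVMHostGroup -Name 'London') -SubnetVLan $vlan

# A static address pool, complete with gateway, so VMM can assign IPs itself.
$gw = New-SCDefaultGateway -IPAddress '10.1.0.1' -Automatic
New-SCStaticIPAddressPool -Name 'London IP Pool' -LogicalNetworkDefinition $site `
    -Subnet '10.1.0.0/24' -IPAddressRangeStart '10.1.0.50' `
    -IPAddressRangeEnd '10.1.0.99' -DefaultGateway $gw
```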

We then looked at storage. VMM 2012 uses the new Microsoft Storage Management Service to communicate with external arrays via SMI-S (Storage Management Initiative Specification), providing automatic discovery and allocation of storage resources. For our evaluation, however, we didn't have any SMI-S-enabled storage, just local server disk space, so we simply went through the motions of classifying our storage in the VMM console (bronze, silver and gold in our case) to see how the option worked.
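
The classifications themselves are just labels, easily created in the console or, as a sketch, from the command shell; the tier names and descriptions here are our own.

```powershell
# With no SMI-S array behind them, these are simply tiers to tag storage with.
New-SCStorageClassification -Name 'Gold'   -Description 'Fastest storage'
New-SCStorageClassification -Name 'Silver' -Description 'Mid-tier storage'
New-SCStorageClassification -Name 'Bronze' -Description 'Everything else'
```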

Finally we needed to configure our compute resources — in other words, the hypervisors, VMs and other servers we were going to be using. Since we already had a Hyper-V host, in the form of Hyperv02 (the server on which we were running all of our System Center VMs), we simply imported that and associated it with our London host group.

We ran the Add Resource Wizard to import our Hyper-V host and its VMs into Virtual Machine Manager.
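
Scripted, the wizard's job boils down to a couple of cmdlets. A rough sketch, assuming a VMM Run As account with administrative rights on the host; the 'VMM Admin' account name is hypothetical.

```powershell
# Import the existing Hyper-V host into the London host group.
$cred = Get-SCRunAsAccount -Name 'VMM Admin'   # hypothetical Run As account
$hg   = Get-SCVMHostGroup -Name 'London'
Add-SCVMHost -ComputerName 'hyperv02' -VMHostGroup $hg -Credential $cred
```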

We say 'simply', but there was a heart-stopping moment at the end of the process when the Add Resource Wizard informed us that our job had failed. After some frantic checking of the settings we'd used across the whole of our private cloud deployment, we finally found that, for some unknown reason, our VMM server had stopped communicating with its Hyper-V host. Fortunately, deleting and recreating the Hyper-V virtual network sorted that out and, voila, VMM had its compute resources and we were ready to create our first cloud.

It didn't work first time, but we did, eventually, manage to import our Hyperv02 host and all its VMs into Virtual Machine Manager.

To see how we did that, you'll have to wait for the next instalment in our Microsoft Private Cloud adventure.
