Azure evangelist tackles cloud doubts

newsmaker Microsoft's Mark Taylor talks about issues ranging from risk and resilience to private clouds and interoperability.
Written by Toby Wolpe, Contributor

Last month, Microsoft spelled out the first service terms and pricing for its Azure development and hosting platform.

In an interview with ZDNet Asia's sister site ZDNet UK, Mark Taylor, Microsoft's director of developer and platform evangelism, spoke about issues ranging from risk and resilience to private clouds and interoperability.

Q: When Azure becomes available in November, what will Microsoft be doing to convince people it is worth taking up?
Taylor: Since last November, we have been running our CTP [Community Technology Preview] program, which provides the ability to use Azure without paying for it. That will remain free until November when we go into commercial mode. Organizations can put applications up there and see how it goes. Many thousands worldwide are trying before they buy.

From November, the beauty of the pay-as-you-go model is that organizations can put up applications without having to make a substantial upfront commitment. As they get a better sense of their predicted volumes, they can move to a more formal arrangement.
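The pay-as-you-go arithmetic Taylor describes can be sketched in a few lines. The rates and the `monthly_cost` helper below are hypothetical placeholders for illustration, not Microsoft's published Azure prices:

```python
# Sketch of a pay-as-you-go cost estimate. The rates below are
# hypothetical placeholders, not Azure's published prices.
COMPUTE_PER_INSTANCE_HOUR = 0.12   # hypothetical $ per instance-hour
STORAGE_PER_GB_MONTH = 0.15        # hypothetical $ per GB-month

def monthly_cost(instances: int, hours: float, storage_gb: float) -> float:
    """Estimate a month's bill: you pay only for what you actually run."""
    compute = instances * hours * COMPUTE_PER_INSTANCE_HOUR
    storage = storage_gb * STORAGE_PER_GB_MONTH
    return round(compute + storage, 2)

# A small trial app: one instance running a full month (~720 h), 10 GB of data.
print(monthly_cost(instances=1, hours=720, storage_gb=10))  # 87.9
```

The point of the model is visible in the function signature: there is no upfront term, so an organization that shuts the instance down after a week simply passes in fewer hours.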

Once the full Microsoft marketing muscle gets behind Azure to push uptake, what sort of provisions will you have in place to cope with surges in demand, if a service suddenly proves unexpectedly popular?
The architecture we've built out so far has a vast amount of redundancy in it, and we're confident we can deliver against our service levels for the predicted volumes that we have come up with. We've got an aggressive program to continue to expand in terms of data centers and the capability within data centers.

A big chunk of the data center we have in Dublin looks like a loading dock from a supermarket. We have the ability to reverse containers in--the power and the taps are there--and they just become part of the data center. The ability to scale very rapidly without having to build new facilities has been a real core part of the data center strategy. The cost and agility we can take from that approach will serve us very well.

In your Azure service-level agreements (SLAs), you're offering credits against failure to meet certain uptime thresholds--effectively credits to compensate for a customer's potential loss of business. How do you think those guarantees match up to the risks that businesses are taking in moving to the cloud?
If you have a failure, it depends where the failure is. We provide the platform, the storage, the connectivity. A failure within an application, for instance, would have a similar consequence if it was running in a traditional infrastructure.

The cloud industry is a new one. As we evolve, as we learn, as we understand--and this is an industry-wide issue--what the right kind of commercial arrangements are, then I'm quite sure our SLAs along with all the others will evolve to accommodate that.

This is a new industry and a new approach to IT, and we are all learning. I'd be amazed if we don't see an evolution in terms of resilience and geographic location of data.

In the same way, I imagine we'll continue to evolve the contractual offerings, service levels and pricing models that we're coming out with initially. Competition will always drive that, as will pragmatic experience.

What can you do to reassure people, especially when the link between the customer and Azure may not be in your hands?
That's a very good point. The network between the cloud and the end user becomes a point of failure like everything else. What will happen--and we're seeing it now with streaming providers such as Akamai and Limelight--is that as dependency on public networks increases, the network operators will step up as well. People will build a high degree of redundancy into their network provision, and that will become as important as everything else.

If you're entrusting your line of business to the cloud, you need to make sure not only that the back end is as resilient and scalable as you require, but also that the network is equally resilient. Network operators' service-level provisions will strengthen as well.

What else can businesses do to insure against cloud failures?
One thing is to understand where the points of weakness or the points of concentration are and obviously, when you are architecting an application, to design it as something that is going to be delivered by the cloud.

One aspect of our approach is that customers can use a programming model that is familiar to them in .Net; the extensions you then add to make the application Azure-aware are the thing to focus on.

Are private Azure clouds going to be feasible?
One of the benefits of the cloud approach is that multitenancy provides the kinds of economies that you can deliver back to customers. If you go from that to a dedicated cloud, you get some technology advantages when it comes to scalability and provisioning, but you're not taking advantage of multitenancy.

"Private cloud" is one of those terms that is used a lot, but there isn't a standard definition for it. Virtualization from the server level upwards gives you a high degree of abstraction and privacy, and that's where the real economies come in.

But it depends on the level of abstraction you want to achieve. You can get right to the point where you've got your own data center running your own cloud. What you get there is the elasticity within the capacity you've got, but you do lose a lot of the economies because you still have to own all the infrastructure.

There's no doubt there will be a market for that kind of service, but it's really a question of where you want that abstraction point that creates your private cloud--and you can have it in multiple places.

What is Microsoft doing to address the issue of interoperability between Azure and other cloud services and on-premise applications?
What we've tried to do with the Azure service is let you architect applications in the same way as you would an in-house client/server application, for instance. When it comes to, say, SQL Azure--the database types and so on--the way you deal with the database is just the same as it would be if it were in-house.

There is no doubt there will be a need to interoperate between different cloud services, and that is something we're fully committed to cooperating on. In the future, there will be the need to have the ability to, say, embed a web service that is hosted on another cloud system.

The key question is whether there are new standards to be defined or whether the standards we have now are good enough, because what you're getting in the cloud is something that's very similar to what you would get with a standard approach. But either way, that's something that we're fully committed to participating in.
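Taylor's point--that existing standards may already be good enough--can be sketched with plain HTTP and a standard library. The local echo server below is a stand-in for a web service hosted on any cloud; note that the client code depends only on the URL and the protocol, not on who hosts the service:

```python
# Sketch: a consumer of a web service depends only on standard
# protocols (HTTP, URLs, JSON), not on which cloud hosts the service.
# The local server below stands in for a service hosted elsewhere.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"path": self.path, "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client only needs a URL; this call would look identical against
# a service running on Azure or on any other provider.
url = f"http://127.0.0.1:{server.server_port}/quote"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

server.shutdown()
print(data["status"])  # ok
print(data["path"])    # /quote
```

If existing standards suffice, interoperating across clouds looks exactly like this: swap the hostname, keep the code.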

Do you have any suggestions for people who may be taking their first steps into the cloud and into Azure in particular?
I would definitely suggest taking a fairly simple existing application that targets the .Net framework and just putting it up there. The work involved in moving it to a cloud structure is fairly trivial for a competent programmer.

That experience is very useful for organizations, not just for seeing how the application operates once it's in the cloud, but for the whole process of getting it there--the various staging and testing steps.

One of our design goals with Azure is that you don't need to relearn too much, so you can use the languages, frameworks and databases that you're familiar with. The great thing right now is that it's free, and so the opportunity is there now for people to dabble with it and just get comfortable with it.
