The drive-thru data center: Where an appliance runs your local cloud

The race to the edge, part 3: Where a fast food restaurant chain that can't always reach the cloud because its city is surrounded by a rain forest finds a way to move its data center into something like a refrigerator.
Written by Scott Fulton III, Contributor

It took 37 days for the Norwegian company contracted to build fiber optic connectivity for Brazil just to carry the optical cable for the project from the Atlantic Ocean, down the Amazon River, to the city of Manaus.

Connectivity for this city's two million-plus inhabitants has never been a certainty. In 2014, cloud services analyst CloudHarmony ranked South America as the continent with by far the greatest service latency of any region in the world; the following year, it rated Brazil as the world's least reliable region for Amazon AWS service.

By now, the people of Manaus are tired of the "Amazon" puns. Meanwhile, they consume fast food just like the rest of the world. So for them, a major restaurant chain has rearchitected its order management system as a single, coherent whole -- a brilliant example of a real-time distributed application.

When the restaurant's marketing team decides it's time for a new promotion, it can execute on its plan in minutes rather than months. Not only do prices change immediately on execution, but the menu screens above the order counters and at the drive-thrus are all updated. From that point, ordering trends are measured in real time, and the effectiveness of a campaign can be reported to management by the end of the work day. Artificial intelligence algorithms can be inserted into the processing chain to schedule price changes in real time, in response to factors such as customer demand, outdoor temperature, and even the news cycle.
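
To make the idea concrete, here is a minimal sketch of the kind of rule such a pricing step might apply. The function, its inputs, and its thresholds are hypothetical illustrations, not the chain's actual system:

```python
# Hypothetical sketch of a real-time price-adjustment rule. The inputs
# (a demand index, outdoor temperature) and the thresholds are invented
# for illustration; they are not the restaurant chain's actual logic.

def adjust_price(base_price: float, demand_index: float, temp_c: float) -> float:
    """Nudge a menu price up or down in response to local conditions."""
    price = base_price
    if demand_index > 1.2:   # orders running 20 percent above trend
        price *= 1.05        # modest surge
    if temp_c > 30:          # hot afternoon: discount to draw walk-ins
        price *= 0.95
    return round(price, 2)

# A 10.00 item on a hot, busy afternoon comes out to roughly 9.97
print(adjust_price(10.00, demand_index=1.3, temp_c=33))
```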

It should be the quintessential use case for cloud computing. Except it isn't really.

Drive Through

We're at Waypoint #3 in our four-part journey in search of the new edge for modern data centers. It's a place removed from anywhere you'd expect to find the headquarters or central nervous system of a modern network. Indeed, it's one node in a highly distributed system. But each node has, unto itself, its own decentralized authority and responsibility -- a certain level of oversight.

And each of these nodes is a micro data center (µDC) tucked somewhere behind the pick-up window. Walking into the back room, you'd easily mistake one of these units for a refrigerator. It's a fully self-contained, 24U rack-mount system with a built-in power distribution unit. Its designer is Schneider Electric -- among the world's leading producers of physical platforms and power systems for data center components, and the parent company of backup power system producer APC.

Steven Carlini is Schneider Electric's senior director of global solutions. He perceives a clear, ongoing separation of powers across the infrastructure platform we now call "the cloud," with the public cloud at the center and owned or co-located regional data centers along the periphery.

"One of the main drivers of edge computing is to process [data] locally, and send the results of the computations into the cloud," Carlini told ZDNet. He continued:

By far, we are still seeing content delivery as the main driver of moving closer to the customer. We're seeing industrial applications where it's either process control, pipeline management, or wastewater management, or agriculture, where we're doing a lot of local processing, and using the results of that processing to change the flow rates or temperature of oil or water through pipes, or automate valves to open and close at certain times. There's a lot of data generated by a lot of sensors in these applications.

"That's where we're seeing it. We're not seeing it in the general business applications," he noted. "We don't see our customers asking for that."

The Brazilian restaurant made a significant up-front investment in endowing each of its locations with, effectively, its own local cloud. We can reasonably presume that when an enterprise undertakes a project of this magnitude, it's not because its people want a faster-running Office 365. There are particular applications that demand critical attention; they need networking as well, but not as critically as they need speed of deployment.

Also: Fog computing: Balancing the best of local processing with the cloud

But here's what's happening: As more enterprises invest in refrigerator-sized data centers, wherever they happen to be, the cost of that investment is declining. In turn, the break-even point -- where the cumulative monthly "as-a-Service" fees of cloud providers catch up with the cost of a µDC -- arrives sooner and sooner.
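
The break-even logic is plain arithmetic: divide the up-front cost by the monthly savings. The dollar figures below are assumptions for illustration, not quoted prices from Schneider Electric or any cloud provider:

```python
# Illustrative break-even calculation. All dollar figures are assumed
# for illustration, not quoted prices from any vendor.
udc_capex = 60_000.0      # one-time cost of a micro data center (assumed)
udc_monthly = 500.0       # assumed monthly power and maintenance
cloud_monthly = 2_500.0   # assumed "as-a-Service" fees for equivalent capacity

breakeven_months = udc_capex / (cloud_monthly - udc_monthly)
print(f"break-even after {breakeven_months:.0f} months")  # 30 months here

# As the capex falls -- say, to 40,000 -- the break-even point arrives sooner:
print(f"{40_000 / (cloud_monthly - udc_monthly):.0f} months")  # 20 months
```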

In helping to make certain data centers more like appliances, Schneider Electric is also moving itself closer to the customer. Here is where "oversight" comes into play (and I chose that word for a variety of reasons): In a cloud environment where workloads could end up being staged on-premises, in µDCs, in hyperscale co-lo facilities, or in the broader public cloud, it will not only become critical to identify which component makes that determination, but also where that component is located. Arguably, it doesn't make much sense for an orchestrator deep in the public cloud to determine which workloads get hosted locally. Network engineers call that type of back-and-forth interplay a "round trip," and they avoid building in such trips wherever they can.
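
A back-of-the-envelope comparison shows why. The latencies below are rough, typical figures, not measurements from any deployment:

```python
# Rough illustration of the cost of making placement decisions remotely.
# Latency figures are ballpark assumptions, not measurements.
local_decision_ms = 1.0    # orchestrator in the back office, same LAN
cloud_rtt_ms = 120.0       # round trip to a distant public cloud region

decisions_per_hour = 600
extra_s = decisions_per_hour * (cloud_rtt_ms - local_decision_ms) / 1000
print(f"~{extra_s:.0f} seconds per hour waiting on remote decisions")  # ~71
```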

"We need to create ubiquity. It's really important that we separate between application and IT workloads -- and how the hardware is specified for them -- from network workloads," said Igal Elbaz, AT&T's vice president for ecosystem and innovation, speaking with ZDNet. "We want to move away from a world where you build something specifically for a specific customer. We want to encourage innovation; we want to encourage disrupting. And it's really difficult to create an ecosystem where every solution is designed for specific hardware, or for a specific customer."

Special feature: XaaS: Why 'everything' is now a service

So at least from the perspective of connectivity provider AT&T, it's in everyone's interests, for the long term, not to have specialization in the styles and buildouts of data centers, large and small. That's not to say micro data centers are bad ideas; indeed, AT&T is playing a major role in helping Vapor IO and others to build µDCs at 5G connectivity points. But it throws a big wrench into the theory that µDCs are just specialized refrigerator boxes that can help customers in challenging territories work their way out of a connectivity jam.

Inevitably, this question will be raised: Why can't the same solution that solved a fast food chain's synchronicity problems in Brazil accomplish the same goals for another such chain in, say, Kansas?

Schneider Electric's Carlini explains:

You might even see at the beginning of next year, these micro data centers that are replicating the cloud stacks -- Azure Stack and the Google stack -- that are going to be on-premises for corporate offices and smaller sites. So for Azure, the example would be Office 365. You'll have an instance of that on-premises, an instance in your co-lo data center, or you'll have it in your cloud data center at the outskirts of society, and you may or may not know where that 365 is running. They're developing the algorithms for how all that will work, and whether it will be transparent to the user, or if the user will be able to pick their preferred location for running these applications.

The final n-tier

A computer program always has been, and always will be, a sequence of instructions delivered to a processor for execution within a single span of memory. An application is a broader concept than a program, and over time, is only becoming broader.

This spreading out of the application's footprint is made possible by the expansion of territory for virtual networking. We've talked a bit about the distributed application, but up to now, not very much about what's distributing it.

Today, data centers use cloud platforms as a means of addressing multiple, disparate instances of the same class of resource -- processor capacity, memory address space, storage capacity, or networking fabric -- as single, contiguous pools. Such a platform makes it possible for a virtual machine (VM), which used to be relegated to a hypervisor running on a single processor, to span multiple servers -- or, to use the virtual infrastructure term for them, compute nodes. So in a hyperscale data center, each server is like a brick in a (seemingly) seamless wall. The cloud platform not only provides the mortar, but "shapes" non-uniform servers in such a way that they fit into the scheme without disrupting the illusion of homogeneity.
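
In code, the "single, contiguous pool" amounts to an abstraction that hides which server actually supplies a resource. Here is a minimal, hypothetical sketch of that idea, not any platform's actual scheduler:

```python
# Minimal, hypothetical sketch of the "contiguous pool" abstraction:
# callers request capacity from the pool, never from a particular server.
class ComputePool:
    def __init__(self, nodes: dict[str, int]):
        self.free = dict(nodes)   # node name -> free vCPUs

    def allocate(self, vcpus: int) -> str:
        """Place a request on any node with room; the caller never knows which."""
        for node, free in self.free.items():
            if free >= vcpus:
                self.free[node] -= vcpus
                return node
        raise RuntimeError("pool exhausted")

pool = ComputePool({"node-a": 16, "node-b": 8, "node-c": 24})
print(pool.allocate(12))   # the workload lands wherever capacity exists
```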

In VMware's most recent scheme for vSphere, every server node hosts a network virtualization layer called NSX. Wherever NSX runs, a vSphere management instance can maintain resources for it, and stage VMs on it. A few months ago, VMware announced that NSX will be supported on Amazon's public cloud infrastructure. As a result, vSphere customers that are also AWS subscribers will perceive both spaces as a single staging platform for VMs. What's more, applications running on those VMs will be able to access AWS resources, such as S3 storage buckets and Redshift data warehouse functions, through API calls.
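
Those API calls are ordinary SDK calls. As an example of the kind of access involved, here is a write to S3 using Amazon's standard boto3 library; the bucket and key names are hypothetical, and credentials are assumed to be configured in the environment:

```python
# Example of the kind of AWS API call an application on a vSphere-hosted VM
# could make. boto3 is Amazon's standard Python SDK; the bucket and key
# names here are hypothetical.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-orders-archive",      # hypothetical bucket name
    Key="manaus/store-042/orders.json",   # hypothetical object key
    Body=b'{"orders": 42}',
)
```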

Microsoft's competitive platform is Azure Stack, to which Schneider Electric's Carlini referred earlier: an extension of the resources available on the Azure public cloud platform, so that applications may perceive Azure Stack space and Azure space as the same pool. And on the open source side, OpenStack remains a strong challenger for VMware's traditional space, providing a means for an enterprise to automate the deployment and management of private and hybrid cloud spaces.

Also: Public cloud, private cloud, or hybrid cloud: What's the difference?

These platforms serve as the equalizers -- the "layers of abstraction" -- for servers being gathered into clusters for distributed computing. Even today, their respective proponents contend that they extend the cloud seamlessly and homogeneously across vast spans of disparate hardware connected by the same network. These platforms employ their own IP-based network overlays, so even as their components' virtual locations are relocated across the physical network, those components retain their addresses on their own virtual networks. Maps within maps.

But along comes containerization, championed by the open source platform Docker. Although Docker was designed to run distributed functions directly on processors ("bare metal"), a Docker environment can be staged on a VM as well. Suddenly, whatever staging space had been rendered level by cloud platforms could be leveraged by orchestrators for staging distributed functions.
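
That portability is the point: the same container runs unchanged whether its host is bare metal or itself a VM. A minimal example using Docker's official Python SDK (the "docker" package on PyPI), with a public sample image:

```python
# Minimal example using Docker's official Python SDK. The same call works
# whether the Docker host is bare metal or a VM.
import docker

client = docker.from_env()   # connect to the local Docker daemon
output = client.containers.run("hello-world", remove=True)
print(output.decode())       # the container's stdout
```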

The argument that a network would be better served if certain of its components, in particular places, were delegated to run particular classes of functions is no longer the subject of speculation. There is now a way -- or at least the inkling of a way -- for an orchestrator to marshal the distribution of specified functions to designated locations, or at least to nodes whose operating specifications fit the profile that orchestrators require.
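
Stripped to its essentials, that placement step is a matching problem: compare a function's required profile against each node's advertised specifications. A hypothetical sketch, with invented node names and labels:

```python
# Hypothetical sketch of profile-based placement: pick a node whose
# advertised specifications satisfy the function's requirements.
# Node names and labels are invented for illustration.
nodes = {
    "store-udc-manaus": {"latency_tier": "edge",     "free_gb": 32},
    "colo-sao-paulo":   {"latency_tier": "regional", "free_gb": 256},
}

def place(latency_tier: str, free_gb: int) -> str:
    """Return the first node that fits the requested profile."""
    for name, spec in nodes.items():
        if spec["latency_tier"] == latency_tier and spec["free_gb"] >= free_gb:
            return name
    raise RuntimeError("no node fits the profile")

# An order-pricing function that must run at the edge:
print(place(latency_tier="edge", free_gb=8))   # -> store-udc-manaus
```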

And if the orchestrator for the entire cloud is as close to the customer as the back office, the entire business model of cloud computing could be turned on its head.

"This whole cloud kind of ecosystem is going to evolve to a multi-tiered data center approach," said Steven Carlini, "that's going to include very small, micro data centers in the very near future."

Destination

Our final destination for this four-part journey will center on this fundamental question: In an IT environment with at least three, perhaps four, layers of computing capability, what will be the component determining where workloads are staged, and who will be in charge of that component?
