
Edge, core, and cloud: Where all the workloads go

The race to the edge, part 4: Where we are introduced to chunks of data centers bolted onto the walls of control sheds at a wind farm, and we study the problem of how all those turbines are collected into one cloud.
Written by Scott Fulton III, Contributor

There is a strange and uneasy tension in standing at the base of a wind turbine, amid a power generation farm with dozens more. The air can seem still even though you can clearly see, and hear, the turbines moving. Indeed, the sound never dies down, even though you're standing precisely where you would most expect it to. With rotating blades the size of softball fields all around, this feels and sounds like a place you'd expect to find something called "the edge."

TechRepublic: Edge computing: The smart person's guide

None of the world's power grids has any method for distinguishing power by its source -- wind-generated from coal-fired from hydroelectric. So when a data center customer purchases wind power, it's usually in the form of certificates issued directly by the renewable energy company. The data center owner then "retires" those certificates as it consumes the kilowatt-hours it purchases, a few of which may actually have been the wind-generated power from the issuing company.
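
The bookkeeping behind such a purchase is simple enough to sketch in a few lines of Python. The one-certificate-per-megawatt-hour convention is standard practice for renewable energy certificates in the US, but the consumption figure below is hypothetical.

```python
import math

MWH_PER_CERTIFICATE = 1  # one certificate conventionally represents 1 MWh

def certificates_to_retire(consumed_mwh: float) -> int:
    """Whole certificates a buyer retires to cover its consumption."""
    return math.ceil(consumed_mwh / MWH_PER_CERTIFICATE)

# A data center that consumed 3,742 MWh retires 3,742 certificates --
# regardless of which plants actually generated the electrons it used.
print(certificates_to_retire(3742))  # -> 3742
```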

That's how these deals usually work. Last May, however, the world's largest content delivery network, Akamai, purchased a 20-year stake in an 80-megawatt wind farm outside Dallas, Texas, scheduled to go online in 2018. It may be a smart move, if you consider Akamai the wind farm's consumer. What might not have entered analysts' minds just yet is the notion that a wind farm could need a CDN.

"When an application cares about latency," said Scott Sneddon, Juniper Networks' senior director of software-defined networking, speaking with ZDNet Scale, "it's because that application is doing something that is very user-based or interactive, or is accessing time-sensitive data. There's opportunity to think about architectures that are more distributed. A lot of the footprint that might exist for those applications starts to look like the CDN networks of old."

A CDN's purpose is to distribute bandwidth. A wind farm's purpose is to collect distributed power. In recent years, renewables producers have been learning that highly distributed applications let them generate power more efficiently. In a very short period of time, the types of applications energy producers had relied on the public cloud to provide are now being deployed on data center appliance boxes called gateways, whose size and shape could make them easily mistaken for video game consoles.

"When we started this four years ago, everybody talked about the cloud," said Jason Shepherd, director of Internet of Things strategy and partnerships at Dell Technologies. "Of course, the cloud is super important, and will always be important. But a year-and-a-half ago, we started seeing a shift even in the language in the market. And now everybody talks about the edge, because they got the bill for the cloud."

We began this journey at the base of a cell phone tower. We conclude it now at the base of a wind turbine. Again, there's a nondescript, prefabricated shed. And inside of it is a chunk of a distributed network small enough to fit under your arm.

A place for "better"

There's another term for the type of distributed computing model at work in a place like a wind farm: "fog computing." It's perhaps no one's favorite phrase.

Also: Between the cloud and the corporate data center, there is fog computing

"Here's the deal: Ultimately, it doesn't matter what the words are. It's having the right tools to run workloads in a distributed fashion," said Jason Shepherd, Dell Technologies' director of Internet of Things Strategy and Partnerships.

Whenever consumers make choices about almost any class of product, Shepherd said, the choices they make eventually become clustered together. There typically end up being two or three such clusters -- the obvious case in point being America's two-party political system. Whether for marketing purposes or for expediency in conversation, two or three choices work well. Never one, and rarely four or more.

Dell perceives three tiers of distributed architecture: edge, core, and cloud. (Some would argue that the IoT should constitute a fourth tier of "things," but there's a strong argument that IoT devices are mainly sensors, whose collective "intelligence" is provided at a deeper layer.)

"Edge, to us, bleeds up into a server-class processor -- or compute node -- immediately in proximity to the 'things.' There's two vectors you can look at: One is, how close am I to the very physical world of things? And the other one is, how much compute power do I have? When you mix those together, you start to see how it lays out."

As an example, Shepherd suggested a use case in which video cameras serve as sensors for a quality control operation. A local processor can analyze apparent vibrations in the scans from those cameras, so that events are flagged only when changes are detected. A GPGPU (a graphics processor repurposed for parallel, general-purpose computing) may be used to make those detections. "That's basically edge compute to us," said Shepherd.
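
In code, that change-detection step can be as simple as frame differencing. Here is a minimal sketch using NumPy; the threshold is invented, and a production system would offload this arithmetic to the GPGPU Shepherd mentions.

```python
import numpy as np

CHANGE_THRESHOLD = 1.0  # hypothetical mean-absolute-difference trigger

def frame_changed(prev: np.ndarray, curr: np.ndarray) -> bool:
    """Flag an event only when consecutive frames differ enough to matter --
    the local filtering step Shepherd describes."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(diff.mean()) > CHANGE_THRESHOLD

# Simulated 8-bit grayscale frames from a quality-control camera.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
still = prev.copy()                  # nothing in the scene moved
moved = prev.copy()
moved[200:260, 300:360] = 255        # a bright object enters the frame
print(frame_changed(prev, still))    # -> False: nothing is sent upstream
print(frame_changed(prev, moved))    # -> True: flag the event locally
```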

By comparison, as Dell perceives the landscape, the types of applications running in micro data centers -- for example, real-time promotion control for Brazilian hamburgers -- would not be edge computing. Those would fall into the tract of real estate that Dell calls the core.

"Now, I've got an increasing amount of compute. It's removed slightly from the immediate physical world. Maybe it's a conditioned space, it's locked up and not out in the open. The core for us is on-premises, going from that hyperlocal, micro-modular data center, up through a full-blown, traditional IT data center. The reason why we're talking about the core is that it has a blend of benefits from both sides. But clearly, if you're doing deterministic, real-time, quality-of-service, fraction-of-a-second [functions]. . . like an airbag, or braking an autonomous driving situation, it can happen nowhere but the edge."

It's unavoidable, from Dell's perspective, that the effort to improve quality of service in cloud computing should lead to specialization -- dividing the compute space into classes. For Dell, it's not just "the edge" and "the other space that's not the edge," but an opportunity to drive a kind of Radio Shack-style "good/better/best" compartmentalization suited to the classes of hardware it produces. Its edge computing devices, as you'll see in the final waypoint of this journey next week, are more like small appliances, in toaster-sized boxes you might find bolted to the walls of operations stations. So Dell's compartmentalized cloud not only has shapes and sizes, but form factors.

And if Dell ends up being correct about this course of evolution, it won't be as though the edge ate the cloud the way analyst Peter Levine predicted (as we told you at Waypoint #2). But you may have a hard time finding traces of what, and perhaps even where, the cloud used to be.

Ubiquity versus specialty


A new company called FogHorn Systems, currently in its Series B funding round, has developed a class of software to be used in conjunction with Dell Edge Gateway servers, directly inside wind farms. Rather than using these gateways to collect turbine performance data and send it to cloud systems (as the term "gateway" used to imply they would), the software processes real-time analytics on that data locally. It also uses machine learning algorithms to derive performance patterns from sensor data in the turbines, and from weather sensors, recording how turbine control systems best responded to changing conditions.
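
What might that on-gateway analytics loop look like? Below is a sketch in Python of one common streaming technique -- an exponentially weighted baseline with an anomaly flag. It is a generic illustration, not FogHorn's implementation, and every parameter in it is an assumption.

```python
class StreamingAnomalyDetector:
    """Flags sensor readings that depart sharply from recent history,
    using exponentially weighted running statistics."""

    def __init__(self, alpha=0.1, tolerance=3.0, warmup=5):
        self.alpha = alpha          # smoothing factor for the running stats
        self.tolerance = tolerance  # deviations beyond which we flag
        self.warmup = warmup        # samples to observe before flagging
        self.mean = None
        self.dev = 0.0
        self.count = 0

    def update(self, value: float) -> bool:
        self.count += 1
        if self.mean is None:
            self.mean = value
            return False
        deviation = abs(value - self.mean)
        anomalous = (self.count > self.warmup and
                     deviation > self.tolerance * max(self.dev, 1e-6))
        # Update the baseline afterward, so an anomalous reading doesn't
        # immediately pollute the statistics it was judged against.
        self.mean += self.alpha * (value - self.mean)
        self.dev += self.alpha * (deviation - self.dev)
        return anomalous

detector = StreamingAnomalyDetector()
readings = [12.1, 12.3, 11.9, 12.0, 12.2, 29.7]  # vibration, arbitrary units
print([detector.update(r) for r in readings])    # only the last is flagged
```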

Also: Cloud innovation will power enterprise transformation in 2018

One of the original ideals of cloud computing was that there should be nothing obstructing any workload from operating at any location. Even the control program for a sensor need not necessarily be installed on the sensor. Some early Internet of Things architects argued this would be a benefit: endow sensors, appliances, and other devices with minimal firmware, and let full-featured control programs run in the cloud, connected via fat pipelines.

But as the proponents of modern edge computing -- especially Dell and AT&T -- have pointed out, although workloads may run everywhere, you can't see the results from a long way away without latency entering the picture. So as Dell's Shepherd believes, it's time to divide the cloud (not just the public cloud, but the entire breadth of cloud platforms) into quality-of-service (QoS) tiers.

"You want to have the freedom to put [workloads] anywhere ubiquitously," said Shepherd. "The reason to define it as edge/core/cloud, or edge/cloud, or whatever, is to people can wrap their heads around it. A lot of people still struggle with the whole fog computing concept, which is basically 'everything but the cloud.' But it's so abstract that people don't have a physical thing to wrap their heads around, so they continue to be confused."

The prime point of contention among software developers in the distributed space today, in light of what we're now learning about edge computing, is this: Should functions be made "aware" of the systems that are running them, so they can adjust their operating parameters to suit their environments? Dell's Shepherd is also a member of the Technical Steering Committee for an open source software platform called EdgeX Foundry -- a toolset for applications designed expressly to run on edge systems. Speaking with us, he argued that such toolsets are necessary for developers, because the nature of edge computing is somewhat different from cloud computing.
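
What would an environment-aware function look like in practice? Here is a deliberately simple, hypothetical sketch: the DEPLOY_TIER variable and the profiles are invented for illustration, and this is not the EdgeX Foundry API.

```python
import os

# The same analytics routine tunes itself to wherever it discovers it
# is running. Everything here is hypothetical, for illustration only.
TIER = os.environ.get("DEPLOY_TIER", "cloud")   # "edge", "core", or "cloud"

PROFILES = {
    "edge":  {"batch_size": 1,    "flush_secs": 0.1},   # react instantly
    "core":  {"batch_size": 64,   "flush_secs": 5.0},   # balance both
    "cloud": {"batch_size": 1024, "flush_secs": 60.0},  # batch aggressively
}

def analyze(readings: list) -> int:
    """Process readings in tier-appropriate batches; return batch count."""
    size = PROFILES[TIER]["batch_size"]
    batches = [readings[i:i + size] for i in range(0, len(readings), size)]
    return len(batches)

# On a wind farm gateway (DEPLOY_TIER=edge), 100 readings become 100
# one-reading batches; in a cloud region, a single batch of 100.
print(TIER, analyze(list(range(100))))
```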

Juniper's Scott Sneddon is very familiar with this point of contention.

"What's going to be interesting -- and it's still very unknown to me -- is how we inform the application of things like location, and things like a latency value or latency characteristic for a given location, versus another one," remarked Sneddon. "That's where I think a lot of the exciting data science development is going to be happening, in how we distribute information and distribute processing."

If the hardware used in edge computing is distinct from cloud server hardware, the network connections for edge computing are their own connections, the locations where edge workloads are staged are unique, and the software running in those locations is its own software, then there may be nothing left to equate an edge system with a cloud system, or "core" system, or whatever other tiers may eventually emerge, besides a few brand names. "The edge" would not be the edge of the cloud, but its own unique environment. It follows, then, that the edge would be its own unique market.

One of the leading figures in the community of so-called cloud-native development (building software using cloud services to run on cloud platforms) is Abby Kearns, the executive director of the Cloud Foundry Foundation. Kearns steers the evolution of the Cloud Foundry software development platform. The evolution of systems like this, she told us, goes through cycles of centralization and decentralization -- rinse and repeat. And she's not at all certain that the separation process Shepherd and others refer to will be a complete one.

"The cloud is basically somebody else's infrastructure," said Kearns. "When we talk about 'cloud-native,' it has nothing to do with 'the cloud,' really. We're talking about applications that have statelessness, that can take advantage of that ubiquity, that elasticity, that resilience, that immutable infrastructure. When I think about applications in the modern sense, it's applications that can be deployed easily anywhere, scaled out easily anywhere, and can be redeployed other places. It's that portability of that application."

Also: 8 steps to becoming a 'cloud-native' enterprise

Data ingest functions are not nearly as portable, she said, due to their proximity to the edge. But the principal functions of these applications can, and should, be portable, in her view. Most importantly, developers will need the ability to create applications that will be deployed on edge systems once they're completed (i.e., "in production") without actually having to stage their relatively unstable development space at the edge as well.

Holding territory


This is the emerging picture: As the systems that host servers evolve into new form factors such as µDCs, the orchestrators that distribute workloads throughout the network will take heed of the performance data those systems pump out, including latency measurements. Some classes of applications will be distributable throughout the entire network, but others will have strict performance requirements. And this latter class may either be relegated to the edge exclusively, or end up being shifted from the edge to the cloud, or to Dell's "core," as conditions warrant.
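
In practice, that orchestration decision reduces to matching a workload's latency budget against measured site characteristics. The sketch below is hypothetical; the site names, latency figures, and the capacity-preferring heuristic are all invented for illustration.

```python
SITES = {
    "edge-windfarm-07": {"latency_ms": 2,  "free_cores": 4},
    "core-dallas":      {"latency_ms": 18, "free_cores": 96},
    "cloud-us-east":    {"latency_ms": 80, "free_cores": 4096},
}

def place(workload_cores: int, max_latency_ms: float) -> str | None:
    """Pick a site that meets the workload's latency budget and has room,
    preferring deeper, more plentiful tiers when the budget allows."""
    candidates = [(name, s) for name, s in SITES.items()
                  if s["latency_ms"] <= max_latency_ms
                  and s["free_cores"] >= workload_cores]
    if not candidates:
        return None  # nowhere satisfies the requirements right now
    return max(candidates, key=lambda kv: kv[1]["free_cores"])[0]

print(place(2, max_latency_ms=5))    # -> edge-windfarm-07 (strict budget)
print(place(2, max_latency_ms=100))  # -> cloud-us-east (relaxed budget)
```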

The result will be an information infrastructure ecosystem that is more synchronous, more self-aware, and more zoned. We will be more aware of it, as well. And because we can no longer afford to be confused by it, or ignore it, we may at some point stop calling it "cloud."

Arrival


For decades now, in countless pages like this one, we've written and talked about how technology is changing us. When end-to-end paved roads first connected cities a century ago, our counterparts wrote about how technology changed them. Automobiles seemed such sudden things. Yet they were the culmination of decades of ingenuity, plus a rising tide of productivity; the availability of petroleum-based fuels; the willingness, ability, and hunger of the labor force; and an indeterminate number of strokes of luck.

All the right trends of progress, and all the necessary regressions, produced just the right mix of events at just the right point in history. People had been so busy building the roads, constructing the bridges, and forging the aqueducts that when cars became real for them, they may as well have been dropped from the sky.

The idea that technology changes us, rather than the other way around, is a trick of perception. The systems and networks that make work feasible and that enrich our lives have grown so big, we tend to lose sight of them. In honor of all the people who have made our lives and work possible, this has been an effort to correct that perception, and at last to project the ideals of information technology at their proper scale.

Until our next journey together, hold true.
