Part of a ZDNet Special Feature: Understanding Edge Computing

Where's the 'edge' in edge computing? Why it matters, and how we use it

Just as cloud computing seemed to be settling into a standardized set of platforms, the drive for service differentiation is producing new use cases for a faster, more flexible premium service tier. But will those use cases make sense in practice?


Executive summary

At present, edge computing is more of a prospect than a mature market -- more of a concept than a product. It is an effort to bring quality of service (QoS) back into the data center services discussion, as enterprises decide not just who will provide their services, but also where.

"The edge" is a theoretical space where a data center resource may be accessed in the minimum amount of time. You might think the obvious place for the edge to be located, for any given organization, is within its own data center ("on-premises"). Or, if you've followed the history of personal computing from the beginning, you might think it should be on your desktop, or wherever you've parked your PC. Both alternatives have valid arguments in their favor.

Read also: From paper tape to a patched-together Altair 8800, the story of my first computers

But the world's computing services today are networked. ZDNet is one example of a service whose publishers have invested in servers, stationed in strategic locations, that deliver this document (and the multimedia content within it) to you in minimum time. From the perspective of the content delivery network (CDN) that accomplishes this, our servers are parked at the edge.


A Vapor IO Kinetic Edge micro data center operating alongside a cellular tower.

(Image: Vapor IO)

The providers of other services that we might otherwise lump together onto the already oversized pile called "the cloud" are each searching for their own edge. Data storage providers, cloud-native applications hosts, Internet of Things (IoT) service providers, server manufacturers, real estate investment trusts (REIT), and pre-assembled server enclosure manufacturers, are all paving express routes between their customers and what promises, for each of them, to be the edge.

What they're all really looking for is competitive advantage. The idea of an edge shines new hope on the prospects of premium service -- a solid, justifiable reason for certain classes of service to command higher rates than others. If you've read or heard elsewhere that the edge could eventually subsume the whole cloud, you may understand now this wouldn't actually make much sense. If everything were premium, nothing would be premium.

Read also: Arm rolls out Project Trillium for edge device computing

Yet if the edge comes to full fruition, it would be a validation of a latent need for service tiers -- premium service grades for which enterprises might be willing to pay more. It stands in opposition to one of the principal arguments surrounding one version of "net neutrality": that any kind of express service on one tier will lead to degraded service levels on the lower tiers. That fear has yet to manifest itself in reality.

Still, selling the benefits of premium service means persuading certain customers that differentiation is a more beneficial economic force for the cloud and the internet than commoditization. So in the absence of a plethora of proven use cases to serve as examples, the edge's most vocal advocates are testing the limits of their customers' imaginations.

Locating the edge

"Today, there's pent-up data in things, such as even dog food bowls that can tweet you when they're empty," stated Dr. Tom Bradicich, Hewlett Packard Enterprise's vice president and general manager for converged servers and IoT systems, speaking last October to the 2017 annual symposium of IT consulting firm Nth Generation. "It's data. You can argue, how valuable is that? I guess it's valuable for the dog.

"Data processed at the edge is made accessible sooner to operators, or can create a real-time feedback loop with automation systems."

— Kurt Marko, Marko Insights

"But the point is," Dr. Bradicich continued, "almost everything has data in it that can be captured." In this and several other presentations over the past few years, Bradicich has argued that an edge computing model is vitally necessary for capturing, processing, and managing the tremendous amounts of data he believes everyday objects will inevitably produce -- once it becomes trivially easy to do so.

"The edge is a place," he told his audience. "It's where the things are, and it's not the data center or the cloud... The edge will start housing the very IT products that are in data centers and clouds." One class of these products, he believes, is what HPE calls operational technology (OT), which one might interpret to be a broad slice of the market that incorporates data center infrastructure management (DCIM) and systems monitoring.

(Image: Hewlett Packard Enterprise)

In a recent company blog post, Bradicich explicitly mapped this place he calls the edge as a single line spanning three very different classes of use cases: the aforementioned OT; the Internet of Things (IoT), where highly available processors would enable real-time analytics for applications that can't wait too many milliseconds to render decisions; and a third class of "enterprise IT edges" that is largely left to the customer's imagination. It's this middle class, if you will, for which HPE is making its pitch for a new form factor of servers it calls Edgeline.

Read also: Cloud computing is eating the world: Should we be worried?

"Real time for IoT data is right when it exists -- right when that pump is starting to vibrate, right when that parking lot is full, right when that suspect is running down the street in a law enforcement application," said Bradicich, snapping his fingers repeatedly. "That's the data that's needed. If it takes a long time to process, it could be useless -- it could be too old."

The edge as an applications host

"If you think about 5G, for example, and the characteristics that 5G brings to the table -- ultra low latency, high throughput," remarked Igal Elbaz, AT&T's vice president for ecosystem and innovation, referring to the fifth-generation wireless network technology he would like for AT&T to be perceived as pioneering, "it allows you to start imagining all kinds of ecosystems and use cases, like augmented reality, autonomous cars, industrial IoT. But if you look into this, if you dive into that world, you realize that the wireless networking is really important, but there's another element: How do I create the same level of latency and quality of service on the computation side?"

Elbaz accurately snapshots the present state of edge computing: An idea whose existence depends on the success of a handful of other ideas. The word "imagine" crops up a lot in discussions of both edge computing and 5G, mainly because there are many gaps to be filled in the market definitions of both.

Read also: VPN services 2018: The ultimate guide to protecting your data on the internet

"How do I allow the offload of some of the computation from devices -- whether they're IoT, mobile devices, cars, or anything we can imagine coming out in the next couple of years," continued Elbaz, "into a close cloud infrastructure? Those cloud workloads will obviously be orchestrated in technologies in the marketplace, in the ecosystem, that you're fully familiar with." As one example, he offered Flexware, which is an AT&T-branded appliance that is programmable with network functions such as firewalls and routing, to which enterprises may subscribe by the month.

Could edge servers subsume handheld devices?

Calling Flexware an edge device might seem like stretching the impact of the edge idea beyond the limits of edginess. But follow along for a moment: If AT&T (and others) can build a viable market around infrastructure capable of processing the "smart" functionality we presently attribute to smartphones, with latencies low enough not to be noticeable to customers, then it could conceivably offer "dumber," less expensive smartphones with the same services we see on Android and iOS today. What's more, those services could conceivably follow the user across her many devices -- for instance, from her desktop to her phone to her car.

If an enterprise wanted to be in control of how its employees would utilize such a service, then Flexware, or something like it, would be the delivery vehicle for that control.

Read also: Enterprises learning to love cloud lock-in too: Is it different this time?

"When you and I are consuming applications on our mobile devices today, we don't feel the latency that exists," the AT&T VP went on. "You move to a low-latency, high-throughput application, you want to make sure that the round trip, not only on the wireless side but the computation side, is ultra-low. Otherwise, when you're in an experience with AR or VR that's not solid, you start feeling nausea, [especially] if your brain can perceive it within 7 to 10 milliseconds. So I want to make sure I provide those app developers what they need, in order to create those immersive experiences, without affecting the experiences -- and at the same time allow mobility and ubiquity. Because right now, you cannot walk around with a server attached to your device."

Where does the cloud go now?

Up to now, "the cloud" has been composed of the platforms outside organizations' own data centers where they host their applications, data stores, and virtual desktops. It wasn't supposed to matter where, in the network or on the planet, these platforms resided. Early on, marketing videos depicted the cloud as going wherever the customer goes, following him like the grey smudge over Eeyore in the Hundred-Acre Wood.

But the end user is not the only person whom outsourced computing services need to cater to. As more organizations and enterprises move from the virtualized PC model of computing (where a virtual machine assumes the role of a network server) to the distributed services model (where pieces of programs go wherever they're best suited), timing becomes more critical. Though it may not seem that ordinary back-office applications, like accounting systems or supply chain monitors, should run in real time, their new orchestrators -- especially Kubernetes -- take timing into account as a matter of principle.

Read also: What is the IoT? Everything you need to know

All of a sudden, it matters where an application is run. Since the advent of the consumer internet, content delivery networks (CDN) have defined a concept of the network edge -- a place that requires the fewest number of "hops" (intermediate routers) possible for large amounts of content, such as multimedia, to reach the user. That's where the idea of an edge began. A transaction, such as with an application or a database, requires not a one-way fast lane but a round trip. That trip is faster if the system processing it is nearer.
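The round-trip point can be made concrete with a few lines of Python. The latency figures and exchange counts below are invented for illustration; they are not measurements of any real network:

```python
# Hypothetical illustration: why round trips amplify distance.
# A chatty transaction pays the one-way latency twice per exchange,
# so moving the server closer shrinks total time multiplicatively.

def round_trip_ms(one_way_ms: float, exchanges: int) -> float:
    """Total latency for a transaction needing several request/response exchanges."""
    return 2 * one_way_ms * exchanges

# A CDN-style read needs one exchange; a transactional app may need several.
far_cloud = round_trip_ms(one_way_ms=45.0, exchanges=4)   # distant cloud region
near_edge = round_trip_ms(one_way_ms=5.0, exchanges=4)    # nearby edge site

print(f"distant cloud: {far_cloud:.0f} ms, nearby edge: {near_edge:.0f} ms")
```

With four exchanges, an assumed 45 ms one-way path costs 360 ms in total, while a 5 ms edge path costs 40 ms -- the gap a one-way "fast lane" alone cannot close.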

Scaling down to scale big

If location, location, location matters again to the enterprise, then the entire enterprise computing market can be turned on its ear. The hyperscale, centralized, power-hungry nature of cloud data centers may end up working against them, as smaller, more nimble, more cost-effective operating models spring up -- like dandelions, if all goes as planned -- in more broadly distributed locations.

Read also: From cloud to edge: The next IT transformation (ZDNet special report) | Download the report as a PDF (TechRepublic)

Now, telecommunications companies are partnering with a new generation of service and equipment providers, in a joint effort to construct and deploy micro data centers (µDC). They're compact enclosures, some small enough to fit in the back of a pickup truck, that can support just enough servers for hosting time-critical functions, and can be deployed closer to their users.

Making data centers smaller

"I believe the interest in edge deployments," remarked Kurt Marko, principal of technology analysis firm Marko Insights, in a note to ZDNet, "is primarily driven by the need to process massive amounts of data generated by 'smart' devices, sensors, and users -- particularly mobile/wireless users. Indeed, the data rates and throughput of 5G networks, along with the escalating data usage of customers will require mobile base stations to become mini data centers."

"How do I allow the offload of some of the computation from devices - whether they're IoT, mobile devices, cars, or anything we can imagine coming out in the next couple of years, into a close cloud infrastructure?"

— Igal Elbaz, VP Ecosystem & Innovation, AT&T

In a highly distributed data center, with storage, network bandwidth, and processing assets perhaps scattered across the planet, "the edge" is the part of the system which may be accessed in the shortest possible interval of time. Due to the peculiar, and often seemingly organic, way in which the internet has developed, this may not necessarily be the closest server geographically. And depending upon how many different types of service providers your organization has contracted with -- public cloud applications providers (SaaS), apps platform providers (PaaS), leased infrastructure providers (IaaS), content delivery networks -- there may be multiple tracts of IT real estate vying to be "the edge" at any one time.
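As a toy illustration of that definition, picking "the edge" among several contracted providers reduces to choosing the endpoint with the shortest measured round trip. The provider names and latencies below are hypothetical:

```python
# Sketch: among several contracted providers, "the edge" is whichever
# endpoint can be reached in the shortest time -- not necessarily the
# geographically closest one. All figures here are invented.

measured_rtt_ms = {
    "saas-provider": 48.0,      # public cloud application (SaaS)
    "paas-region-east": 22.0,   # apps platform provider (PaaS)
    "iaas-metro-colo": 9.0,     # leased infrastructure (IaaS)
    "cdn-pop": 4.0,             # content delivery network point of presence
}

# The edge, by this definition, is simply the minimum-latency endpoint.
edge = min(measured_rtt_ms, key=measured_rtt_ms.get)
print(edge)
```

In this invented example the CDN point of presence wins, but the ranking would change as providers reposition their "tracts of IT real estate."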

The edge of communications

"The edge is not a technology land grab," remarked Cole Crawford, CEO of µDC producer Vapor IO. "It is a physical, real estate land grab."

(Image: Vapor IO)

As ZDNet Scale reported last November, Vapor IO makes a 9-foot diameter circular enclosure it calls the Vapor Chamber. It's designed to provide all the electrical facilities, cooling, ventilation, and stability that a very compact set of servers may require. Its aim is to enable same-day deployment of compute capability almost anywhere in the world, including temporary venues and, in the most lucrative use case of all, alongside 5G wireless transmission towers.

Since that report, public trials have begun of Vapor Chamber deployments in real-world edge/5G scenarios. The company calls this initial, experimental deployment schematic Kinetic Edge. Through its agreements with cellular tower owners including Crown Castle, Vapor IO stations shipping container-like modules, with cooling components attached, at strategic locations across a metro area.

By stationing edge modules adjacent to existing cellular transmitters, Vapor IO leverages the towers' existing fiber optic links, letting modules communicate with one another at minimum latency over distances no greater than 20 km. Each module accommodates 44 server rack units (RU) and up to 150 kilowatts of server power, so a cluster of six fiber-linked modules would host 0.9 megawatts.

Read also: Top cloud providers 2018: How AWS, Microsoft, Google Cloud Platform, IBM Cloud, Oracle, Alibaba stack up

While that's still less than 2 percent of the server power of a typical metropolitan colocation facility from a colo leader such as Equinix or Digital Realty, consider how competitive such a scheme could become if Crown Castle were to install one Kinetic Edge module beside each of its more than 40,000 cell towers in North America. Theoretically, the capacity would already exist to match the computing power of more than 700 metro colos.
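That capacity arithmetic can be checked in a few lines of Python. Note that the ~7.5 MW figure for a typical metro colo is inferred here from the "less than 2 percent" comparison, so treat it as an assumption rather than a published spec:

```python
# Reproducing the article's Kinetic Edge capacity arithmetic.
module_kw = 150                       # power per Kinetic Edge module, per the article
cluster_kw = 6 * module_kw            # six fiber-linked modules
assert cluster_kw == 900              # i.e., 0.9 megawatts, as stated

towers = 40_000                       # Crown Castle towers in North America (approx.)
total_mw = towers * module_kw / 1000  # fleet-wide capacity if every tower got a module

# Assumption: "less than 2 percent" implies a typical metro colo around
# 150 kW / 0.02 = 7.5 MW. This figure is inferred, not sourced.
colo_mw = module_kw / 0.02 / 1000

print(f"fleet: {total_mw:.0f} MW ~= {total_mw / colo_mw:.0f} metro colo equivalents")
```

Under that assumption the fleet works out to roughly 800 colo-equivalents, consistent with the article's "more than 700."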

(Image: Vapor IO)

"As you start building out this Kinetic Edge, through the combination of our software, fiber, the real estate we have access to, and the edge modules that we're deploying," said Vapor IO's Crawford, "we go from the resilience profile that would exist in a Tier 1 data center, to well beyond Tier 4. When you are deploying massive amounts of geographically disaggregated and distributed physical environments, all physically connected by fiber, you now have this highly resilient, physical world that can be treated like a highly connected, logical, single world."

Improving workload distribution

Practically speaking, we can't replicate a separate hyperscale data center for every customer. But we know from experience that we can change a user's perception of how fast a Web page loads by prioritizing the parts the user sees first. If we could break up a hyperscale data center, and move just those parts that a user or an application will need right away to a nearby location, perhaps we can cheat physics after all.

Read also: We found 22 cloud services your business definitely needs to try

As Kurt Marko first reported for Diginomica, a software firm called Swim.ai is applying machine learning algorithms to the components of distributed applications (which are networked and not limited to single VMs). Its goal is to derive traffic management patterns that learn over time which parts of an application are best suited for staging on premium, edge infrastructure, and which others can reasonably sustain the higher latency presented by the public cloud.

"Real time for IoT data is right when it exists. . . That's the data that's needed. If it takes a long time to process, it could be useless - it could be too old."

— Dr. Tom Bradicich, VP/GM Converged Servers & IoT Systems, HPE

In a company blog post, Swim.ai casts new light on the issue, noting -- quite accurately -- that a new industry has cropped up around expediting data streams to cloud servers, under the premise that the cloud is the new, permanent location for managing and processing that data. It's the fastest route to the biggest traffic jams, the company contends.

Machine learning for workload arbitration

Common sense might tell you that the types of math functions that consume the most time anyway would benefit the least from being hosted on a system that's "further away" in terms of latency -- and Swim.ai acknowledges this. But since machine learning algorithms are not functions in that sense, but actions performed repeatedly on a continually refreshed data source, Swim.ai's architects believe it makes the most sense to deploy those algorithms on servers closer to where their data is collected and aggregated.

"These high-rate edge streams are ideal for performing machine learning (ML) at the edge, and can inform predictive maintenance and other IT/OT applications," the company writes. "Data processed at the edge is made accessible sooner to operators, or can create a real-time feedback loop with automation systems."

"In this case -- and likely true for many other AI-at-the-edge implementations (think cars, robots, etc.)," writes Kurt Marko, "the hardware isn't a server, but some sort of embedded hardware with GPU or FPGA acceleration for deep learning. In both cases -- embedded AI and mini data centers -- the edge systems will work with cloud services (public or private) for data aggregation/retention, large-scale analysis, AI model development (usually, although as SWIM illustrates, not always) and system management/control."
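A drastically simplified sketch of this kind of placement decision is below. The components, round-trip figures, and deadline rule are invented for illustration; this is not Swim.ai's actual algorithm or API:

```python
# Toy latency-aware placement: stage a component at the edge only when
# the cloud round trip would miss its deadline. All numbers are invented.

from dataclasses import dataclass

@dataclass
class Component:
    name: str
    deadline_ms: float          # how quickly results must be available
    edge_rtt_ms: float = 10.0   # assumed round trip to a nearby edge site
    cloud_rtt_ms: float = 80.0  # assumed round trip to a public cloud region

def place(c: Component) -> str:
    """Premium edge hosting is reserved for components the cloud can't serve in time."""
    return "edge" if c.cloud_rtt_ms > c.deadline_ms else "cloud"

app = [
    Component("vibration-alerts", deadline_ms=20.0),   # real-time feedback loop
    Component("monthly-report", deadline_ms=60_000.0), # tolerates cloud latency
]
placements = {c.name: place(c) for c in app}
print(placements)
```

A learning system like the one described would refine those deadlines and round-trip estimates from observed traffic over time, rather than hard-coding them as this sketch does.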

'First-century hardware'

As HPE, Dell Technologies, AT&T, CDN provider Limelight Networks, and others invest more time, effort, and resources in fleshing out a design for what could become the cloud that replaces The Cloud, Vapor IO's Cole Crawford believes that this latest evolution in systems design will soon expose what he describes as the weakest link in the chain.

"Human beings are effectively 21st century software running on zero-century hardware," said Crawford, citing systems theorist David Krakauer. "If we think about the nervous system -- all of the brainy things that happen -- below that nervous system, there's this cardiovascular system that keeps the blood pumping, and sends oxygen to that brain.

Read also: Everything you need to know about the cloud, explained

"In the data center industry, that cardiovascular system is first-century hardware. We tend to think of the software side of this world as living in an #edgenative world. If we do have 40 billion [devices] -- well on our way to 1 trillion -- connected to 'an' internet, maybe not 'the' internet, we know that there have been issues there. Our thesis is that, due to what we could call naïve realism, we continue to build the internet the way that it was originally designed. And I don't think you'd have to go too far beyond Vint Cerf or Paul Mockapetris to get an opinion that a re-architecture is imminent and required."


It is a mistake to presume that edge computing is a phenomenon which will eventually, entirely, absorb the space of the public cloud. Indeed, it's the very fact that the edge can be visualized as a place unto itself, separate from lower-order processes, that gives rise to both its real-world use cases and its someday/somehow, imaginary ones.

It was also a mistake, in perfect hindsight, to presume the disruptive economic force of cloud dynamics could completely commoditize the computing market, such that a VM from one provider is indistinguishable from a VM from another, or that the cloud will always feel like next door regardless of where you reside on the planet. This is the fact to which Vapor IO CEO Crawford alludes, when he states that a re-architecture of the internet itself should be considered, implying that it should have had an edge to begin with.

Read also: What's the best cloud storage for you?

Yet when plotting the exact specifications for what any service provider's or manufacturer's edge services, facilities, or equipment should be, it's all too easy to get caught up in the excitement of the moment and imagine the edge as a line that spans all classes and all contingencies, from sea to shining sea. Like most technologies conceived and implemented this century, it's being delivered at the same time it's being assembled. Half of it is principle, and the other half promise.

Once you obtain a beachhead in any market, it's hard not to want to drive further inland. That's where the danger lies: where the ideal of retrofitting the internet with quality of service can make anyone lose, to coin a phrase, their edge.
