5G and edge computing: How it will affect the enterprise in the next five years

Smaller data centers that are closer to the customer edge require light-speed connectivity, and 5G Wireless happens to have that in spades. The trick will be getting edge customers to buy in, and stay in.
Written by Scott Fulton III, Contributor

The basic objective of any service that bills itself as edge computing is to move the hosting of computing services as close as possible to the people who will use them, minimizing distance to expedite service delivery. Wireless communication, meanwhile, is made possible by an underappreciated process called backhaul: the shuttling of data from the edge of the cellular network back to the core, over the highest-speed fiber optic cables.

Technology often makes the most progress when it refrains from reinventing the proverbial wheel. Edge computing became part of the 5G Wireless mandate when engineers realized the same mechanism that reinforces a data communications network can reinforce a data processing network. And conceivably, the two businesses need not just coexist, but could then combine.

"Edge computing...is morphing. As the cloud develops, and the technology around cloud data centers develops," remarked Sam Fuller, who directs marketing for digital networking at NXP Semiconductors, during a recent Arm RISC developers' summit. "There's a lot of innovation in software and secure access. When we talk about the edge, we're really talking about leveraging that software model that's been developed into the cloud, and migrating that to what has traditionally been embedded computing applications."

An internet of fewer things

The extent of the transformation to which Fuller referred may be represented by two articles published by essentially the same manufacturer, separated by a mere two years. In June 2018, writing for an embedded processor and accelerator company then called Altera, which was merging into Intel, author Ron Wilson suggested that edge computing comprised the portion of Internet of Things architecture that engineers had naively (in Wilson's words) believed could be relocated to the public cloud, only to realize later that doing so would have been impractical. That leftover portion at the extreme edge of the network, he posited, was leveraged "to redefine embedded computing as a networking application, with edge computing as its natural extension."

Make the short hop forward to 2020, when the division now known as Intel FPGA explained this was all by design: "Distributed computing gives businesses the flexibility to process application data in the location that delivers the most value...5G connectivity makes the division of computing resources along the cloud-network-edge continuum possible."

Software architects sought to create a sweeping new concept called distributed computing, only to find that, by virtue of embedded computing's pre-existence, computing was already being distributed. It just wasn't being centrally orchestrated -- and therein lies the prize.

The original promise of the Internet of Things was to incorporate highly distributed processors into a kind of private utility and build a new business model around the commoditization of its services. Because it makes more sense for processing at the edge to take place where the processors were first installed, the new utility model is less about 'things' and more about 'services'.

The edge at home


Placement of a Vertiv SmartMod pre-fabricated data center module (PFM). 

(Image: Vertiv)

A 5G edge computing-enabled data center might not look like what you expect. Businesses with an interest in deploying computing assets in multiple, remote locations are investing in prefabricated modules that can be transported like shipping containers and dropped in place onto concrete slabs. Vertiv is one company that builds made-to-order prefabricated modules (PFM) that come already equipped with the chassis and infrastructure necessary to support the recommended equipment for 5G MEC deployments [PDF]. Telecommunications service providers are already familiar with how to hook up PFMs such as SmartMod (shown above) for 5G connectivity.

Aside from the cloud service providers, telcos such as AT&T, Verizon, and T-Mobile in the US have a natural advantage: the ability to provide data services as utilities. The technology now known as Multi-access Edge Computing (MEC, though the 'M' previously stood for 'Mobile') was first advanced in 2014 by the European Telecommunications Standards Institute (ETSI). For all intents and purposes today, MEC is 5G edge computing. As a June 2018 ETSI white paper [PDF] made clear, the explicit purpose of 5G edge architectures is to move application hosting away from the centralized public cloud, and into an array of distributed servers at the telco network edge, presumably closer to the enterprise customer.

Put another way, it's a means of getting telcos into the cloud service provider market, without them first having to build hyperscale data centers on the scale of Amazon AWS or Facebook. It doesn't necessarily mean these telcos would compete against the public cloud. In the case of AT&T, for instance, a long-standing partnership with Google Cloud Platform will enable Google-branded services, and potentially Microsoft Azure's as well, to run in customers' data centers.

"It's important that we have a standard software platform," remarked NXP's Fuller. "One of the most important attributes of software is that it can count on a certain set of services that are available to it." This is what the managers of embedded devices, distributed remotely among several locations, have insisted upon up to now, and would not be willing to sacrifice when they move to something that bills itself as an 'edge cloud' or a 'cloud edge'.

Whither the differentiator

Software could, in the final analysis, become the differentiator: What enterprises with their own distributed computing assets are looking for isn't so much '5G', as a way to deploy, manage, and update their own applications from central or remote locations. What will appeal to this class of enterprise customers is a platform that gives their IT teams the on-ramps they need to accomplish this.

But the identities, locations, brands, and even definitions of these 5G edge platforms are all 'jump balls' today. VMware is also looking to deploy platforms in essentially the same market space. Last March, the Open Networking Foundation (ONF) launched its ongoing pilot project called Aether, which aims to extend its network virtualization architecture with what it calls 'Edge-Cloud-as-a-Service'. Aether would integrate with O-RAN, the existing open-source 5G radio access network project, giving users a means of deploying distributed applications across multiple locales and orchestrating them using Kubernetes.
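In practice, orchestrating one workload across several edge locales with Kubernetes often amounts to stamping out near-identical Deployment manifests that differ only in placement labels. As a rough illustration (the site names, application name, and container image below are hypothetical, not drawn from Aether itself), a thin templating loop can sketch the idea:

```python
import json

# Hypothetical edge sites; in an Aether-style setup these would correspond
# to per-locale Kubernetes clusters managed by a central controller.
EDGE_SITES = ["enterprise-hq", "factory-east", "factory-west"]

def deployment_for(site: str, image: str = "example/app:1.0") -> dict:
    """Build a minimal Kubernetes Deployment manifest pinned to one edge site."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": f"app-{site}"},
        "spec": {
            "replicas": 2,
            "selector": {"matchLabels": {"app": "app", "site": site}},
            "template": {
                "metadata": {"labels": {"app": "app", "site": site}},
                "spec": {
                    # nodeSelector steers pods onto nodes in the target locale,
                    # using the well-known Kubernetes zone topology label.
                    "nodeSelector": {"topology.kubernetes.io/zone": site},
                    "containers": [{"name": "app", "image": image}],
                },
            },
        },
    }

# One manifest per locale; each could be applied to that site's cluster.
manifests = [deployment_for(s) for s in EDGE_SITES]
print(json.dumps([m["metadata"]["name"] for m in manifests]))
```

The appeal for enterprise IT is that updating the application everywhere becomes a matter of regenerating and re-applying these manifests from one central control point, rather than touching each remote location by hand.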

If ONF is successful, it's a valid question whether something like Anthos (at least as Google presently defines it) would even be necessary. Everything depends on which platform manages to make itself visible to customers in such a way that it obscures, and perhaps subsequently obviates, competitive options.

But even this depends on whether enterprises are willing to adopt the 5G edge route paved by MEC. Already in metropolitan areas, smaller data centers are being linked to nearby 'carrier hotels' using dedicated, leased fiber-optic lines ('dark fiber'). Those neutral facilities, owned and operated independently of telcos, use direct connections to public cloud data centers, enabling edge computing today, right now, without 5G MEC.

There's a lot of spaghetti being flung at the proverbial wall. If you haven't noticed it there, it's probably because none of it is sticking yet.
