
Backhand slice: 5G and the surprise for the wireless cloud at the edge

The world's network operators are debating what benefits they might gain from relocating network functions from their base stations to cloud data centers. NFV can achieve many goals, yet stakeholders are far from agreement over which goals take precedence.
Written by Scott Fulton III, Contributor


We talk way too often about what a technology enables people to do. 5G wireless is about people doing a lot of work with the goal of enabling a technology. Its objective is to spread a very fast signal through the airwaves, using transmitters whose power curve is just under the threshold of requiring artificial cooling. It needs to be faster than what we have now, for enough customers and enough providers to invest in it, so that it may achieve that main objective.

Must read: Part one: The biggest switch: 5G and the race to replace the future | Part two: Wiring for wireless: 5G and the tower in your backyard

Assuming 5G deployment proceeds as planned, and the various political conspiracies, small and large, fail to derail the telecommunications providers' plans, it will reach the peak of its goals once it has achieved the virtualization of its packet core (which began with 4G LTE), its radio access networks (RAN), and the customer-facing functions of its data centers. 5G will make telcos into "edge cloud" providers.

But it's from atop the highest peak, as any Everest climber or any great John Denver song might tell you, that one obtains the best view of oneself, and one's own place in the world. The common presumption, when the topic of network functions virtualization (NFV) is brought up with respect to 5G, is that all this virtualization will take place on a single platform. Not only is this critical issue undecided, but there would appear to be a dispute over the decided or undecided nature of the issue itself.

Read also: How 5G will impact the future of farming and John Deere's digital transformation

"How does the cloud and the network come together?" asked Nick Cadwgan, director of IP mobile networking for Nokia. "The whole industry has jumped upon NFV. But really, you've got to think about it as a holistic problem. In other words, what you do in the cloud, or what they do with their services, impacts the network. What they do in the network potentially impacts the cloud. We will see the further application of cloud and Web-scale technologies. But we've got to do it very intelligently."


We begin this third leg of our 5G adventure at the summit of the technology's goals and ambitions. From here, everything that lies west seems within our reach. It's usually in the descent where things start getting tricky, and the west gets wilder than even our dreams.

Overlook

In a telecommunications network, a network function is the software that facilitates a specific service being delivered to a customer. It's a machine, described as code. If the entire network were devised as a mechanism to service one person, then a network function is the code for that mechanism.

So NFV is a means for maintaining that function in a data center, usually on a cloud platform (OpenStack is one example, VMware vSphere another) and utilizing software-defined networking (SDN) to devise -- and change, whenever necessary -- the chain of processes through which that service is delivered. Telecommunications networks utilize NFV today. It's a revolution that has already happened.
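
To make the idea concrete, here is a minimal sketch in Python -- the class names, images, and numbers are invented for illustration, not any vendor's or platform's actual API -- of a VNF service chain that software, rather than rewired hardware, can reorder whenever policy changes:

```python
# A minimal, illustrative sketch (not any vendor's actual API) of a
# service chain of virtual network functions (VNFs), described as data
# so that an SDN controller could re-order it whenever policy changes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VNF:
    name: str        # e.g., "firewall", "nat", "video-optimizer"
    image: str       # the VM or container image the cloud platform launches
    vcpus: int = 2
    ram_gb: int = 4

@dataclass
class ServiceChain:
    subscriber: str
    vnfs: List[VNF] = field(default_factory=list)

    def insert(self, vnf: VNF, position: int) -> None:
        """Re-chain the service path without touching a physical appliance."""
        self.vnfs.insert(position, vnf)

# Because the chain a subscriber's traffic traverses is just data, it can
# be changed on the fly -- the point of pairing NFV with SDN.
chain = ServiceChain("subscriber-001", [
    VNF("firewall", "images/vfw:1.2"),
    VNF("nat", "images/vnat:3.0"),
])
chain.insert(VNF("video-optimizer", "images/vopt:0.9"), position=1)
print(" -> ".join(v.name for v in chain.vnfs))  # firewall -> video-optimizer -> nat
```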

Read also: 5G mobile networks: A cheat sheet (TechRepublic)

The problems telcos face with implementing NFV on the scale of 5G are fundamental ones. For example: Who is the customer, really? Should a 5G virtual network function (VNF) serve the telco first and foremost? Or should the customer truly be the end user, the subscriber, the person paying the bills -- in which case, the network should take on a distributed multi-tenant model, not all that different from Kubernetes and microservices? Or for a third possibility that's altogether different, should the VNF serve a kind of impersonal entity called the "use case," fulfilling a role that's crafted exclusively for the type or class of customer being served -- for instance, healthcare, logistics, content creators, content providers, or emergency services?

Should there be just one cloud? Or, when you bring security into the equation, can there be? Must the end customer's VNFs remain segregated from the telco's VNFs -- in other words, should they never reside on the same servers together? Isn't the mixing of tenants from different classes the very problem virtualization was designed to solve?

There's basic agreement among the 3GPP members with regard to topics like the functioning of 5G New Radio, the design of antennas, and the goals for integrated circuits for smartphones and tablets. Yet with respect to the topic of network slicing -- the division of labor between services -- not only is there disagreement on the subject matter, but open disagreement about what the parties are disagreeing about.

Read also: 5G adoption: The first 3 industries that will be at the forefront (TechRepublic)

"There's just a lot of rope to hang yourself with here," remarked Tom Nadeau, now the technical director of NFV for Red Hat. The author of the world's seminal publication on SDN and its follow-up book on NFV, previously a distinguished engineer at Juniper Networks, and the chief architect of Brocade's SDN controller, perhaps no one on the planet appreciates the position the 5G telcos find themselves in today, better than Nadeau.

"You have to ask yourself why 3GPP is insisting on designing that stuff anyway," Nadeau told ZDNet Scale.

"There's been a long transformation of standards in the last five years, as you've seen. There's a raging debate at the IETF (Internet Engineering Task Force) right now about what network slicing is. There, I think there's a lot of confusion, because a) there's not a lot of operators around to temper the enthusiasm of the vendors; and b) standards organizations need to understand their place in the universe. A lot of standards today are trying to continue what they did ten years ago or more, where they were the place, and they did the innovation on paper. And the reality is, the innovation happens in a lot of open source communities today, where people bang away, try different things, and make things work as a collaborative community -- versus this model where people vote for that picture or this picture, and it's the usual camel that's a horse designed by committee."

Nadeau's experience with open source comes not only from his contributions to Juniper and to Brocade, but also from his work on the open source OpenDaylight SDN controller. Vyatta was Brocade's commercial implementation of OpenDaylight, prior to that company's acquisition by Broadcom. In his book on SDN, co-authored with Ken Gray, he chronicles the state of affairs that led to the creation of SDN, and thus of NFV, in the first place: Prior to 2011, no single network controller possessed the ability to manipulate the state objects (what we might call today the "metadata") generated by multiple vendors' network appliances. Any network application was hard-wired for the appliance which served it. As a result, routing traffic through a massive enterprise network based on dynamic logic or on policy was impossible.

Slice of life


Tom Nadeau's telling of ancient history -- more accurately, of seven years ago -- points to the very real opportunity that history, if not repeating itself entirely, may take a page from Mark Twain's playbook and try rhyming. Specifically, if network slicing should become hard-wired to the class of customer or user that each slice would serve, then a network management platform runs the risk of becoming incapable of orchestrating activity on that network based upon conditions the entire network would face as a whole.

Read also: 5G could widen the gap between haves and have-nots (CNET)

It would be as though each slice were an island unto itself. And that may be what some 5G stakeholders would prefer.

"Network slicing is about the ability to virtually create a set of capabilities that you're able to control and extend, as the end consumer of that slice, moving along the different locations of a distributed site," defined Sree Koratala, head of 5G technology and strategy in North America for wireless equipment provider Ericsson.

Koratala pointed out the major industry use cases being singled out for 5G, including high-speed broadband on one side of the scale and narrowband IoT-oriented signaling on the other. Each of these use cases would carry with it a different resource demand profile. It would be easy enough to automate the resource requisitioning process for each case, assuming it were designated its own exclusive network slice. But if these slices shared a physical network, the reason for that sharing would not be solely to drive up utilization, but also to let the slices draw from a common pool of resources. That sharing requires a multi-tenancy model, where the underlying platform is aware of each slice's role as a tenant unto itself, but is also policing and mediating the interactions between those slices and the platform.

"These are the requirements that we are addressing as we are designing our edge cloud solutions," Koratala told ZDNet Scale.

The first type of wireless technology that was officially given its own "G" was 2G, based on a standard born in Europe called Global System for Mobile Communications (GSM). Begun in 1982, it was a formalization of the circuit-switched architecture that had defined telephony up to that point, with the intention of enabling telcos worldwide to implement the same system. This way, Americans traveling to Europe wouldn't need to rent European phones to make phone calls.

Read also: Who's most ready for 5G? China, not the US, leads all (CNET)

The move away from GSM was about making the network packet-switched, like the Internet, with bundles of digital data traveling in asynchronous streams. The software with which such a network is run is called the packet core. 3G marked the beginning of its implementation, though it was during the perhaps truncated 4G era that 3GPP formally implemented what it calls Evolved Packet Core (EPC). It's this component that authenticates mobile devices as they enter the network through a cell, assigns those devices access points and gateways to the cellular network, ensures a specific quality-of-service (QoS) level for each user function a device may access, and applies the correct service charges for that level.
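
As a rough illustration of those duties -- authentication, gateway assignment, quality of service, and charging -- here is a toy model in Python; the function names, identifiers, and tariffs are made up for the example and are not defined by any 3GPP specification:

```python
# A toy model of the packet core's duties as described above: authenticate
# a device entering through a cell, bind it to a gateway, track usage per
# QoS class, and apply charges. Names, identifiers, and tariffs are
# illustrative only.
QOS_RATES = {"voice": 0.02, "video": 0.05, "best-effort": 0.01}  # dollars per MB

class PacketCore:
    def __init__(self):
        self.sessions = {}

    def attach(self, imsi: str, subscribed: bool, cell_id: str) -> bool:
        """Authenticate the device and assign it a serving gateway."""
        if not subscribed:
            return False
        self.sessions[imsi] = {"cell": cell_id, "gateway": "sgw-" + cell_id, "usage": {}}
        return True

    def record_usage(self, imsi: str, service: str, megabytes: float) -> None:
        """Meter traffic under the QoS class of the service being used."""
        usage = self.sessions[imsi]["usage"]
        usage[service] = usage.get(service, 0.0) + megabytes

    def bill(self, imsi: str) -> float:
        """Apply the per-class tariff to everything metered so far."""
        usage = self.sessions[imsi]["usage"]
        return sum(QOS_RATES[service] * mb for service, mb in usage.items())

core = PacketCore()
core.attach("310150123456789", subscribed=True, cell_id="cell-42")
core.record_usage("310150123456789", "video", 250.0)
print(round(core.bill("310150123456789"), 2))  # 12.5
```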

A fully virtualized network would apply NFV to EPC. Ericsson already has a method for doing so, called Virtual EPC (vEPC), which telcos and service providers use for their 4G networks today. It is the core software of 4G LTE, wrapped up in a virtual network function. So does Nokia, with its Cloud Packet Core. So does Cisco, with its Ultra Packet Core. vEPC is already a nice little market unto itself. As industry analysis firm Research and Markets projected last January, it will command about $8 billion in worldwide revenue by 2022, growing at an annual rate of over 50 percent.

The grand prize for 5G virtualization, however, is the radio access network (RAN). This is the component that is typically run inside the base station, and which 5G would seek to wrap up into its own VNF, called VRAN, move onto a cloud platform, and slice and dice into customer-centric segments. This is the unknown quantity, at the spot where 4G LTE stops and 5G begins.

All of a sudden, this side of the industry looks a lot more like the PC market (personal computer, in case you were wondering) in the 1980s. When the era of graphical computing began in earnest, the major players at that time (e.g., Microsoft, Apple, IBM, Commodore) tried to leverage the clout they had built up to that point among consumers, to help them make the transition away from 8-bit command lines and into graphical environments. Some of those key players tried to leverage more than just their market positions; they sought to apply technological advantages as well -- in one very notable instance, even if it meant contriving that advantage artificially.

Read also: Why Estonia finds itself in the middle of a 5G arms race

In the nascent wireless network virtualization market, vendors are already making the case that their expertise in virtualizing one component may be leveraged for virtualizing the others. It's a kind of togetherness that was known in the PC software market of old as bundling.

"From the infrastructure side, the first step would be to deploy and scale the NFV infrastructure -- which means, you want to have an infrastructure for highly distributed workloads, starting from the network side -- whether it's Evolved Packet Core or VRAN -- and then the ability to prepare for network slicing that is key for enterprise services," explained Ericsson's Koratala. "The next step is to evolve that orchestration and implement that network slicing, so that you're able to exploit the flexibility and the elasticity of the network to meet fast-changing customer demand. Then you're moving finally towards truly embracing absolutely cloud-native, which means you have to have containerized cloud-native computing capabilities, which are comprised of microservices that are deployed in the network and scaled very efficiently from small to large."

"The way that I would evolve, say, a higher-level IT application like an SAP system or a billing system, is going to be very different from the way I would evolve a function close to the network, like the mobile core." explained Nokia's Nick Cadwgan, speaking with ZDNet Scale. "We've got to do it very intelligently, but we've got to think about the challenge holistically.

"And I think what we're seeing with our customers is this realization that they've got to embrace the cloud and the network together," Cadwgan continued, "if they're going to deliver this vast range of services and experiences. Whether you want to talk about how they deliver it, how they host it -- there are lots of terms about central clouds and edge clouds -- we're going to need more. We're going to need evolution in the network as well to make it all happen. And we have to tie it together."

Slice and dice


As we've already explored fairly thoroughly here in Scale, an edge computing system is a cluster of servers stationed very close to where the data for computing applications is gathered, in order to minimize latency. This may be closer to the customer than the hyperscale data center -- and in the case of edge micro data center providers such as Vapor IO, right alongside the WTF where you'd expect to find the base station. It's somewhat ironic that 5G would seek to move servers away from transmitters, while edge architectures would replace them with more servers.

Read also: What is 5G? Everything you need to know

Architects participating in 5G are well aware of the implications of Cadwgan's clarion call, to start seeing the cloud and the core as, if you will, two trunks on the same elephant. The problems with this architecture begin with the question of where the head -- the part that runs things -- ends up being located.

"There's convergence, but at the same time, edge and core are different," declares Guru Parulkar, the executive director of the Open Network Foundation (ONF) and the chief of the Open Network Laboratory (ON.Lab). Parulkar's contributions to SDN are many, not the least of which is having directed the Stanford University project that led to the creation of OpenFlow, the first -- and now, the most common -- open source framework for SDN control.

"We are focusing on what you might think of as a multi-access edge," Parulkar continued, speaking with ZDNet Scale, "that will include wireline and wireless. Then the core is the core. You might have a tiered cloud -- a telco cloud and a public cloud -- but then at the edge, you have a multi-access edge that is converged in the sense that it can support multiple types of edge access -- wireline and wireless -- and it is still a common platform on which you can run both networking functions as well as customer-facing functions, customer services. You may differentiate them, but they are still running on the same platform. And in that sense, they are converged."

Wherever the edge or parts of the edge end up being physically located, from ONF's perspective, it will host the network slices for both the network-facing and customer-facing VNFs. They'll share this platform. But the central data center for customer-facing functions would be kept separate from the one for the telco's central office. Think of these slices like differently colored stripes, but only with respect to the edge servers.

Read also: Stingray spying: 5G will protect you against surveillance

"The 5G network, if nothing else, is going to have to be a very distributed cloud infrastructure," explained Oguz Sunay, the ONF's chief architect for the open source telco virtualization platform known as M-CORD (the latter part of which originally stood for "Central Office Reimagined as a Datacenter").

"That is the biggest difference: We are moving from a centralized compute paradigm to a distributed one with 5G. The main reason for that is really latency. For the first time in the history of cellular communications, [low] latency is one of the primary goals that we're tackling. That necessitates the edge cloud."

The "access" to which Sunay and Parulkar refer is different from the capital-A "Access" in "Radio Access Network." The latter is the system with which mobile devices make contact with the WTF, while the ONF idea of access is more in line with data centers and operating systems. Although a typical cloud computing platform (or a hyperconverged platform) may pool together storage, computing, and memory resources, the ONF's M-CORD design adds network accessibility to this list. ONF calls this "access-as-a-service" (AaaS).

"The edge cloud," Sunay went on, "should be a convergence area for the different use cases and business verticals. That requires the necessity for network slicing. To date, M-CORD is the only platform that has showcased network slicing that includes not only the slicing of the network functions, but also the slicing of the radio access. So we treat those resources as fundamental groups in the edge cloud."

In enterprise data center design -- particularly with VMware environments -- there's a concept called microsegmentation. It's a way of tagging resources throughout a network at a very granular level, in such a way that they may be perceived as though they were strung together. That way, a security policy can apply to each string as an individual and isolated unit, even when it physically stretches across server boundaries. It's microsegmentation that enables a new class of access isolation, making it appear that many tenants are operating completely separate systems even though they may be sharing the same memory and storage.
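
A hypothetical sketch of that tagging idea -- with made-up workloads and policies, not VMware's actual mechanism -- might look like this:

```python
# An illustrative sketch of microsegmentation as described above: tag
# workloads at a fine grain, group them by tag regardless of which
# physical host they land on, and evaluate security policy per group.
# The tags, policies, and hosts are invented for the example.
workloads = [
    {"id": "vm-01", "host": "server-a", "tags": {"tenant:acme", "tier:web"}},
    {"id": "vm-02", "host": "server-b", "tags": {"tenant:acme", "tier:db"}},
    {"id": "vm-03", "host": "server-a", "tags": {"tenant:globex", "tier:web"}},
]

# One policy per logical segment, even though segments span servers.
policies = {
    "tenant:acme": {"allow_from": {"tenant:acme"}},      # only Acme talks to Acme
    "tenant:globex": {"allow_from": {"tenant:globex"}},
}

def allowed(src: dict, dst: dict) -> bool:
    """Permit traffic only when the source carries a tag the destination's
    segment policy allows -- isolation despite shared hosts."""
    for tag in dst["tags"]:
        policy = policies.get(tag)
        if policy and not (policy["allow_from"] & src["tags"]):
            return False
    return True

print(allowed(workloads[0], workloads[1]))  # True: same tenant, different servers
print(allowed(workloads[2], workloads[1]))  # False: different tenants, same server
```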

Is microsegmentation the true identity of network slicing as ONF and M-CORD perceive it? Put another way, are we not only slicing the network but dicing it as well?

Not really, responded Guru Parulkar. "Slicing can give you a lot more ties to physical resources," he told us. "It can give you isolation; it can give you different QoS for different slices; and the slicing can be based on different attributes. So in the context of RAN, slicing gives you a lot more flexibility and control, that I don't know VMware architecture would give."

Conceivably, if a customer were to run a VMware environment on a network slice, then microsegmentation could take place in the isolated context of that slice. Likewise, if the customer ran Kubernetes clusters, they would inhabit a virtual topology inside a network slice. So the type of isolation that normally defines access and identity in a virtual data center context would itself be separate from the isolation of the virtual RAN. That said, it's conceivable that an application running in the telco context may eventually adopt a microservices architecture -- the most sophisticated model of distributed computing. So it would take a very strong isolation mechanism to keep telco microservices and customer microservices in their own respective slices, and thus to make the telco's orchestrator and the customer's orchestrator leave each other alone.
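
One way to picture that requirement -- again, a hypothetical sketch with invented names, not anyone's shipping orchestrator -- is a platform that scopes every orchestrator to its own slice:

```python
# A hypothetical sketch of the nested isolation described above: the
# platform scopes each orchestrator to its own slice, so a customer's
# scheduler and the telco's VNF orchestrator never see each other's
# workloads. All names are illustrative only.
class SliceScopedOrchestrator:
    def __init__(self, slice_id: str, registry: dict):
        self.slice_id = slice_id
        self._registry = registry          # shared physical inventory

    def deploy(self, workload: str) -> None:
        """Place a workload, recorded under this orchestrator's slice only."""
        self._registry.setdefault(self.slice_id, []).append(workload)

    def list_workloads(self) -> list:
        """Only workloads in this orchestrator's own slice are visible."""
        return list(self._registry.get(self.slice_id, []))

registry = {}
telco = SliceScopedOrchestrator("telco-core", registry)
customer = SliceScopedOrchestrator("acme-slice", registry)
telco.deploy("vEPC-user-plane")
customer.deploy("acme-billing-microservice")
print(customer.list_workloads())   # ['acme-billing-microservice'] only
```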

Yet this is not a problem that AT&T wants to find itself involved with solving. In the minds of its engineers, if they're having this problem, they're doing something wrong.

Read also: Samsung and KDDI complete 5G trial in baseball stadium

"Without getting into too many details of the architecture, obviously we don't want to mix everything together," said Igal Elbaz, AT&T's vice president for ecosystem and innovation. "So think about this as co-existing geographically. We have a physical location at the edge. We have a central office. I want to make sure my user plane and my IP services are supported from there; but at the same time, at the same geography or the same site, I can also host services."

Elbaz was very clear and emphatic in his assertions that customer services and telco services will be physically separated from one another, telling us that it wasn't even a debate -- that the matter had already been settled, and from AT&T's perspective, there was no alternative.

It is no small matter -- not only the dispute, but the dispute over whether a dispute exists. It gets to the heart of where the edge will finally be located, but also whether China Mobile's original notion of virtualizing the RAN -- a notion which AT&T says it supports in its entirety -- is even feasible.

Redirection


On the other side of the hills, where 5G must metamorphose from a technology into a market, is where the west gets not only wild but weird. At last, we can see the end or ends of our trail or trails, in a realm of the unresolved, the indeterminate, and from time to time the completely imaginary.

Read also: How US carriers moved up the timeline on 5G

At Waypoint 4, we come to the first fork in the trail, and it's completely unexpected. The potential success of virtualization in 5G may yet call into question whether those "smart devices" you've probably read about truly need to be smart. If we can relocate the base station functions to the cloud and make them faster in the process, could we potentially do the same with Internet of Things devices? Or any other kinds of devices?


Waypoint 5 brings us to the question of whether the generic, "white box" servers on which hyperscale data centers such as Facebook's were founded are truly what the major telcos need to manage their network services. Should they instead borrow a page from the playbook of FPGA accelerator developers or ARM server makers, and remake the telco server in their own image? And finally, at Waypoint 6, we come to realize there's another set of stakeholders in 5G whose interests aren't always taken into account: the equipment manufacturers, who may not be as interested in the cheapening and commoditization of their principal product as the architects had hoped.

This ride is about to get wild. Hold firm.
