
What is SDN? How software-defined networking changed everything

The internet began as a system for applying addresses to servers. Now it's a means for giving names to services, and distributing those services across the planet. SDN gave rise to a new way of implementing computing on the broadest scale of infrastructure, and has become, for better or worse, the whole point of networking.
Written by Scott Fulton III, Contributor

Executive Summary

The phrase software-defined networking (SDN) was coined when it was necessary to distinguish the concept from the hardware-based variety. Since that time, "SDN" has come to mean the type of dynamic configuration that takes place whenever software-based services in a data center network are made accessible through an Internet Protocol (IP) address. More to the point, SDN is networking now.

Read also: VMware buys SDN startup VeloCloud

In the broadest sense, any software that manages a network of dynamically assigned addresses -- addresses which represent services provided or functions performed -- is utilizing some variety of SDN. The web gave rise to the idea of addressing a function by way of a name that resolves (represents, as in a directory) to an IP address. Originally, the web was intended to be a system for addressing content in this way, but engineers soon saw how efficient an evolved form of that system could be in addressing functionality.
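
To make that idea concrete, here is a minimal sketch, using nothing but Python's standard library, of the name-to-address resolution described above; the hostname is purely an example.

```python
# A minimal sketch of name-to-address resolution, using only the standard
# library. The hostname below is an arbitrary example, not a real service.
import socket

# Ask the resolver which IP addresses the name currently points to.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "www.example.com", 443, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])  # the address behind the name, which may change over time
```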

This realization made the first web applications feasible. Then, it inspired software architects to envision distributed applications, with a multiplicity of functions running asynchronously, across a mix of servers that no longer needed to share the same facility. Absolutely every revolutionary technology to take shape in the 21st century data center is made possible by some manner of software-defined networking.

Cloud platforms such as OpenStack rely on the native ability of their workloads to be relocated to different areas of the physical infrastructure, without noticeably impacting their performance. When a workload is defined by its virtual address, the systems that coordinate it must be re-educated whenever that address points to a different physical location.

4G wireless communication introduced SDN to telecom networks. The introduction of network functions virtualization (NFV) -- which recasts dedicated network appliances as software running on commodity servers -- triggered a radical simplification of telco data center architecture that remains ongoing, enabling customer services to be staged on less expensive servers nearly identical to those used in enterprise data centers. With new NFV models such as CORD (Central Office Re-architected as a Datacenter), a provider's services may be migrated, in whole or in part, to the customer's own premises, leveraging the customer's own hardware and infrastructure to reduce latency and improve performance.

Operations support systems (OSS), used in large data centers and typically by service providers, utilize NFV to spin up new customer services on demand. When faults are detected, the segregation of those services within the network enables the administrator to isolate, identify, and, if need be, replace the entire service through a central console.

Containerized service platforms, such as Kubernetes clusters running Docker or OCI containers, utilize SDN in some form to map network addresses to individual workloads. On a containerized platform, each workload has an address. A sophisticated orchestrator, coupled with a software-based load balancer such as NGINX, makes it possible for these workloads to be replicated ("scaled up") on demand, without changing the address used to access the service.
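
As an illustration of that stability, here is a hedged sketch using the official Kubernetes Python client; the Deployment and Service names ("web", in the "default" namespace) are hypothetical. Scaling the workload changes how many containers answer, but the Service's cluster IP -- the address clients actually use -- stays put.

```python
# A hedged sketch assuming the official `kubernetes` Python client and a
# hypothetical Service/Deployment pair named "web" in the "default" namespace.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig for cluster access

# The Service's cluster IP is the stable address clients use.
svc = client.CoreV1Api().read_namespaced_service("web", "default")
print("Stable service address:", svc.spec.cluster_ip)

# Scale the backing workload up; the address printed above does not change.
client.AppsV1Api().patch_namespaced_deployment_scale(
    "web", "default", {"spec": {"replicas": 5}})
```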

Virtual network infrastructure is making it feasible for organizations to provision their workloads on whatever grade of infrastructure makes the most sense at the time -- on-premises, co-located, or on the public cloud -- and shift those workloads from place to place in accordance with demand. VMware has been busy incrementally updating its NSX virtualized network platform to support seamless provisioning of VM-based and containerized workloads on Amazon AWS and Microsoft Azure, enabling public cloud infrastructure to serve as backup capacity in high-volume situations.

SDN effectively converts all the major resources provided by servers -- compute capacity, memory, storage, and even bandwidth itself -- into commodities. Methodologies such as hyperconvergence are made possible by SDN. With each of these classes of resource dumped into pools of building blocks, like separating Legos of different colors and shapes, a configuration management system can "spin up" a virtual network with only the resources needed to execute a specific workload. That configuration can be tailored over time, adjusted in response to revelations about that workload's performance.

This technology completely eliminates the need to design server clusters and data centers around the workloads they host. Further, it cancels out the original purpose of the Domain Name System (DNS) for the internet: Declaring, by way of domains, which applications run on which boxes.

Read also: What are the fastest DNS providers?

Today, no part of the act of running applications and services on modern servers omits the principal functions of SDN. It is no longer separate from networking. SDN is networking, from here on out.

What SDN actually does

You can tell when software-defined networking has impacted a data center through a cursory examination of its hardware. SDN dramatically intensifies the processing demands placed on servers: Utilization rises, and storage is more condensed. The physical switches are replaced with radically simplified models, many of which do not carry brand names. The Open Compute Project -- started in 2011 by Facebook as an effort to drive simpler specifications for data center and networking hardware -- is an exercise in designing servers for SDN.

Read also: Edge, core, and cloud: Where all the workloads go

Much of the logic for SDN is moved inside the servers' central processors, as just another user function. Some of it is moved inside simple switch and router appliances, where the software is composed of open-source operating systems and open-source controllers. Yet all of these phenomena are the side effects of SDN, not the purpose. These changes happen because the real purpose of SDN is to move networking logic to a place where it can be more directly controlled and managed, and even more importantly, changed to suit the dynamics of variable workloads.

The basis of SDN

Here are SDN's principal architectural tenets:

The flow of user data is separated from the flow of control instructions. In a physical network, data packets that belong to an application take the same route as the internal instructions the network components need to coordinate their actions. As SDN engineers put it, the control plane is separated from the data plane. This makes it feasible for one controller in a network to make routing decisions for any number of devices, rather than a plurality of devices, each with its own handle on the control plane and all of them having to coordinate -- a job that requires quite a bit of messaging, which places stress on the network.

(Some diagrams show the addition of a third plane, often called the "management plane." Usually this upper tier is added by a vendor that wants to demonstrate a competitive edge. A management plane may not necessarily be a bad thing, but in actual SDN architecture, it may not really be a separate thing.)

With the data plane separated, the flow of packets in that plane may be tailored, and altered when necessary, based not just upon their eventual destination but also the most efficient route to reach that destination. When Internet Protocol was first devised, the basic job of a network device was forwarding -- passing packets on in the general direction of their respective end goals. There appeared to be a peculiar logic to it all, but there really wasn't -- and for a time, that was the beauty of it. But in a more sophisticated data center, the abstraction of the data plane gives software the opportunity to apply reason to its logic -- for example, building data flows based on security policy, rather than adapting security policy to fit unalterable data flows.
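
A purely illustrative sketch of that separation -- plain Python, no real protocol or vendor API -- might look like the following: one control-plane program computes forwarding tables for many switches and applies a security policy as it does so, while the data-plane switches do nothing but look up and forward.

```python
# Illustrative only: a toy control plane / data plane split. A hypothetical
# policy steers a quarantined prefix toward a firewall port.
QUARANTINED = {"10.9.0.0/16"}

class Controller:
    """Control plane: knows the topology and the policy, decides every route."""
    def __init__(self, routes):
        self.routes = routes  # {switch_id: {prefix: output_port}}

    def table_for(self, switch_id):
        table = dict(self.routes[switch_id])
        for prefix in QUARANTINED & table.keys():
            table[prefix] = "firewall-port"  # policy shapes the data flow
        return table

class Switch:
    """Data plane: forwards packets with whatever table the controller installs."""
    def __init__(self, switch_id, controller):
        self.table = controller.table_for(switch_id)

    def forward(self, prefix):
        return self.table.get(prefix, "drop")

ctrl = Controller({"sw1": {"10.1.0.0/16": "port2", "10.9.0.0/16": "port3"}})
sw1 = Switch("sw1", ctrl)
print(sw1.forward("10.1.0.0/16"))  # -> port2 (topology)
print(sw1.forward("10.9.0.0/16"))  # -> firewall-port (policy, not topology)
```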

The device that controls network functions is replaced with an operating system. That network operating system (NOS) may run on a plain, zero-frills, non-branded server, such as an x86. It communicates with other components by way of an open protocol, the original and most prominent of which -- devised by the creators of modern SDN, many of whom hail from Stanford University -- is OpenFlow.

(Image: Open vSwitch -- Linux Foundation)

The role of each networking appliance is replaced by SDN with a virtual switch (vSwitch) or a virtual router (vRouter). VMware (perhaps inadvertently) created one of the first such non-device devices, dubbed vSwitch, ostensibly as a way to facilitate networking in its vSphere virtualization environments. There are a handful of alternatives. Since the switch remains a critical networking component whether it's virtual or physical, Cisco devised its own virtual switch, the Nexus 1000V, with the intent of substituting for VMware's design, though as you can imagine, VMware began blocking this effort. The Linux Foundation maintains an open-source alternative called Open vSwitch. In practice, a vSwitch is meant to be paired with a hypervisor in a virtual machine (VM) environment such as vSphere. Since a containerized environment such as Kubernetes is, by definition, not a VM environment, it requires the addition of another virtual component such as the Linux Foundation's OVN. This lets an orchestrated container environment run in a logically defined network.
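
For a sense of what driving a vSwitch looks like in practice, here is a hedged sketch that shells out to Open vSwitch's standard command-line tools (ovs-vsctl and ovs-ofctl) from Python; the bridge and port names are hypothetical, and Open vSwitch must already be installed.

```python
# A hedged sketch driving Open vSwitch's own CLI tools via subprocess.
# Assumes Open vSwitch is installed; "br0" and "veth-vm1" are made-up names.
import subprocess

def sh(*args):
    subprocess.run(args, check=True)

sh("ovs-vsctl", "add-br", "br0")                 # create a virtual switch
sh("ovs-vsctl", "add-port", "br0", "veth-vm1")   # attach a workload's interface

# Express a security policy as a flow rule: drop traffic from a banned subnet.
sh("ovs-ofctl", "add-flow", "br0",
   "priority=200,ip,nw_src=203.0.113.0/24,actions=drop")
```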

Read also: What is Docker and why is it so darn popular?

Why containerization came about

In an earlier era, servers hosted software, and servers were networked through a gateway. The first phase of virtualization simply cast that same model in software, which is why VMware's vSwitch is directly tied to the hypervisor. SDN, in its full fruition, removes the middleman, making the software the thing that is networked. Now applications can have addresses, which is actually one of the premier benefits of containerized, distributed environments, using Docker or CoreOS containers under Kubernetes or Swarm orchestration.

(Image: OpenContrail in a Docker network -- Juniper Networks)

To stay competitive in networking and to avoid being rendered obsolete by history, network equipment vendors have either blazed the trails for SDN or found themselves adopting it reluctantly, perhaps looking a little singed in the process. One vendor clearly in the former camp, not the latter, is Juniper Networks. It plunged into the SDN field during the fateful year of 2012, first by purchasing a firm called Contrail, and then by building it into an open-source virtual appliance ecosystem unto itself: OpenContrail. As the diagram above depicts, OpenContrail serves as a virtual device that provides the routing logic for distributed operating systems that host Docker containers.

Read also: Juniper's OpenContrail SDN rebranded as Tungsten Fabric

"Contrail is our SDN controller that supports automation and programmability," remarked Juniper Networks product marketing director Jim Benson, speaking with ZDNet. "It's a big part of operating and automating both a virtual and a physical infrastructure. It orchestrates the VNFs [virtual network functions] and puts together the service chains, all the way to the edge and to the core. Contrail uses vRouter and, in a distributed data center infrastructure, reach into any part of the cloud, string up the required VNFs, stitch together the different pieces of the service, and deliver a custom service to a certain vertical, or a group of end customers. It automates that whole process of customizing the services that can be offered, ultimately, to our service provider customers."

How SDN came to be

In the third edition (1996) of his authoritative textbook Computer Networks, Prof. Andrew S. Tanenbaum provided perhaps the first "accidental" definition of SDN. "A good way to think of the network layer," he wrote, "is this. Its job is to provide a best-efforts way to transport datagrams from source to destination, without regard to whether or not these machines are on the same network, or whether or not there are other networks in between them."

Software-defined networking today is precisely this. Over the last 22 years, "machines" have become virtual entities -- software-based constructs that communicate the way a physical machine did in the 20th century. VMware popularized the notion of a virtual machine (VM) fulfilling the same task as a physical PC or server, though entirely in software. But these VMs communicated over a real network. So, at first, hardware routers had to be reconfigured to make VMs available over a local loop. That meant network controllers and routers had to treat VMs differently from physical machines.

In other words, physical IP networking suddenly violated Tanenbaum's canonical definition: Routing had to pay attention to the route, if any packet in a network of virtual machines was ever to reach the right destination.

Read also: VMware kicks off international expansion of VMware Cloud

The solution, if there was to be one, appeared at first to involve VMware's early encapsulation of the principal functions of PC hardware as virtual infrastructure. The first hypervisors incorporated not just the BIOS, but the NIC as well. Perhaps inadvertently, VMware demonstrated that the role of controlling network visibility for the machine running an application did not have to reside in hardware itself.

Yet this wasn't really the creation of SDN, just one of its major seeds. The true virtue of SDN is its programmability -- that it is truly software.

9/11 triggers step No. 1

Soon after the attacks of September 11, 2001, a simulation engineer at Livermore Labs named Martin Casado transferred to an intelligence sector at the US Dept. of Defense, where he would serve as a network auditor. In a 2011 presentation, Casado discussed the many challenges he faced in doing his job: For instance, replicating and placing policy instructions on routers at strategic locations in the network, so that packets wouldn't be routed around them. He could build a template that automated the process, but such a template would effectively hard-wire the network into a particular topology.

(Image: Martin Casado's presentation -- Open Networking Foundation)

"Adding or moving a machine was like an act of aggression," Casado told his audience. "In one network, we had to update eight points of state any time that we added or moved a machine. . . And having come from somewhat of a networking background, I had it very difficult, reconciling these networks that we'd created with the networks I'd learned about as an undergraduate." Those networks, he argued, would be scalable and adaptable, at least in the textbooks.

But in practice, Casado and his colleagues ended up intentionally devising bottlenecks, in order to ensure that packets got routed through the appropriate routers where the policy rules were deployed. The alternative was to replicate the same rules all over the system, which would have meant implementing conditions for innumerably more behaviors. So the tradeoff was predictable sloth versus unlimited unmanageability.

Programming the network for adaptation is a much simpler affair, Casado discovered, when the channel on which network commands are given can be kept separate from the channel on which user data is exchanged. Not only could the operation of the network be tremendously accelerated, but the hardware running the control functions could be substituted with much cheaper, non-proprietary parts.

A protocol rewrites the unspoken laws

Casado's presentation came by way of introducing his audience to a protocol he was devising called OpenFlow. He presented it as a methodology for implementing software-based control of networking hardware, thus enabling the hardware to become far simpler -- and presumably cheaper. OpenFlow's ability to dynamically manage the state of routers, he asserted, could be leveraged to decouple those components of the network layer responsible for the network itself, from those that were hosting the application. This would make feasible, he continued, a new realm of possibility for distributed software -- functions and services running on multiple servers that may or may not be orchestrated together, depending on priority.
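
OpenFlow controllers are ordinary programs today. Below is a minimal sketch assuming the open-source Ryu framework (which is not mentioned in this article) and OpenFlow 1.3: it installs a single "table-miss" rule on each switch that connects, telling the hardware to hand unmatched packets to the controller -- the decoupling Casado described.

```python
# A minimal sketch of an OpenFlow controller application, assuming the
# open-source Ryu framework and OpenFlow 1.3. It installs one low-priority
# "table-miss" flow on every switch that connects, so unmatched packets are
# sent to the controller for a decision.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class MinimalController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath          # the switch that just connected
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything; forward unmatched packets to the controller.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)              # push the flow rule to the switch
```

Run under ryu-manager, a script like this can make flow decisions for any OpenFlow 1.3 switch -- physical or virtual -- that connects to it.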

Read also: OpenFlow SDN protocol flaw affects all versions

In 2003, Casado had already presented the gist of this idea to Cisco. In what may yet be recorded in history as the most beneficial mistake a technology corporation could make, after IBM's refusal to exclusively license PC-DOS from Bill Gates and Paul Allen, Cisco rejected the concept on its face, for reasons Casado said centered around its obvious inability to be marketed as a product.

"We hear the term 'centralization' thrown around a lot," Casado remarked in 2011, "but I'd like to encourage you to think about this a little bit differently. What OpenFlow and SDN does is allow you to decouple the distribution model of your control logic, from the physical topology. And that control distribution model can be anything you want: It can be purely distributed, if you want. . . it can be totally centralized, if you want."

It was another of those moments in which the world began turning on a completely different axis, and it's all recorded for history.

Not quite a business

SDN would resolve the problem of network programmability by creating virtual appliances that performed the same role as physical components. In 2006, Dell Computer began offering an experimental option with some of its servers: Preinstalling an Open Flexible Router (OFR) from a company called Vyatta. It would do the work of a Cisco router and Cisco PIX firewall, only running on the Dell server's x86 processor. When OFR won over so much as a single major customer, it made news. Some have said that Vyatta's acceptance at this time in history was the snowball event that triggered the avalanche.

For the next several years, the first SDN experiments didn't exactly take place in a vacuum, but SDN wasn't yet an industry unto itself. Here is where Martin Casado doubled down.

(Image: SDN architecture, from the Open Networking Foundation's 2012 white paper)

The container changes absolutely everything

The advent of containerization, spearheaded by Docker Inc. in 2013 and fueled by a newly empowered open-source movement, is a direct outgrowth of SDN. Although the Linux operating system had long utilized namespaces to compartmentalize processes for isolation and security, Docker took that notion, planted it into its cloud-based infrastructure, and grew it into a platform overnight, if that long.

Read also: Cisco jumps into containers

A container (now not just part of Linux but Windows as well) is a unit of software that has only one way in or out: Its network address. Addressing a container is like sending a message to a remote server, even if that container resides on the same computer as the one addressing it. The methodology of SDN employed in networking these containers determines how they coordinate with one another -- and, in the case of Kubernetes, with the orchestrator as well -- to produce an application.
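
Here is a hedged sketch of that idea using the Docker SDK for Python and the requests library; the image name is just an example, and it assumes a Linux host with Docker's default bridge network. Even though the container runs on the same host as the script, the only way to reach it is through the address its network assigned.

```python
# A hedged sketch using the Docker SDK for Python (`docker` package) plus
# `requests`. Assumes a Linux host with Docker's default bridge network;
# the image name is just an example.
import docker
import requests

client = docker.from_env()
container = client.containers.run("nginx:alpine", detach=True)

container.reload()  # refresh attributes so the assigned IP is populated
ip = container.attrs["NetworkSettings"]["IPAddress"]

# The only way "in" is the address the container network assigned to it.
print(requests.get(f"http://{ip}/", timeout=5).status_code)

container.stop()
container.remove()
```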

VMware elevates virtual infrastructure further

(Image: VMware NSX Virtual Cloud Network -- VMware)

By virtue of VMware's purchase of Nicira in 2012, the platform concept created by Martin Casado, which gave rise to the first virtualization of network infrastructure, is now the basis of VMware's NSX "Virtual Cloud Network" platform. As of May 1, NSX enables a virtualization environment such as vSphere to stage workloads using resources incorporating on-premises servers, as well as on-demand cloud-based infrastructure from both Amazon AWS and Microsoft Azure.

"You're now in a world where applications reside everywhere and data is everywhere," stated Rajiv Ramaswami, VMware's COO for products and cloud services, during a recent press conference. "This has profound implications in terms of the network that needs to support all of this and enable all of this to happen."

I asked Ramaswami, does this mean VMware has completely embraced the vendor-agnostic model of container orchestration embodied by proponents of Kubernetes? Today's vSphere, he responded, is capable of running container-based workloads both inside and outside of Kubernetes, including with platforms such as Red Hat OpenShift and Pivotal's Cloud Foundry.

"In every one of these cases," the COO told ZDNet, "what we do is, we tie a virtual network endpoint to that container, on a per-container basis. Once we have that association of a network endpoint with a container, then we can do everything that we do with a VM. We can tie those containers together; we can secure and microsegment those containers together. That's what is done today, and it's already available and working."

A container is the simplest unit yet devised for implementing a portable workload that may always be reached by its network address, and by that alone. The container is the basis for the cloud-native development model, and for the world's hyperscale computing ecosystem as it exists today. It would not have been feasible without nearly two decades of relentless effort by academics to produce SDN.
