Software-defined networking puts an end to patchwork administration
Re-Envisioning the Network
Globalization, rapidly increasing numbers of devices, virtualization, the cloud, and "bring your own device" make classically organized IP networks difficult to plan and manage. Rather than struggling on with patchwork fixes, some admins address these problems with a radically new approach: software-defined networking.
Traditional IP networks are made up of a number of autonomous systems: switches, routers, and firewalls. The network devices evaluate incoming data packets by checking their tables and route the packets appropriately. For this to work, these systems need to be familiar with the network topology (Figure 1). The position of a particular device on the network defines its function; a change of position causes complication and reconfiguration.
Over time, the Internet Engineering Task Force (IETF) has given administrators a series of protocols to help spice up the simple control logic. However, these extensions typically address only a specific task and are comparatively isolated. In addition, the growing number of these standards (in conjunction with vendor-specific optimizations and software dependencies) has bloated the complexity of today's network.
The world outside the network, however, has not been standing still. A more peer-like flow of data is now taking the place of traditional client-server traffic. More and more computers and applications are involved in communication. Server virtualization pushes computers around the network, thus dynamically changing whole areas of the network topology. Smartphones and tablet PCs are causing the number of nodes to skyrocket.
The standard view of a TCP/IP network as a collection of hosts, subdivided by autonomous routers and switches, has served admins for years, but it is starting to show signs of age. Modern bandwidth demands, the proliferation of personal networking devices, and the complex communication demands associated with virtualization and cloud services have strained the existing infrastructure. At the same time, IT has followed a general trend toward fewer personnel and better, more powerful management tools, and many experts have asked whether more efficient management of networking devices might lead to more efficient networks.
According to the Open Networking Foundation, the principal problems with the conventional networking infrastructure model are:
- Complexity leads to stasis – A networking device is configured to fill a specific role based on its location on the network. Moving a device or making a change to network-wide policies often requires extensive manual reconfiguration. The time and labor required to implement changes mean that changes rarely happen, so the topology does not adjust dynamically to meet the network's changing needs.
- Inability to scale – The demands of parallel processing, Big Data, and network-based storage bring a need to eke out better performance at a more massive scale than the conventional network infrastructure was ever intended to provide.
- Vendor dependence – A higher degree of coordination and cooperation between network devices requires some overriding communication standard or else all the devices must be from the same vendor.
Software-defined networking (SDN) began as a way for device vendors and other Internet companies to address these problems in a unified, vendor-neutral solution that maximizes flexibility and performance.
On a conventional network, a device such as a switch or router works autonomously. The device gains knowledge of the network through its own interactions with surrounding computers and devices, or by communicating with other nearby routers through any of several available routing protocols. In the real world, administrators often spend significant time on manual configuration – for firewall or performance tuning purposes, or even just to encode knowledge of the surrounding network that the device will then use to make forwarding decisions.
Because the device operates autonomously, it must expend processing time on building and maintaining its own routing table – and on making forwarding decisions based on often complex criteria.
Adding to the confusion, virtual devices sometimes migrate across the network, requiring spontaneous adjustments so the network can associate a virtual port with a new physical location.
Software-defined networking offers a much simpler role for a networking device. In the SDN model (Figure 2), switches and routers still forward packets, but they don't expend as much time or processing power determining how to forward packets. Instead, a computer called the controller, which has complete knowledge of the network, creates a lookup table (called a flow table) for each device that allows the device to forward packets based on a simplified form of pattern matching.
The device looks for a pattern in the header of the packet and consults the table to associate the pattern with an instruction. If a packet arrives that doesn't match any pattern, the device can simply forward it to the controller for further analysis and processing.
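The match-and-act logic described above can be sketched in a few lines of Python. This is an illustrative simplification, not a real switch API: the field names, wildcard convention, and action strings are assumptions made for the example.

```python
# Minimal sketch of SDN-style flow-table forwarding.
# Field names and actions are simplified assumptions, not a real device API.

def make_flow_table():
    # Each entry pairs match criteria with an action; None acts as a wildcard.
    return [
        ({"dst_ip": "10.0.0.5", "protocol": None}, "forward:port2"),
        ({"dst_ip": None, "protocol": "icmp"}, "drop"),
    ]

def matches(criteria, packet):
    # A packet matches if every non-wildcard field agrees.
    return all(v is None or packet.get(k) == v for k, v in criteria.items())

def handle(packet, table):
    for criteria, action in table:
        if matches(criteria, packet):
            return action
    return "send_to_controller"   # table miss: punt the packet to the controller

table = make_flow_table()
print(handle({"dst_ip": "10.0.0.5", "protocol": "tcp"}, table))  # forward:port2
print(handle({"dst_ip": "10.9.9.9", "protocol": "udp"}, table))  # send_to_controller
```

Note how little work the device does: no routing protocol, no topology computation, just a table scan. Everything clever happens on the controller that populated the table.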
The SDN paradigm described in Figure 2 requires much less sophisticated routing devices. SDN also makes the network more programmable. The controller can respond dynamically to changes and needs on the network in subtle and complex ways that would not be possible if all the devices were acting autonomously. The only remaining puzzle piece is how to manage communication between the controller and the device.
SDN calls for a connection between the device and the controller that is separate from the data plane (or forwarding plane) where the device performs its forwarding functions. This network of connections between the controller and the devices, which is known as the control plane, is logically envisioned as a separate network (Figure 2), although it can be implemented through virtualization or a VPN-style network of encrypted virtual connections.
This centralized control makes a software-defined network more agile and easier to control. Another important benefit is that it makes the network more programmable. Separation of the control plane from the data plane, and the efficient delivery of configuration information to the networking device, makes it easy to design custom tools for monitoring, managing, and optimizing network traffic.
Divide and Conquer
The goal of SDN is separating configuration from the infrastructure. In his 2008 doctoral thesis, Martin Casado described the idea of logically separating the control logic from the data flow. Casado and his academic supervisors at the time – Nick McKeown and Scott Shenker – are thus considered the progenitors of SDN.
In the same year, Casado, McKeown, and Shenker founded Nicira Networks, which focused on the new topic. In 2012, VMware acquired the company for an astonishing US$ 1.2 billion. Nicira's Network Virtualization Platform is the foundation of VMware's NSX. (See the sidebar titled "SDN and VMware.")
SDN and VMware
Given the growing competition from Xen, KVM, and especially OpenStack, virtualization market leader VMware has invested a large amount of money in harmonizing its existing functions with cloud computing environments.
VMware enters the software-defined networking game with its Software Defined Data Center (SDDC) initiative. Instead of building its own product from scratch, the group went hunting: The beast it bagged was Nicira, one of the pioneers of the SDN market.
VMware primarily acquired Nicira because of its Network Virtualization Platform (NVP), an add-on for OpenFlow and Open vSwitch. NVP is a framework built around Open vSwitch that exposes the functionality provided by OpenFlow through commercial but standardized interfaces, thus offering significantly more functionality than Open vSwitch does out of the box.
With one fell swoop, VMware acquired both the software and the entire Nicira developer team. A short time later, NVP was relaunched under the VMware brand: VMware NSX was born, and NVP was part of it.
NSX has a number of features tailored for different contexts. Users who have, thus far, relied exclusively on VMware's vSphere and vCenter products will probably only use the NVP controller cluster and continue to rely on the previously existing Virtual Distributed Switch (Figure 3) for hypervisors.
If you use Linux hypervisors based on KVM or Xen, things look different: NVP, which was originally designed more for the Linux context than for the classical VMware universe, takes over command.
The ideal NVP setup in the enterprise depends on whether VMware components are already in use or whether VMware is being introduced into the datacenter as a new provider. What all variants have in common is that there is an NVP controller cluster. In effect, this cluster is the brain of the entire SDN infrastructure: It contains the database with the specific configuration for the existing virtual customer networks.
This database is not a combination of standard tools such as MySQL. Instead, the controller cluster is an NVP in-house development that offers features such as automatic data replication. In current NVP installations, the controller is always a redundantly configured three-node cluster. If one of the servers fails, NVP automatically takes care of failover.
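The reason a three-node cluster is the minimum for safe failover comes down to majority quorum. The following sketch models only that arithmetic; it is an assumption-based illustration, since NVP's actual replication protocol is not described here.

```python
# Hedged sketch: why a three-node controller cluster survives one failure.
# Models majority quorum only; not NVP's actual replication mechanism.

def has_quorum(alive, total=3):
    # A cluster keeps serving as long as a strict majority of nodes is reachable,
    # which prevents two isolated halves from both accepting writes (split-brain).
    return alive > total // 2

print(has_quorum(3))  # True  - all nodes healthy
print(has_quorum(2))  # True  - one node failed, failover succeeds
print(has_quorum(1))  # False - double failure or partition, cluster halts
```

With two nodes, losing either one drops the cluster below a majority, so three is the smallest size that tolerates a single failure.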
The NVP controller comes part and parcel with a RESTful API, which opens up the option of changing the NVP cluster configuration via HTTP; the overall behavior of the cluster can be configured in detail this way. Of course, NVP also has its own consumer for this API: a comprehensive web interface that issues REST commands in the background.
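In practice, such a REST call is just an authenticated HTTP request carrying JSON. The snippet below shows the general shape with Python's standard library; the hostname, endpoint path, and JSON fields are hypothetical placeholders, not documented NVP routes.

```python
# Illustrative REST request to an SDN controller API.
# The URL path and JSON body below are hypothetical, not real NVP endpoints.
import json
import urllib.request

def build_request(controller, name):
    # Assemble a POST that would ask the controller to create a logical switch.
    body = json.dumps({"display_name": name}).encode()
    return urllib.request.Request(
        f"https://{controller}/api/lswitch",   # hypothetical endpoint
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("nvp.example.com", "web-tier")
print(req.get_method(), req.full_url)
# Sending it would be: urllib.request.urlopen(req)
```

A web interface like NVP's does essentially this behind every button click, which is why the same API lends itself to scripting and automation.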
It is only logical that the Nicira Networks founders, led by Shenker, were also involved in founding the Open Networking Foundation (ONF) in March 2011. The aim of the ONF is to support and monitor interfaces and standards for SDN. Google, Facebook, Microsoft, Deutsche Telekom, Verizon, and Yahoo – all genuine heavyweights – are founding members. One important role of the ONF is maintaining and promoting the OpenFlow standard, a vendor-neutral specification defining communication between controller and devices on an SDN network. (See the article on OpenFlow elsewhere in this issue.)
Noticeably absent from the list of ONF founding members is network hardware giant Cisco. Talk of new paradigms and vendor neutrality is always a bit threatening for any industry leader, but Cisco didn't get to be number one by ignoring important new trends, and the company has recently made an effort to incorporate SDN techniques into many of its new products. (See the box titled "SDN and Cisco.")
SDN and Cisco
SDN tends to take the intelligence out of the network hardware and shift it to higher abstract layers. This solution is a good thing for companies like VMware, but it threatens traditional hardware manufacturers like Cisco.
As a first response, Cisco offered a closer tie-in between its own physical infrastructure and SDN-oriented components. The Open Network Environment (Cisco ONE) provides a programmable platform that includes APIs, agents, controllers, and components for overlay networks. The basis for this platform is the Cisco IOS system software, along with IOS-XR and NX-OS. In addition, Cisco has programmed OpenFlow agents for its Catalyst 3560 and 3750 switches.
Cisco was initially cautious, so as not to threaten its traditional business. Thus, it was all the more surprising when Cisco presented a separate application-centric, datacenter-wide architecture last November that had the distinctive traits of SDN, while venturing quite a way into the realm of private cloud computing.
Cisco ACI (Application Centric Infrastructure) automatically adapts the network to changing application requirements and provides IT applications with on-demand services by means of virtualization. At the center of ACI are the new Nexus 9000 data center switch, with a data throughput of 60Tbps, and the Application Policy Infrastructure Controller (APIC), which uniformly manages the hardware of the virtual and physical network segments. ACI uses policy templates to automatically assign network resources and security policies.
The freely programmable APIC lets you connect several appliances into a cluster. According to the vendor, APIC supports all the applications in the fabric, regardless of whether they are running on virtual or physical servers. Virtual machines – Cisco is compatible with all major hypervisors – are ready in minutes and can be managed in real time. Cisco has disclosed the APIC APIs and invites open source projects such as OpenStack to integrate their tools.
In mid-February, Cisco announced that it had extended APIC with an enterprise module (APIC EM) that takes ACI beyond the data center to wide area networks and campus networks. For compliance management, APIC EM offers network-wide Quality of Service and accelerates intelligent WAN deployments. More importantly, the extension gives APIC genuine SDN capabilities by automating many configuration and policy changes across the entire network. According to Cisco, this change will make network management and troubleshooting more efficient because it treats the entire network as a unit.
Pros and Cons
SDN centralizes intelligence with the controller, which exchanges the required information with the devices via standardized interfaces. Admins love this solution because it decouples the development of the network from the lifecycle of the underlying hardware. The separation of the data flow and control logic also centralizes management and gives administrators visibility across the entire network.
Admins increasingly have access to new ways of making the data flow more efficient. A firewall no longer has to examine each packet if previous tests were successful and sufficient. Abstracting away the vendor-specific characteristics of individual network devices now supports the automation of configuration tasks that were previously impossible.
Properties and even individual services are no longer defined at the port level but at a central – perhaps even global – location. Ultimately, SDN provides an abstraction of the network layer that allows business processes to drive network management. You can tailor a network to match your application, because the application knows the network's capabilities.
Of course, the news is not all good. In SDN, the controller is the brain of the network – and thus the single point of failure: If it fails, a meltdown is inevitable. To operate it without high-availability concepts borders on negligence. More questions arise almost automatically: How many devices can a controller handle? How do you organize the controllers so that they are highly available and also scale well? Admins use a number of different deployment models for applying SDN in the most efficient manner (see the box titled "Variations").
Variations
SDN comes in several forms. The symmetric model centralizes control to the extent possible; its susceptibility to failure is the most obvious and perhaps most serious disadvantage in everyday operation.
The asymmetric approach tries to mitigate concerns about a single point of failure; the individual systems know the relevant configurations and thus continue to work in case of a failure of the control logic. Typically, the administrator organizes the network in cells that work autonomously. The disadvantages of this solution are the unnecessary redundancy of information and the additional decentralized management.
A further distinction is where SDN's brain resides. In highly virtualized environments, it is useful for the hypervisor or the host to do all the thinking. This design differs from the network-centric approach, where dedicated network devices handle the SDN tasks. A hybrid is conceivable, even if it detracts from the benefits of software-defined networking.
A third way of differentiating SDN models relates to how information is distributed. On one hand, the controller can distribute information via the well-known broadcast and multicast mechanisms, but this approach increases the network load. The alternative is distributed hashing and distributed lookup tables, which means sending far less information over the network. This floodless approach is less centralized, but it comes with the same disadvantages as the asymmetric model.
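The floodless idea can be illustrated with a few lines of Python: instead of broadcasting a lookup to every node, a key such as a MAC address hashes deterministically to the one node responsible for it, so any participant can resolve it with a single directed query. The node names below are invented for the example, and real systems use consistent hashing so that adding a node moves only a fraction of the keys.

```python
# Sketch of floodless, hash-based lookup distribution.
# Node names are made up; real deployments use consistent hashing.
import hashlib

NODES = ["cell-a", "cell-b", "cell-c"]

def responsible_node(key, nodes=NODES):
    # Hash the key (e.g., a MAC address) and map it onto one node.
    # Every participant computes the same answer, so no broadcast is needed.
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

print(responsible_node("00:1a:2b:3c:4d:5e"))
```

Because the mapping is a pure function of the key, lookups cost one unicast message instead of a network-wide flood, which is exactly the traffic savings the floodless model promises.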