VMware NSX and Cisco Nexus 1000v Architecture Demystified

Author:
Matt Conran
Matt Conran is a Network Architect based out of Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics spanning SDN, OpenFlow, NFV, OpenStack, cloud, automation and programming.

Network virtualization brings many benefits to the table – reduced provisioning time, easier and cheaper network management, and agility in bringing up sophisticated deployments, to name a few. A large number of network and data-center architects around the globe are evaluating VMware NSX and Cisco Nexus 1000v to enable network virtualization in their data-centers. This article (part 1 of a 3-part series) walks through the architectural elements of VMware NSX & Cisco Nexus 1000v, and explains how Ravello (powered by nested virtualization and networking overlay) can be used as a platform to deploy and run each solution with a couple of clicks during the decision-making process. Part 2 compares the capabilities supported by Cisco Nexus 1000v and VMware NSX, and Part 3 walks through the steps to create a Cisco Nexus 1000v & VMware NSX deployment on Ravello.

The Death of the Monolithic Application

The meaning of the application has changed considerably over the last 10 years. Initially, we started with a monolithic application stack where we had one application installed per server. The design proved to be very inefficient and a waste of server resources. When would a single application ever consume all of a server's resources? Almost never, unless there was a compromise or some kind of bug. Single-server application deployment also carries considerable vendor lock-in, making it difficult to move the application from one server vendor to another.

The application has now changed to a multi-tiered stack and is no longer installed on a single server. The application stack may have many dispersed tiers requiring network services such as firewalling, load balancing and routing between each tier. Physical firewall devices can be used to provide these firewalling services, and physical devices have evolved to provide multi-tenancy through features such as VRFs and multiple contexts. But it is very hard to move firewalls in the event of an application stack move. In a disaster avoidance or recovery situation, App A might need to move to an entirely new location. If the security policies are tied to a physical device, how can its state and policies move? Some designs overcome this with VLANs stretched across a DCI link and stretched firewall clusters, both of which should be designed with care. A technology was needed to tie the network and security services to the actual VM workload and have them move alongside the VM.

The Birth of Microservices

The era of application microservices has arrived. We now have different application components spread across the network, and more importantly, all of these components need to communicate with one another. Even though we moved from a single application per physical server to an application per VM on a hypervisor, it was still not agile enough. Microservice applications are now being installed in Linux containers, with Docker being the most popular. Containers are more lightweight than a VM, spinning up in less than 300 milliseconds. Kubernetes is also gaining popularity, resulting in massively agile compute environments. So how can traditional networking keep up with this type of agility? Everything that can be virtualized is being virtualized with an abstracted software layer. We started with compute and storage, and now the missing network piece is picking up pace.

Distributed Systems & Network Virtualization

Network virtualization was the missing piece of the puzzle. Now that the network can be virtualized and put into software, it meets the agility requirements of containers and complex application tiers. The entire world of distributed systems is upon us. Everything is getting pushed into software at the edge of the network. The complexity of the network is no longer in the physical core nodes; it is at the edges, in software. Today's network consists of two layers: an overlay layer and an underlay physical layer. The overlay is the complicated part and carries VM-to-VM communication; it is entirely in software. The physical underlay is typically a leaf-and-spine design, focused solely on forwarding packets from one endpoint to another. There are many vendors offering open source and proprietary solutions; VMware NSX and Cisco Nexus 1000v are two of the popular choices.

VMware NSX

VMware NSX is a network and security virtualization solution that allows you to build overlay networks. The decoupling of networking services from physical assets is where the real advantages of NSX lie. Network virtualization with NSX offers the same API-driven, automated and flexible approach that virtualization brought to compute. It enables changing hardware without having to worry about your workload networking, which is preserved because it is decoupled from the hardware. There are also great benefits in decoupling security policy from physical assets, abstracting the policy itself. All of these abstractions are possible because we sit on the hypervisor and can see into the VM.

NSX provides the overlay, not the underlay. The physical underlay should be a leaf-and-spine design with one or two ToR switches per rack. Many implement two ToR switches; depending on port density you might only need one. Each ToR has a connection (or two) to each spine, giving a highly available design. Layer 2 domains should be limited as much as possible to minimise the size of broadcast domains. Broadcast domains should be kept to small isolated islands so as to minimise the blast radius should a fault occur. As a general design rule, Layer 2 should be used for what it was designed for – communication between two hosts. Layer 3 routing protocols should be used on the underlay as much as possible: Layer 3 carries a TTL field, absent in Layer 2, which is used to prevent loops.

API Driven Solution

The hypervisor, also referred to as the virtual machine monitor, is a program that enables multiple operating systems to share a single host. Hypervisors are a leap forward in fully utilising server hardware, as a single operating system per host would never fully utilise all of the physical resources. Soft switches run in the hypervisor hosts and implement Layer 2 networking over Layer 3, using the IP transport in the middle to exchange data. VMware NSX allows you to implement virtual segments in these soft switches and, as discussed, carries MAC over IP. To support remote Layer 2 islands there is no need to stretch VLANs and tie broadcast and failure domains together.
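As a flavour of what API-driven networking looks like in practice, here is a minimal sketch of creating a virtual segment (logical switch) through the NSX Manager's REST API using Python's requests library. The manager address, credentials and the transport-zone ID ("vdnscope-1") are placeholders for illustration; consult the NSX API guide for the exact calls in your version.

```python
# Minimal sketch: creating a logical switch (virtual segment) through the
# NSX Manager REST API. Host, credentials and the transport zone ID
# ("vdnscope-1") are placeholder values, not taken from a real deployment.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder address
AUTH = ("admin", "password")                      # placeholder credentials

# NSX-v logical switches are created against a transport zone (scope).
payload = """
<virtualWireCreateSpec>
    <name>web-tier-segment</name>
    <description>Logical switch for the web tier</description>
    <tenantId>virtual wire tenant</tenantId>
</virtualWireCreateSpec>
"""

resp = requests.post(
    f"{NSX_MANAGER}/api/2.0/vdn/scopes/vdnscope-1/virtualwires",
    data=payload,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab only; use proper certificates in production
)
resp.raise_for_status()
print("Created logical switch:", resp.text)  # returns the new virtualwire ID
```

The point is less the specific endpoint and more the model: the network segment is an API object that can be created, versioned and torn down by automation, rather than a VLAN provisioned by hand.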

VMware NSX supports complicated application stacks in cloud environments. It has many features, including Layer 2 and Layer 3 segments, distributed VM NIC firewalls, distributed routing, load balancing, NAT, and Layer 2 and Layer 3 gateways to connect to the physical world. NSX uses a proper control plane to distribute forwarding information to the soft switches: the NSX controller cluster configures the soft switches located in the hypervisor hosts. The controller cluster runs a minimum of three nodes, with a maximum of five, for redundancy.

To form the overlay (on top of the underlay) between tunnel endpoints, NSX uses VXLAN, which has become the de facto standard for overlay creation. Three modes are available – multicast, unicast and hybrid. Hybrid mode uses multicast locally and does not rely on the transport network for multicast support. This offers huge benefits, as many operational teams would rather not implement multicast on core nodes; multicast is complex, and the core should be as simple as possible, concerned only with forwarding packets from A to B. MPLS networks operate this way, and they scale to support millions of routes.
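To make the MAC-over-IP idea concrete, the sketch below uses Scapy to build the header stack of a VXLAN-encapsulated frame. All addresses and the VNI are arbitrary example values; NSX performs this encapsulation in the hypervisor kernel, so this only illustrates what the packet layering looks like on the wire.

```python
# Illustrative only: the layering of a VXLAN-encapsulated frame, built with
# Scapy. All addresses and the VNI (5001) are arbitrary example values.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Inner frame: the VM-to-VM Ethernet traffic being carried by the overlay.
inner = (Ether(src="00:50:56:aa:00:01", dst="00:50:56:aa:00:02") /
         IP(src="10.0.0.1", dst="10.0.0.2"))

# Outer headers: VTEP to VTEP across the Layer 3 underlay.
# VXLAN rides on UDP with destination port 4789.
frame = (Ether() /
         IP(src="192.168.1.10", dst="192.168.2.20") /
         UDP(sport=49152, dport=4789) /
         VXLAN(vni=5001) /
         inner)

frame.show()  # prints the full stack: Ether / IP / UDP / VXLAN / Ether / IP
```

Note how the underlay only ever sees the outer IP/UDP headers between tunnel endpoints; the VM addressing is entirely hidden inside the VXLAN payload.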

VMware NSX operates with distributed routers. To the VMs, all switches appear to be part of the same router: every switch answers to the same gateway IP and listens for the MAC address associated with that IP. The distributed approach creates one large logical device. Any switch receiving a packet sent to the gateway performs the Layer 3 forwarding locally.

One of the most powerful features of NSX is the VM NIC firewall. These firewalls run in-kernel, so no traffic is punted to user space. A drawback of the physical world is that physical firewalls are a network choke point, and they cannot be moved easily. Networks today need to be agile and flexible, and distributed firewalls fit that requirement. They are fully stateful and support IPv4 and IPv6.
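Like the rest of NSX, the distributed firewall is driven through the manager's API. The sketch below shows the general shape of adding a stateful rule: the section ID, addresses and XML fields are placeholders based on the NSX-v DFW API's documented pattern, so treat it as an outline rather than a drop-in script.

```python
# Minimal sketch: adding a stateful rule to the NSX distributed firewall
# through the manager's REST API. The section ID ("1002"), addresses and
# credentials are placeholders. The DFW API expects the section's current
# ETag in an If-Match header to guard against concurrent edits.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder
AUTH = ("admin", "password")                      # placeholder
SECTION = "1002"                                  # placeholder section ID

base = f"{NSX_MANAGER}/api/4.0/firewall/globalroot-0/config/layer3sections/{SECTION}"

# Fetch the section first to obtain its ETag.
section = requests.get(base, auth=AUTH, verify=False)
etag = section.headers["ETag"]

rule = """
<rule disabled="false" logged="true">
    <name>allow-web-to-app</name>
    <action>allow</action>
    <sources excluded="false">
        <source><type>Ipv4Address</type><value>10.0.1.0/24</value></source>
    </sources>
    <destinations excluded="false">
        <destination><type>Ipv4Address</type><value>10.0.2.0/24</value></destination>
    </destinations>
</rule>
"""

resp = requests.post(f"{base}/rules", data=rule, auth=AUTH, verify=False,
                     headers={"Content-Type": "application/xml",
                              "If-Match": etag})
resp.raise_for_status()
```

Because the rule is an API object rather than a line in a physical appliance's config, it can be attached to workloads and follow them wherever they move.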

Nexus 1000v Series

The Nexus 1000v Series is a software-based NX-OS switch that adds capabilities to vSphere 6 (and below) environments. The Nexus 1000v may be combined with other Cisco products, such as the VSG and vASA, to offer a complete network and security solution. As many organisations move to the cloud, they need intelligent and advanced network functions behind a CLI that they already know.

The Nexus 1000v architecture is divided into two main components – a) the Virtual Ethernet Module (VEM) and b) the Virtual Supervisor Module (VSM) – which sit in different logical positions in the network. The VEM lives inside the hypervisor and executes as part of the ESXi kernel. Each VEM learns individually, building and maintaining its own MAC address table. The VSM is used to manage the VEMs. The VSM can be deployed as a highly available pair (two for redundancy), and control communication between the VEM and the VSM can now run over Layer 3; when it was Layer 2, it required packet and control VLAN configuration.
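Because each VEM registers with the VSM much like a line card in a chassis, a quick way to see this distributed design is to log in to the VSM and list its modules. A minimal sketch using Netmiko follows; the hostname and credentials are hypothetical.

```python
# Minimal sketch: listing the VEMs registered with a Nexus 1000v VSM.
# "show module" presents each VEM as a module of one logical chassis.
# Hostname and credentials are placeholder values.
from netmiko import ConnectHandler

vsm = ConnectHandler(
    device_type="cisco_nxos",      # the VSM runs NX-OS
    host="vsm.example.com",        # placeholder
    username="admin",
    password="password",
)

# Typically modules 1-2 are the VSM HA pair; VEMs appear from module 3 up.
print(vsm.send_command("show module"))
vsm.disconnect()
```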

The Nexus 1000v can be viewed as one distributed device. The VSM controls multiple VEMs as a single logical switch, so the VEMs do not need to be configured independently. All configuration is performed on the VSM and automatically pushed down to the VEMs that sit in the ESXi kernel.
The entire solution is integrated into VMware vCenter, which offers a single point of configuration for the Nexus switches and all the VMware elements. The entire virtualization configuration is performed with the vSphere client software, including the network configuration of the Nexus 1000v switches.


One major configuration feature of the Nexus 1000v is the use of port profiles. Port profiles are configured from the VSM and define the different network policies for the VMs; they are used to configure interface settings on the VEMs. When a port profile setting changes, the change is automatically propagated to every interface that belongs to that port profile, even when those interfaces are spread across a number of VEMs dispersed around the network. There is no need to configure each NIC individually. In vCenter, a port profile is represented as a port group, which is then applied to individual VM NICs through the vCenter GUI.
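As an illustration of what a port profile looks like, the sketch below pushes a simple vEthernet profile to the VSM with Netmiko. The hostname, credentials, profile name and VLAN are placeholder values, but the commands follow the standard Nexus 1000v port-profile syntax.

```python
# Minimal sketch: defining a vEthernet port profile on the VSM. Once
# "state enabled" is set, the profile shows up in vCenter as a port group.
# Hostname, credentials, profile name and VLAN are placeholder values.
from netmiko import ConnectHandler

vsm = ConnectHandler(device_type="cisco_nxos", host="vsm.example.com",
                     username="admin", password="password")

port_profile = [
    "port-profile type vethernet WEB-TIER",
    "  vmware port-group",             # expose the profile to vCenter
    "  switchport mode access",
    "  switchport access vlan 110",
    "  no shutdown",
    "  state enabled",                 # publish; vCenter creates the port group
]

print(vsm.send_config_set(port_profile))
vsm.disconnect()
```

Editing the profile later (for example, changing the VLAN) is a single change on the VSM that propagates to every interface using the profile, on every VEM.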

Port profiles are dynamic in nature and move when the VM moves. All policies defined within port profiles follow the VM throughout the network, and in addition to its policies the VM also retains its network state.

Conclusion

The terms network virtualization and decoupling go hand in hand. The ability to decouple all services from physical assets is key to a flexible and automated approach to networking. VMware NSX offers an API-driven platform for all network and security services, while existing vSphere & Cisco Nexus 1K deployments are CLI and orchestration driven. The advantages and disadvantages of both should be weighed up not just on feature parity but also on the deployment model – the NSX platform being more of a big-bang approach.

If you are in the process of deciding between these two solutions and want to actually try them out, Ravello Networking Labs provides an excellent platform to test VMware NSX and Cisco Nexus 1000v deployments with a couple of clicks. You can use an existing NSX or Cisco 1000v blueprint as a starting point and tailor it to your requirements, or create one from scratch. Just open a Ravello trial account and contact Ravello to get your feet wet with an existing deployment topology that you can run on your own.

About Ravello Systems

Ravello is the industry’s leading nested virtualization and software-defined networking SaaS. It enables enterprises to create cloud-based development, test, UAT, integration and staging environments by automatically cloning their VMware-based applications in AWS. Ravello is built by the same team that developed the KVM hypervisor in Linux.

