Netvisor Analytics: Secure the Network/Infrastructure

We recently heard President Obama declare cyber security one of his top priorities, and we have watched major corporations suffer tremendously from breaches and attacks. The most notable is the breach at Anthem. For those still unaware, Anthem is the umbrella company that runs Blue Cross and Blue Shield insurance as well. The attackers had access to people's details, Social Security numbers, home addresses, and email addresses for a period of months. What was taken and the extent of the damage is still guesswork, because the network is a black hole that needs extensive tools to figure out what is happening or what happened. This also means my family is impacted, and since we use Blue Shield at Pluribus Networks, every employee and their family is impacted as well, prompting me to write this blog and extend an open invitation to the Anthem people and the government to pay attention to a new architecture that lets the network play a role similar to the NSA's in helping protect the infrastructure. It all starts with converting the network from a black hole into something we can measure and monitor. To make this meaningful, let's look at the state of the art today and why it is of little use, then walk through a step-by-step example of how Netvisor analytics helps you see everything and take action on it.

Issues with existing networks and the modern attack vector

In a typical datacenter or enterprise, the switches and routers are dumb packet-switching devices. They switch billions of packets per second between servers and clients at sub-microsecond latencies using very fast ASICs, but have no capability to record anything. As such, external optical TAPs and monitoring networks have to be built to get a sense of what is actually going on in the infrastructure. The figure below shows what monitoring looks like today:

Traditional Network Monitoring

This is where the challenges start coming together. The typical enterprise and datacenter network that connects the servers runs at 10/40Gbps today and is moving to 100Gbps tomorrow. These switches typically have 40-50 servers connected to them, pumping traffic at 10Gbps. There are three possibilities for seeing everything that is going on:

  1. Provision a fiber optic tap at every link and divert a copy of every packet to the monitoring tools. Since fiber optic taps are passive, you have to copy every packet, and the monitoring tools need to deal with 500M to 1B packets per second from each switch. Assuming a typical pod of 15-20 racks and 30-40 switches (who runs without HA?), the monitoring tools need to deal with 15B to 40B packets per second. The monitoring software has to look inside each packet and potentially keep state to understand what is going on, which requires very complex software and an amazing amount of hardware. For reference, a typical high-end dual-socket server can get 15-40M packets per second into the system but has no CPU left to do anything else. We would need 1000 such servers, plus the associated monitoring network and monitoring software, so we are looking at 15-20 racks of just monitoring equipment. Add the monitoring software, storage, etc., and the cost of monitoring 15-20 racks of servers is probably 100 times more than the servers themselves.
  2. Selectively place fiber optic taps at uplinks or edge ports. This gets us back to the inner network being a black hole, with no visibility into what is going on. A key lesson from the NSA and Homeland Security is that a successful defense against attack requires extensive monitoring; you just can't monitor only the edge.
  3. Use the switches themselves to selectively mirror traffic to monitoring tools. This is a more popular approach these days, but it is built on sampling, where the sampling rates are typically 1 in 5,000 to 10,000 packets. Better than nothing, but the monitoring software has nowhere close to meaningful visibility, and cost goes up exponentially as more switches get monitored (the monitoring fabric needs more capacity, and the monitoring software gets more complex and needs more hardware resources).
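To put option 1's numbers in perspective, here is the back-of-the-envelope arithmetic behind the "15-20 racks of monitoring equipment" claim, using the low-end figures above (the per-server packet capacity is this article's own estimate):

```python
# Back-of-the-envelope math for the full-tap option, using the low-end
# figures from the article: 500M packets/sec per switch, 30 switches per
# pod, and a high-end dual-socket server absorbing ~15M packets/sec.
PPS_PER_SWITCH = 500_000_000
SWITCHES_PER_POD = 30
PPS_PER_MONITOR_SERVER = 15_000_000

total_pps = PPS_PER_SWITCH * SWITCHES_PER_POD
servers_needed = total_pps // PPS_PER_MONITOR_SERVER

print(f"aggregate: {total_pps / 1e9:.0f}B packets/sec")   # 15B packets/sec
print(f"monitoring servers: {servers_needed}")            # 1000 servers
```

A thousand packet-crunching servers just to watch one pod is the point: full-tap monitoring costs more than the infrastructure it watches.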

So what is wrong with just sampling and monitoring/securing the edge? The answer is pretty obvious: we do that today, yet the break-ins keep happening. Many things contribute to this, starting with the attack vector itself having shifted. It's not that employees in these companies have become careless, but more that a myriad of software and applications have become prevalent in the enterprise and the datacenter. Just look at the amount of software and the new applications that get deployed every day from so many sources, and the increasing hardware capacity underneath. Any of these can be exploited to let the attackers in. Once the attackers have access to the inside, the attack on the actual critical servers and applications comes from within. Many of these platforms and applications go home with employees at night, where they are not protected by corporate firewalls and can easily upload data collected during the day (assuming the corporate firewall managed to block any connections). Every home is online today and most devices are constantly on the network, so attackers typically have easy access to devices at home, and the same devices go to work behind the corporate firewalls.

Netvisor provides a distributed security/monitoring architecture

The goal of Netvisor is to make a switch programmable like a server. Netvisor leverages the new breed of Open Compute switches by memory-mapping the switch chip into the kernel over PCI-Express and taking advantage of the powerful control processors, large amounts of memory, and storage built into the switch chassis. The figure below contrasts Netvisor on a Server-Switch using the current generation of switch chips with a traditional switch, where the OS runs on a low-powered control processor and low-speed buses.

Given that the cheapest form of compute these days is an Intel Rangeley-class processor with 8-16GB of memory, all the ODM switches are using that as a compute complex. Facebook's Open Compute Project made this a standard, so all modern switches have a small server inside them. That lays the foundation of our distributed analytics architecture on the switches, without requiring any TAPs or a separate monitoring network, as shown in the figure below.

Each Server-Switch now becomes an in-network analytics engine along with doing layer 2 switching and layer 3 routing. The Netvisor analytics architecture takes advantage of the following:

  • TCAM on the switch chip, which gives it the ability to identify a flow and take a copy of the packet (without impacting the timing of the original packet) at zero cost, including TCP control packets and various network control packets
  • A high-performance, multi-threaded control plane over PCIe that can get 8-10Gbps of flow traffic into the Netvisor kernel
  • A quad-core Intel Rangeley-class CPU with 8-16GB of memory to process the flow traffic

So Netvisor can filter the appropriate packets in the switch TCAM while switching 1.2 to 1.8Tbps of traffic at line rate, and process millions of hardware-filtered flows in software to keep the state of millions of connections in switch memory. As such, each switch in the fabric becomes a network DVR or time machine and records every application and VM flow it sees. With a Server-Switch carrying an Intel Rangeley-class processor and 16GB of memory, each Netvisor instance is capable of tracking 8-10 million application flows at any given time. These Server-Switches have a list price of under $20k from Pluribus Networks and are cheaper than your typical switch that just does dumb packet switching.
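To make the "network DVR" idea concrete, here is a minimal sketch of per-connection tracking keyed on the classic 5-tuple. The field names and the crude established-connection heuristic are mine for illustration, not Netvisor's actual schema:

```python
from collections import namedtuple

# Hypothetical flow key/record -- illustrative only, not Netvisor's schema.
FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port proto")

class FlowDVR:
    """Keep per-connection state for every packet copied up from the TCAM."""

    def __init__(self):
        self.flows = {}  # FlowKey -> {"syn": n, "est": 0/1, "bytes": n}

    def record(self, key, tcp_flags, nbytes):
        st = self.flows.setdefault(key, {"syn": 0, "est": 0, "bytes": 0})
        if "SYN" in tcp_flags:
            st["syn"] += 1
        elif "ACK" in tcp_flags:
            st["est"] = 1  # crude heuristic: a bare ACK means the handshake completed
        st["bytes"] += nbytes

dvr = FlowDVR()
key = FlowKey("10.0.0.5", "10.0.0.9", 40000, 80, "tcp")
dvr.record(key, {"SYN"}, 60)
dvr.record(key, {"ACK"}, 60)
print(dvr.flows[key])  # {'syn': 1, 'est': 1, 'bytes': 120}
```

Because the TCAM copies only control packets and flow samples to the CPU, a table like this can track millions of connections in a few gigabytes of switch memory.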

While the servers have to be connected to the network to provide service (you can't just block all traffic to the servers), the Netvisor on the switch can be configured to allow no connections into its control plane (access via monitors only), or connections from selected clients only. It is thus much easier to defend against attack, and it provides an uncompromised view of the infrastructure that is not impacted even when servers get broken into.

Live Example of Netvisor Analytics (detect attack/take action via vflow)

The analytics application on Netvisor is a big data application where each Server-Switch collects millions of records, and when a user runs a query from any instance, the data is collected from each Server-Switch and presented in a coherent manner. The user has full scripting support along with REST, C, and Java APIs to extract the information in whatever format they want and export it to any application for further analysis.

We can look at some live examples from the Pluribus Networks internal network, which uses a Netvisor-based fabric to meet all its network, analytics, security, and services needs. The fabric consists of the following switches:


The top 10 client-server pairs with the highest rate of TCP SYNs are available using the following query:


It seems one IP address is literally DDoS'ing a server. That is very interesting. But before digging into that, let's look at which client-server pairs are most active at the moment. So instead of sorting on SYN, we sort on EST (for established) and limit the output to the top 10 entries per switch (keep in mind each switch has millions of records that go back days and months).
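The two queries above boil down to sorting the same flow records on different counters. A toy in-memory version of that logic (record fields and values are illustrative; the real queries run in the Netvisor CLI against millions of records per switch):

```python
# Toy flow records -- illustrative fields and values, not real Netvisor output.
flows = [
    {"src": "10.9.9.9", "dst": "10.0.0.1", "syn": 52000, "est": 0},
    {"src": "10.0.0.2", "dst": "10.0.0.1", "syn": 40,    "est": 38},
    {"src": "10.0.0.3", "dst": "10.0.0.1", "syn": 12,    "est": 12},
]

# "Sort on SYN" vs "sort on EST", keeping the top 10 of each.
top_by_syn = sorted(flows, key=lambda f: f["syn"], reverse=True)[:10]
top_by_est = sorted(flows, key=lambda f: f["est"], reverse=True)[:10]

# The suspicious pair tops the SYN list yet never shows up among the
# established leaders -- exactly the pattern described in the text.
print(top_by_syn[0]["src"], top_by_syn[0]["est"])  # 10.9.9.9 0
print(top_by_est[0]["src"])                        # 10.0.0.2
```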


It appears that the IP address which had a very high SYN rate does not show up in the established list at all. Since the failed SYNs showed up approximately 11 hours ago (around 10:30 this morning), let's look at all the connections with that source IP:


This shows that not a single connection was successfully established. For sanity's sake, let's look at the top connections in terms of total bytes during the same period:


So the mystery deepens. The dst-port in question was 23398, which is not a well-known port. So let's look for the same connection signature. The easiest way is to look at all connections with destination port 23398:


It appears that multiple clients have the same signature. Obviously we dig in deeper, without limiting any output, and look at this from many angles. After some investigation, it appears that this is not a legitimate application, and no developer in Pluribus owns these particular IP addresses. Our port analytics showed that these IPs belong to virtual machines that were all created a few days back, around the same time. The prudent thing is to quickly block this port altogether across the entire fabric using the vflow API.

It is worth noting that we used the fabric scope to create this flow with action drop, to block it across the entire network (on every switch). We could have used a different flow action to look at this flow live, or to record all traffic matching this flow across the network.

Outlier Analysis

Given that Netvisor analytics is not a statistical sample and accurately represents every single session between the servers and/or virtual machines, most customers have some form of scripting and logging mechanism that they deploy to collect this information. The example below shows only the information a person is really interested in, by selecting the columns they want to see:

The same command is run from a cron job every night at midnight, via a script, with a parseable delimiter of choice; the output gets recorded in flat files and moved to a different location.

Another script records all destination IP addresses, sorts them, and compares them with the previous day's to see which new IP addresses showed up in the outbound list, and similarly for the inbound list. IP addresses where both source and destination are local are ignored, but IP addresses where either end is outside are fed into another tool, which checks the reputation of the IP addresses against attacker databases. Anything suspicious is flagged immediately. Similar scripts are used for compliance, to ensure there was no attempt to connect outside of legal services, and that servers didn't issue outbound connections to employees' laptops (to detect malware).
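The day-over-day diff described above can be sketched in a few lines. The local prefix is an assumption, and file handling is omitted; the real scripts parse the delimited Netvisor output from the flat files:

```python
import ipaddress

def new_external_peers(today, yesterday, local_net="10.0.0.0/8"):
    """Return destination IPs seen today but not yesterday, excluding
    purely local addresses. The survivors go on to reputation checks."""
    net = ipaddress.ip_network(local_net)
    fresh = today - yesterday  # set difference: addresses new today
    return {ip for ip in fresh if ipaddress.ip_address(ip) not in net}

# Illustrative data: one new internal peer (ignored), one new external peer.
yesterday = {"10.0.0.5", "52.1.2.3"}
today = {"10.0.0.5", "52.1.2.3", "10.0.0.9", "198.51.100.7"}
print(new_external_peers(today, yesterday))  # {'198.51.100.7'}
```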


More investigation later showed that we didn't have an intruder in our live example, but one of the developers had created a bunch of virtual machines by cloning a disk image which contained this misbehaving application. It is still unclear where it found the server IP address, but things like this, and actual attacks, have happened in the past at Pluribus Networks, and Netvisor analytics helps us track them and take action. The network is not a black hole; instead it shows us the weaknesses of our applications running on servers and virtual machines.

The description of the scripts in the outlier analysis is deliberately vague since it relates to a customer security procedure, but we are building more sophisticated analysis engines to detect anomalies in real time against normal behavior.

March 22, 2015 at 9:55 pm Leave a comment

Netvisor Takes SDN Switching Mainstream with $50M Series D

We closed our Series D financing right before Christmas. This is a $50M round led by Temasek and Ericsson. Temasek is a $170B-plus sovereign fund out of Singapore that is best described as the Berkshire Hathaway of technology. They were the people responsible for investments into Alibaba. It is important to understand that, with Netvisor achieving success in the enterprise datacenter and private cloud markets, the bigger players now believe that SDN switching and applications on Server-Switches are pretty real.

The funding is primarily to scale our business side and help sell more products, build support infrastructure, and create an application group that can write more applications on Netvisor to exploit the world of programmable networks.

Netvisor as an Application Platform

The best way to explain this is to draw a parallel between Netvisor as a switch hypervisor and the smartphone.


When Apple released an iOS-based smartphone, the world was full of small hardware devices like cameras, GPS navigators, etc. iOS (and later Android) became a software platform that allowed many applications to come on top of it.


Netvisor is creating the same paradigm for datacenter switching. Today, you have a physical fabric, a separate observability fabric (using TAPs and probes that are more expensive than the physical network), hardware appliances for services, and separate overlay networks with their controller appliances. Since Netvisor is a distributed bare-metal switch OS, it allows all of this functionality to come in as software applications on top of Netvisor.

Applications are always “On Fabric”

Continuing to draw the parallel with the world of smartphones, we can see that a whole new generation of applications was enabled by the Apple iOS platform. The application writer could write applications assuming connectivity all the time, without understanding the 3G/4G/LTE/Bluetooth/WiFi protocols. More importantly, their application doesn't have to deliver the necessary protocol stack to provide the connectivity. The Netvisor Cluster-Fabric provides the same functionality to application developers. The physical fabric can be Layer 2, Layer 3, or tunnel-based (and contain any networking hardware in between). The applications are always "On Fabric" and don't have to understand the physical topology of the network or how the network is connected.


The fabric is constructed using Layer 2/3/VXLAN via the Netvisor CLI/UI, and applications use the Netvisor C/Java/REST APIs and fabric abstractions to program the network, get the analytics, or provide the services.

Open Fabric Virtualization – Unify the Overlay and Underlay

We always believed in data and control plane separation, but we always kept the real world in mind. That is why Netvisor was always a bare-metal distributed control plane, and we documented several years back that centralized control planes will not scale.

While our approach of doing a bare-metal control plane (separate from the data plane) is getting validated, we see the bulk of the industry again going on a tangent with separate overlay and underlay networks. That is an absolutely wrong direction to take; it complicates the server as well as the network, with no visibility into what is going on. We are currently starting beta tests of Netvisor Open Fabric Virtualization, where we allow the tunnels to move to the switches, and the encapsulation rules are created in switch hardware on demand. When a VM wants to talk to another VM, Netvisor gets the ARP packet (or an L2 miss) and automatically creates the encapsulation rule using its vflow API. This allows the servers to go back to being Linux/KVM-based servers without worrying about tunnels and network topology, and Netvisor-based switches deal with tunnel tables the same way they deal with switching tables and routing tables.
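Conceptually, that on-demand step looks like the sketch below: on an L2 miss, resolve which remote VTEP hosts the destination MAC and hand back an encapsulation rule. All names here are illustrative assumptions; the real rule is pushed into switch hardware via the vflow API:

```python
# Learned mapping of VM MAC -> remote VTEP IP (illustrative cache).
vtep_for_mac = {}

def on_l2_miss(dst_mac, resolve):
    """On an ARP/L2 miss, look up the remote VTEP and build an encap rule."""
    vtep = vtep_for_mac.get(dst_mac)
    if vtep is None:
        vtep = resolve(dst_mac)   # e.g. consult the fabric's shared state
        vtep_for_mac[dst_mac] = vtep
    # The switch then treats this like any other table entry.
    return {"match": {"dst_mac": dst_mac},
            "action": "encap-vxlan",
            "remote_vtep": vtep}

rule = on_l2_miss("52:54:00:aa:bb:cc", lambda mac: "192.168.1.20")
print(rule["remote_vtep"])  # 192.168.1.20
```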


The year 2015 is going to be an interesting year: SDN is reaching a tipping point, we will see large-scale deployments, and networking will finally arrive in the 21st century.

January 21, 2015 at 11:11 am Leave a comment

Netvisor powers the Rackscale Architecture from Intel/Supermicro

On May 5th, 2014, we announced that Pluribus Networks Netvisor is now powering the switch blades on the new Intel blade chassis announced by Supermicro Inc. It's creating quite a stir and is a proud moment for everyone at Pluribus Networks and Supermicro who made this possible.

There are several reasons why Netvisor is the ideal Hypervisor to power the switching blades:

  • Integrated OpenStack controller with Horizon and REST APIs as the only management that is needed – The entire Netvisor cluster-fabric and the virtual/physical switching on the compute blades is exported to OpenStack via Neutron plugins and extensions. Our Freedom series Server-Switches also bundle the full OpenStack controller, allowing the entire rack of microblades to be managed as one unit via the OpenStack Horizon GUI. For people wanting to manage the network layer via traditional tools, Netvisor also offers a full-featured CLI to manage the cluster-fabric, along with high-performance, multithreaded native C and Java APIs. Netvisor also provides multiple virtualized services with H/W offload, so services like NAT, DNS/DHCP, IP pools, routing, load balancing, etc. are integrated via OpenStack Horizon to support multi-tenancy at scale.
  • Netvisor is a distributed plug-and-play hypervisor – The Supermicro blade chassis has 4 switch blades and 14 server blades. This is where racks are going, with micro servers, etc. In a typical rack, you will have 7 such chassis, giving you 98 server blades and 28 switch blades, plus two top-of-the-rack switches: an ideal platform for the rackscale computing that HPC and private clouds need. In such an architecture, you can't deal with individual switches. Netvisor provides a full plug-and-play architecture with zero-touch provisioning, and the entire switching infrastructure appears as one cluster-fabric with synchronized state and configuration sharing, with no cabling mess. The entire cluster-fabric protocol and algorithm is based on TCP/IP with no special ASICs or H/W.
  • Application-level analytics and debuggability – Netvisor has built-in support for looking at all physical servers, virtual machines, and applications without needing any probes or agent software. The addition of the Freedom series at the top-of-the-rack layer allows users to track millions of VMs and application-level flows in real time, as well as historical data. This helps with capacity planning, audit, congestion analytics, VM life-cycle management, application-level performance, and debuggability analysis. In multi-tenancy environments, Netvisor allows each individual tenant to analyze its own VMs, applications, and services.
  • Netvisor powers all types of switching platforms – Just as server hypervisors now ubiquitously run on everything from laptops to desktops to high-end servers, Netvisor supports everything from micro switch blades to ODM top-of-the-rack switches and more powerful Server-Switches.
  • Netvisor runs on every possible control plane – On the Supermicro switch blades, Netvisor runs on an Avoton-based control processor with 8GB of memory and limited flash storage. On other switches, it uses single- to dual-socket Xeons with 512GB of RAM and Fusion-io-based storage.
  • Netvisor is fully open – It runs on all open platforms and is built from a best-of-breed open source OS with additions for switching, cluster-fabric, and switch virtualization. Run it in traditional switch mode or with your choice of Linux (Ubuntu and Fedora by default).

Use Netvisor with OpenStack on Supermicro's blade chassis, along with the Freedom series top-of-the-rack switches, to run the most dense and power-efficient rackscale architecture today.

Netvisor powers Rackscale Architecture


May 6, 2014 at 9:41 am Leave a comment

The Battle for the Top of the Rack

The Battlefield between Sysadmin and Netadmin

The fight for control between the sysadmin and the network admin has been going on for decades, but the boundary line has been pretty static. Anything that ran a full OS and was an end node was a server and under server ops, while anything that connected the servers together was a network device and under the control of network operations.

If you look at the progression of the two sides through the last two decades, you will realize that the server and the server OS have gone through change after change, with new software packaging systems, virtualization, increasing density of servers per rack, and so on, while networking technology has remained pretty static other than speeds and feeds and some tagging protocols. While the server admin kept reinventing himself through open source, virtualization, and six-nines uptime, the network got split into three distinct categories (forgive me, Gartner, for the gross simplification):

  • Datacenter networking: With the heavy lifting being done by server ops, and running applications and virtual machines the most critical need, the network admin tended to get in the way and exerted control via IP address and VLAN management. The network services which used to be important are also becoming software-based and open source, fast becoming the domain of server ops. Server operations is looking for the network to provide value, analytics, debuggability, instant scale, threat analysis, and virtualized services, which network operations wasn't trained to provide, and hence the relationship between the two organizations has increasingly soured.
  • Telecommunications networking: The network admin has had to deal with increased complexity, and the field is becoming very specialized, where often Ph.D.s are deployed to ensure the complex wide area network continues to work. Security and service-level agreements are paramount. In this scenario, network operations is doing the heavy lifting.
  • Campus and enterprise networking: This used to be a server-operations-dominated world, but as each student gets 5 IP devices and wants gig+ bandwidth for his social and gaming needs, the security, analytics, and manageability needs are increasing.

Is the future bleak for Network Operations in Datacenter and Campus?

The network is becoming more challenging in the fast-moving world of servers, virtualization, threats, the need for instant information, and instant scale. Network operations in datacenter and campus networking have not upgraded their skill sets, resulting in SDN and "keep the network dumb" initiatives. Most of these initiatives lack deployment scale, but Pluribus Netvisor is a full-fledged open source network OS designed to make the top of the rack another server. Building on my past experiences with virtual switching, Netvisor runs on the top-of-the-rack Server-Switch and gives the ability to seamlessly blend and control the virtual and physical network and orchestrate it as one. Our philosophy is similar to Cisco's: the top-of-the-rack switch is too powerful to ignore, but one needs a full network OS (aka Netvisor) to unleash its power. With full Unix programmability and C, Java, Perl, and Python interfaces, the existing server tool chain can now program the network as well. Netvisor running on the Server-Switch and working in conjunction with OpenStack and VMware NSX-style orchestration can really solve the major pain point.

So one would naturally assume that Netvisor running on the F64 and the new whiteboxes will become the domain of server operations, and that the rack will be fully owned by the sysadmin, controlled with the same management paradigm and programmed with the server-style tool chain (gcc/gdb)!! A large segment of our customer base is proving the point. But we are also seeing a smaller segment of savvy network admins at our doorstep who want to embrace this new paradigm. Pluribus offers a welcome alternative to VXLAN-style server overlays, which make the case only for low-feature networks that do nothing more than push packets. While talking to these admins, we are providing a management interface that gives network admins a bridge to a new world where the network becomes a differentiator and adds huge value to the server and application infrastructure, with the two working as one.

The changes needed in Network Operations to be successful

During our talks with the network ops side of the house, we see very distinct patterns. A smaller set of network admins in fast-moving companies and cloud providers (not all of them run like Google) very quickly realize that using Pluribus Netvisor and Server-Switches, they can have an instant view of their server and application infrastructure, give more control to the application guys (and make their clients happy), help them debug performance and security issues, and create virtual networks that mirror the physical network, because they have control over both. There is a larger set of network admins realizing that they need to embrace this change if they want to stay relevant, having probably been told by their CxO that they need to cut costs and add value. We at Pluribus Networks would really like to understand the needs of this set of people, so we can tweak the management plane to be more comfortable for them, letting them focus on adding differentiation and value instead of worrying about how they configure VXLAN.

For the infrastructure to grow, we can probably take the easy path (which is already happening) and deliver the top of the rack to server operations, but the reality is that the server admin is already very loaded and would love it if his network counterpart could help him. Guys, we would love to hear your thoughts on the fast-changing world of the top-of-the-rack switch. Any concrete examples from both network and server operations would be very useful. You can leave comments on this blog or send me direct emails.

August 28, 2013 at 10:01 am Leave a comment

Crossbow on Big F#@!ing Webtone Switch

Back in the days of Sun Microsystems, Scott McNealy asked us to build a big F#@!ing Webtone Switch. At that time, the underlying pieces weren't there, but over the last few years the possibilities have opened up. We now have switch chips from Broadcom and Intel that switch at 1.2Tbps in H/W. From an OS view, 1.2Tbps of switching at 300ns latency is great, but the more amazing thing is PCIe as a control plane, which allows 20-40Gbps of control plane bandwidth where you can change switch registers, L2/L3 tables, TCAMs, etc. at nanosecond rates.

So after more than three years of work and a million lines of C code, the Pluribus Networks engineering team has the switch chip under Crossbow control. For people who are not sure what I am talking about: in 2005, project Crossbow invented virtual switching inside a server hypervisor and introduced hardware-based virtual NICs and dynamic polling to get 40Gbps of bandwidth through a server OS. The details were published in "Crossbow: From Hardware Virtualized NICs to Virtualized Networks" at ACM SIGCOMM VISA '09.

In the quest to benefit from the merchant silicon ecosystem and orchestrate the entire infrastructure using an open source OS on switches, the industry has been going down suboptimal paths. The most notable efforts, around a centralized controller, can barely deal with the scale of a single switch and typically require sending a packet to a controller running on a separate server. The latency of these transactions (typically milliseconds to seconds) defeats the required reaction time of microseconds in virtualized environments where network resources are shared. The other approach, of just throwing the Intel or Broadcom SDK on a whitebox switch with Linux and Quagga, doesn't really solve the control plane problem. The Broadcom and Intel SDKs are crafted for their specific switch chips and meant for configuration ease, not for high-speed control plane software.

Bringing the Crossbow architecture onto the switch chip, where it is part of the network OS directly controlling the switch chip via PCIe, allows us to get the following benefits:

  • Integrated switch hardware with a fully programmable control plane, allowing the performance and scale necessary to deal with 10Gbps switches (the distributed control plane is part of the network OS running on the switch itself).
  • Applications like DDoS protection, IDS, firewalls, load balancers, routing, messaging, etc. that need to be in the network can run on the switch itself and benefit from the H/W offload, high-speed snooping, and flow capability that the switch chip offers, via C, Java, Perl, Python, etc. programming interfaces in a UNIX/Linux environment. Development, deployment, and resource provisioning of these applications on Crossbow-enabled switches is the same as current server mechanisms and uses the existing tool chain (gcc/gdb, kvm, etc.).
  • The benefits of the merchant silicon ecosystem come to switches under OpenStack control, enabling a faster pace of innovation and cost advantages.

As we get ready to roll out Netvisor (and its open source version, openNetvisor), I will discuss more details on this blog in the near future.

July 29, 2013 at 10:30 am 2 comments

Netvisor and iTOR Unveiled

After a long wait, we finally unveiled stage 1 of the big solution: Netvisor and our intelligent Top of the Rack (iTOR) switch. If you haven't had a chance to see it, you can read about it here. At this point, we have enough boxes on the way that we can open the beta to a slightly larger audience. Some more details about the hardware: it has 48 10-gigabit Ethernet ports, each of which can take an SFP+ optical module, SFP+ direct attach, or a 1-gigabit RJ45 module, along with 4x40-gigabit QSFP ports. The network hypervisor controlling one or more iTORs is a full-fledged operating system, among other things capable of running your hardest applications. It comes with tools like gcc/gdb/perl already there, and you can load anything else that is not. Why, you may ask? If you always had an application that needed to be in the network, now it truly can be on the network. Imagine your physical or virtual load balancers, proxy servers, monitoring engines, IDS systems, and spam filters running on our network hypervisor, where they are truly in the network without needing anything to plug in. Create your virtual networks along with virtual appliances in seconds; snapshot them, clone them, and bring them back up in seconds. This is SDN ready for immediate deployment.

Intrigued? We might be able to roll you into the beta program. It is a limited program, but if you have special needs, then we have a place for you. If you truly think today's networks can't meet your needs, then we might have a special place for you. Send us an email or click here. Deploying the beta is very easy: if you have a rack of servers (with 1, 10, or 40GigE ports), that is all we need. If you are dealing with a large number of virtual machines on servers, where the server runs some variant of Linux and KVM/Xen, then you should definitely talk to us. Join our beta program or call us to learn more.

August 16, 2012 at 7:35 am 2 comments

How does Openflow, SDN help Virtualization/Cloud (Part 3 of 3) – Why Nicira had to do a deal?

The challenges faced by Openflow and SDN

This is the 3rd and final article in this series. As promised, let's look at some of the challenges facing this space and how we are addressing those challenges.

Challenge 1 – Which is why Nicira had to get a big partner

I have seen a lot of articles about Nicira being acquired. The question no one has asked is: if the space is so hot, why did Nicira sell so early? The deal size ($1.26B) was hardly chump change, but if I were them and my stock was rising exponentially, then I would have held off in the lure of changing the world. So what was the rush? I believe the answer lies in some of the issues I discussed in article 2 of this series a few months back: the difference between server (controller-based) and switch (fabric-based) approaches. The Nicira solution was very dependent on the server and the server hypervisor. The world of server operating systems and hypervisors is so fragmented that staying independent would have been a very uphill battle. Tying up with one of the biggest hypervisors made sense to ensure that their technology keeps moving forward. And VMware is a good company driven by solid technology. So, perhaps, the only question is how long before the VMware/EMC and Cisco relationship comes unhinged?

Challenge 2 – The divide between Control Plane and Data Plane

The promise of a standard, platform-independent way of controlling networking is huge: it offers simplified network management and rapid scaling of virtual networks. Yet the current implementations have become problematic.

Controller Based SDN

Since the switches are dumb and do not have a global view, the current controllers have turned into policy enforcement engines as well: every new flow setup requires the controller to agree, which means every flow needs to go through the controller, which then instantiates it on the switch. This raises several issues:

  • A controller, which is essentially an application running on a server OS over a 10Gbps link (with a latency of tens of milliseconds), is in charge of a switch that is switching 1.2 Tbps of traffic at an average latency of under a microsecond and dealing with 100k flows, 30% of which are set up or torn down every second. To put things in perspective, a controller takes tens of milliseconds to set up a flow, while the life of a flow transferring 10 Mb of data (a typical web page) is about 10 msec!
  • To deal with 100k flows, the switch chips need that kind of flow-table capacity. The current (and coming) generation of chips has nowhere near such capacity, so one can only use the flow table as a cache, which leads to the third issue.
  • The flow setup rate is anemic at best on existing hardware; you are lucky if you can get 1,000 flow setups per second.
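The mismatch in the bullets above is worth working out. A back-of-envelope calculation, using the round figures from the text as assumptions (20 ms standing in for "tens of milliseconds"):

```python
# Assumed round figures from the discussion above, not measurements.
SWITCH_FLOWS = 100_000        # concurrent flows on the switch
CHURN = 0.30                  # fraction set up or torn down each second
SETUP_MS = 20                 # "tens of milliseconds" per controller flow setup
FLOW_LIFE_MS = 10             # life of a 10 Mb (web-page sized) flow
HW_SETUPS_PER_SEC = 1_000     # optimistic hardware flow-setup rate

needed = SWITCH_FLOWS * CHURN            # flow setups required every second
shortfall = needed / HW_SETUPS_PER_SEC   # how far the hardware falls behind

print(f"setups needed per second: {needed:,.0f}")                     # 30,000
print(f"hardware shortfall: {shortfall:.0f}x")                        # 30x
print(f"setup latency vs flow life: {SETUP_MS / FLOW_LIFE_MS:.0f}x")  # 2x
```

Even with these generous assumptions, the switch needs 30,000 setups a second against a hardware budget of roughly 1,000, and a flow can finish before the controller has even finished setting it up.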

So what is lacking is a network operating system on the switch to support the controller app. Look at the server world: the administrator specifies the policies, and it is the job of the OS, working very closely with the hardware, to enforce them. In the current scenario, the controller feels like an application running on bare metal with no operating-system support. Since it is a highly specialized application, it needs a specialized operating system: a network operating system which can also be virtualized.
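The flow-table-as-cache problem can be made concrete with a toy model (illustrative only, not real switch firmware): when the hardware table holds far fewer entries than there are live flows, every flow that is not resident punts to the slow controller path.

```python
# Toy LRU flow-table cache: a hardware table much smaller than the live
# flow count can only cache flows, and every miss is a controller punt.
from collections import OrderedDict

class FlowTableCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.table = OrderedDict()   # stands in for the hardware flow table
        self.misses = 0

    def lookup(self, flow_id):
        if flow_id in self.table:
            self.table.move_to_end(flow_id)
            return True              # fast path: hardware hit
        self.misses += 1             # slow path: punt to the controller
        if len(self.table) >= self.capacity:
            self.table.popitem(last=False)   # evict least recently used
        self.table[flow_id] = True
        return False

cache = FlowTableCache(capacity=4_000)   # assumed small hardware table
for flow in range(100_000):              # 100k distinct live flows
    cache.lookup(flow)
print(cache.misses)                      # every distinct flow misses once
```

With a 4,000-entry table and 100,000 live flows, every new flow is a miss, and each miss is a multi-millisecond trip through the controller.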

Challenge 3 – The Controller based Network

For a while, people were simply tied to their inflexible networks, which saw no innovation in the last two decades while servers and storage went through major metamorphoses. That frustration gave birth to Openflow/SDN, which has currently morphed into controller mania. Moving the brain out of the body and separating the two creates something of a split-brain problem, since the body (the switch, in this case) still needs some of the brain. What we need is a solution that encompasses the entire L2 fabric, where the controller and fabric work as one while providing easy abstractions for users to achieve their virtualization, SLA, and monitoring needs.

A Distributed Network Hypervisor or Netvisor to the rescue

What we at Pluribus saw early on is that the world of servers sets a very good example: chips commoditize and the value moves to software. That is pretty much what is happening in the world of storage, and it is bound to happen in networking. So we decided to do things in the right order, i.e. take the bleeding-edge commodity chips and create a network operating system with the following properties:

  • Network OS – because a switch chip is a very specialized and powerful chip.
  • Distributed – there is always more than one switch in the network, and they need to work in tandem to support end-to-end flows.
  • Virtualized – able to run physical and virtual networking applications. As I mentioned before, a switch is not the network; we need to deal with all network services in physical and virtual form, and the network OS needs to support that.

Hence we created a Distributed Network Hypervisor called Netvisor, or nvOS for short. It is designed to run on the switches and support virtual and physical network services. It also runs a controller on itself, where the controller is a policy distribution engine and no longer a policy enforcement engine.
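The distinction between distributing policy and enforcing it can be sketched as follows (a toy contrast with hypothetical class names, not the Netvisor API): the controller pushes policy to each switch-resident OS once, and every subsequent flow decision is made locally on the switch, so per-flow controller round trips drop to zero.

```python
# Policy-distribution model: the controller hands out policy once;
# the switch-resident network OS enforces it locally for every flow.
class Policy:
    def __init__(self, allowed_ports):
        self.allowed_ports = set(allowed_ports)

class SwitchOS:
    """Switch-resident network OS: enforces policy locally."""
    def __init__(self):
        self.policy = None
        self.controller_round_trips = 0   # would grow per flow in the
                                          # enforcement model; stays 0 here

    def install_policy(self, policy):     # one-time distribution step
        self.policy = policy

    def new_flow(self, dst_port):         # local decision, no controller hop
        return dst_port in self.policy.allowed_ports

controller_policy = Policy(allowed_ports=[80, 443])
switches = [SwitchOS() for _ in range(32)]
for sw in switches:                       # controller distributes once
    sw.install_policy(controller_policy)

# 100k new flows are admitted or dropped entirely on the switch:
decisions = [switches[0].new_flow(80) for _ in range(100_000)]
print(sum(decisions), switches[0].controller_round_trips)
```

In the enforcement model, those 100,000 flow decisions would each have cost a round trip to the controller; here the controller is touched exactly once per switch, at policy-install time.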

Fabric based SDN

As the above figure shows, the current line drawn between the control plane and the data plane is not going to scale or perform. The line we originally drew (the founding principle of Pluribus Networks) needs to be delivered on for SDN and Openflow to fulfill their true promise.

July 31, 2012 at 7:49 am 3 comments
