How does OpenFlow, SDN help Virtualization/Cloud (Part 2 of 3)

April 16, 2012 at 7:18 am

Using OpenFlow: State of the Art

In my last article I discussed the components of OpenFlow and the building blocks of a Software Defined Network. In this part, let me discuss some of the things people are doing to make it all work. One piece that needs to be covered first is the various ways in which a packet can be matched against a flow and the kinds of actions that can be taken.

Flow Classification and the split between Hardware and Software

A flow is a simple mechanism to identify a group of packets on the wire. Packets coming from a particular machine, for example, can be identified by that machine's MAC or IP address, which appears as the source MAC in the L2 header or the source IP in the L3 header. By putting a flow rule on either of those fields and simply counting the packets going through the switch that hit that rule, we can determine the number of packets being sent by the machine. That is useful information. To make it more useful, one could add another flow to measure the packets going to our target machine. Adding a destination MAC or destination IP rule based on the machine's MAC or IP address accomplishes that, and with our two flows we can find out how many packets are going into and coming out of the machine.
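
To make the example concrete, here is a small Python sketch of the two flows just described: one rule matching the machine as a source and one matching it as a destination, each with a hit counter. The packet representation and field names are invented for illustration; on a real switch these rules would sit in a hardware classifier and the counters would come back as per-flow statistics.

    from dataclasses import dataclass

    @dataclass
    class FlowRule:
        match: dict        # e.g. {"src_ip": "10.0.0.5"} or {"dst_mac": "aa:bb:cc:dd:ee:ff"}
        packets: int = 0   # hit counter, analogous to a per-flow packet counter on a switch

        def matches(self, pkt: dict) -> bool:
            # A packet hits the rule when every field in the match is equal.
            return all(pkt.get(k) == v for k, v in self.match.items())

    def classify(pkt: dict, rules: list) -> None:
        for rule in rules:
            if rule.matches(pkt):
                rule.packets += 1

    # Two flows for a target machine at 10.0.0.5: traffic it sends and traffic it receives.
    outbound = FlowRule(match={"src_ip": "10.0.0.5"})
    inbound = FlowRule(match={"dst_ip": "10.0.0.5"})

    for pkt in [{"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9"},
                {"src_ip": "10.0.0.9", "dst_ip": "10.0.0.5"},
                {"src_ip": "10.0.0.7", "dst_ip": "10.0.0.9"}]:
        classify(pkt, [outbound, inbound])

    print("sent:", outbound.packets, "received:", inbound.packets)  # sent: 1 received: 1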

The next question is, who is implementing the rules we just discussed above. There are several options and advantages/disadvantages of each approach:

  • Server Based Approach – This is the easiest way, since the server already has to process the packets and can easily keep track of some statistics as well. The issue arises when it is not a physical server but a virtual machine on the server that we want to track. We can still let the hypervisor track the packets or ask the virtual machine to track them. The big disadvantage of this approach is that asking the server to do things on your behalf requires a certain level of trust (security holes and digital certificates come to mind), depends on operating system capability, and comes at a direct cost in performance. Since the server hypervisor has to classify packets, assign them to flows and take some actions, it needs to see the packets, which makes hardware-based virtualization (SR-IOV) impossible to adopt. Most of the data center bridging and I/O virtualization standards are moving towards a hardware-based switch in the server, so doing this in the software layers of the server is not going to be possible. The other major disadvantage is scaling: as the number of servers grows, orchestrating policies becomes more difficult.
  • Server Based with H/W Offload – There is more talk around this than real implementations, but it is worth mentioning that people have discussed putting special capabilities in the NICs on the server to offload some flow processing. The advantages are performance and security (since the hypervisor controls the NIC, the virtual machine can't circumvent it). The disadvantages are cost and scale. The chips capable of doing this (TCAMs etc.) are expensive, and trying to orchestrate across large numbers of servers severely limits the scale. We are already seeing the Intel Sandy Bridge architecture coming to life with integrated 10GigE NICs; adding TCAMs would increase the base cost by $800-900 and also add significant complexity.
  • Probe Based Approach – Have probes in the network analyze flows. There are companies out there that specialize in inserting probes in the network and collecting the data, and they can do this well as long as you only want to observe things. If redirection, traffic shaping, header modification or other actions are needed, these passive probes will not work. In addition, inserting them requires intrusive work in cabling and the like. Needless to say, this is one of the least favorite approaches.
  • Switch Based Approach – Since all the traffic passes through the switches anyway, having them deal with flows and take the associated actions makes a lot of sense. Modern switch chips have hardware-based CAMs and TCAMs which can take a rule and do the needful without adding to the latency or reducing the throughput of the packet stream. In my past life, as Architect of Solaris Networking and Virtualization, I took the software-based approach, but given the growing virtual machine density, SR-IOV type features, and the growing need for analytics and traffic shaping at full performance, I think the switch-based approach is far superior. So the CAMs and TCAMs measuring flows are the hardware piece. The software piece is the ability to add and delete rules on the fly, and OpenFlow provides a pseudo standard that allows a programmer to program any switch (a controller-side sketch follows this list). But the biggest advantages of this approach are scale, ease of use, and administrative separation. The scale comes from orchestrating your flows and policies across a smaller number of devices (one switch for approximately 50 servers). Also, the people in charge of data networks and storage networks are at times different, and keeping the administrative separation is useful although not required.
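
To make the "add and delete rules on the fly" point concrete, here is a rough controller-side sketch in Python. The message layout and the send_to_switch() transport are placeholders of my own; a real deployment would use an OpenFlow controller framework (NOX, Floodlight and the like) to encode proper FlowMod messages and send them over the controller channel to the switch, which then programs its CAM/TCAM.

    def send_to_switch(switch_addr: str, msg: dict) -> None:
        # Placeholder transport: a real controller would encode msg as an
        # OpenFlow FlowMod and send it over the TCP/TLS controller session.
        print("->", switch_addr, msg)

    def add_flow(switch_addr, match, actions, priority=100):
        send_to_switch(switch_addr, {
            "command": "ADD",
            "priority": priority,
            "match": match,        # e.g. {"dl_dst": "aa:bb:cc:dd:ee:ff"}
            "actions": actions,    # e.g. [("set_queue", 1), ("output", 3)]
        })

    def delete_flow(switch_addr, match):
        send_to_switch(switch_addr, {"command": "DELETE", "match": match})

    # Steer a VM's traffic to port 3 with a rate-limited queue, then remove the
    # rule when the VM migrates away, all without touching the switch CLI.
    add_flow("10.1.1.2:6633", {"dl_dst": "aa:bb:cc:dd:ee:ff"},
             [("set_queue", 1), ("output", 3)])
    delete_flow("10.1.1.2:6633", {"dl_dst": "aa:bb:cc:dd:ee:ff"})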

So needless to say, I have currently taken the approach of solving this problem on the switch, in conjunction with coordinating with the host using standards like EVB/DCB, which we will discuss at a later time. Given that new generation switch chips are pretty similar to server CPUs and have the same complexity, the problem begs for a real operating system on top which can help us write OpenFlow-based applications. This is where we step in. One part of the Pluribus Networks effort is around implementing a distributed network hypervisor (called Netvisor™, a key component of the nvOS™ operating system) to give OpenFlow programmers real teeth. We treat any switch chip the same as a server chip, and most of the code is platform independent with very little written to the chip's instruction set, in the same way that the same Linux code (with a little platform-specific code) runs on x86 and Power, or the same OpenSolaris code runs on x86 and SPARC.

Current implementations

So here is a little overview of the projects and people who are leading the charge in the brave world of flows and Software Defined Networking. Before you rake me over the coals for missing things, the list below covers what I consider mainstream implementations that apply to the world of data centers today (disclaimer: I have purposely left out most of the research efforts that didn't reach a mainstream product, since there are too many):

  • The discussion has to start with project Crossbow, which I believe is the first flow implementation that took the dedicated hardware resources approach; it was available in OpenSolaris in 2007 and finally shipped in Solaris 11 (delayed due to the Oracle/Sun merger). The patents on virtual switching in the host and in hardware (7613132, 7643482, 7613198, 7499463, etc.) were filed by me and fellow conspirators from 2004 onwards and awarded from 2009 onwards. Keep in mind that when Crossbow had virtual switching with a hardware classifier running in OpenSolaris, Xen and the like were just coming out with software-based bridging. The two commands, flowadm and dladm, allow users to create flows and software- or hardware-based virtual NICs that can be assigned to virtual machines (a short sketch follows this list). This is the server based approach discussed earlier; it ships in a mainstream OS and is pretty widely deployed.
  • An approach similar to the above was later adopted by our fellow company Nicira in the form of their NVP architecture. They enhanced the offering by allowing an OpenFlow-based orchestrator to control the virtual switching in the host, although their focus has primarily been on the virtualization side and not so much on the application flows side.
  • Another of our sister and partner companies, Big Switch Networks, has taken a hybrid approach of orchestrating any OpenFlow-capable device, which can be a switch or a virtual switch inside a hypervisor. Since they are still partially in stealth, it would not be my place to talk details.
  • Obviously, every existing network vendor claims to be working on SDN and OpenFlow. But by definition, SDN requires programmability and operating systems to run your programs on. Most of the existing network vendors lack the know-how or the ability to do this. They have rich bank balances, and if they can acquire the right companies and leave them alone, then they can potentially bridge the chasm (although it is going to be painful).
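
Going back to the Crossbow item above, here is a minimal sketch of driving the two commands from Python on a Solaris 11 or OpenSolaris host. The link name (net0), VNIC name, port and bandwidth cap are illustrative only, and the exact option syntax should be checked against the dladm(1M) and flowadm(1M) man pages on the target release.

    import subprocess

    def run(cmd):
        print("#", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Create a software- or hardware-backed virtual NIC on physical link net0
    # that can be handed to a virtual machine.
    run(["dladm", "create-vnic", "-l", "net0", "vnic0"])

    # Define a flow on that VNIC for inbound HTTP traffic and cap it at 100 Mbps.
    run(["flowadm", "add-flow", "-l", "vnic0",
         "-a", "transport=tcp,local_port=80",
         "-p", "maxbw=100M", "httpflow"])

    # List the configured flows.
    run(["flowadm", "show-flow"])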

And then there is our effort at Pluribus Networks. It is a well kept secret that we are building server-switches that run Netvisor™, which has massive flow capabilities and would be ideal for all the people developing things in the SDN space. But then we are in stealth mode, and there is a lot more to us which we will get around to discussing in the coming days.




Comments

  • Raj Channa  |  April 17, 2012 at 12:15 am

    Hi Sunay,

    Thanks. Where are parts 1 and 3 of this?

    –raj

  • sh0x  |  April 29, 2012 at 5:14 pm

    Awesome post! Interested to see what your thoughts are for vswitch connectivity. You mentioned 802.1Qbg; I'm not aware of many EVB implementations, but I think it's an excellent protocol and it would be nice to see more support for it.
