OpenStack Networking @ Havana Summit, Portland, April 2013

Last week, I attended the OpenStack Summit in Portland, Oregon, and came away extremely encouraged by many things: the conference doubling in size since last year, the successful release of Grizzly, and some really significant progress on network services made by a diverse group of us network types. Here are my takeaways from the summit.

Let me begin by congratulating my fellow Stackers on the OpenStack Networking (formerly Quantum) team for hitting the milestones and delivering a feature-rich Grizzly release. If Folsom was the coming-out party for Quantum, the Grizzly release saw Quantum mature into a core, stable OpenStack service, delivering advanced features like load balancer integration. This great momentum set the stage perfectly for an exciting round of brainstorming sessions on anticipated features at the OpenStack Havana Summit in Portland.

As with previous summits, the Havana summit had an independent networking design summit track with over twenty-five sessions covering a breadth of topics: APIs, extensions, usability, testing, performance improvements, plugin redesign, network-aware scheduling, and advanced network services and their insertion, to name a few. The call for sessions was heavily oversubscribed, bearing testament to the level of interest in this project. Participants were diverse, representing major networking and cloud vendors as well as passionate open source developers. The discussions were focused and the feedback constructive. For a group so diverse, we made significant progress in advancing the discussion and charting the path for the next six months.

The highlight and the most popular theme was clearly the interest in network services and how they can be effectively integrated with the other virtual networking components. From my conversations with numerous cloud operators and network services vendors, it was very apparent that this is a key capability the user community is demanding and the vendor community is interested in offering. Bridging this gap requires an elegant model that captures user intent for the different service insertion modes and service chains, along with a backend mapping that dynamically realizes that intent by leveraging services on physical and/or virtual devices. This need is well understood by the project's core team, and to this end we brainstormed ideas in the Network Services Chaining, Insertion and Steering session. Big Switch understands this problem space well and has devised elegant SDN-driven solutions, some of which were showcased in the demo (also see inset).

Service Chaining with Firewall and Loadbalancer Service
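To make the chaining idea above concrete, here is a minimal, purely illustrative Python sketch (all class and field names are my own invention, not any actual Quantum/Neutron API): a chain captures the user's intent as an ordered list of services, and the backend steers traffic through them in order.

```python
# Illustrative sketch of a service chain: an ordered list of network
# services (e.g. firewall, load balancer) that traffic is steered
# through. Names and structure are hypothetical, not the real API.

class Service:
    """A network service that processes packets; at the backend it
    would be realized on a physical or virtual appliance."""
    def __init__(self, name):
        self.name = name

    def process(self, packet):
        # Record which services have handled the packet, in order.
        packet.setdefault("path", []).append(self.name)
        return packet


class ServiceChain:
    """Captures user intent: an ordered sequence of services."""
    def __init__(self, services):
        self.services = services

    def steer(self, packet):
        # Steer the packet through each service in chain order.
        for svc in self.services:
            packet = svc.process(packet)
        return packet


chain = ServiceChain([Service("firewall"), Service("loadbalancer")])
result = chain.steer({"src": "10.0.0.1", "dst": "10.0.0.2"})
print(result["path"])  # ['firewall', 'loadbalancer']
```

The point of the model is the separation of concerns: the chain is pure user intent, while each service's `process` stands in for whatever physical or virtual device the backend maps it to.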

It was also an extremely fulfilling experience to participate in driving the Firewall Service discussions along with a host of firewall vendors (it was joked that we represented a United Nations assembly of sorts ;-)). My personal thanks to everyone who participated in creating the design specification, which aims to meet users' requirements while also enabling the rich features that the various firewall vendors can provide (a tough balancing act!). We have an exciting journey ahead as we translate these discussions and ideas and bring them to fruition over the next six months.
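To give a flavor of the kind of abstraction such a design specification works toward, here is a hedged Python sketch of a vendor-neutral firewall model: ordered rules evaluated first-match-wins with a default deny. The field names and semantics are purely illustrative assumptions, not the specification itself.

```python
# Illustrative, vendor-neutral firewall model: an ordered list of
# rules evaluated first-match-wins, defaulting to deny. All field
# names are hypothetical, not the actual design specification.

def make_rule(protocol, destination_port, action):
    return {"protocol": protocol,
            "destination_port": destination_port,
            "action": action}


def first_match(rules, packet):
    """Return the action of the first rule matching the packet;
    deny if nothing matches."""
    for rule in rules:
        if (rule["protocol"] == packet["protocol"]
                and rule["destination_port"] == packet["destination_port"]):
            return rule["action"]
    return "deny"


rules = [make_rule("tcp", 80, "allow"),
         make_rule("tcp", 22, "deny")]

print(first_match(rules, {"protocol": "tcp", "destination_port": 80}))  # allow
print(first_match(rules, {"protocol": "udp", "destination_port": 53}))  # deny
```

The balancing act mentioned above shows up even in a toy like this: the common model must stay simple enough for users, while leaving room for each vendor's richer matching and action semantics underneath.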

While on the topic of services, we devoted a significant amount of time to discussing the VPN service (with its different technology and deployment modes) and to advancing the load balancer service from an experimental implementation to a richer reference implementation. We hope to complement this with a service VM framework that can be used across the services. A complementary aspect of this discussion is integration with other OpenStack components. We had a dedicated session on integration with Ceilometer and expect better, lighter, tighter integration, including counters for advanced networking features (all the *aaS services, security groups, floating IPs, etc.) and possibly monitoring capabilities beyond just counters. We also discussed how the OpenStack scheduler can be made more network aware and how network metrics (like delay and hop counts) can be made available to it.
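As a rough illustration of what network-aware scheduling could look like once such metrics are available, the sketch below ranks candidate hosts by a combined network cost built from delay and hop count. The metric names and weights are assumptions of mine, not anything agreed at the summit.

```python
# Hypothetical sketch of network-aware scheduling: pick the host with
# the lowest combined network cost, computed from metrics such as
# delay and hop count. Weights and metric names are illustrative only.

def network_cost(metrics, delay_weight=1.0, hop_weight=10.0):
    """Combine network metrics into a single scalar cost."""
    return (delay_weight * metrics["delay_ms"]
            + hop_weight * metrics["hops"])


def pick_host(candidates):
    """candidates: mapping of host name -> network metrics dict."""
    return min(candidates, key=lambda host: network_cost(candidates[host]))


hosts = {
    "host-a": {"delay_ms": 5.0, "hops": 3},   # cost 5 + 30 = 35
    "host-b": {"delay_ms": 2.0, "hops": 1},   # cost 2 + 10 = 12
}
print(pick_host(hosts))  # host-b
```

In a real scheduler this cost would be only one weigher among many (CPU, RAM, and so on); the sketch just shows how exposing network metrics makes placement network aware.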

All in all, lots of progress and lots to look forward to. So here's raising a "Big" toast to Portland for hosting a full-house OpenStack Summit with close to 3,000 attendees, almost twice the number from the previous summit; to our new Networking PTL Mark McClain for presiding over a successful agenda; and to six more months of intense hacking and exciting features until we meet again.

About the author: Sumit Naiksatam is a core contributor to the OpenStack Networking project. He is a Member of Technical Staff at Big Switch Networks and leads the company's technical contributions to OpenStack.