By Kapil Raval
February 19, 2013 06:30 AM EST
The networking industry has gone through different waves over the last 30+ years. In the '80s, the first wave was all about connecting and sharing: how to connect a computer to peripheral devices and to other computers. Many players developed technology and services to address that need, e.g., Novell, 3Com, Sun, IBM, DEC and Nortel. Across the industry, small islands of various protocols were created, with multiple gateways to bridge them.
In the '90s and '00s, Cisco dominated the industry and did a brilliant job of pushing it toward a common approach built on Ethernet. Cisco built a hugely successful business and ecosystem, and even created new markets like VoIP, on the proposition that networking should run on a common highway. We also saw networks become isolated from the rest of the IT infrastructure, in the sense that software innovation continued in the server and storage environments independently of networking. The focus also remained on the individual components of the infrastructure rather than on the 'service' delivered by the combination of those components, i.e., server, storage and network.
Now, it is all about orchestrated service delivery, which requires a standards-based, open approach. According to Gartner reports on Emerging Technology Analysis and Key Issues for Communications Strategies, a) over 50% of workloads will be virtualized by the end of 2012 thanks to cloud computing, and b) more than 80% of traffic will be server-to-server by 2014 due to federated applications and virtualization.
In this article, I attempt to highlight why we have reached the limits of current network technology, how Software Defined Networking (SDN) will lead the next wave of innovation, and what it offers the IT industry. Today, network elements like switches and routers have resident software in each box. The software in the box provides the intelligence, using distributed algorithms to decide how each packet should be handled. For the entire network to function properly, the software in each box must work in coordination with the other boxes. This approach has served us well so far.
The coordinated distributed algorithms, however, make it difficult to introduce a change on the fly: we have to reconfigure the embedded software on all network components (often called boxes) to implement any change. Meanwhile, the wave of virtualization demands flexible, adaptive and nimble networks, and it exposes the limitations of the current networking approach, which is inflexible and protocol-heavy. Because distributed algorithms are used, no single box has a global view of the network. This results in over-provisioning at design time and guesswork during troubleshooting. For large cloud deployments, compute and storage environments can be virtualized and consumed easily, but because of these network limitations the cloud's full potential is not realized.
Typically, a network administrator spends a lot of time planning, and then configuring, the network components as business requirements change and network traffic varies. Network administrators learn largely by trial and error, and the resulting expertise remains limited to the experienced few.
Research students at Stanford, Berkeley and other universities found it hard to experiment with their networks, because the software is embedded in each switch or router, and any change had to be coordinated among vendors to keep the distributed algorithms interoperable and provide the functionality needed for research and experimentation. It was with this simple objective that the idea of OpenFlow was born. The first step these researchers took was to develop the ability to program switches from a remote controller. The OpenFlow protocol was developed to support communication between a switch and a controller: it allows external control software to control the data path of a switch, bypassing traditional L2 and L3 protocols and their associated configurations. The protocol defines messages such as packet-received, send-packet-out, modify-forwarding-table and get-stats. The researchers added OpenFlow support to existing boxes, letting an OpenFlow controller program part of the flow-table entries for research and experimentation while the rest of the box worked as before. This gave them control over switches from a controller running on a remote, industry-standard server. This was the start of OpenFlow, which essentially separated the physical (data) layer from the control layer.
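To make that separation concrete, here is a minimal sketch of the controller side, written against the open-source Ryu framework (one of several OpenFlow controllers; the framework choice, class name and handler logic are my illustrative assumptions, not anything the researchers prescribed). It installs a table-miss entry so the switch hands unmatched packets to the controller, then answers each one with a packet-out, exercising the packet-received, modify-forwarding-table and send-packet-out messages named above.

```python
# Minimal sketch of an OpenFlow 1.3 controller app using Ryu (assumes Ryu
# is installed; run with `ryu-manager`). Names here are illustrative.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class MinimalController(app_manager.RyuApp):
    """The control layer: flow-table decisions live here, not in the box."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Table-miss entry: send every unmatched packet to the controller
        # (the 'packet-received' path). On the wire this is a
        # modify-forwarding-table (flow-mod) message.
        match = parser.OFPMatch()  # wildcard match
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def on_packet_in(self, ev):
        # The controller, not the switch, decides how the packet is handled;
        # here we simply flood it with a 'send-packet-out' message.
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                        in_port=msg.match['in_port'],
                                        actions=actions, data=data))
```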
OpenFlow and SDN became quite popular in the research community, and several service providers and some vendors started to see the value of this approach. Researchers from Stanford and Berkeley took the lead, but the Open Networking Foundation (ONF) was founded by leading providers (Google, Yahoo!, Microsoft, Facebook, Deutsche Telekom and Verizon). Some vendors, like HP, expressed their support from the beginning. ONF is the body that defines, standardizes and enhances the OpenFlow protocol, and it has a bigger charter with SDN that goes beyond OpenFlow: it promotes SDN and may standardize other parts of it. As a policy, vendors cannot join its board, but they can become ONF members and lead some working groups. Vendors thus have influence over the emerging standard, though they don't set the overall agenda and don't make the final decisions on what is standardized and what is not.
Another interesting point is that ONF wants to do as little standardization as possible, to encourage creativity. At first this sounds contradictory, but ONF looks at the software industry and tries to adopt its best practices. The software industry has fewer standards than the network industry, yet it has created more innovation and more jobs. The network industry has too many protocols defined and standardized, resulting in more complexity and less innovation. Academics are influencing ONF to ensure we don't end up with another rigid, inflexible and protocol-heavy networking world. ONF has 66 members today, and membership costs $30K per year. That is relatively high compared to similar bodies; the reason may be to ensure that only genuinely interested parties become members. We know that breakthrough innovations often come from small start-ups, some of whom would find it difficult to spend that much on an annual membership. On the other hand, ONF ensures that the developments made within the body are available to all members free of charge and royalties; one could easily spend more than $30K in lawyers' fees just sorting out royalty arrangements.
Google, Amazon, Rackspace and others have already implemented OpenFlow-based networks using proprietary hardware and in-house-developed software, and we see many new start-ups focused on developing applications that leverage the virtualized network. Most cloud providers manage huge data centers: "Every day Amazon Web Services (AWS) adds enough new capacity to support all of Amazon.com's global infrastructure through the company's first 5 years, when it was a $2.76 billion annual revenue enterprise," according to Jim Hamilton, their VP at large.
Google embraced OpenFlow very early on. Google's inter-datacenter production network, the largest in the world by traffic, runs on OpenFlow and SDN, proving that OpenFlow-based networks can scale and deliver on the promise. The biggest use case for central controllers, according to Google, is re-routing in anticipation of an event: if we know we are introducing a new service that will increase traffic load, we can pre-provision the network to make the best use of infrastructure resources. If a small business, say a flower shop, expects more traffic and needs more compute power on Valentine's Day, it is easy to make compute and storage capacity available with the standard virtualization technology we have today; making network resources available on demand is the hard part. This is where an OpenFlow controller, controlling the switches, can easily provide the necessary bandwidth and then tear it down or redirect the network resources to other requests. The Google example is impressive, but one could ask how many enterprise customers could afford, or would dare, to do what Google does. Moreover, just because there was a business case for Google does not mean there is one for everyone; each customer will have to evaluate its network, future growth requirements and so on to see whether the business case is positive.
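As a hedged sketch of what such event-driven pre-provisioning might look like (this is not Google's implementation; the Ryu datapath `dp`, queue ID, subnet and timeout below are all hypothetical), a controller could install a temporary high-priority rule that pins the shop's traffic to a guaranteed-bandwidth queue and lets it expire after the event:

```python
# Hedged sketch: temporary bandwidth pre-provisioning via an OpenFlow 1.3
# flow entry (Ryu). 'dp' is a connected Ryu datapath, e.g. captured in the
# switch-features handler shown earlier; queue 1 is assumed to be
# pre-configured on the switch with a guaranteed rate.
def provision_event_bandwidth(dp, subnet=('203.0.113.0', '255.255.255.0')):
    ofp, parser = dp.ofproto, dp.ofproto_parser
    match = parser.OFPMatch(eth_type=0x0800,   # IPv4 traffic...
                            ipv4_dst=subnet)   # ...toward the shop's servers
    actions = [parser.OFPActionSetQueue(1),    # guaranteed-rate queue
               # then hand off to normal forwarding (if the switch
               # supports the NORMAL reserved port)
               parser.OFPActionOutput(ofp.OFPP_NORMAL)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    # hard_timeout tears the rule down by itself after 12 hours, returning
    # the capacity to the general pool -- no box-by-box reconfiguration.
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=200, match=match,
                                  instructions=inst, hard_timeout=43200))
```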
Software Defined Networking can help make the network ready for cloud-bursting as and when required. SDN opens up many possibilities, for example:
- Packet Flow Redirection: A lot of video traffic comes from sources we trust, and for some applications security services on such traffic are not required. Because security services are extremely infrastructure-hungry and CPU-intensive, passing all data through them leads to a sprawl of security devices (many IDS/IPS and DPI appliances) monitoring traffic. With OpenFlow we can easily steer trusted traffic away from these costly resources (see the sketch after this list).
- Policy Management: Because the controller now has a global view of the network and can control it with software, defining and implementing business policies becomes easier. Better bandwidth management is one example: when unanticipated excess traffic arrives, the controller can program the network so that higher-priority business traffic gets more resources than low-priority traffic.
- Virtual Application Networks: The OpenFlow controller lets us create virtual networks for different applications on one physical network, such that each application can get different bandwidth and QoS based on its requirements, with auditable network isolation between applications and simpler compliance (a requirement in the financial industry). One can also give each customer a separate virtual domain to manage.
- Network Security: OpenFlow can be used to make networks more secure and agile. The OpenFlow controller allows us to monitor and manage network security and to:
  - Dynamically insert security services at any point in the network (an on-demand firewall or IDS/IPS, for example)
  - Monitor traffic and redirect suspect flows for full inspection
  - Combine per-flow QoS control with network management systems to leverage traffic and end-user identity information
  - Dynamically detect and mitigate attacks from infected PCs, using a signature/reputation database to create rules that address specific attacks
- Proprietary Appliances: It is very common today to deploy appliances in the network to deliver specific functions. These proprietary appliances can be replaced by an OpenFlow controller plus a software application delivering the same functionality. Communication service providers run a significant number of network services that can take advantage of virtualization and industry-standard servers; many application-specific appliances built on custom ASICs (WAN optimization, firewalls, DPI, spam/mail appliances, IDS, etc.) are good candidates for the SDN approach.
- Traffic Intelligence: A couple of years down the road, as SDN matures, a more futuristic use case is to monitor traffic patterns, generate intelligence from them, and use that intelligence to anticipate traffic and optimize available resources. Such intelligence can even reduce power consumption: if we know network usage is low during nights and early mornings, we can shut off parts of the network in a way that preserves complete connectivity without keeping the whole network up.
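As an illustration of the flow-redirection and policy items above, here is a hedged sketch (Ryu again; the port numbers, priorities and trusted subnet are hypothetical) in which one high-priority rule lets trusted video sources bypass the inspection path, while a catch-all rule detours everything else through the IDS:

```python
# Hedged sketch of OpenFlow-based flow redirection: trusted traffic skips
# the IDS. UPLINK_PORT, IDS_PORT and the trusted subnet are assumptions.
UPLINK_PORT = 1   # hypothetical port toward the network core
IDS_PORT = 2      # hypothetical port toward the IDS/DPI appliance

def install_redirect_policy(dp, trusted=('198.51.100.0', '255.255.255.0')):
    ofp, parser = dp.ofproto, dp.ofproto_parser

    def flow_mod(priority, match, actions):
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        return parser.OFPFlowMod(datapath=dp, priority=priority,
                                 match=match, instructions=inst)

    # High-priority rule: IPv4 traffic from trusted sources goes straight
    # to the uplink, never touching the costly inspection appliances.
    dp.send_msg(flow_mod(300,
                         parser.OFPMatch(eth_type=0x0800, ipv4_src=trusted),
                         [parser.OFPActionOutput(UPLINK_PORT)]))

    # Lower-priority catch-all: every other IPv4 flow is detoured through
    # the IDS port for full inspection.
    dp.send_msg(flow_mod(100,
                         parser.OFPMatch(eth_type=0x0800),
                         [parser.OFPActionOutput(IDS_PORT)]))
```

Because both entries sit in one flow table, priority alone decides which packets take which path, and changing the policy later is a single flow-mod from the controller rather than a box-by-box reconfiguration.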
The list of use cases is growing daily and will grow even faster as the pace of innovation increases, and the number of new start-ups in this area is rising rapidly. Finally, the networking field, which has been quite dull from the perspective of new innovation, is going to become more vibrant and exciting, with new possibilities. Moreover, if ONF is successful in maintaining open standards, SDN will allow plug-and-play among multi-vendor products, empowering IT and network operators to be more cost-effective and adaptive to the agility requirements of the business. With SDN, the network industry will mirror the innovation and development we have seen in the server and storage fields.
Some vendors want well-defined APIs for applications to leverage OpenFlow controllers, or more protocols supported. It is prudent of ONF not to define and standardize too much, and instead to let the market decide what an acceptable standard is: keeping the OpenFlow protocol unrestricted by standardizing no more than is absolutely required will fuel innovation.
The OpenFlow protocol is in its infancy, but it has generated tremendous interest from customers, researchers and vendors alike. One can argue that it is not fully mature or ready for prime time, but most agree it will change the network industry fundamentally, making it more flexible and nimble and driving more innovation. This train has left the station, even as some debate whether its destination is well defined or its ETA known. Hardware vendors will have to accept that networking hardware will be commoditized just as servers and storage were. OpenFlow/SDN certainly opens up opportunities for network-based applications, and this is where current vendors will have to focus to keep playing a major role in the future. Network administrators will no longer spend hours reconfiguring switches and routers; instead, they will need to learn how to control, manage, test and implement changes from a central controller.
Although the OpenFlow protocol is defined, not many vendors in the market yet support its latest version, 1.3. Moreover, there is a lack of tools to test, monitor and manage this new environment. HP and other major vendors have openly embraced OpenFlow and are investing in it. HP was one of the first major network vendors to invest in this area, with 60+ deployments and 16 different switch models supporting OpenFlow, and it leads one of the ONF task forces evolving the OpenFlow protocol. With its traditional strength in IT performance and operations management (test, monitor and manage) and telecom OSS, HP is well positioned to deliver a complete, future-proof infrastructure solution (server, storage, networking, software, security and analytics) for enterprise IT as well as telecom service providers.