By Kapil Raval
February 19, 2013 06:30 AM EST
The networking industry has gone through several waves over the last 30+ years. In the '80s, the first wave was all about connecting and sharing: how to connect a computer to peripheral devices and to other computers. Many players, such as Novell, 3Com, Sun, IBM, DEC and Nortel, developed technology and services to address that need. Across the industry, small islands of various protocols were created, with multiple gateways to bridge them.
In the '90s and '00s, Cisco dominated the industry and did a brilliant job of pushing it toward a common approach built on Ethernet. Cisco built a hugely successful business and ecosystem, and even created new markets like VoIP, on the proposition that networking should run on a common highway. We also saw the isolation of networks from the rest of the IT infrastructure, in the sense that software innovation continued in the server and storage environments independently of the network. The focus also remained on individual components of the infrastructure rather than on the 'service' delivered by the combination of those components, i.e., server, storage and network.
Now it is all about orchestrated service delivery, which requires a standards-based, open approach. According to Gartner reports on Emerging Technology Analysis and Key Issues for Communications Strategies, (a) over 50% of workloads will be virtualized by the end of 2012 thanks to cloud computing, and (b) more than 80% of traffic will be server-to-server by 2014 due to federated applications and virtualization.
In this article, I attempt to highlight why we have reached the limits of current network technology, how Software Defined Networking (SDN) will lead the next wave of innovation, and what its benefits are to the IT industry. Today, network elements like switches and routers have resident software in each box. The software in the box provides intelligence, using distributed algorithms to decide how each packet should be handled. For the entire network to function properly, the software in each box must work in coordination with the other boxes. This approach has served us well so far.
These coordinated distributed algorithms, however, make it difficult to introduce a change on the fly: to implement any change, we have to reconfigure the embedded software on all the network components (often called boxes). Meanwhile, the wave of virtualization demands flexible, adaptive and nimble networks, and it exposes the limitations of the current networking approach, which is inflexible and protocol-heavy. Because distributed algorithms are used, no single box has a global view of the network. The result is over-provisioning at design time and guesswork while troubleshooting. In large cloud deployments, compute and storage environments can be virtualized and consumed easily, but because of these network limitations, their full potential is not realized.
Typically, a network administrator spends a lot of time planning, and then reconfiguring the network components as business requirements change and network traffic varies. Network administrators learn largely by trial and error, and the resulting expertise is confined to an experienced few.
Research students at Stanford, Berkeley and other universities found it hard to experiment with their networks: because the software is embedded in each switch or router, any change has to be coordinated among vendors to keep the distributed algorithms interoperable, just to provide the functionality needed for research and experimentation. It is with this simple objective that the idea of OpenFlow was born. The first step these researchers took was to develop the ability to program switches from a remote controller. The OpenFlow protocol was developed to support communication between a switch and a controller. It allows external control software to control the data path of a switch, bypassing the traditional L2 and L3 protocols and their associated configurations. The OpenFlow protocol defines messages such as packet-received, send-packet-out, modify-forwarding-table and get-stats. The researchers added OpenFlow support to existing boxes and allowed an OpenFlow controller to program part of the flow-table entries for research and experimentation, while the rest of the box worked as before. This gave them control over switches from a controller running on a remote, industry-standard server. This was the start of OpenFlow, which in essence separated the data (forwarding) plane from the control plane.
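To make those message types concrete, here is a minimal sketch of a controller application written with the open-source Ryu framework (one OpenFlow controller among several; the article does not prescribe any particular one). It is patterned on Ryu's OpenFlow 1.3 samples: the switch reports a packet it cannot handle (packet-received), and the controller responds with a flow-table update (modify-forwarding-table) and an instruction for the pending packet (send-packet-out).

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class MinimalSwitch(app_manager.RyuApp):
        """Illustrates the OpenFlow message types named above."""
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
        def packet_in_handler(self, ev):
            # 'packet-received': the switch had no matching flow entry,
            # so it punted the packet to the controller.
            msg = ev.msg
            datapath = msg.datapath
            ofproto = datapath.ofproto
            parser = datapath.ofproto_parser
            in_port = msg.match['in_port']

            # Deliberately simplistic policy for illustration: flood everything.
            actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]

            # 'modify-forwarding-table': install a low-priority entry so the
            # switch handles future packets from this port on its own.
            inst = [parser.OFPInstructionActions(
                ofproto.OFPIT_APPLY_ACTIONS, actions)]
            datapath.send_msg(parser.OFPFlowMod(
                datapath=datapath, priority=1,
                match=parser.OFPMatch(in_port=in_port), instructions=inst))

            # 'send-packet-out': tell the switch what to do with the packet
            # that triggered this event.
            data = msg.data if msg.buffer_id == ofproto.OFP_NO_BUFFER else None
            datapath.send_msg(parser.OFPPacketOut(
                datapath=datapath, buffer_id=msg.buffer_id,
                in_port=in_port, actions=actions, data=data))

Launched with Ryu's ryu-manager command, this single file stands in, however crudely, for logic that would traditionally be embedded in every box.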
OpenFlow and SDN became quite popular in the research community, and several service providers and some vendors started to see the value of this approach. Researchers from Stanford and Berkeley took the lead, but the Open Networking Foundation (ONF) was founded by leading providers (Google, Yahoo!, Microsoft, Facebook, Deutsche Telekom and Verizon). Some vendors, like HP, expressed their support from the beginning. ONF is the body that defines, standardizes and enhances the OpenFlow protocol, and it has a bigger charter for SDN that goes beyond OpenFlow: it promotes SDN and may standardize other parts of it. As a matter of policy, vendors cannot join the ONF board, but they can become members and lead some working groups. Vendors thus have influence over the emerging standard, though they do not set the overall agenda and do not make the final decisions on what is standardized.
Another interesting point is that ONF wants to do as little standardization as possible, to encourage creativity. At first this sounds contradictory, but ONF is looking at the software industry and trying to adopt its best practices. The software industry has fewer standards than the network industry, yet it has produced more innovation and more jobs. The network industry has too many protocols defined and standardized, resulting in more complexity and less innovation. Academics are influencing ONF to ensure that we do not end up with another rigid, inflexible and protocol-heavy networking world. ONF has 66 members today, and membership costs $30k per year. This is relatively high compared to similar bodies; the reason may be to ensure that only genuinely interested parties join. We know that breakthrough innovations often come from small start-ups, some of whom would find it difficult to spend that much on an annual membership. On the other hand, ONF ensures that work developed within the body is made available to all members free of charge and royalties; one could easily spend more than $30k in lawyers' fees just sorting out royalty arrangements.
Google, Amazon, Rackspace and others have already implemented OpenFlow-based networks, using proprietary hardware and in-house developed software. We also see many new start-ups focused on this area, developing applications that leverage the virtualized network. Most cloud providers manage huge data centers: "Every day Amazon Web Services (AWS) adds enough new capacity to support all of Amazon.com's global infrastructure through the company's first 5 years, when it was a $2.76 billion annual revenue enterprise," according to Jim Hamilton, VP and Distinguished Engineer at AWS.
Google embraced OpenFlow very early on. Google's inter-datacenter production network, the largest in the world by traffic, runs on OpenFlow and SDN; Google proved that OpenFlow-based networks can scale and deliver on their promise. The biggest use case for central controllers, according to Google, is that we can re-route in anticipation of an event: if we know we are introducing a new service that will increase traffic load, we can pre-provision the network to make the best use of infrastructure resources. If a small business, say a flower shop, expects more traffic and needs more compute power on Valentine's Day, it is easy to make compute and storage capacity available with the standard virtualization technology of today; making network resources available on demand is the hard part. This is where an OpenFlow controller can direct the switches to provide the necessary bandwidth and then tear it down or redirect those network resources to other requests. The Google example is impressive, but one could ask how many enterprise customers could afford, or would dare, to do what Google did. Moreover, just because there was a business case for Google does not mean there is one for everyone; each customer will have to evaluate its network, future growth requirements and so on, and see whether a positive business case exists.
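As a hedged illustration of that flower-shop scenario, the sketch below (again in the style of Ryu's OpenFlow 1.3 API; the address, port number and duration are hypothetical) pre-installs a high-priority path ahead of the expected spike and lets the switch tear it down automatically when the event is over.

    def preprovision_burst(datapath, shop_ip, fast_port, seconds):
        """Pre-provision a high-priority path for an expected traffic spike.

        'shop_ip' and 'fast_port' are illustrative assumptions; a real
        deployment would compute a path and program every switch along it.
        """
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=shop_ip)
        actions = [parser.OFPActionOutput(fast_port)]
        inst = [parser.OFPInstructionActions(
            ofproto.OFPIT_APPLY_ACTIONS, actions)]
        # hard_timeout (at most 65535 seconds in OpenFlow 1.3) removes the
        # flow automatically once the event has passed.
        datapath.send_msg(parser.OFPFlowMod(
            datapath=datapath, priority=100, hard_timeout=seconds,
            match=match, instructions=inst))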
SDN can help make the network ready for cloud bursting as and when required. SDN opens up many possibilities, for example:
- Packet flow redirection: There is a lot of video traffic coming from sources we trust, and for some applications security services on such traffic are not required. Because security services are extremely infrastructure-hungry and CPU-intensive, passing all data through them leads to a sprawl of security devices (many IDS/IPS and DPI appliances) to monitor traffic. With OpenFlow, trusted traffic can easily be redirected away from those costly resources (see the bypass sketch after this list).
- Policy management: Because you now have a global view of the network and can control it with software running on the OpenFlow controller, defining and implementing business policies becomes easier. Bandwidth management is one example: when unanticipated excess traffic arrives, the controller can program the network so that higher-priority business traffic is given more resources than lower-priority traffic (see the queue-mapping sketch after this list).
- Virtual application networks: The OpenFlow controller lets us create virtual networks for different applications on one physical network, so that each application can have its own bandwidth and QoS based on its requirements, with auditable network isolation between applications and simpler compliance (a requirement in the financial industry). One can also give each customer a separate virtual domain to manage.
- Network security: OpenFlow can be used to make networks more secure and agile. The OpenFlow controller allows us to monitor and manage network security, and to:
  - Dynamically insert security services at any point in the network (an on-demand firewall or IDS/IPS, for example)
  - Monitor traffic and redirect suspect flows for full inspection
  - Combine per-flow QoS control with network management systems to leverage traffic and end-user identity information
  - Dynamically detect and mitigate attacks from infected PCs, using a signature/reputation database to create rules that address specific attacks
- Proprietary appliances: It is very common today to deploy appliances in the network to deliver specific functionality. These proprietary appliances can be replaced with an OpenFlow controller and a software application delivering the same functionality. Communications service providers have a significant number of network services that can take advantage of virtualization and industry-standard servers; many application-specific appliances running on custom ASICs (WAN optimization, firewalls, DPI, spam/mail appliances, IDS, etc.) are good candidates for the SDN approach.
- Traffic intelligence: As SDN matures, a couple of years down the road, a more futuristic use case is to monitor traffic patterns, generate intelligence, and then use that intelligence to anticipate traffic and optimize the available resources. With this kind of intelligence we can even reduce power consumption: for example, if we know that network usage is low during the night and early morning, we can shut off parts of the network in such a way that we retain complete connectivity without keeping the complete network up.
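Here is the bypass sketch referenced in the packet flow redirection item: a hedged Ryu-style fragment in which traffic from a trusted source is forwarded straight to the egress port while everything else is steered through the IDS. The addresses, ports and priorities are assumptions for illustration; the same pattern, with the ports swapped, implements the security use case of redirecting suspect flows for full inspection.

    def add_flow(datapath, priority, match, out_port):
        """Install a flow entry forwarding matching packets to out_port."""
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        actions = [parser.OFPActionOutput(out_port)]
        inst = [parser.OFPInstructionActions(
            ofproto.OFPIT_APPLY_ACTIONS, actions)]
        datapath.send_msg(parser.OFPFlowMod(
            datapath=datapath, priority=priority,
            match=match, instructions=inst))

    def install_ids_bypass(datapath, trusted_src_ip, egress_port, ids_port):
        """Trusted video traffic skips the IDS; everything else visits it."""
        parser = datapath.ofproto_parser
        # Higher-priority rule wins: the trusted source goes straight out.
        add_flow(datapath, 200,
                 parser.OFPMatch(eth_type=0x0800, ipv4_src=trusted_src_ip),
                 egress_port)
        # Low-priority catch-all: all remaining traffic goes to the IDS first.
        add_flow(datapath, 1, parser.OFPMatch(), ids_port)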
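And here is the queue-mapping sketch referenced in the policy management item. It assumes, purely for illustration, that queue 1 has been configured on the switch out-of-band with a minimum-rate guarantee (queue setup is outside the OpenFlow protocol itself) and that the business-critical application is identified by a TCP port.

    def prioritize_business_traffic(datapath, business_tcp_port, egress_port):
        """Map business-critical traffic onto a guaranteed-rate queue."""
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        # Match IPv4/TCP traffic destined for the business application.
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                                tcp_dst=business_tcp_port)
        # Queue 1 is assumed to carry a minimum-rate guarantee; traffic not
        # matched here falls through to the default queue.
        actions = [parser.OFPActionSetQueue(1),
                   parser.OFPActionOutput(egress_port)]
        inst = [parser.OFPInstructionActions(
            ofproto.OFPIT_APPLY_ACTIONS, actions)]
        datapath.send_msg(parser.OFPFlowMod(
            datapath=datapath, priority=150, match=match, instructions=inst))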
The list of use cases grows daily and will grow even faster as the pace of innovation increases, and the number of new start-ups in this area is rising rapidly. Finally the networking field, which has been quite dull from the perspective of new innovation, is going to become more vibrant and exciting, with new possibilities. Moreover, if ONF succeeds in maintaining open standards, SDN will allow plug-and-play among multivendor products, empowering IT and network operators to be more cost-effective and adaptive to the agility requirements of the business. With SDN, the network industry will mirror the innovation and development we have seen in the server and storage fields.
Some vendors want well-defined APIs for applications to leverage OpenFlow controllers, or more protocols supported. It is prudent of ONF not to define and standardize too much, and to let the market decide what an acceptable standard is. Keeping the OpenFlow protocol unencumbered, by standardizing no more than is absolutely required, will fuel innovation.
The OpenFlow protocol is in its infancy, but it has generated tremendous interest from customers, researchers and vendors alike. One can argue that it is not fully mature or ready for prime time, but most agree that it will change the network industry fundamentally, making it more flexible and nimble and driving more innovation. This train has left the station, even as some debate whether its destination is well defined or its ETA is known. Hardware vendors will have to accept that networking hardware will be commoditized just as servers and storage were. OpenFlow/SDN certainly opens up opportunities for network-based applications, and this is where current vendors will have to focus to continue to play a major role. Network administrators will no longer spend hours reconfiguring switches and routers; instead they will have to learn how to control, manage, test and implement changes from a central controller.
Although the OpenFlow protocol is defined, few vendors in the market support its latest version, 1.3, and there is a lack of tools to test, monitor and manage this new environment. HP and other major vendors have openly embraced OpenFlow and are investing in it. HP was one of the first major network vendors to invest in this area, with 60+ deployments and 16 different switch models supporting OpenFlow, and it leads one of the ONF task forces evolving the OpenFlow protocol. With its traditional strength in IT performance and operations management (test, monitor and manage) and telecom OSS, HP is well positioned to deliver a complete, future-proof infrastructure solution (consisting of server, storage, networking, software, security and analytics) for enterprise IT as well as telecom service providers.