
From ESBs to API Portals, an Evolutionary Journey | Part 2

SOA transformation in the big enterprise

In this article series we build the case that API portals, of which the Intel® API Manager and Intel® Expressway Service Gateway, powered by Mashery, are representative examples, are the contemporary manifestation of the SOA movement that transformed IT in the early 2000s from a cost center into an equal partner in a company’s execution of business strategy and revenue generation. In the introductory article in Part 1 we discussed some of the business dynamics that led to cloud computing and the service paradigm. Let’s now take a closer look at the SOA transformation in the big enterprise.

If we look at the Google Trends graph for the term “SOA” and use search popularity as an indicator of industry interest, we can see that interest in SOA peaked around 2007, just as interest in cloud computing started rising. There was a brief burst of interest in the term at the end of 2012, which can be attributed to people looking for precedents in SOA as the industry moves to cloud services.

Figure 1. Google Trends graph for “SOA.”

Figure 2. Google Trends graph for “Cloud Computing.”

The search rate for the term “cloud computing” actually peaked in 2011, but perhaps unlike SOA, the trend does not indicate waning interest; rather, the focus has shifted to more specific aspects. See, for instance, the graphs for “Amazon AWS” and “OpenStack.”

Figure 3. Google Trends graph for “Amazon AWS.”

Figure 4. Google Trends graph for “OpenStack.”

SOA brought a discipline of modularity that had been well known in the software engineering community for more than 30 years, but had been little applied in corporate-wide IT projects. The goal of SOA was to attain a structural cost reduction in the delivery of IT services through reuse and standardization. These savings needed to be weighed against significant upfront costs for architecture and planning, as well as the reengineering effort required for interoperability and security. The expectation was a lower per-instance cost from reuse in spite of the required initial investment.

Traditionally, corporate applications have been deployed in stovepipes, as illustrated in Figure 5 below: one application per server or server tier hosting a complete solution stack. Ironically, this trend was facilitated by the availability of low-cost Intel-based high-volume servers starting fifteen years ago. Under this system physical servers need to be procured, a process that takes anywhere from two weeks to six months depending on the organizational policies and asset approvals in effect. When the servers become available, they need to be configured and provisioned with an operating system, database software, middleware and the application. Multiple stovepipes are actually needed to support a running business. For instance, Intel IT requires as many as 15 staging stovepipes to phase in an upgrade for the Enterprise Resource Planning (ERP) SAP application. The large number of machines needed to support almost any corporate application over its life cycle led to the condition affectionately called “server sprawl.” In data centers housing thousands if not tens of thousands of servers it is not difficult to lose track of these assets, especially after project and staff turnover from repeated reorganizations and M&As. This created another affectionate term: “zombies.” These are forgotten servers from projects past, still powered up, and even running an application, but serving no business purpose.

Figure 5. Traditional Application Stovepipes vs. SOA.

With SOA, monolithic applications are broken into reusable, fungible services as shown on the left side of Figure 6 below. Much in the same way server sprawl existed in data centers, so it was with software: multiple copies deployed, burdening IT organizations with possibly unnecessary licensing costs, or even worse, with shelfware, that is, licenses paid for software never used. As an example, in a stovepiped environment each application that requires the employee roster of a company, such as user accounts, the phone directory, expense reporting and payroll, would require a full copy of the employee information database. In addition to the expense of the extra copies, the logistics of keeping each copy synchronized would be complex.

What if, instead of replicating the employee roster, it were possible to build a single copy into which every application needing this information could “plug in” and use as needed? There are some complications: the appropriate access and security mechanisms need to be in place, and locking mechanisms for updates need to be implemented to ensure the integrity and consistency of the data. However, the expense of enabling the database for concurrent access is still significantly less than the expense of maintaining several copies.

If we access this new single-copy employee database through Web Services technology, using either SOAP or REST, we have just created a “service”: the “employee roster service.” If every application layer in a stack is re-engineered as a service with possibly multiple users, the stacks in Figure 5 morph into a network as shown in the left part of Figure 6. The notion of service is recursive: most applications become composites of several services, and services themselves are composites of services. Any service can be an “application” if it exposes a user interface (UI), or a service proper if it exposes an API. In fact a service can be both, exposing multiple UIs and APIs depending on the intended audience or target application: it is possible to have one API for corporate access and yet another available to third-party developers of mobile applications.
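
To make this concrete, here is a minimal sketch of what such an employee roster service might look like, written in Python with the Flask microframework (any web framework would do). All endpoint paths, field names and data are hypothetical; the point is that a single authoritative copy of the roster backs two different APIs, one for corporate access and a reduced one for third-party developers.

```python
# A minimal sketch of a hypothetical "employee roster service" exposed over
# REST. Assumes Flask is installed (pip install flask); every endpoint and
# field name here is illustrative, not taken from any actual product.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# The single authoritative copy of the roster; in practice this would live
# in a database with proper locking for concurrent updates.
EMPLOYEES = {
    "1001": {"name": "Jane Doe", "phone": "x1234", "cost_center": "411"},
    "1002": {"name": "John Roe", "phone": "x5678", "cost_center": "520"},
}

# Internal API: the full record, for corporate consumers such as payroll.
@app.route("/internal/employees/<emp_id>")
def internal_lookup(emp_id):
    record = EMPLOYEES.get(emp_id)
    if record is None:
        abort(404)
    return jsonify(record)

# Public API: a reduced view for third-party (e.g., mobile) developers.
@app.route("/public/directory/<emp_id>")
def public_lookup(emp_id):
    record = EMPLOYEES.get(emp_id)
    if record is None:
        abort(404)
    return jsonify({"name": record["name"], "phone": record["phone"]})

if __name__ == "__main__":
    app.run(port=8080)
```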

Applications structured to operate under this new service paradigm are said to follow a service-oriented architecture, commonly known as SOA. The transition to SOA created new sets of dynamics whose effects are still triggering change today. For one thing, services are loosely coupled, meaning that as long as the terms of the service contract between the service consumer and the service provider do not change, one service instance can easily be replaced. This feature enormously simplifies the logistics of deploying applications: a service can be replaced to improve quality, or ganged together with a similar service to increase performance or throughput. Essentially, applications can be assembled from services as part of operational procedures. This concept is called “late binding” of application components.
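
In code, loose coupling amounts to programming against the contract rather than against an implementation. Below is a minimal Python sketch, with all class names hypothetical: the consumer depends only on the RosterService contract, so either implementation can be substituted at deployment time without touching the consumer.

```python
# A minimal sketch of loose coupling: the consumer depends only on the
# RosterService contract, so implementations can be swapped at deployment
# time without changing consumer code. All names are hypothetical.
from abc import ABC, abstractmethod

class RosterService(ABC):
    """The service contract: all that consumers may rely on."""
    @abstractmethod
    def phone_for(self, emp_id: str) -> str: ...

class LocalRosterService(RosterService):
    """In-house implementation backed by a local table."""
    def __init__(self, table: dict):
        self._table = table
    def phone_for(self, emp_id: str) -> str:
        return self._table[emp_id]

class RemoteRosterService(RosterService):
    """Replacement implementation that would call a remote REST endpoint."""
    def __init__(self, base_url: str):
        self._base_url = base_url
    def phone_for(self, emp_id: str) -> str:
        # Real code would issue an HTTP GET against self._base_url here.
        raise NotImplementedError("omitted from the sketch")

def print_phone(service: RosterService, emp_id: str) -> None:
    # The consumer: bound only to the contract, not to an implementation.
    print(service.phone_for(emp_id))

print_phone(LocalRosterService({"1001": "x1234"}), "1001")
```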

Historically, binding requirements have loosened over time. In earlier times most application components had to be bound together at compile time; this was truly early binding. Over time it became possible to combine precompiled modules and precompiled libraries using a linker tool. With dynamically linked libraries it became possible to bind together binary objects at run time. However, this operation had to be done within a given operating system, and was allowed only within strict version or release limits.
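
Run-time binding to a dynamically linked library is easy to demonstrate with Python’s standard ctypes module. The sketch below loads the C runtime library at run time and calls a symbol resolved from the freshly loaded binary object; the library lookup assumes a Unix-like system.

```python
# Run-time (late) binding to a dynamically linked library using Python's
# standard ctypes module. Assumes a Unix-like system; on Windows the
# library name differs (e.g., "msvcrt").
import ctypes
import ctypes.util

# The binding happens here, at run time, not at compile or link time.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Call a symbol resolved from the freshly loaded binary object.
print(libc.abs(-42))  # prints 42
```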

We can expect even more dynamic applications in the near future. For instance, it is not hard to imagine self-configuring applications assembled on the fly, in real time and on demand, from a predefined template. In theory these applications could recreate themselves in any geographic region using locally sourced service components.
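
As a toy illustration of the idea, the sketch below describes an application as a template of required service roles and resolves it at run time against whatever providers are available in a given region; all names and endpoints are invented.

```python
# A toy sketch of template-driven assembly: the application is a template of
# required service roles, bound at run time to locally sourced providers.
# All role names, regions and endpoints are invented for illustration.
APP_TEMPLATE = ["roster", "payments", "storage"]

REGIONAL_CATALOG = {
    "us-west": {"roster": "https://roster.us-west.example.com",
                "payments": "https://pay.us-west.example.com",
                "storage": "https://blob.us-west.example.com"},
    "eu-central": {"roster": "https://roster.eu.example.com",
                   "payments": "https://pay.eu.example.com",
                   "storage": "https://blob.eu.example.com"},
}

def assemble(template, region):
    """Bind each role in the template to a locally sourced endpoint."""
    catalog = REGIONAL_CATALOG[region]
    return {role: catalog[role] for role in template}

# The "same" application, recreated in two different regions.
print(assemble(APP_TEMPLATE, "us-west"))
print(assemble(APP_TEMPLATE, "eu-central"))
```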

There are also business considerations driving the transformation dynamics of application components. Business organizations are subject to both headcount and budgetary constraints on capital expenses. Under these restrictions it may be easier for an organization to convert labor and capital costs into monthly operational expenses by running its services on third-party machines through Infrastructure as a Service (IaaS) cloud offerings, or to take one step further and contract out the complete database package implementing the employee roster directly from a Software as a Service (SaaS) provider. All kinds of variations are possible: the software service may be hosted on infrastructure from yet another party, contracted by either the SaaS provider or the end-user organization.

The effect of executing this strategy is the externalization of some of the services, as shown on the right-hand side of Figure 6. We call this type of evolution inside-out SOA, where initially in-sourced service components get increasingly outsourced.

Figure 6. The transition from internal SOA to inside-out SOA.

As with any new approach, the SOA transformation required an upfront investment, including the cost of reengineering applications by breaking them up into service components, and of ensuring that new applications or new capabilities were service-ready or service-capable. The latter usually meant attaching an API to an application to make it usable as a component of a higher-level application.

Implementation teams found the extra work under the SOA discipline disruptive and distracting. Project participants resented the fact that while this extra work was for the “greater good” of the organization, it was not directly aligned with the goals of their project. This is part of the cultural and behavioral aspect of a SOA program, which can be more difficult to orchestrate than the SOA technology itself. Most enterprises that took a long-term approach and persisted in these efforts eventually reached a breakeven point where the extra implementation cost of a given project was balanced by the savings from reusing the work of past projects.

This early experience also had another beneficial side effect that would pave the way for the adoption of cloud computing a few years later: the development of a data-driven, management-by-numbers ethic demanding quantifiable QoS and a priori service contracts, also known as SLAs or service level agreements.
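
The essence of a quantifiable SLA is a contract expressed in numbers that can be checked mechanically against measurements. A toy sketch, with invented thresholds and measurements:

```python
# A toy illustration of a quantifiable SLA: invented thresholds checked
# against invented measurements. Real SLAs cover availability, latency
# percentiles, support response times and remedies over a billing period.
sla = {"availability": 0.999, "p95_latency_ms": 250}

# Hypothetical measurements collected over a month of operation.
measured = {"availability": 0.9995, "p95_latency_ms": 180}

for metric, target in sla.items():
    # Availability must meet or exceed the target; latency must not exceed it.
    ok = (measured[metric] >= target) if metric == "availability" \
         else (measured[metric] <= target)
    print(f"{metric}: target {target}, measured {measured[metric]}, "
          f"{'met' if ok else 'VIOLATED'}")
```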

While the inside-out transformation just described had a significant impact on the architecture of enterprise IT, the demand for third-party service components had an even greater economic impact on the IT industry as a whole, leading to the creation of new supply chains and, with these supply chains, new business models.

Large companies such as Netflix, Best Buy, Expedia, Dun & Bradstreet and The New York Times found that the inside-out transformative process was actually a two-way street. These early adopters found that making applications “composable” went beyond saving money; it actually helped them make money by enabling new revenue streams: the data and intellectual property that benefited internal corporate departments was just as useful to external parties, if not more so. For instance, an entrepreneur providing a travel service to a corporate customer did not have to start from ground zero and make the large investment to establish a travel reservation system. It was a lot simpler to link up to an established service such as Expedia. In fact, this upstart did not have to be bound to a single service: at this level it makes more economic sense to leverage a portfolio of services, in which case the value added by the upstart is in finding the best choices from the portfolio. This is a common pattern in product search services, whose function is to find the lowest price across multiple stores, or in the case of a travel service, the lowest priced airfare.
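
The aggregation pattern behind such services is simple to express in code. In the sketch below, the provider names, fare functions and prices are all invented; a real aggregator would call each provider’s API, ideally concurrently.

```python
# A minimal sketch of the aggregation pattern: query a portfolio of fare
# services and surface the best choice. Provider names, fare functions and
# prices are invented for illustration.
def expedia_like_fare(route):      # stand-in for a real API call
    return 412.00

def other_provider_fare(route):    # stand-in for another provider's API
    return 398.50

PORTFOLIO = {"ExpediaLike": expedia_like_fare, "Other": other_provider_fare}

def cheapest_fare(route):
    """Collect a quote from every provider and return the best one."""
    quotes = {name: fare(route) for name, fare in PORTFOLIO.items()}
    best = min(quotes, key=quotes.get)
    return best, quotes[best]

print(cheapest_fare("SFO-JFK"))  # ('Other', 398.5)
```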

The facilitation of the flow of information was another change agent for the industry. There was no place to hide. A very visible example is the effect of these dynamics on the airline industry, which changed irrevocably, bringing new efficiencies but also significant disruption. The change empowered consumers, and some occupations, such as travel agents and car salespeople, were severely impacted.

Another trend underlying the IT industry’s transformation around services is the “democratization” of the services themselves. The cost efficiencies gained not only lowered the cost of doing business with expensive applications previously accessible only to large corporations with deep pockets; they made these applications affordable to smaller businesses, the market segment known as SMBs or Small and Medium Businesses. The economic impact of this trend has been enormous, although hard to measure as it is still in process. A third wave has already started: the industry’s ability to reduce the quantum of delivery of IT services to make them affordable to individual consumers. This includes social media, as well as more traditional services such as email and storage services such as Dropbox. We will take a look at SMBs in Part 3.
