Deploying Large-Scale Interoperable Web Services Infrastructures

Issues that architects will frequently have to confront

Web services have moved beyond the experimental stage in many organizations and are now becoming the foundation of numerous service-oriented architectures. Thus, architects are concerned about best practices for building, deploying, and maintaining a large-scale interoperable Web services infrastructure.

In one sense, Web services applications are like other applications. They represent a code base developed by a team of engineers. This code base needs to go through a disciplined development life cycle, followed by testing and quality assurance, before it is finally released.

Frequently, however, Web services are not new applications at all, but rather carefully crafted message-based interface layers on top of existing systems and applications. New applications may be composed of Web services, or the services may be orchestrated into new business processes.

Given this evolutionary approach to application design and deployment, Web services, and the applications and business processes built from them, have a different set of provisioning and management concerns that enterprise architects must take into account. This article provides a high-level assessment of four key areas of concern.

Interoperability
The underlying value proposition of Web services is operational efficiency, provided by consistent, standards-based mechanisms that link together heterogeneous business applications. Such systems typically originate on a variety of legacy, host-based, and modern architectures, each with different Web service-enabling capabilities.

Although interoperability best practices for Web services are becoming better understood for lower-level protocols like SOAP and WSDL, issues at the more recently emerged quality-of-service level for security, reliability, and policy are not as well understood. As such, service-enabling these applications in a consistent and maximally interoperable fashion is a key concern for enterprise architects.

Performance
Web services are fundamentally a message-based approach to integration and focus on moving XML-based SOAP documents on both public and private networks. Such applications have very different performance characteristics than traditional multitier transactional systems that use binary protocols. Architects must take into account new throughput, latency, network, and client concerns layered on top of existing applications that have frequently been tuned for different usage characteristics.

Quality of Service
Coinciding with the emergence of Web applications was an increase in the number of users for back-end infrastructures. Web services add a new category of client: the programmatic client that has the potential to increase the number of "users" and messages flowing through the environment by another order of magnitude. This new usage model can require new approaches to reliability, availability, and scalability. Furthermore, message-based systems bring new quality-of-service concerns regarding reliable messaging and security of messages coming into the infrastructure.

Manageability
Web services are typically used for application-to-application communication rather than for end user-facing applications. As such, the visibility of a Web services infrastructure to the end user and even the operational staff is less apparent because it is frequently the hidden glue that ties operational systems together. It is critical that architects design with management visibility in mind, taking into account the uniqueness of Web services from a monitoring, diagnostics, service life cycle-management, and service-level agreement perspective.

Let's take a closer look at each of these topics.

Interoperability
A number of key questions regarding interoperability can drive an architect's strategy. Are your Web services internally focused so that you will have control over the clients and the tools that will be used with them? Are your Web services external facing and subject to arbitrary clients and tools? How sophisticated are your users? Are they integration architects using Web services development tools, or are they end users using Web service-enabled desktop and portal productivity software? These are basic questions, but they direct how you might tackle interoperability.

You can begin by following best interoperability practices for developing Web services.

Development
One way to tackle the issue is to publish Web services bottom-up (i.e., taking existing applications and simply wrapping programmatic APIs as Web services). However, the top-down approach is more interoperable (i.e., modeling the messages using XML Schema and designing the interface first in WSDL so that the public contract and message definitions work for both the client and server implementations).

The top-down approach is more interoperable for several reasons. For one thing, bottom-up approaches not only tightly couple consumers to existing APIs, but they often pollute the WSDL contract with noninteroperable, language-specific interface and message artifacts. A simple Java example that is difficult to consume from .NET is the frequent use of Java collections to move data structures between application tiers. From .NET, a common example is ADO.NET data sets, which are specific to Microsoft's platform. Avoiding language-specific types and starting with interface and message definitions can lead to a much higher likelihood of interoperability.
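
For illustration, the hypothetical Java fragment below contrasts the two styles; the interface, class, and field names are invented for this example, and each top-level type would normally live in its own source file.

    // Bottom-up: directly wrapping an existing API. java.util.HashMap has no natural
    // XML Schema representation, so the generated WSDL becomes toolkit-specific and is
    // difficult for .NET clients to consume.
    interface CustomerServiceBottomUp {
        java.util.HashMap getCustomerData(String customerId);
    }

    // Contract-friendly alternative: the return type is a simple bean whose fields map
    // cleanly to an XML Schema complexType, keeping the WSDL language-neutral.
    class CustomerData {
        private String customerId;
        private String name;
        private String[] openOrderIds;   // arrays map naturally to schema sequences

        public String getCustomerId() { return customerId; }
        public void setCustomerId(String id) { this.customerId = id; }
        public String getName() { return name; }
        public void setName(String n) { this.name = n; }
        public String[] getOpenOrderIds() { return openOrderIds; }
        public void setOpenOrderIds(String[] ids) { this.openOrderIds = ids; }
    }

    interface CustomerService {
        CustomerData getCustomerData(String customerId);
    }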

Third-party testing tools can validate best practices for interoperability. One of the most recognized groups focused on this is the Web Services Interoperability Organization (WS-I), which includes companies such as Oracle, IBM, Microsoft, and others. WS-I has created a set of best practices called WS-I Basic Profile 1.1 that describes how best to craft WSDL and SOAP messages so that Web services conforming to these rules have a maximum chance of achieving interoperability. These rules have been codified into a set of testing tools that can be run against Web service WSDLs and SOAP message exchanges to determine if those practices have been followed.

Testing
Conformance to WS-I does not necessarily guarantee interoperability. Rather, it is an indicator that your Web services are highly likely to be interoperable. Some older Web services infrastructures may not support the message styles required by WS-I (document/literal and rpc/literal), and some service providers are unable to upgrade their infrastructures to generate WS-I compliant services.

Practically speaking, testing with the actual target Web services clients is the only way to prove real interoperability. This enables architects to validate their own internally developed Web services as well as to validate that their preferred toolkits work with non-WS-I compliant Web services. An analogy can be made to Web application development. Just as many organizations test their Web applications with multiple browsers to ensure HTML compatibility, it is frequently incumbent on the Web services provider to try multiple client environments with their Web services end points.

The degree of interoperability testing required depends on how you answered the initial questions regarding usage. Externally available Web services have a higher testing bar associated with them due to the unanticipated nature of clients. Internally available Web services may have a lower testing bar if the client environment is more tightly managed and homogeneous.

A very common practice that has emerged is for Web services providers to offer sample clients in popular programming languages: Java, C#, Visual Basic, Perl, and PHP. Examples of widely used services taking this approach include Amazon, Google, and eBay. This approach may seem to indicate that the promise of the interoperability of Web services has yet to be reached. However, it should be seen simply as a sign of a maturing industry as architects take short-term pragmatic steps toward ensuring interoperability and, as a by-product, usability.

In addition, a Web services provider may make a conscious decision to create a poorly interoperable implementation. If such a situation arises, the designer should provide some workarounds for service consumers.

Workarounds
Just as database architects and middle-tier object modelers often relax design constraints for application-specific reasons, Web services providers may consciously design service interfaces that are not maximally interoperable. Some designers prefer tight coupling to back-end systems for performance reasons. Others really want nonschema-based object models represented in the message exchanges, or moved "over the wire," for productivity reasons. Sometimes using SOAP over HTTP just does not meet the performance requirements of the target application.

In these cases, it is typically incumbent on the Web services provider to offer recommendations to clients on how to use these services. Common approaches beyond providing sample working clients include the following:

  1. Working in a homogeneous client/server environment (Web services toolkits invariably are symmetrically interoperable with themselves)
  2. Providing custom serializers for proprietary types that can be plugged into third-party toolkits
  3. Describing how to use the handler or interceptor architectures provided by most toolkits to transform messages into a usable form at the client or end point (a minimal handler sketch follows this list)
  4. Providing code samples of how to parse the raw XML SOAP message
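
As a rough illustration of item 3, the following is a minimal JAX-RPC style handler sketch; the class name and the transformation performed in the body are placeholders, but most toolkits of this generation expose a similar interception point.

    import javax.xml.namespace.QName;
    import javax.xml.rpc.handler.GenericHandler;
    import javax.xml.rpc.handler.MessageContext;
    import javax.xml.rpc.handler.soap.SOAPMessageContext;
    import javax.xml.soap.SOAPBody;
    import javax.xml.soap.SOAPMessage;

    // Hypothetical handler that rewrites an incoming message into a form the local
    // toolkit can deserialize (for example, renaming a proprietary element).
    public class LegacyMessageHandler extends GenericHandler {

        public QName[] getHeaders() {
            return new QName[0];   // this handler does not process specific headers
        }

        public boolean handleRequest(MessageContext context) {
            try {
                SOAPMessage message = ((SOAPMessageContext) context).getMessage();
                SOAPBody body = message.getSOAPPart().getEnvelope().getBody();
                // ... inspect or transform the body here before it reaches the application ...
                message.saveChanges();
            } catch (Exception e) {
                throw new RuntimeException("Message transformation failed", e);
            }
            return true;   // continue down the handler chain
        }
    }
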
Often Web services toolkit providers, and sometimes the platform providers themselves (e.g., Oracle, Microsoft, and IBM), will have completed specific integration work with one another beyond the standards. This enables easier integration paths for Web services providers. A simple example of this approach is the widespread use of document-literal "wrapped" Web services, a convention originated by Microsoft for modeling RPC calls as document-literal services that nearly all Web services toolkits now support.

Beyond interoperability concerns, moving to an XML-based integration stack defined by Web services brings performance characteristics to mind.

Performance
In a Web services application, message sizes typically increase substantially compared with traditional binary protocols. Additionally, a new layer of marshaling, unmarshaling, parsing, and translating XML messages to and from the underlying protocols is introduced.

Therefore, an important part of any deployment architecture for Web services must include a comprehensive plan to understand the performance characteristics of the service end points and the clients using those service end points. Typically, the performance analysis needs to focus on two areas: throughput and latency.

Throughput
Throughput is the number of Web services requests (or, equivalently, the volume of bytes) handled in a given time period. Throughput is measured only on the server side and does not include the time it took to send or receive the message.

Latency
Latency is the round-trip time between sending a request and receiving a response. Latency is often subject to issues external to the server, such as network bandwidth and, in a heterogeneous environment, characteristics of the client environments.

The first question is, "What is the expected message size that will be passed through individual Web services end points?" Once the message size is determined, it is often a good practice to start with what might be termed a "null processing" test. The goal is to load the deployment environment with concurrent requests while performing zero application processing on the server side, to determine what overhead the Web services runtime itself puts on the environment. This allows you to ascertain the overhead of the Web services infrastructure independent of its interaction with the underlying systems.
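
A minimal load-generation sketch along these lines follows; the endpoint URL, payload, thread count, and request count are placeholder assumptions, and a real exercise would typically use a dedicated, neutral load-testing tool rather than hand-rolled code.

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    // "Null processing" load sketch: concurrent clients post a fixed SOAP envelope to an
    // endpoint whose operation does no application work, so the measured cost is dominated
    // by the Web services runtime itself.
    public class NullProcessingLoadTest {

        private static final String ENDPOINT = "http://localhost:8888/ws/NullService"; // placeholder
        private static final String SOAP_REQUEST =
            "<?xml version=\"1.0\"?>" +
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "<soap:Body><noop xmlns=\"urn:example:null\"/></soap:Body></soap:Envelope>";

        public static void main(String[] args) throws Exception {
            int threads = 20;
            final int requestsPerThread = 500;
            final AtomicLong totalLatencyNanos = new AtomicLong();
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            long start = System.nanoTime();
            for (int t = 0; t < threads; t++) {
                pool.execute(new Runnable() {
                    public void run() {
                        for (int i = 0; i < requestsPerThread; i++) {
                            long begin = System.nanoTime();
                            invoke();
                            totalLatencyNanos.addAndGet(System.nanoTime() - begin);
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            long totalRequests = (long) threads * requestsPerThread;
            double elapsedSeconds = (System.nanoTime() - start) / 1e9;
            System.out.println("Throughput: " + (totalRequests / elapsedSeconds) + " requests/sec");
            System.out.println("Avg latency: "
                + (totalLatencyNanos.get() / (double) totalRequests) / 1e6 + " ms");
        }

        private static void invoke() {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
                conn.setDoOutput(true);
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
                conn.setRequestProperty("SOAPAction", "\"\"");
                OutputStream out = conn.getOutputStream();
                out.write(SOAP_REQUEST.getBytes("UTF-8"));
                out.close();
                InputStream in = conn.getInputStream();
                while (in.read() != -1) { /* drain the response */ }
                in.close();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }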

Going through this exercise can reveal a number of issues within a testing and production environment, including the following.

  1. Network. Often when testing the performance of Web services, network bandwidth can be the bottleneck. Network issues can impact both latency and throughput.
  2. Client. Many vendors will optimize their Web services client to work best with their Web services runtime. However, using the Web services runtime-provided client could result in misleading measurements. Instead, it is a good practice to choose neutral third-party clients to generate load to avoid skewing results.
  3. Server. Frequently, to achieve optimal performance on the server side, it is necessary to consult vendor documentation on how to have the server environment take advantage of the hardware resources available. Some of these settings are vendor proprietary, and others are common to the runtime chosen. For example, in J2EE environments, configuration of memory allocation, garbage collection settings, and thread pool sizes can significantly impact throughput (an illustrative set of JVM options follows this list). Another common approach in J2EE environments (specific to each server) is running multiple Java virtual machines to take better advantage of hardware resources.
  4. Memory and CPU. Some client and runtime environments may be more sensitive to memory and CPU requirements, requiring more or less to generate or process Web services messages. If the client or server is bound by either of these constraints, accurate measurement of throughput may not be possible.
  5. Message size and complexity. It is important to use representative message structures when testing Web services. Clearly the larger and more complex the message, the heavier the XML parsing requirement will be on the Web services runtime environment. Many Web services runtimes have different performance characteristics depending on message size and may have specific tuning capabilities that enable them to process messages differently based on the size of the messages.
  6. Asynchronous services versus synchronous services. Most early Web services infrastructures focused on synchronous request/response implementations and one-way messaging. However, with the recent emergence of Business Process Execution Language for Web services (BPEL4WS), many organizations are building infrastructures that contain a significant asynchronous component. Asynchronous services can typically accept larger numbers of inbound requests, but mapping this to a throughput measure can be misleading when comparing against synchronous numbers because of the deferred processing inherent in asynchrony.
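
As an illustration of item 3, the fragment below shows the kind of standard JVM options that are commonly adjusted for a J2EE Web services deployment; the values are placeholders, and thread pool sizing is normally done through the vendor's server configuration rather than on the JVM command line.

    # Illustrative JVM options only; consult the vendor's tuning documentation for real values.
    #   -server               use the optimizing HotSpot compiler
    #   -Xms1024m -Xmx1024m   fix the heap size to avoid resize pauses under load
    #   -XX:+UseParallelGC    throughput-oriented collector for multi-CPU machines
    #   -verbose:gc           log collections to validate garbage collection behavior
    java -server -Xms1024m -Xmx1024m -XX:+UseParallelGC -verbose:gc <vendor server start class>
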
These are some of the basic variables to keep in mind when considering basic performance testing of a Web services environment. However, sometimes the performance requirements overwhelm the ability of the Web services runtime to deal with SOAP messages. In these cases many architects will investigate messaging alternatives that are aligned with a service-oriented architectural approach.

One popular approach, available on Java platforms and aligned with Web services, is an Apache open source framework called the Web Services Invocation Framework (WSIF). Apache WSIF enables developers to describe their underlying application interfaces using WSDL, yet client invocations use native protocols rather than SOAP over HTTP. Classic examples include calling EJBs using native RMI protocols or vendor-specific optimizations such as using WSIF to natively call database stored procedures.

In addition to interoperability and performance, Web services must be thought about from the classic reliability, availability, and scalability (RAS) characteristics needed in any large-scale deployment infrastructure.

Quality of Service
Web services typically take advantage of the same quality-of-service capabilities, such as clustering, reliable messaging, and security, that server vendors provide for classical multitier applications.

Clustering
For scalability, developers are typically looking for a server environment that enables them to maintain consistent throughput and latency as the concurrency of Web services clients varies. Scalable architectures enable the addition of hardware resources, such as machines, CPUs, and memory, both vertically (within a single machine) and horizontally (adding more machines to a cluster). Moreover, beyond manual procedures for handling increased demand, modern server environments can self-adjust, taking advantage of additional hardware resources on demand.

Remember, most Web services environments are either stateless or, if they are long-running such as business processes, their state is persisted in back-end databases. Both of these scenarios are supported by the classical cluster architectures available from application server vendors. For Web services running over the HTTP protocol, clustering solutions should span multiple tiers - from front-end caching and HTTP and J2EE servers to back-end databases.

Reliable Messaging
Unique to the reliability of Web services is the infrastructure needed to guarantee delivery of a message to an end point. Getting a message to a service end point is relatively easy. However, when the back-end systems being exposed through Web services interfaces are not available, approaches using asynchronous technologies need to be evaluated.

A common approach for achieving reliable messaging is to receive SOAP messages over HTTP from external business partners for maximum interoperability and then move the SOAP messages over a reliable J2EE infrastructure backbone such as JMS. A simple protocol mediation layer, HTTP to JMS, can add a significant degree of reliability to message propagation within internal architectures.
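
A minimal sketch of such an HTTP-to-JMS mediation layer appears below as a servlet that accepts the inbound SOAP message and enqueues it for reliable internal delivery; the JNDI names and queue are placeholders, and a production version would also handle authentication, faults, and acknowledgment back to the caller.

    import java.io.BufferedReader;
    import javax.jms.Queue;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import javax.jms.QueueSender;
    import javax.jms.QueueSession;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical mediation servlet: receives SOAP over HTTP from external partners and
    // forwards the raw XML onto a JMS queue for reliable processing inside the firewall.
    public class SoapToJmsGateway extends HttpServlet {

        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws java.io.IOException {
            // Read the inbound SOAP envelope as text.
            StringBuffer soap = new StringBuffer();
            BufferedReader reader = request.getReader();
            String line;
            while ((line = reader.readLine()) != null) {
                soap.append(line).append('\n');
            }

            QueueConnection connection = null;
            try {
                // The JNDI names are placeholders for the deployment's actual resources.
                InitialContext ctx = new InitialContext();
                QueueConnectionFactory factory =
                    (QueueConnectionFactory) ctx.lookup("jms/GatewayConnectionFactory");
                Queue queue = (Queue) ctx.lookup("jms/InboundSoapQueue");

                connection = factory.createQueueConnection();
                QueueSession session =
                    connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueSender sender = session.createSender(queue);
                TextMessage message = session.createTextMessage(soap.toString());
                sender.send(message);   // the JMS provider persists and redelivers as needed

                response.setStatus(HttpServletResponse.SC_ACCEPTED);
            } catch (Exception e) {
                response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
                                   "Unable to enqueue message");
            } finally {
                try { if (connection != null) connection.close(); } catch (Exception ignore) { }
            }
        }
    }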

More recently, with the arrival of reliable messaging standards that are protocol-independent, including WS-Reliability and WS-ReliableMessaging, organizations are looking at new reliability infrastructures. These are typically built into the Web services runtime infrastructures of platforms and ensure that messages arrive exactly once (often referred to as guaranteed message delivery).

The main issues with using the standards-based approach to reliable messaging are the relative immaturity of implementations, interoperability concerns, and, of course, the unavailability of such technology on older architectures. Although any serious implementation of reliability will gracefully degrade to work with nonreliability-enabled clients, architects who need reliability in their infrastructure often choose variations of the following strategies:

  1. Work in a homogeneous environment in which both ends are reliable messaging-enabled from the same vendor.
  2. Work with vendor implementations in which bilateral vendor interoperability testing has been done ahead of standards-based interoperability.
  3. Offer different levels of reliable messaging: for nonreliable clients, simply process the messages; offer higher levels of reliable messaging to clients that meet the requirements of items 1 and 2.
  4. Design manual logging and log reconciliation of input and output messages.
  5. Develop proprietary agreements between the client and server environments. Approaches here include schemes that rely on message exchange patterns or proprietary mechanisms within message bodies to determine whether messages really did make it to their end point.
Options 1-3 enable reliability to be tactically introduced based on standards. Options 4 and 5 offer solutions independent of standards and interoperability but may set up longer-term upgrade requirements as reliability infrastructures standardize.

Secure Messaging
As with reliable messaging, security of message exchanges has reached the early stages of maturity with the industry-endorsed release of WS-Security in April 2004. WS-Security defines standardized authentication tokens within messages, digital signatures for messages, and message-level encryption for Web services.

This cleanly separates the security of Web services messaging from the transport protocol layer, providing much more flexibility than the more commonly used HTTP transport security such as SSL/TLS. Much like reliability, the biggest issues for WS-Security are the unevenness of implementations across vendors, interoperability concerns, and availability across older infrastructures.

Approaches for dealing with standards-based message security mirror what architects consider for reliable messaging.

If interoperability is not achievable across WS-Security implementations (e.g., via homogeneous clients and servers or bilateral vendor interoperability), architects will work to the lowest common denominator to achieve secure messaging. Two of the most common approaches are as follows:

1.  Web-based security. Because most Web services run over HTTP, standard Web technologies such as SSL/TLS and basic/digest authentication work equally well. These approaches can be used for authentication, integrity, and encryption of messages "on the wire." Although not Web services-aware, these approaches tend to be supported on both old and new infrastructures, ensuring close to maximum interoperability.
2.  Passing security tokens inside messages that can be used to verify authentication and message integrity. Rather than conforming to WS-Security standards, many organizations engaged in Web services transactions define an encrypted security token in a normal SOAP message for which a key or algorithm for generating or parsing such tokens is provided via an offline secure exchange. Public examples of this include Amazon's public Web services, for which a user key is required before use.
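
As a rough sketch of the second approach, the following SAAJ fragment adds a pre-agreed, encrypted token to a SOAP header before the message is sent; the header element name, namespace, and token value are placeholders for whatever the offline agreement specifies.

    import javax.xml.soap.MessageFactory;
    import javax.xml.soap.Name;
    import javax.xml.soap.SOAPEnvelope;
    import javax.xml.soap.SOAPHeaderElement;
    import javax.xml.soap.SOAPMessage;

    // Hypothetical example of embedding an offline-agreed security token in a SOAP header
    // rather than using the standard WS-Security headers.
    public class TokenHeaderExample {

        public static void addToken(SOAPMessage message, String encryptedToken) throws Exception {
            SOAPEnvelope envelope = message.getSOAPPart().getEnvelope();
            // Element name and namespace are placeholders agreed on with the service provider.
            Name tokenName = envelope.createName("AccessToken", "sec", "urn:example:security");
            SOAPHeaderElement tokenHeader = envelope.getHeader().addHeaderElement(tokenName);
            tokenHeader.addTextNode(encryptedToken);
            message.saveChanges();
        }

        public static void main(String[] args) throws Exception {
            SOAPMessage message = MessageFactory.newInstance().createMessage();
            // The token would be generated with the key or algorithm exchanged offline.
            addToken(message, "0f3a9c...placeholder-token");
            message.writeTo(System.out);
        }
    }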

Ultimately, as is obvious from the variety of approaches, how developers tackle message-level quality of service depends on the sophistication of an organization's internal architecture as well as the capabilities of the expected Web services client environment.

Once these issues are addressed, there is a natural tendency to want to establish some sort of governance over those Web services.

Manageability
As a Web services application is deployed, classic management issues begin to appear. These issues can include monitoring and diagnostics, service-level agreements, policy management, centralized auditing and logging, and consolidation around a single identity management infrastructure. What is often done in this arena is to reuse the constructs that address these management concerns in traditional Web and multitier architectures. However, without careful adaptation, organizations can find that these constructs don't fit directly into Web services.

Take auditing and logging, for example. Unlike traditional Web traffic analysis, Web services logging and auditing is typically concerned with confirming how the individual messages, or specific content in the messages, correspond to what occurred in the back-end business systems. Correlating across these two often distinct tiers is quite different from the simple log analysis typical of Web content.

This simple example touches on the area of monitoring, diagnostics, and root-cause analysis that is critical for large-scale Web services infrastructures. The solutions in this area are mixed: some traditional management frameworks are being extended to cover Web services, new vendors are emerging in the real-time, event-driven business activity monitoring space, and traditional business intelligence tools are being extended to report against message stores.

Similar analysis must be done for service-level agreements (SLAs) in the area of quality of service. Take, for example, WS-Security and WS-Reliability/WS-ReliableMessaging. Beyond simply implementing these standards, the longer-term vision is to enable SLAs to be exchanged in an automated fashion using emerging specifications such as WS-Policy. Such an exchange enables clients to programmatically and symmetrically match the quality-of-service capabilities supported by the server. Practically, however, most vendors provide different approaches for doing this today, and the most common approach among organizations requiring this capability is simple, offline, noncomputerized agreements.

A common approach for normalizing the monitoring and diagnostics issues and enabling the centralization of control over Web services infrastructures is the concept of a gateway or intermediary through which all Web services traffic is routed. This central enforcement point provides both a consolidation and separation of management concerns from the back-end infrastructure. It also enables consistent application of quality of service policy as well as a convenient data capture point for analysis of Web services data flow.

The trade-off that architects often have to make with gateway approaches is the centralization of management versus the potential performance overhead of such an approach. Many gateway approaches deal with this performance concern by providing both an intermediary approach and an agent approach that works in conjunction with a centralized monitoring and diagnostics infrastructure.

Conclusion
This article focused on some of the issues that frequently confront architects when they attempt to deploy a large-scale interoperable Web services infrastructure. Although by no means a comprehensive enumeration of the issues and solutions, it examined some of the more widely known Web services concerns in interoperability, performance, quality of service, and management. Hopefully, you have gained an understanding of the key issues for your Web services deployment infrastructure.

More Stories By Mike Lehmann

Mike Lehmann is a senior principal product manager with the Oracle Application Server 10g team at Oracle Corporation. In this role he is focused primarily on building out the Oracle Application Server Web services infrastructure.
