Deploying Large-Scale Interoperable Web Services Infrastructures

Issues that architects will frequently have to confront

Web services have moved beyond the experimental stage in many organizations and are now becoming the foundation of numerous service-oriented architectures. Thus, architects are concerned about best practices for building, deploying, and maintaining a large-scale interoperable Web services infrastructure.

In one sense, Web services applications are like other applications. They represent a code base developed by a team of engineers. This code base needs to go through a methodological development life cycle, followed by testing and quality assurance, before it is finally released.

Frequently, however, Web services are not new applications at all, but rather carefully crafted message-based interface layers on top of existing systems and applications. New applications may be composed of Web services, or the services may be orchestrated into new business processes.

Given this evolutionary approach to application design and deployment, Web services and applications and business processes using Web services have a different set of provisioning and management concerns that enterprise architects must consider. This article provides a high-level assessment of four key areas that need to be considered.

Interoperability
The underlying value proposition of Web services is operational efficiency, provided by consistent, standards-based mechanisms that link together heterogeneous business applications. Such systems typically originate on a variety of legacy, host-based, and modern architectures, each with different Web service-enabling capabilities.

Although interoperability best practices for Web services are becoming better understood for lower-level protocols like SOAP and WSDL, issues at the more recently emerged quality-of-service level for security, reliability, and policy are not as well understood. As such, service enabling these applications in a consistent and maximally interoperable fashion is a key concern for enterprise architects.

Performance
Web services are fundamentally a message-based approach to integration and focus on moving XML-based SOAP documents on both public and private networks. Such applications have very different performance characteristics than traditional multitier transactional systems that use binary protocols. Architects must take into account new throughput, latency, network, and client concerns layered on top of existing applications that have frequently been tuned for different usage characteristics.

Quality of Service
Coinciding with the emergence of Web applications was an increase in the number of users for back-end infrastructures. Web services add a new category of client: the programmatic client that has the potential to increase the number of "users" and messages flowing through the environment by another order of magnitude. This new usage model can require new approaches to reliability, availability, and scalability. Furthermore, message-based systems bring new quality-of-service concerns regarding reliable messaging and security of messages coming into the infrastructure.

Manageability
Web services are typically used for application-to-application communication rather than for end user-facing applications. As such, the visibility of a Web services infrastructure to the end user and even the operational staff is less apparent because it is frequently the hidden glue that ties operational systems together. It is critical that architects design with management visibility in mind, taking into account the uniqueness of Web services from a monitoring, diagnostics, service life cycle-management, and service-level agreement perspective.

Let's take a closer look at each of these topics.

Interoperability
A number of key questions regarding interoperability can drive an architect's strategy. Are your Web services internally focused so that you will have control over the clients and the tools that will be used with them? Are your Web services external facing and subject to arbitrary clients and tools? How sophisticated are your users? Are they integration architects using Web services development tools, or are they end users using Web service-enabled desktop and portal productivity software? These are basic questions, but they direct how you might tackle interoperability.

You can begin by following best interoperability practices for developing Web services.

Development
One way to tackle the issue is by publishing Web services bottom-up (i.e., taking existing applications and simply wrapping programmatic APIs as Web services). However, the top-down approach is more interoperable (i.e., modeling the message using XML Schema and designing the interface first in WSDL so that the public contract and message definition work for both the client and server implementation).

The top-down approach is more interoperable for several reasons. For one thing, bottom-up approaches not only tightly couple consumers to existing APIs, but they often pollute the WSDL contract with noninteroperable, language-specific interface and message artifacts. A simple Java example that can be difficult to interoperate with from .NET is the frequent use of collections to move data structures between application tiers. From .NET, a common example is ADO.NET data sets, which are specific to Microsoft's platform. Avoiding language-specific types and starting with interface and message definitions can lead to a much higher likelihood of interoperability.
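
As a rough illustration of the difference, compare a Java service interface built bottom-up around collections with one designed around schema-friendly types. The class and method names here are hypothetical, chosen only to make the contrast concrete:

    // Bottom-up: java.util.Collection has no natural XML Schema mapping,
    // so the generated WSDL ends up with a vague or toolkit-specific type
    // that non-Java clients (for example, .NET) struggle to consume.
    interface OrderServiceBad {
        java.util.Collection getOpenOrders(String customerId);
    }

    // Schema-friendly: arrays of simple beans map cleanly to a sequence of
    // complexType elements in XML Schema, so any WS-I-conscious toolkit can
    // generate a usable client proxy.
    interface OrderService {
        Order[] getOpenOrders(String customerId);
    }

    // A simple bean whose properties are all schema-friendly types.
    class Order {
        private String orderId;
        private double amount;
        public String getOrderId() { return orderId; }
        public void setOrderId(String id) { this.orderId = id; }
        public double getAmount() { return amount; }
        public void setAmount(double a) { this.amount = a; }
    }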

Third-party testing tools can validate best practices for interoperability. One of the most recognized groups focused on this is the Web Services Interoperability Organization (WS-I), which includes companies such as Oracle, IBM, Microsoft, and others. WS-I has created a set of best practices called WS-I Basic Profile 1.1 that describes how best to craft WSDL and SOAP messages so that Web services conforming to these rules have a maximum chance of achieving interoperability. These rules have been codified into a set of testing tools that can be run against Web service WSDLs and SOAP message exchanges to determine if those practices have been followed.

Testing
Conformance to WS-I does not necessarily guarantee interoperability. Rather, it is an indicator that your Web services are highly likely to be interoperable. Some older Web services infrastructures may not support the message styles required by WS-I: document/literal and rpc/literal. And some service providers are simply unable to upgrade their infrastructures to generate WS-I compliant services.

Practically speaking, testing with the actual target Web services clients is the only way to prove real interoperability. This enables architects to validate their own internally developed Web services as well as to validate that their preferred toolkits work with non-WS-I compliant Web services. An analogy can be made to Web application development. Just as many organizations test their Web applications with multiple browsers to ensure HTML compatibility, it is frequently incumbent on the Web services provider to try multiple client environments with their Web services end points.
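
As one concrete illustration, a simple interoperability smoke test can be written against the standard SAAJ API, which acts as a bare-bones client independent of the toolkit used to build the service. The endpoint URL, namespace, and operation name below are placeholders, not part of any particular service:

    import javax.xml.soap.*;

    public class InteropSmokeTest {
        public static void main(String[] args) throws Exception {
            // Build a minimal request envelope by hand.
            SOAPMessage request = MessageFactory.newInstance().createMessage();
            SOAPBody body = request.getSOAPBody();
            // Placeholder operation and namespace; substitute your own contract.
            body.addBodyElement(SOAPFactory.newInstance()
                    .createName("getStatus", "svc", "http://example.com/service"));
            request.saveChanges();

            // Post the message directly to the endpoint and inspect the result.
            SOAPConnection conn = SOAPConnectionFactory.newInstance().createConnection();
            SOAPMessage response = conn.call(request, "http://localhost:8080/myService");
            conn.close();

            if (response.getSOAPBody().hasFault()) {
                System.out.println("Fault: "
                        + response.getSOAPBody().getFault().getFaultString());
            } else {
                System.out.println("Service responded without a fault.");
            }
        }
    }

A fault or a malformed response from such a neutral client is often the first hint that a contract leans on toolkit-specific behavior.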

The degree to which the interoperability testing has to be done depends on how you answer the initial questions regarding usage. Externally available Web services have a higher testing bar associated with them due to the unanticipated nature of clients. Internally available Web services may have a lower testing bar required if the client environment is more tightly managed and homogenous.

A very common practice that has emerged is for Web services providers to offer sample clients in popular programming languages: Java, C#, Visual Basic, Perl, and PHP. Examples of widely used services taking this approach include Amazon, Google, and eBay. This approach may seem to indicate that the promise of the interoperability of Web services has yet to be reached. However, it should be seen simply as a sign of a maturing industry as architects take short-term pragmatic steps toward ensuring interoperability and, as a by-product, usability.

In addition, a Web services provider may make a conscious decision to create a poorly interoperable implementation. If such a situation arises, the designer should provide some workarounds for service consumers.

Workarounds
Just as database architects and middle-tier object modelers often relax design constraints for application-specific reasons, Web services providers may consciously design service interfaces that are not maximally interoperable. Some designers prefer tight coupling to back-end systems for performance reasons. Others really want nonschema-based object models represented in the message exchanges, or moved "over the wire," for productivity reasons. Sometimes using SOAP over HTTP just does not meet the performance requirements of the target application.

In these cases, it is typically incumbent on the Web services provider to offer recommendations to clients on how to use these services. Common approaches beyond providing sample working clients include the following:

  1. Working in a homogenous client/server environment (Web services toolkits invariably are symmetrically interoperable with themselves)
  2. Providing custom serializers for proprietary types that can be plugged into third-party toolkits
  3. Describing how to use handlers or interceptor architectures provided by most toolkits to transform messages into a usable form at the client or end point (a sketch of this approach follows this list)
  4. Providing code samples of how to parse the raw XML SOAP message
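
For instance, item 3 above can be implemented with the handler mechanism exposed by JAX-RPC-based toolkits. The following is a minimal, hypothetical handler that would normalize an inbound message before the local toolkit deserializes it; the actual transformation is application specific and only hinted at in the comments:

    import javax.xml.namespace.QName;
    import javax.xml.rpc.handler.GenericHandler;
    import javax.xml.rpc.handler.MessageContext;
    import javax.xml.rpc.handler.soap.SOAPMessageContext;
    import javax.xml.soap.SOAPMessage;

    public class NormalizingHandler extends GenericHandler {

        // This handler does not declare any headers it must understand.
        public QName[] getHeaders() {
            return new QName[0];
        }

        // Invoked for each inbound request; rewrite the message into a form
        // the local toolkit can consume, e.g., strip a proprietary wrapper
        // element or rename a namespace before deserialization.
        public boolean handleRequest(MessageContext ctx) {
            SOAPMessage msg = ((SOAPMessageContext) ctx).getMessage();
            System.out.println("Intercepted message with "
                    + msg.countAttachments() + " attachments");
            return true; // continue processing the handler chain
        }
    }
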
Often Web services toolkit providers, and sometimes platform providers themselves, will have completed specific integration work beyond the standards with other platform providers (e.g., Oracle, Microsoft, and IBM). This is to enable easier integration paths for Web services providers. A simple example of this approach is the widespread use of document-literal wrapped Web services, a convention popularized by Microsoft for modeling RPC-style calls as document-literal services, which nearly all Web services toolkits now support.

Beyond interoperability concerns, moving to an XML-based integration stack defined by Web services brings performance characteristics to mind.

Performance
In a Web services application, message sizes typically increase substantially from traditional binary protocols. Additionally, a new layer of marshaling, unmarshaling, parsing, and translating XML messages to and from the underlying protocols is introduced.

Therefore, an important part of any deployment architecture for Web services must include a comprehensive plan to understand the performance characteristics of the service end points and the clients using those service end points. Typically the performance analysis needs to focus on two areas: throughput and latency.

Throughput
Throughput is the volume of Web services requests, measured in requests or bytes, handled in a given time period. Throughput is measured only on the server side and does not include the time it takes to send or receive the message.

Latency
Latency is the round-trip time between sending a request and receiving a response. Latency is often subject to issues external to the server, such as network bandwidth and, in a heterogeneous environment, characteristics of the client environments.

The first question is, "What is the expected message size that will be passed through individual Web services end points?" Once the message size is determined, it is often a good practice to start with what might be termed a "null processing" test. The goal is to load up the deployment environment with concurrent requests with zero application processing on the server side to determine what overhead the Web services runtime itself puts on the environment. This allows you to ascertain the overhead of the Web services infrastructure independent of its interaction with the underlying systems.
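
A minimal version of such a null-processing test can be scripted with nothing more than the JDK, posting a fixed, representative SOAP payload from a pool of threads and recording per-request latency and aggregate throughput. The endpoint URL and payload below are placeholders for your own environment:

    import java.io.*;
    import java.net.*;
    import java.util.concurrent.atomic.AtomicLong;

    public class NullProcessingLoadTest {
        static final String ENDPOINT = "http://localhost:8080/echoService"; // placeholder
        static final String PAYLOAD =
            "<?xml version=\"1.0\"?><soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
            + "<soap:Body><ping xmlns=\"http://example.com/service\"/></soap:Body></soap:Envelope>";

        public static void main(String[] args) throws Exception {
            final int threads = 20;
            final int requestsPerThread = 500;
            final AtomicLong totalLatencyMs = new AtomicLong();
            long start = System.currentTimeMillis();

            Thread[] workers = new Thread[threads];
            for (int i = 0; i < threads; i++) {
                workers[i] = new Thread(new Runnable() {
                    public void run() {
                        for (int r = 0; r < requestsPerThread; r++) {
                            long t0 = System.currentTimeMillis();
                            try {
                                post(ENDPOINT, PAYLOAD);
                            } catch (IOException e) {
                                e.printStackTrace();
                            }
                            totalLatencyMs.addAndGet(System.currentTimeMillis() - t0);
                        }
                    }
                });
                workers[i].start();
            }
            for (Thread t : workers) t.join();

            long elapsed = System.currentTimeMillis() - start;
            long totalRequests = (long) threads * requestsPerThread;
            System.out.println("Throughput: " + (totalRequests * 1000.0 / elapsed) + " req/sec");
            System.out.println("Mean latency: "
                    + (totalLatencyMs.get() / (double) totalRequests) + " ms");
        }

        static void post(String endpoint, String soap) throws IOException {
            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            conn.setRequestProperty("SOAPAction", "\"\"");
            conn.getOutputStream().write(soap.getBytes("UTF-8"));
            // Drain the response so the connection can be reused.
            InputStream in = conn.getInputStream();
            byte[] buf = new byte[4096];
            while (in.read(buf) != -1) { /* discard */ }
            in.close();
        }
    }

Running the same harness first against a no-op service and then against the real implementation separates the overhead of the Web services runtime from the cost of the back-end processing.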

Going through this exercise can reveal a number of issues within a testing and production environment, including the following.

  1. Network. Often when testing the performance of Web services, network bandwidth can be the bottleneck. Network issues can impact both latency and throughput.
  2. Client. Many vendors will optimize their Web services client to work best with their Web services runtime. However, using the Web services runtime-provided client could result in misleading measurements. Instead, it is a good practice to choose neutral third-party clients to generate load to avoid skewing results.
  3. Server. Frequently, to achieve optimal performance on the server side, it is necessary to consult vendor documentation on how to have the server environment take advantage of the hardware resources available. Some of these settings are vendor proprietary, and others are common to the runtime chosen. For example, in J2EE environments, memory allocation, garbage collection settings, and thread pool parameters can significantly impact throughput. Another common approach in J2EE environments (specific to each server) is running multiple Java virtual machines to take fuller advantage of hardware resources.
  4. Memory and CPU. Some client and runtime environments may be more sensitive to memory and CPU requirements, requiring more or less to generate or process Web services messages. If the client or server is bound by either of these constraints, accurate measurement of throughput may not be possible.
  5. Message size and complexity. It is important to use representative message structures when testing Web services. Clearly the larger and more complex the message, the heavier the XML parsing requirement will be on the Web services runtime environment. Many Web services runtimes have different performance characteristics depending on message size and may have specific tuning capabilities that enable them to process messages differently based on the size of the messages.
  6. Asynchronous services versus synchronous services. Most early Web services infrastructures focused on synchronous request/response implementations and one-way messaging. However, with the recent emergence of Business Process Execution Language for Web services (BPEL4WS), many organizations are building infrastructures that contain a significant asynchronous component. Asynchronous services can typically accept larger numbers of inbound requests, but mapping this number to a throughput measure can be misleading when compared with synchronous numbers because of the delayed nature of asynchronous processing.
These are some of the basic variables to keep in mind when considering basic performance testing of a Web services environment. However, sometimes the performance requirements overwhelm the ability of the Web services runtime to deal with SOAP messages. In these cases many architects will investigate messaging alternatives that are aligned with a service-oriented architectural approach.

One popular approach, available on Java platforms and aligned with Web services, is an Apache open source framework called Web Services Invocation Framework (WSIF). Apache WSIF enables developers to describe their underlying application interfaces using WSDL, and yet client invocations use native protocols rather than SOAP over HTTP. Classic examples of this include calling EJBs using native RMI protocols or vendor-specific optimizations such as using WSIF to natively call database-stored procedures.

In addition to interoperability and performance, Web services must be thought about from the classic reliability, availability, and scalability (RAS) characteristics needed in any large-scale deployment infrastructure.

Quality of Service
Web services typically take advantage of the same quality-of-service capabilities, such as clustering, reliable messaging, and security, that server vendors provide for classical multitier applications.

Clustering
For scalability, developers are typically looking for a server environment that enables them to maintain consistent throughput and latency as the concurrency of Web services clients varies. Scalable architectures enable the addition of hardware resources both vertically (more CPUs and memory in a single machine) and horizontally (more machines in a cluster). Moreover, beyond the manual procedures for handling increased demand, modern server environments are self-adjusting, taking advantage of additional hardware resources on demand.

Remember, most Web services environments are either stateless or, if they are long-running such as business processes, their state is persisted in back-end databases. Both of these scenarios are supported by the classical cluster architectures available from application server vendors. For Web services running over the HTTP protocol, clustering solutions should span multiple tiers - from front-end caching and HTTP and J2EE servers to back-end databases.

Reliable Messaging
Unique to the reliability of Web services is the infrastructure needed to guarantee delivery of a message to an end point. It can be relatively easy to get a message to a new service end point. However, when the back-end systems being exposed through Web services interfaces are not available, approaches using asynchronous technologies need to be evaluated.

A common approach for achieving reliable messaging is to receive SOAP messages over HTTP from external business partners for maximum interoperability and then move the SOAP messages over a reliable J2EE infrastructure backbone such as JMS. A simple protocol mediation layer, HTTP to JMS, can add a significant degree of reliability to message propagation within internal architectures.
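
A minimal sketch of such a mediation layer, assuming a servlet container and JNDI-registered JMS resources (the JNDI names and queue are placeholders), might look like the following:

    import java.io.*;
    import javax.jms.*;
    import javax.naming.InitialContext;
    import javax.servlet.http.*;

    // Accepts SOAP-over-HTTP posts from external partners and hands the raw
    // message to a JMS queue, from which internal consumers process it reliably.
    public class SoapToJmsBridgeServlet extends HttpServlet {

        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Read the entire SOAP payload from the HTTP request.
            BufferedReader reader = req.getReader();
            StringBuffer soap = new StringBuffer();
            String line;
            while ((line = reader.readLine()) != null) {
                soap.append(line).append('\n');
            }

            try {
                // Look up the connection factory and queue; names are placeholders.
                InitialContext ctx = new InitialContext();
                QueueConnectionFactory qcf =
                    (QueueConnectionFactory) ctx.lookup("jms/ConnectionFactory");
                Queue queue = (Queue) ctx.lookup("jms/InboundSoapQueue");

                QueueConnection conn = qcf.createQueueConnection();
                QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueSender sender = session.createSender(queue);

                // Persistent delivery so the message survives broker restarts.
                TextMessage msg = session.createTextMessage(soap.toString());
                sender.send(msg, DeliveryMode.PERSISTENT, Message.DEFAULT_PRIORITY, 0);
                conn.close();

                // Acknowledge receipt to the external partner immediately.
                resp.setStatus(HttpServletResponse.SC_ACCEPTED);
            } catch (Exception e) {
                resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
            }
        }
    }

Once the message is on the queue, standard JMS facilities such as persistent delivery and transacted consumption provide the reliability that the raw HTTP hop by itself cannot.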

More recently, with the arrival of reliable messaging standards that are protocol-independent, including WS-Reliability and WS-ReliableMessaging, organizations are looking at new reliability infrastructures. These are typically built into the Web services runtime infrastructures of platforms and ensure that messages arrive exactly once (often referred to as guaranteed message delivery).

The main issues with using the standards-based approach to reliable messaging are the relative immaturity of implementations, interoperability concerns, and, of course, the unavailability of such technology on older architectures. Although any serious implementation of reliability will gracefully degrade to work with nonreliability-enabled clients, architects who need reliability in their infrastructure often choose variations of the following strategies:

  1. Work in a homogeneous environment in which both ends are reliable messaging-enabled from the same vendor.
  2. Work with vendor implementations in which bilateral vendor interoperability testing has been done ahead of standards-based interoperability.
  3. Offer different levels of reliable messaging. For nonreliable clients, process the messages but offer higher levels of reliable messaging with clients that meet the requirements of items 1 and 2.
  4. Design a manual logging and log reconciliation of input and output messages.
  5. Develop proprietary agreements between the client and server environments. Approaches here include schemes that rely on message exchange patterns or proprietary mechanisms within message bodies to determine whether messages really did make it to their end point.
Options 1-3 enable reliability to be tactically introduced based on standards. Options 4 and 5 offer solutions independent of standards and interoperability but may set up longer-term upgrade requirements as reliability infrastructures standardize.

Secure Messaging
As with reliable messaging, security of message exchanges has reached the early stages of maturity with the industry-endorsed release of WS-Security in April 2004. WS-Security defines standardized authentication tokens within messages, digital signatures for messages, and message-level encryption for Web services.

This very cleanly separates the security of Web services messaging from the transport protocol layer, providing much more flexibility than the more commonly used HTTP transport security such as SSL/TLS. Much like reliability, the biggest issues for WS-Security are the unevenness of implementations across vendors, interoperability concerns, and availability across older infrastructures.

Approaches for dealing with standards-based message security mirror what architects consider for reliable messaging.

If interoperability is not achievable across WS-Security implementations (e.g., via homogeneous clients and servers or bilateral vendor interoperability), architects will work to the lowest common denominator to achieve secure messaging. Two of the most common approaches are as follows:

1.  Web-based security. Because most Web services run over HTTP, standard Web technologies such as SSL/TLS and basic/digest authentication work equally well. These approaches can be used for authentication, integrity, and encryption of messages "on the wire." Although not Web services-aware, these approaches tend to be supported on both old and new infrastructures, ensuring close to maximum interoperability.
2.  Passing security tokens inside messages that can be used to verify authentication and message integrity. Rather than conforming to WS-Security standards, many organizations engaged in Web services transactions define an encrypted security token in a normal SOAP message for which a key or algorithm for generating or parsing such tokens is provided via an offline secure exchange. Public examples of this include Amazon's public Web services, for which a user key is required before use.
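
A minimal sketch of the second approach follows. It assumes a shared secret exchanged offline and a custom header whose name and namespace are purely illustrative, and it signs the caller identity with an HMAC rather than encrypting it, which is a simplified variant of the same idea:

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import javax.xml.soap.*;
    import java.util.Base64;

    public class TokenStamper {

        // Adds an HMAC-based token to a custom SOAP header so the provider can
        // verify the caller; the shared key is distributed through an offline,
        // out-of-band exchange. Header and namespace names are illustrative.
        public static void stamp(SOAPMessage message, String callerId, byte[] sharedKey)
                throws Exception {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(sharedKey, "HmacSHA1"));
            String token = Base64.getEncoder()
                    .encodeToString(mac.doFinal(callerId.getBytes("UTF-8")));

            SOAPEnvelope env = message.getSOAPPart().getEnvelope();
            SOAPHeader header = env.getHeader();
            if (header == null) {
                header = env.addHeader();
            }
            SOAPHeaderElement tokenHeader = header.addHeaderElement(
                    env.createName("AccessToken", "ex", "http://example.com/security"));
            tokenHeader.addAttribute(env.createName("callerId"), callerId);
            tokenHeader.addTextNode(token);
            message.saveChanges();
        }
    }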

Ultimately, as is obvious from the variety of approaches, how developers tackle message-level quality of service depends on the sophistication of an organization's internal architecture as well as the capabilities of the expected Web services client environment.

Once these issues are addressed, there is a natural tendency to want to establish some sort of governance over those Web services.

Manageability
As a Web services application is deployed, classic management issues begin to appear. These issues can include monitoring and diagnostics, service-level agreements, policy management, centralized auditing and logging, and consolidating around a single identity management infrastructure. Organizations often reuse the constructs that address these management concerns in traditional Web and multitier architectures. However, without careful adaptation, they can find that these constructs do not map directly onto Web services.

Take auditing and logging, for example. Unlike traditional Web traffic analysis, Web services logging and auditing is typically concerned with confirming how the individual messages or specific content in the messages correspond to what occurred in the back-end business systems. Correlating between these two often distinct tiers is much different than doing simple log analysis typical for Web content.

This simple example touches on the area of monitoring, diagnostics, and root-cause analysis that is critical for large-scale Web services infrastructures. The solutions in this area are mixed: some traditional management frameworks are being extended to cover it, new vendors are emerging in the real-time, event-driven business activity monitoring space, and traditional business intelligence tools are being extended to report against message stores.

Similar analysis must be done for service-level agreements (SLAs) in the area of quality of service. Take, for example, WS-Security and WS-Reliability/WS-ReliableMessaging. Beyond simply implementing these standards, the longer-term vision is to enable SLAs to be exchanged in an automated fashion using emerging specifications such as WS-Policy. Such an exchange enables clients to programmatically and symmetrically match the quality-of-service capabilities supported by the server. Practically, however, most vendors provide different approaches for doing this today. The most common approach adopted by organizations requiring this today is simple, offline, manually negotiated agreements.

A common approach for normalizing the monitoring and diagnostics issues and enabling the centralization of control over Web services infrastructures is the concept of a gateway or intermediary through which all Web services traffic is routed. This central enforcement point provides both a consolidation and separation of management concerns from the back-end infrastructure. It also enables consistent application of quality of service policy as well as a convenient data capture point for analysis of Web services data flow.

The trade-off that architects often have to make with gateway approaches is the centralization of management versus the potential performance overhead of such an approach. Many gateway approaches deal with this performance concern by providing both an intermediary approach and an agent approach that works in conjunction with a centralized monitoring and diagnostics infrastructure.

Conclusion
This article focused on some of the issues that frequently confront architects when they attempt to deploy a large-scale interoperable Web services infrastructure. Although by no means a comprehensive enumeration of the issues and solutions, it analyzed some of the more widely known Web services concerns in interoperability, performance, quality of service, and management. Hopefully, you have gained an understanding of the key issues for your Web services deployment infrastructure.

More Stories By Mike Lehmann

Mike Lehmann is a senior principal product manager with the Oracle Application Server 10g team at Oracle Corporation. In this role he is focused primarily on building out the Oracle Application Server Web services infrastructure.
