

Microservices Expo: Article

Deploying Large-Scale Interoperable Web Services Infrastructures

Issues that architects will frequently have to confront

Web services have moved beyond the experimental stage in many organizations and are now becoming the foundation of numerous service-oriented architectures. Thus, architects are concerned about best practices for building, deploying, and maintaining a large-scale interoperable Web services infrastructure.

In one sense, Web services applications are like other applications. They represent a code base developed by a team of engineers. This code base needs to go through a methodical development life cycle, followed by testing and quality assurance, before it is finally released.

Frequently, however, Web services are not new applications at all, but rather carefully crafted message-based interface layers on top of existing systems and applications. New applications may be composed of Web services, or the services may be orchestrated into new business processes.

Given this evolutionary approach to application design and deployment, Web services, and the applications and business processes that use them, have a different set of provisioning and management concerns that enterprise architects must take into account. This article provides a high-level assessment of four key areas that need to be considered.

The underlying value proposition of Web services is operational efficiency, provided by consistent, standards-based mechanisms that link together heterogeneous business applications. Such systems typically originate on a variety of legacy, host-based, and modern architectures, each with different Web service-enabling capabilities.

Although interoperability best practices for Web services are becoming better understood for lower-level protocols like SOAP and WSDL, issues at the more recently emerged quality-of-service level (security, reliability, and policy) are not as well understood. As such, service-enabling these applications in a consistent and maximally interoperable fashion is a key concern for enterprise architects.

Web services are fundamentally a message-based approach to integration and focus on moving XML-based SOAP documents on both public and private networks. Such applications have very different performance characteristics than traditional multitier transactional systems that use binary protocols. Architects must take into account new throughput, latency, network, and client concerns layered on top of existing applications that have frequently been tuned for different usage characteristics.

Quality of Service
Coinciding with the emergence of Web applications was an increase in the number of users for back-end infrastructures. Web services add a new category of client: the programmatic client that has the potential to increase the number of "users" and messages flowing through the environment by another order of magnitude. This new usage model can require new approaches to reliability, availability, and scalability. Furthermore, message-based systems bring new quality-of-service concerns regarding reliable messaging and security of messages coming into the infrastructure.

Web services are typically used for application-to-application communication rather than for end user-facing applications. As such, the visibility of a Web services infrastructure to the end user and even the operational staff is less apparent because it is frequently the hidden glue that ties operational systems together. It is critical that architects design with management visibility in mind, taking into account the uniqueness of Web services from a monitoring, diagnostics, service life cycle-management, and service-level agreement perspective.

Let's take a closer look at each of these topics.

A number of key questions regarding interoperability can drive an architect's strategy. Are your Web services internally focused so that you will have control over the clients and the tools that will be used with them? Are your Web services external facing and subject to arbitrary clients and tools? How sophisticated are your users? Are they integration architects using Web services development tools, or are they end users using Web service-enabled desktop and portal productivity software? These are basic questions, but they direct how you might tackle interoperability.

You can begin by following best interoperability practices for developing Web services.

One way to tackle the issue is to publish Web services bottom-up (i.e., taking existing applications and simply wrapping programmatic APIs as Web services). However, the top-down approach is more interoperable (i.e., modeling the message in XML Schema and designing the interface first in WSDL so that the public contract and message definition work for both the client and server implementations).

The top-down approach is more interoperable for several reasons. For one thing, bottom-up approaches not only tightly couple consumers to existing APIs, but they often pollute the WSDL contract with noninteroperable, language-specific interface and message artifacts. A common Java example that can be difficult to interoperate with from .NET is the use of collections to move data structures between application tiers. From .NET, a common example is ADO.NET data sets, which are specific to Microsoft's platform. Avoiding language-specific types and starting with interface and message definitions leads to a much higher likelihood of interoperability.
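As a concrete illustration (all types and names here are hypothetical), one way to keep a language-specific collection out of the wire contract is to convert it to a plain array at the service boundary, so the generated WSDL sees only schema-friendly types:

```java
import java.util.Arrays;
import java.util.List;

public class ContractExample {
    // A simple bean maps cleanly to an XML Schema complex type
    static class Customer {
        final String name;
        Customer(String name) { this.name = name; }
    }

    // Interoperable boundary: a java.util.List has no natural schema
    // equivalent, so convert to a plain array before it reaches the contract
    static Customer[] toWire(List<Customer> internal) {
        return internal.toArray(new Customer[0]);
    }

    public static void main(String[] args) {
        List<Customer> internal = Arrays.asList(new Customer("Acme"), new Customer("Globex"));
        Customer[] wire = toWire(internal);
        System.out.println(wire.length + " " + wire[0].name); // 2 Acme
    }
}
```

The internal tiers can keep using collections; only the published interface is restricted to types with clean XML Schema mappings.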

Third-party testing tools can validate best practices for interoperability. One of the most recognized groups focused on this is the Web Services Interoperability Organization (WS-I), which includes companies such as Oracle, IBM, Microsoft, and others. WS-I has created a set of best practices called WS-I Basic Profile 1.1 that describes how best to craft WSDL and SOAP messages so that Web services conforming to these rules have a maximum chance of achieving interoperability. These rules have been codified into a set of testing tools that can be run against Web service WSDLs and SOAP message exchanges to determine if those practices have been followed.

Conformance to WS-I does not guarantee interoperability; rather, it is an indicator that your Web services are highly likely to be interoperable. Some older Web services infrastructures may not support the default message styles required by WS-I (document/literal and rpc/literal), and some service providers are unable to upgrade their infrastructures to generate WS-I compliant services.

Practically speaking, testing with the actual target Web services clients is the only way to prove real interoperability. This enables architects to validate their own internally developed Web services as well as to validate that their preferred toolkits work with non-WS-I compliant Web services. An analogy can be made to Web application development. Just as many organizations test their Web applications with multiple browsers to ensure HTML compatibility, it is frequently incumbent on the Web services provider to try multiple client environments with their Web services end points.

The degree to which the interoperability testing has to be done depends on how you answer the initial questions regarding usage. Externally available Web services have a higher testing bar associated with them due to the unanticipated nature of clients. Internally available Web services may have a lower testing bar required if the client environment is more tightly managed and homogenous.

A very common practice that has emerged is for Web services providers to offer sample clients in popular programming languages: Java, C#, Visual Basic, Perl, and PHP. Examples of widely used services taking this approach include Amazon, Google, and eBay. This approach may seem to indicate that the promise of the interoperability of Web services has yet to be reached. However, it should be seen simply as a sign of a maturing industry as architects take short-term pragmatic steps toward ensuring interoperability and, as a by-product, usability.

In addition, a Web services provider may make a conscious decision to create a poorly interoperable implementation. If such a situation arises, the designer should provide some workarounds for service consumers.

Just as database architects and middle-tier object modelers often relax design constraints for application-specific reasons, Web services providers may consciously design service interfaces that are not maximally interoperable. Some designers prefer tight coupling to back-end systems for performance reasons. Others really want nonschema-based object models represented in the message exchanges, or moved "over the wire," for productivity reasons. Sometimes using SOAP over HTTP just does not meet the performance requirements of the target application.

In these cases, it is typically incumbent on the Web services provider to offer recommendations to clients on how to use these services. Common approaches beyond providing sample working clients include the following:

  1. Working in a homogenous client/server environment (Web services toolkits invariably are symmetrically interoperable with themselves)
  2. Providing custom serializers for proprietary types that can be plugged into third-party toolkits
  3. Describing how to use handlers or interceptor architectures provided by most toolkits to transform messages into a usable form at the client or end point
  4. Providing code samples of how to parse the raw XML SOAP message

Often Web services toolkit providers, and sometimes the platform providers themselves, complete specific integration work beyond the standards with other platform providers (e.g., Oracle, Microsoft, and IBM) to enable easier integration paths for Web services providers. A simple example is the widespread use of document-literal "wrapped" Web services, a convention originated by Microsoft for modeling RPC calls as document-literal services that nearly all Web services toolkits now support.

Beyond interoperability concerns, moving to an XML-based integration stack defined by Web services raises new performance considerations.

In a Web services application, message sizes typically increase substantially from traditional binary protocols. Additionally, a new layer of marshaling, unmarshaling, parsing, and translating XML messages to and from the underlying protocols is introduced.

Therefore, an important part of any deployment architecture for Web services must be a comprehensive plan for understanding the performance characteristics of the service end points and the clients using them. Typically, performance analysis focuses on two areas: throughput and latency.

Throughput is the volume of Web services requests handled in a given time period, typically measured in requests (or bytes) per second. Throughput is measured only on the server side and does not include the time taken to send or receive the message.

Latency is the round-trip time between sending a request and receiving a response. Latency is often subject to issues external to the server, such as network bandwidth and, in a heterogeneous environment, characteristics of the client environments.
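Both metrics fall out of simple arithmetic over load-test counters; the figures in the sketch below are hypothetical:

```java
public class PerfMetrics {
    // Requests (or bytes) handled per second on the server side
    static double throughputPerSec(long units, double seconds) {
        return units / seconds;
    }

    // Average round-trip time per request, as observed at the client
    static double avgLatencyMillis(double totalRoundTripMillis, long requests) {
        return totalRoundTripMillis / requests;
    }

    public static void main(String[] args) {
        // Hypothetical figures from a 60-second load-test run
        System.out.println(throughputPerSec(12_000, 60.0));      // 200.0 requests/sec
        System.out.println(avgLatencyMillis(540_000.0, 12_000)); // 45.0 ms per request
    }
}
```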

The first question is, "What is the expected message size that will be passed through individual Web services end points?" Once the message size is determined, it is often a good practice to start with what might be termed a "null processing" test. The goal is to load up the deployment environment with concurrent requests with zero application processing on the server side to determine what overhead the Web services runtime itself puts on the environment. This allows you to ascertain the overhead of the Web services infrastructure independent of its interaction with the underlying systems.
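A null-processing run can be sketched as a thread pool driving a no-op handler. All names and counts here are illustrative; in a real test the handler would be an actual deployed Web service end point with zero application logic, so that only the Web services runtime overhead is measured:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class NullProcessingTest {
    // Stand-in for the service end point with zero application processing
    static String handleNoOp(String soapEnvelope) { return soapEnvelope; }

    // Drive concurrent requests and return how many completed
    static long runLoad(int threads, int requestsPerThread) throws InterruptedException {
        AtomicLong completed = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerThread; i++) {
                    handleNoOp("<env:Envelope/>");
                    completed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        long done = runLoad(8, 1000);
        double secs = (System.nanoTime() - start) / 1e9;
        System.out.printf("%d null requests in %.3fs%n", done, secs);
    }
}
```

Comparing these numbers against a run with real back-end processing isolates the cost of the Web services layer itself.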

Going through this exercise can reveal a number of issues within a testing and production environment, including the following.

  1. Network. Often when testing the performance of Web services, network bandwidth can be the bottleneck. Network issues can impact both latency and throughput.
  2. Client. Many vendors will optimize their Web services client to work best with their Web services runtime. However, using the Web services runtime-provided client could result in misleading measurements. Instead, it is a good practice to choose neutral third-party clients to generate load to avoid skewing results.
  3. Server. Frequently, to achieve optimal performance on the server side, it is necessary to consult vendor documentation on how to have the server environment take advantage of the available hardware resources. Some of these settings are vendor proprietary, and others are common to the runtime chosen. For example, in J2EE environments, configuration such as memory allocation, garbage collection settings, and thread pool sizes can significantly impact throughput. Another common approach in J2EE environments (specific to each server) is running multiple Java virtual machines to take fuller advantage of hardware resources.
  4. Memory and CPU. Some client and runtime environments may be more sensitive to memory and CPU requirements, requiring more or less to generate or process Web services messages. If the client or server is bound by either of these constraints, accurate measurement of throughput may not be possible.
  5. Message size and complexity. It is important to use representative message structures when testing Web services. Clearly the larger and more complex the message, the heavier the XML parsing requirement will be on the Web services runtime environment. Many Web services runtimes have different performance characteristics depending on message size and may have specific tuning capabilities that enable them to process messages differently based on the size of the messages.
  6. Asynchronous services versus synchronous services. Most early Web services infrastructures focused on synchronous request/response implementations and one-way messaging. However, with the recent emergence of Business Process Execution Language for Web Services (BPEL4WS), many organizations are building infrastructures with a significant asynchronous component. Asynchronous services can typically absorb larger numbers of inbound requests, but mapping this to a throughput measure can be misleading when comparing against synchronous numbers because of the delayed nature of asynchrony.
These are some of the basic variables to keep in mind when considering basic performance testing of a Web services environment. However, sometimes the performance requirements overwhelm the ability of the Web services runtime to deal with SOAP messages. In these cases many architects will investigate messaging alternatives that are aligned with a service-oriented architectural approach.

One popular approach, available on Java platforms and aligned with Web services, is an Apache open source framework called Web Services Invocation Framework (WSIF). Apache WSIF enables developers to describe their underlying application interfaces using WSDL, and yet client invocations use native protocols rather than SOAP over HTTP. Classic examples of this include calling EJBs using native RMI protocols or vendor-specific optimizations such as using WSIF to natively call database-stored procedures.

In addition to interoperability and performance, Web services must be thought about from the classic reliability, availability, and scalability (RAS) characteristics needed in any large-scale deployment infrastructure.

Quality of Service
Web services typically take advantage of the same quality-of-service characteristics such as clustering, reliable messaging, and security available from server vendors for classical multitier applications.

For scalability, developers typically look for a server environment that maintains consistent throughput and latency as the concurrency of Web services clients varies. Scalable architectures enable the addition of hardware resources (machines, CPUs, and memory) both vertically (within a single machine) and horizontally (adding machines to a cluster). Moreover, beyond manual procedures for handling increased demand, modern server environments are self-adjusting, taking advantage of additional hardware resources on demand.

Remember, most Web services environments are either stateless or, if they are long-running such as business processes, their state is persisted in back-end databases. Both of these scenarios are supported by the classical cluster architectures available from application server vendors. For Web services running over the HTTP protocol, clustering solutions should span multiple tiers - from front-end caching and HTTP and J2EE servers to back-end databases.

Reliable Messaging
Unique to the reliability of Web services is the infrastructure needed to guarantee delivery of a message to an end point. It can be relatively easy to get a message to a new service end point. However, when the back-end systems being exposed through Web services interfaces are not available, approaches using asynchronous technologies need to be evaluated.

A common approach for achieving reliable messaging is to receive SOAP messages over HTTP from external business partners for maximum interoperability and then move the SOAP messages over a reliable J2EE infrastructure backbone such as JMS. A simple protocol mediation layer, HTTP to JMS, can add a significant degree of reliability to message propagation within internal architectures.
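The mediation layer can be sketched as follows, with an in-memory BlockingQueue standing in for the JMS backbone. The class and method names are hypothetical; a production version would publish to a javax.jms destination backed by a persistent message broker:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class HttpToJmsMediator {
    // Stand-in for a JMS destination; a real deployment would use a
    // javax.jms.Queue backed by a persistent broker
    private final BlockingQueue<String> backbone = new LinkedBlockingQueue<>();

    // Called by the HTTP endpoint when a SOAP envelope arrives
    void onHttpMessage(String soapEnvelope) throws InterruptedException {
        backbone.put(soapEnvelope); // a durable enqueue in a real JMS setup
    }

    // The back-end consumer drains the backbone at its own pace, so a slow
    // or temporarily unavailable back end does not drop messages
    String nextMessage() throws InterruptedException {
        return backbone.take();
    }

    public static void main(String[] args) throws InterruptedException {
        HttpToJmsMediator mediator = new HttpToJmsMediator();
        mediator.onHttpMessage("<env:Envelope>order-42</env:Envelope>");
        System.out.println(mediator.nextMessage());
    }
}
```

The key property is the decoupling: the HTTP tier acknowledges receipt as soon as the message is safely enqueued, independent of back-end availability.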

More recently, with the arrival of reliable messaging standards that are protocol-independent, including WS-Reliability and WS-ReliableMessaging, organizations are looking at new reliability infrastructures. These are typically built into the Web services runtime infrastructures of platforms and ensure that messages arrive exactly once (often referred to as guaranteed message delivery).
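A core building block behind exactly-once delivery is duplicate detection by message identifier on the receiving side. A minimal sketch follows; in a real infrastructure the set of seen IDs would be persisted, eventually expired, and paired with acknowledgments that drive retransmission:

```java
import java.util.HashSet;
import java.util.Set;

public class ExactlyOnceReceiver {
    private final Set<String> seen = new HashSet<>();

    // True if the message is new and should be processed;
    // false if it is a retransmission and must be discarded
    boolean accept(String messageId) {
        return seen.add(messageId);
    }

    public static void main(String[] args) {
        ExactlyOnceReceiver r = new ExactlyOnceReceiver();
        System.out.println(r.accept("msg-1")); // true: first delivery
        System.out.println(r.accept("msg-1")); // false: duplicate suppressed
    }
}
```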

The main issues with the standards-based approach to reliable messaging are the relative immaturity of implementations, interoperability concerns, and, of course, the unavailability of such technology on older architectures. Although any serious implementation of reliability will gracefully degrade to work with nonreliability-enabled clients, architects who need reliability in their infrastructure often choose variations of the following strategies:

  1. Work in a homogeneous environment in which both ends are reliable messaging-enabled from the same vendor.
  2. Work with vendor implementations in which bilateral vendor interoperability testing has been done ahead of standards-based interoperability.
  3. Offer different levels of reliable messaging: process messages from nonreliable clients on a best-effort basis, but offer higher levels of reliable messaging to clients that meet the requirements of items 1 and 2.
  4. Design a manual logging and log reconciliation of input and output messages.
  5. Develop proprietary agreements between the client and server environments. Approaches here include schemes that rely on message exchange patterns or proprietary mechanisms within message bodies to determine whether messages really did make it to their end point.
Options 1-3 enable reliability to be tactically introduced based on standards. Options 4 and 5 offer solutions independent of standards and interoperability but may set up longer-term upgrade requirements as reliability infrastructures standardize.
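Item 4 above amounts to a set difference over message identifiers captured at each end. A minimal sketch with hypothetical IDs:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class LogReconciliation {
    // IDs logged at the sender but absent from the receiver's log
    static Set<String> missing(Set<String> sent, Set<String> received) {
        Set<String> gap = new LinkedHashSet<>(sent);
        gap.removeAll(received);
        return gap;
    }

    public static void main(String[] args) {
        Set<String> sent = new LinkedHashSet<>(Arrays.asList("a1", "a2", "a3"));
        Set<String> received = new LinkedHashSet<>(Arrays.asList("a1", "a3"));
        // Messages in the gap are candidates for resend or manual follow-up
        System.out.println(missing(sent, received)); // [a2]
    }
}
```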

Secure Messaging
As with reliable messaging, security of message exchanges has reached the early stages of maturity with the industry-endorsed release of WS-Security in April 2004. WS-Security defines standardized authentication tokens within messages, digital signatures for messages, and message-level encryption for Web services.

This cleanly separates the security of Web services messaging from the transport protocol layer, providing much more flexibility than the more commonly used HTTP transport security such as SSL/TLS. Much like reliability, the biggest issues for WS-Security are the unevenness of implementations across vendors, interoperability concerns, and availability across older infrastructures.

Approaches for dealing with standards-based message security mirror what architects consider for reliable messaging.

If interoperability is not achievable across WS-Security implementations (e.g., via homogeneous clients and servers or bilateral vendor interoperability), architects will work to the lowest common denominator to achieve secure messaging. Two of the most common approaches are as follows:

1.  Web-based security. Because most Web services run over HTTP, standard Web technologies such as SSL/TLS and basic/digest authentication work equally well. These approaches can be used for authentication, integrity, and encryption of messages "on the wire." Although not Web services-aware, these approaches tend to be supported on both old and new infrastructures, ensuring close to maximum interoperability.
2.  Passing security tokens inside messages that can be used to verify authentication and message integrity. Rather than conforming to WS-Security standards, many organizations engaged in Web services transactions define a security token carried in a normal SOAP message; the key or algorithm for generating and verifying such tokens is provided via an offline secure exchange. Public examples of this include Amazon's public Web services, for which a user key is required before use.
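The token scheme in item 2 can be sketched with a keyed hash (HMAC) over the message body. The algorithm and key value shown are illustrative assumptions, not any particular provider's mechanism:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MessageToken {
    // Keyed hash over the message body; both parties hold the shared key
    static String token(String body, byte[] sharedKey) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedKey, "HmacSHA256"));
        return Base64.getEncoder()
                     .encodeToString(mac.doFinal(body.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        // Key exchanged via an offline secure channel (value is illustrative)
        byte[] key = "offline-shared-secret".getBytes(StandardCharsets.UTF_8);
        String body = "<env:Body>order-42</env:Body>";
        String sent = token(body, key);
        // The receiver recomputes the token to verify origin and integrity
        System.out.println(sent.equals(token(body, key))); // true
    }
}
```

A tampered body produces a different token, so the receiver can reject modified messages without any WS-Security machinery on either side.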

Ultimately, as is obvious from the variety of approaches, how developers tackle message-level quality of service depends on the sophistication of an organization's internal architecture as well as the capabilities of the expected Web services client environment.

Once these issues are addressed, there is a natural tendency to want to establish some sort of governance over those Web services.

As a Web services application is deployed, classic management issues begin to appear: monitoring and diagnostics, service-level agreements, policy management, centralized auditing and logging, and consolidation around a single identity management infrastructure. Organizations often reach for the parallel constructs that address these concerns in traditional Web and multitier architectures, but without careful adaptation these constructs don't fit Web services directly.

Take auditing and logging, for example. Unlike traditional Web traffic analysis, Web services logging and auditing is typically concerned with confirming how the individual messages or specific content in the messages correspond to what occurred in the back-end business systems. Correlating between these two often distinct tiers is much different than doing simple log analysis typical for Web content.

This simple example touches on the area of monitoring, diagnostics, and root-cause analysis that is critical for large-scale Web services infrastructures. The solutions in this area are mixed: some traditional management frameworks are being extended to cover Web services, new vendors are emerging in the real-time, event-driven business activity monitoring space, and traditional business intelligence tools are being extended to report against message stores.

Similar analysis must be done for service-level agreements (SLAs) in the area of quality of service. Take, for example, WS-Security and WS-Reliability/WS-ReliableMessaging. Beyond simply implementing these standards, the longer-term vision is to enable SLAs to be exchanged in an automated fashion using emerging specifications such as WS-Policy. Such an exchange enables clients to programmatically match the quality-of-service capabilities supported by the server. Practically, however, most vendors provide different approaches for doing this today, and the most common approach among organizations that require it is simple, offline, noncomputerized agreements.

A common approach for normalizing the monitoring and diagnostics issues and enabling the centralization of control over Web services infrastructures is the concept of a gateway or intermediary through which all Web services traffic is routed. This central enforcement point provides both a consolidation and separation of management concerns from the back-end infrastructure. It also enables consistent application of quality of service policy as well as a convenient data capture point for analysis of Web services data flow.
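The gateway pattern can be sketched as a single routing point that captures every message before forwarding it to the real service; the class and the policy hook shown here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class SoapGateway {
    interface Endpoint { String handle(String soapMessage); }

    private final List<String> auditLog = new ArrayList<>();
    private final Endpoint backend;

    SoapGateway(Endpoint backend) { this.backend = backend; }

    // Single enforcement point: capture, apply policy, then forward
    String route(String soapMessage) {
        auditLog.add(soapMessage);          // centralized data capture
        // (policy checks such as WS-Security validation would run here)
        return backend.handle(soapMessage); // forward to the real service
    }

    int messagesSeen() { return auditLog.size(); }

    public static void main(String[] args) {
        SoapGateway gw = new SoapGateway(msg -> "<ack/>");
        System.out.println(gw.route("<env:Envelope>order-42</env:Envelope>"));
        System.out.println(gw.messagesSeen()); // 1
    }
}
```

Because all traffic flows through `route`, auditing, policy, and monitoring live in one place instead of being scattered across back-end systems, which is exactly the consolidation the intermediary approach aims for.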

The trade-off that architects often have to make with gateway approaches is the centralization of management versus the potential performance overhead of such an approach. Many gateway approaches deal with this performance concern by providing both an intermediary approach and an agent approach that works in conjunction with a centralized monitoring and diagnostics infrastructure.

This article focused on some of the issues that frequently confront architects when they attempt to deploy a large-scale interoperable Web services infrastructure. Although by no means a comprehensive enumeration of the issues and solutions, I analyzed some of the more widely known Web services concerns in interoperability, performance, quality of service, and management. Hopefully, you have gained an understanding of the key issues for your own Web services deployment infrastructure.

More Stories By Mike Lehmann

Mike Lehmann is a senior principal product manager with the Oracle Application Server 10g team at Oracle Corporation. In this role he is focused primarily on building out the Oracle Application Web services infrastructure.


Most Recent Comments
Patrick McCluskey 01/12/05 03:14:59 PM EST

Great article!

nVision Software's AppVision Service Level Management (SLM) solution for Java and .NET based applications and web-service infrastructures is extremely valuable for architects that want to maintain their uptime and availability through clear 100% visibility into their infrastructures.

Visit www.nVisionSoftware.com for more information and a free demonstration.

Steven Willmott 01/10/05 06:42:28 PM EST

It's interesting to see the emphasis on manageability, policies, and SLAs from the industry end. There's a fair bit of work in the research community (meaning distributed systems, distributed artificial intelligence, and agent technology) which takes the view that ultimately web/Grid service-based systems will have to evolve into systems governed by norms, rules, and regulations much like those in human societies.

This is pretty far away right now but groups like Jeff Bradshaw's at IHMC (http://www.ihmc.us/about.php) in Florida, the OWL-S (http://www.daml.org/services/owl-s/1.0/) team and others are developing higher level policy languages or web services interoperability mechanisms which combine research paradigms with WSDL, XML and RDF.

The explosion of new web/cloud/IoT-based applications and the data they generate are transforming our world right before our eyes. In this rush to adopt these new technologies, organizations are often ignoring fundamental questions concerning who owns the data and failing to ask for permission to conduct invasive surveillance of their customers. Organizations that are not transparent about how their systems gather data telemetry without offering shared data ownership risk product rejection, regu...
The Internet of Things can drive efficiency for airlines and airports. In their session at @ThingsExpo, Shyam Varan Nath, Principal Architect with GE, and Sudip Majumder, senior director of development at Oracle, discussed the technical details of the connected airline baggage and related social media solutions. These IoT applications will enhance travelers' journey experience and drive efficiency for the airlines and the airports.
With billions of sensors deployed worldwide, the amount of machine-generated data will soon exceed what our networks can handle. But consumers and businesses will expect seamless experiences and real-time responsiveness. What does this mean for IoT devices and the infrastructure that supports them? More of the data will need to be handled at - or closer to - the devices themselves.
SYS-CON Events announced today that Dataloop.IO, an innovator in cloud IT-monitoring whose products help organizations save time and money, has been named “Bronze Sponsor” of SYS-CON's 20th International Cloud Expo®, which will take place on June 6-8, 2017, at the Javits Center in New York City, NY. Dataloop.IO is an emerging software company on the cutting edge of major IT-infrastructure trends including cloud computing and microservices. The company, founded in the UK but now based in San Fran...
Big Data, cloud, analytics, contextual information, wearable tech, sensors, mobility, and WebRTC: together, these advances have created a perfect storm of technologies that are disrupting and transforming classic communications models and ecosystems. In his session at @ThingsExpo, Erik Perotti, Senior Manager of New Ventures on Plantronics’ Innovation team, provided an overview of this technological shift, including associated business and consumer communications impacts, and opportunities it m...
In his keynote at @ThingsExpo, Chris Matthieu, Director of IoT Engineering at Citrix and co-founder and CTO of Octoblu, focused on building an IoT platform and company. He provided a behind-the-scenes look at Octoblu’s platform, business, and pivots along the way (including the Citrix acquisition of Octoblu).