Advanced ANSI SQL Native XML Integration-Part 2 - Supporting advanced XML capabilities


Part 1 of this article demonstrated how standard ANSI SQL can integrate fully, naturally, and seamlessly with XML. This was accomplished by raising SQL processing to a hierarchical level, enabling relational data (including XML-shredded data) to integrate directly with native XML. Part 1 also showed how hierarchical processing utilizes the hierarchical semantics in the data, and how hierarchical structures are joined.

Part 2 will cover how standard SQL can naturally support more advanced XML capabilities such as node promotion, fragment processing, structure transformation, variable structures, and the handling of shared and duplicate elements.

Node Promotion and Fragment Processing
SQL's hierarchical processing capabilities do not stop with what was presented in Part 1. We will look at supporting XML's more advanced capabilities, starting with node promotion and fragment processing. Node promotion occurs when a node in a hierarchical structure moves up past parent and ascendant nodes that have not been selected for output (controlled by projection in relational terms). Slicing unselected nodes out of the structure definition has the same effect as slicing them out when the selected data is transferred from the relational Working Set to the Result Set, shown at the bottom of Figure 1. In other words, this standard XML hierarchical processing capability is performed naturally by relational processing whenever a node (table or element) is not selected for output.
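
As a minimal sketch of this behavior, consider a hypothetical three-level Cust-Addr-Phone structure (the table and key names here are assumptions, not taken from the figures): Addr participates in the joins that define the structure but is left off the SELECT list, so Phone is promoted to sit directly under Cust in the output.

-- Hypothetical structure: Cust over Addr over Phone.
-- Addr helps define the structure but is not selected for output,
-- so Phone is promoted past it and appears directly under Cust.
SELECT Cust.CustID, Phone.PhoneNo
FROM   Cust
LEFT JOIN Addr  ON Addr.CustID  = Cust.CustID
LEFT JOIN Phone ON Phone.AddrID = Addr.AddrID;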

As we saw in Part 1, the basic operation of SQL is to use the SELECT clause to specify the desired output data, the FROM clause to specify the input data, the LEFT JOIN operation to specify the data structure, and the WHERE clause to specify optional data filtering criteria. Using these simple and intuitive SQL language constructs, even more complex node promotion leading to fragment processing can be easily performed. The query in Figure 1 joins a structure fragment selected from the lower structure to the upper structure. The structure fragment is shown encircled in a dashed oval; dashed boxes represent nodes that are not selected for output.
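
A sketch of such a query, patterned after the node names in Figure 1, is shown below; the join keys and the exact placement of the upper-structure nodes are assumptions made for illustration only.

-- Upper structure: B with link point P below it (P not selected).
-- Lower structure rooted at D, with fragment root F over O and I,
-- which in turn sit over Y and J. Only B, F, Y, and J are selected,
-- so D, P, O, and I are sliced out and Y and J are promoted under F.
SELECT B.Bdata, F.Fdata, Y.Ydata, J.Jdata
FROM   B
LEFT JOIN P ON P.Bkey = B.Bkey      -- link point in the upper structure
LEFT JOIN D ON D.Pkey = P.Pkey      -- defining root of the lower input structure
LEFT JOIN F ON F.Dkey = D.Dkey      -- fragment root
LEFT JOIN O ON O.Fkey = F.Fkey      -- not selected, promoted over
LEFT JOIN I ON I.Fkey = F.Fkey      -- not selected, promoted over
LEFT JOIN Y ON Y.Okey = O.Okey      -- collects under F in the result
LEFT JOIN J ON J.Ikey = I.Ikey;     -- collects under F in the result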

A fragment is a portion or grouping of nodes from a hierarchical structure that retains its basic hierarchical shape when unselected nodes are removed, enabling it to be manipulated as a unified structure. It can be embedded below the original root of the input structure and can be a loose collection of nodes, like the structure fragment shown in Figure 1. It is defined by the nodes that are selected for output from its defined input structure. In this example, the fragment is located below root node D of its original input structure; nodes O and I are not selected for output and so are not included in the fragment. As a result, nodes Y and J are automatically promoted over nodes O and I and naturally collect under the fragment root node F. In fact, the whole process of fragment creation and processing is made possible by the natural action of node promotion.

With the structure fragment identified and isolated by data selection, it can be naturally joined to the upper structure as shown in Figure 1. When linking to a lower-level fragment, the link is usually made to its root node, as in this example, where the link point lies below the root of the original input structure. Linking below or above the root of the fragment is also permitted because the root of the original lower structure (node D in this case) remains the defining root of the input structure and still governs the entire input structure. This is particularly important for logical structures, which must be built on the relationships from the root down. However, these structure-defining nodes do not need to appear in the output structure. For example, in Figure 1 root node D of the lower structure and link point node P in the upper structure are not selected for output, but they are necessary for processing the query. Node E, on the other hand, was not selected and is not necessary to the query, so it can be optimized out, which is why it is not in the Working Set. The shaded columns in the Working Set are not selected for output, so they are not transferred to the Result Set shown at the bottom of Figure 1. This relational processing action naturally performs hierarchical node promotion, which in turn collectively supports fragments. These advanced procedures and concepts are performed automatically and are easily controlled by the data fields specified on the SQL SELECT list, which can also be specified dynamically.

The advantage of linking below the root is that it allows the most appropriate or convenient link criteria to be used. The filtering of the lower-level structure then matches the semantics of the chosen link point. Whether the link to the lower-level structure in Figure 1 is based on a value in node D, F, O, or even J, the Result Structure shown retains the same hierarchical structure for any of these link points, because root D is still the defining root, while the resulting data (not shown) reflects the semantics of the actual link point.

In SQL, users don't need to know about the concept of fragments; they simply select the data they're interested in and the fragments form naturally. The example in Figure 1 neatly generated a single fragment because there was a single fragment root node F, but this is not a requirement imposed on the user. Suppose node E had also been selected for output, creating two unrelated fragments. This presents no problem, because root D is still the defining root that controls how all the fragments are structurally linked to the upper structure; in this case, fragment root E would be located in the Result Set between nodes B and F. This demonstrates the significant power, flexibility, and ease of use of SQL's nonprocedural hierarchical processing. While the underlying fragment-processing logic may seem complex, its specification is logical and intuitive, and it is performed automatically in standard SQL.

Duplicate and Shared Element Support
Now let's look at how to process XML's duplicate and shared elements, which can occur in an XML data structure when accessed from standard SQL. Duplicate elements occur where the same (named) element type occurs in multiple locations in the XML data structure. Shared elements are created by the XML IDREF and ID attribute constructs, which create a logical pathway in the physical XML data structure (usually creating a network structure). Both of these unconventional structures are demonstrated in Figure 2 as the top two structure diagrams. The Addr node in these diagrams represents the duplicate and shared element. The dotted arrow in the XML shared element diagram in Figure 2 represents the logical IDREF pathway. The problem with these two structures is that they are both ambiguous for a nonprocedural hierarchical query language such as SQL because there is no single unambiguous access path to a specific Addr node location. What makes nonprocedural hierarchical query languages so powerful is that the hierarchical data structures they operate on are naturally unambiguous because they only have a single path to each node. In this way, the query can be specified unambiguously because each node in the structure has its own unique access path and specific semantics that can be utilized automatically to answer the query.

The Alias feature of SQL allows the duplicate and shared element structures shown in Figure 2 to be data modeled as unambiguous hierarchical structures by using the optional AS keyword to rename the elements (nodes). In this example, both of the ambiguous structures can use the same data modeling SQL to produce an unambiguous structure that maintains the original semantics of both input structures. This is possible because, despite their differing physical storage of the Addr node, the semantics of the two input structures are the same and produce the same result. With the unambiguously modeled structure shown in Figure 2, each specific Addr field can be referenced unambiguously by using its unique node name as a prefix to the field name. In this way, each Addr field reference has its own logical path with its own hierarchical semantics, allowing the full nonprocedural hierarchical power of SQL to be controlled with simple, intuitive queries. Node name prefixes can be avoided by using the SQL Alias feature on the SELECT list to rename the duplicate field names to unique names. The underlying XML access module's logic adapts transparently to the physical storage representation of the Addr element, whether it is shared or duplicated.
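
A minimal sketch of this Alias-based data modeling follows; the Cust and Emp owner tables and their join fields are assumptions used only to show the renaming, not the actual tables in Figure 2.

-- The same Addr element reached from two different parents is renamed
-- with AS so that each occurrence has its own unambiguous node name
-- and therefore its own logical access path.
SELECT Cust.CustName,
       CustAddr.City AS CustCity,     -- Addr reached through Cust
       EmpAddr.City  AS EmpCity       -- Addr reached through Emp
FROM   Cust
LEFT JOIN Addr AS CustAddr ON CustAddr.AddrID = Cust.AddrID
LEFT JOIN Emp              ON Emp.CustID      = Cust.CustID
LEFT JOIN Addr AS EmpAddr  ON EmpAddr.AddrID  = Emp.AddrID;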

Hierarchical Data Filtering
There are two levels of hierarchical data filtering supported in ANSI SQL: WHERE clause query-level data filtering and ON clause path-level data filtering. WHERE clause filtering is the more common of the two. It can affect the entire multi-leg hierarchical structure by filtering not only down the structure, but also up the structure from the filtered node points. ON clause data filtering, on the other hand, filters only downward from its filtered node point and is compatible with XPath data filtering. Both of these SQL data-filtering operations work consistently across ANSI SQL relational processing and also follow standard hierarchical structure processing semantics.

Figure 3 demonstrates WHERE clause query-level data filtering. Notice how it also filters up the structure, affecting the ascendants of the explicitly filtered Dpnd node. In this example, "Dpnd2", the only Dpnd node occurrence for the Emp occurrence "Emp2", is filtered out, which causes "Emp2" itself to be filtered out because its only Dpnd path occurrence is gone. The associated root node occurrence "DeptX" remains qualified because another active path leads to it through the qualified node occurrence "Dpnd1". Node occurrence "Proj1" is also qualified because its path is still active and it is a descendant of the qualified node occurrence "DeptX". If no Dpnd node occurrences had been qualified, then no data from this structure occurrence would be output. This behavior is logical and intuitive for the hierarchical result you would expect from WHERE clause filtering, and it is how hierarchical processors operate. It is also the result of relational processing, shown in Figure 3 by the Relational Working Set, where the filtered-out row is indicated by a darkened row. This is why WHERE clause filtering can affect the entire multi-leg structure occurrence, taking into account (correlating) the semantics between the data and relationships in its sibling legs. As shown in Figure 3, this complex hierarchical semantic processing occurs automatically thanks to the restricted Cartesian product, which produces the required combinations of hierarchically related data values and allows data selection to be carried out a row at a time by the relational engine.
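
A sketch of this query-level filtering over a Dept-Emp-Dpnd-Proj structure like the one in Figure 3 is shown below; the join keys and the specific predicate are assumptions standing in for the figure's actual filtering criteria.

-- WHERE filtering is applied after each Working Set row is assembled,
-- so it can remove an Emp occurrence whose only Dpnd fails the test
-- and, through it, the path occurrences above and beside it.
SELECT Dept.DeptName, Emp.EmpName, Dpnd.DpndName, Proj.ProjName
FROM   Dept
LEFT JOIN Emp  ON Emp.DeptNo  = Dept.DeptNo
LEFT JOIN Dpnd ON Dpnd.EmpNo  = Emp.EmpNo
LEFT JOIN Proj ON Proj.DeptNo = Dept.DeptNo
WHERE  Dpnd.DpndName = 'Dpnd1';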

Figure 4 demonstrates ON clause path-level data filtering using the same filtering criteria used for the WHERE clause query-level filtering. ON clause filtering is similar to WHERE clause filtering, but it filters only from the point on the path where it is specified, downward to its dependents. Its power lies in fine-grained hierarchical data filtering isolated to a single node in the structure, operating just like XPath data filtering. It is also the result produced by relational processing, shown by the Relational Working Set in this example, where only the filtered Dpnd node occurrence "Dpnd2" is removed. If there were other descendant nodes under the filtered node, they would also be filtered out. The reason the entire row was not filtered is that the higher-level structure side (the left side) of the LEFT JOIN is always preserved, so filtering can only occur from its right structure side down the structure. WHERE clause filtering, on the other hand, is performed using INNER JOIN logic, which means that either side of the join operation (up or down the structure in this case) can cause nodes to be filtered out. Additionally, WHERE clause processing occurs after the entire row is built, so the row is either filtered out entirely or not at all, while ON clause filtering occurs as the row is being built, allowing separate legs of the structure to be filtered out, with their values replaced by nulls to keep the row alignment. This ON clause path filtering mirrors XPath's operation on XML and can also be used to simulate XPath filtering for legacy database access when specified in SQL.
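
Moving the same (assumed) predicate into the ON clause of the Dpnd join gives the path-level behavior: only the Dpnd leg is filtered, and the preserved left side of the LEFT JOIN keeps the rest of the row, with nulls in the filtered columns.

-- ON clause filtering is applied as the row is built, so only the
-- Dpnd leg (and anything below it) is filtered; Emp2 survives with
-- nulls in its Dpnd columns instead of being removed.
SELECT Dept.DeptName, Emp.EmpName, Dpnd.DpndName, Proj.ProjName
FROM   Dept
LEFT JOIN Emp  ON Emp.DeptNo = Dept.DeptNo
LEFT JOIN Dpnd ON Dpnd.EmpNo = Emp.EmpNo
               AND Dpnd.DpndName = 'Dpnd1'
LEFT JOIN Proj ON Proj.DeptNo = Dept.DeptNo;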

Structure Transformation
By combining the fragment processing and the SQL Alias feature used for duplicate and shared element processing in Figures 1 and 2, powerful structure transformations can be performed easily. The Alias capability logically creates multiple copies of the same structure, so that different fragments can be specified from the same structure, isolated, and then independently manipulated. Here the Alias feature is used to rename views, which enables duplicate input structures to be logically defined and referenced unambiguously (also useful for processing XML namespaces). Figure 5 demonstrates this by creating two separate and independent fragments from the StoreView view (encircled by dashed ovals). These fragments are then recombined into a different structure by rejoining them. This is a simple example of structure transformation; multiple structures can each have multiple fragments extracted, and these can be combined in any order, allowing fragments to be joined as they are needed. Structure transformations can also be stored in a SQL view for abstraction, easy reuse, and use in constructing larger structures.
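
A minimal sketch of this aliased-view rejoining is shown below; the column and key names inside StoreView are assumptions, since only the view name appears in Figure 5.

-- Two logical copies of the same view are created with aliases so that
-- a different fragment can be selected from each copy and the two
-- fragments rejoined into a new structure.
SELECT CustCopy.CustID, CustCopy.CustName,    -- fragment from the first copy
       InvCopy.InvoiceNo, InvCopy.Amount      -- fragment from the second copy
FROM   StoreView AS CustCopy
LEFT JOIN StoreView AS InvCopy
       ON InvCopy.CustID = CustCopy.CustID;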

Variable Structures
XML can define variable structures, which allow considerable variability in the structure format produced from a single structure definition. This means that from one structure occurrence to another, or even within a single structure occurrence of a record or document, the structure format can vary. While XML does not require it, a variable structure usually carries some piece of information in a higher-level node that indicates how a variable substructure is, or is to be, formed. With this information, and using the ON clause filtering described earlier under hierarchical data filtering, SQL data modeling (using LEFT JOINs) can control the building of each structure occurrence, defining the appropriate substructures dynamically. An example is shown in Figure 6.

Figure 6 is a simple example of how the generated data structure can vary depending on the value of the field StoreType in the Store node. In this case only one area of the structure is affected, but there is no limit to the number of variations that can be controlled by ON clauses testing any number of structure-indicator values against their current data values. In fact, variable substructures can themselves contain variable substructures. These tests can be coded to duplicate the rules specified in XML DTDs and schemas for varying element generation, which can become quite complex. The SQL for specifying a variable structure, as shown in Figure 6, defines how a logical (relational) structure is to be constructed in memory, or it can be used to control the navigation of a physical (XML) structure as it is retrieved into memory; in either case, it controls how the variable output structure is generated.
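
A sketch of this ON clause-controlled variable structure follows; the StoreType values and the WebSite and Outlet substructure tables are assumptions used only to show the pattern.

-- The value of Store.StoreType in each Store occurrence decides which
-- substructure leg is generated; the non-matching leg is left unpopulated.
SELECT Store.StoreID, Web.URL, Outlet.Address
FROM   Store
LEFT JOIN WebSite AS Web
       ON Web.StoreID = Store.StoreID
      AND Store.StoreType = 'Online'
LEFT JOIN Outlet
       ON Outlet.StoreID = Store.StoreID
      AND Store.StoreType = 'Retail';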

Comparison with XQuery and the SQL/XML Standard
Using XQuery from SQL requires learning another query language and programming the query logic into its FLWR (For, Let, Where, Return) expression. SQL's SELECT list functionality is contained in the FLWR expression, which is controlled by its FOR loops. This gives XQuery considerable procedural processing power and control, but it also means that specifying additional output values is not trivial, requiring modification of the program logic. The FLWR statement is also used to control or drive database access using XPath navigation, which requires the XQuery developer to be knowledgeable about the data structure being processed. The XML output is constructed with XML templates, which requires the developer to know XML, and the templates may be placed inside FLWR FOR loops for additional control. XQuery uses functions to abstract and reuse program logic.

With the ANSI SQL native XML integration solution shown in this article, the SQL developer does not need to know XML or the data structure (once it is modeled in a SQL view), and does not need to specify the query logic or database navigation, even when processing the most complex multi-leg hierarchical structure. A naive user or developer can specify an ad hoc request, or simply add another relational or XML data item to the SELECT list, and it will be retrieved and hierarchically formatted automatically, utilizing the hierarchical semantics in the data. The SQL hierarchical view offers the highest level of data abstraction and reuse, allowing the processing logic to be dynamically tailored to the runtime requirements. These capabilities do not mean that SQL is better than XQuery; in fact XQuery is more powerful, but the standard tradeoff applies: as the control offered by a computer language increases, its ease of use decreases. XQuery needs its additional programming control to handle the advanced text processing and complex transformations required in an XML environment. The full extent of these capabilities may be needed less often in the SQL environment, offset by the use of SQL's hierarchical capabilities.

The SQL/XML standards group has done an excellent job specifying and standardizing many useful mappings between XML and SQL that will be used by the standard SQL native XML integration described in this article. The SQL/XML standard also specifies XML-centric functions in SQL for producing XML documents from the standard flat relational result of a SQL query, which may include input from XML. The desired XML-formatted hierarchical structure is produced by nesting the SQL/XML standard's XMLElement function within itself to shape the hierarchy. As with XQuery, the SQL/XML user must know XML and have knowledge of the input and output hierarchical data structures, and the XML-centric SQL functions require additional programming whenever a new data item is added to the SELECT list for output. In contrast, the ANSI SQL native XML integration solution described in this article is seamless and transparent, and produces a valid and accurate hierarchically processed result. It can automatically and dynamically publish XML documents without introducing XML-centric SQL functions and the limitations described above. This is made possible by seamlessly utilizing standard SQL's significant inherent hierarchical processing capability, described in Part 1 of this article.
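
For comparison, a minimal sketch of the nested-function approach described above, using hypothetical Dept and Emp tables: the hierarchy is fixed by hand-nesting XMLELEMENT (with XMLAGG to group the employees), so adding another output item means editing this nesting.

-- SQL/XML publishing functions: the output hierarchy is hard-coded by
-- the nesting of XMLELEMENT calls rather than derived from a view.
SELECT XMLELEMENT(NAME "Dept",
         XMLATTRIBUTES(d.DeptNo AS "deptNo"),
         (SELECT XMLAGG(XMLELEMENT(NAME "Emp", e.EmpName))
          FROM   Emp e
          WHERE  e.DeptNo = d.DeptNo))
FROM Dept d;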

Conclusion
This two-part article has demonstrated how standard ANSI SQL can integrate fully, naturally, and seamlessly with XML by automatically raising SQL processing to a hierarchical level, enabling relational data to integrate directly with native XML. This was shown not only with examples, but by demonstrating how it works at each stage of SQL processing, from SQL syntax and semantics through the Cartesian product relational engine described in Part 1 of this article. The level of hierarchical support was also shown to be significant, handling complex multi-leg hierarchical queries intuitively and joining hierarchical structures easily. This allows SQL to fully utilize the hierarchical semantics in the data and the data structure. Operating at a hierarchical level greatly increases memory and processing efficiency, and because SQL itself performs the majority of the integration work, the XML support is very efficient and its footprint is very small, making it excellent for embedded use. All of these features and capabilities support dynamic processing. This ad hoc processing includes powerful parameter-driven query specification, in the form of SQL SELECT list specification and WHERE clause filtering, that dynamically tailors and optimizes the most complex stored hierarchical views.

Part 2 of this article covered advanced XML processing features that can also be performed by ANSI SQL's standard, inherent hierarchical processing capabilities. These advanced capabilities include seamless support for shared and duplicate elements in the data structure using the SQL Alias capability; node promotion and structure fragment processing controlled automatically by the data fields selected on the SQL SELECT list; and structure transformations using a combination of the structure fragment and Alias capabilities. Since all the capabilities mentioned in this article exist inherently in ANSI SQL, they operate together in a seamless and unrestricted (orthogonal) fashion. This includes the multi-leg hierarchical data filtering, which operates automatically on the SQL-modeled data structure, and the full, unlimited use of SQL views. This SQL native XML integration is naturally standard because it stays seamlessly within ANSI SQL. It does not require the addition of SQL-standardized XML-centric functions, producing a powerful and easy-to-use hierarchical ad hoc query language for XML and other hierarchical forms of data, including SQL hierarchically modeled relational data. For more information on all these topics and additional ANSI SQL-supported XML capabilities, visit www.adatinc.com.

More Stories By Michael M David

Michael M. David is founder and CTO of Advanced Data Access Technologies, Inc. He has been a staff scientist and lead XML architect for NCR/Teradata and their representative to the SQLX Group. He has researched, designed, and developed commercial query languages for heterogeneous hierarchical and relational databases for over twenty years. He authored the book "Advanced ANSI SQL Data Modeling and Structure Processing," published by Artech House Publishers, as well as many papers and articles on database topics. His research and findings have shown that hierarchical data processing is a subset of relational processing and have demonstrated how to utilize this advanced inherent capability in ANSI SQL. Additionally, his research has shown that advanced multipath (LCA) processing is also naturally supported and performed automatically in ANSI SQL, along with other advanced hierarchical processing operations. These advanced capabilities can be performed and explained with the ANSI SQL Transparent XML Hierarchical Processor at his site: www.adatinc.com/demo.html.
