XML and the Semantic Web

XML-based formats are now becoming the dominant way of marking up data on the Web. There are standard XML languages for hypertext (XHTML), graphics (SVG), syndication (RSS), multimedia (SMIL), service description and discovery (WSDL and UDDI), and many others, not to mention a plethora of more specialized, often ad hoc, XML-based languages.

Furthermore, there are standard ways to query (XPath and XQuery), link into (XLink and XPointer), and parse and program with (SAX and DOM) your XML documents. Finally, integration with the many other XML formats can be done in a reasonably straightforward manner, either by inclusion (using XML Namespaces) or by transformation (using XSLT).
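As a small illustration of this query machinery, here is a sketch using only Python's standard library, whose ElementTree module supports a limited XPath dialect. The document and element names below are invented for illustration.

```python
# Minimal sketch of XPath-style querying with Python's standard library.
# ElementTree supports only a subset of XPath; full XPath/XQuery engines
# offer far more. The sample document is invented.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<library>
  <book><title>Weaving the Web</title></book>
  <book><title>XML in a Nutshell</title></book>
</library>
""")

# ".//title" selects every <title> element anywhere under the root.
titles = [t.text for t in doc.findall(".//title")]
print(titles)  # -> ['Weaving the Web', 'XML in a Nutshell']
```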

XML as an Interchange Mechanism
Given these facts, it's hard to argue against XML as the interchange mechanism for the Web. But the XML model remains primarily rooted in documents - in particular, textual documents with hierarchical structure. For example, a well-formed XML document can have a <US-address> element that contains <US-street-address-line> and <US-zip-code> elements. From the pure XML point of view, there's nothing but a document-based description of the names and current arrangements of these elements: that is, that a <US-address> contains a <US-zip-code>, which is a string. We could, of course, also have a document with two <US-zip-code>s in it. An XML Schema for <US-address>s could go further and state the specific constraints on that structure, for example, that a <US-address> can contain many <US-street-address-line> elements but only one <US-zip-code> element. With this schema a validator can conclude that our <US-address> with two <US-zip-code>s was syntactically incorrect.
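The constraint that the schema enforces is purely structural. As a hedged sketch, the check below hand-rolls that same rule in Python - many <US-street-address-line>s allowed, exactly one <US-zip-code> - where a real XML Schema validator would state it declaratively. The element names follow the article; the document content is invented.

```python
# Sketch: a structural check mirroring the schema constraint described above
# (many street-address lines, exactly one zip code). A real XML Schema
# validator would enforce this declaratively; this is illustration only.
import xml.etree.ElementTree as ET

bad = ET.fromstring("""
<US-address>
  <US-street-address-line>1600 Pennsylvania Ave NW</US-street-address-line>
  <US-zip-code>20500</US-zip-code>
  <US-zip-code>20006</US-zip-code>
</US-address>
""")

def valid_us_address(addr):
    """Syntactic check only: nothing here says which zip code is *right*."""
    return (addr.tag == "US-address"
            and len(addr.findall("US-street-address-line")) >= 1
            and len(addr.findall("US-zip-code")) == 1)

print(valid_us_address(bad))  # -> False (two zip codes)
```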

However, the schema doesn't actually capture much of the semantics of an address. For example, if we were to take several different documents and find two addresses that were identical except for their zip codes, we wouldn't be able to realize, on the basis of the schema, that at least one zip code was wrong. Similarly, there's no automatic way to convert between two very similar encodings of address (for example, to one where there could only be one <US-street-address> element, which itself had many <US-street-address-line>s). Further, if we wanted to do something complex, such as integrating <US-street-address>s with our personnel database, we would have to rely on a human programmer. We'd need someone to provide a specific mapping from the <US-address> to the various fields in our database structure, and this mapping would be vulnerable to even small changes in our XML format or in the database - and just forget about reusing that mapping with unrelated systems.

More strikingly, there's no clear way to connect our various <US-address>s to more general facts about addresses - for example, that they identify locations, that mail can be delivered to them, or that they're associated with people or businesses. And neither the XML model nor XML Schema is going to be much help in determining that a mailing address is like a fax number, at least in the sense that mailing a letter to an address and faxing it to a number both get the text of the missive to its recipient.

On the Web, however, we really need this semantic information. We want to be able to connect an address that we've encoded in XML in some Web document to people, places, concepts, letters, other documents, databases, directories, PDA contact lists, calendars, sensors, services, and all the other resources on the Web. Done right, such links would allow information in various forms and formats to be automatically manipulated in a coherent way by our computer programs. XML is clearly an important part of the answer, but it's very hard to see the XML data model as a feasible way to represent these links and the real-world semantics they encode.

Modeling Links
By examining how links work on the current Web, we can get some idea of what it takes to model them. Web links have two key aspects. First, things on the Web are consistently identified by URIs (Uniform Resource Identifiers). Second, a Web link has the following tripartite structure:

  • The thing you start with (the source of the link). (In XHTML this is the file containing the "a" tag with an "href" attribute.)
  • The connecting, or linking, bit between the source and the target of the link.
  • The thing you end up with (the target). (In XHTML it's typically named by a URI placed in the href attribute.)

    While, naturally, it's possible to encode Web links in XML (e.g., with <a href="..."/>), note how little a Web link has in common with the XML data model: Web links use URIs instead of tags or QNames; unlike XML, Web links impose no inherent hierarchy, no notion of containment, and no sequencing of the things to which they relate. In fact, a set of Web links doesn't look much like a DOM tree, but does look an awful lot like an RDF (Resource Description Framework) graph, where each link corresponds to an RDF triple. After all, each part of an RDF triple can be, and most often is, identified by a URI, and the structure of a triple is obviously tripartite.

    The parts of a triple, by design, correspond to the parts of a Web link:

    • The subject of a triple is where you start.
    • The predicate connects the subject and the object.
    • The object corresponds to the target of a Web link.

    Indeed, an RDF triple is a representation of a Web link, where each part of the link is made explicit. Thus a collection of RDF triples is a way to represent, share, and process chunks of the Web itself. Having the Web in a standardized representation allows us to enrich the semantics not just of information in documents on the Web, but of the information expressed by the Web. Every Web link is an often vague, usually ambiguous, and almost always underspecified assertion about the things it connects. RDF lets us eliminate that vagueness and nail down the ambiguity. RDF Schema (RDFS) and the new Web Ontology Language (OWL) allow us to model the meaning of our assertion links in precise, machine-processible ways.
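The triple-as-link idea can be sketched very directly: a collection of Web links becomes a set of (subject, predicate, object) tuples, each part a URI. All the URIs below are invented for illustration.

```python
# Illustrative sketch: Web links represented as RDF-style triples
# (subject, predicate, object), each part identified by a URI.
# Every URI here is made up for the example.
triples = {
    ("http://example.org/page/home",
     "http://example.org/vocab#linksTo",
     "http://example.org/page/contact"),
    ("http://example.org/page/contact",
     "http://example.org/vocab#linksTo",
     "http://example.org/page/home"),
}

# Unlike a DOM tree, a set of triples is a graph: no inherent hierarchy,
# no containment, no ordering - just explicit, queryable assertions.
targets = {o for (s, p, o) in triples
           if s == "http://example.org/page/home"}
print(targets)  # -> {'http://example.org/page/contact'}
```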

    RDFS is a simple language for creating Web-based "controlled vocabularies" and taxonomies. The language defines several RDF predicates for making links between concepts. Most notably, RDFS defines class and property relations and provides a mechanism for restricting the domains and ranges of the properties. In RDFS, for example, we can express site-specific Yahoo-like Web site categories as a hierarchy of classes with sets of named (sometimes inherited) properties. This allows other sites to connect their own information to these terms, providing better interoperability for use in B2C, B2B, or other Web transactions.
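A taxonomy of this kind, and the simplest inference it licenses, can be sketched in a few lines. The rdfs: and rdf: vocabulary URIs below are the real RDFS/RDF terms; the "ex:" class names are invented abbreviations, and the closure loop is a toy stand-in for what an RDFS-aware processor does.

```python
# Sketch of an RDFS-style class hierarchy as triples, with a toy
# subClassOf-closure inference. The rdfs:/rdf: URIs are real vocabulary
# terms; the ex: names are invented abbreviations for the example.
RDFS_SUBCLASS = "http://www.w3.org/2000/01/rdf-schema#subClassOf"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

triples = {
    ("ex:ScienceFiction", RDFS_SUBCLASS, "ex:Fiction"),
    ("ex:Fiction", RDFS_SUBCLASS, "ex:Book"),
    ("ex:Neuromancer", RDF_TYPE, "ex:ScienceFiction"),
}

def classes_of(resource, triples):
    """All classes an instance belongs to, following subClassOf transitively."""
    found = {o for (s, p, o) in triples if s == resource and p == RDF_TYPE}
    frontier = set(found)
    while frontier:
        parents = {o for (s, p, o) in triples
                   if s in frontier and p == RDFS_SUBCLASS}
        frontier = parents - found
        found |= parents
    return found

# ex:Neuromancer is inferred to be a Fiction and a Book as well.
print(classes_of("ex:Neuromancer", triples))
```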

    OWL extends RDFS into a more capable language, usable for thesauruses and domain models. OWL is based on DAML+OIL, a Web language jointly developed by the U.S. Defense Advanced Research Projects Agency (DARPA) and the European Union's Information Science and Technology (IST) program. DAML+OIL has begun to get heavy use in the government, and in November 2001 the W3C chartered a Web Ontology Working Group to refine the DAML+OIL standard into a W3C recommendation language, now called OWL.

    OWL extends RDFS with many more constructs for defining the relationships between classes and, more important, placing restrictions on how properties (i.e., predicates) can be used when linking entities. OWL thus allows users to define simple models of their domains using these properties and their constraints. A full discussion of the language is beyond the scope of this article (see www.w3.org/2001/sw/WebOnt for more details), but revisiting the "Address" example should make a few things clearer.

    XML and OWL
    In XML we were able to say that there was a document field called a <US-address> that had subfields of a street address, a city name, a state name, and a zip code. In OWL we can explicitly name these objects as classes and properties, and place constraints on how to legally relate these entities to each other or to entities defined in other documents. Thus, for example, we could mention that cities are in states, and that each city is in one, and only one, state. We could state that a U.S. address is a type of international address where the state field is restricted to be one of Alabama, Maine, New York, and so on, and that these addresses have zip codes that consist of either five or nine numbers. We could also add the information that, in general, international addresses have country codes, and that the country code for U.S. addresses always has the value "USA," and many other such facts. (This sort of specification of these relationships in a formal language is called an ontology, hence the term ontology language for OWL.)
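To make the contrast with the earlier structural check concrete, here is a hedged sketch of those semantic constraints - a zip code of five or nine digits, a country code fixed to "USA" - in plain Python. In practice these would be stated declaratively in OWL and enforced by a reasoner; the dictionary shape and field names here are invented.

```python
# Sketch: the semantic constraints from the text (zip code is five or nine
# digits; U.S. addresses always have country code "USA") checked by hand.
# In OWL these would be declarative restrictions processed by a reasoner;
# the address shape here is invented for illustration.
import re

def semantically_valid(address):
    """address is a dict with 'zip' and 'country' keys (hypothetical shape)."""
    zip_ok = re.fullmatch(r"\d{5}(\d{4})?", address["zip"]) is not None
    country_ok = address["country"] == "USA"
    return zip_ok and country_ok

print(semantically_valid({"zip": "20500", "country": "USA"}))  # -> True
print(semantically_valid({"zip": "205", "country": "USA"}))    # -> False
```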

    Ontologies let us more precisely link to other documents and resources based on shared use of conceptual terms, even where there is only a partial match (a key difference from current XML-based approaches). Our addresses could thus be linked in turn to other vocabularies - for example, knowing that an address names a location, we could link to other location-based Web resources. These could be databases or Web services that would compute the location of the nearest airport (another kind of location) to a given address, the weather forecast for the city the address is located in, or other such location-specific data. Metadata can also be used to link nontext media to ontologies, expressing, for example, that a particular photo is of a house at a particular address or that the place to complain about the contents of a particular streaming video is in a particular state (allowing you to compare its location to yours and see if local content restrictions might apply).

    Figure 1 provides an example of the linking of a Web site to ontological information. In this case, from a presentation on OWL given at the W3C session of the World Wide Web Conference in May 2002, information about the keynote speaker is linked to information about events, photos, and people.

    The example of addresses is an extremely basic one, yet we already see a tremendous number of possible uses. By mapping the implicit semantics inherent in XML DTDs and schemas into the explicit relationships expressible in RDFS and OWL, a whole range of new applications, largely created by the linking of existing Web resources, becomes easily implementable. In the business world this kind of linking to models could be done for SEC filings, supply-chain databases, business services posting WSDL descriptions, and a virtually infinite range of others, allowing enterprise integration on a Web-wide scale. Current Semantic Web-related research is also exploring the use and extension of these RDF-based languages to express trust and authorization relationships, to do the automated discovery and composition of Web services, and to design new languages to continue to enhance the potentially revolutionary capabilities of the Semantic Web.

    The Semantic Web is being built on models based on the RDF representation of Web links. To achieve their full impact, however, the enhanced models enabled by the Semantic Web crucially need to be tied to the document-processing and data-exchange capabilities enabled by the spread of XML technologies. If XML- and RDF-based technologies were incompatible, as some people seem to think they are, it would be a true shame. But, in fact, they aren't. While the underlying models are somewhat different, the normative document exchange format for RDF, RDFS, and OWL is XML. Thus, to those preferring to think of the whole world as XML based, RDF, RDFS, and OWL may simply be thought of as yet another XML language to be managed and manipulated using the standard toolkit. To the RDF purist, the documents and datasets being expressed in XML and XML Schema can anchor their models with interoperable data. To those focused on the world of Web services, SOAP and WSDL can carry, in their XML content, RDF models expressing information that can be easily found, linked, and discovered.

    Of course, as is the case with any groups doing overlapping tasks, there is friction between some in the RDF community and some in the XML world. RDF folks often complain about the (for them) superfluous intricacies of XML. XML experts shake their heads at the way the RDF/XML serialization abuses QNames and XML Namespaces and treats certain attributes and child elements as equivalent. However, these kinds of complaints are nothing new. In fact, they're common in the XML community itself: witness the fury that some XML people express over XSLT's use of QNames as attribute content (to pick one example). Similarly, the RDF world has plenty of dark and overcomplicated corners. Both sets of languages are continuing to evolve, and each is exploring new non-XML syntaxes (consider RELAX NG, XQuery, and XPath).

    Best of the Best
    In short, the Semantic Web offers powerful new possibilities and a revolution in function. These capabilities will arrive sooner if we stop squabbling and realize that the rift between the XML- and RDF-based languages is now down to the minor sorts of technical differences easily ironed out in the standards process or kludged by designing interoperable tools. Combining the best of all these languages, and their variants, is easily enabled by the combination of the "documents" of the XML Web with the "links" expressed in RDF. Throw interoperable Web services into the mix and the vision is compelling. The future of the Web can be even more exciting than its past, and pulling all these threads together will get us there.

    Useful Links

  • W3C Semantic Web activity: www.w3.org/2001/sw
  • The RDF Schema Language Specification - Working Draft: www.w3.org/TR/rdf-schema
  • "Integrating Applications on the Semantic Web" (paper by J. Hendler, T. Berners-Lee, and E. Miller on using the Semantic Web for business applications): www.w3.org/2002/07/swint
  • W3C Web Ontology Working Group home page: www.w3.org/2001/sw/WebOnt
  • Feature Synopsis for OWL Lite and OWL - Working Draft: www.w3.org/TR/owl-features
  • "Why RDF model is different from the XML model," by T. Berners-Lee: www.w3.org/DesignIssues/RDF-XML.html
    More Stories By Dr. James Hendler

    Jim Hendler, a University of Maryland professor, is the Director of Semantic Web and agent technology at the Maryland Information and Network Dynamics Laboratory. A fellow of the American Association for Artificial Intelligence, Jim was formerly chief scientist for information systems at the U.S. Defense Advanced Research Projects Agency (DARPA) and cochairs the Web Ontology Working Group for the W3C. He has authored close to 150 technical papers in the areas of artificial intelligence, robotics, agent-based computing and high performance processing. Hendler was the recipient of a 1995 Fulbright Foundation Fellowship, is a member of the US Air Force Science Advisory Board and is a prominent player in the World Wide Web Consortium's Semantic Web Activity.

    More Stories By Bijan Parsia

    Bijan Parsia is a Semantic Web researcher at the Maryland Information and Network Dynamics Laboratory. His research interests include Web logics and rule engines, Semantic Web services, fine-grained, reflective annotation systems, and trust-focused reasoning.



