Computational XSLT for Financial Statements

A new role for Extensible Stylesheet Language Transformations?

Schema languages like XBRL (the Extensible Business Reporting Language) can define the structure of a financial statement, and the data itself can be saved as an XML instance of the schema. The data is often processed further using formulas; for example, to verify balances and to derive data for financial analysis. This processing is traditionally done in a non-XML application, but staying in the world of XML, I ask: how well can XSLT (Extensible Stylesheet Language Transformations) do these calculations, and what are the advantages? Using this language to create data is what I call Computational XSLT, and it opens the way to distributing financial formulas as a set of equivalent XSLT functions that are readable and run on any XSLT processor. No new standard is proposed; rather, a new role is suggested for XSLT 1.0/2.0 or XQuery 1.0 (the XML Query Language), and this role is compatible with data in XBRL as well as with any other XML format.

Financial Statements: Structure and Data
The items in a financial statement (e.g., current assets and fixed assets) have labels with an agreed meaning within the accounting standard being applied (e.g., US-GAAP). XBRL defines the items using XML Schema and linkbases that use XLink (the XML Linking Language). The schema defines the items and their types, and the linkbases contain additional information; for example, while the schema refers to an item by an ID, a presentation linkbase links that ID to a readable label for final presentation. Having defined the items, an instance document assigns data to the items for a particular company at particular points in time. In an XBRL instance document, this company-and-point-in-time label for an item's value is called the context of the value.
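To make this concrete, the following is a heavily abbreviated sketch of an XBRL-style instance fragment. Element names and attributes are simplified from the full XBRL 2.1 instance syntax, and the identifier scheme, item name, and values are invented for illustration:

```xml
<!-- sketch: one context (company + point in time) and one item value referring to it -->
<xbrl xmlns="http://www.xbrl.org/2003/instance"
      xmlns:iso4217="http://www.xbrl.org/2003/iso4217">
  <context id="FY2005">
    <entity>
      <identifier scheme="http://example.com/tickers">ACME</identifier>
    </entity>
    <period><instant>2005-12-31</instant></period>
  </context>
  <unit id="USD"><measure>iso4217:USD</measure></unit>
  <!-- contextRef ties the value to its company and point in time -->
  <CurrentAssets contextRef="FY2005" unitRef="USD" decimals="0">1500000</CurrentAssets>
</xbrl>
```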

Formulas
Experts in financial data create formulas. These experts must also define the logic for handling missing data and other complications; without that logic, the formulas can't be evaluated as intended. Formulas are used for data validation and analysis. There are many private and some third-party formula languages being applied to financial data. However, there is common ground, as shown by the XBRL Formula Requirements initiative, which analyzes the requirements and includes use cases to show how the proposed formula language might work. The formulas are held in calculation linkbases.

XSLT As the Formula-Processing Application
With the input data in place as an instance of an XBRL or some other schema, calculations can be made using the formulas defined in linkbases or other XML or non-XML mark-up languages. The processing application that produces the calculated data usually works from formulas in non-XML scripts, but in the world of XML, an XSLT processor is a standardized application that can transform XML instance data to calculated data using readable instructions in the XSLT language. So to harness the power of XSLT, the missing link is to transform a set of formulas into an XSLT file.

The XSLT file then provides a way of sharing a readable, run-anywhere implementation of formulas and their processing logic. For example, such a file could be shipped with data (input plus calculated) to show how the calculated data was derived; or users can apply the formulas to their own data in batch mode on a server, or individually in a browser or desktop application. Perhaps the biggest advantage of this approach is that it offers a relatively direct route from relational data to calculated data if it's used in conjunction with the XML extract-and-transform (via XSLT) support now offered by most database vendors (e.g., Oracle's XSQL API).

Objectives
Having made a case for XSLT for financial statement calculations, I will show the steps by which formulas in a schema are transformed into an XSLT file in which each formula has a corresponding XSLT function. The XSLT functions are then applied to input data to give calculated data. A simplified case study will show the essential components, and then the complexity is raised by introducing the real-world need to handle missing data. Finally, I will cover the performance of computational XSLT based on my experience in applying hundreds of formulas to thousands of statements.

Although XBRL schemas, linkbases, and instances could have been used for the case study, for brevity, and to focus on the essentials, I have merged and condensed the XBRL schema and formula linkbase into a single pseudo-schema that lists the items for both input data and calculated data (with formulas). The simplified instance document has the period as the only XBRL context.

The work was originally done using XSLT 1.0, but I have moved it to XSLT 2.0 to exploit its support for XPath 2.0 sequences, unrestricted data structures, and regular expressions. References to XSLT functions should be understood to mean a function in XSLT 2.0, a template in XSLT 1.0 (or 2.0), or a function in XQuery 1.0.

Creating XSLT from Formulas
In terms of files with self-explanatory names, the components needed to create XSLT functions from formulas in a schema are as follows. Example files are listed in the source code and will be discussed later, but first we will introduce their roles.

  • Schema.xml defines the structure of the financial statement in terms of input items and calculated items. A calculated item contains a formula used for the calculation; the formula refers to other items (input or calculated) in the schema.
  • Instance.xml is where data is assigned to the input items defined in schema.xml; the data for an item is one or more values, each with a context (period). Instance.xml also contains the contexts.
  • Compiler.xslt is hand coded and transforms the formulas in schema.xml into an XSLT file, functions.xslt, where each formula becomes an equivalent XSLT function. The "compiler" analogy refers to the analysis of schema.xml and the generation of equivalent callable XSLT functions. Each distinct schema.xml will be compiled to a distinct equivalent functions.xslt file.
To apply the formulas, the end user needs schema.xml to create a compatible instance.xml, and functions.xslt, whose functions they will call from their own XSLT. In the source code, host.xslt shows how the functions in functions.xslt are used.

The interactions of these components are shown in Figure 1, which emphasizes the two distinct roles of XSLT in the whole process: as a compiler to create functions.xslt, and for numeric computations when using functions.xslt. The compilation must be re-run after any change to a formula and is unlikely to be time-critical; but the numeric computations, which may be run in a browser, desktop, or server application, should be as efficient as possible.
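To make the compilation step concrete, here is a minimal sketch of what the core of compiler.xslt might look like. It assumes schema items of the form `<item id="F10" formula="$F1 + $F2" type="calc"/>`, with formulas that reference input items only (as in the F10 example of the case study); the namespace URIs are invented, and the actual compiler.xslt in the source code will differ in detail:

```xml
<!-- compiler.xslt (sketch): generates functions.xslt from schema.xml.
     xsl:namespace-alias lets us write xsl: elements into the output. -->
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:formula="urn:example:formula"
    xmlns:helper="urn:example:helper"
    xmlns:out="urn:example:output-alias">
  <xsl:namespace-alias stylesheet-prefix="out" result-prefix="xsl"/>
  <xsl:output indent="yes"/>

  <xsl:template match="/">
    <out:stylesheet version="2.0">
      <xsl:apply-templates select="//item[@type = 'calc']"/>
    </out:stylesheet>
  </xsl:template>

  <!-- each calculated item becomes an equivalent callable function -->
  <xsl:template match="item">
    <out:function name="formula:{@id}" as="xs:double">
      <out:param name="context_id" as="xs:string"/>
      <!-- one variable per distinct $argument appearing in the formula -->
      <xsl:for-each select="distinct-values(
          tokenize(@formula, '[^$\w]+')[starts-with(., '$')])">
        <out:variable name="{substring(., 2)}" as="xs:double"
          select="helper:get_input_value('{substring(., 2)}', $context_id)"/>
      </xsl:for-each>
      <out:sequence select="{@formula}"/>
    </out:function>
  </xsl:template>
</xsl:stylesheet>
```

Because `$` is not special in attribute value templates, the literal `$context_id` passes through to the generated select expression unchanged, while the `{...}` expressions are evaluated at compile time.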

Case Study: Simplified Use of Computational XSLT 2.0
The simplification is that missing (null) data is treated as having a value of zero. The file schema.xml (Listing 1) has three input and four calculated items. When transformed to functions.xslt (Listing 2), each calculated item becomes an <xsl:function> element with the same name as the item. In functions.xslt the namespace prefix formula is used for these auto-generated formula-derived functions to distinguish them from the fixed helper functions (namespace prefix helper) they call (see below). The input data in instance.xml (Listing 3) is assumed to be for a single company.
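Listing 1 is not reproduced here, but given the F10 item quoted from it later in this section, a plausible shape for schema.xml is the following sketch (the other calculated items' ids and formulas are invented for illustration):

```xml
<!-- schema.xml (sketch): three input items and four calculated items -->
<schema>
  <item id="F1"  type="input"/>
  <item id="F2"  type="input"/>
  <item id="F3"  type="input"/>
  <item id="F10" formula="$F1 + $F2"   type="calc"/>
  <item id="F11" formula="$F2 + $F3"   type="calc"/>
  <item id="F12" formula="$F10 - $F3"  type="calc"/>
  <item id="F13" formula="$F10 + $F11" type="calc"/>
</schema>
```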

The structure of a formula function in functions.xslt is simple: it has one parameter, the context_id, a reference to the period of the statement being evaluated in instance.xml. The function contains an XSLT variable for each distinct argument in the item's formula, and it returns the formula, evaluated as written. For example, formula:F10 is the formula function for:

<item id="F10" formula="$F1 + $F2" type="calc"/>
and formula:F10 contains variables $F1 and $F2, created like this:

<xsl:variable name="F1" select="helper:get_input_value('F1', $context_id)" as="xs:double"/>
and it returns:

<xsl:sequence select="$F1 + $F2"/>
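The fixed helper function that these variables call might look like the following sketch. It assumes instance.xml stores values as <value item="..." contextRef="..."> elements and that a global $instance variable holds the instance document; both are assumptions, since Listing 3 is not reproduced here. Missing data is returned as zero, per the case-study simplification:

```xml
<!-- helper:get_input_value (sketch): look up an input item's value for a context -->
<xsl:function name="helper:get_input_value" as="xs:double">
  <xsl:param name="item_id" as="xs:string"/>
  <xsl:param name="context_id" as="xs:string"/>
  <!-- find the value for this item in the requested context -->
  <xsl:variable name="v" select="$instance//value
      [@item eq $item_id][@contextRef eq $context_id]"/>
  <!-- simplified case study: treat missing data as zero -->
  <xsl:sequence select="if ($v) then xs:double($v[1]) else 0.0"/>
</xsl:function>
```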

About Edmund Gimzewski

Edmund Gimzewski has 15 years of experience in developing software for the financial sector and has been working with XML since its inception. In the last five years he has specialized in XML-centric systems for general software development and for defining and applying financial and other calculations.
Edmund formerly worked as a research scientist in thermodynamics and has published nearly a dozen papers in the field (see Thermochimica Acta).
