Introduction
This Portal gives an overview of ISO 15926 and then points to places where detailed information about all (or most) of its aspects can be found.
Scope
The scope of ISO 15926 is "Integration of life-cycle data for process plants, including oil and gas production facilities".
Of course that is not a goal in itself, but it serves two main purposes:
- global semantic interoperability, defined as:
"Semantic interoperability is the ability of computer systems to exchange information with unambiguous, shared meaning.
It is a requirement to enable inferencing, knowledge discovery, and data federation between information systems."
This allows applications and systems, used globally during the life of a facility, to share information.
- archiving, collecting the information required for:
- interoperability during day-to-day activities
- reasoning with integrated data
- CAE - Computer Aided Engineering
- Digital Twin software
- knowledge mining
- operations optimization
- root cause analysis
- the next revamp
- etc

Overview
The various aspects, shown on this diagram, will be detailed in order of appearance:
- Foundation concepts
- General concepts
- Discipline-specific concepts
- Parts 7/8 templates
- Mapping
- Application model and data
- Adapter - export side
- SHACL
- SPARQL query
- Adapter - import side
- API
- RDF Triple Store
- OWL File for reasoning
- In closing
Foundation concepts
The "Part 2 Upper Ontology" refers to the data model defined in ISO 15926-2, which will, in the next version of Part 2, be extended with entity types that detail PhysicalObjects. This model is very generic and strongly typed. For example, it defines PhysicalObject but not Pump; the latter comes later, in the General concepts. All entity types defined here are specializations of Thing, and no matter how far we proceed in this Portal, that line of descent is maintained throughout.
A special feature of the Part 2 data model is that it has 122 generic Relationship classes that, together with 93 generic classes, form the basis of its Upper Ontology. See here for an overview of those Relationships.
General concepts
The above generic model is extended with, still generic, instances of the Part 2 entity types. For example CENTRIFUGAL PUMP is an instance of ClassOfInanimatePhysicalObject. And it has many specializations that together form a class hierarchy or taxonomy. All this is done in the RDL - Reference Data Library. Try the first set of specializations of PUMP here and all 258 specializations here (this and others can be found at RDL2 views ).
Each of the 21,206 classes in the ISO 15926-4 RDL, plus 18,346 extensions for industry standards, has a URI, accessible via the internet, and is typed with a Part 2 entity type. Pumps and other items are generically defined, so supplier-, standardization body-, or project-specific pumps (or other object classes) are not defined in the RDL, but in extensions thereof.
Visit Reference data for further details.
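The specialization pattern described above can be sketched in Turtle. All URIs and labels below are illustrative placeholders, not the actual RDL identifiers, and rdfs:subClassOf stands in for specialization, as in the OWL rendering of the RDL:

```turtle
# Hypothetical namespaces; the published RDL uses different URIs.
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
@prefix part2: <http://example.org/iso15926-2#> .
@prefix rdl:   <http://example.org/rdl#> .

rdl:PUMP a part2:ClassOfInanimatePhysicalObject ;
    rdfs:label "PUMP" .

rdl:CENTRIFUGAL_PUMP a part2:ClassOfInanimatePhysicalObject ;
    rdfs:subClassOf rdl:PUMP ;   # CENTRIFUGAL PUMP specializes PUMP
    rdfs:label "CENTRIFUGAL PUMP" .
```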
Discipline-specific concepts
Extensions of the RDL can be made by anybody who has private classes that are further specializations of one or more of the above generic RDL classes, thus extending the taxonomy.
Examples are:
Such specialization can also be indirectly defined, as shown in the diagram here
At the bottom of such a taxonomy, which stretches from Thing downwards, is:
- a Requirements Class, usually defined with a technical specification
- a Product Class, as defined by
- a product specification of a manufacturer/supplier, or
- a product class that has been further specialized by configuration and pricing in a quotation.
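Such a chain, from a generic RDL class down to a requirements class and a product class, could look as follows in Turtle. All namespaces, class names, and the subClassOf rendering of specialization are illustrative assumptions:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rdl:  <http://example.org/rdl#> .
@prefix acme: <http://example.org/acme#> .

# Requirements class, defined with a technical specification
acme:API_610_PROCESS_PUMP rdfs:subClassOf rdl:CENTRIFUGAL_PUMP ;
    rdfs:label "API 610 PROCESS PUMP" .

# Product class of a (fictitious) manufacturer
acme:ACME_X100_PUMP rdfs:subClassOf acme:API_610_PROCESS_PUMP ;
    rdfs:label "ACME X100 PUMP" .
```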
Parts 7/8 templates
All above classes are possible ingredients for Templates, as defined in ISO 15926-7 and implemented in line with ISO 15926-8. At present some 200 templates have been specified in Template Specifications which can be found here
Where Part 2 is the Upper Ontology, these Templates are Elementary Ontologies. Information, as represented by data elements in applications, can be mapped to instances of these Template classes. The template axiom defines whether such a mapping is valid. See SHACL below.
Mapping
The result of mapping is a set of:
- declarations
- template instances
The problem with mapping is that there is a shortage of data engineers. It is expected that AI - Artificial Intelligence can help to find a solution, for instance by comparing a definition of a data element in a source application with the set of template definitions in the template specifications. See here for a problem description.
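As a sketch of such a mapping result, the Turtle below contains one declaration and one template instance. The template name, its role names, and all namespaces are hypothetical, chosen only to illustrate the shape of the data:

```turtle
@prefix rdl: <http://example.org/rdl#> .
@prefix tpl: <http://example.org/templates#> .
@prefix ex:  <http://example.org/plant#> .

# Declaration: tag P-101 is a centrifugal pump
ex:P-101 a rdl:CENTRIFUGAL_PUMP .

# Template instance: P-101 has a design pressure of 1.6 (on the megapascal scale)
ex:t001 a tpl:IndirectPropertyScaleReal ;
    tpl:hasPossessor    ex:P-101 ;
    tpl:hasPropertyType rdl:DESIGN_PRESSURE ;
    tpl:valDatum        "1.6"^^<http://www.w3.org/2001/XMLSchema#double> ;
    tpl:hasScale        rdl:MEGAPASCAL .
```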
Application model and data
The application model can be anything, as long as it has a defined structure and, preferably, a data dictionary of some sort. That data dictionary is expected to contain a definition of the semantics of each data element. That definition forms the basis for the mapping. Right now this is still a human activity, but with the rise of machine learning and AI it may become possible to let software select the applicable template class. (As an aside: the CFIHOS data model has a good data dictionary.)
Adapter - export side
Each ISO 15926-compliant application shall have an ISO 15926 Adapter. That Adapter has an export function and an import function, both of which use the mapping software described above.
SHACL
The exported ISO 15926-8 file shall be validated before being uploaded to a triple store. For this, software implementing the W3C Recommendation Shapes Constraint Language (SHACL) shall be used. See also here.
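A minimal SHACL shape for the hypothetical template instances sketched under Mapping could look like this (only the sh: namespace is real; the template and role names are assumptions):

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix tpl: <http://example.org/templates#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

tpl:IndirectPropertyScaleRealShape a sh:NodeShape ;
    sh:targetClass tpl:IndirectPropertyScaleReal ;
    # every instance needs exactly one possessor ...
    sh:property [
        sh:path tpl:hasPossessor ;
        sh:minCount 1 ; sh:maxCount 1 ;
    ] ;
    # ... and exactly one numeric datum
    sh:property [
        sh:path tpl:valDatum ;
        sh:datatype xsd:double ;
        sh:minCount 1 ; sh:maxCount 1 ;
    ] .
```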
SPARQL query
The ISO 15926-8 file is an RDF file in Turtle (preferred) or RDF/XML format. This file is converted to N-Triples and stored in a Triple Store.
The query language for that is SPARQL. Most SPARQL implementations produce a query result in JSON format.
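A SELECT query against template instances of the kind sketched under Mapping could look like this (all namespaces and template names are illustrative assumptions):

```sparql
PREFIX tpl: <http://example.org/templates#>
PREFIX ex:  <http://example.org/plant#>

# Retrieve the design-pressure value and scale recorded for pump P-101
SELECT ?value ?scale
WHERE {
  ?t a tpl:IndirectPropertyScaleReal ;
     tpl:hasPossessor ex:P-101 ;
     tpl:valDatum     ?value ;
     tpl:hasScale     ?scale .
}
```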
Adapter - import side
One of the important goals of ISO 15926 is to achieve semantic interoperability: not only exporting generated data, but also importing data produced by another application, recently or perhaps twenty years ago, and now required for a revamp project. The adapter must map particular query results (in JSON) to the internal format and naming conventions of the importing application.
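As a minimal sketch of that import-side mapping, the Python below takes a SPARQL result in the W3C SPARQL 1.1 Query Results JSON Format and converts it to the internal field names of an importing application; the binding names, values, and target field names are all hypothetical:

```python
import json

# A SPARQL SELECT result in the W3C "SPARQL 1.1 Query Results JSON Format".
# Binding names, values, and the target field names are illustrative.
result_json = """
{
  "head": {"vars": ["value", "scale"]},
  "results": {"bindings": [
    {"value": {"type": "literal", "value": "1.6",
               "datatype": "http://www.w3.org/2001/XMLSchema#double"},
     "scale": {"type": "uri", "value": "http://example.org/rdl#MEGAPASCAL"}}
  ]}
}
"""

def to_internal(doc: str) -> list[dict]:
    """Map SPARQL JSON bindings to the importing application's
    (hypothetical) internal field names."""
    rows = []
    for b in json.loads(doc)["results"]["bindings"]:
        rows.append({
            "design_pressure": float(b["value"]["value"]),
            "unit": b["scale"]["value"].rsplit("#", 1)[-1],
        })
    return rows

print(to_internal(result_json))   # [{'design_pressure': 1.6, 'unit': 'MEGAPASCAL'}]
```

In a real adapter the renaming rules would come from the same mapping specification that drives the export side, so that a round trip preserves meaning.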
API
This is about the API services that are required for the triple store. The design of ISO 15926-9 hasn't started yet, but the requirements have been formulated here.
RDF Triple Store
A Triple Store (or Quad Store, as preferred by some) is basically a one-table database with three (or four) data fields. The larger commercially available triple stores can store and handle one trillion triples, see here.
OWL File for reasoning
Storing the life-cycle information of a plant, its components, its streams, and its activities will, in the long run, become a treasure trove of hitherto unknown knowledge, for example in the context of energy optimization. Using SPARQL, the relevant information of a domain of discourse can be collected and mapped to OWL for reasoning purposes. Such information could also be used for AI. But this lies in the future, because that information must first be accumulated over some years.
In closing
Another possible use of the above is that, when you use a Workflow System, a dedicated SPARQL query can be designed for each task. When, according to the planning, a task must be executed, the query can be launched and the data imported. The resulting data can then be mapped and uploaded to the triple store.
A possible configuration for a project could look like this one (note the integration of an EDMS):
