AIRO (AI Risk Ontology) is an ontology for expressing risks associated with AI systems, based on the requirements of the proposed AI Act and the ISO 31000 series of standards. AIRO assists stakeholders in determining whether AI systems are "high-risk", maintaining and documenting risk information, performing impact assessments, and achieving conformity with AI regulations.


The AI Act aims to avoid harmful impacts of AI on critical areas such as health, safety, and fundamental rights by laying down obligations that are proportionate to the type and severity of risk posed by the system. It distinguishes specific areas, and applications of AI within them, that constitute "high-risk", and imposes additional obligations (Art. 6) requiring providers of high-risk AI systems to identify and document risks associated with those systems at all stages of development and deployment (Art. 9).

Existing risk management practices consist of maintaining, querying, and sharing risk-related information for compliance checking, demonstrating accountability, and building trust. Maintaining information about risks of AI systems is a complex task given the rapid pace at which the field progresses, as well as the complexities of AI lifecycle and data governance processes, in which several entities are involved and need to share information for risk assessments. In turn, investigations based on this information are difficult to perform, which makes auditing and compliance assessment a challenge for organisations and authorities. To address some of these issues, the AI Act relies on the creation of standards that alleviate some of the compliance-related obligations and tasks (Art. 40).

We propose an ontology-based approach to the information required to be maintained and used for compliance and conformance with the AI Act, utilising open data specifications for documenting risks and performing AI risk assessment activities. Such data specifications utilise interoperable machine-readable formats to enable automation in information management, querying, and verification for self-assessments and third-party conformity assessments. Additionally, they enable automated tools for supporting AI risk management that can both import and export information to be shared with stakeholders, such as AI users, providers, and authorities.

AIRO is an ontology for expressing risk of harm associated with AI systems, based on the proposed EU AI Act and the key standards in the ISO 31000 series, including ISO 31000:2018 Risk management — Guidelines and ISO 31073:2022 Risk management — Vocabulary. AIRO assists with expressing risks of AI systems as per the requirements of the AI Act in a machine-readable, formal, and interoperable manner through the use of semantic web technologies.


The purpose of AIRO is to enable organisations to represent their AI systems and the associated AI risks, and to determine whether those systems are "high-risk" as per Annex III of the AI Act.

We analysed the requirements of the AI Act, in particular the list of high-risk systems in Annex III, and identified the specific concepts whose combinations determine whether an AI system is considered high-risk. These are listed in Table 1 in the form of competency questions, concepts, and relations with the AI system.

Table 1: Questions necessary to determine whether an AI system is high-risk according to Annex III
ID Competency Question Concept Relation
1 In which domain is the AI system used? Domain isAppliedWithinDomain
2 What is the purpose of the AI system? Purpose hasPurpose
3 What is the capability of the AI system? AICapability hasCapability
4 Who is the user of the AI system? AIUser isUsedBy
5 Who is the AI subject? AISubject hasAISubject
Figure 1: Concepts required for determining high-risk AI applications as per Annex III

To specify the conditions under which the use of an AI system is classified into the high-risk category, we determine the values of the identified concepts by answering the five questions for each clause in Annex III. The combinations of values, which can be treated as rules for high-risk uses, are represented in Figure 2.
If an AI system meets at least one of the conditions, it is considered high-risk unless (i) its provider demonstrates that "the output of the system [is] purely accessory in respect of the relevant action or decision to be taken and is not therefore likely to lead to a significant risk to the health, safety or fundamental rights" (Art. 6(3)), or (ii) it is put into service by a small-scale provider in the public or private sector for its own use to assess creditworthiness, determine credit scores, or assess and price risk for health and life insurance (Annex III, pt. 5(a) and 5(b)).
Figure 2: Describing Annex III high-risk conditions using the five concepts
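
As an illustration, the five concepts can be instantiated in Turtle for a hypothetical use case: a CV-scoring system used for recruitment (cf. Annex III, pt. 4). All instance IRIs and the ex: namespace are illustrative, and the airo: prefix is assumed to resolve to the AIRO namespace.

```turtle
@prefix airo: <https://w3id.org/airo#> .   # assumed AIRO namespace
@prefix ex:   <http://example.com/ns#> .   # illustrative namespace

# Hypothetical CV-scoring system described via the five Annex III concepts
ex:CVScoringSystem a airo:AISystem ;
    airo:isAppliedWithinDomain ex:Employment ;
    airo:hasPurpose ex:CandidateEvaluation ;
    airo:hasCapability ex:CVScoring ;
    airo:isUsedBy ex:RecruitmentAgency ;
    airo:hasAISubject ex:JobApplicant .

ex:Employment a airo:Domain .
ex:CandidateEvaluation a airo:Purpose .
ex:CVScoring a airo:AICapability .
ex:RecruitmentAgency a airo:AIUser .
ex:JobApplicant a airo:AISubject .
```

Comparing such descriptions against the value combinations in Figure 2 is what enables the high-risk determination to be automated.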


Core Concepts and Relations

AIRO’s core concepts and relations are illustrated in Figure 3. The upper half shows the main concepts required for describing an AI System (green boxes), and the lower half represents key concepts for expressing Risk (purple boxes). The relation hasRisk links these two halves by connecting risk to either an AI system or a component of the system.

AIRO concepts
Figure 3: AIRO core concepts and relations
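
The risk half of the model can be sketched in Turtle as a causal chain from risk source through risk and consequence to impact. The instance IRIs below are hypothetical, and the airo: namespace IRI is an assumption.

```turtle
@prefix airo: <https://w3id.org/airo#> .   # assumed AIRO namespace
@prefix ex:   <http://example.com/ns#> .   # illustrative namespace

# A hypothetical risk chain for a CV-scoring system
ex:CVScoringSystem a airo:AISystem ;
    airo:hasRisk ex:DiscriminationRisk .

ex:BiasedTrainingData a airo:RiskSource ;
    airo:isRiskSourceFor ex:DiscriminationRisk .

ex:DiscriminationRisk a airo:Risk ;
    airo:hasConsequence ex:UnfairRejection .

ex:UnfairRejection a airo:Consequence ;
    airo:hasImpact ex:DiscriminationImpact .

ex:DiscriminationImpact a airo:Impact ;
    airo:hasImpactOnArea ex:FundamentalRights ;
    airo:hasImpactOnStakeholder ex:JobApplicant .
```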

Namespace declarations

Table 3: Namespaces used in the document


AI System

Term AISystem
Label AI System
Definition An engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy [NIST]
Source AI Act proposal
SubClassOf prov:Entity


Term Domain
Label Domain
Definition The domain, sector, or industry within which an AI system is applied


Term Purpose
Label Purpose
Definition Refers to the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation. [AI Act, Art. 3(12)]
Source AI Act proposal

AI Capability

Term AICapability
Label AI Capability
Definition The capability of an AI system that enables realisation of the system's purposes

AI Technique

Term AITechnique
Label AI Technique
Definition Approach or technique used in development of an AI system

AI Lifecycle Phase

Term AILifecyclePhase
Label AI Lifecycle Phase
Definition A phase of the AI lifecycle, indicating the evolution of the system from conception through retirement

AI Component

Term AIComponent
Label AI Component
Definition Component (element) of an AI system


Term Output
Label Output
Definition Output of an AI system


Term Event
Label Event
Definition Occurrence or change of a particular set of circumstances
Source ISO 31000, 3.5

Risk Source

Term RiskSource
Label Risk Source
Definition An element that has the potential to give rise to a risk
SubClassOf airo:Event
Source ISO 31000, 3.4


Term Risk
Label Risk
Definition Risk of harm associated with an AI system
SubClassOf airo:Event


Term Consequence
Label Consequence
Definition Outcome of an event affecting objectives
SubClassOf airo:Event
Source ISO 31000, 3.6


Term Impact
Label Impact
Definition Outcome of a consequence on persons, groups, facilities, environment, etc.
SubClassOf airo:Consequence

Area Of Impact

Term AreaOfImpact
Label Area Of Impact
Definition Areas that can be affected by an AI system


Term Control
Label Control
Definition A measure that maintains and/or modifies risk
Source ISO 31000, 3.8


Term Document
Label Document
Definition A piece of written, printed, or electronic matter that provides information or evidence [from Oxford Languages dictionary]
SubClassOf prov:Entity


Term Standard
Label Standard
Definition A resource, established by consensus and approved by a recognized body, that provides, for common and repeated use, rules, guidelines or characteristics for activities or their results, aimed at the achievement of the optimum degree of order in a given context [ISO/IEC TR 29110-1:2016(en), 3.59]
SubClassOf prov:Entity


Term Stakeholder
Label Stakeholder
Definition Represents any individual, group or organization that can affect, be affected by or perceive itself to be affected by a decision or activity [ISO/IEC TR 29110-1:2016(en), 3.59]
SubClassOf prov:Entity

AI Provider

Term AI Provider
Label AI Provider
Definition A natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge [AI Act, Common position, Art.3(2)]
SubClassOf airo:AIOperator

AI User

Term AIUser
Label AI User
Definition Any natural or legal person under whose authority the system is used [AI Act, Common position, Art.3(4)]
SubClassOf airo:AIOperator

AI Subject

Term AISubject
Label AI Subject
Definition An entity that is subjected to the use of AI
SubClassOf airo:Stakeholder

Affected Stakeholder

Term AffectedStakeholder
Label Affected Stakeholder
Definition An entity that is affected by AI
SubClassOf airo:Stakeholder

AI Operator

Term AIOperator
Label AI Operator
Definition The provider, the product manufacturer, the user, the authorised representative, the importer or the distributor [AI Act, Common position, Art.3(8)]
SubClassOf airo:Stakeholder


Term Version
Label Version
Definition A unique number or name that is assigned to a unique state of an AI system


Term Characteristic
Label Characteristic


Term Likelihood
Label Likelihood
Definition Chance of an event happening
Source ISO 31000, 3.7


Term Severity
Label Severity
Definition The level of severity of an event, reflecting the level of potential harm


is applied within domain

Term isAppliedWithinDomain
Label is applied within domain
Definition Specifies the domain an AI system is used within
Domain airo:AISystem
Range airo:Domain

has purpose

Term hasPurpose
Label has purpose
Definition Indicates the intended purpose of an AI system
Domain airo:AISystem
Range airo:Purpose

has capability

Term hasCapability
Label has capability
Definition Specifies capabilities implemented within an AI system to materialise its purposes
Domain airo:AISystem
Range airo:AICapability

uses technique

Term usesTechnique
Label uses technique
Definition Indicates the AI techniques used in an AI system
Domain airo:AISystem
Range airo:AITechnique

produces output

Term producesOutput
Label produces output
Definition Specifies an output generated by an AI system
Domain airo:AISystem
Range airo:Output

has component

Term hasComponent
Label has component
Definition Indicates components of an AI system
Domain airo:AISystem
Range airo:AIComponent

has risk

Term hasRisk
Label has risk
Definition Indicates risks associated with an AI system, an AI component, etc.
Range airo:Risk

is risk source for

Term isRiskSourceFor
Label is risk source for
Definition Specifies risks caused by materialisation of a risk source
Domain airo:RiskSource
Range airo:Risk

has consequence

Term hasConsequence
Label has consequence
Definition Specifies consequences caused by materialisation of a risk
Domain airo:Risk
Range airo:Consequence

has impact

Term hasImpact
Label has impact
Definition Specifies impacts caused by materialisation of a consequence
Domain airo:Consequence
Range airo:Impact

has impact on area

Term hasImpactOnArea
Label has impact on area
Definition Specifies the area that is affected by an AI impact
Domain airo:Impact
Range airo:AreaOfImpact

has impact on stakeholder

Term hasImpactOnStakeholder
Label has impact on stakeholder
Definition Specifies stakeholders that are affected by an AI impact
Domain airo:Impact
Range airo:AffectedStakeholder

modifies event

Term modifiesEvent
Label modifies event
Definition Indicates the control used for modification of an event
Domain airo:Control
Range airo:Event

detects event

Term detectsEvent
Label detects event
Definition Indicates the control used for detecting an event
Domain airo:Control
Range airo:Event
SubPropertyOf modifiesEvent

eliminates event

Term eliminatesEvent
Label eliminates event
Definition Indicates the control used for eliminating an event
Domain airo:Control
Range airo:Event
SubPropertyOf modifiesEvent

mitigates event

Term mitigatesEvent
Label mitigates event
Definition Indicates the control used for mitigating an event
Domain airo:Control
Range airo:Event
SubPropertyOf modifiesEvent

is followed by control

Term isFollowedByControl
Label is followed by control
Definition Specifies the order of controls
Domain airo:Control
Range airo:Control

is part of control

Term isPartOfControl
Label is part of control
Definition Specifies composition of controls
Domain airo:Control
Range airo:Control
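
As a sketch of how the control relations compose, the following Turtle (with hypothetical instance IRIs and an assumed airo: namespace) describes a control that detects a risk source and is followed by a mitigating control that forms part of a larger treatment plan.

```turtle
@prefix airo: <https://w3id.org/airo#> .   # assumed AIRO namespace
@prefix ex:   <http://example.com/ns#> .   # illustrative namespace

# A detection control, followed by a mitigation control
ex:BiasDetectionTest a airo:Control ;
    airo:detectsEvent ex:BiasedTrainingData ;
    airo:isFollowedByControl ex:DataRebalancing .

ex:DataRebalancing a airo:Control ;
    airo:mitigatesEvent ex:DiscriminationRisk ;   # Risk is a subclass of Event
    airo:isPartOfControl ex:RiskTreatmentPlan .

ex:RiskTreatmentPlan a airo:Control .
```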

has documentation

Term hasDocumentation
Label has documentation
Definition Indicates documents related to an entity, e.g. AI system
Range airo:Document

conforms to standard

Term conformsToStandard
Label conforms to standard
Definition Indicates the standard that an entity, e.g. an AI system, conforms to
Range airo:Standard

has stakeholder

Term hasStakeholder
Label has stakeholder
Definition Indicates stakeholders of an AI system
Domain airo:AISystem
Range airo:Stakeholder

is provided by

Term isProvidedBy
Label is provided by
Definition Indicates provider of an AI system
Domain airo:AISystem
Range airo:AIProvider

is used by

Term isUsedBy
Label is used by
Definition Indicates user of an AI system
Domain airo:AISystem
Range airo:AIUser

has AI subject

Term hasAISubject
Label has AI subject
Definition Indicates subject of an AI system
Domain airo:AISystem
Range airo:AISubject


Term affects
Label affects
Definition Indicates the stakeholders affected by the AI system
Domain airo:AISystem
Range airo:AffectedStakeholder

has version

Term hasVersion
Label has version
Definition Indicates the version of an AI system
Domain airo:AISystem
Range airo:Version

has severity

Term hasSeverity
Label has severity
Definition Indicates severity of a consequence or an impact
Domain airo:Consequence or airo:Impact
Range airo:Severity

has likelihood

Term hasLikelihood
Label has likelihood
Definition Indicates the probability of occurrence of an event
Domain airo:Event
Range airo:Likelihood
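
Severity and likelihood can then be attached to the relevant events, for instance as follows (hypothetical instance IRIs; the airo: namespace IRI is assumed):

```turtle
@prefix airo: <https://w3id.org/airo#> .   # assumed AIRO namespace
@prefix ex:   <http://example.com/ns#> .   # illustrative namespace

# Attaching likelihood to a risk and severity to an impact
ex:DiscriminationRisk a airo:Risk ;
    airo:hasLikelihood ex:LowLikelihood .

ex:DiscriminationImpact a airo:Impact ;
    airo:hasSeverity ex:HighSeverity .

ex:LowLikelihood a airo:Likelihood .
ex:HighSeverity a airo:Severity .
```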

has lifecycle phase

Term hasLifecyclePhase
Label has lifecycle phase
Definition Indicates the AI system's lifecycle phase
Domain airo:AISystem
Range airo:AILifecyclePhase

AIRO Usage and Application


To be published.

Identification of High-risk AI Systems

To assist in determining whether a system would be considered a high-risk AI system under the AI Act, the concepts presented in Table 1 need to be retrieved for the use case and compared against the specific criteria described in Annex III.

For demonstration, we first utilise a SPARQL query, depicted below, to list the concepts necessary to determine whether the system is high-risk.

PREFIX airo: <>
SELECT ?system ?domain ?purpose ?capability ?user ?subject
WHERE {
  ?system a airo:AISystem ;
          airo:isAppliedWithinDomain ?domain ;
          airo:hasPurpose ?purpose ;
          airo:hasCapability ?capability ;
          airo:isUsedBy ?user ;
          airo:hasAISubject ?subject .
}
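
The retrieved values can then be compared against the Annex III rules. A minimal sketch of such a comparison, assuming hypothetical instance IRIs and the airo: namespace IRI, is an ASK query that tests whether a system matches one specific combination of values (here, an AI system used for candidate evaluation in the employment domain, cf. Annex III, pt. 4):

```sparql
PREFIX airo: <https://w3id.org/airo#>
PREFIX ex:   <http://example.com/ns#>

# Returns true if the (hypothetical) system matches this Annex III condition
ASK {
  ex:CVScoringSystem a airo:AISystem ;
      airo:isAppliedWithinDomain ex:Employment ;
      airo:hasPurpose ex:CandidateEvaluation .
}
```

One such query per combination in Figure 2 would cover the full set of high-risk conditions.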


1. Artificial Intelligence Act: Proposal for a regulation of the European Parliament and the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (2021).
2. ISO 31000:2018 Risk management — Guidelines (2018).
3. ISO/IEC DIS 22989(en) Information technology — Artificial intelligence — Artificial intelligence concepts and terminology (2022).


This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813497, as part of the ADAPT SFI Centre for Digital Media Technology, which is funded by Science Foundation Ireland through the SFI Research Centres Programme and co-funded under the European Regional Development Fund (ERDF) through Grant#13/RC/2106_P2.