Analysis of the AI Act's Annex IV on Technical Documentation

Delaram Golpayegani, Isabelle Hupont, Cecilia Panigutti, Harshvardhan J. Pandit, Sven Schade, Dave Lewis

Copyright © 2024 the document editors/authors. This work is available under the Creative Commons Attribution 4.0 International Public License; additional terms may apply

1. General description of the AI system

(a) its intended purpose, the person(s) developing the system, the date and the version of the system;

ID Information requirement
R1a-1 AI system's intended purpose
AR1a-2 AI capability(ies)
R1a-3 AI developer(s)
AR1a-4 AI provider(s)
R1a-5 AI system's release date
R1a-6 AI system's version

(b) how the AI system interacts or can be used to interact with hardware or software that is not part of the AI system itself, where applicable;

R1b-1 External software the AI system interacts with
R1b-2 Details of interaction with external software
R1b-3 External hardware the AI system interacts with
R1b-4 Details of interaction with external hardware
R1b-5 External software that can be interacted with using the AI system
R1b-6 Details of interaction with external software through the AI system
R1b-7 External hardware that can be interacted with using the AI system
R1b-8 Details of interaction with external hardware through the AI system

(c) the versions of relevant software or firmware and any requirement related to version update;

AR1c-1 AI system's version release note
R1c-1-1 AI version update's software requirements (dependencies)
R1c-1-1-1 Software the AI system is dependent on
R1c-1-1-2 Version of the software the AI system is dependent on
AR1c-1-2 AI version update's hardware requirements
R1c-1-3 AI version update's firmware requirements
R1c-1-3-1 Firmware the AI system is dependent on
R1c-1-3-2 Version of the firmware the AI system is dependent on
R1c-1-4 AI version update's additional requirements
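The version and dependency items above (AR1c-1 through R1c-1-4) amount to a structured release note. A minimal sketch of such a record in Python, assuming a hypothetical layout (the field names are ours, not prescribed by Annex IV):

```python
from dataclasses import dataclass, field

@dataclass
class Dependency:
    """One software or firmware dependency (R1c-1-1-1/2, R1c-1-3-1/2)."""
    name: str
    version: str
    kind: str  # "software" or "firmware"

@dataclass
class ReleaseNote:
    """AI system version release note (AR1c-1), hypothetical layout."""
    system_version: str
    dependencies: list = field(default_factory=list)           # R1c-1-1
    hardware_requirements: list = field(default_factory=list)  # AR1c-1-2
    additional_requirements: list = field(default_factory=list)  # R1c-1-4

note = ReleaseNote(
    system_version="2.1.0",
    dependencies=[Dependency("onnxruntime", "1.17.0", "software")],
    hardware_requirements=["GPU with >= 8 GB VRAM"],
)
```

Capturing the note as data rather than prose makes it straightforward to diff dependency sets between versions when documenting version updates.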

(d) the description of all forms in which the AI system is placed on the market or put into service;

R1d-1 Form(s) (modalities) in which the AI system is placed on the market or put into service
R1d-2 Description of each form in which the AI system is placed on the market or put into service

(e) the description of hardware on which the AI system is intended to run;

R1e-1 Hardware (components) required for running the AI system
R1e-2 Description of the hardware (components) required for running the AI system

(f) where the AI system is a component of products, photographs or illustrations showing external features, marking and internal layout of those products;

R1f-1 Entities (products) of which the AI system is a component, documented via photographs or illustrations
R1f-2 External features of the entity of which the AI system is a component
R1f-3 Marking of the entity of which the AI system is a component
R1f-4 Internal layout of the entity of which the AI system is a component

(g) instructions of use for the user and, where applicable, installation instructions;

R1g-1 Instructions for use
R1g-2 Installation instructions
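Taken together, the point 1 items define a record structure for the system's general description. A toy sketch of such a record, with a completeness check; all keys and values are illustrative, not mandated by Annex IV:

```python
# Hypothetical machine-readable "general description" record covering the
# Annex IV point 1 items (R1a-* to R1g-*); key names are our own.
general_description = {
    "intended_purpose": "Triage support for radiology referrals",  # R1a-1
    "capabilities": ["image classification"],                      # AR1a-2
    "developers": ["Example Medical AI Ltd."],                     # R1a-3
    "providers": ["Example Medical AI Ltd."],                      # AR1a-4
    "release_date": "2024-05-01",                                  # R1a-5
    "version": "2.1.0",                                            # R1a-6
    "external_software": [                                         # R1b-1/2
        {"name": "hospital PACS", "interaction": "receives DICOM studies"}
    ],
    "market_forms": ["API service", "on-premise appliance"],       # R1d-1
    "required_hardware": ["x86-64 server with GPU"],               # R1e-1
    "instructions_for_use": "docs/ifu.pdf",                        # R1g-1
}

def missing_items(record, required=("intended_purpose", "version", "release_date")):
    """Flag mandatory items that are absent or empty in the record."""
    return [k for k in required if not record.get(k)]
```

A check like `missing_items(general_description)` can then be run before each release to catch documentation gaps early.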

2. Detailed description of the elements of the AI system and of the process for its development

(a) the methods and steps performed for the development of the AI system, including, where relevant, recourse to pre-trained systems or tools provided by third parties and how these have been used, integrated or modified by the provider;

R2a-1 Methods used for the development of the AI system
R2a-2 AI system's development processes (steps)
R2a-3 Third-party systems, e.g. pre-trained systems, used in/for the development of the AI system
R2a-4 Description of how third-party systems have been used, integrated, or modified by the AI provider
R2a-5 Third-party tools used in/for the development of the AI system
R2a-6 Description of how third-party tools have been used, integrated, or modified by the AI provider

(b) the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used; the main classification choices; what the system is designed to optimise for and the relevance of the different parameters; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Title III, Chapter 2;

R2b-1 AI system's design specifications
R2b-1-1 Overall logic of the AI system
AR2b-1-2 AI system's algorithmic design
AR2b-1-2-1 Algorithms used within the AI system
R2b-1-2-2 Logic of the AI system's algorithms
R2b-1-3 Description of AI design choices made during AI development
R2b-1-3-1 Rationale of design decisions
R2b-1-3-2 Assumptions made in regard to design decisions
R2b-1-3-3 AI subjects considered in design decisions
R2b-1-4 Choices made in regard to classification tasks
R2b-1-5 Optimisation purpose of the AI system, i.e. quality parameters the AI system is optimised for
R2b-1-6 Relevance of AI parameters
R2b-1-7 Trade-offs made in implementing technical solutions to comply with the AI Act's requirements for high-risk AI systems

(c) the description of the system architecture explaining how software components build on or feed into each other and integrate into the overall processing; the computational resources used to develop, train, test and validate the AI system;

R2c-1 AI system architecture illustrating the components constituting the system and their relationships
R2c-2 AI system architecture description (documentation)
R2c-2-1 Software components constituting the AI system
R2c-2-2 Description of software component development
R2c-2-3 Description of software component integration
R2c-2-4 Description of how software components are integrated into the overall processing of the AI system
R2c-3 Computational resources used in different stages of the AI lifecycle
R2c-3-1 Computational resources used for AI development
R2c-3-2 Computational resources used for training of the AI system
R2c-3-3 Computational resources used for testing of the AI system
R2c-3-4 Computational resources used for validation of the AI system

(d) where relevant, the data requirements in terms of datasheets describing the training methodologies and techniques and the training data sets used, including information about the provenance of those data sets, their scope and main characteristics; how the data was obtained and selected; labelling procedures (e.g. for supervised learning), data cleaning methodologies (e.g. outliers detection);

AR2d-1 Models trained
R2d-1-1 Training methodology
R2d-1-2 Training technique
R2d-1-3 Data requirements
R2d-1-3-1 Information about the datasets used for training the model (training datasets)
R2d-1-3-1-1 Training dataset's provenance information
R2d-1-3-1-2 Training dataset's scope
R2d-1-3-1-3 Training dataset's characteristics
R2d-1-3-1-4 Training data acquisition process (how the training dataset was obtained)
R2d-1-3-1-5 Training data selection process (how each dataset was selected)
R2d-1-3-1-6 Data labelling procedures (e.g. for supervised learning)
R2d-1-3-1-7 Data cleaning methodologies (e.g. outlier detection)
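The training-dataset items (R2d-1-3-1-*) map naturally onto a per-dataset datasheet. A minimal sketch, assuming a hypothetical schema and invented example values:

```python
from dataclasses import dataclass, field

@dataclass
class TrainingDatasheet:
    """One training dataset's datasheet (R2d-1-3-1-*); layout is illustrative."""
    name: str
    provenance: str           # R2d-1-3-1-1
    scope: str                # R2d-1-3-1-2
    characteristics: dict     # R2d-1-3-1-3, e.g. size, class balance
    acquisition: str          # R2d-1-3-1-4: how the data was obtained
    selection: str            # R2d-1-3-1-5: how the data was selected
    labelling_procedure: str  # R2d-1-3-1-6, e.g. for supervised learning
    cleaning_methods: list = field(default_factory=list)  # R2d-1-3-1-7

sheet = TrainingDatasheet(
    name="chest-xray-v3",
    provenance="Collected 2021-2023 at two partner hospitals",
    scope="Adult posteroanterior chest radiographs",
    characteristics={"images": 120_000, "positive_rate": 0.18},
    acquisition="Exported from PACS under a data-sharing agreement",
    selection="Stratified sample across sites and scanner models",
    labelling_procedure="Dual radiologist annotation with adjudication",
    cleaning_methods=["deduplication", "outlier detection"],
)
```

One datasheet per training dataset keeps the provenance and cleaning history of each source separately auditable.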

(e) assessment of the human oversight measures needed in accordance with Article 14, including an assessment of the technical measures needed to facilitate the interpretation of the outputs of AI systems by the users, in accordance with Articles 13(3)(d);

AR2e-1 Description of human oversight measure
AR2e-1-1 Purpose of human oversight measure, e.g. interpretation of the output (see Art. 14(4) for more examples)
AR2e-1-2 The risks the human oversight measures aim to minimise (Art. 14(2))
AR2e-1-3 Type of human oversight measure, e.g. technical and organisational
AR2e-1-4 Implementation type of human oversight measure, as described by Art. 14 (3): "measures (a) identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service; (b) identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the user."
R2e-2 Assessment of human oversight measures

(f) where applicable, a detailed description of pre-determined changes to the AI system and its performance, together with all the relevant information related to the technical solutions adopted to ensure continuous compliance of the AI system with the relevant requirements set out in Title III, Chapter 2;

R2f-1 Description of pre-determined changes to the AI system
R2f-2 Description of pre-determined changes to the AI system's performance
R2f-3 Technical solutions in place to ensure compliance with the requirements of high-risk AI systems in the aftermath of the pre-determined changes to the AI system and its performance

(g) the validation and testing procedures used, including information about the validation and testing data used and their main characteristics; metrics used to measure accuracy, robustness, cybersecurity and compliance with other relevant requirements set out in Title III, Chapter 2 as well as potentially discriminatory impacts; test logs and all test reports dated and signed by the responsible persons, including with regard to pre-determined changes as referred to under point (f).

R2g-1 AI system validation procedures
R2g-2 Information about the datasets used for validating the model (validation datasets)
R2g-3 AI system testing procedures
R2g-4 Information about the datasets used for testing the model (testing datasets)
R2g-5 AI system characteristics, including but not limited to accuracy, robustness, cybersecurity, compliance with the AI Act's requirements for high-risk AI systems, and bias
R2g-5-1 Metrics used to measure the characteristic
AR2g-5-2 Tests, benchmarks and/or standards used for measuring the metric and their results
AR2g-6 Test documentation
R2g-6-1 Test log
R2g-6-1-1 Test log date
R2g-6-1-2 Responsible person(s) for test log
R2g-6-1-3 Log of tests conducted in regard to the AI system's pre-determined changes
R2g-6-2 Test report
R2g-6-2-1 Test report date
R2g-6-2-2 Responsible person(s) for test report
R2g-6-2-3 Report of tests conducted in regard to the AI system's pre-determined changes
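The test-documentation items require each log entry to carry a date and a responsible person, and to flag tests tied to pre-determined changes. A sketch of such an entry; the schema is hypothetical, since Annex IV mandates the information, not a format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestLogEntry:
    """One dated, attributable test log entry (R2g-6-1-*); illustrative schema."""
    date: str                  # R2g-6-1-1
    responsible: str           # R2g-6-1-2
    metric: str                # R2g-5-1, e.g. "accuracy"
    value: float
    covers_predetermined_change: bool = False  # R2g-6-1-3

log = [
    TestLogEntry("2024-04-02", "Q. Tester", "accuracy", 0.93),
    TestLogEntry("2024-04-09", "Q. Tester", "robustness", 0.88,
                 covers_predetermined_change=True),
]

# Entries tied to pre-determined changes (point 2(f)) can then be filtered out:
change_tests = [e for e in log if e.covers_predetermined_change]
```

Making entries frozen (immutable) mirrors the intent of a signed log: recorded results should not be silently edited after the fact.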

3. Monitoring, functioning, and control

3. Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to its capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose; the foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights and discrimination in view of the intended purpose of the AI system; the human oversight measures needed in accordance with Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users; specifications on input data, as appropriate;

R3-1 Information about monitoring of the AI system
R3-1-1 Monitoring of the AI system's capabilities
R3-1-1-1 Monitoring of the AI system's performance
R3-1-1-2 Monitoring of the AI system's accuracy for intended AI subjects
R3-1-1-3 Monitoring of the AI system's overall accuracy
R3-1-2 Monitoring of the AI system's limitations
R3-1-2-1 Monitoring of the AI system's performance limitations
R3-1-2-2 Monitoring of the AI system's limitations in regard to the degree of accuracy for intended AI subjects
R3-1-2-3 Monitoring of the AI system's limitations in regard to its overall degree of accuracy
R3-1-3 Monitoring for foreseeable unintended outcomes
R3-1-4 Monitoring for risk sources
R3-1-4-1 Monitoring for sources of risk to health
R3-1-4-2 Monitoring for sources of risk to safety
R3-1-4-3 Monitoring for sources of risk to fundamental rights
R3-1-4-4 Monitoring for sources of risk to non-discrimination, i.e. bias risk
R3-1-5 Monitoring of human oversight measures
R3-1-6 Monitoring of input data as per its specifications
R3-2 Information about functioning of AI system
R3-2-1 AI system's capabilities
R3-2-1-1 AI system's performance
R3-2-1-2 AI system's accuracy for intended AI subjects
R3-2-1-3 AI system's overall accuracy
R3-2-2 AI system's limitations
R3-2-2-1 AI system's performance limitations
R3-2-2-2 AI system's limitations in regard to the degree of accuracy for intended AI subjects
R3-2-2-3 AI system's limitations in regard to its overall degree of accuracy
R3-2-3 Functioning of the system in the event of foreseeable unintended outcomes
R3-2-4 Functioning of the system in the event of materialisation of risk sources
R3-2-4-1 Functioning of the system in the event of materialisation of sources of risk to health
R3-2-4-2 Functioning of the system in the event of materialisation of sources of risk to safety
R3-2-4-3 Functioning of the system in the event of materialisation of sources of risk to fundamental rights
R3-2-4-4 Functioning of the system in the event of materialisation of sources of discrimination risk
R3-2-5 Functioning of human oversight measures
R3-2-6 Functioning of AI system in regard to input data specifications
R3-3 Information about control of AI system
R3-3-1 Controls in place to ensure the AI system's expected capabilities
R3-3-1-1 Controls in place to ensure the AI system's expected level of performance
R3-3-1-2 Controls in place to ensure the AI system's expected level of accuracy for intended AI subjects
R3-3-1-3 Controls in place to ensure the AI system's expected overall accuracy
R3-3-2 Controls in place in regard to the AI system's limitations
R3-3-2-1 Control in place in regard to the AI system's performance limitations
R3-3-2-2 Controls in place in regard to the AI system's limitations regarding the degree of accuracy for intended AI subjects
R3-3-2-3 Controls in place in regard to the AI system's limitations regarding its overall degree of accuracy
R3-3-3 Controls for foreseeable unintended outcomes
R3-3-4 Controls in place regarding risk sources
R3-3-4-1 Controls in place regarding sources of risk to health
R3-3-4-2 Controls in place regarding sources of risk to safety
R3-3-4-3 Controls in place regarding sources of risk to fundamental rights
R3-3-4-4 Controls in place regarding sources of discrimination risk
R3-3-5 Controls in place in regard to human oversight measures
R3-3-6 Control of AI system in regard to input data specifications
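The monitoring items above repeatedly distinguish accuracy for specific groups of AI subjects (R3-1-1-2) from overall accuracy (R3-1-1-3). A toy illustration of that split, using invented prediction records:

```python
# Hypothetical per-prediction monitoring records; "group" stands in for the
# specific persons or groups the system is intended to be used on.
predictions = [
    {"group": "A", "correct": True},
    {"group": "A", "correct": True},
    {"group": "B", "correct": False},
    {"group": "B", "correct": True},
]

def accuracy(records):
    """Fraction of correct predictions in a set of monitoring records."""
    return sum(r["correct"] for r in records) / len(records)

overall = accuracy(predictions)                          # R3-1-1-3
by_group = {
    g: accuracy([r for r in predictions if r["group"] == g])
    for g in {r["group"] for r in predictions}
}                                                        # R3-1-1-2
```

Reporting both views matters because a healthy overall figure (here 0.75) can hide a much lower figure for one group (here 0.5 for group B).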

4. Risk management system

4. A detailed description of the risk management system in accordance with Article 9;

AR4-1 Risk management system documentation
AR4-1-1 Role of the organisation in relation to the AI system
AR4-1-2 External context of AI system related to the AI risk management system
AR4-1-3 Internal context of AI system related to the AI risk management system
AR4-1-3-1 Intended purpose of the AI system
AR4-1-4 Needs and expectations of stakeholders (interested parties) in regard to AI risk management
AR4-1-5 Scope of AI risk management system
AR4-1-6 AI risk management policies
AR4-1-7 AI risk management system roles and responsibilities
AR4-1-8 AI risk management information
AR4-1-8-1 Scope of AI risk management
AR4-1-8-2 AI risk management objectives
AR4-1-8-3 AI risk management tools
AR4-1-8-4 AI risk management techniques
AR4-1-8-5 AI risk management resources
AR4-1-8-6 AI risk management responsibilities
AR4-1-8-7 Internal context of the AI system related to AI risk management
AR4-1-8-8 External context of the AI system related to AI risk management
AR4-1-8-9 AI Risk criteria (for evaluation of risk significance)
AR4-1-8-10 AI risk assessment information
AR4-1-8-10-1 AI risk identification information
AR4-1-8-10-1-1 Assets and their value
AR4-1-8-10-1-2 Risk sources (events)
AR4-1-8-10-1-3 Entities associated with risk sources
AR4-1-8-10-1-4 Risks
AR4-1-8-10-1-5 Consequences
AR4-1-8-10-1-6 Impacts
AR4-1-8-10-2 AI risk analysis information
AR4-1-8-10-2-1 Analysis of risk sources
AR4-1-8-10-2-2 Analysis of risks
AR4-1-8-10-2-3 Analysis of consequences
AR4-1-8-10-2-4 Analysis of impacts
AR4-1-8-10-3 AI risk evaluation information
AR4-1-8-10-3-1 Evaluation of risk sources
AR4-1-8-10-3-2 Evaluation of risks
AR4-1-8-10-3-3 Evaluation of consequences
AR4-1-8-10-3-4 Evaluation of impacts
AR4-1-8-11 AI risk treatment information
AR4-1-8-11-1 Statement of applicability
AR4-1-8-11-1-1 Control measures
AR4-1-8-11-1-2 Objectives of the measures
AR4-1-8-11-1-3 Residual risk
AR4-1-8-12 AI system impact assessment
AR4-1-9 AI quality objectives
AR4-1-10 AI management system change plan
AR4-1-11 AI risk management system resources
AR4-1-12 Information regarding competence
AR4-1-13 Information regarding operation of AI risk management system processes
AR4-1-14 Results of AI risk assessments
AR4-1-15 Results of AI risk treatments
AR4-1-16 Results of AI impact assessments
AR4-1-17 Results of monitoring, measurement, analysis and evaluation of the AI risk management system
AR4-1-18 Information regarding implementation of the audit programme
AR4-1-19 Audit results
AR4-1-20 Information regarding AI risk management system review
AR4-1-21 Information regarding non-conformity and corrective actions
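The identification/analysis/evaluation/treatment split above (AR4-1-8-10 and AR4-1-8-11) is essentially the shape of a risk-register entry. One such entry, sketched with invented content and our own field names:

```python
# Hypothetical risk-register entry following the structure of AR4-1-8-10/11.
risk_entry = {
    "risk_source": "distribution shift in input data",    # AR4-1-8-10-1-2
    "risk": "undetected drop in accuracy",                # AR4-1-8-10-1-4
    "consequence": "misprioritised referrals",            # AR4-1-8-10-1-5
    "analysis": {"likelihood": "medium", "severity": "high"},   # AR4-1-8-10-2
    "evaluation": "above risk criteria, treatment required",    # AR4-1-8-10-3
    "treatment": {
        "control_measures": ["drift monitoring", "periodic revalidation"],  # AR4-1-8-11-1-1
        "residual_risk": "low",                                             # AR4-1-8-11-1-3
    },
}

def needs_treatment(entry):
    """Hypothetical evaluation rule: treat any risk analysed as high severity."""
    return entry["analysis"]["severity"] == "high"
```

The actual risk criteria (AR4-1-8-9) would replace the toy severity rule here; the point is that each register entry carries its own analysis, evaluation, and treatment trail.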

5. Changelog

5. A description of any change made to the system through its lifecycle;

R5-1 Description of the changes made to the AI system
AR5-2 Description of the changes made to the components constituting the AI system

6. Harmonised standards

6. A list of the harmonised standards applied in full or in part the references of which have been published in the Official Journal of the European Union; where no such harmonised standards have been applied, a detailed description of the solutions adopted to meet the requirements set out in Title III, Chapter 2, including a list of other relevant standards and technical specifications applied;

R6-1 Harmonised standards applied
R6-1-1 Level of conformity, i.e. full or in part
AR6-1-2 Type of conformity assessment, e.g. self-assessment, third-party assessment
R6-2 Other standards applied
R6-2-1 Level of conformity, i.e. full or in part
R6-2-2 The AI Act's high-risk AI requirements met by applying the standard
AR6-2-3 Type of conformity assessment, e.g. self-assessment, third-party assessment
R6-3 Technical specifications applied
R6-3-1 Level of conformity, i.e. full or in part
R6-3-2 The AI Act's high-risk AI requirements met by applying the technical specification
AR6-3-3 Type of conformity assessment, e.g. self-assessment, third-party assessment
R6-4 Description of other solutions adopted to meet the AI Act's high-risk AI requirements
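The point 6 items define a small inventory over standards and technical specifications. A sketch of one entry plus a filter for the harmonised subset (R6-1); the layout is our own, and ISO/IEC 42001 is used only as an example of a non-harmonised standard:

```python
# Hypothetical standards inventory covering R6-1 to R6-3 and AR6-*-3.
standards_applied = [
    {"reference": "ISO/IEC 42001", "type": "other standard",   # R6-2
     "conformity_level": "in part",                            # R6-2-1
     "requirements_met": ["risk management"],                  # R6-2-2
     "assessment": "self-assessment"},                         # AR6-2-3
]

def harmonised(entries):
    """Select harmonised standards (R6-1); none in this toy inventory."""
    return [e for e in entries if e["type"] == "harmonised standard"]
```

When `harmonised(standards_applied)` is empty, point 6 requires a detailed description of the other solutions adopted (R6-4) alongside the entries above.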

7. EU Declaration of Conformity

7. A copy of the EU declaration of conformity;

R7-1 EU declaration of conformity as per Annex V

8. Post-market monitoring system

8. A detailed description of the system in place to evaluate the AI system performance in the post-market phase in accordance with Article 61, including the post-market monitoring plan referred to in Article 61(3).

R8-1 Description of the AI performance evaluation plan, which is part of the AI management system (see ISO/IEC 42001, Clause 9 on performance evaluation)
R8-2 Post-market monitoring plan; according to Art. 61(3), the European Commission will adopt an implementing act containing a template for the post-market monitoring plan

Acknowledgements

The views expressed in this article are purely those of the authors and may not, under any circumstances, be regarded as an official position of the European Commission.

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813497 (PROTECT ITN), and from the ADAPT SFI Centre for Digital Media Technology, which is funded by Science Foundation Ireland through the SFI Research Centres Programme and co-funded under the European Regional Development Fund (ERDF) through Grant#13/RC/2106_P2.