Context-Based Risk Analysis

The main purpose of the Context-Based Risk Analysis (COBRA) is to carry out a preliminary assessment of the extent to which, if at all, an AI system has the potential to interfere with human rights, democracy, and the rule of law, in view of risk factors arising in the context(s) of its lifecycle. This is achieved by collecting information about whether, and how, properties of (1) the system’s application context, (2) the system’s design and development context, and (3) the system’s deployment context could increase the likelihood of outcomes that negatively impact human rights, democracy, and the rule of law. Teams must carry out this process by accounting for the worst-case risks that could occur before any mitigation actions are applied to the identified risks.

On the basis of this initial assessment, the COBRA process helps the project team to establish a tailored approach to the subsequent steps of the methodology, in particular the level of stakeholder engagement and the extent of governance intervention that are required throughout the project’s lifecycle.

To support completion of the COBRA, project teams can ask themselves a set of questions spanning the following areas:

  • the system’s application context
  • the system’s design and development context
  • the system’s deployment context

These questions help to determine the system’s potential to interfere with human rights, democracy, and the rule of law. They do so by pointing to various risk factors and the corresponding governance interventions and mitigation measures, and by requiring the project team to assess the risks of impacts on human rights, democracy, and the rule of law in light of these factors. The answers to these questions can be recorded in the templates provided below.
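Teams that prefer to keep these template records in a machine-readable form could represent them with a simple data structure. The sketch below is a hypothetical illustration only, not part of the COBRA methodology itself; the class and field names are assumptions that mirror the template columns (question, governance intervention, response, completed by) and the three context areas listed above.

```python
from dataclasses import dataclass, field


@dataclass
class CobraEntry:
    """One row of a COBRA template: a question, its associated
    governance intervention, and the team's recorded answer."""
    question: str
    governance_intervention: str
    response: str = ""       # filled in by the project team
    completed_by: str = ""   # name/role of the person who answered


@dataclass
class CobraAssessment:
    """Template entries grouped by the three COBRA context areas."""
    application_context: list = field(default_factory=list)
    design_development_context: list = field(default_factory=list)
    deployment_context: list = field(default_factory=list)

    def open_items(self):
        """Return entries that have not yet been answered,
        across all three context areas."""
        all_entries = (self.application_context
                       + self.design_development_context
                       + self.deployment_context)
        return [e for e in all_entries if not e.response]
```

For example, a team could append a `CobraEntry` for each question in the application-context template and use `open_items()` to track which questions still need a response before moving to the next step of the methodology.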

For an example of some of these questions, see below.

Risk factors arising in the system's application context

Question:
Will the AI system serve an essential or primary function in a high-impact or safety-critical sector (e.g., transport, social care, healthcare, or other divisions of the public sector; see Appendix II for an indicative list of other sensitive sectors/domains)?

Governance interventions to be addressed when assessing potential adverse effects on human rights, democracy, and the rule of law:
Make sure to focus on considerations surrounding the prevention of harm to personal, physical, mental, and moral integrity and, as applicable, other protected areas/spheres.

Response:

Completed by: