This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101022004

Enhancing the Transparency of AI systems

The transparency of emerging technologies has been a contested term across different domains. In the finalised draft of the European Artificial Intelligence (AI) Act, which reached provisional agreement in December 2023, the transparency of AI systems is a key priority and is broadly defined as enabling end-users and citizens to understand the design and use of AI systems through the disclosure of certain kinds of information about system decisions, predictions, and performance. In principle, this definition emphasises the need for AI systems to be developed in ways that ensure users can understand the system’s outputs and use them appropriately. In practice, transparency has proven difficult to achieve, since the kind of information an AI system needs to communicate, and what makes a system sufficiently transparent, differ across applications of AI technologies and user needs. This lack of clarity can lead to AI systems with multiple negative social ramifications, such as exclusion, unequal treatment, discrimination, and the violation of privacy. If a hiring platform is not transparent about the criteria by which its algorithm chooses one candidate over another, issues of discrimination inevitably arise. If a medical treatment system cannot provide evidence of the reasoning behind choosing one treatment over another, people’s lives may be put at risk.

The effects of insufficient transparency add to the challenges of designing and developing law enforcement AI systems that facilitate people’s safety and security. A lack of transparency about the use of AI systems by law enforcement agencies (LEAs) may drive societal mistrust towards law enforcement and the justice system; people who are unable to understand why decisions were made might seek out information from alternative and potentially unreliable sources. In response, the draft European AI Act has categorised these tools as high-risk, meaning that they must comply with additional ethical and data protection requirements. These include system interfaces that allow human oversight while the AI system is in use, the provision of information to users, accounting for issues of bias, and additional safeguards. Yet full transparency does not necessarily drive acceptance of policies and new technologies. Where a great deal of information is provided, end-users might be overloaded, leading to the transparency paradox. Where this occurs, people perceive a lack of control over the AI system because its training and testing techniques are too complex for them to fully understand.

To balance these conflicting issues, TRACE partner Trilateral Research has investigated how we can build upon the one dimension that all transparency definitions share: the fact that transparency is about providing information for decision-making processes. Empirical research has identified the different decision-making processes associated with transparency and the mechanisms most likely to accommodate transparency needs across different audiences. Yet simply providing information on different decisions does not ensure transparency. Instead, transparency can be achieved by communicating both the decisions and the reasons behind them, also known as transparency in rationale. Communicating decisions and the reasons for them is beneficial for:

  • Increasing public understanding of why certain decisions have been made 
  • Increasing the perceived legitimacy of the decisions made 
  • Showing that the decision-makers respect and care about the affected individuals
  • Driving more favourable perceptions of the AI system

From our work in the TRACE project, which develops technologies for analysing financial crime data, we argue that effective communication about the decisions made with AI systems and the reasons for them is integral to law enforcement’s operational practices and its relationship with the public, given the impact that criminal justice has on civil liberties. In TRACE we have been working on an integrated model for transparency in rationale. The model includes a set of questions about the legal and ethical decisions related to the transparency of the TRACE systems, a task for reporting the reasons behind these decisions, and guidance for tailoring the answers to the requirements of different audiences. For example, a system or human decision and its justification will need to be communicated differently to system developers and to law enforcement officers. By applying this model to the TRACE technologies, we can assess their degree of transparency and identify ways to strengthen it, while upholding ethical standards and mitigating security risks. In doing so, we can ensure that decisions on the potential risks and benefits of using TRACE-like technologies have been properly justified to the respective audiences, thus serving the public interest and safeguarding those affected.
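
To give a concrete, if simplified, sense of what a transparency-in-rationale record could look like in practice, the sketch below pairs a decision with its rationale, the legal and ethical questions considered, and explanations tailored to different audiences. It is purely illustrative: the names and wording used here (DecisionRecord, explanation_for, the example audiences) are hypothetical and do not represent the actual TRACE implementation.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: class, field, and example names are hypothetical
# and do not reflect the actual TRACE systems.

@dataclass
class DecisionRecord:
    """A single system or human decision, paired with the reasoning behind it."""
    decision: str                 # what was decided (e.g. flag a transaction for review)
    rationale: str                # why it was decided (transparency in rationale)
    review_questions: list = field(default_factory=list)        # legal/ethical questions considered
    audience_explanations: dict = field(default_factory=dict)   # audience -> tailored wording

    def explanation_for(self, audience: str) -> str:
        """Return the explanation tailored to a given audience, or the raw rationale."""
        return self.audience_explanations.get(audience, self.rationale)


# Hypothetical usage: the same decision is justified differently to
# system developers and to law enforcement officers.
record = DecisionRecord(
    decision="Transaction flagged as potentially suspicious",
    rationale="The transaction pattern deviated from the account's historical behaviour.",
    review_questions=[
        "Is the data processing lawful and proportionate?",
        "Could the flagging criteria introduce bias against particular groups?",
    ],
    audience_explanations={
        "developer": "Anomaly score exceeded the configured threshold for this feature set.",
        "law_enforcement_officer": "The activity differs markedly from this account's usual "
                                   "pattern and may warrant further review.",
    },
)

print(record.explanation_for("law_enforcement_officer"))
```

In a deployed setting, records of this kind would sit alongside a system’s outputs so that each audience receives a justification matched to its needs, rather than a single, undifferentiated technical explanation.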

Our work demonstrates that the pursuit of transparency in law enforcement AI systems benefits from combining an examination of the decisions made with AI systems, and the reasons behind them, with communication strategies tailored to diverse audiences. Drawing on the insights gained through the TRACE project, our transparency approach provides a valuable tool for assessing and strengthening transparency in emerging AI systems. By adopting this approach, we can not only enhance transparency but also contribute to the responsible deployment of technology in domains critical to civil liberties, such as law enforcement and criminal justice.

Author: Dr Anastasia Kordoni, Trilateral Research
