Protecting individuals in the age of AI
On 24 and 25 June, the Justice and AI Jean Monnet Centre of Excellence: Effective Judicial Redress in the Rising European and Global AI litigation (JUST-AI) at the University of Liège held its first annual summer colloquium. During this two-day event, researchers from a number of internationally renowned universities debated the best ways to protect individuals from the harms that artificial intelligence systems can cause. The discussion is a necessary one at a time when these systems are ever more present in many spheres of our lives.
When we think of artificial intelligence (AI), we immediately think of ChatGPT or Midjourney. Yet there are many others, less visible, to be found in every field: automatic translation, CV analysis, recommendation algorithms, facial recognition cameras, autonomous cars... All of these systems process information automatically, and some make decisions autonomously.
In March 2024, the ubiquity of AI prompted the European Union to adopt the Regulation laying down harmonised rules on artificial intelligence (the AI Act). This instrument creates, in particular, a framework for AI systems that are likely to affect fundamental rights, health and safety within the European Union. While the enactment of the AI Act is undeniably a success, it is also where things get tricky. "We have reached a global consensus on the need to develop AI that is ethical, and this must be saluted. But there is a difference between theorizing fairness and operationalizing it in practice", says Ljupcho Grozdanovski, an FNRS Research Associate at ULiège and head of the Jean Monnet Centre of Excellence.
In other words, once we have agreed on the "what", the question of the "how" arises. How can we ensure that an AI system makes decisions that are fair? In a world with so many different genders, religions, and social and cultural sensitivities, is this even possible? And who should be responsible for this delicate task?
In many cases, AI systems produce discriminatory outcomes because of the way they have been designed, or because of the data used to train them. For example, one of the speakers pointed out that Google's translation tool genders the term "doctor" as male and the term "nurse" as female. Similarly, while social networks are used around the world, moderation algorithms are designed on the basis of Western, even American, values. The notion of "freedom of speech", for example, does not have the same meaning everywhere.
However, some researchers believe that it is possible to develop tools capable of taking better account of diversity. This could be done by 'coding' fairness, as it were, or by carrying out audits to correct AI systems' performance over time. One way of doing this would be to make the data more accurate, so that it better represents the real world. But others have their doubts. "Who will be responsible for defining the standards?" asks Julien Bois, a post-doctoral researcher in law and political science at ULiège. "How far is it acceptable for a few engineers to define what a machine considers to be fair, when its decisions are likely to affect thousands of people?"
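To make the idea of 'coding' fairness or auditing a system more concrete, here is a minimal sketch in Python. It assumes a hypothetical screening model whose yes/no decisions an auditor can observe, and computes the gap in selection rates between demographic groups, a common starting point known as demographic parity. The sample data, group labels and metric are chosen for illustration only; the colloquium speakers did not endorse any particular technique.

```python
# A minimal sketch of a fairness audit, assuming we can observe the
# yes/no decisions of a hypothetical screening model. The sample data
# and group labels below are invented for illustration.

from collections import defaultdict

def selection_rates(decisions):
    """Share of positive decisions per demographic group.

    decisions: list of (group, accepted) pairs, e.g. ("A", True).
    """
    totals = defaultdict(int)
    accepted = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            accepted[group] += 1
    return {g: accepted[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Difference between the highest and lowest group selection rates."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: (group, was the candidate shortlisted?)
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]

print(selection_rates(sample))         # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(sample))  # 0.5
```

A gap close to zero does not prove fairness, and a large gap does not prove discrimination; it simply flags where closer human scrutiny is needed, which is precisely where the question of who sets the standards resurfaces.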
Human supervision
The AI Act stipulates that decisions taken by so-called high-risk AI systems, i.e. those active in areas such as health, education, recruitment or law enforcement, must be supervised by a human. The aim is to prevent a person from being, say, discriminated against by a machine.
With this in mind, some researchers have proposed systems in which the machine itself, via a chatbot, explains its decision to the human responsible for validating it. In practice, however, this supervision raises other questions: "The person in charge of this supervision must be aware of the limits of AI, as well as of his or her own tendency to trust the decisions proposed by an automatic system", warned the speakers.
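As a rough illustration of what such supervision could look like in software, here is a minimal sketch of a human-in-the-loop gate, assuming a hypothetical decision record produced by an AI system. The field names, the explanation format and the escalation rule are invented for this example and are not drawn from any system presented at the colloquium.

```python
# A minimal human-in-the-loop sketch: the machine's proposal is rendered
# as a plain-language explanation, and nothing takes effect without an
# explicit human decision. All field names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Decision:
    subject: str          # who the decision concerns
    outcome: str          # the machine's proposed outcome
    confidence: float     # the model's self-reported confidence (0..1)
    top_factors: list     # features that weighed most in the proposal

def explain(d: Decision) -> str:
    """Render a plain-language summary for the human supervisor."""
    factors = ", ".join(d.top_factors)
    return (f"Proposed outcome for {d.subject}: {d.outcome} "
            f"(confidence {d.confidence:.0%}). Main factors: {factors}.")

def supervise(d: Decision) -> str:
    """Apply the proposal only after explicit human confirmation."""
    print(explain(d))
    answer = input("Approve this decision? [y/N] ")
    return d.outcome if answer.strip().lower() == "y" else "escalated for review"

# Example: a recruitment screening proposal awaiting validation.
proposal = Decision("candidate #1042", "reject", 0.62,
                    ["employment gap", "missing certification"])
print(explain(proposal))
# supervise(proposal)  # uncomment to review interactively
```

Note that such a gate only restates what the model reports about itself; it does nothing to counter a supervisor's tendency to approve by default, which is exactly the limit the speakers warned about.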
Finally, the researchers discussed the "aftermath". What should be done if, despite all the safeguards, an AI system does end up harming someone? "Such harm is inevitable, and we also need to consider how citizens can obtain compensation", argued Ljupcho Grozdanovski. To this end, a number of researchers felt that independent bodies could carry out AI audits in order to gather evidence and uncover biases, even when the systems concerned are opaque.
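One way an independent body can gather such evidence without seeing inside a system is counterfactual probing: submitting paired inputs that differ only in a sensitive attribute and recording whether the decision changes. The sketch below illustrates the idea; the model it queries is a deliberately biased stand-in invented for this example, not a description of any real system.

```python
# A minimal sketch of a black-box audit: the auditor cannot inspect the
# model, only query it. The model below is a biased stand-in, written
# here so that the probe has something to find.

def opaque_model(profile: dict) -> bool:
    """Stand-in for a third-party system the auditor can only query."""
    score = profile["years_experience"] * 2
    if profile["gender"] == "female":   # the hidden flaw
        score -= 3
    return score >= 10

def counterfactual_probe(model, profile: dict, attribute: str, values) -> bool:
    """True if flipping one sensitive attribute flips the decision."""
    outcomes = {model(dict(profile, **{attribute: v})) for v in values}
    return len(outcomes) > 1  # disagreement is evidence worth recording

applicant = {"years_experience": 6, "gender": "male"}
if counterfactual_probe(opaque_model, applicant, "gender", ["male", "female"]):
    print("Evidence of disparate treatment on 'gender' for this profile.")
```

Repeated over many profiles, such probes can turn an opaque system's behaviour into the kind of documented evidence a court or compensation scheme can work with.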
Ultimately, all the issues explored by the Jean Monnet Centre of Excellence are crucial for the future. "Between now and 2026, companies will have to comply with the main principles of the AI Act", explains Ljupcho Grozdanovski. "And a number of consultants in charge of these issues have already announced that they want to work with us to achieve this."
Links
Artificial Intelligence Act
Justice AI - Jean Monnet Centre of Excellence
https://www.just-ai-jmce.uliege.be/cms/c_11237458/en/justaijmce?id=c_11237458
