Catalina Goanta, Assistant Professor in private law and postdoctoral researcher at Studio Europa Maastricht, talked about the future of artificial intelligence (AI) on Al Jazeera television. Goanta was joined by Eddy Borges-Rey (Associate Professor of digital journalism and emerging media at Northwestern University in Qatar) and Caroline Sinders (Senior Fellow at the Mozilla Foundation).
The experts looked at the problematic as well as the promising aspects of AI. Goanta started by underlining that AI is often treated as a blanket term, but that in practice it mostly refers to a set of tools for tasks such as facial recognition or hate speech detection.
Fitting legal frameworks
In addition, Goanta thinks it is important to look at the legal frameworks in which these tools are embedded. Since AI is nothing new, many existing rules already apply to it. The impact of AI tools can only be measured in terms of their accuracy (the extent to which they deliver on the task they are designed for) and the legal framework they operate within.
In terms of the application of legal frameworks, Goanta stresses that it is important to keep in mind that AI is used in different contexts. AI systems used in warfare, such as drones, fall under humanitarian law, while consumer products with embedded AI tools fall under consumer protection law.
Economic incentives
Goanta adds that the business models emerging around AI still need to be fully comprehended. As an example, she mentions data enrichment companies, which are responsible for harvesting data from companies. What are these data brokerage models, what economic incentives underlie them, and what incentives do they in turn create?
“Business models emerging around AI still need to be fully comprehended”

Risks of biases in AI
The data sets used as input for AI are at risk of harbouring built-in biases. Caroline Sinders mentions facial recognition in predictive policing systems as an example. Louisiana, where she comes from, has the highest incarceration rate in the US, and people with darker skin are overrepresented in its prisons. If Louisiana's historical data sets are used for predictive policing systems, those systems will inevitably turn out to be biased.
Harm and opt-outs
Sinders also speaks about the need for better legibility of the harms caused by AI. What happens when there are failures? When should something be considered a failure resulting from the use of an AI tool? Who gets to decide whether a tool should be used? Should users understand how the different AI tools work, and should there be an opt-out through which users can declare that they do not want to use AI or be subjected to it?
“Better technological literacy and legibility of AI policy and design for a wider audience are essential in order to respect the rights and protection of consumers and the population in general.”
Conclusion: a pressing need for technological literacy
All the speakers end by agreeing that much more should be done in terms of information literacy. Better technological literacy and legibility of AI policy and design for a wider audience are essential in order to respect the rights and protection of consumers and the population in general. In this area, a lot of work remains to be done…
The full conversation can be watched here.