“I lost control. It happened fast.” Professor of Computer Science and Artificial Intelligence Gerhard Weiss smiles as he explains why his interview with Studio Europa Maastricht was postponed. In short, a mountain bike, a steep trail, and a bee stuck in a cycling helmet meant he faced a split-second choice: spiral out of control down the slope or come to a screeching halt, assisted by a tree. He chose the emergency stop and is recovering from three broken ribs as we speak. “It could’ve been worse”, he quips. We speak to him about the much-anticipated European regulation on artificial intelligence, the first of its kind, proposed just weeks ago. “I expect that it’ll have a strong impact worldwide”.
Artificial intelligence involves technology capable of making thousands of split-second choices, much like the decision Weiss made on his bike. Better-known examples range from automated investment software to self-driving cars. But AI also promises a revolution in the intimate, emotional territories of love and family. In the acclaimed Netflix series Black Mirror, characters’ opportunities in life depend on a social score that others give them; they are able to monitor their children’s health and emotions through a tablet; and they can bring back AI versions of their deceased loved ones. The series stands in a long line of pop culture works fascinated with intelligent technologies, veering between utopian visions and dystopian nightmares.
“The popular image of a super powerful, Terminator-like AI is far from reality”
How do these popular images of AI compare with reality?
Weiss smiles. “We all like new, exciting solutions to our grand challenges, but AI is not a panacea. It’s not a solution to all the big problems mankind is facing, like climate change. On the flip side, some mistake it for the biggest danger in the history of civilisation, something that could spell the end of the human race. Even Stephen Hawking said that some years ago. But it is not, as tech giants like Elon Musk have also claimed, humanity’s greatest existential threat. There is this popular image of a super powerful, Terminator-like AI, but that’s far from reality. AI is many, many years from being so superintelligent that it could replace humans as the dominant life form on earth.”
What is AI, then?
“While some people are getting overexcited, AI is nevertheless one of the most important technologies of our century. It has tremendous societal and economic impact and can help to treat diseases, improve education, and increase industrial productivity. It will transform the labour market and the economy.”
“The European Parliament should consider banning predictive policing technologies”
Last April, the European Commission published its much-awaited proposal to regulate high-risk AI technologies. What do you think about it?
“I’m happy about it. There are so many uses of AI that raise very serious ethical and legal issues – they need regulating. That’s especially true for AI applications that process a huge amount of private and personal data, which risk generating or amplifying gender or racial biases. These technologies might violate fundamental rights. That’s why we need a rulebook, even if the current proposal isn’t perfect and has some open ends.”

What’s flawed about the proposal?
“The rulebook carves out a whole host of companies and activities from regulation, so we need to make it further-reaching. For example, the rulebook would prohibit AI social scoring by public authorities, but private actors are kept out of the line of fire, so companies could still use social scoring technologies. Similarly, technology that scans people’s faces and runs them through a database in real time cannot be used by law enforcement in public spaces, but that prohibition doesn’t apply to other public authorities and private actors. What’s more, the proposal doesn’t ban predictive policing – such as algorithms that make risk assessments for offending or reoffending – even though it does mark it out as high-risk. This is a highly sensitive issue and the European Parliament should consider banning this kind of technology. Lastly and very importantly, the regulatory framework doesn’t address lethal autonomous weapons like drones that can select and engage targets without human involvement. It’s now up to the European Parliament and the Council to correct these shortcomings.”
“‘Made in Europe’ has the potential to become a very valuable label for AI”
Far from finding the regulation too soft, some in the tech sector fear that it will stifle innovation and hurt the EU’s competitiveness.
“I think quite the opposite is true. When thinking about competitiveness, we can’t overlook that the United States is also thinking about legislating. This could easily mean that in the near future, there will be a competition for the most trustworthy, human-centric, accountable AI. If our regulations are implemented properly, then ‘Made in Europe’ has the potential to become a very valuable quality label for AI. That means people in the United States will prefer to install and use AI systems with that label. The market will play its role, and it will be risky for companies to use systems without this label. So, in this race against the US, it is important for the EU to take action as fast as possible. The rulebook will be the starting point for lengthy discussions in member states, but it’s crucial that its implementation is rigorous and fast.”

The Commission’s proposal has been called a watershed moment for AI policy. Would you agree?
“Yes, if implemented appropriately. It’s really novel, and it also applies to non-European companies that use AI in Europe, meaning companies elsewhere need to undergo an approval process if they want to sell their technology in the EU. If enforced well, there’s no way to evade this regulation. I expect this will have a strong impact, but that also depends on how strict the Parliament and the Council are in limiting lobbyists in the years before the regulation is fully enacted.”
No doubt lobbyists will be busy in the meantime. What should the limits of the lobbying by big tech companies be?
“There’s a clear answer to this: lobbying should be limited very strictly when it undermines the protection of fundamental human rights. It’s fine if big tech is trying to do some lobbying; that’s part of the process. But the Parliament should be extremely strict and say: ‘it’s nice that you try to make it work for you, but this is just too much.’”
“The EU needs to make sure it isn’t stung by AI-expert brain drain to the US”
Some say the rulebook shows the EU is more concerned with protecting its citizens than with keeping an eye on China. What are the major risks of not ‘keeping an eye’ on China?
“In the field of AI, ignoring China would simply be stupid.” Weiss laughs. “It would mean actively ignoring how much China invests in AI. It’s amazing: China is investing considerably more money in AI than Europe. It’s pushing AI education, and it’s growing its AI research at a remarkable pace. Falling behind would be a serious economic problem, because the best, most profitable AI applications would then come from non-European countries like the US and China. But it would also have a negative societal impact because, if your technological progress stalls, you become reliant on other countries. That’s true of all kinds of technologies, including AI.”

What about the EU itself? Is its labour market ready to deal with the rise of AI?
“Europe’s weak point is its unmet demand for AI experts. That demand will continue to grow in the coming decades, and we need to ensure that enough AI experts are available. We should think about making basic AI education part of the curriculum across the board in primary and secondary schools. Companies should think about investing in AI education for their employees; they could be more open-minded towards establishing alliances with universities or other knowledge institutions to get access to state-of-the-art AI technology. It’s not just up to the government to increase education or investment; it’s also up to the private sector to invest in employees, build alliances with AI experts across the workforce and make sure the EU isn’t stung by an AI-expert brain drain to the US.”
Gerhard Weiss is Professor of Computer Science and Artificial Intelligence at the Department of Data Science and Knowledge Engineering (DKE) at Maastricht University. He previously worked as an advisor for several European intelligent technology companies.
This is the second interview in the second year of our series on current affairs in Europe. Read the previous interviews here.