AI: It's time to tame the algorithms and this is how we'll do it, says Europe

The European Commission has new ideas for ensuring AI is used responsibly – not like it is in other countries.

With the US and China racing to reap the benefits of artificial intelligence, the European Union doesn't want to be left behind. So it has unveiled a fresh approach to the bloc's digital economy with a new "strategy to shape Europe's digital future".

Behind that broad description, the new strategy's objectives are specific: establish rules on data and AI that are quintessentially European – regulation that "puts people first" and fosters "trustworthy technology". 

In a white paper on artificial intelligence released as part of the new strategy announcement, the Commission stresses its desire to make the technology a "force for good", and not one that will harm citizens. 

The paper describes the risks inherent in using AI "to track and analyse the daily habits of people", and the potential for state authorities to exploit the technology for mass surveillance.


Although the paper does not name and shame, the risks mentioned by the Commission are reminiscent of recent news from the US, where last month a lawsuit was filed against New York-based start-up Clearview AI, after it was found that the company sold information scraped from social-media networks to law-enforcement agencies across the country. 

Clearview AI gathered photos without citizens' consent – three billion pictures, in fact, which were taken from sites like Facebook, Twitter, YouTube, and others. 

Turning to the EU's other major competitor in technology, China, it is equally easy to point the finger. The Chinese government has been using facial recognition for a long time, often to the detriment of its citizens. 

Recently, for instance, it was found that the authorities had set up gate-like scanning systems to record biometric, three-dimensional images, as well as the smartphone fingerprints of Muslims living in the country's Xinjiang province, to track the population's movements. 

The EU is keen not to allow this type of application. Ursula von der Leyen, the president of the Commission, said: "I want that digital Europe reflects the best of Europe – open, fair, diverse, democratic, and confident."

To achieve this objective, the Commission wants to create an "ecosystem of trust" for AI. And it starts with placing a question mark over facial recognition. The organisation said it would consider banning the technology altogether. Commissioners are planning to launch a debate about "which circumstances, if any" could justify the use of facial recognition.

The EU's white paper also suggests applying different rules depending on where and how an AI system is used. A high-risk system is one deployed in a critical sector, such as healthcare, transport or policing, and used in a way that can have critical consequences – producing legal effects, for example, or deciding on social-security payments. 

Such high-risk systems, said the Commission, should be subject to stricter rules, to ensure that the application doesn't transgress fundamental rights by delivering biased decisions.

In the same way that products and services entering the European market are subject to safety and security checks, argues the Commission, AI-powered applications should be checked for bias. 

The dataset feeding the algorithm might have to undergo conformity assessments, for instance. The system might even be required to be retrained entirely within the EU. 

For lower-risk applications of AI, the Commission suggests a voluntary labelling scheme based on benchmarks defined by the EU, which would reassure citizens that a given AI system is "trustworthy". 

Mark Coeckelbergh is a member of the high-level expert group on artificial intelligence, appointed by the European Commission to draw up recommendations for the ethical deployment of AI. He told ZDNet that the Commission's desire to make ethical AI a priority is encouraging, but that the move might have come too early.

"I'm not sure about how quickly this is going," he said. "The Commissioners have not waited for the expert group to be finished with their work, so we are not seeing much of our input going into this plan.  

"Because it came out so early, it seems pretty light, and there are lots of opportunities for interpretation. From an ethical point of view, I am not so happy with it."

The expert group, he continued, will release its recommendations in May or June – and Coeckelbergh believes the Commission could have benefited greatly from waiting a couple of months. 

Along with the 51 other members from academia, civil society and industry that make up the high-level expert group, Coeckelbergh is working on more detailed guidelines tailored to each sector, and on bridging the gap between advocates of stricter regulation and those in favour of flexibility. 

According to Coeckelbergh, what the Commission defines as "high-risk AI systems", for example, is not as clear cut as the white paper makes it sound. A lot of AI software works across sectors, he argued, and doesn't have a single outcome in a specific area.

"We need a regulatory framework that is more robust and covers more areas," said Coeckelbergh. "This document could have been written a year ago – it doesn't include the detailed work we have been carrying out on our side. It doesn't give me confidence that there will be effective regulation on the ethical side."

The reason Europe was keen to announce early where it stands on artificial intelligence may come down to geopolitics. The continent has lagged behind its American and Chinese counterparts for a number of years in developing AI-powered technologies – and the Commission's new strategy is a way to assert that the old continent is still relevant.

Recent research showed that less than half of European firms have adopted AI technology, and that only four European companies are in the top 100 global AI startups. In Europe, "the pace of AI diffusion and investments remains limited", noted analysts from research firm McKinsey. 

The Commission's paper does acknowledge the need to catch up, noting that the €3.2bn ($3.4bn) invested in AI in Europe since 2016 is "still a fraction" of the investments in other regions of the world. In North America, the sum invested during the same period amounts to €12.1bn ($13bn).

SEE: Dutch court rules AI benefits fraud detection system violates EU human rights

The objective the Commission announced today is to attract over €20bn ($21.5bn) of total investment in AI per year over the next decade. "Europe does well at research and innovation, but it isn't good at translating this into products for the market," said Coeckelbergh. "There is a feeling that we need to do something."

"My suspicion is that perhaps the Commission wrote the paper now to make a statement to the rest of the world, and show that we are onto this. I don't think that's problematic, but it could have come slightly later, in the form of a more robust strategy," he added.

On a more positive note, Coeckelbergh said the Commission's work shows the way forward, and that investments in AI need to go hand in hand with an ethical framework. 

The announcement certainly has a GDPR ring to it – and just as the European rules on data protection have become a global model, the EU's stance on AI may yet inspire other countries to follow suit. 

The white paper on artificial intelligence is open for public consultation until 19 May 2020, after which the Commission will "take further action to support the development of trustworthy AI".