BERLIN — An act of genius, or slow suicide.
That’s the range of opinions on Europe’s plan for catching up with the United States and China in the global artificial intelligence race.
As American and Chinese companies dominate the AI battlefield, the EU has pinned its hopes on becoming a world leader in what it calls “trustworthy” artificial intelligence.
By ensuring AI applications follow ethical guidelines and base decisions on transparent criteria, policymakers believe they can boost consumer confidence in European AI, providing the bloc with a silver bullet against competitors in Silicon Valley and Shenzhen.
“Ethics and competitiveness are intertwined, they’re dovetailed,” said Pekka Ala-Pietilä, who chairs the EU’s high-level expert group on AI. “We need to create an environment where the use of AI is felt and seen as trustworthy.”
“If that kind of sustainably leveled playing field is established, that gives a great incentive for companies to create products or services where ethics is part of their competitive advantage,” he added.
Ala-Pietilä’s group of 52 experts is set to release the final version of its broad guidelines for the ethical use of AI in April. Later this year, a second document will follow, listing recommendations for boosting European investment in AI. Its release was originally scheduled for May but will likely be pushed back to the early summer.
The two documents won’t be binding, but they will provide lawmakers with a roadmap for regulating the emerging technology. The next European Commission, which takes office at the end of the year, is expected to make tackling AI a priority.
The EU’s AI experts remain divided about what rules are needed, according to conversations with nine members of the group — but they’re united in their belief that an ethics-first approach will eventually allow Europe to set global standards for AI.
The political leaders at the European Commission agree.
“I am personally convinced that ethical guidelines will be enablers of innovation for artificial intelligence,” said Digital Commissioner Mariya Gabriel. “History will tell us we were right,” is how her colleague, Justice Commissioner Věra Jourová, put it.
Not everybody is so sure.
The EU’s “softball” approach to AI is “naive” and will cause the bloc to lose out to the U.S. and China, said Daniel Castro, vice president of the Information Technology and Innovation Foundation (ITIF), a think tank whose board includes members from U.S. tech giants including Amazon, Apple, Google and Microsoft.
Consumers care primarily about effectiveness — and an ethics-first approach will prevent Europe from coming up with competitive products, he said. “It’s like any other race: You can have the more ethical race car driver, but if his car is not faster, you are going to lose,” Castro said.
Ethics vs. price
There’s no denying the potential for AI to transform entire industries. The technology’s transformative power is often compared to breakthroughs like widespread electrification. Artificial intelligence is set to change the way people work, communicate, treat diseases and conduct wars.
The trouble is that potential comes with significant risks. The “deep learning” underlying many of today’s cutting-edge applications, for example, essentially trains AI systems by seeking out patterns in vast troves of data.
The result is often highly effective, but it can also turn programs into black boxes, making it difficult — or even impossible — to discern the logic behind their decisions. What’s more, because AI algorithms “learn” from real-world data, they are vulnerable to incorporating often unconscious biases against minorities and other vulnerable groups.
Amazon had to scrap an AI-powered recruiting tool that discriminated against women, according to Reuters. ProPublica revealed that predictive policing software used by U.S. authorities is biased against black people. And Google issued an apology after Motherboard reported that one of its machine-learning applications labeled being Jewish or being gay as negative.
Developments like these are what the EU wants to counter with what it calls “trustworthy” artificial intelligence.
The EU’s pitch: AI technology that respects European values and is engineered in a way that prevents it from causing intentional or unintentional harm — even when it’s operated by people with little or no technical background.
It’s less about moralist finger-pointing than about what’s best for consumers, said Virginia Dignum, a professor of social and ethical artificial intelligence at Sweden’s Umeå University and a member of the EU’s high-level expert group.
“In a sense, ‘ethics’ isn’t the goal,” she said. “We want [AI] to be ethical and socially responsible because we want AI systems to be trusted, and useful for people.”
Critics like the ITIF’s Castro remain doubtful.
“This idea of ethics-by-design, it undercuts the idea that at the end of the day, this is still a market-based economy,” he said. “You have to create something of … more value than your competitors.”
“The European Commission itself has not provided any evidence that consumers are actually willing to pay for that,” he added.
Confronted with that criticism, Dignum responded that the EU has not been able to provide such evidence because, so far, there are no “trustworthy products in the way we propose to build them.” She said she is convinced that, just as some consumers are willing to pay more for organic products, there will be consumer demand for “trustworthy AI” once it hits the market.
There’s one area where European principles have already hobbled the bloc’s AI.
The Continent has some of the strictest rules in the world for the use of personal data, reflecting widespread concerns over privacy.
The more information a deep learning system is given, the better it becomes — and European tech firms say that a lack of access to data is putting them at a disadvantage to global competitors.
Although the EU introduced its General Data Protection Regulation, or GDPR, last year to harmonize data protection rules across the bloc, there remains a patchwork of interpretations of the extent to which companies can process private and public data.
Loubna Bouarfa, the CEO of health care firm OKRA Technologies and a member of the high-level expert group, said that data barriers between European countries “are making it very hard” for entrepreneurs to fully exploit the potential of AI technology.
European leaders such as Commission Vice President Andrus Ansip have been signaling to the industry that they’re aware of the situation. Ansip announced in December a plan to create “common data spaces in areas such as health, energy or manufacturing, to aggregate data for public sector and for business-to-business.” His colleague, Competition Commissioner Margrethe Vestager, added last month that “it is not enough that you want to do [AI] in a way that corresponds to our basic values — you also need the raw material.”
The bloc needs to act fast, Bouarfa said. Time is running out, she warned: “Europe is falling behind on AI, and we do really need to act quickly.”
Companies like her startup are squeezed on one side by American firms, which U.S. President Donald Trump’s administration has signaled have little to fear from government regulation.
On the other side, they face growing competition from China, where companies have access to an internal market of 1.4 billion people protected by scant privacy rights.
That’s the reality European companies are confronted with as they seek to incorporate ethics — and incoming regulation — into their AI strategies.
Ulrike Franke, a policy fellow at the European Council on Foreign Relations, quoted an analyst in Brussels who recently joked that in the EU, “ethical AI is the new Green” — something everyone can rally behind.
But she added that Europe will only be able to push its AI standards globally if its ethical ambitions are accompanied by efforts to boost a top-notch AI industry across the bloc.
“It’s absurd to believe that you can become world leader in ethical AI before becoming world leader in AI first,” Franke said.
This article is part of POLITICO’s premium Tech policy coverage: Pro Technology.