Europe’s secret weapon in the race against the U.S. and China on artificial intelligence is … ethics.

That was the message at the core of the EU’s AI strategy unveiled Wednesday and developed by a team of European Commissioners under the supervision of Commission Vice President Andrus Ansip. In its “Charter on AI Ethics,” the Commission wants to spell out how fundamental rights can be preserved as AI advances. This, the bloc believes, will boost consumer trust in European AI applications and help the Continent — which lags far behind the U.S. and China in building a state-of-the-art AI industry — catch up with its competitors.

The strategy includes an immediate boost of about 70 percent to the bloc’s annual spending on AI research and development, to around €500 million. And the Commission said it will name a high-level group of experts by this July; by the end of the year, the group will release a broad ethical framework on how to legislate AI in Europe.

The ambitious plan, however, faces two major hurdles, according to a Commission official speaking on the condition of anonymity. The first hurdle is money: Once the current seven-year budget expires at the end of 2020, the Commission suggests, the bloc’s overall investment in AI, including money from national and regional governments as well as the private sector, should be brought up to at least €20 billion per year. It’s far from certain that enough money will be put aside in the upcoming budget talks to reach this target, the official cautioned.

The second is political: Officials are well aware that if the Commission doesn’t get member countries on board with its ethics-first strategy, the plan could lead nowhere.

Aiming for EU-wide harmony

In fact, some countries are already forging ahead with their own national plans. Last month, France released its own AI strategy — which Commission officials said helped set the tone for the EU’s strategy. From Berlin to Rome to Helsinki, task forces in other governments are working on their individual strategies, as well.

While it is “laudable” that French President Emmanuel Macron and German Chancellor Angela Merkel are rolling out their own national plans to boost the development of AI, “those [initiatives] make most sense if a strong impulse from Paris and Berlin gets bundled in Brussels as an offer to all member states,” EU Budget Commissioner Günther Oettinger said Monday during a speech at the Hannover trade fair.

Industry officials echoed this call, stressing that from a business point of view, it’s crucial to make sure there are similar rules across the Continent.

“It’s understandable that every country wants to have its own AI strategy,” said Liam Benham, vice president for government and regulatory affairs in Europe at IBM, “but the minimum businesses can expect is that there is a harmonized legal framework at EU level, governing rules around things such as liability.”

Nothing illustrates that better than the development of self-driving cars, which could start to appear on European streets as early as 2021, and for which one member country after another is starting to pass its own rules.

In January 2017, the European Parliament’s legal affairs committee stressed in its report on how to update EU civil law on robotics that it was important to come up with European legal guidelines as soon as possible to prevent a hodgepodge of regulation across the Continent.

The warning fizzled. Although Parliament passed a resolution based on the report the following month, not much has happened since.

Committee Vice Chair Mady Delvaux said last month she now sees “that different member states are introducing their own regulations.” This, she added, is “just what we wanted to prevent.”

A three-part plan

The strategy has three distinct pieces: The EU will facilitate access to data for companies, boost development and set up centers to improve communication between researchers and entrepreneurs. It aims to ensure that advances in automation will not leave parts of its population without jobs, for example by implementing advanced training measures. And it wants to set ethical standards that could one day also serve as a blueprint for other regions of the world.

Business officials generally welcomed the EU’s ambitions to become a stronghold of data protection in a world that could soon be dominated by AI-powered technologies, but said they wanted a balanced approach.

“We see an opportunity here for the EU,” said IBM’s Benham. “At the same time, however, the EU should also not think that it can just tighten every screw on data policy and AI can still be successful.”

Europe — despite being home to strong basic research on AI — has been watching from the sidelines as American and Chinese tech firms battle over who will dominate artificial intelligence in the decades to come.

The U.S. remains the world leader in the field, with much of the cutting-edge expertise held by a handful of its private tech companies. China, which wants to become the world leader in AI by 2030, has caught up by boosting research, granting subsidies to companies and providing its AI industry with access to data about more than 1.4 billion citizens protected only by scant privacy laws.

The answer, however, is not copying the U.S. or Chinese approach — but making the algorithmic decision-making process at the core of AI technologies transparent to customers, the Commission argues.

Shedding light on ‘black box’ technologies

Although research on AI — technologies enabling machines to do jobs that previously required human thinking — dates back to the 1950s, only in recent years have computers become fast and stable enough, and data plentiful enough, to make the technology applicable to many aspects of life.

At the core of artificial intelligence is technology that allows computers to learn and make their own decisions by loosely mimicking the way the human brain processes patterns.

In both military and civilian contexts, such “deep learning” opens vast potential across sectors, from driverless cars to medical microscopes identifying cancer cells. But it also turns some computers into “black boxes,” making it increasingly difficult or even impossible to understand why certain decisions were made. And many AI systems mirror biases from “the real world,” where their data was collected.
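The “black box” problem is easy to see even in a small example. The following sketch, which is illustrative and not drawn from the Commission’s strategy, trains a tiny neural network in Python on a toy task; the network answers correctly, yet its learned “knowledge” is nothing but matrices of numbers that offer no human-readable reason for any single decision.

    # Illustrative sketch: a tiny neural network learns XOR, a toy task,
    # but its learned weights are opaque numbers, not human-readable reasons.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dataset: XOR. The output is 1 exactly when the two inputs differ.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Randomly initialized weights and biases for a 2-4-1 network.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):  # plain gradient descent
        hidden = sigmoid(X @ W1 + b1)                     # forward pass
        output = sigmoid(hidden @ W2 + b2)
        d_out = (output - y) * output * (1 - output)      # backpropagation
        d_hid = d_out @ W2.T * hidden * (1 - hidden)
        W2 -= 0.5 * hidden.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
        W1 -= 0.5 * X.T @ d_hid
        b1 -= 0.5 * d_hid.sum(axis=0, keepdims=True)

    print(output.round(2))  # close to [0, 1, 1, 0]: the right answers ...
    print(W1, W2)           # ... "explained" only by these opaque numbers

Scaled up from four hidden numbers to millions, this is the opacity regulators are grappling with.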

One of the central goals of the EU strategy is to provide customers with insight into such systems.

That could be easier said than done.

“Algorithmic transparency doesn’t mean [platforms] have to publish their algorithms,” Ansip said, “but ‘explainability’ is something we want to get.”

AI experts say that to achieve such explainability, companies will, indeed, have to disclose the code they’re using – and more.

Virginia Dignum, an AI researcher at the Delft University of Technology, said “transparency of AI is more than just making the algorithm transparent,” adding that companies should also have to disclose details such as which data were used to train their algorithms, which data are used to make decisions, how those data were collected, and at which points humans were involved in the decision-making.

In purely technical terms, this is not difficult to achieve, Dignum said.

“But it’s more a governance challenge, and it will imply a mind shift when it comes to how we do business and how we interact between businesses and consumers.”
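What such disclosure could look like in practice: the sketch below records the items Dignum lists (training data, decision inputs, collection method and human checkpoints) alongside a system. It is hypothetical; the field names are illustrative and the EU strategy defines no such format.

    # Hypothetical sketch of a machine-readable disclosure record, loosely
    # following the items Dignum lists above. Names are illustrative only;
    # the EU strategy defines no such format.
    from dataclasses import dataclass, field

    @dataclass
    class TransparencyRecord:
        system_name: str
        training_data: str                # which data trained the algorithm
        decision_inputs: list[str]        # which data each decision uses
        collection_method: str            # how those data were collected
        human_checkpoints: list[str] = field(default_factory=list)  # where humans step in

    record = TransparencyRecord(
        system_name="loan-screening-model",  # hypothetical example system
        training_data="anonymized 2010-2017 loan applications",
        decision_inputs=["income", "employment history", "repayment record"],
        collection_method="customer applications, collected with consent",
        human_checkpoints=["final approval reviewed by a loan officer"],
    )
    print(record)

Note that none of these fields requires publishing the algorithm itself, the distinction Ansip draws above; as Dignum says, producing such a record is a governance question more than a technical one.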

Joanna Plucinska, Laurens Cerulus and Hans von der Burchard contributed reporting. 

