Researchers in Europe are concerned: they fear falling behind their rivals from China and the US in the field of artificial intelligence (AI). In response, 550 leading European AI experts recently joined forces to set up a research association: CLAIRE (Confederation of Laboratories for Artificial Intelligence in Europe), which aims to help make Europe a competitive location for AI research. The European Union has also acknowledged the problem: “Artificial intelligence is changing the world as the steam engine and electricity once did”, said Andrus Ansip, the European Commission’s Vice-President for the Digital Single Market, in April. The European Commission had just announced EUR 1.5 billion of investment in AI-based technologies.
Early risk recognition and containment
Digitization, AI and the Internet of Things are closely connected keywords. They also raise questions about their inherent risks – questions which ultimately affect us all and, of course, our industry. Digitization is disrupting almost every area of society and the economy at an unprecedented pace. The resulting opportunities will only lead to a positive outcome if the new questions that arise are answered, and new risks recognized and contained, sooner rather than later.
Let me give you an example: insurers in Germany record over 250 occupational accidents a year involving industrial robots that result in injury or even death to employees. This did not faze the Hannover Messe, which had “Connect and collaborate” as its lead theme in April, centred on the interplay between man and machine in the digital factory. “The robot is turning into a cobot and leaving its safety enclosure”, to quote an article in our trade magazine “Positionen”, which outlines the situation under the title “Bloss nicht durchdrehen” (Just don’t go nuts). “We worry that Amazon’s Alexa could spy on us but we still put it on the kitchen table. Self-driving cars cause fatal accidents, but they will almost certainly prevail in the end.”
Simply lumping liability on the producer would hamper innovation
What does this mean for politics? For the economy and society? And for insurers? Artificial intelligence needs intelligent regulation, which includes a balanced allocation of liability. The fact is, intelligent robots, autonomous vehicles and connected products must all meet justified safety expectations. We have to keep pace with these expectations through industrial norms and standards and through product safety law. These will then add substance to the existing, proven liability law, which adequately takes into account the interests and responsibilities of both the producers and the users of AI. The current rules create a fair framework fit for tomorrow’s digital economy. Simply lumping liability onto the producer would hamper innovation and is therefore to be avoided; supplementary compulsory insurance and knee-jerk calls for compensation funds are equally unnecessary and counterproductive.
While manufacturers do have to vouch for the quality of their products, buyers and users of a product are responsible for its upkeep and proper use – and this should be reflected in how responsibility is apportioned in the event of a claim. Even if nobody is at fault and an intelligent robot, well, goes nuts, there is a rule everyone can agree to comply with: strict liability. It is already well established for, say, the owners of dogs or aircraft – and it can, indeed should, also apply to the operators of intelligent machines in future.
Our contribution to society’s acceptance of rapid technological change
The insurance industry already offers manufacturers and users of intelligent systems, autonomous vehicles and connected devices the risk protection they need – and it will continue to do so. As a result, it not only protects its clients but also ensures optimal cover for victims, thus making a major contribution to society’s acceptance of rapid technological change.
Jörg von Fürstenwerth