
Europe must better exploit the potential of artificial intelligence

In a joint position paper, the German Insurance Association (GDV) and 14 other associations support the European Commission's approach to Europe-wide regulation for artificial intelligence (AI).


The potential of AI applications is far from exhausted.

A regulatory framework must, on the one hand, guard against undesirable effects of AI; on the other hand, insurers and other companies need the freedom to unleash its full potential. AI is still in its infancy: we can barely guess what innovations will make life easier for consumers in the future or what contribution AI can make to our national economy. At present, however, Europe is not fully exploiting the potential of AI and risks falling even further behind in the global technological race.

The signatories of the paper welcome the European Commission's regulatory approach, but the associations also see a need for improvement in some aspects of the proposal.

  • Only "real" artificial intelligence should be covered by the regulation. Algorithms that do not involve some form of machine learning, as well as linear models and established statistical methods, should be excluded.
  • The delineation of "high-risk" applications subject to particularly intensive regulation should be based on the purpose of the application, not on its affiliation with a particular economic sector. Otherwise, harmless applications such as a digital inbox risk being so heavily regulated in the future that they are abandoned altogether.
  • Many areas, such as insurance, are already closely regulated. This should be taken into account when creating new rules.
  • New rules should be proportionate and fit into the governance structures of EU countries.
  • By lawyers, but not only for lawyers: For developers and users of AI solutions, the new rules should be "translated" in a practical and understandable way.

German industry's demands for the regulation of artificial intelligence

Artificial intelligence (AI) is one of the key technologies of the 21st century and offers enormous economic application opportunities. It permeates almost all areas of our lives and brings with it both opportunities and risks. AI offers enormous potential for increasing innovation, growth, productivity and job creation. We are currently not fully exploiting this potential in Europe. This is where we need to get better.

The EU is in danger of falling behind in the global technological race. It is often the technology leaders who create innovation clusters in their countries and set standards. Therefore, we should set a smart framework for the development and application of AI in Europe and promote AI in a targeted manner instead of making its development and market introduction more difficult. It is crucial to ensure the right balance between limiting potential risks on the one hand and supporting technological innovation on the other.

In this context, the European Commission's risk-based approach to regulating AI is fundamentally sensible and to be welcomed. In our view, however, some of the regulations proposed by the European Commission still need to be improved:

1. Narrower definition of AI 

The definition of AI is critical to the scope of future regulation. However, the definition contained in the European Commission's AI regulation proposal is too broad, so that a large number of conventional IT systems would be improperly covered and consequently fall within the scope of regulation. The definition of AI should therefore be made more precise. In particular, algorithms that do not involve any form of machine learning or self-optimisation should by definition be exempt from AI regulation. The same is true for linear models, supporting methods from the field of explainable AI, and established statistical methods. The position of the Council of the EU goes in the right direction here. 
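The distinction the associations draw can be illustrated in code. The following is a minimal sketch, not part of the position paper, with all names, thresholds and figures invented: a hand-written rule of the kind used in conventional IT systems, contrasted with a routine that "self-optimises" its threshold from labelled examples and would thus involve a form of machine learning.

```python
# Illustrative contrast between a fixed, rule-based algorithm and a
# self-optimising one. All names and numbers are invented examples.

def rule_based_risk_flag(claim_amount: float) -> bool:
    """A conventional, hand-written rule: no learning involved.
    The threshold is fixed by a human and never changes."""
    return claim_amount > 10_000


def fit_threshold(amounts: list[float], labels: list[bool],
                  steps: int = 200, lr: float = 100.0) -> float:
    """A minimal 'self-optimising' routine: the threshold is adjusted
    from labelled examples instead of being hard-coded."""
    threshold = 0.0
    for _ in range(steps):
        for amount, label in zip(amounts, labels):
            predicted = amount > threshold
            if predicted and not label:
                threshold += lr   # flagged too eagerly: raise threshold
            elif not predicted and label:
                threshold -= lr   # missed a risky case: lower threshold
    return threshold
```

The first function is plain procedural logic; under a narrower definition it would fall outside AI regulation. The second derives its behaviour from data, which is the kind of system the regulation is meant to capture.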

2. Risk definition taking into account the purpose of use 

The list of high-risk AI applications currently only takes into account the general scope of use, but not the specific design of the AI, its specific use, or the actual risk it poses. This blanket classification results in a large number of harmless AI applications being considered high-risk. This applies, for example, to applications for increasing the efficiency of workflows, such as digital incoming mail solutions or applications in applicant management that make the review of applications more objective. 

3. Avoid double regulation 

The AI regulatory proposal affects almost all business and industrial sectors. Therefore, it is important to establish consistency with existing regulations and supervisory structures. The existing regulatory framework already provides sufficient protection for consumers and of course also applies to AI applications. New requirements and specifications through additional AI regulation could therefore lead to duplicate regulations and an inconsistent legal framework, which would not only result in additional expense and considerable legal uncertainty for undertakings, but also inhibit innovation in the long term. This is especially true for industries that are already highly regulated, such as critical infrastructure, banking, insurance, pharmaceuticals, and medical devices. 

4. Ensure proportionality of regulation

Of key importance is regulation based on the principles of necessity and proportionality, following the maxim of "as much as necessary, as little as possible". Requirements imposed on the development and operation of high-risk AI must be justified on a case-by-case basis. Some of the requirements for high-risk AI systems, however, go too far, such as the demand for error-free training data sets or the requirements for technical documentation. These need to be adapted and specified in proportion to the respective risk so that they can be implemented at reasonable expense.

5. Ensure appropriate governance structure 

To ensure an appropriate governance structure and avoid duplication of supervisory activities, the Commission proposal's recourse to existing supervisory structures and the national distribution of responsibilities is to be welcomed. Particularly in industries that are already regulated and supervised, a dual structure would not, in our view, be expedient.

6. Guidelines for developers 

Since the regulations are primarily written by and for lawyers, we would welcome it if, after the conclusion of the legislative process, the Commission presented an application-oriented guide in which the rules are "translated" in a practical and easy-to-understand fashion for developers of AI, e.g. with corresponding checklists and step-by-step instructions. Such guidelines could help developers answer questions such as when an AI application poses a high risk or how to ensure that datasets do not contain bias.