
Feb 25, 2020 | 11 min read

Europe’s Digital Future: AI Regulation Proposal Brief [Part 1]

Pavel Kaplunou, Marketing Communications


On February 19, the European Commission, the executive branch of the European Union, released two communications and one White Paper: the communications “Shaping Europe’s Digital Future” and “A European strategy for data”, and the “White Paper on Artificial Intelligence: A European Approach to Excellence and Trust”. In this post, we condense the 14,000-word white paper into a 2,000-word summary of its key ideas.

The paper outlines the framework of standards required to uphold trustworthy use of AI and an agile approach to data use and exchange. The communication on the European strategy for data will be addressed in Part 2 of this article on Europe’s Digital Future.

According to the Commission’s press release, the five-year strategy set out in the publications will pave the way for digital transformation in Europe, while upholding the core principles of making technology work for people; creating a level and highly competitive playing field; and striving for a society based on sustainability and democracy.

While the standards are put forth by a European institution, the principles are unquestionably global. The EU proposes to approach the subject of AI regulation based on an ecosystem of excellence and an ecosystem of trust.

1. Introduction

The introductory paragraphs outline the goals of the White Paper, which aims to develop a framework of principles to govern AI regulation. It explains the need for such legislation, citing the increased use of AI technologies, linked to the overall growth of computing technologies and the abundance and availability of data. It predicts that in the future even more data will be generated by industries, businesses and the state.

As part of the overall effort to become, in the white paper’s own words, a global leader in innovation in the data economy and its applications, the technological benefits will serve society, business and the public interest.

Moreover, development of AI systems can be beneficial in meeting Sustainable Development Goals: minimizing carbon emissions, extending the use of renewable energy sources and reducing environmental impact, thus upholding the European Green Deal.

The White Paper proposes the building blocks of policy alternatives that will define the previously mentioned “ecosystem of excellence” as multi-level cooperation between the private and public sectors to motivate research and the adoption of AI-based solutions by small and medium-sized businesses. Similarly, an “ecosystem of trust” will be achieved by establishing compliance with EU rules and policies, while avoiding fragmentation of the single market.

2. Capitalizing on Strengths in Industrial and Professional Markets

Europe is well equipped to facilitate not only the consumption, but also the development and propagation of AI technologies across such sectors as manufacturing, services, automotive, healthcare, energy, financial services, and agriculture.

The EU’s technological capacity will enable it to thrive in a data economy that supports the building of trustworthy AI-enabling data pools.

Europe will continue strengthening its position in the technological value chain, by further growing its hardware production, software application development, e-government and “intelligent enterprise” initiatives. 

Funding for research and innovation in AI technology has risen to 1.5bln euro (approx. 1.64bln dollars) over the past three years. While this marks a 70% increase compared to the previous period, it remains small relative to, for example, the US and Asia, where investments are as much as four times higher.

3. Seizing the opportunities ahead: the next data wave

Europe stands at a competitive disadvantage when it comes to consumer applications and online platforms. A shift in the way data is stored and processed, from data centres to the edge, could be a pivotal point, or the next wave of technological change, that can be leveraged to gain leadership in the field. Europe is well placed for this shift thanks to its current strengths in low-power electronics and neuromorphic solutions, as well as its provision of quantum-computing testing facilities.

4. An ecosystem of excellence


Following a Coordinated Plan adopted in December 2018, the EU Member States will act on the 70 adopted joint actions to foster efficient cooperation in the research, innovation and deployment of technologies, spanning until 2027. The EU also plans to attract over 20bln euro of investment in AI over the next ten years, and to address environmental and societal well-being in order to combat climate change and environmental degradation.

Proposed action: Produce amendment suggestions to incorporate AI solutions to these problems in a revised Coordinated Plan before the end of 2020.


The fragmented nature of today’s centers of competence and the lack of proper connections and talent networks currently prevent Europe from being a unified source of innovation.

Proposed action: Establish centers of excellence that assemble research talent and provide testing facilities, supported by multiple investment sources and a potential new legal framework.


Filling competency shortages and upskilling the workforce for AI-led transformation is a priority.

Proposed action: Support masters programmes in AI through the Advanced Skills pillar of the Digital Europe Programme, delivered by top academics at higher-learning establishments.


Addresses the creation of Digital Innovation Hubs and digital AI-on-demand platforms.

Proposed action: Ensure that at least one Digital Innovation Hub per Member State has a high degree of specialisation in AI. Support Digital Innovation Hubs through the Digital Europe Programme. Supply equity financing in Q1 of 2020 to support innovative developments in AI.


Private-sector financial involvement in research and innovation remains below the required level.

Proposed action: Set up a new public private partnership in AI, data and robotics. Establish cooperation with other such partnerships and work with testing facilities and Digital Innovation Hubs.


Rapid deployment of AI-based systems and services in the public sector.

Proposed action: Begin open sector dialogues to develop an ‘Adopt AI programme’ for public procurement processes.


Large volumes of data will have to be generated in order to successfully develop AI technologies. This creates the need to establish secure data-management practices and compliance with the FAIR principles (findable, accessible, interoperable and reusable).


The EU has launched discussions around guidelines for the ethical use of AI, which have laid the foundation for standards adopted by international establishments such as UNESCO, the OECD, the WTO and the ITU.

5. An ecosystem of trust: regulatory framework for AI

Socioeconomic aspects of AI use were previously addressed in the Commission’s 2018 AI strategy, under which it drew up a Coordinated Plan with the Member States to synchronize their approaches and strategies. Guidelines on trustworthy AI were published by the Commission’s expert group in 2019.

The 7 key requirements in the Guidelines:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Societal and environmental wellbeing
  • Accountability

While the current framework remains non-binding, a comprehensive regulatory framework could also drive forward Europe’s internal market and AI-adoption.


Key risks involve material and immaterial loss, as well as violations of fundamental rights, such as personal data and privacy protection, among many others.

Abuse of AI-powered systems is a key point: mass surveillance and intrusive monitoring are major concerns. To protect personal and political freedoms, AI applications are to be designed in a way that allows for human intervention in their processes.

AI’s unpredictable and complex behavior can open the door to bias and discrimination. This requires the implementation of checks and balances to govern compliance with EU laws. At the same time, this may expose the problem of underskilled authorities, who will be unable to perform inspections on technological systems as advanced as AI.

Under the terms of the Product Liability Directive, a manufacturer of AI systems may escape accountability for the damage caused by their product, because the causal link may be much harder to establish than for other products.


While the EU has many directives and regulations pertaining to equality in the personal and professional spheres of life, and already pioneers data-protection regulation with the GDPR, further assessments are needed to evaluate whether these also cover AI involvement.

The EC outlines the following risks that need to be addressed:

  • Effective application and enforcement of existing EU and national legislation: primarily addressing transparency and liability limitations in relation to AI technologies.
  • Limitations of scope of existing EU legislation: clarifying standalone software safety rules and compliance, especially with products that receive updates post market launch.
  • Changing functionality of AI systems: addressing safety risks of software and machine learning technologies to products already on the market. 
  • Uncertainty as regards the allocation of responsibilities between different economic operators in the supply chain: liability definitions for non-producer added software integrations to products already on the market.
  • Changes to the concept of safety: enhancing evidence cases on potential risks of AI applications to assess and address the AI threat landscape.


The scope of a regulatory framework begins with a proper definition of AI, its elements and its applications. To avoid being prescriptive, the EC recommends adopting a risk-based approach, wherein any AI application is classified as low-risk or high-risk according to its use case.

A high-risk application is one that (a) relates to a sector with significant potential risks and (b) is used in a manner likely to incur risks (specifically legal effects, or material or immaterial damage to individuals or legal entities). Irrespective of the sector, employment processes and remote surveillance are always to be deemed high-risk.
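For illustration only, the two-condition rule above can be sketched as a small decision function. This is not code from the white paper; the sector and use-case names are hypothetical placeholders, and the actual sector lists would be defined in the regulation itself.

```python
# Illustrative sketch of the EC's proposed cumulative, risk-based test.
# Sector and use-case names below are invented examples, not official lists.

HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public_sector"}

# Certain use cases are deemed high-risk irrespective of sector.
ALWAYS_HIGH_RISK_USES = {"employment_screening", "remote_surveillance"}

def classify_ai_application(sector: str, use: str, use_incurs_significant_risk: bool) -> str:
    """Classify an AI application as 'high-risk' or 'low-risk'.

    High-risk requires BOTH a high-risk sector AND a risky manner of use,
    except for use cases that are always considered high-risk.
    """
    if use in ALWAYS_HIGH_RISK_USES:
        return "high-risk"
    if sector in HIGH_RISK_SECTORS and use_incurs_significant_risk:
        return "high-risk"
    return "low-risk"

print(classify_ai_application("healthcare", "diagnosis_support", True))    # high-risk
print(classify_ai_application("retail", "product_recommendation", False))  # low-risk
print(classify_ai_application("retail", "employment_screening", False))    # high-risk
```

Note how the same use in a non-listed sector stays low-risk unless it falls in the always-high-risk set, which mirrors the exception carved out for employment and surveillance.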


The EC proposes the following key features for requirements towards high-risk AI applications (to be fully clarified for all actors in the future):

  • Training data – sufficiently broad training scenarios are used for risk assessment; use of sufficiently representative data sets to counteract discrimination; personal data and privacy protection outside of the GDPR and LED.
  • Keeping of records and data – accurate records of training data sets and actual data sets used, the latter, where justifiable; documentation of methodologies, processes and techniques used in developing the AI system.
  • Information provision – system capabilities and limitations; full disclosure about AI use in interactions.
  • Robustness and accuracy – reliable and intended behavior of systems at all stages; reproducible outcomes, adequate handling of errors; resilience against manipulations.
  • Human oversight – human involvement in high-risk AI systems; human-controlled output; ensured human intervention post system output; system shut down failsafes and operational constraints.
  • Specific requirements for remote biometric identification – current GDPR rules forbid processing biometric data to uniquely identify natural persons except on specific grounds, such as substantial public interest; safeguards to uphold the protection of fundamental rights.


The legal requirements of the proposed framework address developers, deployers, producers, distributors, importers, service providers, and professional or private users. The future framework, according to the Commission’s stance, shall address the actors best placed to tackle specific concerns. As for the scope of the regulation, it shall encompass all economic operators within the EU.


The subsequent regulation would be enacted by competent authorities on both a national and pan-European level, who will possess the capacity to evaluate risks. A conformity assessment would be conducted to ensure that outlined mandatory requirements are met in relation to high-risk applications.

Existing conformity mechanisms will be used, while new mechanisms may be introduced so long as they rely on objective criteria that comply with international requirements. Liability is addressed further in an accompanying report.


Operators of low- or no-risk AI applications may voluntarily undergo assessment and, as such, be awarded a quality label to foster trust and promote the technology. The process would involve the creation of a new legal instrument to establish the labelling framework.


A new unifying structure is necessary to uphold the principles of the framework and establish inter-state cooperation. This structure would serve as a hub for all AI-related legal, regulatory and research activity, and for actors across all levels of society, providing expertise on the subject matter.

6. Conclusion

AI, dubbed a strategic technology, has many benefits, from increasing wellbeing of citizens to strengthening industry, addressing climate change challenges and sustainability at all levels.

To power AI technology, Europe will have to become a global hub for data, while employing an ethical and transparent approach to everything related to its handling. The Commission expects extensive dialogue to build on the points outlined in the proposed framework.



Pavel Kaplunou, Marketing Communications

Pavel is Smart IT's Marketing Communication Manager. He oversees content creation and is in charge of the official Smart IT blog. Contact Pavel to learn about potential media and content collaborations. [email protected]