Europe is often described as being “behind” in artificial intelligence because of regulation. The AI Act, GDPR, and a dense compliance landscape are frequently framed as brakes on innovation. Our recent study, commissioned by the European Commission, suggests a different diagnosis.
Europe’s core bottleneck is not a lack of AI capability or regulation, but access to high-quality, interoperable, and legally usable data, combined with fragmented compliance systems and uneven global data governance.
This distinction matters. If the problem is regulation alone, the solution is deregulation. If the problem is data, the solution lies in governance, infrastructure, and practical experimentation.
Data is the real frontier for Generative AI
Europe has strong research capacity, a vibrant startup ecosystem, and world-leading expertise in applied AI. Yet most European Generative AI companies operate at the application layer rather than developing foundational models. One reason is clear: data access.
Our study shows that European AI developers face persistent barriers in accessing multilingual, domain-specific, and legally shareable datasets. These barriers are legal (uncertain GDPR interpretation, fragmented enforcement), technical (lack of interoperability and standards), and economic (high licensing and compliance costs). For SMEs in particular, the burden is often prohibitive.
Synthetic data and privacy-enhancing technologies offer part of the answer, but only if they are supported by clear definitions, evaluation standards, and regulatory confidence.
Testing governance through use cases
Rather than stopping at high-level recommendations, we focused on real-world use cases where Europe can test and strengthen its data governance and AI capabilities in practice.
The study identifies six illustrative cases that cut across sectors and borders:
- Digital Battery Passports, enabling traceability and sustainability reporting across global value chains
- Federated learning for rare-disease genomics, allowing health research without sharing sensitive patient data
- Interoperable safety-data exchange for autonomous vehicles, requiring trusted neutral data intermediaries
- Deforestation-free supply-chain traceability, balancing inclusiveness with regulatory demands
- Privacy-enhancing Passenger Name Record (PNR) exchange, combining security with fundamental rights
- Federated digital twins for aerospace, where industrial collaboration meets IP protection challenges
These cases show that data governance is not an abstract policy debate. It is something that must work operationally, across jurisdictions, organisations, and technologies.
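To make one of these concrete, the rare-disease genomics case rests on federated learning: each site trains on its own patients and only model parameters, never patient records, leave the premises. The following is a minimal sketch of federated averaging (FedAvg) in pure Python; the "sites", the 1-D linear model, and the data are hypothetical illustrations, not anything from the study.

```python
# Minimal federated averaging (FedAvg) sketch. Three hypothetical sites
# fit a shared linear model y = w*x + b on private (x, y) data; only the
# fitted parameters and a sample count are sent to the coordinator.

def local_update(w, b, data, lr=0.01, epochs=20):
    """One site's gradient-descent update on its private data."""
    n = len(data)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in data:
            err = (w * x + b) - y
            gw += err * x
            gb += err
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b, n

def federated_average(updates):
    """Coordinator step: average parameters weighted by sample counts."""
    total = sum(n for _, _, n in updates)
    w = sum(wi * n for wi, _, n in updates) / total
    b = sum(bi * n for _, bi, n in updates) / total
    return w, b

# Synthetic private datasets; the true relation is y = 2x + 1.
sites = [
    [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)],
    [(1.5, 4.0), (2.5, 6.0)],
    [(0.5, 2.0), (3.0, 7.0), (4.0, 9.0)],
]

w, b = 0.0, 0.0
for _round in range(200):  # communication rounds
    updates = [local_update(w, b, d) for d in sites]
    w, b = federated_average(updates)

print(f"w≈{w:.2f}, b≈{b:.2f}")  # converges toward w=2, b=1
```

The design point is what crosses the network: raw rows stay local, and the coordinator sees only aggregated parameters. In a real genomics deployment this would be combined with secure aggregation and differential privacy, since even model updates can leak information.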
Compliance does not have to be manual
A second key finding concerns regulatory compliance itself. Across sectors such as healthcare, energy, agriculture, and chemicals, compliance reporting is still dominated by manual processes: spreadsheets, ad-hoc documentation, and duplicated reporting.
Automated and semi-automated compliance tools already exist, but adoption remains uneven. Interoperability gaps, unclear incentives, and lack of harmonised standards slow progress. The risk is that digital compliance becomes something only large organisations can afford, leaving SMEs behind.
The opportunity is clear: automated, interoperable, and proportionate compliance infrastructures can reduce costs while improving data quality and trust.
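The core idea of such infrastructures is that compliance rules become machine-readable checks run against data, rather than a spreadsheet review. A minimal sketch, with entirely illustrative field names and rules (not drawn from any real regulation):

```python
# Sketch of rule-based compliance auditing: rules are declared as data,
# then applied uniformly to every record. All fields and thresholds are
# hypothetical examples.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    field: str
    check: Callable
    message: str

RULES = [
    Rule("supplier_id", lambda v: bool(v),
         "supplier_id must be present"),
    Rule("co2_kg", lambda v: isinstance(v, (int, float)) and v >= 0,
         "co2_kg must be a non-negative number"),
    Rule("origin_country", lambda v: isinstance(v, str) and len(v) == 2,
         "origin_country must be a 2-letter country code"),
]

def audit(records):
    """Return (record index, violation message) pairs for all failures."""
    findings = []
    for i, rec in enumerate(records):
        for rule in RULES:
            if not rule.check(rec.get(rule.field)):
                findings.append((i, rule.message))
    return findings

shipments = [
    {"supplier_id": "S-001", "co2_kg": 12.5, "origin_country": "FI"},
    {"supplier_id": "", "co2_kg": -3, "origin_country": "Finland"},
]
report = audit(shipments)
for idx, msg in report:
    print(f"record {idx}: {msg}")
```

Because the rules are declarative, the same rule set can be published by a regulator, versioned, and applied identically by a large firm or an SME, which is precisely the proportionality argument above.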
Europe’s opportunity lies in shaping the global data economy
Internationally, the data economy is fragmenting. Rights-driven, market-driven, and state-driven governance models coexist. Europe’s strength lies in offering a credible “third way”: strong safeguards combined with innovation-friendly, federated data architectures.
To realise this, the study points to practical steps: piloting interoperable data spaces, clarifying the role of synthetic data, expanding regulatory sandboxes, and investing in privacy-enhancing technologies and trusted intermediaries.
Looking beyond the obvious
If Europe wants to lead in human-centric AI, it must look beyond the obvious debate about regulation versus innovation. The real work lies in making data usable (legally, technically, and economically) and in turning governance principles into systems that work in practice.
Get to know our experts
Arash Hajikhani is an expert in evaluating AI systems and in their ethical and responsible design. He is a research professor of artificial intelligence and large language models at VTT. Arash’s previous positions include research team leader in the Foresight and Data Economy research area, as well as roles as data scientist and project manager. His research focuses on human-centred AI to support decision-making. He holds a PhD from the Software Engineering Department at LUT University, where his research focused on designing novel metrics to measure innovation from text data. Arash values the multidisciplinary environment and collaborative community at VTT and enjoys taking part in its active sports clubs.