AI-powered Competency Path is carefully evaluated for security and compliance

News

Competency Path is a new digital service designed to offer education and career planning for a broad range of users, from citizens to educators, through conversational AI. The launch of such an innovative service based on large language models and AI carries a critical responsibility: ensuring it is rigorously evaluated for security and compliance, especially under the EU AI Act. Recognising this, the project engaged VTT’s AI and cyber security experts to conduct a thorough assessment from the outset.

On the Competency Path webpage, users may describe their needs and areas of interest in their own words, eliminating the necessity to select from extensive multiple-choice options. As this type of service falls under the newly introduced EU AI Act, an independent audit was called for. The task was carried out by VTT, whose extensive experience in security of emerging technologies combined with AI utilisation and evaluation ensured a rigorous and reliable review.

“Competency Path utilises AI to generate new opportunities based on your interests, background and recommendations. The objective was to audit these AI components to ensure their trustworthiness and compliance with the AI act”, says Samuel Marchal, research professor of AI-focused cyber security at VTT and leader of the evaluation.

Digital pathways to support lifelong learning

Competency Path, funded by the EU’s Recovery and Resilience Facility (RRF), was developed through a broad cross‑government collaboration involving ministries and public agencies. Senior adviser Jenni Larjomaa of the Ministry of Education and Culture coordinated the AI audit.

She notes that digital services can open valuable pathways for exploring education and career options, but they can never replace human guidance completely.

Larjomaa appreciates the AI audit:

“The audit provided valuable support for our development work. Although we had already conducted a separate security audit, this process offered additional perspectives. It was both necessary and genuinely useful.”

6 aspects of evaluation

The AI Act sets out stringent transparency requirements for AI systems, depending on their risk category. In practice, this means ensuring that users understand when they are interacting with an AI system and in what context. People should always be aware when AI is influencing the interaction.

While the AI Act defines a broad set of requirements, it is not realistic to address all of them to the same extent in every system. Requirements need to be prioritised based on how the system is used and the risks it may pose. 

Marchal emphasises that understanding which aspects are most critical in a given context is a central part of any meaningful evaluation and a prerequisite for making practical, actionable recommendations.

In its assessment, VTT evaluated the service from six key perspectives. These include:

  • ethical considerations
  • sustainability of AI use
  • security, robustness and reliability
  • data privacy and data quality assurance
  • transparency towards users
  • compliance with EU regulations.

Together, these aspects form the basis of VTT’s evaluation approach and support recommendations for the responsible use of AI.

“We have a solid methodology for this type of assessment and can provide well‑grounded, trustworthy recommendations. At this stage, it is not about certifications but about guidance and legislative alignment. In emerging areas where no established methodologies or technical standards yet exist, we focus on identifying and recommending best practices”, Marchal says.

Security, robustness and reliability in focus

According to Marchal, security, robustness and reliability are among the most complex aspects of AI systems to assess. They are often not the primary concern in development, as improving them can be costly and does not necessarily bring immediate, visible value to users. 

“These aspects are critical, as there are many ways AI systems can be misused or manipulated, for example through techniques such as prompt injection. As AI systems evolve, new types of threats also emerge, requiring continuous monitoring, updates to defensive measures and regular security assessment”, Marchal reminds.

Another key learning relates to the timing of AI evaluations. In many cases, assessments are carried out only at the end of the development process, which limits their impact. An evaluation conducted at this stage captures a snapshot of the system, rather than guiding its design choices. 

Marchal notes that a more effective approach would be a two‑stage process, with a preliminary assessment based on the system design, followed by a final evaluation of the implemented solution. This would allow potential issues to be identified earlier and addressed more systematically.

The development of Competency Path was the responsibility of "digital service package for continuous learning", which was a joint project between the Ministry of Education and Culture and the Ministry of Economic Affairs and Employment from 2021 to 2025. The Finnish National Agency for Education, KEHA Centre, higher education institutions through the Digivision 2030 project and the Service Centre for Continuous Learning and Employment were closely involved in the development work. The project was funded by the EU’s Recovery and Resilience Facility (RRF).

What is the EU AI Act? The EU AI Act is the world's first comprehensive AI regulation, introducing a risk-based framework to protect fundamental rights and safety while supporting innovation.

How it works? AI systems are classified into four risk tiers: banned practices (e.g. manipulative systems or biometric surveillance); high-risk systems (e.g. in recruitment or healthcare) facing strict requirements on transparency and oversight; limited-risk systems subject to transparency obligations; and minimal-risk systems that remain largely unregulated.

Who it applies to? Any EU or non-EU organisation whose AI systems are placed on the EU market or whose outputs are used in the EU.

Timeline? In force since August 2024, with bans on prohibited AI from February 2025, obligations for general-purpose AI models from August 2025, most high-risk requirements from August 2026, and legacy regulated-product systems until 2027.

Why it matters? The EU AI Act sets new expectations for governance, transparency and accountability – comparable in significance to GDPR for data protection.

Meet our expert

Samuel Marchal
Samuel Marchal
Research Professor

Samuel Marchal is a research professor of AI-focused cyber security at VTT. He holds a PhD in network and system security, pioneering the use of machine learning to detect phishing attacks, mitigate online deception and improve network security. Samuel served as a postdoctoral researcher and later as a research fellow at Aalto University, advancing adversarial machine learning and publishing some of the first defenses against adversarial attacks such as model stealing. He also worked as a senior data scientist at F-Secure/WithSecure. At VTT, he now focuses on AI-based security automation, AI supply-chain assurance, and securing emerging agentic systems. Outside of research, he enjoys strategy-driven pursuits such as competitive sailing in the summer and hunting in the winter – both of which demand the same analytical mindset needed to outsmart cyber attackers.

Continue reading
Share
Samuel Marchal
Samuel Marchal
Research Professor