3 misconceptions about artificial intelligence – AI professors share their views

News, Press release

A great deal of money is being invested in artificial intelligence, and expectations for returns are high. But which expectations are realistic, and which are not? VTT’s new AI professors, Arash Hajikhani and Samuel Marchal, set the record straight.

1. AI knows everything (or nothing)

Arash Hajikhani, Research Professor in artificial intelligence and large language models, often encounters polarized opinions: either AI is expected to know everything, or it is dismissed as generating nonsense.

“Neither is true. Everything depends on what AI is being used for,” he explains.

Hajikhani compares AI to fire: you can use it to burn down a house or to cook a steak to perfection – if you’re skilled in cooking and know how to handle fire.

The same goes for technology. There are many ways to build effective AI systems. If used properly, AI can greatly increase efficiency, especially by automating straightforward tasks. Hajikhani himself uses AI to sum up research articles, for example.

AI’s “intelligence” also depends heavily on prompting, meaning the ability to clearly communicate the desired outcome to the system. Vague commands typically produce poor results.

The training data also matters. If AI has not been fed the right information, it cannot provide it to the user.
“Garbage in, garbage out,” Hajikhani sums up.

2. AI takes away jobs

Another common belief is that AI will take away most jobs within just a few years. Samuel Marchal, newly appointed Research Professor of cybersecurity in the AI era, disagrees.

“I don’t believe things will happen as quickly as people think.”

He reminds us that machine learning systems have been under development for over 50 years. It took decades of work before AI became effective in repetitive tasks. Although major leaps have been made in recent years, so-called artificial general intelligence (AGI) is still a long way off.

According to Marchal, expectations are inflated by science fiction movies and other grand visions, with some responsibility lying with the media as well.

“Headlines usually highlight AI’s latest successes. But I can assure you that there are many, many more failed experiments. They just aren’t seen as newsworthy,” he notes.

In Marchal’s own field of cybersecurity, AI works well in areas like detecting phishing attacks, scams and malware. It also helps professionals handle massive amounts of data. 

At its best, AI is an excellent assistant for security experts – but it does not replace humans, at least not yet.

3. AI makes everything more efficient

A third misconception is that AI makes everything more efficient and should therefore be applied everywhere. In reality, successful use of AI is far from simple, and applying it indiscriminately rarely pays off.

For example, many companies have rushed to add AI-powered chatbots to their websites, only to find that the bots do not actually meet their goals.

“If the data isn’t right and the purpose hasn’t been carefully considered, the result will be a failure. Organizations must understand their own operations, and AI must be integrated into company culture,” Hajikhani says.

He emphasizes ethical and responsible use of AI. In systems handling critical information, AI’s operations must be transparent and open to intervention.

Security is particularly important in AI systems used in cybersecurity, Marchal stresses. Researchers often find more benefit in older machine learning methods than in large language models or generative AI, which are still too prone to errors.

In cybersecurity, moderate accuracy is not enough. One mistake can cause serious damage.

“AI without carefully considered security is like handing the keys of an organization to cybercriminals. If systems are built in a rush and the doors are left unlocked, the cost is not only lost data – it also means lost trust, lost money and lost control,” Marchal concludes.

Meet our team

Arash Hajikhani
Research Professor

Arash Hajikhani is an expert in evaluating AI systems and in their ethical and responsible design. He is a research professor of artificial intelligence and large language models at VTT. Arash’s previous positions include research team leader in the Foresight and Data Economy research area, as well as roles as data scientist and project manager. His research focuses on human-centred AI to support decision-making. He holds a PhD from the Software Engineering Department at LUT University, where his research focused on designing novel metrics to measure innovation from text data. Arash values the multidisciplinary environment and collaborative community at VTT and enjoys taking part in its active sports clubs.

Samuel Marchal
Research Professor

Samuel Marchal is a research professor of AI-focused cyber security at VTT. He holds a PhD in network and system security, pioneering the use of machine learning to detect phishing attacks, mitigate online deception and improve network security. Samuel served as a postdoctoral researcher and later as a research fellow at Aalto University, advancing adversarial machine learning and publishing some of the first defenses against adversarial attacks such as model stealing. He also worked as a senior data scientist at F-Secure/WithSecure. At VTT, he now focuses on AI-based security automation, AI supply-chain assurance, and securing emerging agentic systems. Outside of research, he enjoys strategy-driven pursuits such as competitive sailing in the summer and hunting in the winter – both of which demand the same analytical mindset needed to outsmart cyber attackers.
