How can you deceive artificial intelligence?

Blog post

AI applications are becoming part of our everyday life at an accelerating pace. But do they always work correctly? Is it possible to fool them just like people?

The answer to the first question is, of course, no. Artificial intelligence systems also make mistakes and behave in inappropriate ways. If possible biases in the training data are not accounted for already in the design and training phase, the system will reproduce those biases once deployed.

The answer to the question about the possibility of deception is twofold. On the one hand, artificial intelligence systems are capable of significantly better identification and prediction than humans in many areas, such as facial recognition and lip reading. Some applications even interpret micro-expressions better than people do. In addition, artificial intelligence systems never make mistakes out of tiredness or carelessness.

On the other hand, it seems to be quite easy to fool artificial intelligence systems. Several studies show how a system that recognises images or sound can be deceived into thinking that a person or object is someone or something completely different. The same studies indicate that people would not make a similar mistake.
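The core idea behind many of these deceptions is the adversarial example: a tiny, carefully chosen perturbation of the input that flips the system's decision while remaining almost invisible to a human. As a minimal sketch, the fast gradient sign method (FGSM) is shown below on a hypothetical toy linear classifier; the weights and numbers are purely illustrative, and real attacks target deep networks in the same spirit.

```python
import numpy as np

# Hypothetical toy "classifier": predicts class 1 if w.x + b > 0.
# Real attacks work the same way against deep networks, using the
# network's gradient with respect to the input.
w = np.array([0.5, -0.3, 0.8, 0.2])
b = -0.1

def predict(x):
    return int(np.dot(w, x) + b > 0)

x = np.array([0.4, 0.1, 0.3, 0.2])  # original input, classified as 1

# Fast gradient sign method: for a linear score the gradient with
# respect to the input is just w, so step a small amount eps against
# the sign of the gradient to push the score towards the other class.
eps = 0.2
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the decision
```

No component of the input changes by more than `eps`, yet the classifier's output flips, which is exactly the kind of mistake a human observer would not make.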

Weaknesses in artificial intelligence systems should be studied in good time

A race has thus begun, similar to the one that has been going on in information and cyber security for a few decades now. New systems are created quickly, vulnerabilities are constantly found in both old and new ones, and attempts are then made to fix those vulnerabilities as well and as quickly as possible.

I believe that a hacker community, similar to the one that earlier formed around computers, will also develop around artificial intelligence systems. Those utilising AI solutions would do well to learn from that history and welcome these seekers of weaknesses. This would allow AI systems to be made better and safer, for many different purposes, at a quicker pace.

As with the search for vulnerabilities in computers, good, open tools are also needed in research into deceiving artificial intelligence. Some platforms for evaluating different attack methods have already been developed, but a comprehensive and easy-to-use open platform is still missing. I think one should be built soon, so that we have a chance to find the AI solutions that are easy to deceive in time, and to replace them with safer and better ones.
