Even artificial intelligences can be vulnerable, and there are no perfect artificial intelligence applications

Blog post
VTT

Nearly all areas of life and industry aim to take advantage of artificial intelligence. Even though artificial intelligence creates new opportunities for many fields, it also creates possibilities for misuse. You should be aware that various kinds of malicious actors may attempt to attack artificial intelligence systems and make them operate in ways that serve the attackers' own purposes. This side of artificial intelligence should be studied in advance in order to avoid problems.

Perhaps the simplest way to attack an artificial intelligence is to treat it as perfectly ordinary software with weaknesses and bugs. By exploiting those weaknesses, an attacker can carry out exactly the same kinds of data breaches and other cyber attacks as through bugs in “ordinary” software. However, this is only one of the methods that can be used to attack an artificial intelligence.

Confusing identification methods

Artificial intelligence has its own special characteristics that also make other kinds of attacks against these systems possible. Because an artificial intelligence usually performs some kind of identification and then makes decisions based on it, an attacker may want to trick the artificial intelligence. This problem has been encountered in the fields of pattern and facial recognition in particular. Last year, researchers reported that they had tricked Google’s image recognition algorithm into classifying a turtle as a rifle[1]. As for facial recognition, makeup and hairstyles that fool facial recognition algorithms[2] have been developed.

Of course, people also make mistakes in identifying objects or faces, but the methods used for identification by an artificial intelligence are very different. This means that the errors made by an artificial intelligence seem bizarre to humans, because even small children can tell a turtle from a rifle, and these camouflage methods do not work against people. In an automated environment, in which artificial intelligence makes the decisions, such deceptions can be successful and may help the attacker.
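
The turtle experiment relied on a carefully crafted, 3D-printed adversarial object, but the underlying idea can be illustrated with a much simpler sketch. The following hypothetical Python example (assuming a PyTorch image classifier; the names are placeholders, not code from the cited studies) shows the well-known Fast Gradient Sign Method, which nudges each pixel slightly in the direction that most increases the model's error:

    import torch
    import torch.nn as nn

    def fgsm_perturb(model, image, true_label, epsilon=0.03):
        # Compute the loss of the current prediction and follow its gradient.
        image = image.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(image), true_label)
        loss.backward()
        # A tiny step per pixel is usually invisible to a human,
        # yet it can be enough to change the classifier's decision.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

Even such a simple perturbation can change the predicted class, which is why automated recognition systems need to be tested against deliberately manipulated inputs, not just normal data.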

Tampering with the training material

Tricking an artificial intelligence is not the only attack method. Most artificial intelligence methods require a training phase, in which the method tries to learn the desired task, such as identifying a cat in an image, as accurately as possible. Several different training methods exist, but what if the attacker is able to access the training material and alter it, either to benefit the attacker or to create otherwise undesirable results? One example of such an attack was seen a few years ago, when Microsoft launched an artificial intelligence bot called Tay on the Twitter service[3]. Because Tay learned from discussions, and the people talking with it made rude, racist and misogynist comments on purpose, Tay also started to produce similar text. Naturally, a cunning attacker may try to alter the artificial intelligence’s training material only in minor ways that are as inconspicuous as possible, yet still provide the attacker with the desired benefits when the artificial intelligence is finally deployed.
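
Tay learned from live conversations, but the same risk applies to any training pipeline an attacker can influence. As a toy illustration (not the mechanism of the cited incident), the hypothetical Python sketch below flips the labels of a small fraction of training examples, a change that is easy to overlook in a large data set yet biases the resulting model:

    import numpy as np

    def poison_labels(y_train, target_class, poison_class, fraction=0.05, seed=0):
        # Relabel a small, randomly chosen share of one class as another
        # before training; the model then learns the attacker's bias.
        rng = np.random.default_rng(seed)
        y_poisoned = y_train.copy()
        candidates = np.flatnonzero(y_train == target_class)
        n_flip = max(1, int(fraction * len(candidates)))
        flip_idx = rng.choice(candidates, size=n_flip, replace=False)
        y_poisoned[flip_idx] = poison_class
        return y_poisoned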

The training material also involves a different kind of threat. It is usually assumed that after the training phase, the artificial intelligence itself no longer contains any individual parts of the training material. This is important for the protection of privacy in, for example, medical applications, where the artificial intelligence is trained on material such as personal information about many patients. Nevertheless, methods have been proposed for extracting such potentially sensitive training data from the trained model[4].
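
The methods proposed in the cited work are considerably more sophisticated, but the basic intuition behind one class of such attacks, membership inference, can be sketched very simply. The hypothetical Python snippet below assumes a scikit-learn-style classifier with a predict_proba method and exploits the fact that models are often noticeably more confident on records they were trained on:

    def likely_training_members(model, X, threshold=0.95):
        # Unusually confident predictions hint that a record may have
        # been part of the (possibly sensitive) training material.
        probabilities = model.predict_proba(X)
        confidence = probabilities.max(axis=1)
        return confidence >= threshold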

The information security of artificial intelligence applications must be considered

In addition to its benefits, artificial intelligence also opens a new door to attackers, as the examples above show. Therefore, you should consider in advance how to protect your own artificial intelligence system from different kinds of attacks. As with the benefits of artificial intelligence, only the tip of the iceberg of these attacks is visible so far, and the future will bring new and unpredictable developments.

In fact, the developers of artificial intelligence applications should ensure that their applications are as secure as possible by designing, implementing and testing them from the perspective of information security as well. In addition, the users of artificial intelligence should consider what kind of misuse of the system various actors may attempt, and what benefits they might gain from that misuse. Considering these issues makes it possible to design ways to detect and prevent misuse.
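
One concrete, if simplified, example of what detecting misuse can mean in practice is monitoring the inputs a deployed model receives and flagging those that look nothing like the data it was built for. The sketch below is only one possible starting point; it assumes tabular input features and uses scikit-learn's IsolationForest as the anomaly detector:

    from sklearn.ensemble import IsolationForest

    def build_input_monitor(X_reference):
        # Fit a simple anomaly detector on inputs the system normally receives.
        return IsolationForest(random_state=0).fit(X_reference)

    # Hypothetical usage: inputs flagged with -1 can be logged or rejected
    # before they ever reach the model.
    # monitor = build_input_monitor(X_normal_traffic)
    # flags = monitor.predict(X_incoming)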

[1] https://www.theverge.com/2017/11/2/16597276/google-ai-image-attacks-adversarial-turtle-rifle-3d-printed
[2] https://cvdazzle.com/
[3] https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
[4] https://arxiv.org/pdf/1709.07886.pdf
