Situational awareness is the key to unlocking the full potential of automation

Blog post
Sauli Eloranta, Fabrice Saffre

Picture a savannah in afternoon heat. You want to get to the nearest watering hole to have a drink. On your way there, you spy a leopard napping in a tree and decide to take a detour just to be safe. We are the descendants of those who decided to take that detour. Over the course of generations, humans have developed excellent situational awareness capabilities. This also means we have come to take situational awareness for granted. We are constantly gathering information from our surroundings by using our senses, processing that information and making decisions based on it. It’s a fast, seamless, even intuitive process. Now machines should be capable of doing the same.

At the moment, our society is embracing increasing levels of automation. We already share our streets with grocery delivery robots, and self-driving cars are almost here.

This means we need to replicate for our creations what we humans take for granted: we need to build situational awareness into the autonomous machines we are introducing into environments shared with people.

Unexpected events call for unexpected reactions

Good situational awareness is essential for an automated system to behave properly among humans. So-called dumb machines, such as robots on an assembly line, tend to be goal-driven. This is fine, even desirable, in a controlled environment, but it means their focus is very narrow. Our real, physical world, on the other hand, is full of unexpected events, both big and small, which more advanced, intelligent autonomous devices will need to take into account. Navigating such a world requires constantly collecting clues about events all around you; in other words, it requires developing situational awareness.

Human situational awareness is heavily based on earlier experience. How can a machine born on a production line learn from the experiences of earlier generations? This difficult question was already raised in the sci-fi classic Blade Runner. How can we simulate an optimal learning process? Situational awareness based purely on machine learning might not be sufficient, since some situations have never been encountered before. Situational awareness should be at the heart of designing intelligent systems. Unlike robots on an assembly line, the more sophisticated and autonomous systems we design need to become more human-like in order to ensure both their safety and ours.

Safe systems through thoughtful design and extensive testing

With increasing global uncertainty, situational awareness is becoming a buzzword in many different fields and spheres of society. However, it is in increasing automation that the discussion is most urgent. As we have established, good situational awareness is essential for safe autonomous operations in a dynamic and partly unpredictable environment, so if we want our new machines to be safe, we should build them with this in mind. The ethical dimension offers even more compelling reasons: machines never get tired or bored on the job, but a human air traffic controller or ship's crew might. If a technology has been proven to increase safety, we have a moral obligation to put it to use. This means using such technologies either to aid us flawed humans or even to perform some critical jobs in our place. Autonomy could free up human brain power for other meaningful work instead of boring tasks.

Another reason is that we have already started treating technology, especially vehicles, as autonomous even when it is not quite autonomous yet. This is due to a lack of proper situational awareness. That is why autonomous cars collide with ambulances, and why boats have accidents when the crew trusts the radar too much and neglects to navigate.

The strongest reason for starting a serious discussion on situational awareness is that, ultimately, machines will be making decisions for us. In the future, autonomous intelligent systems will be used not only in advisory roles but also as decision makers. We will inevitably offload more and more from humans.

This means we need to start stress-testing our design methods now. There is a need for comprehensive investigation of safety-critical situational awareness systems, which would require a methodology combining simulations, to test a large number of hypothetical scenarios, with physical experiments, to verify performance against real-world constraints. No safety administration anywhere in the world is currently able to do this with the required accuracy and confidence. VTT wants to be a forerunner in tackling this challenge.
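To make the idea of testing a large number of hypothetical scenarios concrete, here is a minimal sketch of what a simulation-based stress test might look like. It is purely illustrative and not VTT's methodology: the scenario (a pedestrian stepping out ahead of a vehicle), the braking model, and all parameter ranges are assumptions chosen for the example.

```python
import random

def simulate_scenario(rng, reaction_time=0.5, decel=6.0):
    """One hypothetical scenario: a pedestrian steps out ahead of the vehicle.

    All parameters are illustrative assumptions, not calibrated values.
    Returns True if the vehicle stops short of the pedestrian.
    """
    speed = rng.uniform(5.0, 15.0)      # vehicle speed, m/s
    distance = rng.uniform(10.0, 40.0)  # gap to the pedestrian, m
    # Distance covered while reacting, plus braking distance v^2 / (2a)
    stopping = speed * reaction_time + speed ** 2 / (2 * decel)
    return stopping < distance

def estimate_failure_rate(n_trials=100_000, seed=42):
    """Monte Carlo estimate of how often the simple policy fails."""
    rng = random.Random(seed)
    failures = sum(not simulate_scenario(rng) for _ in range(n_trials))
    return failures / n_trials
```

Running millions of such randomized scenarios gives a statistical picture of where a design fails, which can then be checked against physical experiments for the most critical cases.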

Sauli Eloranta
Fabrice Saffre
Research Professor
Our vision beyond 2030

A safe society is a wonderful thing. It should be treasured and strengthened so that known and unknown threats both in the real and virtual worlds do not jeopardise it.