What is machine learning? Why does artificial intelligence draw conclusions differently than humans do? How does artificial intelligence become superintelligence?
Early this year, I spent a night at a large hotel in Berlin. When I stepped into my room, it felt quite cool. A sticker by the door stated that the hotel had introduced a "smart climate control" system and that I could set the temperature to the desired level through my TV. I turned on the TV and, after several menus, found the climate control page. There it was: the current temperature was 18 degrees, and the target set by the previous guest was 25. I set the target to 22 degrees and went out for dinner. When I returned, the temperature had climbed to 19 degrees, probably thanks to the PC I had left running in the room. It still felt quite cool, so I called the hotel reception for help. Help soon arrived: a janitor brought me an old-fashioned fan heater. I could not keep the noisy fan on at night, so the temperature dropped back to around 18 degrees. Yet in the morning I woke up well rested after a good night's sleep; after all, you sleep better in a cool environment.

This left me wondering whether the smart climate control understood better than I did what the ideal temperature for me was. Still, I would have appreciated some kind of explanation, because a "smart" system that does what it pleases, without giving a human any say, left me feeling powerless. The hotel staff had clearly resigned themselves to the smart climate control as well; they did not even try to fix the system in my room but resorted to a good old fan heater. If the system really were smart, would it not keep people informed of the decisions it has made and of what it is aiming at? And if it cannot function or fulfil people's wishes, would it not also give a reason for this?
From artificial intelligence to superintelligence
Artificial intelligence (AI) has been studied for decades, but it is now experiencing a strong renaissance. Earlier attempts to encode all expert knowledge on a subject into a single machine collapsed under their own impossibility. Today the prevailing trend is AI based on machine learning: the machine learns little by little, both when it is taught and on its own. Machine learning is well suited to analysing large masses of data and to supporting people in data-based decision-making. In medicine, for example, AI can examine different kinds of measurement data and draw connections between them. It can therefore be used for purposes such as forecasting the course of a disease by comparing a patient's data to data on earlier patients. It is typical of machine learning that the result is not exact but a probability-based forecast. That is also why a machine cannot give the kind of detailed explanation for its conclusions that a human expert can.
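The idea of a probability-based forecast can be sketched very simply: estimate a new patient's risk as the share of the most similar earlier patients who developed the disease. Everything below — the patient records, the two features, and the forecast_risk helper — is invented for illustration and does not come from any real medical system:

```python
# A minimal sketch of probability-based forecasting by comparison
# with earlier cases (k-nearest-neighbours). All data is made up.
import math

# (age, systolic blood pressure) -> 1 if the disease later developed, else 0
history = [
    ((45, 120), 0),
    ((60, 150), 1),
    ((55, 145), 1),
    ((38, 110), 0),
    ((70, 160), 1),
    ((50, 125), 0),
]

def forecast_risk(patient, history, k=3):
    """Estimate disease probability as the fraction of the k most
    similar earlier patients who developed the disease."""
    nearest = sorted(history, key=lambda rec: math.dist(rec[0], patient))[:k]
    return sum(outcome for _, outcome in nearest) / k

# The result is a probability, not a verdict: 1 of the 3 most
# similar earlier patients developed the disease.
print(forecast_risk((52, 135), history))  # prints 0.3333333333333333
```

Note how the output illustrates the article's point: the machine can report that similar patients developed the disease a third of the time, but it cannot articulate *why* in the way a human expert would.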
A lot is expected of machine learning not only in medicine but also in the service business of companies, where AI can analyse machine data collected from the field and forecast, for example, the occurrence of faults. In such applications, AI functions independently, analysing the data and suggesting to people what maintenance measures are needed next, and even when it would be financially sensible to carry them out.
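The fault-forecasting idea can be sketched with the simplest possible model: fit a linear trend to sensor readings and estimate when a fault threshold will be crossed, so maintenance can be scheduled before that point. The vibration readings, the threshold, and the hours_until_threshold helper are all invented for illustration; real condition-monitoring systems use far richer models:

```python
# A hedged sketch of predictive maintenance: extrapolate a linear
# trend in a machine's vibration level to estimate time-to-fault.
# Data and threshold are made up for illustration.

def hours_until_threshold(readings, threshold):
    """Fit a least-squares line to hourly readings and return the
    estimated hours until the threshold is reached, or None if the
    trend is flat or improving."""
    n = len(readings)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(readings) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, readings)) \
        / sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None  # no deterioration detected
    intercept = y_mean - slope * x_mean
    # Hours from the latest reading (index n-1) to the crossing point.
    return (threshold - intercept) / slope - (n - 1)

vibration = [1.0, 1.1, 1.3, 1.4, 1.6, 1.7]  # mm/s, one reading per hour
print(hours_until_threshold(vibration, threshold=3.0))  # roughly 8.8 hours
```

A real system would combine such an estimate with cost factors — spare-part prices, downtime, crew availability — to suggest not just *that* maintenance is needed but *when* it is cheapest to do.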
In addition to these positive effects, futures researchers have also been painting some very gloomy scenarios about the “superintelligence” of the future that would be able to, for example, develop its own intelligence, draw its own conclusions and generate a will of its own, and could thus get out of the hands of both its designers and users.
What would a potential path from today's machine-learning-based AI systems to such superintelligence look like? AI is being introduced not only to services accessible over the internet but also to mobile machines, such as autonomous cars and robots. Would now be the right time to steer future development paths so that AI is certain to remain under human control?
A clever person solves problems that a wise person knows to avoid. This old wisdom should be applied to AI as well: if AI represents cleverness and humans represent wisdom, then humans must be guaranteed a role in which they can prevent the problems that AI might cause to itself or to people. There must be an easy channel of communication between AI and humans, and humans must retain the final decision-making power. This keeps AI from slipping out of human hands even as it learns new things.