Will artificial intelligence remain under human control?

How can one communicate fluently with artificial intelligence? Can one cooperate with artificial intelligence?

Existing artificial intelligence (AI) systems based on machine learning are often independent actors that inform people of their conclusions but otherwise interact with people on a very limited scale. AI is being increasingly introduced not only in services accessible via the internet, but also in mobile machines, such as autonomous cars and robots. We should consider how to ensure that AI always remains under human control, and how humans can, and should be able to, interact with AI.

Verbal and non-verbal communication

In technology trend analyses, the interactive properties of AI have been identified as the next major step in its development. Dialogical interaction does not require the user to seek out and learn commands; instead, the correct function is negotiated through free dialogue with the machine. Interaction can be supplemented by non-verbal communication, so that the machine identifies and reacts to the person's emotional state, such as confusion. A machine can learn to identify individuals and adjust its operation according to which matters the person is or is not familiar with, and how he or she prefers to operate. Personal virtual assistants, such as Apple's Siri, strive to establish a relationship with their owner and learn their preferences so that, with time, they can predict the person's needs and offer assistance even before the person asks for it.

On the internet, you now often encounter chatbots. They are already relatively clever, and when dealing with them you may not notice at first that you are not talking to a real human being. A chatbot's ability to converse is based on the fact that it knows very well the limited service area within which it operates, and it has learned to predict what kinds of questions people may have. Every now and then, a chatbot may seem a little rude. This probably derives from the fact that chatbots are programmed by people, who transfer their own manners to the robot.

Interest in AI solutions where humans and AI collaborate with each other is increasing. Collective human effort can be used, for example, to collect data or interpret images in solutions where a large group of people and AI form a collectively functioning entity. This kind of collective intelligence has been used for purposes such as digitising old texts. The human eye is unmatched at recognising words, even when they are written in unfamiliar lettering. When AI carries out the easy text recognition tasks and leaves the unclear cases to people, the work advances quickly with such collective power.
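The division of labour described above can be sketched as a simple confidence threshold: the machine accepts its confident guesses automatically and queues the uncertain ones for human readers. The function name, the example data and the 0.9 threshold are illustrative assumptions, not part of any specific digitisation system.

```python
# Route OCR guesses: confident ones are accepted automatically,
# uncertain ones are queued for human volunteers.
# The threshold value 0.9 is an illustrative assumption.

def route_guesses(guesses, threshold=0.9):
    """Split (word, confidence) pairs into auto-accepted words
    and cases that need a human eye."""
    accepted, needs_human = [], []
    for word, confidence in guesses:
        if confidence >= threshold:
            accepted.append(word)
        else:
            needs_human.append(word)
    return accepted, needs_human

# Invented example: OCR output from an old digitised text.
guesses = [("the", 0.99), ("parish", 0.97), ("reg1ster", 0.42), ("of", 0.98)]
auto, queue = route_guesses(guesses)
print(auto)   # → ['the', 'parish', 'of']  (handled by the machine)
print(queue)  # → ['reg1ster']             (sent to human readers)
```

In a real crowdsourcing pipeline the threshold would be tuned against the recogniser's actual error rates, but the routing idea stays the same.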

Fluent interaction requires learning and participation

Fluent interaction between humans and AI still requires a lot of development in many areas. In the future, we will see an increasing number of work teams consisting of humans and robots. A robot can assist humans in many kinds of maintenance and service tasks. Fluent interaction is based on AI, with the help of which the robot interprets its environment and the humans around it. Recognising one another's intentions plays a key role: a human must be able to anticipate the robot's actions, and the robot must likewise be able to anticipate human actions. Dialogical interaction solutions are needed in this field as well.

Autonomous cars and other vehicles largely function on their own, but when they encounter a problematic situation, they may well need human assistance. In such a situation, it helps if the machine has kept the human up to date on what is going on, so that he or she can quickly resolve the problem. Indicating and recognising intentions is also important for bystanders: when pedestrians encounter an autonomous car, how can they be sure that the car has seen them and will stop at a pedestrian crossing to give way to them? How do you establish eye contact with an autonomous car?

Different smart services at home and in offices strive to fulfil people's wishes and predict their desires. Often such services remain unnoticed, in which case it may remain unclear why the air conditioning is blowing at full blast or why the temperature will not rise. An easy interaction channel is needed, so that people can find out why things are going the way they are, and so that they can influence matters.

AI is not infallible: it can make mistakes and it may have faults. Once humans learn to understand the limitations of AI and the way it draws conclusions and functions, interaction between them will become easier. When people understand the basics of how AI functions, they can put themselves on its level, in the same way as people naturally tune in to the level of the person they are talking with. It is important to develop AI solutions in such a manner that the people who will work with AI can participate in the design of the solutions.

Read more: VTT and Smart City

Eija Kaasinen
Senior Scientist, VTT



How will we manage with artificial intelligence in the future?

What is machine learning? Why does artificial intelligence draw conclusions differently than humans do? How does artificial intelligence become superintelligence?

Early this year, I spent a night at a big hotel in Berlin. When I stepped into my room, it felt quite cool inside. There was a sticker by the door, explaining that the hotel had introduced a "Smart climate control" system and that I could adjust the temperature to the desired level through my TV. I turned on the TV and navigated, through various turns, to the climate control page. And there it was: the present temperature was 18 degrees, and the target temperature set by the previous guest was 25. I set the target to 22 degrees and went out to have dinner. When I returned, the temperature had climbed to 19 degrees, probably thanks to the PC I had left on in the room. It still felt quite cool, so I called the hotel reception for help. The help soon arrived: a janitor brought me an old-style fan heater. I could not keep the noisy fan on at night, so the temperature dropped back to around 18 degrees. In the morning, however, I woke up well rested after a good night's sleep. After all, you sleep better in a cool room.

This left me wondering whether the smart climate control was smart enough to understand, better than I did, what the ideal temperature for me was. I would still have appreciated some kind of explanation, because a "smart" system that does what it pleases, without giving a human any say, left me feeling powerless. The hotel staff had also clearly resigned themselves to the smart climate control and did not even try to fix the system in my room, resorting instead to a good old fan heater. Should a genuinely smart system not keep people up to date on the decisions it has made, and tell them what it is aiming at? And if it does not function, or cannot fulfil a person's wishes, should it not also give a reason?

From artificial intelligence to superintelligence

Artificial intelligence (AI) has been studied for decades, but it is now experiencing a strong renaissance. Earlier attempts to bring all expert knowledge on a subject into a single machine collapsed under their own impossibility. Today, the prevailing trend is the development of AI based on machine learning, where the idea is that the machine learns little by little while being taught, but also on its own. Machine learning is well suited to analysing large masses of data and to supporting people in data-based decision-making. In medicine, for example, AI allows the examination of different measurement data, and the machine can draw connections between data sets. AI can therefore be used for purposes such as forecasting the development of a disease by comparing a patient's data to data on earlier patients. It is typical of machine learning that the result is not exact; it is a probability-based forecast. That is why a machine cannot give the kind of detailed explanations for its conclusions that a human expert can.
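The idea of comparing a patient's data to earlier patients, and obtaining a probability rather than a yes-or-no answer, can be illustrated with a minimal nearest-neighbour sketch. The feature names, the data and the choice of three neighbours are invented for illustration and do not describe any real medical system.

```python
import math

def forecast_probability(patient, earlier_patients, k=3):
    """Probability-style forecast: the share of the k most similar
    earlier patients whose disease progressed (outcome == 1)."""
    by_distance = sorted(
        earlier_patients,
        key=lambda p: math.dist(patient, p["features"]),
    )
    nearest = by_distance[:k]
    return sum(p["outcome"] for p in nearest) / k

# Invented records: (age, blood marker) and whether the disease progressed.
history = [
    {"features": (50, 1.2), "outcome": 1},
    {"features": (52, 1.1), "outcome": 1},
    {"features": (48, 1.3), "outcome": 0},
    {"features": (30, 0.4), "outcome": 0},
    {"features": (28, 0.5), "outcome": 0},
]
print(forecast_probability((51, 1.2), history))
# → 0.666…  (2 of the 3 most similar earlier patients progressed)
```

The output is exactly the kind of result the text describes: a probability grounded in similar past cases, not an exact answer, and with no detailed explanation attached.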

A lot is expected of machine learning not only in medicine but also in companies' service businesses, where AI can be used to analyse machine data collected from the field and to forecast, for example, the occurrence of faults. In such applications, AI functions independently, analysing data and giving people suggestions about the next necessary maintenance measures, and even about their suitable timing, taking financial factors into account.

In addition to these positive effects, futures researchers have also been painting some very gloomy scenarios about the “superintelligence” of the future that would be able to, for example, develop its own intelligence, draw its own conclusions and generate a will of its own, and could thus get out of the hands of both its designers and users.

What would be a potential path from the present machine learning-based AI systems to such superintelligence? AI is being introduced not only to services accessible via the internet, but also to mobile machines, such as autonomous cars and robots. Would this be the right time to consider shaping future development paths so that AI is certain to remain under human control?

A clever person solves the problems a wise person knows to avoid. This old wisdom should be applied to AI as well: if AI represents cleverness and humans represent wisdom, then humans must be guaranteed a role in which they can prevent the problems that AI might cause to itself or to humans. There must be an easy connection between AI and humans, and humans must have the final decision-making power. This prevents AI from slipping out of human hands even as it learns new things.

In the next part of the blog series, I will focus more on the interaction between humans and AI.

Read more: VTT and Smart City

Eija Kaasinen
Senior Scientist, VTT



Will machines take our work? – Part 3: People as models for machines

How do artificial and human intelligence differ? Why does research of the subconscious matter when dividing work between robots and humans? Should VTT build autonomous super AI?

Rick Deckard, the main character in the film Blade Runner, kills replicants – machines that resemble humans – for a living. By the end of the film, however, Deckard, played by Harrison Ford, falls in love with a replicant. Deckard's ambivalence towards replicants reflects the current debate about artificial intelligence. Some predict huge changes and see super AI plotting, in a very human manner, to take control. Others are more sceptical: 'Replicants are like any other machine,' as Deckard himself said before he changed his mind. The machine modelled on a human is thus a classic science fiction idea. I want to enter the debate by comparing human and artificial intelligence. I discuss the argument that human intelligence is difficult to copy into machines, because our intelligence cannot be separated from its environment. In addition, I claim that the analysis of human activity would be useful when remodelling working life.

Human intelligence

Conscious language-based thinking (inner conversations) is just the tip of the human intelligence iceberg. When an expert is asked how they managed to solve a problem or conflicting situation, they often answer that they ‘just knew’. Human expertise is a combination of schooling and book learning, personal conscious thinking, and something based on learning by doing. A knack and feeling for something, an expert eye and ear, and vision are popular ways of describing the tacit knowledge and ‘feel’ we have for performing various tasks. A person works intuitively and adaptively, selectively using millions or perhaps billions of sensory cells, depending on the situation.

It could be argued that intelligence and knowledge are located in the connections between the brain and countless sensory cells and nerves, rather than simply in the brain. Experimental psychology has shown that, in many respects, human activity and decision-making are directly connected to the environment. Action does not therefore require conscious thinking. The idea that our thought processes are embedded in our experienced environment is logical, since cerebral intelligence is connected to the sensory cells, which are themselves in direct contact with the environment.

In addition to having bodily intelligence, people are able to interpret and learn meanings. We excel at this in comparison to other species, due to the way in which our ancestors gathered food. Over short distances, humans are often outpaced by their prey, but we can jog for extremely long distances. Game therefore had to be followed over very long journeys, which was done by following tracks. We interpreted signs imprinted in the environment in order to survive – reading is species-typical behaviour for us. People still read continuously today; some read their mobile phones, while others read newspapers. Because our senses are relatively dull, we cannot tell poisonous plants and fungi from edible ones by smell alone, but only by learning to distinguish the good from the bad. That is also why adults are generally able to distinguish good from bad in another sense: we have a moral conscience. It is highly apt that Adam and Eve ate from the tree of knowledge of good and evil in the Book of Genesis. Conscious thinking was essential in the everyday lives of hunter-gatherers. People paid a price for stupidity even then – our intelligence and consciousness are by-products of evolution.

Artificial intelligence

Artificial intelligence makes predictions on the basis of data and follows written instructions. Its predictive capabilities are based on neural networks in particular. Neural networks consist of mathematically interconnected nodes, i.e. neurons. Initially, the connections are typically random, but they are then strengthened and weakened through trial and error. For example, thousands of images, categorised in a manner meaningful to people, can be fed into a neural network: 'woman', 'man', 'cat', 'dog', and so on. By grouping certain features, the neural network can guess what each image contains. Right answers reinforce the pathways between network nodes that led to them; pathways that lead to wrong answers are weakened. This 'guessing machine' becomes an effective predictor after hundreds of thousands of tries. Prediction ultimately becomes so reliable that artificial intelligence can identify images, or words from a soundtrack.
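The strengthening and weakening of connections through trial and error can be seen in miniature in a single artificial neuron, a perceptron. This toy sketch, with invented two-dimensional data in place of real images, nudges its connection weights after every wrong guess and leaves them alone after a right one:

```python
# A single artificial neuron learning by trial and error:
# wrong guesses nudge the connection weights, right guesses leave them alone.

def train_perceptron(samples, epochs=20, rate=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for features, label in samples:
            total = weights[0]*features[0] + weights[1]*features[1] + bias
            guess = 1 if total > 0 else 0
            error = label - guess          # 0 if right, +/-1 if wrong
            weights[0] += rate * error * features[0]   # strengthen or weaken
            weights[1] += rate * error * features[1]
            bias += rate * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if weights[0]*features[0] + weights[1]*features[1] + bias > 0 else 0

# Invented toy data: label 1 for "large" points, 0 for "small" ones.
samples = [((2.0, 2.5), 1), ((3.0, 2.0), 1), ((0.5, 0.4), 0), ((0.2, 0.8), 0)]
w, b = train_perceptron(samples)
print([predict(w, b, f) for f, _ in samples])  # → [1, 1, 0, 0]
```

A real image classifier stacks many layers of such neurons and adjusts millions of weights over hundreds of thousands of examples, but the reinforce-or-weaken principle is the same.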

Many kinds of human activity can be performed through prediction and by following instructions. We may wonder, for example, whether artificial intelligence is capable of creative tasks. AI can be made to compose music: it can analyse the melodies of the most popular compositions and predict the catchiest ones on that basis. With very careful preparation, it could also stretch to writing some lyrics and instrumentation. But music is more than sounds: young people tend to develop their own original vibes and grooves, which irritate their elders. AI cannot truly create new musical genres that reflect the times the listeners are living through, because music cannot be separated from dance styles and the issues we consider important. And which is the creative actor here – the artificial intelligence or the software developer?

Artificial general intelligence (AGI) refers to AI capable of design, adaptation, reasoning and linguistic communication in a manner similar to humans. Definitions and proposed criteria for AGI tend to vary. A machine succeeding in making coffee in an unfamiliar flat could be a sign that we are in the presence of AGI. There is no consensus in the scientific community on whether AGI is even possible in principle. Both Microsoft and Google are running research programmes to achieve AGI. This is understandable, because sceptics rarely find their way into top positions in American blue-chip companies. These programmes will undoubtedly lead to commercially exploitable technologies, even if they do not achieve AGI itself.

Cognitive architecture and task analysis

In principle, to build a truly human robot, we need to analyse people themselves. Cognitive scientists use the concept of cognitive architecture to describe human thinking holistically, including feelings and other matters. Because people are so fiendishly complex, it seems to me that the study of general cognitive architecture is better suited to basic research of artificial intelligence than the development of AI in practice.

Technology companies can meet their needs through task analysis. When replacing employees with robots, we need to analyse what kind of work the people are actually doing. This ensures that the new robotised approach provides a result of at least the same quality and safety as the traditional way of operating. Since people will still be needed even under the new operating model, a division of labour between people and robots must be designed. Task analysis may also indicate that there is no point in robotisation at all.

There are many forms of task analysis. Cognitive task analysis involves modelling an employee’s thinking. Dozens of tools are available for this. Task analysis can also analyse an employee’s movements or the features of a work organisation. In particular, VTT uses the core task analysis method developed by now-retired research professor Leena Norros. The idea is to contrast observed working practices and challenges with the general goals and critical phases of work. The personal characteristics of employees are secondary: as ‘core’ suggests, this concerns the analysis of a task’s core features. Bearing the main goals in mind enables us to view a task’s performance from a number of perspectives – the same general goal can be achieved by human action or a robot. This makes core task analysis ideal as a design aid: it guides but does not cramp the designer’s creativity. Depending on the research questions, core task analysis can flexibly include various features of cognitive task analysis, the micro-level analysis of work practices, or the modelling of the operating environment.

It is beneficial to blur the boundary between worker and researcher when mapping cognitive processes during task analysis. Workers are seldom aware of their own mental models while working, because skills are subconscious in nature and the subconscious cannot be directly studied by transferring information from the employee to a researcher. The idea is that the researcher and worker together explore issues hidden in the subconscious, which means that the employee, in a sense, becomes both ‘teacher’ and ‘pupil’ at the same time. Workers love task analysis of this kind, because they find insights about their own work fascinating by nature. A good, practical technique for this involves watching a video of the worker in action, together with him or her.

User-centred AI research

Human and artificial intelligence are different. Even an accurate study of people will not enable us to create actors that precisely resemble humans. The quantum computers of the future may be millions of times faster than today's computers, but speed alone will not enable artificial intelligence to function as adaptively in the real world as people do. Without new kinds of breakthroughs, we would still fail to achieve seamless interaction between the complex whole of the sensory system and decision-making. As concepts, neural networks and artificial intelligence hint at the replication of human intelligence. In practice, however, it would make more sense to study the use of AI.

Without research, it is hard to figure out how different work assignments might benefit from data-based predictions. Through its object-recognition capabilities, AI will solve problems in a number of job tasks, because camera technology is already used for other purposes in many sectors. Surgeons, for example, often use a camera image when operating: in the future, AI may help to identify cancer cells and nerve paths. It remains to be explored how work-life-focused user research could provide practical tools for building neural networks. Interdisciplinary work is necessary.

Mikael Wahlström
Senior Scientist, PhD in Social Sciences (Social Psychology)


The author is exploring the division of labour between AI and people in a project on the future of seafaring. He has performed task analysis that blurs the boundaries between worker and researcher in an Academy of Finland project called WOBLE, which he led, focusing on robot-assisted surgery. The final report of the WOBLE project, in Finnish, can be found here.

Will machines take our work? – Part 2: Robot trucker at the mercy of people


Autonomous transport is on the way. Ships and cars are being fitted out to drive themselves. But is there a business in this, or will the hype fizzle out? Will people accept these machines?

Robots are always connected to people. Even a Mars rover’s tasks are planned each day by people. In this blog, I’m going to consider the relationship between people and autonomous vehicles on three interrelated levels: physical, commercial and social. Devices must work where intended, must be sellable, and must be acceptable to people.

When such devices are being used, the nature of the operating environment and the connection to people are as important as the technical features. For example, robot vehicles already perform commercial tasks well, but only within the enclosed environments of mining areas. People venture into the vicinity of these giant robot trucks only if they are sitting in a truck cabin themselves.

Money and safety at sea

Safety-critical work is work in which human life would be endangered if something went wrong. Such work is generally governed by rules; areas such as seafaring and motoring have their own sets of rules. Accidents are avoided if all parties comply with the rules and nothing surprising happens. Artificial intelligence complies with the rules set for it but cannot adapt to unforeseen circumstances. In addition, a fault or accident may originate in the AI itself. That is why control of autonomous transport devices operating among humans should not be left to AI alone: human supervision is needed.

Labour is saved when one person can supervise several devices under the direct control of artificial intelligence. Employees no longer need to be at the mercy of field conditions; the work is done in the comfort and safety of a control centre. Such work does, however, involve new challenges. The scientific community has only recently begun to discuss the so-called transparency of artificial intelligence, i.e. how easy it is for users to monitor the operations and functionality of AI. The supervision of self-learning AI – which can modify its own instructions – is particularly challenging. At the same time, there is a need to monitor and understand the operating environment of the devices controlled by AI and the operation of the sensor and communication technologies. Sensors of various kinds should be used to monitor activities in case some sensors fail or the signal is interrupted. For example, relying on a GPS signal alone is unwise, since an external actor can disrupt positioning by generating a signal stronger than the genuine satellite signal.
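One simple way to avoid relying on a single sensor is to cross-check redundant position estimates and flag the one that disagrees with the rest. This sketch, with invented one-dimensional readings and a hypothetical 50-metre tolerance, compares GPS against two independent estimates:

```python
def flag_outliers(estimates, tolerance=50.0):
    """Cross-check redundant 1-D position estimates (metres along a route).
    An estimate is flagged if it lies further than `tolerance` from the
    median of all the estimates."""
    values = sorted(estimates.values())
    mid = len(values) // 2
    median = values[mid] if len(values) % 2 else (values[mid-1] + values[mid]) / 2
    return {name: abs(value - median) > tolerance
            for name, value in estimates.items()}

# Invented readings: a spoofed GPS signal disagrees with the other sensors.
estimates = {"gps": 1800.0, "dead_reckoning": 1012.0, "radar_landmark": 1005.0}
print(flag_outliers(estimates))
# → {'gps': True, 'dead_reckoning': False, 'radar_landmark': False}
```

Real navigation systems fuse sensors with far more sophisticated filters, but even this majority-vote idea shows why a position claim from one sensor should never be trusted on its own.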

The big challenge lies in the fact that operations must be economically viable in comparison to the traditional approach. AI-controlled devices need new kinds of sensors and communication tools in order to function, so for savings to be made, paying a control room team must be clearly cheaper than paying traditional field employees. More equipment also means more potential malfunctions, and the equipment can no longer be serviced or repaired by a crew on site; instead, a technical expert must be sent into the field. Personnel costs currently account for around six percent of a ship's operating costs, but further costs are generated by the infrastructure that people require: an autonomous vessel does not need a toilet or a kitchen.

Despite the challenges, both businesses and innovations will be created

There is still no certainty about which systems for controlling ships are the most cost-effective, whether based in remote centres or on board. Seafaring is a conservative sector: attitudes to autonomous ships range from enthusiasm to scepticism. I believe that autonomous technologies will be useful. Even if commercially viable unmanned ocean-going vessels are still some way off, seafarers will soon benefit in various ways from sensor technologies and AI. Remote monitoring of ships is already happening.

For example, is it always necessary to maintain a 24-hour watch on the high seas? Fatigue, boredom and frustration all undermine safety. Perhaps it would be better if AI and the bridge systems kept watch at night, waking the crew member on watch only when necessary. In challenging conditions, new tools also provide strong support for gaining situational awareness.

Change is slow to arrive. Good task planning, in which the workers themselves must be involved, is needed. They can provide information on challenges in the operating environment that must be taken into account in the design of automated equipment.

Metro, automation and strong emotions: should we be afraid of fear itself?

My favourite transport system is the Metro in Paris. It connects people to every part of the metropolis within 45 minutes, and intuitive maps clearly show where you are going as you walk through the platform area. When I lived in Paris, Line 14 was the only unmanned metro line. Somewhat unexpectedly, stepping into the carriage made me feel anxious. The feeling faded quickly once the train lurched into motion. Among so many passengers, I understood, even subconsciously, that there was nothing to fear.

Later, I used the theoretical framework I had studied in Paris to examine what local residents in Helsinki thought about a driverless metro. The theory states that people's shared understanding of the world develops as they discuss new phenomena, and that the discussion is shaped by the existing structures of power and meaning in society. This French social psychological theory builds a bridge between society and human understanding. I found that people in Helsinki had negative attitudes towards the driverless metro, despite the media's positive discussion of the issue. On the other hand, this negativity was reduced by facts about the automated metro. The idea of an automated metro was associated with experiences of unreliable computers, unemployment and dystopian images from science fiction.

My study of automated metros provided an ideal basis for theoretical exploration but was of little practical relevance. Helsinki never got a driverless metro, and I now believe that preconceived ideas have limited influence on technology acceptance. People's opinions are ultimately formed through direct use of the tool in question. This is demonstrated by my own experiences of Line 14 of the Paris metro, and by the statistics: user experiences can be highly positive even in the face of prejudice against robot technology.

So if the devices themselves are good, we shouldn’t worry too much about people’s preconceived ideas. However, fear should be dispelled through communications. If fear of the unknown is combined with problems or accidents, disproportionate damage may be done to the reputation of technology.

Automation and those being automated

I also think that technology firms need not be too worried about their workers, who, in principle, are the ones threatened by automation. The Finnish Seafarers' Union is sceptical about autonomous ships in the same way as the metro drivers' trade union was about the driverless metro. Despite this, the drivers were very open-minded about metro automation, at least when talking to an external researcher. There was no sign of a 'rebellion'. On the other hand, the drivers were promised that they would not lose their jobs, only that their duties would change. In addition, perhaps the older drivers saw retirement approaching, while the younger ones were fascinated by being involved in a technological transition.


Mikael Wahlström
Senior Scientist, VTT


The writer studied the safety of autonomous ships as part of the AAWA project. A report of the safety analysis project completed alongside Aalto University is available here. The study on public opinion concerning Helsinki’s automated metro can be found here.

The first part of this three-part blog series discussed health care.
The last part, which will be published in February, will consider the human as a model for machines.


Business out of data in urban environments

The role of local authorities and cities is undergoing a transformation, and it is becoming more common to regard them as service platforms. One enabler of this development is the shift from closed to open systems, but new modes of operation, such as the 'city as a platform' thinking included in the Smart Tampere ecosystem, also contribute to it.

A great deal of electronic data can be collected on the behaviour and needs of municipal residents. Using artificial intelligence (AI) or augmented reality (AR) tools, such data can be utilised in decision-making and in the development of new services. With the help of refined data, the future service needs of municipal residents can be predicted, and services can be personalised according to different life situations. When someone is moving house, AI can automatically recommend the best residential area and suitable day care centres with openings, or suggest the most sensible jobs, in accordance with the user's personal interests. Cities know their residents increasingly well, and the data offers huge opportunities for different stakeholders to provide new services.

However, enterprises have been slower than expected to seize the opportunities offered by open data. User data is dispersed across various public and private digital sources, and the creation of major data-based business would require integrating data from several sources. In other words, ground rules and bold initiatives for sharing data between operators are also needed. Creating new data-based business requires examining services from the viewpoint of municipal residents, instead of using the data sources as the starting point for service development. Turku, with its 'circular economy of data' project, and Forum Virium Helsinki, with user-oriented open innovation as its mode of operation, are excellent examples of trendsetters.

Use of open data from various sources in applications and services

Open data can be used in various service contexts. Most examples of such applications can be found in financial and taxation services, such as the Budjettipeli budget game, which lets you test different models for sharing the financing burden of welfare services between public bodies and private citizens. It is based on the data resources of Statistics Finland, the National Institute for Health and Welfare and the Finnish Centre for Pensions. Many examples can also be found among map applications, such as the online and mobile service Aaltopoiju, which offers boaters and leisure seafarers exact observation and forecast data on weather phenomena such as water level and wave height. Aaltopoiju uses open data produced by the meteorological institutes of Finland, Estonia, Sweden and Germany.

The success factors of a business process based on open data

From a business perspective, it is important that applications based on open data have easy-to-use user and customer interfaces. The integration of data and information systems plays a key role in how usable the data is, and the technological solutions must support the usability of the application. In addition, safeguarding individuals' information security is a prerequisite for creating profitable business out of open data. When collecting and using municipal residents' personal data, the sensitive nature of such data must be taken into account at every stage of the data process.

Below, as an example, we have listed data initiatives related to parking and traffic, including pedestrian and bicycle traffic, that are being planned, in progress or in their final stages in various cities. In the services of Helsinki Region Transport (HRT), current issues include the launch of Länsimetro and the changes it brings, whereas in Tampere the construction of a tramline is reshaping the transport structure of the Pirkanmaa area and the business activities related to it. Identifying the critical missing pieces in services, from the point of view of those moving around city areas, can serve as a basis for planning new data initiatives. This enables more efficient creation of new, data-based business operations.


Customer-oriented and comprehensive service solutions

In urban environments, services utilising open data must be based on customers' needs, not only on the needs of individual data-based services. A lot of data is already available from various sources, but identifying critical missing data and providing it openly may create new value-creation opportunities. Data accumulated in the various phases of service use and business processes may open new opportunities, once we learn to refine it into a usable form. Therefore, the roles required for the analysis and utilisation of data (e.g. technical implementation and end use) and the operators in the overall ecosystem must be identified to enable value creation for the end user. It is also important to collect feedback on the use of applications in order to develop the services further.

Antti Ruuska
Business Development Manager, VTT
Twitter: @antti_ruuska

Salla Paajanen
Research Scientist, VTT

Katri Valkokari
Research Manager, VTT

Antti Knuuti
Key Account Manager, VTT

If you want to read more about VTT's vision regarding smart and sustainable cities, read our new white paper: Let's turn your Smart City vision into reality. Smart City development is inherently multi-technological and cross-disciplinary, and as an application-oriented research organisation VTT is an ideal partner. We work with the public sector, private companies and technology providers in research and innovation activities that expedite the development of smarter cities. We can guide you from the early phases of vision creation and concept development to the practical implementation of smart outcomes.

Will machines take our jobs? – Part 3: Human as a model for machines

How do artificial intelligence and human intelligence differ? Why does research into the subconscious matter when designing the division of labour between robots and humans? Should VTT build an independently operating superintelligence?

Rick Deckard, the protagonist of the film Blade Runner, makes his living slaughtering human-like machines known as replicants. At the end of the film, however, Deckard, played by Harrison Ford, falls in love with a replicant. Deckard's conflicted attitude towards replicants reflects today's debate on artificial intelligence. Some envision great changes and even believe that a superintelligence, scheming in a human-like way, will take power from us. There are also those who urge calm: "Replicants are like any other machine," as Deckard himself puts it before his change of heart. The human as a model for the machine is thus a classic idea familiar from old science fiction. I will bring substance to this discussion by comparing human and artificial intelligence. I will shed light on the idea that copying human intelligence for machine use is difficult, because human intelligence is not separate from its environment. I also argue that analysing human activity is useful in design work that reshapes work.


Conscious, largely language-based thinking – the internal conversation inside one's head – is only the tip of the iceberg of human intelligence. When you ask a skilled professional how they know how to solve a problem or a contradictory situation, the answer is often something like "you just know". Human competence is a combination of book knowledge, conscious reflection and that something which can only be learned by doing. Gut feeling, a professional's eye and ear, and intuition are everyday ways of describing the tacit, embodied skill involved in different tasks. Humans act intuitively and flexibly, because they draw selectively on millions, perhaps billions, of sensory cells depending on the situation.

It can thus be argued that intelligence and consciousness do not really reside in the brain, but in the connection between the brain and the almost countless sensory cells and nerves. Experimental psychology has shown that human activity and decision-making are in many respects based on a direct connection with the environment. Action therefore does not require conscious thought. The idea that human thinking is directly connected to the lived environment is logical, because the intelligence of the brain is not separate from the sensory cells, and the sensory cells are in direct contact with the lived environment.

In addition to embodied intelligence, humans have the ability to read and learn meanings. In this respect, humans excel compared to other animals. The explanation lies in how our ancestors obtained food. Over short distances, humans are usually slower than game, but we are capable of extremely long jogs. A hunted animal therefore had to be followed over a long trek, which was achieved by tracking. We interpreted meanings imprinted on the environment in order to survive; in other words, reading is species-typical behaviour for us. Even today, people read constantly – some their mobile phones, others newspapers. Our senses are rather weak, so poisonous plants and mushrooms cannot be distinguished from edible ones by smell alone, but only by learning to tell good from bad. For this reason, adult humans also have a general ability to distinguish good from evil – an ethical conscience. It is apt that, in the opening story of the Bible, Adam and Eve eat from the tree of the knowledge of good and evil. In the everyday life of the hunter-gatherer, conscious thinking was an asset. Stupidity carried a penalty even then; in other words, our intelligence and consciousness are by-products of evolution.


Artificial intelligence makes predictions based on data and follows the instructions written for it. It learns to predict particularly by means of the neural network method. A neural network consists of mathematically interconnected nodes, so-called neurons. Initially the connections are typically random, but through trial and error they are strengthened or weakened. For example, thousands of images can be fed into a neural network, classified by a human into categories meaningful to humans: "woman", "man", "cat", "dog", and so on. By grouping the features of an image, the neural network guesses what each image contains. Correct answers strengthen the paths between the network's nodes that led to the right answer, while paths leading to wrong answers are weakened. After hundreds of thousands of trials, the guessing machine becomes an efficient predictor. Eventually, the prediction is so reliable that the AI can, in practice, recognise images or words from an audio track.
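The trial-and-error adjustment described above can be sketched with a toy example: a single artificial "neuron" in plain Python that learns the logical OR function from labelled examples. The data set, learning rate and update rule here are illustrative assumptions for the sketch, not details from any particular system:

```python
import math
import random

random.seed(0)

# Toy task: learn the OR function from human-labelled examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# Start from random connection strengths, as in the text.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    """The neuron's guess: a value between 0 and 1."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# Repeated guessing: each error nudges the connection weights,
# strengthening those that point towards the right answer and
# weakening those that led astray.
lr = 0.5
for epoch in range(5000):
    for x, target in data:
        error = target - predict(x)  # positive -> strengthen, negative -> weaken
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

for x, target in data:
    print(x, round(predict(x)), target)
```

After enough trials, rounding the neuron's guess reproduces every label; real networks differ only in scale, with millions of such weights adjusted at once.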

Many kinds of human-like activity can be produced by predicting and following commands. Consider, for example, whether AI is capable of creative work. AI can be made to create music: it can analyse the melodic progressions of the most popular songs and, on that basis, predict the notes that will hook a human listener. With good design, it can probably also manage some kind of lyric-writing and arranging. But music is more than sound: young people tend to develop their own distinctive, self-named buzzes and jingles, which irritates the older population. AI is not really able to create a new musical genre that speaks to the listeners of its own era. Music is not separate from dance styles and from the topics weighing on people's minds. And which one here is the creative actor: the AI itself or the software developer?
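The melody-prediction idea can be illustrated with a minimal sketch (the example tunes and note names below are invented for illustration, not taken from any real system): count which note tends to follow each note in a set of example melodies, then generate a continuation by sampling from those counts.

```python
import random
from collections import defaultdict

# Invented example melodies standing in for "the most popular songs".
melodies = [
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "A", "G"],
    ["E", "G", "A", "G", "E"],
]

# Count transitions: how often each note follows each other note.
transitions = defaultdict(lambda: defaultdict(int))
for melody in melodies:
    for a, b in zip(melody, melody[1:]):
        transitions[a][b] += 1

def next_note(note):
    """Predict a continuation: sample in proportion to observed follow-ups."""
    options = transitions[note]
    notes = list(options)
    weights = [options[n] for n in notes]
    return random.choices(notes, weights=weights)[0]

random.seed(1)
tune = ["C"]
for _ in range(7):
    tune.append(next_note(tune[-1]))
print(tune)
```

This tiny model only ever recombines what it was fed, which is exactly the limitation raised above: it can predict a plausible next note, but it has no access to the dance styles or concerns of an era from which a genuinely new genre might emerge.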

Artificial general intelligence (AGI) refers to AI capable of human-like planning, flexibility, reasoning and linguistic communication. Definitions of AGI, and the criteria proposed for it, vary. Making coffee in an unfamiliar apartment is one task whose successful completion could be a sign of the existence of AGI. The scientific community is not unanimous on whether building AGI is possible at all. Both Microsoft and Google have research programmes aiming at AGI. This is understandable, since doubting Thomases hardly have a place at the top of large American corporations. Even if AGI is never actually achieved in these research programmes, they will certainly produce other commercially exploitable technology.

Research on cognitive architecture, and task analysis

If we want to build a truly human-like robot, we should, in principle, analyse humans themselves. Cognitive scientists use the concept of cognitive architecture to describe the entirety of human thinking, emotions and all. Because humans are devilishly complex, I believe that research on the overall cognitive architecture is better suited to basic academic research on AI than to practical AI development work.

Solutions to the needs of technology companies can be found through task analysis. When the aim is to replace a worker with a robot, we first examine what the work is like when performed by a human. This ensures that the new robotised way of operating delivers an end result at least as high in quality and safety as the traditional model. Humans are always needed for something, even in the new operating model, so in practice what is being designed is the division of labour between robot and human. The analysis may also conclude that robotisation is not worthwhile.

There are very many forms of task analysis. Cognitive task analysis examines the worker's thinking, and there are dozens of analysis tools for this. Task analysis can also examine, for example, the worker's movement trajectories or the characteristics of the work organisation. At VTT, we particularly use the core-task analysis developed by former Research Professor Leena Norros. The idea is to relate the observed working practices and challenges to the general goals and critical phases of the work. The workers' personal characteristics are secondary; in other words, as the name suggests, it is an analysis of the fundamental features of the work task. Keeping the general goals in mind makes it possible to view the performance of the task from the perspective of different operating models: the same general goal can be achieved through the actions of a robot or of a human. For this reason, core-task analysis is well suited as a design tool: it guides the designer's creativity without restricting it too much. Depending on the research questions, core-task analysis can flexibly incorporate forms of cognitive task analysis, micro-level examination of working practices, and modelling of the operating environment.

In task analysis charting cognitive matters, it is good to blur the boundary between worker and researcher. Workers are often not aware of their own thinking patterns while working, because the competence resides in the subconscious. The subconscious cannot really be studied directly by transferring knowledge from worker to researcher. The starting point is that the researcher and the worker learn together about things forgotten in the subconscious, so that the worker, in a sense, becomes both a learner and a teacher during the study. Workers are enthusiastic when task analysis is done this way, because gaining insights into one's own work is extremely interesting. One good practical technique for this is to watch video footage of the worker's work together with the worker.

User-centred AI research

Human intelligence and artificial intelligence are different, and a human-like actor cannot really be created even by studying humans. Future quantum computers may be hundreds of millions of times faster than current computers. This does not change the fact that, even in the future, AI will not be able to act flexibly in the real world the way humans do: among other things, the seamless interplay between a complex sensory whole and decision-making will still be missing, unless something clearly new is developed. As concepts, neural networks and artificial intelligence refer to replicating human intelligence, but in practice it is more sensible to study the use of AI.

The benefits of data-based predictive capability for working life will not become clear without research. AI's object-recognition capabilities will solve problems in many fields, because camera technology is already in use in many of them anyway. For example, surgeons often operate on the basis of a camera image: in the future, AI may help identify cancer cells and nerve pathways. What remains to be explored is how user research on working life can provide practical tools for building neural networks. Cross-disciplinary work is needed.

Mikael Wahlström
Senior Scientist, Doctor of Social Sciences (social psychology)


The writer works in a project studying how the division of labour between AI and humans will be organised in the seafaring of the future. He has carried out task analysis that blurs the boundary between worker and researcher in, among others, the Academy of Finland project WOBLE, which he led and which studied the work of robotic surgeons. The Finnish-language final report of the WOBLE project can be found here.

The first part of this three-part blog series discussed health care.
The last part, which will be published in January, will consider the human as a model for machines.