Open technologies will democratize AI

The ongoing digitalization and AI-driven transformation of the global economy, national economies and corporations has begun, and there is no end in sight. This change is a societal disruption with many impacts, and continuous change, development and experimentation is the new normal. To stay competitive, organizations need to continuously explore opportunities to exploit data and AI technologies, both to improve existing business processes and offerings and to find new ones.

A recent PwC report estimates that artificial intelligence (AI) could contribute up to $15.7 trillion to the global economy by 2030[i]. The same report identifies nearly 300 use cases for AI spanning business and society. Finland's goal of becoming a leader in applying AI is part of this ongoing digitalization and societal change[ii].

AI, and the information, communication and automation technologies used to realize it, are developing at a breathtaking pace. Development is so fast that education systems struggle to keep up with the rapidly changing skills needed in the labor market. Various online courses and mini-degrees have grown in popularity in response to these rapid skill development needs[iii].

The availability of open AI technologies, and the pool of experts around them, has been growing steadily over the past few years. In 2017, the GitHub community of open source software developers reached 24 million developers working across 25 million repositories of open source code[iv]. Open AI technologies have become a serious alternative to commercial AI technology offerings.

For example, Google has open-sourced the machine learning platform behind its own production services, which has created a significant developer and user community around it. In 2017, TensorFlow and TensorFlow Models were two of the top ten most active code repositories on GitHub. Several other AI technologies have also become available under open source licenses. Just under half of the 100 largest companies in the United States (by revenue) use GitHub Enterprise to build software. Furthermore, in terms of addressing the AI skills shortage, only about 5 thousand teachers and 500 thousand students worldwide were actively using GitHub in 2017.

Development of new services requires strong AI technology expertise

VTT and IBM Research – Almaden are engaged in a research exchange collaboration in Silicon Valley. The goal is to study the architecture, ecosystem and future development of open AI technologies from the viewpoint of AI systems development and engineering. Preliminary results of the work are published on a joint blog (http://opentechai.blog), and the topic is discussed at the international OpenTech AI Workshop in Helsinki.

The advantages of open AI technologies include their rapid pace of development. Research in the field of AI produces new algorithms and machine learning models, and for reproducibility of results these are often implemented and made available as open technologies first. In addition to open source code, a lot is also happening around open datasets, machine learning models, benchmarks and leaderboards. The ecosystem around open AI technologies has emerged and is evolving rapidly. This evolution is not merely worth following from the sidelines; it calls for active participation in the research, development and exploitation of open AI technologies. Clarifying the role and importance of open AI technology is wise preparation for the future for any organization.

The evolution of open AI technologies has emerged during the past few years, as a continuation of the earlier open source movement in software products. Open development of AI technologies is democratizing opportunities to exploit AI: it enables building the needed skills, sharing code and exploiting the technology independently of individual vendors in an open ecosystem. In the field of AI, too, value creation and commercial competition are shifting from software products to applications and related services. What is crucial here is strong and versatile expertise in AI technologies and the capability to apply new and rapidly evolving technology together with customers.


Daniel Pakkala
Principal Scientist, Data Driven Solutions, VTT

Jim Spohrer
Director, Cognitive Open Technologies, IBM

For more information:

http://opentechai.blog

https://developer.ibm.com/opentech/2018/01/29/helsinki-march-2018-opentech-ai-workshop/

[i] PwC (2017) Artificial Intelligence Study. https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html
[ii] VTT (2018) Finland AI Strategy. http://www.vttresearch.com/Impulse/Pages/Finland-seeking-top-spot-in-application-of-artificial-intelligence-AI.aspx
[iii] For example, for a freely available, easy to access online set of courses see http://cognitiveclass.ai
[iv] GitHub (2017) State of the Octoverse. https://octoverse.github.com/

 

Will artificial intelligence remain under human control?

How can one communicate fluently with artificial intelligence? Can one cooperate with artificial intelligence?

Existing artificial intelligence (AI) systems based on machine learning are often independent actors that inform people of their conclusions but otherwise interact with people only on a very limited scale. AI is increasingly being introduced not only in services accessible via the internet, but also in mobile machines, such as autonomous cars and robots. We should consider how to ensure that AI always remains under human control, and how humans can, and should be able to, interact with AI.

Verbal and non-verbal communication

In technology trend analyses, the interactive properties of AI have been identified as the next major step in its development. Dialogical interaction does not require the user to find and learn commands; instead, the correct function is negotiated through free dialogue with the machine. Interaction can be supplemented with non-verbal communication, so that the machine identifies and reacts to a person's emotional state, such as confusion. A machine can learn to identify individuals and adjust its operation according to which matters a person is and is not familiar with, and how he or she prefers to operate. Personal virtual assistants, such as Apple's Siri, strive to establish a relationship with their owner and learn their preferences so that, with time, they can predict the person's needs and offer assistance even before the person asks for it.

On the internet you now often encounter chatbots. They are already relatively clever, and when dealing with them you may not at first notice that you are not talking to a real human being. A chatbot's ability to converse is based on the fact that it knows the limited service area within which it operates very well, and it has learned to predict what kinds of questions people may have. Every now and then a chatbot may seem a little rude; this probably stems from the fact that they are programmed by people, who transfer their own manners to the robot.
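Today's chatbots typically rely on trained language models, but the basic idea of matching a question against a known, narrow set of topics can be illustrated with a minimal sketch. The intents, keywords and replies below are invented purely for illustration and do not describe any particular chatbot product.

```python
# A toy, keyword-based intent matcher for a domain-limited chatbot.
# The intents and replies are hypothetical examples.

INTENTS = {
    "opening_hours": {
        "keywords": {"open", "hours", "closing", "time"},
        "reply": "We are open every day from 9:00 to 18:00.",
    },
    "returns": {
        "keywords": {"return", "refund", "exchange"},
        "reply": "You can return any product within 14 days with a receipt.",
    },
}

FALLBACK = "Sorry, I did not understand. Could you rephrase the question?"


def answer(question: str) -> str:
    """Pick the intent whose keywords overlap most with the question."""
    words = set(question.lower().split())
    best_reply, best_score = FALLBACK, 0
    for intent in INTENTS.values():
        score = len(words & intent["keywords"])
        if score > best_score:
            best_reply, best_score = intent["reply"], score
    return best_reply


if __name__ == "__main__":
    print(answer("What time do you open on Saturday?"))
    print(answer("Can I get a refund for this jacket?"))
    print(answer("Tell me a joke."))  # outside the service area -> fallback
```

Outside its narrow service area, such a bot can only apologise, which is exactly why a chatbot feels clever as long as you stay within the topics it has been prepared for.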

Interest in AI solutions where humans and AI collaborate is increasing. Collaborative human effort can be used, for example, for collecting data or interpreting images in solutions where a large group of people and AI form a collectively functioning entity. This kind of collective intelligence has been used for purposes such as digitizing old texts. The human eye is unmatched at recognizing words, even when they are written in unfamiliar lettering. When AI carries out the easy text recognition tasks and leaves any unclear cases to people, the work advances quickly with such collective power.
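The division of labour described above, in which the machine handles the easy cases and people handle the unclear ones, can be sketched as a simple confidence threshold. The threshold value and the recognition results below are invented for illustration; a real system would use an actual OCR model.

```python
# A minimal sketch of routing text-recognition results: confident machine
# readings are accepted automatically, uncertain ones go to human volunteers.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed cut-off, tuned per project


@dataclass
class Recognition:
    text: str
    confidence: float  # 0.0 .. 1.0, as reported by the recognition model


def route(word_id: str, result: Recognition) -> str:
    """Accept confident machine readings, queue the rest for people."""
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return f"{word_id}: accepted '{result.text}' automatically"
    return f"{word_id}: sent to human volunteers for review"


if __name__ == "__main__":
    print(route("page12_word_031", Recognition("Helsinki", 0.98)))
    print(route("page12_word_032", Recognition("v?ll?ge", 0.41)))
```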

Fluent interaction requires learning and participation

Fluent interaction between humans and AI still requires a lot of development in many areas. In the future, we will see an increasing number of work teams consisting of humans and robots. A robot can assist humans in many kinds of maintenance and service tasks. Fluent interaction is based on AI, with whose help the robot interprets its environment and the humans in it. Recognizing one another's intentions plays a key role: a human must be able to anticipate the robot's actions, and in the same way the robot must be able to anticipate human actions. Dialogical interaction solutions are needed in this field as well.

Autonomous cars and other vehicles largely function on their own, but when they encounter a problematic situation they may well need human assistance. In such a situation it is good if the machine has kept the human up to date on what is going on, so that he or she can quickly resolve the problem. Indicating and recognizing intentions is also important from the bystander's point of view: when pedestrians encounter an autonomous car, how can they be sure that the car has seen them and will stop at a pedestrian crossing to give way to them? How do you establish eye contact with an autonomous car?

Various smart services at home and in offices strive to fulfil people's wishes and predict their desires. Often such services remain unnoticed, in which case it may remain unclear why the air conditioning is blowing at full blast or why the temperature does not rise. An easy interaction channel is needed so that people can find out why things are happening the way they are, and so that they can influence matters.

AI is not infallible: it can make mistakes and it may have faults. Once humans learn to understand the limitations of AI, and the way it draws conclusions and functions, interaction between them will become easier. When people understand the basics of how AI functions, they can put themselves on its level, in the same way that people naturally tune in to the level of the person they are talking with. It is important to develop AI solutions in such a way that the people who will work with AI are allowed to participate in the design of the solutions.

Read more: VTT and Smart City

Eija Kaasinen
Senior Scientist, VTT
@eijakaasinen 
eija.kaasinen(a)vtt.fi

 

 

How will we manage with artificial intelligence in the future?


What is machine learning? Why does artificial intelligence draw conclusions differently than humans do? How does artificial intelligence become superintelligence?

Early this year, I spent a night at a big hotel in Berlin. When I stepped into my room, it felt quite cool inside. There was a sticker by the door saying that the hotel had introduced a "smart climate control" system and that I could adjust the temperature to the desired level through my TV. I turned on the TV and, after a few turns, navigated to the climate control page. And there it was: the present temperature was 18 degrees, and the target temperature set by the previous guest was 25. I set the target temperature to 22 degrees and went out to have dinner. When I returned to my room, the temperature had climbed to 19 degrees, probably thanks to the PC I had left on in the room. It still felt quite cool, so I called the hotel reception for help. Help soon arrived: a janitor brought me an old-style fan heater. I could not keep the noisy fan on at night, so the temperature dropped back to around 18 degrees. In the morning, however, I woke up well rested after a good night's sleep; after all, you sleep better in a cool room.

This left me wondering whether the smart climate control was smart enough to understand better than I did what the ideal temperature for me was. I would still have appreciated some kind of explanation, because a "smart" system that does what it pleases without giving a human any say left me feeling powerless. The hotel staff had also clearly resigned themselves to the smart climate control and did not even try to fix the system in my room, resorting instead to the good old fan heater. If the system really was smart, should it not also keep people up to date on the decisions it has made and tell them what it is aiming at? And if it does not work or cannot fulfil a person's wishes, should it not also give a reason?

From artificial intelligence to superintelligence

Artificial intelligence (AI) has been studied for decades, but it is now experiencing a strong renaissance. Earlier attempts to bring all expert knowledge on a subject into a single machine collapsed under their own impossibility. Today, the prevailing trend is the development of AI based on machine learning, where the idea is that the machine learns little by little as it is taught, but also on its own. Machine learning is well suited to analysing large masses of data and to supporting people in data-based decision-making. In medicine, for example, AI makes it possible to examine different kinds of measurement data, and the machine can draw connections between them. AI can therefore be used, for instance, to forecast the development of a disease by comparing a patient's data to data on earlier patients. It is typical of machine learning that the result is not exact but a probability-based forecast. That is why a machine cannot give the kind of detailed explanations for its conclusions that a human expert can.
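The idea of a probability-based forecast obtained by comparing a new patient to earlier patients can be made concrete with a small sketch. The features, figures and the nearest-neighbour comparison below are simplified illustrations only, not a description of any real medical system.

```python
# A toy nearest-neighbour forecast: the estimate is the share of the most
# similar earlier patients whose disease progressed. All data are invented.
import math

# Hypothetical earlier patients: (age, blood pressure, lab value), progressed?
HISTORY = [
    ((62, 148, 7.1), True),
    ((55, 130, 5.9), False),
    ((70, 160, 8.0), True),
    ((48, 122, 5.2), False),
    ((66, 155, 7.6), True),
    ((51, 128, 5.5), False),
]


def forecast_progression(patient, k=3):
    """Return a probability estimate, not an exact answer."""
    distances = sorted(
        (math.dist(patient, features), progressed)
        for features, progressed in HISTORY
    )
    nearest = distances[:k]
    return sum(progressed for _, progressed in nearest) / k


if __name__ == "__main__":
    new_patient = (64, 150, 7.3)
    print(f"Estimated probability of progression: {forecast_progression(new_patient):.2f}")
```

The output is a share of similar past cases rather than a yes-or-no answer, which illustrates why the machine's conclusion is a probability and why it cannot justify itself the way a human expert can.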

A lot is expected of machine learning not only in medicine but also in companies' service businesses, where AI can be used to analyse machine data collected from the field and to forecast, for example, the occurrence of faults. In such applications, AI functions independently, analysing data and giving people suggestions about the next necessary maintenance measures, and even their suitable timing, taking financial factors into account.

In addition to these positive effects, futures researchers have also been painting some very gloomy scenarios about the “superintelligence” of the future that would be able to, for example, develop its own intelligence, draw its own conclusions and generate a will of its own, and could thus get out of the hands of both its designers and users.

What would a potential path from the present machine learning-based AI systems to such superintelligence look like? AI is being introduced not only in services accessible via the internet, but also in mobile machines, such as autonomous cars and robots. Would this be the right time to consider steering future development paths so that AI is certain to remain under human control?

A clever person solves the problems that a wise person knows how to avoid. This old wisdom should be applied to AI as well: if AI represents cleverness and humans represent wisdom, then humans must be guaranteed a role in which they can prevent the problems that AI might cause to itself or to people. There must be an easy connection between AI and humans, and humans must have the final decision-making power. This prevents AI from slipping out of human hands even as it learns new things.
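One way to read the requirement that humans keep the final decision-making power is as an approval gate between the AI and the action it proposes. The sketch below is only an illustration of that principle; the action names and the confirmation flow are invented.

```python
# A minimal human-in-the-loop gate: the AI proposes, a human approves.

def ai_propose_action(sensor_reading: float) -> str:
    """Stand-in for an AI component that suggests what to do next."""
    return "shut down pump 3" if sensor_reading > 80.0 else "continue normal operation"


def execute(action: str) -> None:
    print(f"Executing: {action}")


def run_with_human_in_control(sensor_reading: float) -> None:
    proposal = ai_propose_action(sensor_reading)
    print(f"AI proposes: {proposal}")
    decision = input("Approve this action? [y/N] ").strip().lower()
    if decision == "y":
        execute(proposal)
    else:
        print("Proposal rejected; nothing is done without human approval.")


if __name__ == "__main__":
    run_with_human_in_control(sensor_reading=85.2)
```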

In the next part of the blog series, I will focus more on the interaction between humans and AI.

Read more: VTT and Smart City

Eija Kaasinen
Senior Scientist, VTT
@eijakaasinen 
eija.kaasinen(a)vtt.fi

 



Will machines take our work? – Part 3: People as models for machines

How do artificial and human intelligence differ? Why does research of the subconscious matter when dividing work between robots and humans? Should VTT build autonomous super AI?

Rick Deckard, the main character in the film Blade Runner, kills replicants – machines that resemble humans – for a living. By the end of the film, however, Deckard, played by Harrison Ford, falls in love with a replicant. Deckard's ambivalence towards replicants reflects the current debate about artificial intelligence. Some predict huge changes and see super AI plotting to take control in a very human manner. Others are more sceptical: 'Replicants are like any other machine,' as Deckard himself said before he changed his mind. Machines modelled on people are thus a classic science fiction idea. I want to enter the debate by comparing human and artificial intelligence. I discuss the argument that it is difficult to copy human intelligence in machines because our intelligence cannot be separated from its environment. In addition, I claim that analysis of human activity would be useful when reshaping working life.

Human intelligence

Conscious language-based thinking (inner conversations) is just the tip of the human intelligence iceberg. When an expert is asked how they managed to solve a problem or conflicting situation, they often answer that they ‘just knew’. Human expertise is a combination of schooling and book learning, personal conscious thinking, and something based on learning by doing. A knack and feeling for something, an expert eye and ear, and vision are popular ways of describing the tacit knowledge and ‘feel’ we have for performing various tasks. A person works intuitively and adaptively, selectively using millions or perhaps billions of sensory cells, depending on the situation.

It could be argued that intelligence and knowledge are located in the connections between the brain and countless sensory cells and nerves, rather than simply in the brain. Experimental psychology has shown that, in many respects, human activity and decision-making are directly connected to the environment. Action does not therefore require conscious thinking. The idea that our thought processes are embedded in our experienced environment is logical, since cerebral intelligence is connected to the sensory cells, which are themselves in direct contact with the environment.

In addition to having bodily intelligence, people are able to interpret and learn meanings. We excel at this compared to other species because of the way our ancestors gathered food. Over short distances humans are often outpaced by their prey, but we can jog for extremely long distances, so game had to be followed over very long journeys. This was done by following tracks: we interpreted signs imprinted in the environment in order to survive, and reading is thus species-typical behaviour for us. People still read continuously; some read their mobile phones, while others read newspapers. Because our senses are relatively dull, we cannot tell poisonous plants and fungi from edible ones just by smell; instead, we learn to distinguish the good from the bad. That is also why adults are generally able to distinguish good from bad in a broader sense; in other words, we have a moral conscience. It is highly apt that Adam and Eve ate from the tree of knowledge of good and evil in the Book of Genesis. Conscious thinking was essential in the everyday lives of hunter-gatherers, and people paid a price for stupidity even then – our intelligence and consciousness are by-products of evolution.

Artificial intelligence

Artificial intelligence makes predictions on the basis of data and follows written instructions. Its predictive capabilities are based on neural networks in particular. Neural networks consist of mathematically interconnected nodes, i.e. neurons. Initially the connections are typically random, but they are then strengthened and weakened through trial and error. For example, thousands of images, categorised in a manner meaningful to people ('woman', 'man', 'cat', 'dog', and so on), can be fed into a neural network. By grouping certain features, the neural network guesses what each image contains. Right answers reinforce the pathways between network nodes that led to them, while pathways that lead to wrong answers are weakened. This 'guessing machine' becomes an effective predictor after hundreds of thousands of tries. In the end, prediction is so reliable that artificial intelligence can identify images or words from a soundtrack.
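The trial-and-error strengthening of connections can be illustrated with a very small network trained on a toy problem. The data, network size and learning rate below are invented for illustration; real image classifiers are vastly larger, but the principle of adjusting connection strengths based on right and wrong guesses is the same.

```python
# A tiny neural network learning a toy pattern by strengthening and
# weakening its connections (weights) through repeated guessing.
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs and labels (an XOR-like pattern standing in for "images").
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Initially random connections between the layers.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


learning_rate = 0.5
for step in range(10000):
    # Forward pass: the network makes its guesses.
    hidden = sigmoid(X @ W1 + b1)
    guess = sigmoid(hidden @ W2 + b2)

    # Backward pass: strengthen pathways that led to right answers,
    # weaken those that led to wrong ones (gradient descent).
    delta2 = (guess - y) * guess * (1 - guess)
    delta1 = (delta2 @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ delta2
    b2 -= learning_rate * delta2.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ delta1
    b1 -= learning_rate * delta1.sum(axis=0, keepdims=True)

# After training, the guesses should approach the labels [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```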

Many kinds of human activity can be performed through prediction and by following instructions. We may wonder, for example, whether artificial intelligence is capable of creative tasks. AI can be made to compose musical pieces: it can analyse the melodies of the most popular compositions and predict the catchiest ones on that basis. With very careful preparation, it could also stretch to writing some lyrics and instrumentation. But music is more than sounds: young people have a tendency to develop their own original vibes and grooves, which irritate their elders. AI cannot truly create new musical genres that reflect the times the listeners are living through, because music cannot be separated from dance styles and the issues we consider important. And which is the creative actor here – the artificial intelligence or the software developer?

Artificial general intelligence (AGI) refers to AI capable of design, adaptation, reasoning and linguistic communication in a manner similar to humans. Definitions and proposed criteria for AGI vary; a machine succeeding in making coffee in an unfamiliar flat could be one sign that we are in the presence of AGI. In the scientific community, there is no consensus on whether AGI is even possible in principle. Both Microsoft and Google are running research programmes aimed at AGI. This is understandable, because sceptics rarely find their way into top positions in American blue chip companies. These programmes will undoubtedly lead to commercially exploitable technologies, even if they do not achieve AGI itself.

Cognitive architecture and task analysis

In principle, to build a truly human-like robot we would need to analyse people themselves. Cognitive scientists use the concept of cognitive architecture to describe human thinking holistically, including emotions and other factors. Because people are so fiendishly complex, it seems to me that the study of general cognitive architecture is better suited to basic research on artificial intelligence than to developing AI in practice.

Technology companies can meet their needs through task analysis. When replacing an employee with robots, we need to analyse what kind of work the person is actually doing. This helps to ensure that the new robotised approach provides a result of at least the same quality and safety as the traditional way of operating. Since people will still be needed even under the new operating model, a division of labour between people and robots has to be designed. Task analysis may also indicate that there is no point in robotisation at all.

There are many forms of task analysis. Cognitive task analysis involves modelling an employee’s thinking. Dozens of tools are available for this. Task analysis can also analyse an employee’s movements or the features of a work organisation. In particular, VTT uses the core task analysis method developed by now-retired research professor Leena Norros. The idea is to contrast observed working practices and challenges with the general goals and critical phases of work. The personal characteristics of employees are secondary: as ‘core’ suggests, this concerns the analysis of a task’s core features. Bearing the main goals in mind enables us to view a task’s performance from a number of perspectives – the same general goal can be achieved by human action or a robot. This makes core task analysis ideal as a design aid: it guides but does not cramp the designer’s creativity. Depending on the research questions, core task analysis can flexibly include various features of cognitive task analysis, the micro-level analysis of work practices, or the modelling of the operating environment.

It is beneficial to blur the boundary between worker and researcher when mapping cognitive processes during task analysis. Workers are seldom aware of their own mental models while working, because skills are subconscious in nature, and the subconscious cannot be studied simply by transferring information from the employee to a researcher. The idea is that the researcher and the worker explore the issues hidden in the subconscious together, which means that the employee in a sense becomes both 'teacher' and 'pupil' at the same time. Workers love this kind of task analysis, because they find insights about their own work fascinating. A good practical technique is to watch a video of the worker in action together with him or her.

User-centred AI research

Human and artificial intelligence are different. Even an accurate study of people will not enable us to create actors that precisely resemble humans. The quantum computers of the future may be millions of times faster than today's computers, but this does not change the fact that artificial intelligence will still not be able to function adaptively in the real world as people do. In the absence of fundamentally new developments, we would still fail to achieve the seamless interplay between the complex sensory system and decision-making that humans have. As concepts, neural networks and artificial intelligence hint at the replication of human intelligence; in practice, however, it makes more sense to study the use of AI.

Without research, it is hard to figure out how different work tasks could benefit from data-based predictions. Through its object-recognition capabilities, AI will solve problems in a number of job tasks, because camera technology is already used for other purposes in many sectors. Surgeons, for example, often operate using a camera image; in the future, AI may help to identify cancer cells and nerve paths. It remains to be explored how user research focused on working life could provide practical tools for building neural networks. Interdisciplinary work is necessary.

Mikael Wahlström
Senior Scientist, PhD in Social Sciences (Social Psychology)
mikael.wahlstrom(a)vtt.fi

 

The author is exploring the division of labour between AI and people in a project on the future of seafaring. He has performed task analysis that blurs the boundaries between worker and researcher in an Academy of Finland project called WOBLE, which he led and which focused on robot-assisted surgery. The final report of the WOBLE project (in Finnish) can be found here.

Will machines take our work? – Part 2: Robot trucker at the mercy of people

 

Autonomous transport is on the way. Ships and cars are being fitted out to drive themselves. But is there a business in this, or will the hype fizzle out? Will people accept these machines?

Robots are always connected to people. Even a Mars rover’s tasks are planned each day by people. In this blog, I’m going to consider the relationship between people and autonomous vehicles on three interrelated levels: physical, commercial and social. Devices must work where intended, must be sellable, and must be acceptable to people.

When such devices are being used, the nature of the operating environment and the connection to people are as important as technical features.  For example, robot vehicles already perform commercial tasks well, but within the enclosed environments of mining areas. People only venture into the vicinity of these giant robot trucks if they are sitting in a truck cabin themselves.

Money and safety at sea

Safety-critical work is work in which human life would be endangered if something went wrong. Such work is generally governed by rules: areas such as seafaring and motoring have their own sets of rules. Accidents are avoided if all parties comply with the rules and nothing surprising happens. Artificial intelligence complies with the rules set for it, but it cannot adapt to unforeseen circumstances. In addition, a fault or accident may originate in the AI itself. That is why control of autonomous transport devices operating among humans should not be left to AI alone; human supervision is needed.

Labour is saved when one person can supervise several devices that are under the direct control of artificial intelligence. Employees no longer need to be at the mercy of field conditions; the work is done in the comfort and safety of a control centre. Such work does, however, involve new challenges. The scientific community has only recently begun to discuss the so-called transparency of artificial intelligence, i.e. how easy it is for users to monitor the operation and functionality of AI. The supervision of self-learning AI, which can modify its own instructions, is particularly challenging. At the same time, there is a need to monitor and understand the operating environment of AI-controlled devices and the operation of their sensor and communication technologies. Different kinds of sensors should be used to monitor activities in case some sensors fail or a signal is interrupted. For example, relying on a GPS signal alone is unwise, since an external actor can disrupt positioning by generating a signal stronger than the genuine satellite signal.
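The point about not relying on a single signal can be illustrated with a simple cross-check between two independent position sources. The source names, tolerance and coordinates below are invented for illustration; real systems fuse many sensors with far more sophisticated methods.

```python
# A minimal sketch of cross-checking a GPS position against an independent
# (e.g. inertial) estimate before trusting it.
from typing import Optional, Tuple

Position = Tuple[float, float]  # latitude, longitude in degrees
TOLERANCE_DEG = 0.01  # assumed maximum acceptable disagreement


def cross_check(gps: Optional[Position], inertial: Optional[Position]) -> str:
    if gps is None and inertial is None:
        return "No position available - alert the control room operator."
    if gps is None or inertial is None:
        return "Only one source available - continue with reduced confidence."
    drift = max(abs(gps[0] - inertial[0]), abs(gps[1] - inertial[1]))
    if drift > TOLERANCE_DEG:
        return "Sources disagree - possible GPS spoofing, alert the operator."
    return "Sources agree - position trusted."


if __name__ == "__main__":
    print(cross_check((60.17, 24.94), (60.17, 24.94)))  # normal case
    print(cross_check((61.30, 25.80), (60.17, 24.94)))  # suspicious jump
    print(cross_check(None, (60.17, 24.94)))            # GPS dropout
```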

The big challenge lies in the fact that operations must be economically viable compared to the traditional approach. For savings to be made, paying a control room team must be clearly cheaper than the wages of traditional field employees, since AI-controlled devices need new kinds of sensors and communication tools in order to function. More and more equipment is vulnerable to malfunctions and can no longer be serviced or repaired on site; instead, a technical expert must be sent into the field. Personnel costs currently account for around six percent of a ship's operating costs, but people also generate costs through the infrastructure they require: an autonomous vessel does not need a toilet or a kitchen.

Regardless of the challenges, both businesses and innovations will be created

There is still no certainty about which systems are the most cost-effective for controlling ships, whether they are based in remote centres or on board the ships themselves. Seafaring is a conservative sector, and attitudes to autonomous ships range from enthusiasm to scepticism. I believe that autonomous technologies will be useful. Even if commercially viable unmanned ocean-going vessels are still some way off, seafarers will soon benefit in various ways from sensor technologies and AI. Remote monitoring of ships is already happening.

For example, is it always necessary to maintain a 24-hour watch on the high seas? Fatigue, boredom and frustration all undermine safety. Perhaps it would be better if AI kept watch on the bridge at night, waking the crew member on watch only when necessary. In addition, new tools provide strong assistance in gaining situational awareness in challenging conditions.

Change is slow to arrive. Good task planning, in which workers must be involved, is needed.  They can provide information on challenges in the operating environment, which must be taken into account in the design of automated equipment.

Metro, automation and strong emotions: should we be afraid of fear itself?

My favourite transport system is the Metro in Paris. It connects people to every part of the metropolis within 45 minutes, and intuitive maps clearly show where you are going as you walk through the platform area. When I lived in Paris, Line 14 was the only unmanned metro line. Somewhat unexpectedly, stepping into the carriage made me feel anxious. The feeling faded quickly once the train lurched into motion: among so many passengers, even subconsciously I understood that there was nothing to fear.

Later, I used the theoretical framework I had studied in Paris to examine what local residents in Helsinki thought about a driverless metro. The theory states that people's shared understanding of the world develops as they discuss new phenomena, and that this discussion is shaped by the existing structures of power and meaning in society. French social psychological theory thus builds a bridge between society and human understanding. I found that people in Helsinki had negative attitudes towards the driverless metro, despite the media's positive discussion of the issue. On the other hand, this negativity was reduced by facts about the automated metro. The idea of an automated metro was associated with experiences of unreliable computers, unemployment and dystopian images from science fiction.

My study of automated metros provided an ideal basis for theoretical exploration but was of little practical relevance. Helsinki never got a driverless metro, and I now believe that preconceived ideas have only a limited influence on technology acceptance. People's opinions are ultimately formed through direct use of the tool in question. This is demonstrated by my own experiences of Line 14 of the Paris metro and by the statistics: user experiences can be highly positive, even in the face of prejudices against robot technology.

So if the devices themselves are good, we shouldn’t worry too much about people’s preconceived ideas. However, fear should be dispelled through communications. If fear of the unknown is combined with problems or accidents, disproportionate damage may be done to the reputation of technology.

Automation and those being automated

I also think that technology firms do not need to be too worried about their workers, who in principle are the ones threatened by automation. The Finnish Seafarers' Union is sceptical about autonomous ships in the same way that the metro drivers' trade union was about the driverless metro. Despite this, the drivers were very open-minded about metro automation, at least when talking to an external researcher; there was no sign of a 'rebellion'. On the other hand, the drivers had been promised that they would not lose their jobs, only that their duties would change. In addition, perhaps the older drivers saw retirement approaching, while the younger ones were fascinated by being part of a technological transition.

 

Mikael Wahlström
Senior Scientist, VTT
mikael.wahlstrom(a)vtt.fi

 

The writer studied the safety of autonomous ships as part of the AAWA project. A report on the safety analysis, completed together with Aalto University, is available here. The study on public opinion concerning Helsinki's automated metro can be found here.

The first part of this three-part blog series discussed health care.
The last part, which will be published in February, will consider people as models for machines.