How to code a robot to serve as a shopping centre guide

In the last week of September, we took VTT’s robot, Pepper, to Ideapark in Lempäälä where, together with French, Scottish and Swiss scientists, we fired up a Hacklab in the central plaza. We spent a week coding and testing the robot from dawn to dusk. Our goal was to have Pepper independently and reliably serve as a guide to the shopping centre, able to direct customers to various stores by giving them the route, pointing the way and moving in the right direction, while entertaining them with chat about topics such as films and music.
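
To make that goal concrete, here is a minimal sketch of what one guidance interaction can look like in code, assuming Pepper’s NAOqi Python SDK. The robot’s IP address, the route table and the pointing coordinates are made-up placeholders for illustration, not the project’s implementation.

```python
# Minimal guide-interaction sketch, assuming the NAOqi Python SDK.
# ROBOT_IP, ROUTES and all coordinates are illustrative placeholders.
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559  # placeholder address

# Hypothetical route table: destination -> (spoken route, pointing target
# [x, y, z] in metres in the robot's own frame).
ROUTES = {
    "toilet": ("The nearest toilet is fifty metres ahead, on your left.",
               [2.0, 0.5, 0.8]),
}

def guide(destination):
    tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
    tracker = ALProxy("ALTracker", ROBOT_IP, PORT)
    spoken_route, target = ROUTES[destination]
    tracker.pointAt("RArm", target, 2, 0.3)  # frame 2 = FRAME_ROBOT
    tts.say(spoken_route)

guide("toilet")
```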


At the same time, user tests were coded and performed as part of the MuMMER R&D and testing week.

First, the robot must learn to listen

There is still a range of challenges involved in the use of social robots for customer service in public spaces. The most critical of these is speech recognition. The shopping centre’s info staff say that customers most often enquire about the way to the nearest toilet. Even a basic question like this can be asked in ten different ways. Background noise makes speech recognition particularly difficult: people other than the users of the robot may be talking nearby, while music and ads blare in the background, shopping trolleys rattle and floor cleaners hum. A robot will find it difficult to distinguish the speaker’s words from the sea of voices.
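
As a toy illustration of the phrasing problem (not the project’s speech pipeline), the snippet below collapses several surface forms of the toilet question into a single intent with fuzzy string matching. A real system sits behind a speech recogniser and must tolerate far noisier transcriptions than these.

```python
# Toy intent matcher: many phrasings, one intent. Not the project's
# recogniser; the variants and threshold are illustrative assumptions.
import difflib

TOILET_VARIANTS = [
    "where is the nearest toilet",
    "which way to the toilets",
    "i need a bathroom",
    "can you point me to the wc",
]

def is_toilet_query(utterance, threshold=0.6):
    """Fuzzy-match a (possibly noisy) transcription against known phrasings."""
    best = max(difflib.SequenceMatcher(None, utterance.lower(), v).ratio()
               for v in TOILET_VARIANTS)
    return best >= threshold

print(is_toilet_query("uh where's the nearest toilet please"))  # expected: True
print(is_toilet_query("what time does the mall close"))         # expected: False
```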

Another challenge lies in getting the robot to understand its location and orientation in the space in question. Localisation and navigation technologies are themselves mature: once taught the map of a place, transport robots can move from A to B independently and without colliding. Pepper, however, was not designed for this. The shopping centre’s shiny floors, reflective metal surfaces and glass windows make it difficult for the robot to sense its environment continuously in the way that laser-equipped transport robots do. In addition, short-range sensors, of which Pepper has several, cannot measure the distances of dozens of metres found in large public spaces. The most reliable approach may be for the robot to identify its surroundings from the visual cues that fill shopping centres: logos, signs, bright colours and distinctive shapes. Pepper can use its cameras to recognise people and faces, but it has limited processing capacity and must be connected to an external computer to recognise its environment.
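
A minimal sketch of the visual-cue idea, assuming OpenCV and a hypothetical database of landmark photos (logos, signs) with known positions on the mall map; the actual MuMMER pipeline is more sophisticated than this.

```python
# Landmark-based localisation sketch using ORB feature matching (OpenCV).
# The image files and map coordinates are hypothetical placeholders.
import cv2

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Hypothetical landmark database: logo photo -> known (x, y) on the mall map.
LANDMARKS = {"logo_cafe.png": (120.0, 45.0), "logo_pharmacy.png": (80.0, 210.0)}

def locate(frame_gray, min_matches=25, max_distance=40):
    """Return the map position of the best-matching landmark, or None."""
    _, frame_des = orb.detectAndCompute(frame_gray, None)
    if frame_des is None:
        return None
    best_count, best_pos = 0, None
    for path, pos in LANDMARKS.items():
        logo = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, logo_des = orb.detectAndCompute(logo, None)
        matches = matcher.match(frame_des, logo_des)
        good = [m for m in matches if m.distance < max_distance]
        if len(good) >= min_matches and len(good) > best_count:
            best_count, best_pos = len(good), pos
    return best_pos
```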

The shopping centre adventure is part of the EU-funded MuMMER project (http://www.mummer-project.eu/), whose researchers spent the pilot week improving the robot’s functionality in a number of ways. The aim is to enable the robot to understand and respond sensibly to free-form Finnish using the conversational AI software Alana. The English-language Alana was originally designed for Amazon Echo; the Finnish version draws on ready-made questions and answers, Finnish news sites, other online sources and Google Translate. The robot has also been taught a 3D model of Ideapark, which enables it to move appropriately when giving directions and to take into account that customers cannot see around corners.
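
Schematically, the Finnish pipeline described above can be read as translate, chat, translate back. In the sketch below, translate and alana_reply are hypothetical stand-ins stubbed with canned strings so the example runs; the project’s actual interfaces are not shown here.

```python
# Schematic translate-chat-translate pipeline. Both functions are
# hypothetical stand-ins, stubbed so that the sketch runs end to end.

def translate(text, source, target):
    """Stand-in for a machine-translation call (e.g. Google Translate)."""
    canned = {("fi", "en"): "where is the nearest toilet",
              ("en", "fi"): "Lähin WC on viidenkymmenen metrin päässä."}
    return canned[(source, target)]

def alana_reply(english_utterance):
    """Stand-in for a query to the English-language Alana chatbot."""
    return "The nearest toilet is fifty metres ahead."

def respond_in_finnish(finnish_utterance):
    english_in = translate(finnish_utterance, source="fi", target="en")
    english_out = alana_reply(english_in)
    return translate(english_out, source="en", target="fi")

print(respond_in_finnish("Missä on lähin WC?"))
```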


Pepper being prepared for user tests.

When robot meets person – and receives feedback

During the pilot week, VTT studied human/robot interaction when the robot was giving directions. The participants were a group of test users who had to ask the robot for directions to a particular store and then genuinely search for the retailer based on its instructions. The test users’ interactions with the robot were recorded using several cameras, and the users were also interviewed. The information gathered during the study is being compared with information obtained in a similar setup where a human worker took the place of the robot. In this way, it can be determined whether the robot should act just like a human guide, or whether customers would rather it behaved differently. Among other issues, the researchers are interested in how a robot can deduce that a customer has not understood its directions and needs further guidance; a simple heuristic for this is sketched below. Customers were also asked to evaluate the robot’s personality and behaviour on the basis of various gestures.
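
As a toy example of the kind of signal involved (an assumption on my part, not the researchers’ method), a repeated question or a long silence after the directions could be treated as a sign of confusion and trigger an offer to rephrase:

```python
# Toy confusion heuristic: an assumption, not the researchers' method.
import difflib

def seems_confused(previous_utterance, new_utterance, silence_seconds):
    """Flag a near-repeat of the question or a long hesitation."""
    repeated = difflib.SequenceMatcher(
        None, previous_utterance.lower(), new_utterance.lower()).ratio() > 0.8
    hesitating = silence_seconds > 6.0  # arbitrary threshold
    return repeated or hesitating

if seems_confused("where is the toilet", "uh where is the toilet", 2.0):
    print("Let me explain the route another way.")
```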

During the final days of the pilot week, Thursday and Friday, customers were able to try out Pepper’s guidance in unstructured situations. We are not yet where we aim to be: the robot sometimes fails to recognise a name given by a customer, and a little goodwill may be needed to understand the answers generated by the robot’s artificial intelligence. However, the robot gives the correct directions and routes, and the process raises a smile among customers. The project still has a year to go: by then, Pepper should be able to direct customers independently, reliably and efficiently. Then the benefits of a social robot will begin to become a reality.


The Finnish directions given still leave room for interpretation.


Partners involved in the Multi-Modal Mall Entertainment Robot (MuMMER) project:

  • University of Glasgow, United Kingdom (coordinator)
  • Heriot-Watt University, United Kingdom
  • Idiap Research Institute, Switzerland
  • LAAS-CNRS, France
  • SoftBank Robotics Europe, France
  • VTT, Finland
  • Ideapark, Finland
