Communication has to be learnt
Preparing robots to absorb information with all senses / New Cluster of Excellence "Science of Intelligence" with HU and TU Berlin
Language, gestures, facial expressions and much more: this is how human-to-human communication works. How the complex human ability to communicate can be reproduced in artificial intelligence is the subject of research for two scientists in Adlershof.
There are many facets to intelligence. But what fundamental laws and principles underlie its different forms, be they artificial, individual or collective? A new Cluster of Excellence aims to shed light on this question. "Science of Intelligence" is a joint project of Technische Universität Berlin (TU) and Humboldt-Universität zu Berlin (HU) that launched in January of this year. It brings together scientists from a wide range of disciplines, from psychology to robotics, from computer science to philosophy and behavioural science.
The computer scientist Verena Hafner and the psychologist Rasha Abdel Rahman are part of the project. The two HU professors on the Adlershof campus are studying the role of multimodality in communication between humans and robots. Multimodality is the simultaneous use of different sensory channels to convey information. "Collaboration between humans and robots will become increasingly relevant in many everyday situations in the future. For it to work, there has to be an effective, task-related exchange of information between the two," says Abdel Rahman, describing the starting point. The aim of the joint project is to give a robot the ability to integrate information from different sensory modalities. Humans naturally combine information from spoken language with, for example, visual information from hand movements, facial expressions and gaze direction, as well as tactile information. "So that robots can do the same, we equip them with microphones, cameras and touch sensors, and implement learning strategies so that they learn from experience," explains Hafner, a specialist in adaptive systems.
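To make the idea of multimodal integration more concrete, here is a minimal Python sketch of one common approach, so-called late fusion, in which each sensory channel is summarised separately before the channels are combined with learned reliability weights. All function names, dimensions and weights below are illustrative assumptions; the article does not describe the project's actual implementation.

```python
# Minimal sketch of multimodal late fusion (hypothetical; not the
# project's actual architecture): each sensory channel -- audio from a
# microphone, vision from a camera, touch from tactile sensors -- is
# first summarised as a feature vector, and a weighting then combines
# the channels into a single fused representation.

import numpy as np

rng = np.random.default_rng(0)

def extract_features(signal: np.ndarray, dim: int = 4) -> np.ndarray:
    """Stand-in for a per-modality feature extractor (assumed)."""
    # A real system would use speech recognition, gesture tracking,
    # gaze estimation, etc.; here we simply average signal chunks.
    chunks = np.array_split(signal, dim)
    return np.array([chunk.mean() for chunk in chunks])

# Simulated raw readings from three sensor channels.
audio = rng.normal(size=160)   # e.g. a short microphone frame
vision = rng.normal(size=64)   # e.g. pooled camera features
touch = rng.normal(size=16)    # e.g. a tactile sensor array

features = {
    "audio": extract_features(audio),
    "vision": extract_features(vision),
    "touch": extract_features(touch),
}

# Reliability weights a robot could learn from experience, e.g. by
# tracking how well each channel predicted its partner's behaviour.
weights = {"audio": 0.5, "vision": 0.3, "touch": 0.2}

# Late fusion: weighted combination of the per-channel features.
fused = sum(weights[m] * f for m, f in features.items())
print("fused multimodal feature vector:", fused.round(3))
```

In this toy setup, the learning the article alludes to would consist of adjusting the weights (or replacing the weighted sum with a trained model) as the robot gathers experience with each channel's usefulness.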
Before that can happen, however, it is first necessary to describe and understand the core elements of human communication. To this end, Hafner, Abdel Rahman and their team, which also includes a neuroscientist from the Charité in Berlin, are using neurocognitive methods such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). And, of course, the working group also includes humanoid robots such as Pepper.
One current line of study, for example, concerns changes in perspective. Putting oneself in another's shoes, so as to support them in their tasks and anticipate their behaviour, involves factors such as joint attention and one's own expectations. "With the help of EEG and fMRI, we want to gain insights into how people interpret and process information from other humans and even from artificial agents," says Abdel Rahman. It also matters whether a robot is perceived as intelligent or as a social actor.
"Once we have understood the communicative behaviour of humans, we can adapt the communicative behaviour of robots accordingly, improving communication and cooperation between humans and robots," says Verena Hafner with conviction.
By Kathrin Reisinger and Sylvia Nitschke for Adlershof Journal