|
09:00-09:30 |
Welcome: Emre Ugur, University of Innsbruck, Austria |
|
Sensorimotor Learning-I |
09:30-10:05 |
Prof. Tamim Asfour, Karlsruhe Institute of Technology (KIT), Germany.
Affordance-based Grasping, Balancing and Walking
(Abstract)
Exploiting interaction with the environment is a promising and powerful way to enhance humanoid robots' capabilities and robustness while executing locomotion and manipulation tasks. In this talk, we first present an approach for autonomous interactive segmentation of unknown objects in a complex scene through physical interaction, together with strategies for reactive grasping using visual and haptic information. Following the idea of a duality between object grasping and humanoid balancing, we show how co-joint object-action representations, Object-Action Complexes, are used and extended to associate whole-body actions of a humanoid robot with affordances of objects and environmental elements in the scene. We show how affordance hypotheses are generated through visual exploration and verified using haptic feedback as well as reachability and stability measures of the robot. Results on grasping unknown objects as well as on the generation of whole-body actions for balancing and footstep planning will be discussed.
|
10:05-10:40 |
Luka Peternel, Jozef Stefan Institute, Slovenia.
Human-in-the-loop approach for teaching robots to dynamically interact with the environment and humans
(Abstract)
We present a novel human-in-the-loop approach for teaching robots how to perform dexterous tasks involving physical interaction with an unstructured and unpredictable environment. By including the human in the robot control loop, the human learns to perform a given task using the robot. The newly acquired human skill is then captured and added to the robot control system. This enables the robot to perform the task autonomously. To demonstrate the applicability of the approach, I will present several experiments, including reactive postural control of a humanoid robot, human-robot cooperation in dynamic manipulation tasks, and control of exoskeleton robots. Using our approach, we can collect a large amount of sensorimotor skills for specific robotic tasks. To achieve full robot autonomy, a cognitive-level algorithm is needed for the robot to decide which skill to use in a given situation. In addition, such an algorithm could enable the robot to merge different sensorimotor skills and potentially create new skills. We hope that the discussion and idea-sharing at this workshop will lead to possible solutions to this issue.
|
10:40-11:00 |
Coffee break
|
|
Sensorimotor Learning-II |
11:00-11:35 |
Prof. Justus Piater, University of Innsbruck, Austria.
Stacked learning of affordances
(Abstract)
General-purpose autonomous robots for deployment in unstructured domains such as service and household settings require a high level of understanding of their environment. For example, they need to understand how to handle objects, how to operate devices, the function of objects and their important parts, etc. How can such understanding be made available to robots? Hard-coding is not feasible, and conventional machine learning approaches will not work in such high-dimensional, continuous perception-action spaces with realistic amounts of training data. One way to get robots to learn higher-level concepts may be to focus on simple learning problems first, and then learn harder problems in ways that make use of the simpler problems already learned. For example, learning problems can be stacked by making the output of lower-level learners available as input to higher-level learning problems, effectively turning hard problems into easier ones by expressing them in terms of highly predictive attributes. This talk discusses how this can be done, including further boosting learning efficiency by active learning and by automatic, unsupervised structuring of sets of learning problems and their interconnections.
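A minimal sketch of the stacking idea described in this abstract, assuming synthetic data and two hypothetical concepts; the names, data, and scikit-learn usage below are illustrative, not from the talk:

    # Toy stacking of learning problems: a lower-level learner's output
    # becomes an input attribute for a higher-level learning problem.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))               # raw sensorimotor features (synthetic)

    # Lower-level concept (e.g. "graspable"), a simple function of the raw features.
    y_low = (X[:, 0] + X[:, 1] > 0).astype(int)
    # Higher-level concept (e.g. "stackable"), which depends on the lower-level
    # concept plus one additional feature.
    y_high = ((y_low == 1) & (X[:, 2] > 0)).astype(int)

    low = LogisticRegression().fit(X, y_low)     # learn the easy problem first
    low_out = low.predict_proba(X)[:, [1]]       # its output becomes a new attribute

    X_stacked = np.hstack([X, low_out])          # augment the raw features with it
    high = LogisticRegression().fit(X_stacked, y_high)
    print("high-level training accuracy:", high.score(X_stacked, y_high))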
|
11:35-12:10 |
Dr. Lorenzo Jamone, Instituto Superior Tecnico, Portugal.
Learning affordances for tool use and planning
(Abstract)
Inspired by the extraordinary ability of young infants to learn how to grasp and manipulate objects, many works in robotics have proposed developmental approaches to allow robots to learn the effects of their own motor actions on objects, i.e., the objects' affordances. While holding an object, infants also promote its contact with other objects, resulting in object-object interactions that may generate effects not possible otherwise. Depending on the characteristics of both the held object (intermediate) and the acted-upon object (primary), systematic outcomes may occur, leading to the emergence of a primitive concept of tool. This later enables more and more complex planning skills, eventually allowing for problem solving. I will discuss our attempts toward modeling this kind of knowledge acquisition and exploitation in the humanoid robot iCub. The robot first learns a probabilistic causal model of object affordances through interactive exploration of the environment, and then uses this model to make predictions, make decisions, and plan sequences of actions to achieve given goals.
The learned affordances are used to ground the planning rules, so as to adapt them to the actual motor and perceptual capabilities of the robot and to the properties of the surrounding objects; this is made possible by the use of probabilistic techniques both for modeling affordances and for computing the plans.
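A minimal sketch of the probabilistic affordance idea described above, assuming a toy count-based model rather than the authors' actual iCub implementation; the actions, object features, and effects below are hypothetical placeholders:

    # Toy affordance model: estimate P(effect | action, object) from interaction
    # counts, then pick the action most likely to produce a desired effect.
    from collections import Counter, defaultdict

    EFFECTS = ["moved", "rolled", "unchanged"]
    ACTIONS = ["push", "tap", "grasp"]
    counts = defaultdict(Counter)  # (action, object_shape) -> Counter of effects

    def record_interaction(action, object_shape, effect):
        """Update the model with one observed (action, object, effect) triple."""
        counts[(action, object_shape)][effect] += 1

    def effect_probability(action, object_shape, effect):
        """P(effect | action, object_shape) with add-one smoothing."""
        c = counts[(action, object_shape)]
        total = sum(c.values())
        return (c[effect] + 1) / (total + len(EFFECTS))

    # Simulated exploration data.
    for _ in range(50):
        record_interaction("push", "box", "moved")
        record_interaction("push", "ball", "rolled")
        record_interaction("tap", "box", "unchanged")

    # Planning step: choose the action most likely to make a ball roll.
    best = max(ACTIONS, key=lambda a: effect_probability(a, "ball", "rolled"))
    print(best, effect_probability(best, "ball", "rolled"))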
|
12:10-12:45 |
Prof. Sanem Sariel, Istanbul Technical University, Turkey.
Robots learn what they cannot afford in every context
(Abstract)
Several studies present different methods for robots to learn to complete tasks more efficiently, and task completion is always the highest priority in these studies. However, especially in unstructured environments, there are cases where task completion is not possible, or certain precautions should be taken to ensure safety during task execution. We study how learning helps a robot determine general or specific limitations on task execution beyond its capabilities, and gain experience on these cases to make safe decisions in future tasks. In this talk, I will present our experiential learning framework for robots to build experience online and transfer knowledge among appropriate contexts. We use Inductive Logic Programming (ILP) to formulate hypotheses represented in first-order logic that are useful for further reasoning and planning processes. We analyzed the performance of the learning method on our autonomous mobile robot and our robot arm, both of which build their experience from action executions at runtime.
|
12:45-14:30 |
Lunch
|
|
Cognitive Development |
14:30-15:05 |
Prof. Minoru Asada, Osaka University, Japan.
Social learning of early vocal development
(Abstract)
Vocal communication is a unique means of bilaterally exchanging messages in real time. The developmental origin of such communication lies in the vocal interactions between an infant and a caregiver, and one of the big mysteries is how the infant learns to vocalize the mother tongue of the caregiver. Many theories claim to explain an infant's capability to imitate a caregiver based on acoustic matching. However, the acoustic qualities of the infant and the caregiver are quite different and therefore cannot fully explain the imitation. Instead, the interaction itself may play an important role, but the mechanism is still unclear. In this talk, I review studies addressing this problem using constructive approaches based on cognitive developmental robotics.
|
15:05-15:40 |
Prof. Yukie Nagai, Osaka University, Japan.
Predictive Learning of Sensorimotor Information as a Key for Cognitive Development
(Abstract)
Human infants acquire various cognitive abilities such as self-other cognition, imitation, and cooperation in the first few years of life. Although developmental studies have revealed behavioral changes in infants, the underlying mechanisms of this development are not yet fully understood. We hypothesize that predictive learning of sensorimotor information plays a key role in infant development. Predictive learning is defined as a process of minimizing the prediction error between actual sensory feedback and predicted feedback. For example, minimizing the prediction error enables infants to discriminate the self from others, because the self's body is controllable and thus recognized as a perfectly predictable entity. Social behaviors such as imitation and cooperation also emerge through predictive learning. A failure in another's action, for example, induces a larger prediction error and thus triggers the execution of the infant's own action to reduce the error, which results in cooperative behavior. My talk will present our robotics studies investigating how predictive learning reproduces infant cognitive development. Furthermore, I will discuss the potential of our hypothesis to explain the mechanism of autism spectrum disorder (ASD). Our research supports a recent hypothesis that ASD is characterized by a difficulty in learning sensorimotor prediction rather than in social interaction.
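A minimal sketch of prediction-error minimization as described above, assuming a linear forward model on synthetic data; the dimensions, learning rule, and names below are illustrative, not from the talk:

    # Toy predictive learning: learn a forward model by reducing the error
    # between predicted and actual sensory feedback.
    import numpy as np

    rng = np.random.default_rng(0)
    state_dim, action_dim, sensor_dim = 4, 2, 3

    # Stand-in "body" dynamics the learner tries to predict (unknown to it).
    true_W = rng.normal(size=(sensor_dim, state_dim + action_dim))

    # Learned forward model: predicts sensory feedback from state and action.
    W_hat = np.zeros_like(true_W)
    lr = 0.05

    for step in range(2000):
        x = rng.normal(size=state_dim)       # current state (simulated)
        u = rng.normal(size=action_dim)      # motor command (simulated)
        z = np.concatenate([x, u])
        predicted = W_hat @ z                # predicted sensory feedback
        actual = true_W @ z                  # actual sensory feedback
        error = actual - predicted           # prediction error to be minimized
        W_hat += lr * np.outer(error, z)     # gradient step on the squared error

    print("final prediction error norm:", np.linalg.norm(error))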
|
15:40-16:15 |
Dr. Amit Kumar Pandey, Aldebaran Robotics, France.
Development of Socially Intelligent Robots and the Need for Learning: an industrial perspective and use cases
(Abstract)
With significant advancements in robotics, robots are now beginning to coexist and work with us, to assist and accompany us, and to interact, play, learn, and teach. The time has arrived when social robots are being deployed or made available for practical purposes in homes, stores, and public places. For example, the Pepper robot of the Aldebaran SoftBank group is planned for mass production and is already being used for social interaction at SoftBank stores in Japan, and the Romeo2 project focuses on the development and evaluation of a humanoid robot companion for everyday life. However diverse the application might be, the common requirement is that such robots, beyond the short-term engaging effect of novelty, should actually be able to establish long-term social relations with humans and with individuals, by behaving in socially expected and accepted manners. For this, Social Intelligence, as the underlying engine for reasoning, plays a crucial role. But the questions are how to embody such capabilities and how to close the loop of interaction and learning. These questions are also crucial from the commercial and industrial perspective of exploiting social robots. The talk will emphasize these aspects and highlight some of the R&D challenges and needs from an industrial perspective, along with some use cases.
|
16:15-16:30 |
Coffee break
|
|
Language Learning |
16:30-17:05 |
Prof. Angelo Cangelosi, Plymouth University, UK.
Developmental Robotics for Embodied Language Learning
(Abstract)
Growing theoretical and experimental research on action and language processing and on number learning and space representation clearly demonstrates the role of embodiment in cognition and language processing. In psychology and neuroscience, this evidence constitutes the basis of embodied cognition, also known as grounded cognition (Pezzulo et al. 2011). In robotics, these studies have important implications for the design of linguistic capabilities in cognitive agents and robots for human-robot communication, and have led to the new interdisciplinary approach of Developmental Robotics (Cangelosi & Schlesinger 2015). During the talk we will present examples of developmental robotics models and experimental results from iCub experiments on embodiment biases in early word acquisition, on word order cues for lexical development, and on number and space interaction effects. The presentation will also discuss the implications for the "symbol grounding problem" (Cangelosi, 2012) and how embodied robots can help address the issue of embodied cognition and the grounding of symbol manipulation on sensorimotor intelligence.
References:
Cangelosi A. (2012). Solutions and open challenges for the symbol grounding problem. International Journal of Signs and Semiotic Systems, 1(1), 49-54 (with commentaries)
Cangelosi A, Schlesinger M (2015). Developmental Robotics: From Babies to Robots. Cambridge, MA: MIT Press.
Pezzulo G., Barsalou L.W., Cangelosi A., Fischer M.H., McRae K., Spivey M.J. (2011). The mechanics of embodiment: a dialog on embodiment and computational modelling. Frontiers in Psychology, 2(5), 1-21.
|
17:05-17:40 |
Prof. Takayuki Nagai, The University of Electro-Communications, Japan.
Toward robots that learn concepts and words through experience
(Abstract)
To interact naturally with humans, robots need to understand human words and take actions based on the meaning behind those words. Moreover, it is desirable for robots to express their intentions through language in communication with humans. To achieve this, much work has been done on the symbol grounding problem in the field of intelligent robotics. In this talk, a statistical model of concepts and language, which makes it possible for robots to learn concepts, words, and grammar in a bottom-up manner, is introduced. The key ideas behind the framework are spatiotemporal segmentation and multimodal categorization.
|
17:40-18:10 |
Round-table discussion
|
|