Conference Programme

The presentation schedule is available in PDF HERE

Update: We now start at 10:00 on Sunday. The keynote by Monica Gonzales-Marquez has been moved to after the coffee break.


Guidelines for speakers

Regular presentations should be planned for 20 minutes, plus 10 minutes for discussion/questions.
Invited lectures should be planned for 45 minutes, plus 15 minutes for discussion/questions.

Invited talks

We are happy to announce the GESPIN 2017 invited speakers below, along with the abstracts of their keynote talks.

  • Sylvain Calinon (Idiap Research Institute, Lausanne)
    • Title: Learning and synthesis of gestures in robotics from human demonstration
    • Abstract: Varied applications in robotics require robots to acquire gestures from human demonstration, and to produce natural movements that can adapt to new situations. This challenge requires: 1) the development of intuitive active learning interfaces to acquire meaningful demonstrations; 2) the development of movement primitive models that can exploit the structure and geometry of the acquired data in an efficient way; 3) the development of control techniques that can exploit gesture variations and coordination patterns. The developed models need to serve several purposes (recognition, prediction, online synthesis) and be compatible with different learning strategies (imitation, emulation, exploration). They also need to take into account various additional modalities.
      I will present an approach combining model predictive control and statistical learning for the encoding and synthesis of gestures, which I will illustrate in several human-robot interaction applications, with robots either close to us (human-robot collaboration in the arts, a robot for dressing assistance), part of us (a prosthetic hand), or far from us (teleoperation of a bimanual robot in deep water).
  • Alan Cienki (Faculty of Humanities, Language Network Institute, Vrije Universiteit Amsterdam)
  • Kurt Feyaerts (KU Leuven)
    • Title: Body and speech in face-to-face interaction. A tour of some of the modalities involved in the process of interactive meaning making
    • Abstract: This paper takes a wider perspective than the interplay between gesture and speech alone as it intends to explore the ways in which different modalities may be involved in the cognitive processing – either by a speaker or hearer – of speech in face-to-face interaction. To achieve that goal, I will take a tour along the human body and report on recent and ongoing empirical studies in our research group, in which through both methodological specialization and interdisciplinary cooperation different aspects of multimodal meaning making are investigated. Accordingly, I will focus on the role played by human hands, head and torso as essential information channels for the expression of (stance taking) concepts like obviousness, doubt and (types of) humor. Next to these ‘obvious’ gesture dimensions, I will also highlight the potential of facial expression and even heart rate as relevant semiotic dimensions in the complex process of multimodal meaning making in face-to-face interactions. A major point of attention in this overview concerns the role of eye-gaze as an important factor in the realization of different interactive phenomena such as turn and hesitation management in different interactive settings. On the methodological level, finally, I will point to (i) the advantageous inclusion into the analysis of mobile eye tracking technology in order to be able to maintain an uninterrupted multifocal and multimodal data stream from within the interaction as well as (ii) the application of a cross-recurrence analysis onto the different sets of data in order to capture valuable temporal relations among the observed phenomena on different semiotic levels of expression.
  • Monica Gonzales-Marquez (RWTH Aachen University, Department of English, American & Romance Studies (IFAAR))
    • Title: A Brave New Science: Incorporating Open Science Practice into the emerging field of Interdisciplinary Gesture Research
    • Abstract: Science is in the midst of a paradigm shift. This isn’t necessarily news. Scientific progress is shaped by repeated paradigm shifts. Perhaps the most famous of these in the social sciences is the shift from the brain-as-computer paradigm to the current embodied model. What makes the current shift noteworthy is that instead of involving our understanding of a given natural phenomenon, it involves the practice of science itself.
      Researchers across many fields have long wrestled with practices that impede scientific progress. Here progress is defined as the accumulation of knowledge, based on replicable results, that is openly shared across a scientific community. Given its position as a cornerstone of science, issues involving replicability have become particularly salient as what is now known as the Replicability Crisis.
      The replicability crisis came to a head for the social sciences in 2015 with the publication of a massive study in the journal Science. The study, conducted by the “Open Science Collaboration”, found that only about a third of 100 prominent studies reproduced results akin to those in the original research. These results were taken as evidence of long suspected problems in the reporting of scientific results. The most broadly cited likely cause was a lack of transparency in methods and analysis, which manifested in various ways. These included a disconnect between the research conducted and the research reported (stemming from a series of questionable research practices), and work where key elements of the research process were withheld from the readership (denying access to original data sets or detailed methodologies). In short, all of these practices severely curtailed replicability, thus limiting the accumulation of reliable scientific knowledge.
      One response to the need for reform has been The Open Science movement. Open Science emerged from the belief that science, including its methods, processes and products, is a human endowment. And that as such, it should be accessible in its entirety to anyone wishing to examine any aspect of it, from the details of a given methodology, to the raw data produced by an experiment, to the finished report.
      In this talk I will discuss the endemic problems plaguing the progress of science in the context of solutions proposed by Open Science practice. I will then discuss how the emerging field of interdisciplinary gesture research can learn from the mistakes of its sister disciplines by incorporating, early on, practices that will allow it to circumvent many of the problems affecting science in general.
  • Hedda Lausberg (Deutsche Sporthochschule, Köln)
    • Title: On the neuropsychology of gesture (and speech) production and its implications for gesture research methodology
    • Abstract: NEUROGES, developed by Hedda Lausberg, is an objective and reliable system for the analysis of speech-accompanying hand movements and gestures. To date, it has been applied to the analysis of hand movements and gestures in more than 500 individuals from different cultures (Germans, U.S. Americans, francophone and anglophone Canadians, Swiss, Koreans, Kenyans, and Papua New Guineans), including healthy individuals as well as individuals with brain damage and with mental illness. A recent review of 18 empirical studies using NEUROGES in combination with ELAN demonstrates good reliability of the system (Lausberg and Sloetjes 2015).
  • Sotaro Kita (University of Warwick)
    • Title: Gesture, language and cognition
    • Abstract: This presentation concerns a theory of how speech-accompanying gesture (“co-speech gesture”) is generated, in coordination with speech, and how co-speech gestures facilitate the gesturer’s own speech production process. I will present evidence that co-speech gesture is generated from a general-purpose Action Generator (which also generates “practical” actions such as grasping a cup to drink). The Action Generator generates gestural representation in close coordination with the Message Generator in the speech production process, which generates conceptual representation for each utterance (Kita & Ozyurek, 2003). I will also present evidence that co-speech gestures facilitate speech production because they shape the ways we conceptualize our experiences, through four basic functions: gesture activates, manipulates, packages and explores spatio-motoric representations for the purposes of speaking (Kita, Chu, & Alibali, 2017).
  • Jordan Zlatev (Lund University)
    • Title: The emergence of gestures in ontogeny: a view from cognitive semiotics
    • Abstract: I argue that a comprehensive theory of gestures, and particularly of their emergence in ontogeny, requires the integration of biological, sensorimotor, social and semiotic factors. Such a unified perspective is offered by cognitive semiotics, the new transdisciplinary study of meaning (Zlatev 2015; Zlatev, Sonesson and Konderak 2016). To make this point, I will briefly review five thorny issues that have been the topic of explicit, or implicit, controversies in gesture studies, and show how the adopted perspective provides possible resolutions: (a) the definition and classification of gestures; (b) the emergence and universality of pointing; (c) the emergence of representational gestures; (d) the interaction between gesture and early language development; (e) accounting for speech-gesture alignment.



The Neuropsychological Gesture (NEUROGES) Analysis System – Behavioral and Automated Analysis in Research of Gesture and Speech Interaction (KINEMO)

A joint workshop on the NEUROGES coding system, developed by Prof. Hedda Lausberg, and the KINEMO software for automated annotation of hand movements, developed by Konrad Juszczyk and Kamil Ciecierski.


Teaching Tool Codified Gestures – Can More Pupils Learn More?

A workshop convened by Natasha Janzen Ulbricht (Freie Universität Berlin) on codified gestures, theater, and improving oral fluency in beginner foreign language classrooms.

  • More details can be found here (PDF).
  • The workshop is planned for 90 minutes and up to 25 participants.