Projects

  1. ACLEW: Analyzing Child Language Experiences Around the World (HJ-253479) – one of 14 winning projects in the call
    T-AP (Trans-Atlantic Platform for the Social Sciences and Humanities) Digging into Data Challenge, 4th round, jointly funded by Argentina (MINCyT), Canada (SSHRC, NSERC), Finland (AKA), France (ANR), the United Kingdom (ESRC/AHRC), and the United States (NEH)
    Runtime: 01.06.2017 – 31.05.2020
    Role: Principal Investigator, Proposal Co-Author
    Partners: Duke University, École Normale Supérieure, Aalto University, CONICET, Imperial College London, University of Manitoba, Carnegie Mellon University, University of Toronto
    An international collaboration among linguists and speech experts studying child language development across nations and cultures, with the aim of better understanding how an infant’s environment affects subsequent language ability.
  2. ARIA-VALUSPA: Artificial Retrieval of Information Assistants – Virtual Agents with Linguistic Understanding, Social skills, and Personalised Aspects (#645378)
    EU Horizon 2020 Research & Innovation Action (RIA) – 9.3% acceptance rate in the call
    Runtime: 01.01.2015 – 31.12.2017
    Role: Principal Investigator, Proposal Co-Author, Project Steering Board Member, Work Package Leader
    Partners: University of Nottingham, Imperial College London, CNRS, University of Augsburg, University of Twente, Cereproc Ltd, La Cantoche Production
    The ARIA-VALUSPA project will create a ground-breaking new framework for the easy creation of Artificial Retrieval of Information Assistants (ARIAs) capable of holding multi-modal social interactions in challenging and unexpected situations. By interacting with humans through virtual characters, the system can generate search queries and return the requested information. These virtual humans will be able to sustain an interaction with a user for an extended period, and react appropriately to the user’s verbal and non-verbal behaviour when presenting the requested information and refining search results. Using audio and video signals as input, both verbal and non-verbal components of human communication are captured. Together with a rich and realistic emotive personality model, a sophisticated dialogue management system decides how to respond to a user’s input, be it a spoken sentence, a head nod, or a smile. The ARIA uses dedicated speech synthesisers to create emotionally coloured speech and a fully expressive 3D face to render the chosen response. Back-channelling to indicate that the ARIA understood what the user meant, or returning a smile, are but two of the many ways in which it can employ emotionally coloured social signals to improve communication. As part of the project, the consortium will develop two implementations of ARIAs for different industrial applications. A ‘speaking book’ application will create an ARIA with a rich personality capturing the essence of a novel, whom users can ask novel-related questions. An ‘artificial travel agent’, a web-based ARIA, will be developed to help users find their perfect holiday – something that is difficult to do with existing web interfaces such as those created by booking.com or tripadvisor.