Wekinator

Wekinator is software for real-time, interactive machine learning. It facilitates the use of machine learning as a prototyping and design tool, enabling composers, musicians, game designers, and makers to create new gestural interactions or semantic analysis systems from data.
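
Wekinator communicates with other programs via Open Sound Control (OSC): external software streams input feature vectors to it and receives model outputs back the same way. The minimal sketch below streams a two-dimensional feature vector from Python using the python-osc package; port 6448 and the /wek/inputs address are Wekinator's documented defaults, so adjust them to match your own project settings.

    # Stream a 2-D feature vector to a running Wekinator instance over OSC.
    # Port 6448 and address /wek/inputs are Wekinator's default input
    # settings; change them if your project is configured differently.
    import math
    import time
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 6448)

    t = 0.0
    while t < 10.0:
        features = [math.sin(t), math.cos(t)]  # e.g. a sensor or cursor position
        client.send_message("/wek/inputs", features)
        time.sleep(0.01)
        t += 0.01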

Authored by Dr Rebecca Fiebrink, Wekinator has been downloaded over 3,000 times and has been used in dozens of computer music performances featuring new musical instruments built with machine learning.

Vconect: Video Communications For Networked Communities

Vconect developed novel video communication technologies for networked communities, combining intelligent multi-camera, multi-location video communication with synchronous video sharing.

  • Duration: 2011-2014
  • Total Project value: €5,500,000
  • Funding Source: European Union FP7 STREP project
  • Partners: BT, Alcatel-Lucent (Belgium), Portugal Telecom, CWI (Netherlands), Fraunhofer (Germany), Joanneum Research (Austria), Eurescom (Germany), University College Falmouth (UK)
  • Role: Co-investigator
  • Vconect website

Transforming Musicology

Funded under the AHRC Digital Transformations in the Arts and Humanities scheme, Transforming Musicology seeks to explore how emerging technologies for working with music as sound and score can transform musicology, both as an academic discipline and as a practice outside the university. The work is being carried out collaboratively by Goldsmiths College, Queen Mary College, Oxford University, the Oxford e-Research Centre, and Lancaster University, with an international partner at Utrecht University.

Telepresence Reinforcement-Learning Social Agent

European Commission FP7 TERESA project

The TERESA project aims to develop a telepresence robot of unprecedented social intelligence, thereby helping to pave the way for the deployment of robots in settings such as homes, schools, and hospitals that require substantial human interaction. In telepresence systems, a human controller remotely interacts with people by guiding a remotely located robot, allowing the controller to be more physically present than with standard teleconferencing. We are developing a new telepresence system that frees the controller from low-level decisions regarding navigation and body pose in social settings. Instead, TERESA will have the social intelligence to perform these functions automatically. In particular, TERESA will semi-autonomously navigate among groups, maintain face-to-face contact during conversations, and display appropriate body-pose behaviour.
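
None of TERESA's actual control code is shown here, but the face-to-face behaviour described above has a simple geometric core: given the robot's pose and a person's position, compute the yaw correction that keeps the robot facing its conversation partner. A minimal sketch, with hypothetical names:

    # Illustrative sketch (not TERESA's control stack): the signed yaw
    # correction, in radians, that turns a robot to face a person,
    # given 2-D positions and the robot's current heading.
    import math

    def facing_correction(robot_x, robot_y, robot_yaw, person_x, person_y):
        desired_yaw = math.atan2(person_y - robot_y, person_x - robot_x)
        error = desired_yaw - robot_yaw
        # Wrap to (-pi, pi] so the robot always takes the shorter turn.
        return math.atan2(math.sin(error), math.cos(error))

    # Example: robot at the origin facing +x, person at (1, 1) -> turn ~45 degrees.
    print(math.degrees(facing_correction(0.0, 0.0, 0.0, 1.0, 1.0)))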

Achieving these goals requires advancing the state of the art in cognitive robotic systems. The project will not only generate new insights into socially normative robot behaviour, but also produce new algorithms for interpreting social behaviour, navigating in human-inhabited environments, and controlling body poses in a socially intelligent way. The project culminates in the deployment of TERESA in an elderly day centre. Because such day centres are a primary social outlet, many people become isolated when they cannot travel to them, e.g., due to illness. TERESA will provide a socially intelligent telepresence system that enables them to continue social participation.

Teclo Networks AG

Teclo applies its knowledge and experience in TCP/IP data acceleration to help mobile operators and businesses significantly improve the speed, stability, and efficiency of data delivery across networks around the world. The performance improvement can be significant, even where the latest generation of high-speed technology is in use, and the benefits are immediate.

RAPID-MIX

Realtime Adaptive Prototyping for Industrial Design of Multimodal Interactive Expressive Technology (RAPID-MIX) is a Horizon 2020-funded project. The RAPID-MIX consortium has devoted years of research to the design and evaluation of embodied, implicit, and wearable human-computer interfaces. These interfaces, developed and applied in creative fields such as music and video games, provide natural and intuitive pathways between expressivity and technology.

RAPID-MIX will bring these innovations out of the lab and into the wild, directly to users, where they will have true impact. It will bring cutting-edge knowledge from three leading European research labs specialising in embodied interaction to a consortium of five creative companies.

PRAISE: Performance and Practice Agents Inspiring Social Education

PRAISE is a system that enables online communities to practise and perform together, using the state of the art in music analysis, gesture analysis, natural language processing, and community management. It is a social network for music education with tools for giving and receiving feedback, aiming to widen access to music education and make learning music more social.

At its heart, PRAISE will provide a supportive, social environment using the latest techniques in social networking, online community building, intelligent personal agents, and audio and gesture analysis.

Members can post audio to any community they belong to and ask for specific kinds of feedback on particular regions of that audio. Other community members can respond with text or with audio of their own, for example to emphasise a point about style or performance.
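
The workflow just described amounts to a simple data model: audio posts belong to communities, feedback requests attach to time regions, and responses carry text or audio. The sketch below is purely hypothetical; none of these names come from PRAISE itself.

    # Hypothetical sketch of the region-based feedback model described
    # above; all names are illustrative, not PRAISE's actual schema.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Region:
        start_s: float      # region start within the audio, in seconds
        end_s: float        # region end, in seconds
        request: str        # the kind of feedback being asked for

    @dataclass
    class Response:
        author: str
        text: str = ""                    # textual feedback, if any
        audio_path: Optional[str] = None  # optional audio reply

    @dataclass
    class AudioPost:
        author: str
        community: str
        audio_path: str
        regions: List[Region] = field(default_factory=list)
        responses: List[Response] = field(default_factory=list)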

  • Duration: 2013-2016
  • Funding Source: European Union FP7 STREP project
  • Total Project value: €3,500,000
  • Partners: Artificial Intelligence Research Institute, Spanish Research Council (Spain); Sony Computer Science Laboratory, Paris; Vrije Universiteit Brussel (VUB)
  • Role: Principal Investigator

OMRAS2: A Distributed Research Environment for Music Informatics and Computational Musicology

Online Music Recognition And Searching (or Ontology-driven Music Retrieval & Annotation Sharing service) is a framework for annotating and searching collections of both recorded music and digital score representations such as MIDI.
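
As a purely illustrative sketch of content-based search over recordings (not OMRAS2's actual pipeline), one can compare time-averaged chroma features, which capture pitch-class content and so could in principle also be derived from score representations such as MIDI. The example below assumes the librosa package; the function names are hypothetical.

    # Illustrative only: rank a collection of recordings against a query
    # by cosine similarity of time-averaged 12-D chroma (pitch-class) profiles.
    import numpy as np
    import librosa

    def chroma_fingerprint(path):
        # Time-averaged pitch-class profile of one recording.
        y, sr = librosa.load(path, mono=True)
        chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
        return chroma.mean(axis=1)

    def rank_collection(query_path, collection_paths):
        # Higher cosine similarity = closer harmonic/melodic content.
        q = chroma_fingerprint(query_path)
        scores = []
        for p in collection_paths:
            c = chroma_fingerprint(p)
            sim = float(np.dot(q, c) / (np.linalg.norm(q) * np.linalg.norm(c)))
            scores.append((p, sim))
        return sorted(scores, key=lambda s: s[1], reverse=True)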

  • Duration: 2007 – 2011
  • Funding Source: Engineering and Physical Sciences Research Council (EPSRC)
  • Project Value: £2,183,000
  • Goldsmiths Value: £735,000
  • Web (project): http://www.omras2.com
  • Partners: Queen Mary, King's College, Royal Holloway, Lancaster University, University of Surrey
  • Role: Principal Investigator
  • OMRAS2 website

Multimodal Analysis of Human Nonverbal Behaviour in Real-World Settings

European Research Council Starting Grant (FP7) MAHNOB
Project lifespan: 2008 – 2013

Existing tools for analysing human interactive behaviour typically handle only deliberately displayed, exaggerated expressions. Because they are usually trained only on recordings of such exaggerated expressions, they lack models of the expressive behaviour found in real-world settings and cannot handle the subtle changes in audiovisual expression typical of spontaneous behaviour.

The main aim of the MAHNOB project is to address this problem by building automated tools for machine understanding of human interactive behaviour in naturalistic contexts. MAHNOB technology will comprise a set of audiovisual spatiotemporal methods for automatic analysis of human spontaneous (as opposed to posed and exaggerated) patterns of behavioural cues, including head pose, facial expression, visual focus of attention, hand and body movements, and vocal outbursts such as laughter and yawns.
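
The paragraph above spans a whole analysis pipeline; as a purely illustrative first stage (not MAHNOB's actual methods), the sketch below detects a face in each video frame using OpenCV's bundled Haar cascade, producing the per-frame detections that later pose- and expression-analysis stages would consume. The input filename is hypothetical.

    # Illustrative first stage of a video behaviour-analysis pipeline:
    # per-frame face detection with OpenCV's bundled Haar cascade.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture("interview.avi")  # hypothetical input recording
    frame_idx, detections = 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # Record (frame, bounding box) pairs for downstream temporal analysis.
        for (x, y, w, h) in faces:
            detections.append((frame_idx, (x, y, w, h)))
        frame_idx += 1
    cap.release()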

As a proof of concept, MAHNOB technology will be developed for two specific application areas: automatic analysis of mental states such as fatigue and confusion in human-computer interaction contexts, and non-obtrusive deception detection in standard interview settings.

A team of five Research Assistants (RAs), led by the PI and with backgrounds in signal processing and machine learning, will develop MAHNOB technology. The expected result after five years is MAHNOB technology with the following capabilities:

  • analysis of human behaviour from facial expressions, hand and body movements, gaze, and non-linguistic vocalizations like speech rate and laughter
  • interpretation of user behaviour with respect to mental states, social signals, dialogue dynamics, and deceit/veracity
  • near real-time, robust, and adaptive processing by means of incremental processing, robust observation models, and learning person-specific behavioural patterns
  • provision of a large, annotated, online dataset of audiovisual recordings providing a basis for benchmarks for efforts in machine analysis of human behaviour.