OMRAS2: A Distributed Research Environment for Music Informatics and Computational Musicology

OMRAS2 (Online Music Recognition And Searching, also expanded as Ontology-driven Music Retrieval & Annotation Sharing service) is a framework for annotating and searching collections of both recorded music and digital score representations such as MIDI.

  • Duration: 2007 – 2011
  • Funding Source: Engineering and Physical Sciences Research Council (EPSRC)
  • Project Value: £2,183,000
  • Goldsmiths Value: £735,000
  • Web (project): http://www.omras2.com
  • Partners: Queen Mary, King’s College, Royal Holloway, Lancaster University, University of Surrey
    Principal Investigator
  • OMRAS2 website

Multimodal Analysis of Human Nonverbal Behaviour in Real-World Settings

European Research Council Starting Grant (FP7) MAHNOB
Project lifespan: 2008 – 2013

Existing tools for human interactive behaviour analysis typically handle only deliberately displayed, exaggerated expressions. Because they are usually trained only on such exaggerated expressions, they lack models of the human expressive behaviour found in real-world settings and cannot handle the subtle changes in audiovisual expression typical of such spontaneous behaviour.

The main aim of the MAHNOB project is to address this problem and to build automated tools for machine understanding of human interactive behaviour in naturalistic contexts. MAHNOB technology will represent a set of audiovisual spatiotemporal methods for automatic analysis of human spontaneous (as opposed to posed and exaggerated) patterns of behavioural cues, including head pose, facial expression, visual focus of attention, hand and body movements, and vocal outbursts such as laughter and yawns.

As a proof of concept, MAHNOB technology will be developed for two specific application areas: automatic analysis of mental states like fatigue and confusion in Human-Computer Interaction contexts and non-obtrusive deception detection in standard interview settings.

A team of 5 Research Assistants (RAs), led by the PI and with backgrounds in signal processing and machine learning, will develop MAHNOB technology. The expected result after 5 years is MAHNOB technology with the following capabilities:

  • analysis of human behaviour from facial expressions, hand and body movements, gaze, and non-linguistic vocalizations like speech rate and laughter
  • interpretation of user behaviour with respect to mental states, social signals, dialogue dynamics, and deceit/veracity
  • near real-time, robust, and adaptive processing by means of incremental processing, robust observation models, and learning person-specific behavioural patterns (see the sketch after this list)
  • provision of a large, annotated, online dataset of audiovisual recordings providing a basis for benchmarks for efforts in machine analysis of human behaviour.
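
To make the incremental, person-specific adaptation goal above concrete, here is a minimal sketch of online learning using scikit-learn's SGDClassifier and partial_fit. It is an illustration only: the feature vectors, labels and model choice are invented placeholders, not MAHNOB's actual methods or data.

```python
# Illustration only: incremental (online) adaptation of a behaviour classifier
# to person-specific data, in the spirit of the "incremental processing" and
# person-specific learning goals listed above. Features, labels and the model
# are invented placeholders, not MAHNOB's actual methods or data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
n_features = 16                     # e.g. head-pose angles, facial action intensities
classes = np.array([0, 1])          # e.g. "alert" vs "fatigued"

model = SGDClassifier(random_state=0)

# Generic pre-training on pooled data from many people.
X_generic = rng.normal(size=(500, n_features))
y_generic = (X_generic[:, 0] + 0.5 * X_generic[:, 1] > 0).astype(int)
model.partial_fit(X_generic, y_generic, classes=classes)

# Person-specific adaptation: keep updating as new frames from one user arrive.
for _ in range(50):                 # 50 incoming mini-batches of frames
    X_batch = rng.normal(loc=0.3, size=(10, n_features))
    y_batch = (X_batch[:, 0] + 0.5 * X_batch[:, 1] > 0.3).astype(int)
    model.partial_fit(X_batch, y_batch)

print("accuracy on the latest person-specific batch:", model.score(X_batch, y_batch))
```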

Creativeworks – AHRC Digital Economy Hub

Aim: build new partnerships and commercial opportunities between academia and the Creative Economy

  • Duration: 2012 – 2016
  • Total Project value: £4,000,000
  • Funding Source: Arts and Humanities Research Council
  • Partners: Queen Mary, University of London (lead institution); Birkbeck College; Central School of Speech and Drama; City University; the Courtauld Institute; Kingston University; Guildhall School of Music and Drama; King’s College London; Royal Holloway; School of Oriental and African Studies; Roehampton University; Trinity Laban Conservatoire of Music and Dance; University of the Arts
    Co-investigator and Goldsmiths lead
  • Creativeworks website

BioBlox

BioBlox is a multimedia interactive platform combining 3D and 2D visualisations, interactions via haptics or gestures, and sonification. It is also a puzzle game in which each team member controls one molecule.

We address the docking of molecules onto a given target molecule (typically a protein). Such a molecular target presents a complex 3D form with many pockets, holes and textured regions on its bounding surface (imagine a moon-like surface wrapped around a folded 3D ribbon) where other, usually much smaller, molecules can bind. The process is dynamic in that each such molecule has a bounding form which deforms slightly upon interaction and under the constant bombardment of tiny water molecules (as in in vivo simulation). This problem is key to the understanding of all cellular processes and, in particular, to the practical application of drug design.
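
As a rough illustration of what a docking pose optimises, the toy score below rewards close contacts between ligand and protein atoms and heavily penalises steric clashes. The coordinates, distance thresholds and weights are invented for illustration; BioBlox's actual scoring and physics (including deformation and solvent effects) are more sophisticated.

```python
# Toy illustration only: a minimal contact-based docking score of the kind a
# puzzle player implicitly optimises when fitting one molecule onto another.
# All coordinates and thresholds are invented; not BioBlox's actual scoring.
import numpy as np

def contact_score(protein_atoms, ligand_atoms, contact_dist=4.0, clash_dist=2.0):
    """Count favourable contacts and heavily penalise steric clashes."""
    # Pairwise distances between every ligand atom and every protein atom.
    diffs = ligand_atoms[:, None, :] - protein_atoms[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    contacts = np.sum((dists > clash_dist) & (dists < contact_dist))
    clashes = np.sum(dists <= clash_dist)
    return contacts - 10 * clashes   # clashes dominate, as in real docking

rng = np.random.default_rng(1)
protein = rng.uniform(0, 20, size=(200, 3))   # fake protein surface atoms
ligand = rng.uniform(8, 12, size=(15, 3))     # fake small-molecule atoms

print("score for this pose:", contact_score(protein, ligand))
```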

Crowdsourcing of complex scientific problems is still in its infancy, but it is gaining ground and approval within the scientific community. Early examples include SETI@home (the search for extraterrestrial intelligence) and Galaxy Zoo (the classification of galaxies). More recent and directly relevant projects are Foldit and EteRNA, online scientific puzzle-solving games about folding proteins and RNA.

Automatic Sentiment Analysis in the Wild

European Commission Horizon 2020 Programme SEWA

Automatic Sentiment Analysis in the Wild (SEWA) is an EC H2020-funded project. The main aim of SEWA is to deploy and capitalise on existing state-of-the-art methodologies, models and algorithms for machine analysis of facial, vocal and verbal behaviour, and then to adjust and combine them to realise naturalistic human-centric human-computer interaction (HCI) and computer-mediated face-to-face interaction (FF-HCI).

This will involve the development of computer vision, speech processing and machine learning tools for automated understanding of human interactive behaviour in naturalistic contexts. The envisioned technology will be based on findings in the cognitive sciences and will represent a set of audio and visual spatiotemporal methods for automatic analysis of human spontaneous (as opposed to posed and exaggerated) patterns of behavioural cues, including continuous and discrete analysis of sentiment, liking and empathy.

AstroGrid

AstroGrid was the UK’s Virtual Observatory development project from 2001 to 2010. It began as part of the UK government’s e-Science initiative and proceeded in three phases. Following a short exploratory phase, the original AstroGrid project centred on research and prototyping; the follow-on project was the engineering and construction phase; the third phase was an operations project.

We launched working services and user software in 2008. AstroGrid software and services are still used by astronomers all over the world on a daily basis. Fionn Murtagh was a Lead Investigator and a founder member.

ACE: Autonomic Agents For Online Cultural Experiences

ACE enables users to synchronously share their online experiences, including synchronous social browsing and annotation.

  • Duration: 2011-2013
  • Goldsmiths Value: €325,000
  • 1st call of the ERA-NET CHIST-ERA (European Coordinated Research on Long-term Challenges in Information and Communication Sciences and Technologies)
    Partners: Artificial Intelligence Research Institute, Spanish Research Council, Spain; Toulouse Institute of Computer Science Research, France
    Principal Investigator
  • ACE project website

Andy Freeman

Andy Freeman works with communities and artists to create digital and citizen-science interventions that build awareness of digital, social and environmental issues. His practice has ranged from internet audio performance tools (earshot, 1998–2002) to citizen-science attempts to trace the movement of pollutants around the Thames estuary using sensor networks and digital mapping (Talking Dirty, 2015).

Andy has a wide experience of internet development for the media sector and currently teaches interdisciplinary courses in digital methods and visualisation for Goldsmiths’ Department of Computing. Current research interests include citizen science, digital mapping and tools for hyperlocal journalism and activism.

Rebecca Fiebrink

Dr Rebecca Fiebrink is a Lecturer in Computing at Goldsmiths. Her work lies at the intersection of human-computer interaction and machine learning; many of her projects focus on making machine learning and data mining techniques more usable by—and useful to—domain experts.

She is the author of the Wekinator software for real-time interactive machine learning, which enables musicians, composers, and interaction designers to apply machine learning to create new systems for real-time analysis and control. She has also published in the domain of music information retrieval, on the topic of automated musical audio analysis.

Fiebrink is a Co-I on the Horizon 2020-funded RAPID-MIX project on real-time adaptive prototyping for industrial design of multimodal expressive technology, which aims to produce improved machine learning and signal analysis tools for software developers creating music, games, and health applications.

Publications

Katan, S., M. Grierson, and R. Fiebrink. Using interactive machine learning to support interface development through workshops with disabled people. Proceedings of ACM CHI, April 18–23 2015.

Wolf, K. E., G. Gliner, and R. Fiebrink. A model for data-driven sonification using soundscapes. Proceedings of ACM Conference on Intelligent User Interfaces (IUI), March 29–April 1, 2015.

Hipke, K., M. Toomim, R. Fiebrink, and J. Fogarty. BeatBox: End-user interactive definition and training of recognizers for percussive vocalizations. Proceedings of AVI 2014 International Working Conference on Advanced Visual Interfaces, Como, Italy, May 27–30.

Laguna, C., and R. Fiebrink. Improving data-driven design and exploration of digital musical instruments. CHI’14 Extended Abstracts, April 26–May 1, 2014.

Fried, O., and R. Fiebrink. Cross-modal sound mapping using deep learning. Proceedings of New Interfaces for Musical Expression (NIME), Daejeon, South Korea, May 27–30, 2013.

Fiebrink, R., and D. Trueman. End-user machine learning in music composition and performance. Presented at the CHI 2012 Workshop on End-User Interactions with Intelligent and Autonomous Systems. Austin, Texas, May 6, 2012.

Morris, D., and R. Fiebrink. Using machine learning to support pedagogy in the arts. Personal and Ubiquitous Computing, April 2012.

Fiebrink, R., P. R. Cook, and D. Trueman. Human model evaluation in interactive supervised learning. Proceedings of ACM CHI, Vancouver, May 7–12, 2011.

Projects

RAPID-MIX

I am a Co-I on the Horizon 2020-funded project Realtime Adaptive Prototyping for Industrial Design of Multimodal Interactive Expressive Technology (RAPID-MIX).

The RAPID–MIX consortium has devoted years of research to the design and evaluation of embodied, implicit and wearable human-computer interfaces. These interfaces, developed and applied through creative fields such as music and video games, provide natural and intuitive pathways between expressivity and technology.

RAPID–MIX will bring these innovations out of the lab and into the wild, directly to users, where they will have true impact. RAPID–MIX will bring cutting edge knowledge from three leading European research labs specialising in embodied interaction, to a consortium of five creative companies.

Wekinator

I am the author of the Wekinator software for real-time, interactive machine learning. Wekinator facilitates the use of machine learning as a prototyping and design tool, enabling composers, musicians, game designers, and makers to create new gestural interactions or semantic analysis systems from data.
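
The general workflow Wekinator supports can be sketched as: record a few example input/output pairs, train a supervised model, then map live inputs to control parameters. The code below illustrates that loop using scikit-learn as a stand-in; it is not Wekinator's API (Wekinator is a standalone application that communicates with other software over OSC), and the gesture features and parameter names are invented for illustration.

```python
# A minimal sketch of the interactive machine learning workflow that tools
# like Wekinator support: record a handful of demonstrations, train a model,
# then map live inputs to control parameters. Not Wekinator's actual API.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Example: 2-D gesture features (e.g. hand x/y) -> 3 synthesis parameters.
X_demo = np.array([[0.1, 0.2],
                   [0.8, 0.3],
                   [0.5, 0.9],
                   [0.2, 0.7]])
y_demo = np.array([[0.0, 0.2, 1.0],
                   [1.0, 0.5, 0.1],
                   [0.3, 1.0, 0.6],
                   [0.2, 0.8, 0.9]])

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X_demo, y_demo)            # train on the recorded demonstrations

# At performance time, each new gesture frame is mapped to parameters.
live_gesture = np.array([[0.6, 0.4]])
params = model.predict(live_gesture)[0]
print("synthesis parameters:", np.clip(params, 0.0, 1.0))
```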

The Wekinator has been downloaded over 3000 times and used in dozens of computer music performances utilising new musical instruments built with machine learning.