Date: October 30th, 2012
Category: Uncategorized

Neurosky RSVP Game

The Sound, Image and Brain project is looking for Neurosky users to play an online game and help us gather some data for event-related potential analysis. More details here: http://www.doc.gold.ac.uk/eavi/EAVI/?page_id=171

Date: January 2nd, 2012
Category: Uncategorized

Mogees: transforming any surface into a musical instrument with sound

I just published a short video showing how a cheap contact microphone can be used to recognize different types of finger touch and turn any surface into an interactive board. Users can train the software with their own ‘gestures’, using both bare hands and objects. In the video demo we put the microphone on different surfaces such as kitchen tables and balloons. The sound synthesis is based on two different techniques (small sketches of both follow the list):

1 – physical modeling, which generates sound by simulating physical laws. Different materials can be simulated, such as membranes, strings, tubes and plates

2 – mosaicing, which works as follows: first, users load a folder of sounds; then the noise coming from the microphone is analysed and the software continuously finds and plays the closest-matching segment within the folder.
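To make the two techniques concrete, here is a minimal Python sketch of each, under assumed parameters (sample rate, frame size, feature choice); these are textbook illustrations, not Mogees' actual implementation. For physical modeling, the classic Karplus-Strong algorithm simulates a plucked string with a delay line and a simple damping filter:

```python
# Karplus-Strong plucked string: one textbook instance of the family of
# physical models (strings, membranes, tubes, plates) mentioned above.
import numpy as np

def karplus_strong(freq, duration, sr=44100, damping=0.996):
    """Synthesize a plucked-string tone at `freq` Hz for `duration` seconds."""
    period = int(sr / freq)                   # delay-line length = one period
    buf = np.random.uniform(-1, 1, period)    # initial noise burst = the "pluck"
    out = np.empty(int(sr * duration))
    for i in range(len(out)):
        out[i] = buf[i % period]
        # average adjacent delay-line samples: a crude low-pass filter
        # that progressively damps the vibrating "string"
        buf[i % period] = damping * 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out

tone = karplus_strong(220.0, 1.0)             # one second of a plucked A3
```

For mosaicing, the sketch slices a corpus signal into short frames, describes each frame with a coarse spectral feature, and answers every incoming microphone frame with the nearest corpus frame. The feature (log band energies) and the Euclidean distance are illustrative assumptions:

```python
# Mosaicing sketch: nearest-neighbour playback of corpus frames. The toy
# random signals at the bottom stand in for real audio I/O.
import numpy as np

SR = 44100      # sample rate
FRAME = 1024    # analysis frame length in samples
BANDS = 32      # number of spectral bands in the feature vector

def feature(frame):
    """Log-energy in BANDS equal slices of the magnitude spectrum."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return np.log1p(np.array([b.sum() for b in np.array_split(mag, BANDS)]))

def build_corpus(sound):
    """Slice the corpus signal into frames and precompute their features."""
    n = len(sound) // FRAME
    frames = sound[:n * FRAME].reshape(n, FRAME)
    return frames, np.stack([feature(f) for f in frames])

def mosaic_step(mic_frame, frames, feats):
    """Return the corpus frame whose feature is closest to the mic frame's."""
    d = np.linalg.norm(feats - feature(mic_frame), axis=1)
    return frames[np.argmin(d)]

rng = np.random.default_rng(0)
corpus = rng.standard_normal(SR * 2)          # stand-in for the sound folder
frames, feats = build_corpus(corpus)
mic = rng.standard_normal(FRAME)              # stand-in for one microphone frame
out = mosaic_step(mic, frames, feats)         # frame to send to the audio output
```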

More details: www.brunozamborlin.com/mogees

Date: May 31st, 2011
Category: Uncategorized

Active Appearance Modeling

I’ve been working on modeling facial expressions and head pose, and recently came across two very good libraries: Jason Mora Saragih’s work on shape modeling [1], and aam-library by GreatYao.

Here I am using Jason’s library to automatically build a database of images, with 67 facial landmarks tagged in each one.

After training on 200 images, I use aam-library to build an active appearance model and register my own face with a reprojection. The results are pretty fun:
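For readers curious how the model is built, here is a minimal sketch of the shape-model half of an active appearance model: stack the 67 (x, y) landmarks of each training image into a vector, compute the mean shape, and keep the top PCA modes of variation. This is the textbook construction rather than aam-library's internals (the library itself is C++), and the random training array is a hypothetical stand-in for the auto-annotated database:

```python
# Shape model for an AAM: mean shape + principal modes of landmark variation.
import numpy as np

def build_shape_model(landmarks, n_modes=10):
    """landmarks: (n_images, 67, 2) array of tagged facial points."""
    X = landmarks.reshape(len(landmarks), -1)      # (n_images, 134)
    mean = X.mean(axis=0)
    # PCA via SVD of the centred data matrix
    _, s, vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = vt[:n_modes]                           # principal shape modes
    variances = (s[:n_modes] ** 2) / (len(X) - 1)  # variance of each mode
    return mean, modes, variances

def synthesize_shape(mean, modes, coeffs):
    """Reconstruct a 67-point face shape from shape-model coefficients."""
    return (mean + coeffs @ modes).reshape(-1, 2)

# Toy demo: 200 random "training shapes" standing in for real annotations.
rng = np.random.default_rng(0)
train = rng.standard_normal((200, 67, 2))
mean, modes, var = build_shape_model(train)
shape = synthesize_shape(mean, modes, np.zeros(len(modes)))  # = the mean shape
```

A full AAM additionally builds an appearance model of the shape-normalised pixel texture and fits both jointly; the registration and reprojection shown above come from that fitting step.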

References:

[1] J. M. Saragih, S. Lucey, and J. F. Cohn. Face Alignment through Subspace Constrained Mean-Shifts. In Proceedings of the International Conference on Computer Vision (ICCV), September 2009.

Date: February 28th, 2011
Category: Uncategorized

First prize at the Guthman Musical Instrument Competition 2011

The MO-Kitchen project, developed by the Interlude consortium with which Bruno Zamborlin worked last year at IRCAM, won first prize in the Guthman Musical Instrument Competition 2011.

Date: December 21st, 2010
Category: Uncategorized

Collaboration with IRCAM

Last week we visited IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in Paris, where Bruno Zamborlin is working before joining us for his PhD. In particular we met with Frédéric Bevilacqua, leader of the Real-time Interaction Group. They are doing a lot of exciting work around gesture and movement. We have agreed to collaborate in the future, around Bruno’s PhD and other projects.

Date: November 18th, 2010
Category: Uncategorized

A new research group

We are pleased to announce the foundation of a new research group in Goldsmiths’ Department of Computing: the Embodied Audio-Visual Interaction group.