Category Archives: Staff profiles and activity

Throwback Thursday: LumiSonic

This week we revisit a research project developed by Goldsmiths’ Mick Grierson in collaboration with Sound and Music, Whitefield Schools and Centre and the London Philharmonic Orchestra.

LumiSonic is an application that visualises sound in real time, allowing hearing-impaired individuals to interact with a specially designed graphical representation of it.

Inspired equally by the experimental film tradition and by neuroscientific research, LumiSonic aims to help those with hearing difficulties gain a better understanding of sound.

The LumiSonic iPhone app, created by Goldsmiths Creative Computing and Strangeloop, is a “proof of concept” visualisation tool that generates a filtered visual display from live sound. Sound is transformed in real time using a Fast Fourier Transform (FFT) and translated into a moving image. The aesthetic and conceptual approach was informed by research in both visual perception and experimental film. Concentric ring formations are a common feature of both visual hallucinations and experimental animation; research suggests this is due to the structure of the visual system. This method of representing sound may therefore be more effective than linear ‘graphic’ approaches.
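
As a rough illustration of the kind of pipeline described above — the function name, frame size, and 16-band ring layout here are assumptions for the sketch, not LumiSonic's actual implementation — an FFT-to-rings mapping might look like this:

```python
import numpy as np

def frame_to_rings(frame, sample_rate=44100, n_rings=16):
    """Map one audio frame to ring brightness values via an FFT.

    Each concentric ring corresponds to a frequency band; its
    brightness follows the energy in that band.
    """
    # Window the frame to reduce spectral leakage, then take the FFT
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    # Split the magnitude spectrum into n_rings frequency bands
    bands = np.array_split(spectrum, n_rings)
    energy = np.array([b.mean() for b in bands])
    # Normalise to [0, 1] so the values are display-ready
    peak = energy.max()
    return energy / peak if peak > 0 else energy

# A 440 Hz sine tone should light up the lowest-frequency ring
t = np.arange(1024) / 44100
rings = frame_to_rings(np.sin(2 * np.pi * 440 * t))
```

A real-time version would run this on successive frames from the microphone and redraw the rings each frame, so louder energy in a band makes its ring glow brighter.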

Throwback Thursday: How music ‘moves’ us – listeners’ brains second-guess the composer

Have you ever accidentally pulled your headphone socket out while listening to music? What happens when the music stops?

Psychologists believe that our brains continuously predict what is going to happen next in a piece of music. So, when the music stops, your brain may still have expectations about what should happen next.

A new paper [it’s Throwback Thursday, so ‘new’ means 2010] published in NeuroImage predicts that these expectations should be different for people with different musical experience, and sheds light on the brain mechanisms involved.

Research by Marcus Pearce, Geraint Wiggins, Joydeep Bhattacharya and their colleagues at Goldsmiths, University of London has shown that expectations are likely to be based on learning through experience with music.

Music has a grammar, which, like language, consists of rules that specify which notes can follow which other notes in a piece of music. According to Pearce, “the question is whether the rules are hard-wired into the auditory system or learned through experience of listening to music and recording, unconsciously, which notes tend to follow others.”

The researchers asked 40 people to listen to hymn melodies (without lyrics) and state how expected or unexpected they found particular notes. They simulated a human mind listening to music with two computational models. The first model uses hard-wired rules to predict the next note in a melody. The second model learns through experience of real music which notes tend to follow others, statistically speaking, and uses this knowledge to predict the next note.
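
A minimal sketch of the statistical approach might look like the following. Note this is only illustrative: the published research used a much richer variable-order model trained on real music, whereas this first-order version just counts which note follows which.

```python
from collections import defaultdict

class NoteBigramModel:
    """Toy statistical model of melodic expectation.

    Learns, from example melodies, which note tends to follow
    which (a first-order Markov model over MIDI pitches).
    """
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, melodies):
        for melody in melodies:
            for prev, nxt in zip(melody, melody[1:]):
                self.counts[prev][nxt] += 1

    def probability(self, prev, nxt):
        """Expectedness of `nxt` given the preceding note `prev`."""
        total = sum(self.counts[prev].values())
        if total == 0:
            return 0.0
        return self.counts[prev][nxt] / total

# Train on two short melodies (MIDI note numbers), then ask how
# expected a continuation is: a low probability means "surprising".
model = NoteBigramModel()
model.train([[60, 62, 64, 62, 60], [60, 62, 64, 65, 64]])
p = model.probability(64, 62)  # how expected is 62 after 64?
```

Comparing such learned probabilities against listeners' expectedness ratings is, in spirit, how the statistical model can be tested against a rule-based one.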

The results showed that the statistical model predicted the listeners’ expectations better than the rule-based model. Expectations were also stronger for musicians than for non-musicians, and for familiar melodies, which again suggests that experience has a strong effect on musical prediction.

In a second experiment, the researchers examined the brain waves of a further 20 people while they listened to the same hymn melodies. Although the participants were not explicitly told where the expected and unexpected notes occurred, their brain waves in response to these notes differed markedly. The timing and location of the brain wave patterns in response to unexpected notes suggested that such notes synchronise activity across brain areas associated with processing emotion and movement. On these results, Bhattacharya commented, “… as if music indeed ‘moves’ us!”

These findings may help scientists to understand why we listen to music. “It is thought that composers deliberately confirm and violate listeners’ expectations in order to communicate emotion and aesthetic meaning,” said Pearce. Understanding how the brain generates expectations could illuminate our experience of emotion and meaning when we listen to music.


‘5 Robots named Paul’

From 4 to 8 September, Patrick Tresset will be exhibiting his project ‘5 Robots Named Paul’ at the Ars Electronica 2014 festival in Austria.

Patrick Tresset, a Visiting Research Fellow at Goldsmiths, uses what he calls “clumsy robotics” to create autonomous cybernetic entities that are playful projections of the artist.

In ‘5 Robots Named Paul’, a scene reminiscent of a drawing class has been created, with robots attached to old school desks equipped with biros and paper. A seated volunteer is sketched by the robots through the ‘eyes’ of their obsolete digital cameras and webcams. The robots’ depictions look untidy, mimicking the movements of a human hand; the resulting sketches are pinned to the wall throughout the duration of the exhibition.

The project has been built upon research findings from computer vision, artificial intelligence and cognitive computing.

Patrick Tresset, Sketches by Paul (2011)

 

Event: Prof Mark Bishop introduces ARTIFICIAL INTELLIGENCE

From Westworld to WALL-E, Hollywood’s fascination with robots has created films that ask serious questions about human identity, technology and responsibility.

On Saturday 20 September 2014, Goldsmiths’ Professor Mark Bishop, a world authority on artificial intelligence, introduces a screening of A.I. ARTIFICIAL INTELLIGENCE at the V&A Museum. This sci-fi film, created by Stanley Kubrick and Steven Spielberg, tells the story of a prototype robot child named David (Sixth Sense’s Haley Joel Osment) who is programmed to ‘love’.

Examining the film’s exploration of cognitive computing design, Professor Bishop traces the film’s genesis in Kubrick’s earlier 2001: A Space Odyssey and A Clockwork Orange, and discusses its relation to current A.I. technology and philosophy.

This event, part of London Design Festival at the V&A, was programmed by Goldsmiths Computing’s Phoenix Fry, and is one of four film events that explore how design alters our perception of reality.

Where: Victoria & Albert Museum Lecture Theatre
When: 7pm – 10pm Saturday 20 September 2014
Tickets: £10 (£7 concessions) – buy online

Throwback Thursday: British Museum Motion Capture Workshop


This week’s Throwback Thursday post revisits an EAVI project from 2011.

In March 2011, Andrea Kleinsmith, Will Robinson, Parag Mital, Bruno Zamborlin and Marco Gillies from Goldsmiths’ Embodied Audio-Visual Interaction research group ran a series of workshops for the British Museum Samsung Digital Discovery Research centre.

These workshops allowed 13–18-year-olds to explore characters and artefacts from the museum’s collection by performing in the Goldsmiths’ motion capture suite. The participants’ movements were mapped onto images of characters from the museum collection.

More images on the British Museum’s Flickr site


  • Andrea Kleinsmith is now a Postdoctoral Researcher at the Department of Computer & Information Science & Engineering, University of Florida

Research focus: Make Your SoundLab

Make Your SoundLab is a collaborative project helping people with learning disabilities express themselves musically, and collaborate with other people, using readily available music technologies.

The first step will be to evaluate a range of technologies — from gestural controllers to iPads and other touch-based devices, together with a wide range of apps and other enabling software — to work out which combinations of technologies suit different groups of users and different working environments. The project will then look at building a software framework that enables educators to make the best use of these technologies without having to be technical experts.

The first SoundLab workshop, in June 2014, investigated which technologies would help people to make music, and tried to discover what kind of music the participants wanted to make, and why. Everyone was able to try out any or all of the available technologies (Kinect, Tether, Leap Motion, IK Multimedia iRing) to see how they worked for them and whether they found them useful in achieving their own goals in making music. Interviewing participants gave the team great insight into motivations, and also into how some of the technologies might help. One participant, Lily, was clear that she wanted to make proper beats that didn’t seem childish, and that she was willing to take her time to make sure the tracks she developed were the best they could be. Wayne, already very familiar with the iPad as a tool for music making, seemed as determined as ever to try everything to see how it might fit with his way of making tracks.


The second workshop happened about a week later, and involved a larger group of participants. Over 40 people took part, with a big mix of motivations for music making among the group. Some were writing musicals, someone wanted to do skiffle using iPads, and quite a few wanted to make dance music or DJ. In this session, the team learned that working with a large group over a much longer time is really different to the short workshop sessions. They also started to see that people making music together at the same time, like a band, is something that the participants were interested in. They used the Kinect to trigger drum patterns and this was really successful – many different people wanted to have a go, and it seemed to encourage dancing as a way to get into the music and do the triggering at the same time. They learnt quite quickly that visual feedback is very important for the Kinect experience: if players can see their body motions on a screen, they begin to understand how the music is being triggered by their own movements, without too much explanation.
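
As a hypothetical sketch of that gesture-to-trigger idea — the zone layout, drum names, and text-based feedback here are assumptions for illustration, not the workshop's actual setup — the core mapping is just a hand position falling into one of several zones:

```python
# Divide the horizontal axis into zones, one per drum sound.
DRUM_ZONES = ["kick", "snare", "hihat", "clap"]

def zone_for_hand(x_normalised, n_zones=len(DRUM_ZONES)):
    """Map a tracked hand x-position in [0, 1] to a zone index."""
    x = min(max(x_normalised, 0.0), 1.0)
    return min(int(x * n_zones), n_zones - 1)

def trigger(x_normalised):
    """Return which drum to play, plus feedback for the display."""
    zone = zone_for_hand(x_normalised)
    drum = DRUM_ZONES[zone]
    # Visual feedback: highlight the active zone so players can see
    # how their own movement is driving the music.
    display = " ".join("[{}]".format(d) if i == zone else d
                       for i, d in enumerate(DRUM_ZONES))
    return drum, display

drum, display = trigger(0.3)  # hand about a third of the way across
```

In a real setup the x-position would come from the Kinect's skeleton tracking, and the display would be an on-screen mirror of the player's body with the active zone lit up.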

The team

  • Heart n Soul is widely acknowledged as a leader in the field of inclusive practice, creating space for artists with and without learning disabilities to come together and make high quality work
  • EAVI, the Embodied Audio-Visual Interaction group at Goldsmiths, provides a strong track record in the area of research-led music and music technology design
  • Public Domain Corporation provides interactive experiences and technology for the games and digital arts sectors.

Visit the Make Your SoundLab website   |  Hear the team talking about their work

 

New Scientist article uncovers automatic art

An article in this week’s New Scientist magazine provides a short history of Automatic Art – from Russian Constructivism to protein visualisations and acid house album art.

The article gives an overview of the new exhibition, Automatic Art, which explores how art built on logical and mathematical rules ended up giving science new ways of seeing the world.

The article features The Shamen’s Heal (The Separation) video from 1995, directed by William Latham, now Professor of Computing at Goldsmiths.