Throwback Thursday: Scan_Memories


We’re going all the way back to 2009 for this week’s Throwback Thursday look at past projects developed at Goldsmiths Computing. 

The Scan_Memories project investigated how new technology can create, or participate in, the process of reconstructing memories, compared with existing ways of remembering the deceased and of being remembered by the bereaved.

Developed at Goldsmiths by Miguel Andres-Clavera and Inyong Cho, the project used radio-frequency ID (RFID), mobile and multimedia technologies to give people a gateway for maintaining an emotional relationship with the deceased.

Clavera and Cho said: “The project opens a heterogeneous and direct access to the memories materialized in physical spaces and in objects connected with the dead, presenting the dialectic between constructed formations based on presence and absence, and memory reconstruction through technology-mediated patterns.”

Watch the 20-minute documentary


Theseus Returned


Mark Bishop’s short story ‘Theseus Returned’ recently made the shortlist for CyberTalk Magazine’s Flash Fiction Competition.

The story was also published in ‘The Envelope: A Collection of Short Stories’ by Stephen Westland and Helen Disley which is available now on the Kindle.

It will appear in CyberTalk’s printed magazine later in the year.

Throwback Thursday: LumiSonic

This week we revisit a research project developed by Goldsmiths’ Mick Grierson in collaboration with Sound and Music, Whitefield Schools and Centre and the London Philharmonic Orchestra.

LumiSonic is an application that visualises sound in real time, allowing hearing-impaired individuals to interact with a specially designed graphical representation of sound.

Inspired equally by the experimental film tradition and neuroscientific research, LumiSonic aims to help those with hearing difficulties gain a better understanding of sound.

The LumiSonic iPhone app, created by Goldsmiths Creative Computing and Strangeloop, is a “proof of concept” visualisation tool that generates a filtered visual display based on live sound. Sound is transformed in real time through a Fast Fourier Transform (FFT) and translated into a moving image. The aesthetic and conceptual approach was informed by research in both visual perception and experimental film. Concentric ring formations are a common feature in both visual hallucinations and experimental animation, and research suggests this is due to the structure of the visual system. This method of representing sound could therefore be more effective than linear ‘graphic’ approaches.
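For readers curious how such a mapping might work, here is a minimal sketch of the general idea in Python, not the app’s actual code: a frame of audio is windowed, passed through an FFT, and its magnitude spectrum is grouped into bands that could drive the brightness of concentric rings. The frame size, ring count and test tone are assumptions for illustration.

```python
# A minimal sketch of the LumiSonic idea, not the actual app code:
# take a frame of audio, run an FFT, and map the magnitude spectrum
# onto concentric rings (low frequencies at the centre, high at the edge).

import numpy as np

SAMPLE_RATE = 44100
FRAME_SIZE = 1024      # samples per analysis frame (assumed)
NUM_RINGS = 32         # number of concentric rings in the display (assumed)

def frame_to_rings(frame: np.ndarray) -> np.ndarray:
    """Return one brightness value per ring from a frame of audio samples."""
    windowed = frame * np.hanning(len(frame))     # window to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))      # magnitude spectrum
    # Group FFT bins into NUM_RINGS bands and average each band.
    bands = np.array_split(spectrum, NUM_RINGS)
    brightness = np.array([band.mean() for band in bands])
    # Normalise so the brightest ring is 1.0 (guard against silence).
    peak = brightness.max()
    return brightness / peak if peak > 0 else brightness

# Example: a 440 Hz test tone lights up the rings covering ~440 Hz.
t = np.arange(FRAME_SIZE) / SAMPLE_RATE
tone = np.sin(2 * np.pi * 440 * t)
print(frame_to_rings(tone).round(2))
```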

Throwback Thursday: How music ‘moves’ us – listeners’ brains second-guess the composer

Have you ever accidentally pulled your headphone plug out while listening to music? What happens when the music stops?

Psychologists believe that our brains continuously predict what is going to happen next in a piece of music. So, when the music stops, your brain may still have expectations about what should happen next.

A new paper [it’s Throwback Thursday, so ‘new’ means 2010] published in NeuroImage shows that these expectations differ between people with different musical experience and sheds light on the brain mechanisms involved.

Research by Marcus Pearce, Geraint Wiggins, Joydeep Bhattacharya and their colleagues at Goldsmiths, University of London has shown that expectations are likely to be based on learning through experience with music.

Music has a grammar, which, like language, consists of rules that specify which notes can follow which other notes in a piece of music. According to Pearce, “the question is whether the rules are hard-wired into the auditory system or learned through experience of listening to music and recording, unconsciously, which notes tend to follow others.”

The researchers asked 40 people to listen to hymn melodies (without lyrics) and state how expected or unexpected they found particular notes. They simulated a human mind listening to music with two computational models. The first model uses hard-wired rules to predict the next note in a melody. The second model learns through experience of real music which notes tend to follow others, statistically speaking, and uses this knowledge to predict the next note.
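To make the distinction between the two models concrete, here is a toy sketch in Python, not the researchers’ actual models: a statistical predictor that learns first-order note-to-note transition probabilities from a small invented corpus, versus a hard-wired rule that simply expects small melodic steps.

```python
# Toy illustration of the two approaches (invented data and rule, for demonstration only).
from collections import Counter, defaultdict

# Melodies as MIDI note numbers (invented training data).
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 62, 65, 64, 67, 65, 60],
]

# Statistical model: learn first-order transition counts from the corpus.
transitions = defaultdict(Counter)
for melody in corpus:
    for prev, nxt in zip(melody, melody[1:]):
        transitions[prev][nxt] += 1

def statistical_expectedness(prev: int, nxt: int) -> float:
    """P(next note | previous note), estimated from the corpus."""
    counts = transitions[prev]
    total = sum(counts.values())
    return counts[nxt] / total if total else 0.0

def rule_based_expectedness(prev: int, nxt: int) -> float:
    """Hard-wired rule: small melodic steps (<= 2 semitones) are expected."""
    return 1.0 if abs(nxt - prev) <= 2 else 0.1

# A step of a whole tone vs a leap of nearly an octave.
print(statistical_expectedness(60, 62), rule_based_expectedness(60, 62))
print(statistical_expectedness(60, 71), rule_based_expectedness(60, 71))
```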

The results showed that the statistical model predicts the listeners’ expectations better than the rule-based model. It also turned out that expectations were higher for musicians than for non-musicians and for familiar melodies—which also suggests that experience has a strong effect on musical predictions.

In a second experiment, the researchers examined the brain waves of a further 20 people while they listened to the same hymn melodies. Although in this experiment the participants were not explicitly told where the expected and unexpected notes occurred, their brain waves in response to these notes differed markedly. The timing and location of the brain wave patterns in response to unexpected notes suggested that such notes trigger responses that synchronise different brain areas associated with processing emotion and movement. On these results, Bhattacharya commented, “… as if music indeed ‘moves’ us!”

These findings may help scientists to understand why we listen to music. “It is thought that composers deliberately confirm and violate listeners’ expectations in order to communicate emotion and aesthetic meaning,” said Pearce. Understanding how the brain generates expectations could illuminate our experience of emotion and meaning when we listen to music.


CONGRATULATIONS!


Computing graduates celebrate their success at the annual graduation ceremony at Goldsmiths.

It was all smiles on the day as graduating students received their degrees, watched by proud parents and friends. Teaching staff from across the Computing department were beaming with pride and happily posed for photos.

A special mention goes to Jack Hunt from BSc Computer Science, who graduated with a First-class degree and is going on to study for a PhD at Oxford.



FUTURE GRAPHICS, MA/MFA Computational Arts students at the V&A


A free programme of short films showcasing cutting-edge motion graphics, CGI environments and digital art on film, featuring work by students from the MA/MFA Computational Arts programme at Goldsmiths.

Curated by Design on Film, Factory Fifteen and Penny Hilton, the films play on a loop throughout the V&A’s opening hours.

Part of the London Design Festival at the V&A 2014.

Sat 13 September 2014 – Tue 16 September 2014
10.30 – 17.30
V&A, British Galleries, Room 56c, Cromwell Road, London SW7 2RL


Throwback Thursday: iMC Rap Maker

Back in 2009, BSc Creative Computing student Eric Brotto released his final-year project on the iPhone App Store. The app uses real-time audio analysis and manipulation on a mobile phone to create a rap soundtrack from a user’s voice.

From the App Store listing: “Now you can be a rapper! iMC Rap Maker records you talking and then transforms it into a rap song. Syncing your voice to the beat and even adding DJ scratches to your vocal means you get a Hip Hop hit produced by you. You can then share your creation with your friends via Facebook and SoundCloud. Download your iMC today!”
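As a rough illustration of the “sync your voice to the beat” idea only, here is a tiny Python sketch that snaps detected vocal onset times to a fixed tempo grid. The tempo, onset times and quantisation approach are assumptions for illustration; the app’s real processing chain is not published.

```python
# Hypothetical sketch: quantise vocal onsets to a beat grid (not the app's actual code).

BPM = 90               # assumed tempo of the backing beat
BEAT = 60.0 / BPM      # seconds per beat

def snap_to_grid(onsets_sec, beat=BEAT):
    """Quantise each onset time (in seconds) to the nearest beat position."""
    return [round(t / beat) * beat for t in onsets_sec]

# Example: sloppy spoken-word onsets pulled onto the 90 BPM grid.
spoken_onsets = [0.05, 0.71, 1.29, 2.02, 2.64]
print(snap_to_grid(spoken_onsets))
```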


Eric Brotto is now mentor-in-residence at Startup Reykjavik Accelerator Programme, co-founder of Creative Bytes Worldwide, and content creator for DECODED.

Creativity, independence and learning by doing.