Category Archives: Inspiration

Goldsmiths PhD honoured at Prix Ars Electronica for ‘YouTube Smash Up’

Parag Mital has received an honorary mention at Prix Ars Electronica for work completed as part of his PhD here at Goldsmiths Computing.

‘YouTube Smash Up’ attempts to generatively produce viral content using video material from the Top 10 most viewed videos on YouTube.

Each week, the Number 1 video is resynthesized by a computational algorithm that matches its sonic and visual content to material drawn only from the remaining Top 10 videos. This other material is then reassembled to look and sound like the Number 1 video. The process does not copy the file; it synthesizes a collage of fragments segmented from entirely different material.
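
The published description does not include code, but the core matching step can be sketched as concatenative resynthesis: segment every video, describe each segment with a feature vector, and replace each segment of the target with its nearest neighbour from the corpus. The C++ below is a minimal sketch, not Mital's implementation; random vectors stand in for real audio and visual descriptors such as MFCCs or colour histograms.

// Minimal sketch (not Mital's actual code): concatenative resynthesis as
// nearest-neighbour matching. Each short segment of a video is reduced to a
// feature vector; every segment of the target (the Number 1 video) is
// replaced by the closest-matching segment from the corpus (videos 2-10).
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

using Feature = std::vector<double>;

// Euclidean distance between two feature vectors of equal length.
double distance(const Feature& a, const Feature& b) {
    double d = 0.0;
    for (size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(d);
}

// Index of the corpus segment whose features best match `target`.
size_t bestMatch(const Feature& target, const std::vector<Feature>& corpus) {
    size_t best = 0;
    double bestDist = distance(target, corpus[0]);
    for (size_t i = 1; i < corpus.size(); ++i) {
        double d = distance(target, corpus[i]);
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return best;
}

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    // Random 13-dimensional vectors stand in for per-segment descriptors.
    auto randomFeature = [&] { Feature f(13); for (double& x : f) x = u(rng); return f; };

    std::vector<Feature> target(8), corpus(200);
    for (Feature& f : target) f = randomFeature();  // segments of the Number 1 video
    for (Feature& f : corpus) f = randomFeature();  // segments from videos 2-10

    // The resynthesis plan: which corpus segment to splice in at each position.
    for (size_t t = 0; t < target.size(); ++t)
        std::printf("target segment %zu -> corpus segment %zu\n",
                    t, bestMatch(target[t], corpus));
    return 0;
}

A real system would additionally enforce continuity between consecutive segments so the collage does not jump arbitrarily from frame to frame.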

In the video above, for example, Pharrell Williams' Happy is recreated using music videos by Chris Brown, Lady Gaga, John Legend and Katy Perry, plus clips and trailers from Footloose, X-Men: Days of Future Past and The Voice.

Using YouTube’s interface, the videos are also textually tagged with the names of popular culture’s “most viewed” artifacts: the Top 10 YouTube videos from which they are built. This tagging attempts to inject each video into the community, masquerading as an innocent tribute video. The audience, often viewers hoping to find the original Number 1 video, are evidently disturbed by the videos, as illustrated by their overwhelmingly negative “like” ratios, and by comments such as, “now im [sic] blind”, “Will someone kill me in my sleep because I watched this video?” and another commenter’s reply to the previous comment, “me 2 [sic]”.

Despite their poor reception, likely due to their cut-up and abstract nature, most smash-ups have been flagged as copyright violations by YouTube’s automated infringement-detection system, Content ID, which scans newly uploaded videos for copyrighted content and notifies the original content holders of anything it finds. In each case, Content ID flags the videos as duplicates of the Number 1 video, rather than flagging any of the content actually used from the Number 2 to 10 videos. Most likely the content-rights holders never watch the supposedly infringing videos, and instead forward a cease-and-desist notice threatening a lawsuit. Despite this powerful language, the videos were all put back online after multiple rounds of fair-use arguments and even more cease-and-desist notices.

The videos manipulate a level of representation that sits between pixels and perception, indistinguishable to robot perception, juxtaposing cultural fragments at a proto-object layer in an entirely automated process: Miley Cyrus’s lips collaged against the background of a troupe of dancing animals, or Psy’s forehead dancing without the rest of Psy. Within this space, a disjunct between state-of-the-art robot perception and the perception of unsuspecting YouTubers is revealed, asking what constitutes a copyrightable cultural artifact as algorithms become more intelligent and data is manipulated by ever more complex pattern-recognition and information-retrieval algorithms. Finally, the videos probe a dystopian future of automated content generation, in which computer algorithms are capable not only of modeling cultural artifacts but of producing them, moving beyond their present role as mere content curators.


Parag K. Mital is an artist and interdisciplinary researcher obsessed with the nature of information, representation and attention. Using film, eye-tracking, EEG, and fMRI recordings, he has worked on computational models of audiovisual perception from the perspective of both robots and humans, often revealing the disjunct between the two, through generative film experiences, augmented reality hallucinations and expressive control of large audiovisual corpora. Through this process, he balances his scientific and arts practice, with both reflecting on each other: the science driving the theories, and the artwork re-defining the questions asked within the research. 

Some of his earlier work includes a resynthesis of Jan Svankmajer’s work, a resynthesis of The Simpsons intro using only footage from Family Guy, and a resynthesis of Michael Jackson’s Beat It using nature recordings.

His PhD, “Audiovisual Scene Synthesis”, was funded by the Department of Computing, Goldsmiths, University of London, under the supervision of Mick Grierson and Tim Smith.

Music Computing graduate wins top prize at Human-Computer Interaction conference

Music Computing graduate Pedro Kirk has won first prize in the student research competition at the CHI 2015 conference in Seoul, Korea.

His paper, Can Specialised Electronic Musical Instruments Aid Stroke Rehabilitation?, won the top student prize in the field of Human-Computer Interaction, beating entrants from institutions including MIT, Georgia Tech, the University of Washington and Carnegie Mellon University.

Now studying on the MSc in Music, Mind & Brain at Goldsmiths, he presented work produced as part of his third-year undergraduate Music Computing project, which he showed at the 2014 Undergraduate Degree Show.

Abstract
Stroke patients often have limited access to rehabilitation after discharge from hospital, leaving them to self-regulate their recovery. Previous research has indicated that several musical approaches can be used effectively in stroke rehabilitation.

Stroke patients (n = 43), between 6 months and 19 years post-stroke, took part in specially created workshops, playing music both in groups and individually using a number of digital musical interfaces. All participants completed feedback forms, which helped to develop the prototypes and gave insights into the potential benefits of music-making for rehabilitation.

93% of participants stated that they thought the music workshops were potentially beneficial for their rehabilitation. The research project contributes to the field of HCI by exploring the role of computer-based systems in stroke rehabilitation.


* Copyright is held by the owner/author(s). CHI’15 Extended Abstracts, April 18-23, 2015, Seoul, Republic of Korea. ACM 978-1-4503-3146-3/15/04.

Computational Arts student wins Saudi innovation & entrepreneurship prize

MA Computational Arts student Hadeel Ayoub has won an Innovation & Entrepreneurship Prize for Saudi Students in the UK.

Her prize-winning project, the Sign Language Glove, uses flex sensors to ‘translate’ the hand and finger positions used in sign language into alphabet characters on an LED display.

As well as winning the £1,000 bronze medal prize, Hadeel was approached to present her innovation at the Innovation Leaders Conference at Cambridge Judge Business School and at the Arab Innovation Network Annual Conference in Jordan.

She was also approached by Evolvys Venture Builders, a technology network that identifies innovations and helps bring them to market. The CEO, Dr. Evolves Oudrhiri (one of the competition judges), offered Hadeel some of their microchips to incorporate into the next prototype of the sign language glove.

“I got the idea for the sign language Arduino project while I was working on photo-editing software that allows the user to control image pixels and input letters as pixels. I thought to substitute the keyboard input with interactive sign language using flex sensors and an Arduino.

“Alongside the flex sensors for the fingers, I used an accelerometer to detect hand orientation. For aesthetic reasons, I swapped the microcontroller from an Arduino Uno to the sewable LilyPad so I could hide it within the glove fabric. I also got some conductive thread to patch things up without breaking the circuit.

“Finally, instead of the serial monitor (and again for aesthetic purposes), I got a 4-digit LED numeric display to show the letters. I still haven’t decided if my device should be wireless, but if so, I will also attach an external battery power supply and a Bluetooth module.”

(Text adapted from Hadeel Ayoub’s Sign Language Glove project blog)
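
To make the sensing-and-lookup loop concrete, here is a minimal Arduino-style C++ sketch of the workflow described above. It is hypothetical, not Hadeel’s code: the pin assignments, bend threshold and two-entry gesture table are placeholders, it ignores the accelerometer, and it prints to the serial monitor rather than driving the 4-digit LED display.

// Hypothetical sketch of the glove's core loop; pins, threshold and the
// gesture table are illustrative placeholders, not the project's real values.
const int FLEX_PINS[5] = {A0, A1, A2, A3, A4};  // one flex sensor per finger
const int BENT_THRESHOLD = 512;                 // ADC reading above which a finger counts as bent

// Map a 5-bit finger pattern (bit i = 1 when finger i is bent) to a letter.
// Real fingerspelling needs many more patterns, plus hand orientation from
// the accelerometer; two made-up entries are shown for illustration.
char letterFor(byte pattern) {
  switch (pattern) {
    case 0b01111: return 'A';  // four fingers bent, thumb straight (illustrative)
    case 0b00000: return 'B';  // flat hand (illustrative)
    default:      return '?';  // unrecognised hand shape
  }
}

void setup() {
  Serial.begin(9600);  // stand-in for the glove's 4-digit LED display
}

void loop() {
  byte pattern = 0;
  for (int i = 0; i < 5; i++) {
    if (analogRead(FLEX_PINS[i]) > BENT_THRESHOLD) pattern |= (1 << i);
  }
  Serial.println(letterFor(pattern));
  delay(200);  // crude debounce between readings
}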

Develop lessons for British Museum’s Samsung Digital Discovery Centre

The British Museum want to hear from organisations or individuals who can design and deliver innovative and experimental digital learning sessions that engage visitors with the museum’s collections.

The museum’s Samsung Digital Discovery Centre delivers a programme of digital learning for schools, family and teen audiences, and is seeking new sessions to be developed and delivered by external partners. These sessions will be included in the monthly one-off ‘Innovation Lab’ programme.

Pitch a session
Session proposals should be experimental or scratch-like, testing out new ideas, technologies or ways of working with family audiences. The British Museum do not expect new software to be developed as part of these sessions, but the Innovation Lab will be a great forum to test out new software, or new uses of software, with a family audience in a digital learning environment.

“We want our audiences to be excited about using new technologies, engaged with our collections, and experiencing new things. We are also interested in sessions that do not replicate what we already provide in the Samsung Digital Discovery Centre.”

If you would like to pitch an idea, send it using the following headings:

  1. Title: (or working title)
  2. Session description: (One paragraph description of your activity. Please include learning outcomes, and any outputs created by visitors.)
  3. Session times: (We deliver drop-in sessions on Saturdays (11am-4pm), and workshop sessions on Sundays (11am-1pm and 2pm-4pm). Will your activity be a drop-in or a workshop session? Please feel free to suggest alternative times within the 11am-4pm timeframe if it is more appropriate for you.)
  4. Target audience and age range: (Is this an activity for families or teens?)
  5. Brief session plan: (Please detail a brief session plan.)
  6. Collection: (How does your idea engage your target audience with the British Museum’s collection? If appropriate, please give examples of objects you will use or reference.)
  7. Technology and resources: (Please state what hardware from the SDDC you will use (see notes below), what additional resources you will need, and whether you will bring in, or need to hire, any additional equipment that is not available in the SDDC.)
  8. Budget: (We expect budgets of between £350 and £1,000. Please detail how this money will be spent on development time, delivery time, resources and other expenses. We would expect one day of delivery to be included in this cost.)
  9. You: (Tell us a bit about you or your company, including why you are qualified to develop and lead this activity, or what skills you hope to develop by doing this project.)

Notes on technology available in the Samsung Digital Discovery Centre

  • 75” eboard with touch screen overlay
  • 46” LCD TV and 55” LED TV
  • 25 high-spec laptops with internet access, Adobe CS5 Pro production suite, Blender, Audacity and standard Microsoft Office software
  • 55 Galaxy Note 10.1” Android tablets with stylus
  • 40 Galaxy Note II and Note III smartphones
  • 25 digital cameras, 40 Galaxy cameras and 6 digital SLR cameras with standard and wide angle lenses (Samsung NX1 and NX100)
  • Kinect system
  • 6 HD digital camcorders with microphone jack
  • 6 digital USB microscopes
  • 3 scanners
  • 1 Samsung SUR40 multi-touch table
  • Green screen

Please send your completed pitch to Lizzie Edwards and Juno Rae by 12 noon, Monday 13 April 2015.

Major funding for next-generation tech that adapts to human expression

Computer scientists at Goldsmiths, University of London have been awarded more than £1.6m to lead an international team in accelerating the development of advanced gaming and music technology that adapts to human body language, expression and feelings.

The success of first generation interfaces that capture body movement, such as the Nintendo Wii and Microsoft Kinect, has demonstrated a public appetite for technology that allows users to interact with creative multimedia systems in seamless ways.

The Rapid Mix consortium will now use years of research to develop advanced gaming, music and e-health technology that overcomes user frustrations, meets next generation expectations, and allows start-ups to compete with developments from major corporations, such as Apple, Google and Intel.

Rapid Mix will bring cutting-edge knowledge from three leading technology labs to a group of five creative industry SMEs, based in Spain, Portugal, France and the UK, who will use the research to develop prototype products.

Newly developed Application Programming Interfaces (the tools that allow one piece of software to interact with another) and new hardware designs will also be made available to the do-it-yourself community through an open-access platform.

Rapid Mix is led by Professor Atau Tanaka from the Department of Computing at Goldsmiths, University of London, with Dr Rebecca Fiebrink and Dr Mick Grierson.

Professor Tanaka comments: “Humans are highly expressive beings. We communicate verbally but the body is also a major outlet for both conscious and unconscious expression. In this quest for expression we’ve created art, music and technology.

“Technological advances have their greatest impact when they enable us to express ourselves, so it logically follows that new, disruptive innovations need interfaces that take advantage of our expressivity, rather than acting to restrict it”.

“Microsoft has promised a Kinect 2 that detects heart rate to assess gamers’ responses, but small European businesses struggle to compete with the corporations when it comes to getting amazing products from the lab into the public’s hands. Our project aims to overcome this challenge and get new technology directly to users, where it will have true impact.”

Prof Mark Bishop in The Independent

Ex Machina (film still)

Mark Bishop, Professor of Cognitive Computing at Goldsmiths, features in The Independent with an article about the limits of Artificial Intelligence.

He outlines three arguments that address the question of consciousness and computing. The first, by John Searle, dates from 1980 and is known as the Chinese Room: if a computer convinces a Chinese speaker that it understands Chinese by responding perfectly to their questions, it has passed the Turing Test; but does it really understand Chinese, or does it only simulate understanding? The second is Bishop’s own argument from his 2002 paper, Dancing With Pixies. “If it’s the case that an execution of a computer program instantiates what it feels like to be human,” he says, “experiencing pain, smelling the beautiful perfume of a long-lost lover – then phenomenal consciousness must be everywhere. In a cup of tea, in the chair you’re sitting on.”

This philosophical position – known as “panpsychism” – holds that all physical entities have mental attributes, and Bishop sees it as Strong AI’s absurd conclusion. Nigel Shadbolt agrees. “Exponentials have delivered remarkable capability,” he says, “but none of that remarkable capability is sitting there reflecting on what very dull creatures we are. Not even slightly.”

The third argument Bishop makes is that there’s something about human creativity that computers just don’t get. While a computer program can compose new scores in the style of JS Bach that sound plausibly like Bach compositions, it cannot devise a whole new style of composition. “It might create paintings in the style of Monet,” he says, “but it couldn’t come up with, say, Duchamp’s urinal. It isn’t clear to me at all where that degree of computational creativity can come from.”

http://www.independent.co.uk/life-style/gadgets-and-tech/features/alex-garlands-film-ex-machina-explores-the-limits-of-artificial-intelligence--but-how-close-are-we-to-machines-outsmarting-man-9996624.html

Mark Bishop’s profile at Goldsmiths:
http://www.gold.ac.uk/computing/staff/m-bishop/

Christian Marclay at White Cube, Bermondsey

Christian Marclay has a new solo exhibition at White Cube. It features surround-sound, multi-screen audiovisual projection works using a configuration similar to Goldsmiths’ new ‘SIML’ space.

The work was produced with the assistance of two of Goldsmiths’ MA & MFA Computational Arts students, Haein Kim and Antonio Daniele, and one of our PhD students, Diego Macedo de Fagundes.

About the exhibition:

Continuing Marclay’s long-standing interest in the relationship between image and sound, the exhibition comprises a series of works on canvas and paper that feature onomatopoeia taken from comic books. Unlike earlier instances of sound mimesis in his work, these focus solely on the wet sounds suggestive of the action of painting. Combining cartoon-strip imagery with the dripping, pouring and splashing noises associated with gestural abstraction, the works ironically bridge a gap between art movements as distinct as Abstract Expressionism and Pop Art. This is also reflected in the method by which they have been made: a combination of painting overlaid with screen printing.

A further set of onomatopoeia is put in motion for the first time in a large-scale video installation that projects across four walls. To make the work, the artist collated a lexicon of the sound effects made by characters in superhero stories. The scanned swatches were then animated using After Effects in a dynamic choreography that suggests the acoustic properties of each word. ‘Boom’, for example, is no longer static on the page, but bursts into life in a sequence of colourful explosions, while ‘Whooosh!’ and ‘Zoooom!’ travel at high speed around the walls. The work fuses the aural with the visual, and immerses the viewer in a silent musical composition.

The aqueous motif introduced with the paintings runs throughout the exhibition, surfacing in a number of new works that allude to everyday life. In a new video installation entitled Pub Crawl (2014), the artist coaxes sound from the empty glasses, bottles and cans that he finds abandoned on the streets of East London during early-morning weekend walks. In a series of projections that run the length of the gallery’s corridor, these discarded vessels are hit, rolled and crushed, forming a lively soundtrack that echoes throughout the space.

http://whitecube.com/exhibitions/christian_marclay_bermondsey_2015/