
Student profile: Terence Broad (BSc Creative Computing)

Terence is in the second year of the BSc Creative Computing programme at Goldsmiths. His project, Mediated Perception, is currently shortlisted for DevArt, a competition run by Google and the Barbican Centre to source the best new digital art for the Digital Revolution exhibition.

How did you arrive at Goldsmiths?
After my art foundation in Newcastle I studied sculpture at Camberwell College of Arts, but I dropped out after a year. I was getting really interested in programming, and was frustrated with what I was being taught at college. I felt they were only teaching us about sculptors who made sculptures about sculpture. It was such a relief to come to Goldsmiths because I just wanted to get on and do stuff. In first year we made our own versions of Pong – it’s great that you can just get on with designing stuff that’s interesting.

What are you studying?
BSc Creative Computing is a combination of lectures and labs on core computer science (learning how to program, and how to build databases and websites) and creative computing, which is all about experimenting with graphics and audio. Things like building your own synthesiser and creating 3D graphics, but also looking at the maths behind how people perceive images and sound. I’ve always been interested in Photoshop, so in my first year at Goldsmiths I wanted to learn more about manipulating images, and ended up creating my own version of Photobooth for my end-of-year project (see video).


Terence’s Photobooth project

Tell us about your current project
Over the summer me and my flatmate got really excited about a new virtual reality headset called the Oculus Rift. They’re not available to the general public, so we had to pretend to be games developers to be able to get hold of one. My flatmate, who’s studying Photography at London College of Communication, wanted to use it to create a virtual art gallery. But I was more interested in discovering what it would be like if you could mess with people’s perceptions.

Instead of the user seeing a virtual reality, I attached two webcams to the front of the headset. By feeding these onto the VR screen the user gets a replica of their normal vision. And then once you’ve sorted that out, you can distort and manipulate the ‘reality’ that they see. I’ve been experimenting with using it as a synaesthesia simulator – using music to trigger visual effects like colour shifting, wobble, blurring and temporal layering. But you can also trigger perceptual distortions using head movement, changes in brightness, or the detection of motion and faces.

The low, medium & high audio frequencies control colour shifting, wobble, blurring and temporal layering.
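For readers curious how such a mapping can be built, here is a minimal sketch in Python (assuming numpy and OpenCV); it illustrates the general technique only, and is not Terence’s code. The band boundaries, effect strengths and function names are invented for the example.

    import numpy as np
    import cv2

    def band_energies(audio, sr=44100):
        # Split an audio block into normalised low/mid/high energies via an FFT.
        spectrum = np.abs(np.fft.rfft(audio))
        freqs = np.fft.rfftfreq(len(audio), 1.0 / sr)
        low = spectrum[freqs < 250].sum()
        mid = spectrum[(freqs >= 250) & (freqs < 2000)].sum()
        high = spectrum[freqs >= 2000].sum()
        total = low + mid + high + 1e-9  # avoid division by zero
        return low / total, mid / total, high / total

    def distort(frame, low, mid, high):
        # Colour-shift, wobble and blur a BGR frame according to band energy.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hsv[..., 0] = ((hsv[..., 0].astype(int) + int(low * 90)) % 180).astype(np.uint8)
        frame = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
        for y in range(frame.shape[0]):  # horizontal 'wobble', row by row
            frame[y] = np.roll(frame[y], int(mid * 20 * np.sin(y / 15.0)), axis=0)
        k = 2 * int(high * 10) + 1  # Gaussian kernels must be odd
        return cv2.GaussianBlur(frame, (k, k), 0)

    # Demo with synthetic data standing in for live webcam and microphone input.
    audio = np.sin(2 * np.pi * 440 * np.arange(2048) / 44100.0)
    frame = np.full((240, 320, 3), 128, dtype=np.uint8)
    print(distort(frame, *band_energies(audio)).shape)

In the real rig the audio block would come from the microphone and the frame from the head-mounted webcams, with the distorted frame fed back to the Rift’s screen.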

Are there any real world uses for this?
I want to keep an open mind. There’s a lot of people working on augmented reality but it’s mostly about adding information like Twitter feeds, which isn’t that interesting. I’m much more interested in building things and then experimenting freely.

I saw a BBC documentary called The Creative Brain: How Insight Works that features a Dutch researcher who puts people in virtual reality environments where weird things happen, stuff that’s impossible in the real world, like objects floating. Then when the users do a creativity test afterwards, they score much higher than average. So letting people experience things differently for a short time could be beneficial. Maybe my mediated perception project could be a valuable experience. But, as I say, I really don’t know yet.

Have your tutors been supportive of your project?
Mick Grierson, who teaches the Creative Computing course, has been really supportive. He’s encouraging me to make the craziest things I can.

I’ve been working non-stop on this project for the past month or so, but three weeks ago I got very close to giving up. I broke the cameras by accidentally ripping a load of electronics off a circuit board with a hacksaw, and I just went crazy. I’d been working incredibly hard but I had nothing to show for it, so I was completely ready to give up. When I told Mick, he was really distraught and encouraged me to keep going. I pulled myself together, got back to work, and then two weeks later my project was chosen to feature in the DevArt competition! If I win first prize, Google and the Barbican Centre will give me £25,000 to develop my project for an exhibition that tours around the world.


www.terencebroad.com

Open letter: to a new female student studying computing

by Dr Kate Devlin, Department of Computing


Dear ________,

Welcome to the world of computing! I hope it is all that you hoped. It may not be quite what you expected.

As you will no doubt already know, women are under-represented in computer science: that includes industry as well as academia. By taking this step you are helping to change things. As women, we consume technology. We use mobile phones, laptops, tablets, mp3 players. We are a voice on social media, we are comfortable and familiar with apps. We take digital photos, we upload videos and our writing is online. We are consumers. We can also be innovators.

You may have a few concerns about how you fit into a degree and a career in computing. When Belinda Parmar, founder of Little Miss Geek, went to talk to a class of 14-year-old boys and girls about the tech industry, she asked them to draw a picture of someone in the IT industry. Most, inevitably, drew a stereotypical, overweight, geeky-looking person with glasses. Every single member of that class, both boys and girls, drew a man.

Take it from me, though – that’s not really how it is. Sure, some of those types of people exist, but most are, well, normal. The men I have encountered in my time as a computer science academic and a programmer have, pretty much overwhelmingly, been supportive and encouraging (there are a few exceptions, but then there are assholes in all walks of life). The problem is not the men per se. The problem is the biased set-up. The problem is the social expectation. The problem is the lack of opportunity. The problem is the stereotyping.

Women have a valuable role to play. If you’ve ever been told that there are few great women computer scientists, consider that it could be because only a few women have ever been given the chance. Since we were children, the opinions of others have influenced – subtly or unsubtly – how we dress, act, behave and work. Even if we actively shun these opinions we are still susceptible to the unrelenting messages so implicit in society about what our role should be. We learn, as girls, what we are supposed to like. We learn that if we digress from this stereotype then we face problems, not least that we will have to battle to be accepted.

I hope that it is not like this for you. I hope that the degree programme you’ve joined breaks those stereotypes and gives you exactly the same chances and opportunities that your male colleagues have. That is what we aim for as educators.

Women are behind the greatest inventions in computing: programming, compilers, wifi. Women are taking lead roles in massive tech companies – Sheryl Sandberg, chief operating officer of Facebook, and Marissa Mayer, CEO of Yahoo!, to name two prominent figures. There is a long way still to go, but by starting your career in computing you are already making a change. We can offer support: here at Goldsmiths we have a Women in Computing network, we offer bursaries, and we have a wide and varied intake from all walks of life, who each bring their own valued perspective. Computing is a subject with wonderful opportunities. From programming to web design, from wearable technologies to gaming, and from robots to music, computing opens so many doors to the career of your choice.

You can shape the field of computing. You don’t have to listen to the stereotypes or stick with the tried and tested. Think about what you want from technology, and then go out and make it.

Dr Kate Devlin
Department of Computing


First published on Goldsmiths Academics

Electronic music pioneer

As well as running the Creative Computing programme at Goldsmiths, Mick Grierson directs the Daphne Oram Collection, an archive of audio, code, photographs, scores and papers relating to the electronic music pioneer Daphne Oram.

Daphne Oram (1925–2003) was one of the central figures in the development of British experimental electronic music. As co-founder and first director of the BBC Radiophonic Workshop, she is credited with the invention of a new form of sound synthesis: Oramics. Not only is this one of the earliest forms of electronic sound synthesis, it is noteworthy for being audiovisual in nature: the composer draws onto a synchronised set of ten 35mm film strips that overlay a series of photo-electric cells, generating electrical charges to control amplitude, timbre, frequency and duration.
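As a rough, purely illustrative sketch of the drawn-sound idea (a numpy toy, in no way a model of the actual Oramics hardware), the code below treats two hand-drawn curves as control data, resamples them to audio rate much as the scanning photo-electric cells effectively did, and uses them to drive the pitch and amplitude of a sine oscillator. All the values here are invented for the example.

    import numpy as np

    SR = 44100           # sample rate
    DUR = 2.0            # seconds
    t = np.arange(int(SR * DUR)) / SR

    # Stand-ins for curves a composer might draw on the film strips:
    # a pitch contour (in Hz) and an amplitude envelope, as a few points.
    pitch_points = np.array([220.0, 330.0, 440.0, 330.0, 220.0])
    amp_points = np.array([0.0, 1.0, 0.8, 0.5, 0.0])

    # Resample the drawn curves to audio rate.
    xs = np.linspace(0, 1, len(pitch_points))
    pitch = np.interp(t / DUR, xs, pitch_points)
    amp = np.interp(t / DUR, xs, amp_points)

    # Integrate frequency to phase so pitch glides stay smooth, then render.
    phase = 2 * np.pi * np.cumsum(pitch) / SR
    signal = amp * np.sin(phase)
    print(signal.shape, round(float(signal.max()), 3))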

“The Oramics machine is a device of great importance to the development of British electronic music,” explains Mick Grierson. “It’s a great shame that Daphne’s contribution has never been fully recognised, but now that we have the machine at the Science Museum, it’s clear for all to see that she knew exactly how music was going to be made in the future, and created the machine to do it.”


Aikon Research Project – Patrick Tresset and Frederic Fol Leymarie

Why is it that the inexperienced person finds it so difficult to draw what they see so clearly, while an artist can often do so with just a few lines, in a few seconds? How can an artist draw with an immediately recognisable style, in a particular manner? And how, and why, can a few lines thrown spontaneously on paper be aesthetically pleasing?

A bold project using computational techniques to examine the activity of drawing – in particular sketching the human face – has been launched at Goldsmiths, University of London.

The AIKON (Autonomous/Artistic/IKONograph) Project has received funding from the Leverhulme Trust to carry out work from 2009 until the end of 2011, and could eventually result in AIKON “learning” to draw in its own style.

The project is being co-ordinated by Frederic Fol Leymarie, Professor of Computing at Goldsmiths, and Patrick Tresset, a researcher and artist who has already carried out much work in the area upon which the AIKON Project will build.

Artistic drawing has been practised in every known civilisation for at least the last 30,000 years, and sketching specifically has the particularity of showing the drawing process complete with its hesitations, errors and corrections.

This area of research has been tackled by art historians, psychologists and neuroscientists – such as Arnheim, Fry, Gombrich, Leyton, Ramachandran, Ruskin, Willats and Zeki – who have argued that artists organise their perception of the world differently.

The AIKON Project will follow two main research paths: one starts from the study of sketches in archives and notes left by artists and the other is based on contemporary scientific and technological knowledge.

AIKON

Professor Fol Leymarie explains more about the project: “Even if still partial, the accumulated knowledge about our perceptual and other neurobiological systems is advanced enough that, together with recent progress in computational hardware, computer vision and artificial intelligence, we can now try to build sophisticated computational simulations of at least some of the identifiable perceptual and cognitive processes involved in face sketching by artists.”

The most important processes to be studied and simulated within AIKON include the visual perception of the subject and of the dynamically created sketch; the representation, planning and execution of the drawing gestures; the cognitive activity of reasoning about the percepts of the sitter and the drawing; the influence of years of training as a form of memory; and the inter-process information flows, with a focus on feedback mechanisms – for example, when looking back at the sitter or at the partial sketch already drawn.
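As a deliberately crude illustration of one of these feedback mechanisms (a numpy toy, not the AIKON system), the sketch below repeatedly compares an evolving canvas with a target image, finds the point of greatest disagreement, and applies a small corrective “stroke” there. The target, stroke size and iteration count are all invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    target = rng.random((64, 64))   # stand-in for the perceived sitter
    canvas = np.zeros_like(target)  # the sketch drawn so far

    for _ in range(200):
        # Perceive: compare the evolving sketch with the subject (feedback).
        error = target - canvas
        # Plan: pick the location where the sketch deviates most.
        y, x = np.unravel_index(np.argmax(np.abs(error)), error.shape)
        # Draw: one short 'stroke', here a 3x3 patch of pigment.
        canvas[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2] += 0.5 * np.sign(error[y, x])
        np.clip(canvas, 0.0, 1.0, out=canvas)

    print("remaining error:", round(float(np.abs(target - canvas).mean()), 3))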

Based on earlier work and results, Frederic and Patrick expect AIKON to be able to draw in its own style, with the resulting system informed by an artist’s insights and by the writings past artists have left about their creative behaviour.

The Aikon project website

Groundbreaking video communication brings groups together

Have you ever noticed how webcams and front-facing cameras on mobile phones never catch us at a flattering angle? Do you yearn to be able to view more than just a face on the monitor with friends and family? Would it not be great to interact as if you were there with them in person?

VideoComms

Since February 2008, the ‘Narrative Interactive Media’ (NIM) group from the Department of Computing – along with esteemed industry partners (including BT, Alcatel, Philips and board game manufacturer Ravensburger) – has been examining the potential of video technology in supporting group-to-group communication. An €18m EU project, TA2 (‘Together Anywhere, Together Anytime’) looks at how video technology could improve social relationships and connect groups across different locations.

More cameras, better interaction
Indeed, the limitations of single-camera communication are self-evident. Whether you’re on a PC, laptop or tablet, you’re pretty much rooted to the spot – you have to place yourself in the frame of the camera. If you wander out of view, you’re out of view. The setup of the cameras in TA2 provides, in addition to the standard front-facing camera, auxiliary cameras that zoom and move to follow the action.
Incorporating best practice from the world of TV and film production, TA2’s cameras transmit images that imitate how human attention would direct the experience. It is to this ‘communication orchestration’ that Goldsmiths brought expertise.

Marian Ursu, who leads the NIM group, explains: “Obviously neither the cameras nor the editing can be human-controlled, because they have to work as we interact. Also, we have to create a slick communication environment where everyone in the room is, for all intents and purposes, an active part of the interaction.” This requires some degree of intelligence to be embedded into the system so that orchestration decisions can be taken automatically. “You can think of this as the brain of the system,” says Marian, “and we lead the project’s research at this end.”

Michael Frantzis, an expert in video narration and a researcher in the NIM group, explains how this works. “If this were film or TV production we would have a cameraman and a director, but we do not have that luxury. Instead, primitive spatial audio and visual information is used as a basis for the automatic inference of information on, amongst other things, who is in frame, when they are talking, who is talking to whom, keywords in their speech and their visual focus of attention.”

“We call these social or conversation cues,” adds Martin Groen, a computer scientist working in artificial intelligence and now a psychologist, whose role in the team is to identify and define these cues. This information is in turn interpreted and transformed into decisions regarding camera choices and screen editing.
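A toy example of what such a decision step could look like (the rules and names here are invented for illustration and are not the TA2 system’s actual logic):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Cues:
        speaker: Optional[str]    # who is talking, if anyone
        addressee: Optional[str]  # who they appear to be addressing
        motion: float             # amount of movement in the room, 0..1

    def choose_shot(cues: Cues) -> str:
        # Map inferred social cues to a camera/shot decision.
        if cues.speaker and cues.addressee:
            return "two-shot of " + cues.speaker + " and " + cues.addressee
        if cues.speaker:
            return "close-up on " + cues.speaker
        if cues.motion > 0.5:
            return "wide shot of the room"
        return "hold current shot"

    print(choose_shot(Cues(speaker="Ann", addressee=None, motion=0.2)))

The real system must make such choices continuously and from noisy inferred cues, which is where the complexity described below comes in.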

Developing software that can carry out such processes is an extremely complex and demanding task, and NIM has five computer science researchers dedicated to it: Manolis Falelakis, Pedro Torres, Spiros Michalakopoulos, Notis Gasparis and Vilmos Zsombori. They are exploring how knowledge can be expressed and worked with by computers, and at the same time how this can be done fast and reliably enough to be effective for mediating communication.

More than just a chat
For anyone wondering why board game designers Ravensburger were listed as a corporate partner of TA2, things are about to become clear. “TA2 is not about just having a cup of tea and a chat,” explains Marian. “We thought that when people get together normally they have activities to engage with, such as sharing pictures or playing games.”

Indeed, the experimental trials saw two groups of participants split between two locations – the NIM Lab, in the Richard Hoggart Building, and the Goldsmiths Digital Studios, in the Ben Pimlott Building – with the groups battling it out in a game of Pictionary. Each team was made up of two people, one in each room, with the opposition trying to distract the person communicating the picture to their teammate.

There were three sessions for each group, each 30 minutes long: one with a fixed front-facing camera only, one with orchestrated video (the editing was done by humans for these trials), and one where the camera editing was random but respected the rhythm of the human editing. Thirty-nine Goldsmiths students participated in these trials over three days.

Orchestration trials produce a world-first
This was the first time that end-user trials of orchestrated multi-camera video communication between social groups in separate locations had been conducted.

Quantitative and qualitative measures were used to assess the experience. The quantitative measure – the number of accurate and inaccurate guesses, analysed in the context of the overall number of turns – brought noteworthy results, showing that there were significantly more accurate guesses, and significantly fewer incorrect guesses, in the orchestrated condition than in the other two. “This means the information flowed better with orchestration,” says Marian.
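One standard way to test a difference like this is a chi-squared test on a contingency table of accurate versus inaccurate guesses per condition. The sketch below (assuming scipy) shows the shape of such an analysis; the counts are made up purely for illustration and are not the trial’s actual data.

    from scipy.stats import chi2_contingency

    # Rows: fixed camera, orchestrated, random edit; columns: accurate, inaccurate.
    # These counts are invented for illustration only.
    table = [[30, 20],
             [42, 10],
             [28, 22]]
    chi2, p, dof, expected = chi2_contingency(table)
    print("chi2 = %.2f, p = %.4f" % (chi2, p))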

Marian was less enthusiastic, but still very optimistic, about the results from the qualitative measure – the Independent Television Commission Sense of Presence Inventory – a validated questionnaire assessing the subjective experience of participants, developed by the Department of Psychology’s i2 group. Although the questionnaire showed no statistically significant differences between the three conditions, participants expressed a slight preference for the fixed-camera condition. Marian says: “It was a shame the orchestration didn’t come out as significantly better. As someone from the team put it: ‘Orchestration is good for you even if you don’t like it!’ And we are confident that with some immediate improvements to the technology we will get there in the next set of trials.”

Taking it to the next level
The NIM Group is confident that the groundbreaking work undertaken in the project provides a solid theoretical and empirical foundation for improving social relationships and connecting groups across different locations.

Indeed, TA2’s findings will provide the basis for the group’s next EU collaborative project, entitled ‘VConect’ – Video Communication for Networked Communities – set to get underway in December this year.

Marian sets the scene: “VConect builds upon two of the most significant achievements of the current Internet: video conferencing and social networks.

“It will make social networks as flexible and engaging as chatting face-to-face to a group of friends. It will allow us to see what’s really happening, to know who is hurting and who is laughing. It will allow us to see the real drama, let us be part of the most rowdy crowds or talk quietly to a lonely friend.”