Radio 4: A Sound British Adventure

Director of Creative Computing Dr Mick Grierson has appeared on the Radio 4 documentary “A Sound British Adventure”, talking about the ‘Secret History of British Electronic Music’. He discusses the pioneering work of Daphne Oram, and the relationship between technology and creativity in electronic music, alongside key historical figures in the field including synthesiser pioneer Peter Zinovieff (whose machines were used by Pink Floyd and the Rolling Stones) and Brian Hodgson, creator of the Doctor Who TARDIS sound effect.

You can listen to the programme here until 21 August 2012.

Aikon Research Project – Patrick Tresset and Frederic Fol Leymarie

Why is it that the inexperienced person finds it so difficult to draw what they see so clearly, while an artist is able to do so, often with just a few lines, in a few seconds? How can an artist draw with an immediately recognisable style, in a particular manner? And how, and why, can a few lines thrown spontaneously on paper be aesthetically pleasing?

A bold project using computational techniques to examine the activity of drawing – in particular sketching the human face – has been launched at Goldsmiths, University of London.

The AIKON (Autonomous/Artistic/IKONograph) Project has received funding from the Leverhulme Trust to carry out work from 2009 until the end of 2011, and could eventually result in AIKON “learning” to draw in its own style.

The project is being co-ordinated by Frederic Fol Leymarie, Professor of Computing at Goldsmiths, and Patrick Tresset, a researcher and artist who has already carried out much work in the area on which the AIKON Project will build.

Artistic drawing has been practised in every known civilisation for at least the last 30,000 years, and sketching in particular has the distinctive quality of showing the drawing process complete with its hesitations, errors and corrections.

The area of research has been tackled by art historians, psychologists and neuroscientists – such as Arnheim, Fry, Gombrich, Leyton, Ramachandran, Ruskin, Willats and Zeki – who have argued that artists organise their perception of the world differently.

The AIKON Project will follow two main research paths: one starts from the study of sketches in archives and notes left by artists and the other is based on contemporary scientific and technological knowledge.

AIKON

Professor Fol Leymarie explains more about the project: “Even if still partial, the accumulated knowledge about our perceptual and other neurobiological systems is advanced enough that, together with recent progress in computational hardware, computer vision and artificial intelligence, we can now try to build sophisticated computational simulations of at least some of the identifiable perceptual and cognitive processes involved in face sketching by artists.”

The most important processes to be studied and simulated within AIKON include the visual perception of the subject and of the dynamically created sketch; the representation, planning and execution of the drawing gestures; the cognitive activity of reasoning about the percepts of the sitter and the drawing; the influence of years of training, as a form of memory; and the flows of information between these processes, with a focus on feedback mechanisms – for example when looking back at the sitter, or at the partial sketch already drawn.
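Read as a control loop, that description suggests a perceive-compare-plan-execute cycle. The toy Python sketch below is purely illustrative – greyscale arrays stand in for the sitter and the canvas, and darkening a single pixel stands in for a drawing gesture; none of this is AIKON’s actual model or code:

```python
import numpy as np

# A toy, runnable illustration of the perceive-compare-plan-execute feedback
# loop described above. Everything here is an assumption for illustration:
# greyscale arrays stand in for the sitter and the canvas, and one darkened
# pixel stands in for a drawing gesture. This is not AIKON's actual model.
rng = np.random.default_rng(0)

def sketch_session(sitter, max_strokes=5000, gain=0.3, jitter=1, threshold=0.1):
    canvas = np.ones_like(sitter)                 # blank white canvas
    for _ in range(max_strokes):
        error = canvas - sitter                   # compare percept of sitter with partial sketch
        if error.max() < threshold:               # sketch judged finished
            break
        # plan the next gesture: aim at the worst spot...
        y, x = np.unravel_index(np.argmax(error), error.shape)
        # ...and execute it imperfectly, with motor noise standing in for hesitation
        y = int(np.clip(y + rng.integers(-jitter, jitter + 1), 0, sitter.shape[0] - 1))
        x = int(np.clip(x + rng.integers(-jitter, jitter + 1), 0, sitter.shape[1] - 1))
        canvas[y, x] = max(canvas[y, x] - gain, 0.0)
    return canvas

# Usage: a synthetic 'sitter' (a dark disc on white) rendered gesture by gesture.
yy, xx = np.mgrid[0:64, 0:64]
sitter = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 400, 0.2, 1.0)
drawing = sketch_session(sitter)
```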

Based on earlier work and results, Frederic and Patrick expect AIKON to be able to draw in its own style, with the resulting system informed both by an artist’s insights and by the writings past artists have left about their creative behaviour.

The Aikon project website

Groundbreaking video communication brings groups together

Have you ever noticed how webcams and front-facing cameras on mobile phones never catch us at a flattering angle? Do you yearn to be able to view more than just a face on the monitor with friends and family? Would it not be great to interact as if you were there with them in person?

VideoComms

Since February 2008, the ‘Narrative Interactive Media’ (NIM) group from the Department of Computing – along with esteemed industry partners (including BT, Alcatel, Philips and board game manufacturer Ravensburger) – has been examining the potential of video technology in supporting group-to-group communication. An €18m project, TA2 looks at how video technology could improve social relationships and connect groups across different locations.

More cameras, better interaction
Indeed, the limitations of single-camera communication are self-evident. Whether you’re on a PC, laptop or tablet, you’re pretty much rooted to the spot – you have to place yourself in the frame of the camera. If you wander out of view, you’re out of view. The camera setup in TA2 provides, in addition to the standard front-facing camera, auxiliary cameras that zoom and move to follow the action.
Incorporating best practice from the world of TV and film production, TA2’s cameras transmit images that imitate how human attention would direct the experience. It is to this ‘communication orchestration’ that Goldsmiths brought its expertise.

Marian Ursu, who leads the NIM group, explains: “Obviously neither the cameras nor the editing can be human-controlled because they have to work as we interact. Also, we have to create a slick communication environment where everyone in the room is for all intents and purposes an active part of the interaction.” This requires some degree of intelligence to be embedded into the system such that orchestration decisions can be taken automatically. “You can think of this as the brain of the system,” says Marian, and “we lead the project’s research at this end.”

Michael Frantzis, an expert in video narration and a researcher in the NIM group, explains how this works. “If this were film or TV production we would have a cameraman and a director, but we do not have that luxury. Instead, primitive spatial audio and visual information is used as a basis for the automatic inference of information on, amongst other things, who is in frame, when they are talking, who is talking to whom, keywords in their speech and their visual focus of attention.”

“We call these social or conversation cues,” adds Martin Groen, a Computer Scientist who worked in Artificial Intelligence and is now a Psychologist, and whose role in the team is to identify and define these cues. This information is in turn interpreted and transformed into decisions about camera choices and screen editing.
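To make the idea concrete, here is a minimal, hypothetical sketch of how such cues could be mapped to camera decisions by simple rules. The cue fields, camera names and rules are all invented for illustration; TA2’s actual inference and orchestration logic is far richer:

```python
from dataclasses import dataclass

# Hypothetical cameras and the participants each one can frame.
# All names and rules here are illustrative, not TA2's actual logic.
CAMERA_COVERAGE = {
    "wide": {"alice", "bob", "carol", "dan"},
    "cam_left": {"alice", "bob"},
    "cam_right": {"carol", "dan"},
}

@dataclass
class Cues:
    speaker: str | None = None      # who is talking, inferred from spatial audio
    addressee: str | None = None    # who they address, inferred from visual focus of attention
    keywords: tuple[str, ...] = ()  # salient words spotted in the speech

def choose_camera(cues: Cues) -> str:
    """Pick a shot the way a human director might: a two-shot of an
    exchange, a tighter shot of a lone speaker, the wide shot otherwise."""
    if cues.speaker is None:
        return "wide"  # nobody talking: show the whole room
    if cues.addressee is not None:
        # prefer a camera that frames both sides of the exchange
        for cam, coverage in CAMERA_COVERAGE.items():
            if cam != "wide" and {cues.speaker, cues.addressee} <= coverage:
                return cam
    # otherwise, any non-wide camera that frames the speaker
    for cam, coverage in CAMERA_COVERAGE.items():
        if cam != "wide" and cues.speaker in coverage:
            return cam
    return "wide"

# Usage: alice addresses bob, so the left-hand two-shot is selected.
print(choose_camera(Cues(speaker="alice", addressee="bob")))  # -> cam_left
```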

Developing software that can carry out such processes is an extremely complex and demanding task, and NIM has five Computer Science researchers dedicated to it: Manolis Falelakis, Pedro Torres, Spiros Michalakopoulos, Notis Gasparis and Vilmos Zsombori. They are exploring ways in which knowledge can be expressed and manipulated by computers, and at the same time ways in which this can be done fast and reliably enough to mediate live communication.

More than just a chat
For anyone wondering why board game designers Ravensburger were listed as a corporate partner of TA2, things are about to become clear. “TA2 is not just about having a cup of tea and a chat,” explains Marian. “We thought that when people get together, they normally have activities to engage with, such as sharing pictures or playing games.”

Indeed, the experimental trials saw participants split between two locations – the NIM Lab in the Richard Hoggart Building and the Goldsmiths Digital Studios in the Ben Pimlott Building – with the groups battling it out in a game of Pictionary. Each team was made up of two people, one in each room, with the opposition trying to distract the person communicating the picture to their teammate.

There were three sessions for each group, each 30 minutes long: one with a fixed front-facing camera only, one with orchestrated video (the editing was done by humans for these trials), and one where the camera editing was random but respected the rhythm of the human editing. 39 Goldsmiths students participated in these trials over three days.

Orchestration trials produce a world-first
These were the first end-user trials ever conducted of orchestrated multi-camera video communication between social groups in separate locations.

Quantitative and qualitative measures were used to assess the experience. The quantitative measure – the number of accurate and inaccurate guesses, analysed in the context of the overall number of turns – brought noteworthy results: there were significantly more accurate guesses, and significantly fewer inaccurate guesses, in the orchestrated condition than in the other two. “This means the information flowed better with orchestration,” says Marian.
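The article does not report the raw counts or the exact test used, but a comparison of this shape could be run as, say, a chi-squared test on guess outcomes per condition. The numbers below are invented purely to show the mechanics:

```python
from scipy.stats import chi2_contingency

# Invented counts for illustration only – the article does not report
# the raw data or the statistical test actually used in the TA2 trials.
# Rows: conditions; columns: [accurate guesses, inaccurate guesses].
counts = [
    [42, 18],  # orchestrated editing (hypothetical)
    [30, 30],  # fixed front-facing camera (hypothetical)
    [28, 32],  # random editing (hypothetical)
]

chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")  # p < 0.05 suggests the conditions differ
```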

Marian was less enthusiastic, but still very optimistic, regarding the results from the qualitative measure – the Independent Television Commission Sense of Presence Inventory – a validated questionnaire assessing the subjective experience of participants, developed by the Department of Psychology’s i2 group. The questionnaire, although revealing no statistically significant differences between the three conditions, showed a slight preference among participants for the fixed-camera condition. Marian says: “It was a shame the orchestration didn’t come out as significantly better. As someone from the team put it: ‘Orchestration is good for you even if you don’t like it!’ And we are confident that with some immediate improvements to the technology we will get there in the next set of trials.”

Taking it to the next level
The NIM Group is confident that the groundbreaking work undertaken in the project provides a solid theoretical and empirical foundation for improving social relationships and connecting groups across different locations.

Indeed, TA2’s findings will provide the basis for the group’s next EU collaborative project, entitled ‘VConect’ – Video Communication for Networked Communities – set to get underway in December this year.

Marian sets the scene: “VConect builds upon two of the most significant achievements of the current Internet: video conferencing and social networks.

“It will make social networks as flexible and engaging as chatting face-to-face to a group of friends. It will allow us to see what’s really happening, to know who is hurting and who is laughing. It will allow us to see the real drama, let us be part of the most rowdy crowds or talk quietly to a lonely friend.”

The Survival Guide: Part T2 – The Goldsmith’s Nightmare Engine

The Goldsmiths Game Engine is an incredible piece of coursework in which the entire class unites to produce a software framework and a demo. This blog will be particularly useful to the Masters class of 2012-2013, as we (the class of 2011) were scratching our heads over what the previous year had done. If you’re still an undergraduate and you’ve read this far, then fear not! I’ve kept the details pretty general, and it should give you an insight into the level of difficulty a Goldsmiths Masters will throw at you.

Shameless Self-Promotion: I have a blog! It’s filled with ridiculously cool video game concepts and awesome work for M&C Saatchi. Check it out here!

Rumour: Crysis 3 is currently being developed on our student game engine

Continue reading The Survival Guide: Part T2 – The Goldsmith’s Nightmare Engine

The video, the cinemagraphs, the summer.

Summer. Wahey!

Time to pretend I didn’t do exams and that I don’t have results waiting. Apply to all the internships! I’ve been getting my creative hat out and playing around with a few new things. Cinemagraphs have been the talk of a lot of sites for the past few months. The idea is that you take a still image and add life to it with subtle movement. It’s important that the image itself holds up as a photograph and that the animation you apply is subtle, so it’s not just an animated gif. It takes some fiddling around in Photoshop to get some really nice masks, and the more time you take to prep the shot, the better it’ll look. Here are a couple of examples I did on my iPhone. Just rough and quick attempts, though I’m planning on getting out the tripod and Canon to really make some impressive ones!

Matt Huxley Cinemagraph
Matt Huxley Kettle Cinemagraph

I think they turned out pretty well. There’s something strangely mesmerising about them, and I love how the technique brings together photographic principles with some animation and photo-editing skill.
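For the curious, the masking workflow described above can also be approximated in code: keep a single still frame everywhere except a masked region, where the video plays, then export a looping GIF. The sketch below assumes imageio (with its video/ffmpeg support installed) and numpy, and the file names are placeholders:

```python
import imageio.v3 as iio
import numpy as np

# A rough code equivalent of the Photoshop masking workflow described above.
# File names are placeholders; reading the .mp4 assumes imageio's ffmpeg
# plugin is installed. Not a polished tool – just the core idea.
frames = iio.imread("clip.mp4")          # (n_frames, H, W, 3) video frames
mask = iio.imread("mask.png") > 127      # white pixels = region allowed to move
if mask.ndim == 3:
    mask = mask[..., 0]                  # collapse an RGB(A) mask to one channel

still = frames[0]                        # the 'photograph' everything freezes to
out = [np.where(mask[..., None], frame, still) for frame in frames]

# loop=0 makes the GIF repeat forever, which is what sells the effect;
# duration is the per-frame delay in milliseconds (Pillow GIF plugin).
iio.imwrite("cinemagraph.gif", np.stack(out), duration=50, loop=0)
```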

Also, I wrote previously about making a video for a presentation I was doing. Well that’s done, and presented, so I shall leave you with this. It’s somewhat tongue in cheek, as is apparent, but fun nonetheless. It’s very dubstep, it’s very over the top, and it was very fun to make. So, make of it what you will, I suppose:

Goldsmiths Course Survival Guide Part Un

This blog is for anyone thinking about joining the MSc in Computer Games and Entertainment who wants some insight into what they’re jumping into: some helpful resources, some of the mistakes I made and how you can avoid them, and finally, if (like me) you’re new to programming, how you can catch up and code like the best of them.
But first, introductions: Hello everyone, my name is Max Bye and I’m an alcoholic.
Eyes below for a picture.

Here I am asleep in your Computer Labs

Continue reading Goldsmiths Course Survival Guide Part Un

Creativity, independence and learning by doing.