Creative Computing

I wanted to kick off my writing about computing education by talking about what is really the big idea that we’ve had at Goldsmiths: Creative Computing. As a department we have worked very hard at developing an interdisciplinary approach to computing, but particularly a creative approach, which takes cues from disciplines such as Fine Art, Music and Design. This has happened throughout the department, but in terms of undergraduate teaching this started with our BSc Creative Computing programme, which has since developed into a suite of programmes: Music Computing, Games Programming and most recently Digital Arts Computing.

All of these programmes treat computer programming as a creative discipline. We teach our students the technical aspects of programming but also how to use it creatively to develop innovative artistic projects.

This idea really comes from two directions. The first is enabling artists to use computer programming as a powerful new medium for their work, one that enables them to create procedural, generative and interactive work without the constraints that pre-existing software brings. This is a really important part of our work, and it has resulted in some fantastic, innovative student work, but in this post I want to focus on the other direction: how can creative disciplines inform the way we teach computing, and computer programming in particular?

We’ve based how we teach creative computing as much on how art is taught as on how computing is traditionally taught. Of course we teach the basic programming structures and suggest basic practical exercises to get used to them, but we also encourage students to develop their own practice, driven by their own creative ideas. Students work on projects of their own devising, to relatively open briefs. These projects are assessed on their technical challenge, but also on their creative outcomes: the quality of execution of the end result and the innovation in the concept. Teaching in these courses primarily happens through feedback on the projects, both by teachers and by fellow students (peer feedback is going to be a topic of a future post), just as it would be in an art school “crit” session.

What do these art school methods bring to computing teaching? I would say there are lots of benefits. Some are really specific to doing artistic work with programming, but many can help inform any kind of computing teaching.

Creativity. Creative computing is designed around getting our students to be creative and to pay attention to the design aspects of their software. This is vital to artistic work, but it is also increasingly important in mainstream computing as it becomes more user-centric and design-focused in the era of the iPhone and the Web app. If nothing else, students who end up working in games or Web development, where mixed teams of coders and designers are the norm, will have experience of both types of work. This is really valuable because communication across those domains has traditionally been difficult.

Motivation. This links in to Mark Guzdial’s (let’s see if I can ever get through a computing ed post without mentioning him) idea of Media Computing, which is pretty close to Creative Computing in some ways. Traditional geeks might be intrinsically interested in programming, but most people are more interested in what you can do with it. Creative and artistic applications are simply more interesting to most people. I also think our approach adds something to media computing because we focus on independent creative work. Students work on their own projects, which they define and which they are (hopefully) passionate about. This makes them more engaged with the project. The more engaged they are, the more time they will spend programming, and the most important thing when learning programming is time spent programming.

Independence. The fact that students are working on projects of their own devising early on also means they have to be more independent. They are working on something that only they are doing and that won’t necessarily only require elements they learn in class. This means that they will have quite a lot to figure out themselves (with our support, of course). This encourages them to be more independent both in terms of defining projects and solving problems, but most importantly it encourages them to be independent in their learning. They have to go out and figure out things (libraries, techniques, sometimes even languages) that we haven’t taught them. That is probably the single most important skill we, as university teachers, can help students learn, particularly in computing, where technology changes so rapidly that graduates will have to constantly learn new things to keep up to date.


Mocap, Oscars and Apes

Last week I went to Human Interactive, where Rich Holleworth of the Imaginarium gave a great talk, an animated history of mocap, and we had a really good chat afterwards.

This was very much in my mind when Mark Bishop forwarded me an article: Should Oscar go to Andy Serkis or the computer that turned him into an ape?

It addresses some interesting issues, but there are also some real problems with the article that I wanted to respond to.

Firstly, the title. It plays on a typical trope of the human/computer divide, but this kind of performance is really not just about Andy Serkis and a computer. It is Andy Serkis and a big team of highly talented and creative animators, mocap technicians and programmers (not least of whom is Rich).

The article quotes Jeff Bridges as saying: “Actors will kind of be a thing of the past. We’ll be turned into combinations. A director will be able to say: ‘I want 60 per cent Clooney; give me 10 per cent Bridges and throw some Charles Bronson in there’.” I think this quote completely misses the point. What Andy Serkis has done is prove that acting is still central to good film making in the era of mocap and CG. The purpose of the CG was not to get 60% Clooney and 10% Bridges; it was to get 100% Caesar, with Serkis the person being (literally) hidden. Andy Serkis may be a bit of a star, but one that nobody can recognise, and no director hires him to “be Andy Serkis” as they might for other movie stars. It may be pointing to a new kind of acting that is much more focused on character, rather than on stars. That is by no means a bad thing.

Maybe what this is really about is calling into question the cult of the individual that pervades Hollywood (and is implicit in the Oscars). Maybe we can’t pinpoint the single “genius” behind Caesar in Planet of the Apes, as he was the creation of a large team, but that doesn’t mean it isn’t a good performance or a good film (I haven’t seen it, so I won’t comment on that). In fact, more traditional movie acting isn’t too far from that either. Jeff Bridges brings a lot to a performance, but it is also made by makeup, lighting, camera work, editing, and so on. The medium of film is inherently a collaborative enterprise; the work of The Imaginarium simply highlights this more.


ComputingEd – teaching programming

I’m starting a new sub-blog about Computing Education and teaching programming in particular. This will be a space for me to talk about my own practical experiences teaching programming and my research in the area (which is closely tied to my practice). It is also a place for me to discuss current research in the area and how it might be applied to the practical problems I see day to day. This will mostly be about university level teaching, as that is what I do, but that can’t really be separated from school level education (or continuing education, which might mean I have to mention our MOOC at some point).

In this first post I think I should start by name-checking two really important sources on computing education, particularly at university level. The first is ACM SIGCSE, the Special Interest Group on Computer Science Education. This is probably the most important international organisation in this area, and they run an annual conference which is one of my main sources of computing education research papers.

The other is one person: Mark Guzdial, and in particular his ComputingEd blog, which is really the best thing to read if you want to keep up to date on computing education research. Beyond the blog, Mark’s own research is really interesting, and I really like his approach of “media computing”, which bears some resemblance to our creative computing. He was recently a co-author of “Success in Introductory Programming: What Works?”, which generated a lot of interest here at Goldsmiths.

Mark Guzdial’s blog is so good that there is almost no point in me starting my own, but I will try to put forward what we can learn from Goldsmiths’ and my own approach. I think it is also interesting to address some of these problems from a UK perspective. Our university system is very different from the US system that Mark and other writers teach in. An important example is that our students are enrolled in a single subject from the very beginning, unlike the US system, where students can take modules in many subjects before choosing a major. That means that on the surface the problem of students dropping computing after the first programming class is less visible, but the problems for students who don’t pick up programming effectively are much greater, because it is harder for them to move away from computing (so our responsibility for supporting students is even greater!).


“Social” is the future of VR

There were many interesting things said at the Oculus Connect conference a few weeks ago (which I only saw remotely through the live stream), but the one that caught me was that both Michael Abrash and John Carmack said that “Social” was the future of VR.

Social, of course, can mean many things these days, and these statements are probably at least partially motivated by Facebook’s involvement in Oculus. However, as someone who has spent years working on simulating social interactions in virtual reality, I know that this is a really exciting area, and it is great to have industry leaders like Carmack and Abrash backing it up.

Very few people have actually experienced a face-to-face encounter with a life size virtual human in virtual reality, but those, like me, who have know that it is one of the most compelling experiences that virtual reality has to offer. If you get it right and the character’s body language responds to you (making eye contact, responding when you move closer), then it creates a sense of social connection which is quite unlike anything that is possible on a screen.
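As a toy illustration of the kind of responsive body language described above (this is not any published system of ours; the names and thresholds are all hypothetical), a minimal update loop might make a character hold eye contact, avert its gaze after a while, and step back when the user comes too close:

```python
import math

COMFORT_DISTANCE = 1.2   # metres: personal-space boundary (assumed value)
GAZE_HOLD_MAX = 3.0      # seconds of sustained eye contact before averting

def update_character(char_pos, user_pos, eye_contact_time, dt):
    """Return (gaze_target, step_back, new_eye_contact_time) for one frame."""
    dx = user_pos[0] - char_pos[0]
    dz = user_pos[1] - char_pos[1]
    distance = math.hypot(dx, dz)

    # Make eye contact by default, but avert after holding it too long,
    # mimicking natural gaze behaviour.
    if eye_contact_time < GAZE_HOLD_MAX:
        gaze_target = "user"
        eye_contact_time += dt
    else:
        gaze_target = "away"
        eye_contact_time = 0.0

    # Respond when the user invades the character's personal space.
    step_back = distance < COMFORT_DISTANCE
    return gaze_target, step_back, eye_contact_time
```

Even rules this crude, run every frame against tracked head position, are enough to give a first taste of the social responsiveness that makes these encounters compelling; real systems layer many more cues on top.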

It’s not something I have worked on for quite a while, but with the release of the Oculus Rift I’ve been bullied into revisiting some of my own work. I think it will be far more compelling now just because of the massive improvements in VR technology. Exciting times.

In the meantime, here are a couple of papers I published (with Xueni Pan and Mel Slater) some years ago in that area:


What is natural about “Natural User Interfaces”?

I’ve recently had a paper published in Mark Bishop and Andrew Martin’s excellent volume Contemporary Sensorimotor Theory. I thought I would post an extract in which I use sensorimotor theory to think through some of the issues raised by Donald Norman’s insightful critique Natural User Interfaces are not natural.

The type of interaction I have been describing [in the rest of the paper] has been marketed by Microsoft and others as “Natural User Interfaces”: interfaces that are claimed to be so natural that they do not need to be learned. The logic behind this phrase is that, because body movements come naturally to us, a body movement interface will be natural. This idea has been criticised by many people, most notably by Norman in his article Natural User Interfaces are not natural, in which he argues that bodily interfaces can suffer from many of the problems associated with traditional interfaces (such as the difficulty of remembering gestures) as well as new problems (the ephemerality of gestures and the lack of visual feedback). So is there value in the intuition that bodily interfaces are natural, and if so, what is that value and why is it often not seen in existing interfaces?

I would argue that there is a fundamental difference in the nature of bodily interfaces and traditional interfaces. Jacob et al. propose that a variety of new forms of interaction, including bodily interaction, are successful because they leverage a different set of our pre-existing skills from traditional GUIs. While a graphical user interface leverages our skills in manipulating external visual and symbolic representations, bodily interfaces leverage skills related to body and environmental awareness: the skills that enable us to move and act in the world. Similarly, Dourish proposes that we analyse interaction in terms of embodiment, which he defines as “the property of our engagement with the world that allows us to make it meaningful”. This leads him to define Embodied Interaction as “the creation, manipulation, and sharing of meaning through engaged interaction with artefacts”. While he applies this definition to both traditional and new forms of interaction, the nature of this engaged interaction is very different in bodily interfaces. Following Jacob, we could say that, in a successful bodily interface, this engaged interaction can be the same form of engagement we have with our bodies and environment in our daily lives, and we can therefore re-use the existing skills that enable us to engage with the world.

If we take a non-representational, sensorimotor view of perception and action, these skills are very different from the skills of a traditional interface, which involve the manipulation of representations. This view allows us to keep the intuition that bodily interfaces are different from graphical user interfaces and to explain what is meant by natural in the phrase “natural user interface” (the so-called natural skills are non-representational sensorimotor skills), while also allowing us to be critical of the claims made for bodily interfaces. Natural user interfaces, on this view, are only natural if they take account of the non-representational, sensorimotor nature of our body movement skills. Body movement interfaces that are just extensions of a symbolic, representational interface are merely a more physically tiring version of a GUI.

A good example of this is gestural interaction. A common implementation of this form of interface is to have a number of pre-defined gestures that can be mapped to actions in the interface. This is one of the types of interface that Norman criticises. When done badly, there is a fairly arbitrary mapping between a symbolic gesture and a symbolic action: users’ body movements are used as part of a representation-manipulation task. There is nothing wrong with this per se, but it does not live up to the hype of natural user interfaces and is not much different from a traditional GUI. In fact, as Norman notes, it can be worse, as users do not have a visual cue to remind them which gestures they should be performing. This makes it closer to a textual command-line interface, where users must remember obscure commands with no visual prompts. Gestural user interfaces do not have to be like this.

These problems can be avoided if we think of gestural interfaces as tapping sensorimotor skills, not representation manipulation skills. For example, the work of Bevilacqua et al. uses gesture to control music. In this work, gestures are tracked continuously rather than simply being recognised at the end of the gesture. This allows users to continuously control the production of sound throughout the time they are performing the gesture, rather than triggering a response at the end. This seemingly simple difference transforms the task from representation manipulation (producing a symbolic gesture and expecting a discrete response) to a tight sensorimotor loop in which the auditory feedback can influence movement, which in turn controls the audio. A more familiar example of this form of continuous feedback is the touch screen “pinch to zoom” gesture developed for the iPhone. In this gesture, an image resizes dynamically and continuously in response to the user’s fingers moving together and apart. This continuous feedback and interaction enables a sensorimotor loop that can leverage our real-world movement skills.
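To make the distinction concrete, here is a minimal Python sketch of continuous gesture following feeding a sound parameter. It is a toy nearest-frame matcher, not Bevilacqua et al.’s actual algorithm (which uses probabilistic models), and all the names and numbers are illustrative:

```python
# A recorded template gesture: five 2-D hand positions over time.
template = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]

def follow_gesture(template, live_frame):
    """Return (progress 0..1, distance) of the template frame closest
    to the current live frame -- a crude stand-in for gesture following."""
    best_i, best_d = 0, float("inf")
    for i, frame in enumerate(template):
        d = sum((a - b) ** 2 for a, b in zip(frame, live_frame))
        if d < best_d:
            best_i, best_d = i, d
    progress = best_i / (len(template) - 1)
    return progress, best_d

def control_sound(progress):
    """Continuously map gesture progress to, say, a filter cutoff in Hz."""
    return 200.0 + progress * 1800.0  # sweep 200 Hz -> 2000 Hz

# Each incoming frame updates the sound immediately -- a tight
# sensorimotor loop -- rather than waiting for the gesture to finish.
progress, _ = follow_gesture(template, (2.1, 0.0))
cutoff = control_sound(progress)  # mid-gesture gives a mid-sweep cutoff
```

The discrete alternative would call `control_sound` once, only after the whole gesture had been classified; the continuous version lets the sound shape the movement while it is still happening.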

A second feature of Bevilacqua et al.’s system is that it allows users to easily define their own gestures, and they do so by acting out those gestures while listening to the music to be controlled. I will come back to this feature in more detail later, but for now we can note that it means that gestures are not limited to a set of pre-defined symbolic gestures. Users can define movements that feel natural to them for controlling a particular type of music. What does “natural” mean in this context? Again, it means that the user already has a learnt sensorimotor mapping between the music and a movement (for example, a certain way of tapping the hands in response to a beat).

This is the full article:

Gillies, Marco and Kleinsmith, Andrea. 2014. Non-representational Interaction Design. In: Mark (J. M.) Bishop and Andrew Martin, eds. Contemporary Sensorimotor Theory, vol. 15. Switzerland: Springer International Publishing, pp. 201-208. ISBN 978-3-319-05106-2


Fluid Gesture Interaction Design

The GIDE gesture design interface

Fluid gesture interaction design: applications of continuous recognition for the design of modern gestural interfaces, Bruno Zamborlin’s new paper (which I helped on), is about to be published. You can access it on the Goldsmiths Repository:

The paper is based on Frédéric Bevilacqua’s Gesture Follower algorithm for continuous gesture recognition (which Bruno worked on), but in this work we looked carefully at the HCI of gesture interface design. If you want people to design good gesture interfaces, it isn’t enough to have good gesture recognition software; you need design tools that support them in doing so. In particular, you need to support them in tweaking parameters to get optimal performance and help them know what to do when things don’t work as expected. In this paper, Bruno showed how real-time visual and auditory feedback about the recognition process can help people design better interfaces more quickly.


Kinect can open up a new world of games customization

The Microsoft Kinect is the device that has promised to change the way we play games and interact with computers by making real-time motion tracking possible on commodity hardware, but its potential doesn’t stop there. We’ve been exploring how it can massively expand the way players can customise their games.

Customisation is a big part of modern gaming, particularly in Massively Multiplayer Online games, where players customise their avatars to develop an individual identity within the game and communicate that identity to other players. Up to now, customisation has mostly been about changing how characters look, but that is only one aspect of what makes a character unique. How a character moves is also very important. Even more fundamentally, we could customise how characters respond to events in the game, what game developers call Artificial Intelligence (AI). Customising these has traditionally involved complex animation and programming, skills that ordinary players don’t have. With Andrea Kleinsmith, I’ve been exploring how motion tracking devices like the Kinect can make customising animation and AI easy. Players can use their own movements to create the animations for their characters. AI is harder, but we’ve been looking at how to use machine learning to build AI customisation tools. Rather than having to program the AI, players can act out examples of behaviour using motion capture or a Kinect, and our machine learning algorithms can infer AI rules to control the character.
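As a rough illustration of the general idea (this is a sketch, not the method from our paper), even a simple nearest-neighbour learner can turn acted-out examples into a behaviour controller. The state features and action names here are invented:

```python
def learn_behaviour(examples):
    """examples: list of (state_vector, action) pairs, e.g. recorded while
    the player acts out responses via mocap. Returns a controller that
    picks the action from the most similar recorded situation."""
    def act(state):
        def dist(example):
            return sum((a - b) ** 2 for a, b in zip(example[0], state))
        return min(examples, key=dist)[1]
    return act

# Toy state: (won_point, opponent_distance). The player acted out two
# example responses; the learner generalises to nearby situations.
examples = [
    ((1.0, 5.0), "celebrate"),
    ((0.0, 5.0), "sulk"),
]
avatar_ai = learn_behaviour(examples)
action = avatar_ai((1.0, 4.0))  # won the point, so "celebrate"
```

The appeal of this style of "customising by doing" is that the player never sees a rule or a line of code: they just perform, and the learner interpolates between their performances.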

We’ve recently had a paper published in the International Journal of Human-Computer Studies that describes a study in which players could customise their avatars’ behaviour when they win or lose a point in a 3D version of the classic video game Pong. You can see it here:

Kleinsmith, Andrea and Gillies, Marco. 2013. Customizing by Doing for Responsive Video Game Characters. International Journal of Human-Computer Studies, 71(7), pp. 775-784. ISSN 1071-5819


Creative Programming for Digital Media & Mobile Apps

Or MOOCpocalypse Now.


After months of hard work and tight deadlines, our Massive Open Online Course, Creative Programming for Digital Media & Mobile Apps, will be launching on Coursera on Monday with over 70,000 students already enrolled. The course will:

teach you how to develop and apply programming skills to creative work. This is an important skill within the development of creative mobile applications, digital music and video games. It will teach the technical skills needed to write software that makes use of images, audio and graphics, and will concentrate on the application of these skills to creative projects. Additional resources will be provided for students with no programming background.

The course is being taught by Mick Grierson, Matthew Yee-King and me.

Thanks to everyone who has supported us, including Niklaas van Poortvliet of UCL Publications and Marketing Services (PAMS) for the epic amount of work he and his team have put in to produce the fantastic videos; Barney Grainger and Michael Kerrison of the University of London International Academy for all the help and support they have given throughout the creation of this course; and of course all the current and former Goldsmiths students who will be helping to support you on the forums: Vlad Voina, Tom Rushmore, Will Gallia and Joe Boston.





Goldsmiths Masterclass

Welcome to everyone attending our masterclasses at Goldsmiths this week.

You can see our schedule here:

Today and tomorrow I’ll be running the computing masterclasses, which will look at some of the exciting new web technologies that have been developed in recent years. The main focus will be on HTML5. We will be using some of the HTML5 examples developed at Goldsmiths, which you can access here:

We will also introduce Processing, a great programming environment for rich media interactive websites, and the main teaching language we use in the first year at Goldsmiths. You can download Processing here:

and the documentation is here:

If you are interested, you can look at some examples. I will be showing this today:

and this is a great resource of Processing examples:



Bruno Latour’s challenges for CHI

Bruno Latour just gave the closing keynote for CHI 2013 and he issued four challenges for HCI research. I thought I would get down some thoughts about them before I forget.

Before I talk about the challenges, I should try to describe his central theme. He was arguing against the division of sociology into two scales: the unconnected individual and the unindividualised collective. He argues instead that we should think in terms of overlapping and interconnecting “monads” (I won’t try to explain the term). He thinks that digital technology can help us analyse data without going to the two poles of the individual qualitative datum or collective, aggregate statistics.

This kind of aligns with my thought that interactive machine learning could help bridge this divide, by having human interaction that focuses on the detail of different aspects and items of data within the statistical analysis of machine learning (this is still very vague on my part, but I think there is something there, and maybe Bruno Latour does too).

Overall, Latour wants the CHI community to help break down the individual/collective polarity, but in particular he issued four challenges:

Getting rid of data

His first challenge was to help get rid of data from large data sets (presumably so that you are only left with “interesting” data in some sense). Given the rest of the talk, I interpret this not as wanting to focus on individual items but as wanting to pick out connected elements that are important, without either aggregating all of the data or removing them from their connections with the rest of the dataset. I can imagine a powerful tool that allows researchers to investigate small snippets of data while a statistical engine runs in the background, clustering or otherwise picking out connections between those snippets and the rest of the dataset.
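As a toy sketch of what such a background engine might do (purely my speculation, not anything Latour proposed), imagine ranking the rest of a dataset by similarity to the snippet a researcher is inspecting, surfacing its connections rather than either aggregating everything or isolating the item. The data and bag-of-words features here are invented:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def connections(snippet, dataset, top_k=2):
    """Rank other items in the dataset by similarity to the snippet
    under inspection, returning the top_k most connected."""
    scored = [(cosine(snippet, vec), key) for key, vec in dataset.items()]
    scored.sort(reverse=True)
    return [key for _, key in scored[:top_k]]

# Hypothetical snippets represented as tiny word-count vectors.
dataset = {
    "riot_tweet_a": (3, 0, 1),
    "riot_tweet_b": (2, 0, 1),
    "weather_post": (0, 4, 0),
}
related = connections((3, 0, 1), dataset)  # the two riot tweets rank first
```

A real tool would of course use far richer models, but the shape is the same: the human reads the snippet, the statistics keep it tied to the rest of the dataset.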

Capturing the inner narrativity of overlapping monads

I will have to think about this one, but it is something about bridging the gap between human narrative and statistical analysis. He referred to data journalism and how data is used in both interactive and narrative contexts, in things like the Guardian’s coverage of the London riots.

Visualizing heritage, process and genealogy

How do we visualise these temporal qualities without relying on static structure or losing connectedness? While answering questions, he stressed the importance of not falling back on unchanging structures but acknowledging the changing nature of monads and their connections. This seems to me to relate to another theme that came up quite a lot at CHI (from Bill Buxton to the NIME SIG): the need to have time as a first-class concept. This would make it possible to model the evolution of data without relying on static structures (maybe).

Replacing model building and emergent structure by highlighting differently overlapping monads

I guess that this would require a very dynamic analysis that makes it possible to apply many different and changing models to the data. I think that interactive machine learning could help a lot here, by using the human element to navigate different interpretations, learnt models and views of the data.


Nate Matias has a much more accurate write-up of the challenges. I’ll quote it below, but I’ll just warn that I think he’s not quite right about collective phenomena. He says that “Collective phenomena grow out of these collecting sites”, but this isn’t quite what Latour is saying; after all, collective phenomena don’t exist, so they can’t emerge. Latour is from a Science Studies background, so when he talks about collecting sites he is thinking about scientific data collection instruments (or their aggregations): microscopes, telescopes, mass spectrometers, semi-structured interviews or surveys. While we might think of a survey as collective and an interview as individual, they are both just methods of collecting data, which both observe and transform (“perform”) the data, thus creating a particular view of the world. There is no innate distinction between individual and collective phenomena in the world, just phenomena that are created by methods of data collection (or, more precisely, by their interaction with the world). This means that there isn’t a division into two (or more) scales, collective and individual, but a mass of different views of the world, each specific to its data collection method.

Anyway, that small criticism aside, well done to Nate for the otherwise excellent explanation. Here is his summary of the challenges:

Visual complexity produces opacity. Massive individualizing data produces beautiful, playful hairballs which show us nothing. How do we filter and focus data while still appreciating monads?

How can we capture the inner narrativity of overlapping monads? Latour shows us the “512 paths to the White House” visualization by Mike Bostock and Shan Carter. The other example is the Guardian’s Rumour tracker, following the 2011 London riots. The idea that quantitative is different from qualitative is an artifact of the history of social science and a fallacy arising from the distinction between the individual and the collective, he tells us.

How can we visualize heritage, process, and genealogies? Latour shows us a paper he worked on about “complex systems science” (I couldn’t find it). To be a monad is to establish connections, but time-series visualizations can focus on structure rather than connectedness (like the paper on Phylomemetic Patterns in Science Evolution by Chavalarias, Cointet et al.)

How can we replace models about emergent structures with models that highlight differentially overlapping monads? He shows us a hairball network diagram and talks about the difficulty of moving beyond the hairball to understand the overlapping monads.




Embodied Audio-Visual Interaction, Department of Computing, Goldsmiths, University of London