Better Than Life

The Better Than Life Logo

Coney, Showcaster and Goldsmiths have been experimenting with new ways to create live experiences online. Blending elements from theatre, gaming and TV, this is an exploration of a new type of live event which generates drama by giving audiences online and in the physical space the agency to influence the narrative world of the piece.

Throughout June 2014, we developed a series of 45-minute interactive theatre experiences designed for a small live audience and an unlimited number of people online simultaneously, in order to research the possibilities of this medium. Over the next few months we will be writing a report about our findings. If you would like to be notified when we present our results, please sign up to the Better Than Life mailing list below.

You can still explore this site to find more information about the project and story fragments from the Positive Vision Movement.

You can read a review of the piece by Andrew Haydon in the Guardian.

We are now analysing all of the data for the performances and looking forward to publishing the results of the research soon.

This project was supported by the Digital R&D Fund for the Arts – Nesta, Arts & Humanities Research Council and public funding by the National Lottery through Arts Council England.


What is natural about “Natural User Interfaces”?

I’ve recently had a paper published in Mark Bishop and Andrew Martin’s excellent volume Contemporary Sensorimotor Theory. I thought I would post an extract in which I use sensorimotor theory to think through some of the issues raised by Donald Norman’s insightful critique Natural User Interfaces are not natural.

The type of interaction I have been describing [in the rest of the paper] has been marketed by Microsoft and others as “Natural User Interfaces”: interfaces that are claimed to be so natural that they do not need to be learned. The logic behind this phrase is that, because body movements come naturally to us, a body movement interface will be natural. This idea has been criticised by many people, most notably by Norman in his article Natural User Interfaces are not natural, in which he argues that bodily interfaces can suffer from many of the problems associated with traditional interfaces (such as the difficulty of remembering gestures) as well as new problems (the ephemerality of gestures and the lack of visual feedback). So is there value in the intuition that bodily interfaces are natural, and if so, what is that value and why is it often not seen in existing interfaces?

I would argue that there is a fundamental difference in the nature of bodily interfaces and traditional interfaces. Jacob et al. propose that a variety of new forms of interaction, including bodily interaction, are successful because they leverage a different set of our pre-existing skills from traditional GUIs. While a graphical user interface leverages our skills in manipulating external visual and symbolic representations, bodily interfaces leverage skills related to body and environmental awareness: the skills that enable us to move and act in the world. Similarly, Dourish proposes that we analyse interaction in terms of embodiment, which he defines as “the property of our engagement with the world that allows us to make it meaningful”. This leads him to define Embodied Interaction as “the creation, manipulation, and sharing of meaning through engaged interaction with artefacts”. While he applies this definition to both traditional and new forms of interaction, the nature of this engaged interaction is very different in bodily interfaces. Following Jacob et al. we could say that, in a successful bodily interface, this engaged interaction can be the same form of engagement we have with our bodies and environment in our daily lives, and we can therefore re-use the existing skills that enable us to engage with the world.

If we take a non-representational, sensorimotor view of perception and action, these skills are very different from the skills of a traditional interface, which involve the manipulation of representations. This view allows us to keep the intuition that bodily interfaces are different from graphical user interfaces and to explain what is meant by natural in the phrase “natural user interface” (the so-called natural skills are non-representational sensorimotor skills), while also allowing us to be critical of the claims made for bodily interfaces. Natural user interfaces, on this view, are only natural if they take account of the non-representational, sensorimotor nature of our body movement skills. Body movement interfaces that are merely extensions of a symbolic, representational interface are just a more physically tiring version of a GUI.

A good example of this is gestural interaction. A common implementation of this form of interface is to have a number of pre-defined gestures that can be mapped to actions in the interface. This is one of the types of interface that Norman criticises. When done badly there is a fairly arbitrary mapping between a symbolic gesture and a symbolic action. Users’ body movements are used as part of a representation manipulation task. There is nothing wrong with this per se but it does not live up to the hype of natural user interfaces and is not much different from a traditional GUI. In fact, as Norman notes, it can be worse, as users do not have a visual cue to remind them which gestures they should be performing. This makes it closer to a textual command line interface where users must remember obscure commands with no visual prompts. Gestural user interfaces do not have to be like this.

These problems can be avoided if we think of gestural interfaces as tapping sensorimotor skills, not representation manipulation skills. For example, the work of Bevilacqua et al. uses gesture to control music. In this work, gestures are tracked continuously rather than simply being recognised at the end of the gesture. This allows users to continuously control the production of sound throughout the time they are performing the gesture, rather than triggering a response once the gesture is complete. This seemingly simple difference transforms the task from representation manipulation (producing a symbolic gesture and expecting a discrete response) to a tight sensorimotor loop in which the auditory feedback can influence movement, which in turn controls the audio. A more familiar example of this form of continuous feedback is the touch screen “pinch to zoom” gesture developed for the iPhone. In this gesture an image resizes dynamically and continuously in response to the user’s fingers moving together and apart. This continuous feedback and interaction enables a sensorimotor loop that can leverage our real world movement skills.
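Purely as an illustration of the difference (this is my own toy code, not anything from Bevilacqua et al. or Apple), here is a sketch in which a discrete recogniser labels a pinch only once it has finished, while a continuous version feeds the scale back on every frame. The frame format and function names are invented.

```python
# A toy contrast between discrete gesture triggering and continuous control,
# using pinch-to-zoom as the example. Frame format and names are illustrative.

def finger_distance(frame):
    (x1, y1), (x2, y2) = frame
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def discrete_pinch(frames):
    """Recognise the pinch only when it ends: one symbolic result, no feedback on the way."""
    return "zoom_in" if finger_distance(frames[-1]) > finger_distance(frames[0]) else "zoom_out"

def continuous_pinch(frames, on_scale):
    """Update the image scale on every frame, so what the user sees can guide
    how they keep moving their fingers: a sensorimotor loop."""
    start = finger_distance(frames[0])
    for frame in frames:
        on_scale(finger_distance(frame) / start)

# Two fingers moving apart over four frames.
frames = [((100, 200), (120, 200)),
          ((95, 200), (130, 200)),
          ((90, 200), (140, 200)),
          ((80, 200), (150, 200))]
print(discrete_pinch(frames))                                 # zoom_in, once, at the end
continuous_pinch(frames, lambda s: print(f"scale x{s:.2f}"))  # scale x1.00 ... x3.50
```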

A second feature of Bevilacqua et al.’s system is that it allows users to easily define their own gestures, and they do so by acting out those gestures while listening to the music to be controlled. I will come back to this feature in more detail later, but for now we can note that it means that gestures are not limited to a set of pre-defined symbolic gestures. Users can define movements that feel natural to them for controlling a particular type of music. What does “natural” mean in this context? Again, it means that the user already has a learnt sensorimotor mapping between the music and a movement (for example a certain way of tapping the hands in response to a beat).
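Again just to illustrate the idea, here is a hypothetical sketch of defining a gesture by doing it: the user performs a movement once while the target sound plays, and afterwards following the same path scrubs continuously through that sound. The data, names and the crude nearest-frame matching are all my own inventions, not the system’s.

```python
# Hypothetical sketch of defining a gesture by demonstration: the user performs
# a movement once while a sound plays; later, following the same movement
# scrubs continuously through that sound. Data and names are invented.

def record_demonstration(positions, sound_length_s):
    """Pair each recorded (x, y) frame with the moment in the sound it was performed at."""
    n = len(positions)
    return [(p, sound_length_s * i / (n - 1)) for i, p in enumerate(positions)]

def playback_time(demonstration, live_pos):
    """Map the live hand position to the sound time of the nearest recorded frame."""
    def dist_sq(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    _, t = min(demonstration, key=lambda frame: dist_sq(frame[0], live_pos))
    return t

# The user sweeps a hand left to right while a 2-second clip plays...
demo = record_demonstration([(0, 0), (10, 0), (20, 0), (30, 0)], sound_length_s=2.0)
# ...later, a hand position two thirds of the way along the path scrubs to ~1.3 s.
print(playback_time(demo, (21, 1)))
```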

This is the full article:

Gillies, Marco and Kleinsmith, Andrea. 2014. Non-representational Interaction Design. In: Mark (J. M.) Bishop and Andrew Martin, eds. Contemporary Sensorimotor Theory. 15 Switzerland: Springer International Publishing, pp. 201-208. ISBN 978-3-319-05106-2


Fluid Gesture Interaction Design

The GIDE gesture design interface

Fluid gesture interaction design: applications of continuous recognition for the design of modern gestural interfaces, Bruno Zamborlin’s new paper (which I helped with), is about to be published. You can access it on the Goldsmiths Repository:

http://eprints.gold.ac.uk/9619/

The paper is based on Frédéric Bevilacqua’s Gesture Following algorithm for continuous gesture recognition (which Bruno worked on), but in this work we looked carefully at the HCI of gesture interface design. If you want people to design good gesture interfaces it isn’t enough to have good gesture recognition software; you need design tools that support them in doing so. In particular you need to support them in tweaking parameters to get optimal performance and help them know what to do when things don’t work as expected. In this paper Bruno showed how real time visual and auditory feedback about the recognition process can help people design better interfaces more quickly.
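To give a flavour of what that feedback might look like (this is a drastic simplification I have written for illustration, not the Gesture Follower itself), imagine a follower that, on every incoming frame, reports how far through each recorded template it thinks the movement is and how well it matches. A design tool can display this live, so designers can see why recognition fails rather than just that it failed.

```python
# A much simplified follower (not the real Gesture Follower) that, on every
# incoming frame, reports how far through each recorded template the movement
# is and how large the accumulated matching error is. The point is the
# continuous per-template feedback, which a design tool can display live.

import math

class SimpleFollower:
    def __init__(self, templates):
        # templates: {name: [(x, y), ...]} recorded by demonstration
        self.templates = templates
        self.progress = {name: 0 for name in templates}
        self.error = {name: 0.0 for name in templates}

    def update(self, x, y):
        """Feed one live frame; return {name: (fraction_complete, accumulated_error)}."""
        report = {}
        for name, frames in self.templates.items():
            i = min(self.progress[name], len(frames) - 1)
            tx, ty = frames[i]
            self.error[name] += math.hypot(x - tx, y - ty)
            self.progress[name] = i + 1
            report[name] = (self.progress[name] / len(frames), round(self.error[name], 1))
        return report

follower = SimpleFollower({
    "circle": [(0, 10), (10, 0), (0, -10), (-10, 0)],
    "swipe":  [(0, 0), (10, 0), (20, 0), (30, 0)],
})
for frame in [(1, 9), (9, 1), (1, -9)]:
    print(follower.update(*frame))   # a designer would see this as a live display
```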


Kinect can open up a new world of games customization

The Microsoft Kinect is the device that has promised to change the way we play games and interact with computers by making real time motion tracking possible on commodity hardware, but its potential doesn’t stop there. We’ve been exploring how it can massively expand the way players can customise their games.

Customisation is a big part of modern gaming, particularly in Massively Multiplayer Online games, where players customise their avatars to develop an individual identity within the game and communicate that identity to other players. Up to now customisation has mostly been about changing how characters look, but that is only one aspect of what makes a character unique. How a character moves is also very important. Even more fundamentally, we could customise how characters respond to events in the game, what game developers call Artificial Intelligence. Up to now customising these would involve complex animation and programming, skills that ordinary players don’t have. With Andrea Kleinsmith, I’ve been exploring how motion tracking devices like the Kinect can make customising animation and AI easy. Players can use their own movements to make the animations for their characters. AI is harder, but we’ve been looking at how to use machine learning to build AI customisation tools. Rather than having to program the AI, players can act out examples of behaviour using motion capture or a Kinect, and our machine learning algorithms can infer AI rules to control the character.
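As a toy illustration of this “customising by doing” idea (this is not the algorithm from the paper; the features and clip names are invented), the player acts out responses to a few game situations and a simple nearest-neighbour rule then chooses one of those recorded clips for new situations.

```python
# A toy version of "customising by doing" (illustrative only, not the paper's
# method): the player acts out responses to a few game situations, and a
# nearest-neighbour rule picks one of those recorded clips for new situations.

def nearest_clip(examples, situation):
    """examples: list of (situation_features, clip_name) recorded from the player."""
    def dist_sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, clip = min(examples, key=lambda ex: dist_sq(ex[0], situation))
    return clip

# Invented features: (won_the_point, score_difference) captured when the player
# acted each clip in front of the Kinect.
examples = [
    ((1, +3), "fist_pump"),          # winning comfortably
    ((1, -2), "relieved_nod"),       # won the point but still behind
    ((0, -4), "slumped_shoulders"),  # losing badly
]
print(nearest_clip(examples, (1, +2)))   # -> fist_pump
print(nearest_clip(examples, (0, -3)))   # -> slumped_shoulders
```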

We’ve recently had a paper published in the International Journal of Human-Computer Studies that describes a study we did that allowed players to customise their avatars’ behaviour when they win or lose a point in a 3D version of the classic video game Pong. You can see it here:

Kleinsmith, Andrea and Gillies, Marco. 2013. Customizing by Doing for Responsive Video Game Characters. International Journal of Human-Computer Studies, 71(7), pp. 775-784. ISSN 1071-5819


Creative Programming for Digital Media & Mobile Apps

Or MOOCpocalypse Now.

 

After months of hard work and tight deadlines our Massive Open Online Course, Creative Programming for Digital Media & Mobile Apps, will be launching on Coursera on Monday with over 70,000 students already enrolled. The course will:

teach you how to develop and apply programming skills to creative work. This is an important skill within the development of creative mobile applications, digital music and video games. It will teach the technical skills needed to write software that makes use of images, audio and graphics, and will concentrate on the application of these skills to creative projects. Additional resources will be provided for students with no programming background.

The course is being taught by Mick Grierson, Matthew Yee-King and me.

Thanks to everyone who has supported us, including Niklaas van Poortvliet of UCL Publications and Marketing Services (PAMS) for the epic amount of work he and his team have put in to produce the fantastic videos; Barney Grainger and Michael Kerrison of the University of London International Academy for all the help and support they have given throughout the creation of this course; and of course all the current and former Goldsmiths students who will be helping support you on the forums: Vlad Voina, Tom Rushmore, Will Gallia and Joe Boston.


Goldsmiths Masterclass

Welcome to everyone attending our masterclasses at Goldsmiths this week.

You can see our schedule here:

http://www.gold.ac.uk/goldclasses/

Today and tomorrow I’ll be running the computing masterclasses and will be looking at some of the exciting new web technologies that have been developed in recent years. The main focus will be on HTML5. We will be using some of the HTML5 examples developed at Goldsmiths that you can access here:

http://doc.gold.ac.uk/~mus02mg/HTML5/

We will also introduce Processing, a great programming environment for rich media interactive web sites, and the main teaching language we use in first year at Goldsmiths. You can download Processing here:

http://processing.org/download/

and the documentation is here:

http://processing.org/reference/

If you are interested you can look at some examples.  I will be showing this today:

http://doc.gold.ac.uk/~mus02mg/HTML5/sonicPainter/

http://doc.gold.ac.uk/~mus02mg/SonicPainter.zip

and this is a great resource of Processing examples:

http://www.openprocessing.org/


Bruno Latour’s challenges for CHI

Bruno Latour just gave the closing keynote for CHI 2013 and he issued four challenges for HCI research. I thought I would get down some thoughts about them before I forget.

Before I talk about the challenges, I should try to describe his central theme. He was arguing against the division of sociology into two scales: the unconnected individual and the unindividualised collective. He instead argues that we should think in terms of overlapping and interconnecting “monads” (I won’t try to explain the term). He thinks that digital technology can help us to analyse data without going to the two poles of individual qualitative datum or collective, aggregate statistics.

This kind of aligns with my thoughts that interactive machine learning could help to bridge this divide by having human interaction that focuses on the detail of different aspects and items of data within the statistical analysis of machine learning (this is still very vague on my part, but I think there is something there and maybe Bruno Latour does too).

Overall, Latour wants the CHI community to help break down the individual/collective polarity, but in particular he issued four challenges:

Getting rid of data

His first challenge was to help get rid of data from large data sets (presumably so you are only left with “interesting” data in some sense). Given the rest of his talk I interpret this not as wanting to focus on individual items but as wanting to pick out connected elements that are important, without either aggregating all of the data or removing them from their connections with the rest of the dataset. I can imagine that there could be a powerful tool that allows researchers to investigate small snippets of data while a statistical engine runs in the background, clustering or otherwise picking out connections between those snippets and the rest of the dataset.
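To make that speculation a little more concrete, here is a deliberately tiny sketch of what such a tool might do behind the scenes. Everything in it, from the snippet names to the features and the nearest-neighbour lookup standing in for the “statistical engine”, is invented for illustration.

```python
# A speculative sketch of such a tool: the researcher reads one snippet in
# detail while a background analysis (here just a nearest-neighbour lookup over
# toy 2-D features) keeps it connected to the rest of the dataset. Everything
# here, from the snippet names to the features, is invented for illustration.

import math

snippets = {
    "tweet_017": (0.9, 0.1),
    "tweet_042": (0.8, 0.2),
    "tweet_113": (0.1, 0.9),
    "tweet_200": (0.2, 0.8),
    "tweet_305": (0.85, 0.15),
}

def neighbours(focus, k=2):
    """Return the k snippets most closely connected to the one being read."""
    fx, fy = snippets[focus]
    others = [(math.hypot(x - fx, y - fy), name)
              for name, (x, y) in snippets.items() if name != focus]
    return [name for _, name in sorted(others)[:k]]

# The researcher focuses on tweet_017; the background engine surfaces the
# snippets it is most connected to, rather than either aggregating everything
# or isolating the single item.
print(neighbours("tweet_017"))   # -> ['tweet_305', 'tweet_042']
```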

Capturing the inner narrativity of overlapping monads

I will have to think about this one, but it is something about bridging the gap between human narrative and statistical analysis. He referred to data journalism and how data is used in both interactive and narrative contexts in things like the Guardian’s coverage of the London riots.

Visualizing heritage, process and genealogy

How to visualise these temporal qualities without relying on static structures or losing connectedness. While answering questions he stressed the importance of not falling back on unchanging structures but acknowledging the changing nature of monads and their connections. This seems to me to relate to another theme that came up quite a lot at CHI (from Bill Buxton to the NIME SIG): the need to have time as a first class concept. This would make it possible to model the evolution of data without relying on static structures (maybe).

Replacing model building and emergent structure by highlighting differently overlapping monads

I guess that this would require a very dynamic analysis that made it possible to apply many different and changing models to data. I think that interactive machine learning could help a lot here by using the human element to navigate different interpretations, learnt models and views of the data.

EDIT:

Nate Matias has a much more accurate write-up of the challenges. I’ll quote them below, but I’ll just warn that I think he’s not quite right about collective phenomena. He says that “Collective phenomena grow out of these collecting sites”, but this isn’t quite what Latour is saying; after all, collective phenomena don’t exist, so they can’t emerge. Latour is from a Science Studies background, so when he talks about collecting sites he is thinking about scientific data collection instruments (or their aggregations) like microscopes, telescopes, mass spectrometers, semi-structured interviews or surveys. While we might think of a survey as collective and an interview as individual, they are both just methods of collecting data which both observe and transform (“perform”) the data, thus creating a particular view on the world. There is no innate distinction between individual and collective phenomena in the world, just phenomena that are created by methods of data collection (or more precisely by their interaction with the world). This means that there isn’t a division into two (or more) scales, collective and individual, but a mass of different views of the world, each specific to its data collection method.

Anyway, that small criticism aside, well done to Nate for the otherwise excellent explanation. Here is his summary of the challenges:

Visual complexity produces opacity. Massive individualizing data produces beautiful, playful hairballs which show us nothing. How do we filter and focus data while still appreciating monads?

How can we capture the inner narrativity of overlapping monads? Latour shows us the “512 paths to the White House” visualization by Mike Bostock and Shan Carter, and the Guardian’s Rumour tracker, following the 2011 London riots. The idea that quantitative is different from qualitative is an artifact of the history of social science and a fallacy arising from the distinction between the individual and the collective, he tells us.

How can we visualize heritage, process, and genealogies? Latour shows us a paper he worked on about “complex systems science” (I couldn’t find it). To be a monad is to establish connections, but time-series visualizations can focus on structure rather than connectedness (like the paper on Phylomemetic Patterns in Science Evolution by Chavalarias, Cointet et al.).

How can we replace models about emergent structures with models that highlight differentially overlapping monads? He shows us a hairball network diagram and talks about the difficulty of moving beyond the hairball to understand the overlapping monads.


EAVI at CHI

The Embodied Audio-Visual Interaction group, and Goldsmiths Computing generally, is going to be out in full force this year at CHI 2013 in Paris. See you all there (I’m arriving Monday).

Here is a list of our presentations:

Caramiaux, Tanaka  – Beyond Recognition (Alt.CHI)  Wed 01/5 9:00am

http://chi2013.acm.org/program/by-day/wednesday/

Hazelden, Yee King, D’Inverno – WeCurate (Work in Progress) Wed 01/5?

http://chi2013.acm.org/program/by-venues/works-in-progress/

Kiefer, Grierson  – Squeezable interface (Interactivity Explorations)

http://chi2013.acm.org/program/by-venues/interactivity/

Seipp, Devlin – One-handed Website (Interactivity Research)

http://chi2013.acm.org/program/by-venues/interactivity/

Pachet, Roy, d’Inverno – Reflexive Loopers (Note) Wed 01/5 9:00am

http://chi2013.acm.org/program/best-of-chi/

Bevilacqua, Tanaka, et al – SIG NIME (SIG)  Wed 01/5 2pm

http://chi2013.acm.org/program/by-venues/special-interest-groups/


Mogees at TEDx Brussels

Bruno Zamborlin shows off Mogees at TEDx Brussels.

(Also featuring Steph Horak from Goldsmiths)


Actors teach game characters the subtleties of body language

A nice Wired article by Liat Clark about our project:

http://www.wired.co.uk/news/archive/2012-08/15/goldsmiths-motion-behaviour
