Category Archives: Games

Emotional and Functional Challenge in Core and Avant-garde Games

I’m watching Tom Cole give his talk “Emotional and Functional Challenge in Core and Avant-garde Games” at CHI Play 2015. He looked at game reviews to understand the difference between mainstream games and more “avant-garde” games.

You can read the abstract here and get the full paper below:

Digital games are a wide, diverse and fast developing art form, and it is important to analyse games that are pushing the medium forward to see what design lessons can be learned. However, there are no established criteria to determine which games show these more progressive qualities. Grounded theory methodology was used to analyse language used in games reviews by critics of both ‘core gamer’ titles and those titles with more avant-garde properties. This showed there were two kinds of challenge being discussed — emotional and functional which appear to be, at least partially, mutually exclusive. Reviews of ‘core’ and ‘avant-garde’ games had different measures of purchase value, primary emotions, and modalities of language used to discuss the role of audiovisual qualities. Emotional challenge, ambiguity and solitude are suggested as useful devices for eliciting emotion from the player and for use in developing more ‘avant-garde’ games, as well as providing a basis for further lines of inquiry.

Emotional and Functional Challenge in Core and Avant-garde Games

Cole, Tom, Cairns, Paul and Gillies, Marco. 2015. ‘Emotional and Functional Challenge in Core and Avant-garde Games’. In: CHI Play 2015. London, United Kingdom.

Performance Driven Expressive Virtual Characters

Creating believable, expressive interactive characters is one of the great, and largely unsolved, technical challenges of interactive media. Human-like characters appear throughout interactive media, virtual worlds and games and are vital to the social and narrative aspects of these media, but they rarely have the psychological depth of expression found in other media. This proposal is for research into a new approach to creating interactive characters, which identifies the central problem of current methods as the fact that creating a character’s interactive behaviour, or Artificial Intelligence (AI), is still primarily a programming task, and therefore in the hands of people with a technical rather than an artistic training. Our hypothesis is that an actor’s artistic understanding of human behaviour will bring an individuality, subtlety and nuance to the character that would be difficult to create in hand-authored models. This will help interactive media represent more nuanced social interaction, thus broadening their range of application.
The proposed research will use information from an actor’s performance to determine the parameters of a character’s behaviour software. We will use Motion Capture to record an actor interacting with another person. The recorded data will be used as input to a machine learning algorithm that will infer the parameters of a behavioural control model for the character. This model will then be used to control a real time animated character in interaction with a person. The interaction will be a full body interaction involving motion tracking of posture and/or gestures, and voice input.
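As a rough illustration of this pipeline (a minimal sketch only, not the project’s actual system: the tracked features, behavioural states and use of an off-the-shelf classifier are all assumptions made for the example), the learning step amounts to fitting a model that maps the partner’s tracked input to the actor’s observed behaviour, which can then select the character’s behaviour at run time:

```python
# Minimal illustrative sketch: recorded motion-capture features of the
# interaction partner are used to fit a simple model that predicts the
# actor's behavioural state, which could then drive a real-time character.
# The features, states and classifier are assumptions for this example.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for per-frame features tracked from the interaction partner
# (e.g. head velocity, hand height, speech energy).
partner_features = rng.normal(size=(500, 3))

# Stand-in for the actor's annotated behavioural state on the same frames
# (e.g. 0 = idle, 1 = nod, 2 = gesture).
actor_state = rng.integers(0, 3, size=500)

# Learning step: infer a mapping from observed input to behavioural state.
controller = DecisionTreeClassifier(max_depth=4)
controller.fit(partner_features, actor_state)

# Real-time step: each new frame of tracked input selects the character's
# next behavioural state, which would be handed to the animation system.
def choose_state(frame_features):
    return int(controller.predict(frame_features.reshape(1, -1))[0])

print(choose_state(rng.normal(size=3)))
```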

This project is funded by the EPSRC under the first grant theme (project number EP/H02977X/1).

Gillies, Marco. 2009. ‘Learning Finite State Machine Controllers from Motion Capture Data’. IEEE Transactions on Computational Intelligence and AI in Games, 1(1), pp. 63-72. ISSN 1943-068X

Gillies, Marco, Pan, Xueni, Slater, Mel and Shawe-Taylor, John. 2008. ‘Responsive Listening Behavior’. Computer Animation and Virtual Worlds, 19(5), pp. 579-589. ISSN 1546-4261

Kinect can open up a new world of games customization

The Microsoft Kinect is the device that has promised to change the way we play games and interact with computers by making real-time motion tracking possible on commodity hardware, but its potential doesn’t stop there. We’ve been exploring how it can massively expand the way players can customise their games.

Customisation is a big part of modern gaming, particularly in Massively Multiplayer Online games, where players customise their avatars to develop an individual identity within the game and communicate that identity to other players. Up to now, customisation has mostly been about changing how characters look, but that is only one aspect of what makes a character unique. How a character moves is also very important. Even more fundamentally, we could customise how characters respond to events in the game, what game developers call Artificial Intelligence (AI). Up to now, customising these would involve complex animation and programming, skills that ordinary players don’t have.

With Andrea Kleinsmith, I’ve been exploring how motion tracking like the Kinect can make customising animation and AI easy. Players can use their own movements to make the animations for the characters. AI is harder, but we’ve been looking at how to use machine learning to build AI customisation tools. Rather than having to program the AI, players can act out examples of behaviour using motion capture or a Kinect, and our machine learning algorithms can infer AI rules to control the character.
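To make the “act out examples” idea concrete, here is a toy sketch (not the published system; the game-situation features and clip names are invented for illustration) in which each demonstrated reaction is stored with the situation it responded to, and a simple nearest-neighbour rule picks a reaction for new situations:

```python
# Toy sketch of customisation by demonstration: each recorded reaction is
# stored with features of the game situation it responded to, and new
# situations trigger the nearest demonstrated reaction. Feature choices and
# clip names are invented for the example.

import numpy as np

# Demonstrations: (game-situation features, name of the recorded motion clip).
# Features here might be [won_point, score_difference, rally_length].
demos = [
    (np.array([1.0,  3.0, 10.0]), "big_celebration"),
    (np.array([1.0,  0.0,  2.0]), "small_nod"),
    (np.array([0.0, -2.0,  8.0]), "slump"),
]

def pick_reaction(situation):
    """Return the demonstrated clip recorded in the most similar situation."""
    distances = [np.linalg.norm(situation - features) for features, _ in demos]
    return demos[int(np.argmin(distances))][1]

# During play: losing a point while well behind plays the "slump" clip.
print(pick_reaction(np.array([0.0, -3.0, 5.0])))
```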

We’ve recently had a paper published in the International Journal of Human-Computer Studies that describes a study we did that allowed players to customise their avatars’ behaviour when they win or lose a point in a 3D version of the classic video game Pong. You can see it here:

Kleinsmith, Andrea and Gillies, Marco. 2013. ‘Customizing by Doing for Responsive Video Game Characters’. International Journal of Human-Computer Studies, 71(7), pp. 775-784. ISSN 1071-5819