4th International Joint Workshop on
Computational Creativity
Goldsmiths, University of London
17-19 June 2007
home people programme proceedings location
timetable keynote show and tell papers panel
 

Timetable

Paper presentations are 20 minutes of talk plus 5 minutes of questions; there will be a plenary discussion at the end of each session.

Sun 17 June
08:45 Breakfast (tea, coffee, juice, pastries)
09:30 Welcome & Keynote
11:00 Break/Posters
11:30 Paper Session 1: Creativity in Narrative
13:00 Lunch/Posters
14:00 Show and Tell 1
16:00 Break/Posters
16:30 Paper Session 2: Analogy & Language
18:00 Exploring New Cross

Mon 18 June
08:45 Breakfast (tea, coffee, juice, pastries)
09:30 Paper Session 3: Musical Creativity
11:30 Break/Posters
12:00 Paper Session 4: Applied Creative Systems
13:00 Lunch/Posters
14:00 Show and Tell 2
16:00 Break/Posters
16:30 Panel Session
18:00 Break
18:30 Workshop Dinner (Staff Dining Room)
19:45 After-dinner entertainment
20:30 Exploring New Cross

Tue 19 June
08:45 Breakfast (tea, coffee, juice, pastries)
09:30 Paper Session 5: Frameworks for Creativity
11:00 Break/Posters
11:30 Paper Session 6: Frameworks for Creativity
12:30 Workshop Close


Keynote

Text representation of music: from word processing to rule-based composition/improvisation

Bernard Bel

The Bol Processor project originated in 1980 as a word processor facilitating the transcription of the quasi-onomatopoeic syllables used as an oral notation system for Indian drumming. It grew into an expert system (BP1) mimicking the ability to compose variations on a musical theme or to assess their acceptability. Pattern grammars (a subset of type-2 formal grammars) proved appropriate for modelling the musical system under study. A stochastic learning device was implemented to infer weights from sets of examples accepted by the grammar, with the effect of enhancing the aesthetic quality of productions. Nonetheless, fieldwork revealed limitations inherent in the expert-system approach when it comes to modelling sophisticated human improvisation skills.
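The weight-inference step can be sketched in miniature. The grammar, syllables and learning rule below are invented for illustration; they are not the actual BP1 pattern grammar or algorithm:

```python
import random

# Toy weighted rewriting grammar over drum syllables ("bols").
# Rules and syllables are hypothetical, not taken from the Bol Processor.
rules = {
    "S": [["A", "B"], ["B", "A"]],
    "A": [["dha", "ti", "ge"], ["dha", "ge", "na"]],
    "B": [["na", "ti", "na"], ["tin", "na", "ge"]],
}
weights = {nt: [1.0] * len(bodies) for nt, bodies in rules.items()}

def derive(symbol, used, rng):
    """Expand `symbol`, recording which rule fired for each nonterminal."""
    if symbol not in rules:
        return [symbol]                      # terminal syllable
    i = rng.choices(range(len(rules[symbol])), weights=weights[symbol])[0]
    used.append((symbol, i))
    out = []
    for s in rules[symbol][i]:
        out.extend(derive(s, used, rng))
    return out

def learn(accepted, trials=200, bonus=0.2, seed=0):
    """Raise the weight of every rule used in a derivation the judge accepts,
    so later generations favour the approved variations."""
    rng = random.Random(seed)
    for _ in range(trials):
        used = []
        seq = derive("S", used, rng)
        if accepted(seq):
            for nt, i in used:
                weights[nt][i] += bonus

# Toy "aesthetic" judge: accept only variations opening with the stroke "dha".
learn(lambda seq: seq[0] == "dha")
print(weights["S"])   # the weight of S -> A B grows; S -> B A stays at 1.0
```

Generation then samples rules in proportion to their learned weights, so accepted variations become more probable over time.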

In 1989 a numeric-symbolic learning device (QAVAID) was implemented in Prolog II for inferring grammars from examples. However, it was never used in fieldwork because it ran too slowly on the portable computers of that time.

The next implementation of Bol Processor (BP2) addressed the issue of music composition and improvisation in the MIDI and Csound environments of electronic music. A new challenge was to deal with superimposed sequences of events (polyphony) within the framework of text-oriented rewriting systems. This was achieved by means of a polymetric representation. Minimal descriptions of polyphonic/polyrhythmic structures may be "expanded" by the system to produce arbitrarily complex musical scores. This representation makes it possible to produce sophisticated time-patterns from information comprehensively embedded in compositional rules, thereby maintaining the consistency of interpretation. This is a major discovery for computer music, as "human-like" phrasing is no longer achieved through randomness or "interpretation rules".
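The expansion of a minimal polymetric description can be illustrated with a toy sketch. The list-of-voices input format here is invented for the example, not Bol Processor syntax: two superimposed voices of unequal length are stretched onto a common pulse grid whose length is the least common multiple of the voice lengths.

```python
from math import lcm  # Python 3.9+

# Hypothetical sketch of polymetric expansion: superimposed voices of
# lengths 3 and 2 are mapped onto a 6-pulse grid so that both span the
# same duration, each symbol lasting a whole number of pulses.
def expand(voices):
    n = lcm(*[len(v) for v in voices])       # common number of pulses
    grid = []
    for v in voices:
        span = n // len(v)                   # pulses per symbol in this voice
        grid.append([v[i // span] for i in range(n)])
    return grid

print(expand([["a", "b", "c"], ["d", "e"]]))
# -> [['a', 'a', 'b', 'b', 'c', 'c'], ['d', 'd', 'd', 'e', 'e', 'e']]
```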

Producing the actual performance requires additional information, which the Bol Processor encapsulates in the metrical/topological properties of "sound-object prototypes". A time-setting algorithm modifies sound-objects to take account of physical timing and adjacent sound-objects, much as human speakers modify the articulatory properties of speech sounds according to speaking rate and the influence of adjacent segments (coarticulation).

Many composers and music teachers support the Bol Processor approach because of its underlying paradigm of text representation, i.e. "composing with pen and paper". It found its way long before the invention of markup languages, at a time when only graphical interfaces were expected to capture the sophistication of compositional processes.

BP2 is currently implemented for MacOS 9 and MacOS X. The project has been open-sourced on SourceForge at http://sourceforge.net/projects/bolprocessor/ with the help of Anthony Kozar.

About the speaker
Bernard Bel is a computer scientist with a background in electronics. In 1979 he started collaborating with anthropologists, musicologists and musicians on a scientific study of North Indian melodic and rhythmic systems. In 1981 he built the first accurate real-time melodic movement analyser (MMA) for the analysis of raga music. In 1986 he joined the French National Centre for Scientific Research (CNRS) in Marseille to continue research on the rule-based modelling of training methods in traditional Indian drumming. He studied artificial intelligence under Alain Colmerauer and graduated with a PhD in theoretical computer science in 1990. Between 1994 and 1998, Bel was seconded to the Centre de Sciences Humaines (CSH, New Delhi) to carry out projects in the fields of computational musicology and social-cultural anthropology. He shifted his focus to "innovative" music forms: different ways of associating musical experience with information technology, and questioning the usual modernity/tradition dichotomy outside Western urban culture. In 1998 he joined the Laboratoire Parole et Langage (CNRS, Aix-en-Provence) as a member of a team specialised in speech prosody and formal representations of language. Together with colleagues at CNRS he created the Speech Prosody Special Interest Group (SproSIG) under the banner of the International Speech Communication Association (ISCA).
References

BEL, B.; KIPPEN, J. The identification and modelling of a "percussion" language, and the emergence of musical concepts in a machine-learning experimental set-up. Computers and the Humanities, vol. 23, no. 3. 1989, p. 199-214. http://halshs.ccsd.cnrs.fr/halshs-00004505

KIPPEN, J.; BEL, B. Can a computer help resolve the problem of ethnographic description? Anthropological Quarterly, vol. 62, no. 3. 1989, p. 131-144.

BEL, B. Pattern grammars in formal representations of musical structures. Proceedings of International Joint Conference on Artificial Intelligence, 2nd Workshop on AI and Music (11: August 1989: Detroit, MI). 1989, p. 118-146.
http://www.lpl.univ-aix.fr/~fulltext/186.pdf

BEL, B. Acquisition et représentation de connaissances en musique. Thèse de Doctorat en Sciences. Université Aix-Marseille III, 1990.
http://tel.ccsd.cnrs.fr/documents/archives0/00/00/96/92/index_fr.html

BEL, B. Time and musical structures. Interface, Journal of New Music Research, vol. 19, no. 2-3. 1990, p. 107-135.
http://www.lpl.univ-aix.fr/~fulltext/214.pdf

KIPPEN, J.; BEL, B. Modelling music with grammars: formal language representation in the Bol Processor. In Alan Marsden; Anthony Pople, Computer Representations and Models in Music. London: Academic Press. 1992, p. 207-238.
http://halshs.ccsd.cnrs.fr/halshs-00004506

BEL, B. Time-setting of sound-objects: a constraint-satisfaction approach. International Workshop on Sonic Representation and Transform (1992 October 26-30: International School for Advanced Studies, Trieste, Italy).
http://www.lpl.univ-aix.fr/~fulltext/280.pdf

BEL, B.; KIPPEN, J. Bol Processor Grammars. In Mira Balaban; Otto Laske; Kemal Ebcioglu, Understanding Music with AI. Menlo Park, CA: AAAI Press. 1992, p. 366-400.

BEL, B. Symbolic and sonic representations of sound-object structures. In Mira Balaban; Otto Laske; Kemal Ebcioglu, Understanding Music with AI. Menlo Park, CA: AAAI Press. 1992, p. 64-110. (Revised on-line version: Two algorithms for the instantiation of structures of musical objects)
http://halshs.ccsd.cnrs.fr/halshs-00004504

BEL, B. Modelling improvisatory and compositional processes. Languages of Design, Formalisms for Word, Image and Sound, vol. 1, no. 1. 1992, p. 11-26.

BEL, B. Rationalizing Musical Time: Syntactic and Symbolic-Numeric Approaches. The Ratio Symposium (December 14-16, 1997: The Hague, The Netherlands). In BARLOW, Clarence (ed.) The Ratio Book. Cologne, Germany: Feedback Papers. 2001, p. 86-101.
http://www.lpl.univ-aix.fr/~fulltext/1119.pdf

KIPPEN, J.; BEL, B. Computers, Composition, and the Challenge of "New Music" in Modern India. Leonardo Music Journal, vol. 4. 1994, p. 79-84.
http://www.lpl.univ-aix.fr/~fulltext/386.pdf

BEL, B. A symbolic-numeric approach to quantization in music. Proceedings of Brazilian Symposium on Computer Music (3: 1996 August 5-7: Recife, Brazil).
http://www.lpl.univ-aix.fr/~fulltext/538.pdf

BEL, B. A flexible environment for music composition in non-European contexts. Actes, Journées d'Informatique Musicale (1996 May 16-17: Caen, France). 1996.
http://recherche.ircam.fr/equipes/repmus/jim96/actes/Bel/Bel.html

BEL, B. Migrating musical concepts: An overview of the Bol Processor. Computer Music Journal, vol. 22, no. 2. 1998, p. 56-64.

BEL, B. Bol Processor - an overview. Proceedings of Symposium "Virtual Gamelan Graz: Rules - Grammars - Modeling" (2006 October 27-28: Graz, Austria) [Forthcoming].
http://www.lpl.univ-aix.fr/~belbernard/music/BolProcessorOverview/BolProcessorOverview.pdf

BEL, B. The Bol Processor project: musicological and technical issues. Seminar of the Music, Informatics and Cognition research group, University of Edinburgh. (2006 October 31: Edinburgh, UK).
http://www.lpl.univ-aix.fr/~belbernard/music/BolProcessorOverview/


Show and Tell

The Show and Tell sessions will be short, practical demos of creative systems at work. Each demo will be strictly limited to 15 minutes. The demos are as follows.

Day 1, Slot 1: Penousal Machado. In this demo we will give a brief overview of our work in the field of evolutionary art. Participants will have the opportunity to work with the evolutionary art program NEvAr, guiding evolution in order to create images that suit their personal tastes. Concurrently, NEvAr will evolve images autonomously, using its own aesthetic judgments. Images and videos resulting from interactive and autonomous evolution will also be presented.
Day 1, Slot 2: Marian Ursu. We define ShapeShifting programmes as interactive and reconfigurable moving-image productions that adapt their content, on the fly, to suit the preferences of the viewers or engagers. They are automatically edited at the time of delivery. We have developed a paradigm, a computational model and an accompanying software system for the creation and delivery of ShapeShifting Screen Media programmes. These are all generic: genre- and production-independent. They employ AI techniques including logic programming, ontologies, symbolic representations, normative statements and constraint-satisfaction heuristics. ShapeShifting Media empowers authors to engage in a new form of creative expression.
Day 1, Slot 3: Hugo Ricardo Gonçalo Oliveira. In this session we will briefly discuss Tra-la-Lyrics, a system currently under development that is capable of generating Portuguese lyrics for given melodies. Some lyrics will be generated and analysed, with particular emphasis on rhythm, rhymes, repetition, sentence construction and meaning. The session is related to the accepted paper entitled "Tra-la-Lyrics: An approach to generate text based on rhythm".
Day 1, Slot 4: Miki Shaw. The film joined with this poster for the Show & Tell event shows the evolution of a protein structure mapped into FormGrow space, traversing 20 nodes in an extrapolated phylogenetic tree covering in excess of 50 million years (back and forth in time). The film shows a highly original representation of DNA on its journey from the human liver to the (eye) lens, initially backtracking through mice, worms and yeast towards its common ancestor and then moving forward to its current present-day form. The animated form interpolates between each node on the tree with surprising results. DNA is used both to generate the forms and to produce the soundtrack. The film is an extension of the work done by William Latham and Stephen Todd in the late 1980s and early 1990s, but this time connected to genomics and proteomics. The film crosses the dividing line between scientific visualisation of DNA data and aesthetically pleasing art.
Day 1, Slot 5: Celso Miguel de Melo. This demo presents a model which aims at effective and aesthetic creative expression in virtual humans. The model supports cognitive emotion synthesis and bodily, environment and screen expression. Bodily expression consists of psycholinguistics-based gesticulation and facial expression. Environment expression is based on the manipulation of lights, shadows and camera. Screen expression explores compositing and filtering from the visual arts to manipulate the virtual human's pixels themselves. The emotion model is based on the Ortony, Clore and Collins (OCC) emotion theory. These are two components required for creative expression; however, at present, the creative process is not explicitly modelled. The demo shows the kind of creative expression which can be achieved using the model in the context of the expression of emotions through bodily, environment and screen expression.
Day 1, Slot 6: Simon Colton. I will demonstrate two recent pieces of visual-arts software that we have been working on: the Avera system by Marc Hull, which we have used for various evolutionary art projects, and The Painting Fool (see www.thepaintingfool.com), which aims to simulate creative aspects of the human painting process.
Day 2, Slot 1: Graeme Ritchie. The STANDUP interactive riddle generator allows the user to generate novel punning riddles, making choices via a colourful, user-friendly graphical user interface. The type of joke generated (and the algorithms used) is closely based on Binsted's JAPE program (1996). Possible user choices include a word to be included, a topic to be touched upon, or the type of joke. Speech output can be used to assist with the output messages, or as a "joke-telling" option. Many aspects of the system's behaviour can be varied through a Control Panel which allows the setting of options related to speech, menus, joke types, word familiarity, phonetic similarity, and other aspects. Fuller details are given in the paper at this workshop by Ritchie et al.
Day 2, Slot 2: Nick Collins. Infno is an algorithmic generator of electronic dance music. The program attempts to model the production of electropop and dance music styles with a closer union of parts than is typical in many previous algorithmic composition systems. Voices (including a percussion section, bass, chord and lead lines) are not created independently: the parts can cross-influence each other based on underlying harmonic ideas, rhythmic templates and already-generated lines. The eventual system posits the potential for any one part to influence, or in the extreme force the recalculation of, another, with both top-down and bottom-up information impacting compositional preference. In particular, dynamic programming is used to choose counter-melodies under cost constraints of register, harmonic template, existing voices and voice-leading heuristics. Now under continuous development, the version intended for demoing at CC07 will be a working prototype conveying the creative scope of this generative artwork.
Day 2, Slot 3: Thor Magnusson. Interfaces as Semiotic Machines. Musical software is not a neutral tool for musicians to express their desired music. It is a constraining and deterministic technology designed with an implicit musicology, i.e. ideas about how music should sound and how it should be made. The users of musical software often uncritically subscribe to the musical ideologies of commercial software, and the results are genres of music where one can hear the influence of a specific piece of software in the style of the genre. Musical tools can be placed on a continuum of limitation and freedom. At the free end of the continuum are the open-source audio patchers/languages (such as SuperCollider, Pure Data, Csound, etc.). Here the user has much more freedom, but is still limited by the way these environments are designed. The difference is that they have often been consciously designed with a view to not imposing a musical tradition in the language itself. In this Show and Tell session, I intend to discuss the dilemma of creating graphical user interfaces in open-source programming languages and the rationale behind the process of providing affordances and creating constraints versus keeping the expressive scope open. I will present my own environment, ixiQuarks (written in SuperCollider), which focusses on the question of immediate response and gestural latency in real-time performance with screen-based digital instruments. An important part of the research is on the semiotics of computer interfaces and the way interfaces serve as semiotic machines. A screenshot of the environment can be found here: http://www.ixi-software.net/thor/ixiQuarks.jpg
Day 2, Slot 4: Robert Keller. Impro-Visor is an interactive software tool designed to help jazz musicians create solos similar to those one might improvise. Impro-Visor includes capabilities for creating convincing melodies based on a probabilistic context-free grammar. On a sufficiently fast machine, complete choruses can be generated in real time.
Day 2, Slot 5: Elaine Chew and Alexandre François. MIMI: multi-modal interaction for musical improvisation (a collaborative project by Alexandre François, Elaine Chew and Dennis Thurmond; url: http://www-rcf.usc.edu/~mucoaco/MIMI). Mimi is a multi-modal interactive musical improvisation system that explores the potential and powerful impact of visual feedback in performer-machine interaction. Mimi is a performer-centric tool designed for use in performance and teaching. Its key and novel component is its visual interface, designed to provide the performer with instantaneous and continuous information on the state of the system. For human improvisation, in which context and planning are paramount, the relevant state of the system extends to the near future and recent past. The Mimi system, designed and implemented using the SAI framework, successfully integrates symbolic computations and real-time synchronization in a multi-modal interactive setting. Mimi's visual interface allows for a peculiar blend of the raw reflex typically associated with improvisation and the preparation and timing more closely affiliated with score-based reading. Mimi is not only an effective improvisation partner; it has also proven itself to be an invaluable platform through which to interrogate the mental models necessary for successful improvisation.
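The dynamic-programming choice of a counter-melody described in the Infno demo can be sketched as follows. The pitch sets, target register and cost terms here are invented for illustration and are not Infno's actual cost model:

```python
# For each beat, choose one pitch from the current harmonic template so as
# to minimise distance from a target register plus the size of melodic leaps.
def counter_melody(chord_tones, target=64, leap_weight=1.0):
    # chord_tones: one list of allowed MIDI pitches per beat.
    best = {p: (abs(p - target), [p]) for p in chord_tones[0]}
    for beat in chord_tones[1:]:
        nxt = {}
        for p in beat:
            cost, path = min(
                (c + abs(p - target) + leap_weight * abs(p - prev), pa)
                for prev, (c, pa) in best.items()
            )
            nxt[p] = (cost, path + [p])
        best = nxt
    return min(best.values())[1]             # cheapest complete line

# Toy harmonic template: C major, D minor, C major triads in mid register.
print(counter_melody([[60, 64, 67], [62, 65, 69], [60, 64, 67]]))
# -> [64, 65, 64]
```

Each beat keeps only the best path ending on each candidate pitch, so the search stays linear in the number of beats rather than exponential.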

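The probabilistic context-free grammar approach mentioned in the Impro-Visor demo can be illustrated with a toy sketch. The grammar and its note vocabulary are invented for this example and are not taken from Impro-Visor:

```python
import random

# Toy PCFG for melody fragments: nonterminal -> list of (probability, body).
pcfg = {
    "Phrase": [(0.5, ["Motif", "Motif"]), (0.5, ["Motif", "Rest", "Motif"])],
    "Motif":  [(0.6, ["C4", "E4", "G4"]), (0.4, ["D4", "F4", "A4"])],
}

def sample(symbol, rng):
    """Expand `symbol` by drawing a rule according to its probability."""
    if symbol not in pcfg:
        return [symbol]                      # terminal: a note or a rest
    r, acc = rng.random(), 0.0
    for prob, body in pcfg[symbol]:
        acc += prob
        if r <= acc:
            return [note for s in body for note in sample(s, rng)]
    return []                                # unreachable if probs sum to 1

melody = sample("Phrase", random.Random(42))
print(melody)                                # a six- or seven-symbol phrase
```

Because expansion is recursive, richer grammars with nested phrase structure generate longer, hierarchically organised lines from the same few lines of code.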
 

Accepted Papers and Posters

Paper session 1: Creativity in Narrative

A Computer Model that Generates Biography-like Narratives
   Samer Hassan, Pablo Gervás, Carlos León, Raquel Hervás, Universidad Complutense de Madrid, Spain
On the Fly Collaborative Story-Telling: Revising Contributions to Match a Shared Partial Story Line
   Pablo Gervás, Universidad Complutense de Madrid, Spain;
   Rafael Pérez y Pérez, Ricardo Sosa, Christian Lemaitre, Universidad Autónoma Metropolitana, Mexico
Narrative Inspiration: Using Case Based Problem Solving to Support Emergent Story Generation
   Ivo Swartjes, Joost Vromen, Niels Bloom, University of Twente, The Netherlands
 

Paper session 2: Analogy & Language

Evaluating Computer-Generated Analogies
   Diarmuid P. O'Donoghue, NUI Maynooth, Ireland
A Generative Grammar for Pre-Hispanic Production: The Case of El Tajín Style
   Manuel Álvarez Cos, Rafael Pérez y Pérez, Atocha Aliseda, National Autonomous University of Mexico
Tra-la-Lyrics: An approach to generate text based on rhythm
   Hugo Oliveira, Amílcar Cardoso, Francisco Câmara Pereira, University of Coimbra, Portugal
 

Paper session 3: Musical Creativity

A Hybrid System for Automatic Generation of Style-Specific Accompaniment
   Ching-Hua Chuan, Elaine Chew, University of Southern California, USA
On the Meaning of Life (in Artificial Life Approaches to Music)
   Oliver Bown, Geraint A. Wiggins, Goldsmiths, University of London, UK
Evaluating Cognitive Models of Musical Composition
   Marcus T. Pearce, Geraint A. Wiggins, Goldsmiths, University of London, UK
Systematic Evaluation and Improvement of Statistical Models of Harmony
   Raymond Whorley, Geraint A. Wiggins, Marcus T. Pearce, Goldsmiths, University of London, UK
 

Paper session 4: Applied Creative Systems

A practical application of computational humour
   Graeme Ritchie, University of Aberdeen, UK; Ruli Manurung, Helen Pain, University of Edinburgh, UK;
   Annalu Waller, Rolf Black, Dave O'Mara, University of Dundee, UK
Automatizing Two Creative Functions for Advertising
   Carlo Strapparava, Alessandro Valitutti, Oliviero Stock, ITC-irst, Italy
 

Paper session 5: Frameworks for Creativity

Algorithmic Information Theory and Novelty Generation
   Simon McGregor, University of Sussex, UK
How Thinking Inside the Box can become Thinking Outside the Box
   Chris Thornton, University of Sussex, UK
Minimal creativity, evaluation and fractal pattern discrimination
   Jon Bird, Dustin Stokes, University of Sussex, UK
 

Paper session 6: Frameworks for Creativity

Creative Ecosystems
   Jon McCormack, Monash University, Australia
Towards a General Framework for Program Generation in Creative Domains
   Marc Hull, Simon Colton, Imperial College London, UK
 

Posters

From DNA to 3D Organic Art Forms - FormGrow Revisited
   William Latham, Miki Shaw, Stephen Todd, Frédéric Fol Leymarie, Goldsmiths, University of London, UK
Towards Creative Visual Expression in Virtual Humans
   Celso de Melo, Ana Paiva, Technical University of Lisbon, Portugal
 

Panel session

Autonomy, Signature and Computational Creativity
A Conversation at Computational Creativity 2007

Convenors: Paul Brown, University of Sussex; Janis Jefferies, Goldsmiths, University of London

Conversants: Margaret Boden, University of Sussex; Jon McCormack, Monash University

This panel is one of the outcomes of the Computational Models of Creativity in the Arts (CMCA) Workshop held at Goldsmiths in May 2006. This included two days of invited presentations together with an evening event "Creative Cyborgs" curated by BLIP and held at the Dana Centre with support from the Computer Arts Society (CAS).

The workshop spawned a special issue of Digital Creativity (Vol. 18, No. 1, 2007) and an e-list, which can be joined at www.jiscmail.ac.uk/CMCA.

The workshop generated a strong dialogue concerning many aspects of AI and A-life in the arts, and this conversation intends to continue that dialogue, focussing on the concepts of Autonomy and Signature.

Margaret Boden is a philosopher and Research Professor of Cognitive Science at the University of Sussex. She is an authority on Artificial Intelligence, Creativity and Cognitive Science and has written extensively on AI and the arts. Her most recent publication is the two-volume Mind as Machine: A History of Cognitive Science, OUP, 2006.

Jon McCormack is an Electronic Media Artist, co-director of the Centre for Electronic Media Art (CEMA) and a Lecturer in the School of Computer Science and Software Engineering, Monash University, Melbourne, Australia. Impossible Nature - a book about his work was published by the Australian Centre for the Moving Image (ACMI) in 2004.

 