Goldsmiths College, University of London




C3 2005 - The Whitehead Lectures on Cognition, Computation & Creativity

The Whitehead Lectures are funded and organised by the Departments of Computing and Psychology at Goldsmiths College, University of London, with the aim of stimulating interest and debate in the area of cognition, computation and creativity. All are welcome to attend.

The meetings for the autumn term 2005 (October to December) are listed below. All seminars will be held in the Pimlott Lecture Theatre (Ben Pimlott Building), unless otherwise stated.

For directions to Goldsmiths College see: http://www.goldsmiths.ac.uk/find-us/

To be added to the seminar mailing list, please contact Mark Bishop by email: m.bishop@gold.ac.uk



Wednesday, 12th October, 16.00

The Body and Soul of Schizophrenia: Contributions from Neurology and Psychiatry

Dr. Jean Oury and Prof. Jacques Schotte
La Borde Clinic and U.C.L. Louvain.

Abstract:

Dr. Jean Oury is an eminent reformer of psychiatry. Now eighty-three years of age, he worked closely with Julian de Ajuriaguerra, the distinguished neurologist who pioneered work on the phantom limb and the cerebral cortex. Dr. Oury spearheaded the post-war reformation of the psychiatric institution and has developed the psychotherapy of schizophrenia for over forty years. He has always stressed the necessity of psychiatrists and psychoanalysts alike being grounded in neurology. At the clinic of La Borde, founded by Dr. Oury in 1953, more schizophrenic patients are currently being treated than in any other establishment in France. Dr. Oury's publications have been translated throughout Europe, the Americas and Asia. Professor Jacques Schotte, emeritus professor of U.C.L. Louvain, is a leading scholar on the histories of neurology and phenomenology. For almost fifty years, he has attracted wide acclaim for his scholarship on the psychopathology of Viktor von Weizsäcker. An anthology of key writings by Oury, Schotte and Weizsäcker is currently in preparation.


Wednesday, 19th October, 16.00

Could we build a conscious robot?

Prof. Owen Holland
Department of Computer Science, University of Essex, UK.
http://cswww.essex.ac.uk/staff/holland.htm

Abstract: In the last few years a new discipline has begun to emerge: machine consciousness. This talk will describe the background to this movement, and will present a line of thought showing how the problem of constructing a truly autonomous robot may also constitute an approach to building a conscious machine. The basis of the theory is that an intelligent robot will need to simulate both itself and its environment in order to make good decisions about actions, and that the nature and operation of the internal self model may well support some consciousness-related phenomena.

As part of an investigation into machine consciousness, we are currently developing a robot that we hope will acquire and use a self-model similar to our own. We believe that this requires a robot that does not merely fit within a human envelope, but one that is anthropomimetic - with a skeleton, muscles, tendons, eyeballs, etc. - a robot that will have to control itself using motor programs qualitatively similar to those of humans. The early indications are that such robots are very different from conventional humanoids; the many degrees of freedom and the presence of active and passive elasticity do produce strikingly lifelike movement, but the control problems may not be tractable using conventional robotic methods.

The project is limited to the construction and study of a single robot, and there are no plans for the robot to have any encounters with others of its kind, or with humans. Without any social dimension to its existence, and without language, could such a robot ever achieve a consciousness intelligible to us?

After training as a production engineer, Owen became interested in psychology, graduating from Nottingham University in 1969 and going on to teach experimental methods at Edinburgh University Psychology Department for three years. He then moved into commerce, and later back into engineering, working as Special Projects Manager for Dellfield Digital Ltd., a telecoms start-up (1983-87), and as Senior Production Engineer for Renishaw Metrology Ltd. (1987-1990). In 1988 Owen began to take an interest in behaviour-based robotics; in 1990, this work won him a Small Firms Merit Award for Research and Technology from the Department of Trade and Industry, and he then set up a consultancy company, Artificial Life Technologies. He worked on a variety of projects, notably the MARCUS prosthetic hand (a European Community TIDE project), before moving to the University of the West of England, Bristol (UWE) to help set up the Intelligent Autonomous Systems Engineering Laboratory in 1993. For 1993-94 Owen was a Visiting Research Fellow at the Zentrum für interdisziplinäre Forschung at the University of Bielefeld, Germany, and in 1997 was Visiting Associate in Electrical Engineering at Caltech, working in the Microsystems Laboratory. In 1998 he was appointed Reader in Electrical Engineering at UWE, and in 1999 he spent a year as Principal Research Scientist at the CyberLife Institute (now CyberLife Research) before returning to Caltech in 2000. Owen's next port of call was the legendary Starlab in Brussels, where he spent several months as Chief Scientist before joining Essex in October 2001.


Wednesday, 26th October, 16.00

Synthetic Performers with Embedded Audio Processing

Prof. Barry Vercoe
Professor of Media Arts & Sciences, MIT; Associate Academic Head & Founding Member, MIT Media Lab.
http://web.media.mit.edu/~bv/

Abstract: This talk will trace the development of advanced human-computer music interaction, from the author's first developments in Paris (IRCAM) in the 1980s to the world's first software-only professional audio system, released in Japan in 2002. Digital processing of audio changed in 1990 when it first became real-time on desktop machines. Human interaction, previously constrained to custom hardware, was suddenly possible on general-purpose machines, and the 1990s saw new experiments in gestural control over complex audio effects. The pace of development outpaced Moore's Law when cross-compilers allowed rapid prototyping of audio structures on DSPs using large amounts of processor power. An interactive music performance system using hand-held devices running real-time audio software will be demonstrated. The talk will also be illustrated by other examples of music research at the MIT Media Lab, including the Audio Spotlight, applications of cognitive audio processing, compositions from the Experimental Music Studio, the soundtrack from a recent Hollywood movie, and a new method of music recommendation on the Internet.

Barry Vercoe is Professor of Music and Professor of Media Arts and Sciences at MIT, and Associate Academic Head of the Program in Media Arts & Sciences. He was born in New Zealand, where he was educated in music and mathematics, then completed a doctorate in Music Composition at the University of Michigan. In 1968 at Princeton University he did pioneering work in the field of Digital Audio Processing, then taught briefly at Yale before joining the MIT faculty in 1971. In 1973 he established the MIT computer facility for Experimental Music -- an event now commemorated on a plaque in the Kendall Square subway station. During the '70s and early '80s he pioneered the composition of works combining computers and live instruments. Then, on a Guggenheim Fellowship in Paris in 1983, he developed a Synthetic Performer -- a computer that could listen to other performers and play its own part in musical sync, even learning from rehearsals. In 1992 he won the Computer World / Smithsonian Award in Media Arts and Entertainment, and recently gained the 2004 SEAMUS Lifetime Achievement Award. Professor Vercoe was a founding member of the MIT Media Laboratory in 1984, where he has pursued research in Music Cognition and Machine Understanding. His several Music Synthesis languages are used around the world, and a variant of his Csound and NetSound languages has recently been adopted as the core of MPEG-4 audio -- an international standard that enables efficient transmission of audio over the Internet. At the Media Lab he currently directs research in Machine Listening and Digital Audio Synthesis (Music, Mind and Machine group), and is Associate Academic Head of its graduate program in Media Arts and Sciences.


THURSDAY, 3rd November, 16.00

Visual Routes to Knowledge and Action

Melvyn A. Goodale
The University of Western Ontario, London, Ontario
http://www.ssc.uwo.ca/psychology/faculty/goodale/

Abstract: Visual systems first evolved not to enable animals to see, but to provide distal sensory control of their movements. Vision as 'sight' is a relative newcomer on the evolutionary landscape, but its emergence has enabled animals to carry out complex cognitive operations on representations of the world. In the more ancient visuomotor systems, there is a basic isomorphism between visual input and motor output. In representational vision, there are many cognitive 'buffers' between input and output. Thus, in this system, the relationship between what is on the retina and the behaviour of the organism cannot be understood without reference to other mental states, including those typically described as "conscious". The duplex nature of vision is reflected in the organization of the visual pathways in the primate cerebral cortex. The dorsal 'action' stream projecting from primary visual cortex to the posterior parietal cortex provides flexible control of more ancient subcortical visuomotor modules for the control of motor acts. The ventral 'perceptual' stream projecting from the primary visual cortex to the temporal lobe provides the rich and detailed representation of the world required for cognitive operations. This might sound rather like Cartesian dualism -- the existence of a conscious mind separate from a reflexive machine. But the division of labour between the two streams has nothing to do with the kind of dualism that Descartes proposed. Although the two kinds of visual processing are separate, both are embodied in the hardware of the brain. Moreover, there is a complex but seamless interaction between the ventral and the dorsal streams in the production of adaptive behaviour. The selection of appropriate goal objects depends on the perceptual machinery of the ventral stream, while the execution of a goal-directed action is mediated by dedicated on-line control systems in the dorsal stream and associated motor areas. Moreover, as I will argue, the integration of processing in the two streams goes well beyond this. The dorsal stream may allow us to reach out and grasp objects with exquisite ease, but it is trapped in the present. Evidence from the behaviour of both neurological patients and normal observers shows that, by itself, the dorsal stream can deal only with objects that are visible when the action is being programmed. The ventral stream, however, allows us to escape the present and bring to bear information from the past -- including information about the function of objects, their intrinsic properties, and their location with reference to other objects in the world. Ultimately then, both streams contribute to the production of goal-directed actions.

Professor Melvyn A. Goodale (Ph.D., F.R.S.C.) currently works with the Group on Action and Perception at The University of Western Ontario, London, Ontario.


Wednesday, 16th November, 16.00

Visual Illusions & Actions: A little less conscious perception, a little more action

Dr. Gregory DiGirolamo
Cambridge University, UK.
http://www.psychol.cam.ac.uk/pages/staffweb/digirolamo/

Abstract: Considerable debate surrounds the extent to which, and the manner in which, motor control is, like perception, susceptible to visual illusions. Using the Brentano version of the Müller-Lyer illusion, we measured the accuracy of voluntary (anti-saccadic eye movements and ballistic arm movements) and reflexive (pro-saccadic eye movements) actions to the endpoints of equal-length line segments that appeared different (Exps. 1 & 3) and different-length line segments that appeared equal (Exps. 2 & 4). From these data, I will argue that the representations underlying perception and action interact and influence even the most reflexive movements, with a stronger influence on movements that are consciously controlled.


Dr. DiGirolamo started his career in cognitive neuroscience as an undergraduate working with Stephen Kosslyn (at Harvard) doing PET and fMRI studies of visual mental imagery. He then went on to do his Ph.D. with Mike Posner (at the University of Oregon) studying visual attention, after which he did a brief (six-month) post-doc with Art Kramer & Gordon Logan (at the Beckman Institute at the University of Illinois), also studying visual attention. Dr. DiGirolamo then landed at Cambridge, where he has been for the past five years. He is a University Lecturer in the Department of Experimental Psychology and a fellow of medical sciences at Jesus College, Cambridge.


Wednesday, 23rd November, 16.00

Application of the Fisher-Rao metric to structure detection in images

Prof. Steve Maybank
School of Computer Science and Information Systems, Birkbeck, University of London, UK.

http://students.dcs.bbk.ac.uk/dept/staffperson05.asp?name=sjmaybank

Abstract: Many image structures in computer vision form parameterised families. For example, the set of all lines in an image forms a two dimensional family in which each line not containing the origin is specified uniquely by the coordinates of the point on the line nearest to the origin. In order to locate a particular image structure, measurements are made in an image and the structure most compatible with the measurements is found. The parameter space for the image structures can be given a metric which is derived from the error model for the measurements. Two structures are close together in this metric if they are hard to distinguish given a measurement. The metric is known in statistics as the Fisher-Rao metric. In most cases the Fisher-Rao metric cannot be found in closed form. However, if the noise level is low, then the Fisher-Rao metric can be approximated by the leading order term in an asymptotic expansion of the metric. In many cases of practical interest this leading order term can be obtained in closed form, or in terms of well known and easily computed functions. Examples of such cases include lines, ellipses and projective transformations of the line.

The main application of this approximation to the Fisher-Rao metric is that it gives, for the first time, an easily computed measure of the complexity of structure detection in images. This measure is equal to the volume of the parameter space under the Fisher-Rao metric divided by the volume of a region of the parameter space corresponding to a single distinguishable structure. If this ratio of volumes is large then structure detection is difficult, because there is a large number of distinguishable structures. If the ratio of volumes is small then structure detection is easy, because there is only a small number of distinguishable structures.
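
As a rough numerical illustration of this volume-ratio measure (a minimal sketch with assumed numbers, not code or values from the talk), the snippet below works through the simplest possible case: a one-parameter Gaussian location model x ~ N(theta, sigma^2) with theta in [0, L], where the Fisher information is the constant 1/sigma^2 and the Fisher-Rao volume of the parameter space is therefore L/sigma.

    import numpy as np

    # Toy illustration of the volume-ratio complexity measure.
    # Assumed model: x ~ N(theta, sigma^2), theta in [0, L].
    sigma = 0.05   # assumed measurement noise
    L = 1.0        # assumed extent of the parameter space

    # Fisher-Rao volume of the parameter space: integral of sqrt(det J(theta)),
    # computed numerically to mirror the general recipe.  For this model
    # J(theta) = 1/sigma^2, so the integral equals L / sigma.
    thetas = np.linspace(0.0, L, 10001)
    fisher_info = np.full_like(thetas, 1.0 / sigma**2)
    total_volume = np.trapz(np.sqrt(fisher_info), thetas)

    # Volume of the cell occupied by one distinguishable structure: treat
    # parameters within one unit of Fisher-Rao distance as indistinguishable
    # (an assumed threshold; any fixed threshold gives the same scaling).
    cell_volume = 1.0

    n_distinguishable = total_volume / cell_volume
    print(f"approx. {n_distinguishable:.0f} distinguishable structures")  # ~20

For a multi-parameter family such as lines parameterised by the point nearest the origin, the same recipe applies with sqrt(det J) integrated over the full parameter space; the contribution described in the abstract is a closed-form leading-order approximation to that metric in the low-noise regime.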


Steve Maybank is Professor in the School of Computer Science and Information Systems at Birkbeck College, University of London. He is also Visiting Professor at the Institute of Automation, Chinese Academy of Sciences and Member of the Academic Committee of State Key Laboratory for Image Processing and Intelligent Control, Huazhong University of Science and Technology, Wuhan, China. Steve is Editor for Computing and Informatics, Associate Editor for Acta Automatica Sinica and Member of the Editorial Board for the International Journal of Computer Vision.


Wednesday, 30th November, 16.00

Top-down processes in visual selection

Prof. Glyn Humphreys
School of Psychology, University of Birmingham, Birmingham, UK.
http://psg275.bham.ac.uk/bbs/humphreysg.htm

Abstract: Traditionally, several pieces of evidence have been used to argue for the primary role of bottom-up saliency in visual selection, including search asymmetries, visual grouping effects and pop-out effects. I will present recent evidence that, in each of these instances, processing can be modulated by top-down knowledge - either the 'template' of the target or particular items held in working memory. Neuropsychological studies of patients showing extinction further demonstrate that the match between bottom-up information and information held in working memory enables a stimulus to pass into conscious awareness.

Glyn Humphreys is Professor of Cognitive Psychology and currently Head of the School of Psychology at the University of Birmingham, UK.


Wednesday, 7th December, 16.00

Algebraic Semiotics, Ontologies, and Visualisation of Data in Animations

Dr. Grant Malcolm
University of Liverpool, Liverpool, UK.
http://www.csc.liv.ac.uk/~grant/

Abstract: Ferdinand de Saussure highlighted both the arbitrary nature of signs (why should the sound /kat/ be spelled "cat" or mean 'feline'?) and the systematic way in which signs, however arbitrary, function: signs convey meaning by (arbitrary) convention, yet they obtain their meaning by contrast with other signs. This talk will explore the ways in which the structure and systematicity of signs can be exploited in developing animations of algorithms.

Goguen has proposed algebraic semiotics as a way of capturing the systematic way that signs are organised and used to build higher-level signs. We apply algebraic semiotics to the study of user-interface design, and show how relationships between signs can indicate the effectiveness of a user interface. Then we view animations of algorithms as user interfaces, and relate their sign systems to ontologies describing the concepts underlying the algorithm. Dynamic aspects of animations can then be seen as relationships between signs and entities in the ontology, which can be further illuminated by narratology and the development of conceptual spaces.
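
As a loose illustration of this idea (a hypothetical sketch, not Goguen's formal definitions and not code from the talk), the snippet below treats a sign system as a vocabulary of sorts and constructors, and checks that an assumed mapping from an "algorithm state" sign system to a "bar-chart frame" sign system respects the sorts of its constructors, which is roughly the flavour of a morphism between sign systems.

    from dataclasses import dataclass

    # Minimal, hypothetical sketch: a sign system as sorts plus constructors,
    # and a mapping between two systems checked for sort-consistency.
    # Real algebraic semiotics also handles priorities, subsorts and partial
    # morphisms, all omitted here.

    @dataclass(frozen=True)
    class SignSystem:
        sorts: frozenset      # kinds of sign, e.g. 'State', 'Element'
        constructors: dict    # name -> (argument sorts, result sort)

    # Source: states of a sorting algorithm (the thing being animated).
    algorithm = SignSystem(
        sorts=frozenset({'State', 'Element'}),
        constructors={'state': (('Element',), 'State')},
    )

    # Target: frames of a bar-chart animation (the display).
    display = SignSystem(
        sorts=frozenset({'Frame', 'Bar'}),
        constructors={'frame': (('Bar',), 'Frame')},
    )

    sort_map = {'State': 'Frame', 'Element': 'Bar'}   # how sorts are depicted
    ctor_map = {'state': 'frame'}                     # how constructors map

    def respects_sorts(src, tgt, sort_map, ctor_map):
        """Every mapped constructor must land on a target constructor whose
        argument and result sorts are the images of the source's sorts."""
        for name, (args, result) in src.constructors.items():
            t_args, t_result = tgt.constructors[ctor_map[name]]
            if tuple(sort_map[a] for a in args) != t_args:
                return False
            if sort_map[result] != t_result:
                return False
        return True

    print(respects_sorts(algorithm, display, sort_map, ctor_map))  # True

In this toy reading, the dynamic aspects of the animation would correspond to how successive algorithm states are mapped to successive frames, which is where the abstract's link to ontologies and conceptual spaces comes in.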

Dr. Malcolm is a lecturer in Computer Science at the University of Liverpool. His research interests include algebraic semiotics, biologically motivated computing, ontologies and hidden algebra.

 

Department of Computing, Goldsmiths College, University of London, New Cross, London, SE14 6NW
Tel: +44 (0) 20 7919 7850 | Fax: +44 (0) 20 7919 7853 | Email: computing@gold.ac.uk

Department of Psychology, Whitehead Building, Goldsmiths College, University of London, New Cross, London, SE14 6NW
Tel: +44 (0) 20 7919 7870/7871 | Fax: +44 (0) 20 7919 7873 | Email: psychology@gold.ac.uk

Copyright © Goldsmiths College, 2004