
Mihalis A. Nicolaou

Mihalis A. Nicolaou is a Lecturer in Computer Science at Goldsmiths, University of London.

Previously, Mihalis was a postdoctoral Research Associate at Imperial College London (Department of Computing), where he also completed his PhD in 2014. Before that, he obtained his MSc from the same department, and his Ptychion (4-year BSc equivalent) from the Department of Informatics and Telecommunications at the University of Athens, Greece, in 2008.

Mihalis’ research interests span machine learning and computer vision, particularly as motivated by problems arising in the audio-visual analysis of affective behaviour under real-world conditions. His work revolves around probabilistic and robust methods, component analysis, predictive analysis, time-series analysis and alignment, as well as the discovery of deep (hierarchical) non-linear representations.

Publications

Journal Papers

Conference Papers

Book Chapters

Projects

Automatic Sentiment Analysis in the Wild
European Commission Horizon 2020 Programme SEWA

Automatic Sentiment Analysis in the Wild (SEWA) is an EC H2020-funded project. The main aim of SEWA is to deploy and capitalise on existing state-of-the-art methodologies, models and algorithms for machine analysis of facial, vocal and verbal behaviour, and then to adjust and combine them to realise naturalistic human-centric human-computer interaction (HCI) and computer-mediated face-to-face interaction (FF-HCI).

This will involve the development of computer vision, speech processing and machine learning tools for the automated understanding of human interactive behaviour in naturalistic contexts. The envisioned technology will be based on findings in the cognitive sciences and will represent a set of audio and visual spatiotemporal methods for the automatic analysis of human spontaneous (as opposed to posed and exaggerated) patterns of behavioural cues, including continuous and discrete analysis of sentiment, liking and empathy.

Telepresence Reinforcement-learning Social Agent

European Commission FP7 TERESA project

The TERESA project aims to develop a telepresence robot of unprecedented social intelligence, thereby helping to pave the way for the deployment of robots in settings such as homes, schools, and hospitals that require substantial human interaction. In telepresence systems, a human controller remotely interacts with people by guiding a remotely located robot, allowing the controller to be more physically present than with standard teleconferencing. We are developing a new telepresence system that frees the controller from low-level decisions regarding navigation and body pose in social settings. Instead, TERESA will have the social intelligence to perform these functions automatically. In particular, TERESA will semi-autonomously navigate among groups, maintain face-to-face contact during conversations, and display appropriate body-pose behaviour.

Achieving these goals requires advancing the state of the art in cognitive robotic systems. The project will not only generate new insights into socially normative robot behaviour, it will also produce new algorithms for interpreting social behaviour, navigating in human-inhabited environments, and controlling body poses in a socially intelligent way. The project culminates in the deployment of TERESA in an elderly day centre. Because such day centres are a primary social outlet, many people become isolated when they cannot travel to them, e.g., due to illness. TERESA will provide a socially intelligent telepresence system that enables them to continue social participation.

Fun Robotic Outdoor Guide

European Commission FP7 project FROG

FROG aspires to turn autonomous outdoor robots into viable location-based service providers. It will develop an outdoor guide robot as part of an emerging class of intelligent robot platforms.

Multimodal Analysis of Human Nonverbal Behaviour in Real-World Settings

European Research Council Starting Grant (FP7) MAHNOB
Project lifespan: 2008 – 2013

Existing tools for human interactive behaviour analysis typically handle only deliberately displayed, exaggerated expressions. Because they are usually trained only on series of such exaggerated expressions, they lack models of the human expressive behaviour found in real-world settings and cannot handle the subtle changes in audiovisual expression typical of such spontaneous behaviour.

The main aim of the MAHNOB project is to address this problem and to build automated tools for machine understanding of human interactive behaviour in naturalistic contexts. MAHNOB technology will represent a set of audiovisual spatiotemporal methods for the automatic analysis of human spontaneous (as opposed to posed and exaggerated) patterns of behavioural cues, including head pose, facial expression, visual focus of attention, hand and body movements, and vocal outbursts like laughter and yawns.

As a proof of concept, MAHNOB technology will be developed for two specific application areas: automatic analysis of mental states like fatigue and confusion in Human-Computer Interaction contexts and non-obtrusive deception detection in standard interview settings.

A team of five Research Assistants (RAs), led by the PI and with backgrounds in signal processing and machine learning, will develop MAHNOB technology. The expected result after five years is MAHNOB technology with the following capabilities:

  • analysis of human behaviour from facial expressions, hand and body movements, gaze, and non-linguistic vocalisations like speech rate and laughter
  • interpretation of user behaviour with respect to mental states, social signals, dialogue dynamics, and deceit/veracity
  • near real-time, robust, and adaptive processing by means of incremental processing, robust observation models, and learning person-specific behavioural patterns
  • provision of a large, annotated, online dataset of audiovisual recordings providing a basis for benchmarks for efforts in machine analysis of human behaviour.