
Matthew Yee-King

Matthew Yee-King gained a DPhil from the School of Informatics at Sussex University, where he investigated techniques for exploring the high-dimensional space of synthetic timbre.

Since then he has worked on several research projects, such as the PRAISE project, developing online, collaborative learning systems that have been used by many thousands of people. He has worked with the data resulting from the real-world deployment of these systems to address questions such as:

  • What impact can social, collaborative learning have on learning outcomes?
  • How can data be used to improve the design of online social learning systems?

Papers

Engineering Multiuser Museum Interactives, Engineering Applications of Artificial Intelligence, Elsevier, pp. 1-24 (2015)
Roberto Confalonieri, Matthew Yee-King, Katina Hazelden, Mark d’Inverno, Dave de Jonge, Nardine Osman, Carles Sierra, Leila Amgoud, Henri Prade

Multiuser Museum Interactives for Shared Cultural Experiences: An Agent-Based Approach, AAMAS 2013, Saint Paul, Minnesota, USA, pp. 917-924 (2013)
Matthew Yee-King, Roberto Confalonieri, Dave de Jonge, Katina Hazelden, Carles Sierra, Mark d’Inverno, Leila Amgoud, Nardine Osman

Social machines for education driven by feedback agents, in Proceedings of the First International Workshop on the Multiagent Foundations of Social Computing, AAMAS 2014, Paris, France, May 6, 2014
M. Yee-King, M. d’Inverno, P. Noriega

Designing educational social machines for effective feedback, 8th International Conference on e-Learning, Lisbon, Portugal, 15-18 July 2014
M. Yee-King, M. Krivenski, H. Brenton, A. Grimalt-Reynes, M. d’Inverno

Projects

PRAISE project: PRAISE is a social network for music education with tools for giving and receiving feedback. It aims to widen access to music education and to make learning music more social.

At its heart, PRAISE will provide a supportive, social environment using the latest techniques in social networking, online community building, intelligent personal agents, and audio and gesture analysis.

Any member can post audio to a community they belong to and request specific kinds of feedback on particular regions of that audio. Other community members can respond with text, or with audio of their own, to emphasize a particular point about style or performance, for example.
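
As a rough illustration only, the posting-and-feedback workflow described above could be modelled with a data structure along the following lines. Every name here is hypothetical and not taken from the PRAISE system itself; it is a minimal sketch of one way to represent region-specific feedback on an audio post.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Region:
    # A time span within a recording that feedback is requested on.
    start_seconds: float
    end_seconds: float


@dataclass
class FeedbackRequest:
    # What the poster wants to know about a given region,
    # e.g. "Is my timing steady in this passage?"
    region: Region
    prompt: str


@dataclass
class FeedbackResponse:
    # A reply from another community member: text, audio, or both.
    author: str
    text: Optional[str] = None
    audio_url: Optional[str] = None


@dataclass
class AudioPost:
    # An audio recording shared with one community, carrying its
    # feedback requests and any responses received so far.
    author: str
    community: str
    audio_url: str
    requests: List[FeedbackRequest] = field(default_factory=list)
    responses: List[FeedbackResponse] = field(default_factory=list)
```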