Katan, S., M. Grierson, and R. Fiebrink. Using interactive machine learning to support interface development through workshops with disabled people. Proceedings of ACM CHI, Seoul, South Korea, April 18–23, 2015.
Wolf, K. E., G. Gliner, and R. Fiebrink. A model for data-driven sonification using soundscapes. Proceedings of the ACM Conference on Intelligent User Interfaces (IUI), Atlanta, Georgia, March 29–April 1, 2015.
Hipke, K., M. Toomim, R. Fiebrink, and J. Fogarty. BeatBox: End-user interactive definition and training of recognizers for percussive vocalizations. Proceedings of the International Working Conference on Advanced Visual Interfaces (AVI), Como, Italy, May 27–30, 2014.
Laguna, C., and R. Fiebrink. Improving data-driven design and exploration of digital musical instruments. CHI 2014 Extended Abstracts, Toronto, Canada, April 26–May 1, 2014.
Fried, O., and R. Fiebrink. Cross-modal sound mapping using deep learning. Proceedings of New Interfaces for Musical Expression (NIME), Daejeon, South Korea, May 27–30, 2013.
Fiebrink, R., and D. Trueman. End-user machine learning in music composition and performance. Presented at the CHI 2012 Workshop on End-User Interactions with Intelligent and Autonomous Systems. Austin, Texas, May 6, 2012.
Morris, D., and R. Fiebrink. Using machine learning to support pedagogy in the arts. Personal and Ubiquitous Computing, April 2012.
Fiebrink, R., P. R. Cook, and D. Trueman. Human model evaluation in interactive supervised learning. Proceedings of ACM CHI, Vancouver, May 7–12, 2011.
I am a Co-I on the Horizon 2020-funded project Realtime Adaptive Prototyping for Industrial Design of Multimodal Interactive Expressive Technology (RAPID-MIX).
The RAPID-MIX consortium has devoted years of research to the design and evaluation of embodied, implicit, and wearable human-computer interfaces. These interfaces, developed and applied in creative fields such as music and video games, provide natural and intuitive pathways between human expressivity and technology.
RAPID-MIX will bring these innovations out of the lab and into the wild, directly to users, where they can have real impact, transferring cutting-edge knowledge from three leading European research labs specialising in embodied interaction to a consortium of five creative companies.
I am the author of the Wekinator software for real-time, interactive machine learning. The Wekinator facilitates the use of machine learning as a prototyping and design tool, enabling composers, musicians, game designers, and makers to create new gestural interactions or semantic analysis systems from demonstrated example data.
The Wekinator has been downloaded more than 3,000 times and used in dozens of computer music performances featuring new musical instruments built with machine learning.
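As an illustration of how the Wekinator fits into a working pipeline, the minimal Python sketch below streams feature vectors to a running Wekinator project and prints the model outputs it sends back. It assumes Wekinator's default OSC configuration (input features received on UDP port 6448 at /wek/inputs; model outputs sent to UDP port 12000 at /wek/outputs) and uses the third-party python-osc package; the random feature vectors stand in for live gesture data and are not part of the Wekinator itself.

import random
import threading
import time

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

# Client for streaming input feature vectors into a running Wekinator
# project (assumed default: UDP port 6448, OSC address /wek/inputs).
client = SimpleUDPClient("127.0.0.1", 6448)

def on_outputs(address, *values):
    # Wekinator sends one float per trained model output; in a live
    # setting these would drive synthesis parameters, game state, etc.
    print(address, values)

# Listen for Wekinator's output messages (assumed default: UDP port
# 12000, OSC address /wek/outputs) in a background thread.
dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_outputs)
server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Stream stand-in two-dimensional feature vectors; with a real
# controller these would be live gesture or audio features.
for _ in range(20):
    client.send_message("/wek/inputs", [random.random(), random.random()])
    time.sleep(0.05)

The same message flow supports the interactive workflow described above: a user records examples pairing incoming feature vectors with desired outputs, trains a model in the Wekinator GUI, and then runs the model on the live feature stream.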