
Body Tracking Technology

by: Joshua Hodge & Patricio Ordonez P

Introduction

Is it possible to create a complex set of instructions simultaneously from one single input?  Have methods been developed to help disabled people create music through the use of a single finger, the tongue, or even just the brain?

These are two of the many questions we have considered in our creative research.  In this investigation we have looked at a few of the many advances in the effort to use technology to aid people with physical or mental disabilities.

Finger Tracking Devices

When physicist Stephen Hawking lost the ability to speak in 1985, many looked for a solution that would allow him to communicate his ideas.  Words Plus, a software company based in California, provided the solution, coupled with a speech synthesizer from Speech Plus.

Using this device, Hawking was able to “speak” at a rate of around 15 words per minute by selecting words and spelling them out with a hand clicker and a customized Apple II computer mounted onto his wheelchair.

There have since been developments in the algorithm that allows Professor Hawking to communicate, now provided by the London-based company SwiftKey.  It includes a word prediction algorithm developed through analysis of Hawking’s past correspondence.  Now, once the professor has selected a word, a list of the words he most commonly uses next appears, allowing him to save time and increase his rate of communication [1].
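
To illustrate the idea (this is a minimal sketch, not SwiftKey’s actual algorithm), next-word prediction can be built from simple bigram counts over a body of past text.  The corpus, function names and output below are invented for illustration.

```python
# Minimal next-word prediction sketch using bigram counts.
# An illustration of the general idea, not SwiftKey's implementation.
from collections import Counter, defaultdict

def build_bigram_model(corpus):
    """Count which words follow each word in the corpus."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word, n=3):
    """Return the n words most often seen after `word`."""
    return [w for w, _ in model[word.lower()].most_common(n)]

# Hypothetical corpus standing in for a user's past correspondence.
corpus = "the universe is expanding and the universe is vast the theory is sound"
model = build_bigram_model(corpus)
print(predict_next(model, "the"))       # ['universe', 'theory']
print(predict_next(model, "universe"))  # ['is']
```

Once the user selects a word, the top-ranked candidates would be offered as the next selection, so a whole phrase can be produced with far fewer clicks than spelling it out letter by letter.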

Tongue Tracking Devices

Another advancement that researchers have developed is tracking technology for the tongue.  One leading method for this is the Tongue Drive System (TDS), co-developed by Professor Maysam Ghovanloo at the Georgia Institute of Technology.  In this technique, the subject’s tongue is pierced with a magnetic tongue stud that acts as a tracer; sensors on an accompanying headset track the stud’s movements, allowing the user to employ the position of his or her tongue as a controller and relay control signals to devices such as a wheelchair or computer.
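
As a rough sketch of how such a controller might work (the thresholds and command names below are invented for illustration, not taken from the TDS itself), a tracked tongue position can be mapped to a small set of discrete commands:

```python
# Illustrative mapping from a tracked tongue position to discrete commands.
# Assumes the sensing side already yields a normalised (x, y) estimate in the
# range -1..1; the dead zone and command names are invented for this sketch.
def tongue_position_to_command(x, y, dead_zone=0.2):
    if abs(x) < dead_zone and abs(y) < dead_zone:
        return "neutral"                      # tongue at rest: no command
    if abs(x) > abs(y):
        return "right" if x > 0 else "left"   # dominant horizontal movement
    return "forward" if y > 0 else "back"     # dominant vertical movement

# Example readings.
print(tongue_position_to_command(0.05, 0.1))   # neutral
print(tongue_position_to_command(0.8, 0.1))    # right
print(tongue_position_to_command(-0.1, -0.7))  # back
```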

This development appears to supersede an earlier, similar technology referred to as “sip and puff.”  This was accomplished with the use of a specialized straw, as developed by companies such as Origin Instruments, which allows the user to control a wheelchair as well as emulate mouse buttons [2].

Eye Tracking Devices

One of the leading developers of eye tracking technology is Tobii Technology, a Swedish company founded in 2001 by John Elvesjö, Mårten Skogö and Henrik Eskilsson.  Their products have been used in a variety of computing fields, including gaming, assistive technology, research solutions and technology integration.

Tobii Technology consists of three sub-divisions.  Tobii Dynavox combines monitoring of the user’s eye movements with touch screens, and is focused on assisting people with conditions such as cerebral palsy, muscular dystrophy, and spinal injury.  Tobii Pro is centered around the study of human behavior in psychology and neurology, and Tobii Tech develops eye tracking technology for use in games, cars and virtual reality.  This last area in particular is at an earlier stage of development, and the plan is continued collaboration with software developers to create new user experiences [3].

Brainwave Tracking Devices

Mind Midi is a system created by Aaron Thomen in 2013 which, with the use of an EEG machine and electrodes placed on the head, non-invasively receives signals from the brain, processing the data and turning it into MIDI.

The purpose of translating this data is to create music, as well as to explore alternative ways of producing sound signals in real time.  Different brainwaves can be programmed to trigger different sounds: for example, Delta and Theta waves controlling a piano sound, Alpha waves controlling the bass, and Beta and Gamma waves controlling synthesizers.  Further technical investigation is required to understand how the brainwave signals are programmed to affect the sound, as from the demonstration video it appears only to change the tempo of a pre-recorded arpeggio [4].
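
To make the band-to-instrument mapping concrete, here is a minimal sketch (assuming nothing about Mind Midi’s actual implementation) that estimates the power in each brainwave band from a window of EEG samples and emits one MIDI message per band.  The sampling rate, band edges, channel assignments and scaling are assumptions for illustration.

```python
# Sketch: map EEG band power to MIDI messages, one band per MIDI channel.
# Not Mind Midi's implementation; all constants here are illustrative.
import numpy as np
import mido  # pip install mido

SAMPLE_RATE = 256  # Hz, assumed EEG sampling rate

# Frequency bands (low Hz, high Hz, MIDI channel), mirroring the
# piano / bass / synth split described above.
BANDS = {
    "delta_theta": (0.5, 8.0, 0),    # piano on channel 0
    "alpha":       (8.0, 13.0, 1),   # bass on channel 1
    "beta_gamma":  (13.0, 45.0, 2),  # synths on channel 2
}

def band_power(samples, low, high):
    """Average spectral power of `samples` between `low` and `high` Hz."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    mask = (freqs >= low) & (freqs < high)
    return spectrum[mask].mean()

def eeg_window_to_midi(samples):
    """Turn one window of EEG samples into note-on messages, one per band."""
    messages = []
    for low, high, channel in BANDS.values():
        power = band_power(samples, low, high)
        # Map power to velocity (0-127); the scaling constant is arbitrary.
        velocity = int(np.clip(power / 10.0, 0, 127))
        messages.append(mido.Message("note_on", note=60,
                                     velocity=velocity, channel=channel))
    return messages

# One second of synthetic "EEG" (random noise stands in for a real recording).
window = np.random.randn(SAMPLE_RATE)
for msg in eeg_window_to_midi(window):
    print(msg)
```

In a real-time setting the same computation would run on successive short windows of the incoming EEG stream, with the resulting messages sent to a synthesizer rather than printed.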

In Summary

Our main aim in this investigation has been to gather information about how developers have created methods for producing a complex output from simple inputs.  Most technology requires, or is best used with, two hands.  What if the user has a disability that prevents this kind of use?

This research opens up the possibility of creating a system or interface that helps people with physical impairments express their musical or artistic ideas more effectively through a simpler or alternative input.

There are a number of considerations, mainly cost and time.  Would this implementation be primarily software based, or would it require external hardware?  The answers to these questions are not yet clear, but we hope that through further exploration and research the right methods will emerge.

References

[1] João Medeiros – Exclusive: Giving Stephen Hawking a Voice
[2] Maysam Ghovanloo – website
[3] Tobii Technology – website
[4] Aaron Thomen – Mind Midi website