Category Archives: Throwback Thursday

Throwback Thursday: Program analysis, schemas and slicing

In 2010, Sebastian Danicic, head of Goldsmiths’ BSc Computer Science, wrote this article for our website. Four years on, we reprint it here.


Software is prone to errors. Our research is fundamental to software analysis, in particular to the static analysis of computer programs. Such analysis is essential for ensuring that these errors are corrected safely.

The results of software errors may be extremely serious. According to Wikipedia, errors in the software controlling the Therac-25 radiation therapy machine, developed in the 1980s, were directly responsible for patient deaths. In 1996, the European Space Agency‘s Ariane 5 rocket was destroyed less than a minute after launch, owing to an error in the on-board guidance computer program. The cost of this error was estimated at $1 billion.

In 1994, an RAF Chinook helicopter crashed into the Mull of Kintyre, killing 29 people. An investigation provided sufficient evidence to convince a House of Lords inquiry that it may have been caused by a software error in the aircraft’s engine control computer. In 2002, a study commissioned by the US Department of Commerce National Institute of Standards and Technology concluded that software errors are so prevalent and so detrimental that they cost the US economy an estimated $59 billion annually, or about 0.6 percent of the gross domestic product.

Those involved in minimising these errors, including software engineers and others involved in the production of software, will benefit from our research through the static analysis tools built on its results. Errors in software are a consequence of human factors in the programming task. They arise from oversights or mutual misunderstandings made by a software team during specification, design, coding, data entry and documentation. Programs are written for a machine and as a result are not necessarily easy for a human to understand. On large software projects, many people have to co-operate, which means understanding each other’s code. As illustrated above, even small misunderstandings can lead to catastrophic results. Software first needs to be analysed in order to remove these errors; without such analysis, we are in danger of introducing new errors while removing the old ones. Changing programs can be very dangerous: it is very hard to know what impact altering even a single line of code can have.

This is one of the main tasks of static program analysis, so called because it analyses the program without actually executing it. (Analysis involving a program’s execution is known as dynamic analysis.) All methods for statically analysing programs are conservative. This means algorithms for performing such analyses will always include false positives; for example, non-existent dependencies between statements will be falsely highlighted. In some cases there may in fact be so many false positives in a particular approach as to render the analysis almost useless.
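
To make this concrete, here is a minimal illustration of our own (it is not from the original article) of how conservatism shows up as a false positive in dependence analysis:

```python
# A deliberately tiny false positive, assuming a typical dependence analysis
# that does not reason about arithmetic. The guard below is always False at
# run time, so `result` never actually depends on `secret`; a conservative
# analysis must nevertheless report that it might.

def example(secret: int) -> int:
    result = 0
    if secret * 0 == 1:       # dynamically always False
        result = secret       # dependency reported, but never exercised
    return result
```

Deciding whether an arbitrary guard can ever be true is undecidable in general, which is why some over-approximation of this kind can never be eliminated entirely.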

One aim of our research is to reduce the frequency of false positives significantly, thereby improving the accuracy of the analysis. Importantly, theory tells us that we cannot remove false positives altogether: all such approaches will be conservative. The question that we are interested in answering is, therefore, how far can we push the boundary? In other words, what are the theoretical limits of such analyses? Our previous work has already demonstrated that more accurate analysis is possible than current techniques allow. However, we do not yet fully understand where the theoretical boundaries lie.

There is also a practical side to our research. Once the theoretical limitations of such approaches are better understood, practical algorithms need to be developed and studied. It is important to determine whether efficient algorithms exist, and thus to study the complexity of the underlying problems. It is possible that at the theoretical limit the problems, though decidable, will in fact be intractable, i.e. too inefficient to be of practical use. Since static analysis of a program does not involve executing it, we can convert our programs to other forms which may be more amenable to analysis.

This is exactly the purpose of studying program schemas, the main topic of our research. The idea is to retain aspects of the program which lend themselves to static analysis and to abstract away all irrelevant details. The beauty of such an approach is that when we analyse programs in this way we are in fact not just analysing a single program but a whole class of programs which are structurally similar to the one we are considering. To some extent this happens implicitly with current technology, but we attempt to make it explicit.
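
As a purely illustrative sketch (our own encoding, not one from the article), a schema can be pictured as the program’s control structure with its concrete operations replaced by uninterpreted symbols:

```python
# A toy encoding of a program schema as nested Python lists. Concrete
# operations are replaced by uninterpreted symbols (f, g, h and the
# predicate p), so one schema stands for every program sharing this
# control structure.

# Concrete program:              # Its schema:
#   total = 0                    #   total := f()
#   while i > 0:                 #   while p(i):
#       total = total + a[i]     #       total := g(total, i)
#       i = i - 1                #       i := h(i)

schema = [
    ("assign", "total", ("f",)),
    ("while", ("p", "i"), [
        ("assign", "total", ("g", "total", "i")),
        ("assign", "i", ("h", "i")),
    ]),
]
```

Any interpretation of f, g, h and p as concrete operations yields a member of the class, so properties established at the schema level, such as which statements can affect which variables, hold for every such program at once.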

Schema theory has so far only been applied to simple, conventional imperative programming languages. There is no theoretical reason why its concepts cannot be extended to handle modern, fully-fledged object-oriented programming languages. Abstracting such languages to the level of schemas makes complete sense: the resulting schemas would retain the same structure, while details unimportant for the purposes of static analysis would be abstracted away. Much work is needed to extend the theory of schemas to handle such constructs. Once this has been done, it will be necessary to develop and assess new algorithms for analysing programs written in modern languages. Furthermore, any particular program can be represented by any one of a huge range of schemas, all varying in their degree of abstraction from the original program.

There is huge scope for choice in schematizing a program; in particular, in deciding which components should be abstracted away and which should remain. The key will be to keep the resulting schema as concrete as possible while keeping the analysis tractable. It is hoped that the results of our research will influence the design of new static analysis tools available in popular integrated program development environments.


Throwback Thursday: iPhone gesture recognition

Back in 2010, Marco Klingmann (MSc Cognitive Computing) wrote about his iPhone gesture recognition project…

“The growing number of small sensors built into consumer electronic devices, such as mobile phones, allows experiments with alternative interaction methods in favour of more physical, intuitive and pervasive human-computer interaction.

“This research examines hand gestures as an alternative or supplementary input modality for mobile devices.

“The iPhone is chosen as the sensing and processing device. Based on its built-in accelerometer, hand movements are detected and classified into previously trained gestures. A software library for accelerometer-based gesture recognition and a demonstration iPhone application have been developed. The system allows the training and recognition of free-form hand gestures.

“Discrete hidden Markov models form the core part of the gesture recognition apparatus. Five test gestures have been defined and used to evaluate the performance of the application. The evaluation shows that with 10 training repetitions, an average recognition rate of over 90 percent can be achieved.”
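
To illustrate the classification step, here is a hedged sketch of ours, assuming a codebook quantiser feeding per-gesture discrete HMMs as described; it is not Klingmann’s library, and every parameter is an invented placeholder:

```python
import numpy as np

# Sketch of discrete-HMM gesture recognition: accelerometer samples are
# quantised against a codebook of prototype 3-axis vectors, and one trained
# HMM per gesture scores the resulting symbol stream. Highest likelihood wins.

def quantise(samples, codebook):
    """Map each 3-axis accelerometer sample to its nearest codebook symbol."""
    return [int(np.argmin(np.linalg.norm(codebook - s, axis=1))) for s in samples]

def forward_log_likelihood(obs, start, trans, emit):
    """Scaled forward algorithm: log P(obs | discrete HMM)."""
    alpha = start * emit[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for symbol in obs[1:]:
        alpha = (alpha @ trans) * emit[:, symbol]  # predict, then weight by emission
        s = alpha.sum()
        log_p += np.log(s)                         # accumulate scaling factors
        alpha = alpha / s                          # rescale to avoid underflow
    return log_p

def classify(obs, models):
    """Return the gesture whose HMM gives the observations the highest likelihood."""
    return max(models, key=lambda g: forward_log_likelihood(obs, *models[g]))
```

Training, which the write-up says took around 10 repetitions per gesture, would fit each model’s start, transition and emission matrices, for example with the Baum-Welch algorithm; that step is omitted here.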


Marco Klingmann is now an interaction designer and app developer working in Switzerland. Follow him on Twitter.

Throwback Thursday: GreenInsight


This week’s delve into the recent history of Goldsmiths Computing looks at GreenInsight, a tool developed in 2010 from research by Goldsmiths’ Mark Bishop and Sebastian Danicic.

GreenInsight quickly provides the information required either to match the products you purchase at item level, or to classify those items that could not be matched, in sufficient detail to calculate their carbon footprint.

This is then combined with publicly verified information for products and industries to build up an organisation’s environmental impact from the bottom up (a minimal sketch of this calculation follows the list below). GreenInsight enables customers to:

  • view the cost to the environment as items are purchased, and the carbon footprint of the items purchased
  • evaluate their organisation’s spend and calculate its environmental cost in considerable detail, down to product line level.
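
The article gives no implementation details, but a loose sketch of such bottom-up, spend-based accounting might look as follows (the categories and emission factors are invented for illustration; GreenInsight’s actual data and matching rules are not published here):

```python
# Hypothetical bottom-up footprint calculation: each purchase is matched or
# classified to a category, then spend is multiplied by a per-category
# emission factor and summed. All figures below are placeholders.

EMISSION_FACTORS = {          # kg CO2e per pound spent (invented figures)
    "paper": 0.9,
    "electricity": 2.1,
    "travel": 1.4,
}

purchases = [                 # (classified category, spend in GBP)
    ("paper", 120.0),
    ("electricity", 300.0),
    ("travel", 80.0),
]

def footprint(purchases, factors):
    """Sum spend-based carbon estimates item by item, from the bottom up."""
    return sum(spend * factors[category] for category, spend in purchases)

print(f"Estimated footprint: {footprint(purchases, EMISSION_FACTORS):.1f} kg CO2e")
```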

Throwback Thursday: Sensory Response Systems

Back in 2009, Ryan Jordan (MFA in Computational Studio Arts) created Sensory Response Systems, an exploration into audio-visual performance using an array of sensors and controllers responsive to physical movements.

The project also explored the reshaping and replication of the body through the use of fabrics, textiles and technologies in order for the performer to fully embody and ‘become’ the instrument.

The overall aim was to create a more direct and immediate relationship with, and control over, the sound and images being generated, and to allow for full-body expression and intimacy between performer and instrument (computer).


Ryan Jordan runs the noise research laboratory and live performance platform NOISE=NOISE.

“Embedded in wires and circuits, Ryan Jordan beams throbbing, ritualistic recreations of rave musik from some dystopic future place where all recording technology is long since gone and only folk memories of ‘dance music’ exist.”
Dr Adam Parkinson, EAVI

Throwback Thursday: Scan_Memories


We’re going all the way back to 2009 for this week’s Throwback Thursday look at past projects developed at Goldsmiths Computing. 

The Scan_Memories project investigated how new technology can create, or participate in, the process of reconstructing memories, in comparison with existing ways of remembering the deceased and being remembered by the bereaved.

Developed at Goldsmiths by Miguel Andres-Clavera and Inyong Cho, the project used radio-frequency ID, mobile and multimedia technologies to give people a gateway for maintaining an emotional relationship with the deceased.

Andres-Clavera and Cho said: “The project opens a heterogeneous and direct access to the memories materialized in physical spaces and in objects connected with the dead, presenting the dialectic between constructed formations based on presence and absence, and memory reconstruction through technology-mediated patterns.”

Watch the 20-minute documentary.

Throwback Thursday: LumiSonic

This week we revisit a research project developed by Goldsmiths’ Mick Grierson in collaboration with Sound and Music, Whitefield Schools and Centre and the London Philharmonic Orchestra.

LumiSonic is a sound application that visualises sound in real time, in a way that allows hearing-impaired individuals to interact with a specifically designed graphical representation of sound.

Inspired equally by the experimental film tradition and by neuroscientific research, LumiSonic aims to help those with hearing difficulties gain a better understanding of sound.

The LumiSonic iPhone app, created by Goldsmiths Creative Computing and Strangeloop, is a “proof of concept” visualisation tool that generates a filtered visual display based on live sound. Sound is transformed in real time through a Fast Fourier Transform (FFT) and translated into a moving image. The aesthetic and conceptual approach was informed by research in both visual perception and experimental film. Concentric ring formations are a common feature of both visual hallucinations and experimental animation; research suggests this is due to the structure of the visual system. This method of representing sound could therefore be more effective than linear ‘graphic’ approaches.
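
A speculative sketch of that core transform, assuming a fixed frame size, ring count and sample rate (the app’s real parameters are not given in the article):

```python
import numpy as np

# One frame of audio goes through an FFT, and the magnitude spectrum is
# reduced to a few band energies, one brightness value per concentric ring.

FRAME = 1024                  # samples per analysis frame (assumed)
RINGS = 8                     # one brightness value per ring (assumed)

def frame_to_rings(samples: np.ndarray) -> np.ndarray:
    """Map one audio frame to per-ring brightness values in [0, 1]."""
    windowed = samples * np.hanning(len(samples))     # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    bands = np.array_split(spectrum, RINGS)           # low frequencies = inner rings
    energy = np.array([band.mean() for band in bands])
    return energy / (energy.max() + 1e-12)            # normalise for display

# Example: a synthetic 440 Hz tone at 44.1 kHz lights up mostly the inner rings
t = np.arange(FRAME) / 44100.0
print(frame_to_rings(np.sin(2 * np.pi * 440 * t)).round(2))
```

Mapping the lowest bands to the inner rings keeps the display’s layout stable while its brightness tracks the live spectrum.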

Throwback Thursday: How music ‘moves’ us – listeners’ brains second-guess the composer

Have you ever accidentally pulled your headphone plug out while listening to music? What happens when the music stops?

Psychologists believe that our brains continuously predict what is going to happen next in a piece of music. So, when the music stops, your brain may still have expectations about what should happen next.

A new paper [it’s Throwback Thursday, so ‘new’ means 2010] published in NeuroImage predicts that these expectations should be different for people with different musical experience, and sheds light on the brain mechanisms involved.

Research by Marcus Pearce, Geraint Wiggins, Joydeep Bhattacharya and their colleagues at Goldsmiths, University of London has shown that expectations are likely to be based on learning through experience with music.

Music has a grammar, which, like language, consists of rules that specify which notes can follow which other notes in a piece of music. According to Pearce, “the question is whether the rules are hard-wired into the auditory system or learned through experience of listening to music and recording, unconsciously, which notes tend to follow others.”

The researchers asked 40 people to listen to hymn melodies (without lyrics) and state how expected or unexpected they found particular notes. They simulated a human mind listening to music with two computational models. The first model uses hard-wired rules to predict the next note in a melody. The second model learns through experience of real music which notes tend to follow others, statistically speaking, and uses this knowledge to predict the next note.
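
As a rough illustration of the second, learning-based approach (a toy of ours; the researchers’ model was considerably richer), note-to-note transitions can be counted and turned into predictions:

```python
from collections import Counter, defaultdict

# A first-order statistical model of melody: count which notes follow which
# in a corpus, then predict the next note by relative frequency. The two
# "melodies" below are invented; the study used hymn tunes.

melodies = [
    [60, 62, 64, 65, 64, 62, 60],     # MIDI note numbers
    [60, 64, 62, 64, 65, 64, 60],
]

counts = defaultdict(Counter)
for melody in melodies:
    for prev, nxt in zip(melody, melody[1:]):
        counts[prev][nxt] += 1        # record which note followed which

def next_note_probabilities(prev):
    """Relative frequencies of the notes observed to follow `prev`."""
    total = sum(counts[prev].values())
    return {note: n / total for note, n in counts[prev].items()}

print(next_note_probabilities(64))    # {65: 0.4, 62: 0.4, 60: 0.2}
```

Under such a model, an ‘unexpected’ note is simply one assigned a low probability given the preceding context.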

The results showed that the statistical model predicts the listeners’ expectations better than the rule-based model. It also turned out that expectations were stronger for musicians than for non-musicians, and for familiar melodies, which again suggests that experience has a strong effect on musical predictions.

In a second experiment, the researchers examined the brain waves of a further 20 people while they listened to the same hymn melodies. Although the participants were not explicitly told where the expected and unexpected notes occurred, their brain waves in response to these notes differed markedly. The timing and location of the brain-wave patterns suggested that unexpected notes trigger responses that synchronise brain areas associated with processing emotion and movement. On these results, Bhattacharya commented, “… as if music indeed ‘moves’ us!”

These findings may help scientists to understand why we listen to music. “It is thought that composers deliberately confirm and violate listeners’ expectations in order to communicate emotion and aesthetic meaning,” said Pearce. Understanding how the brain generates expectations could illuminate our experience of emotion and meaning when we listen to music.