Category Archives: Staff profiles and activity

Throwback Thursday: Program analysis, schemas and slicing

In 2010, Sebastian Danicic, head of Goldsmiths’ BSc Computer Science, wrote this article for our website. Four years on, we reprint it here.


Software is prone to errors. Our research is fundamental to software analysis; in particular, to the static analysis of computer programs. Such analysis is essential for ensuring that these errors are corrected safely.

The results of software errors may be extremely serious. According to Wikipedia, errors in the software controlling the Therac-25 radiation therapy machine developed in the 1980s were directly responsible for patient deaths. In 1996, the European Space Agency’s Ariane 5 rocket was destroyed less than a minute after launch, owing to an error in the on-board guidance computer program. The cost of this error was estimated at 1 billion dollars.

In 1994, an RAF Chinook helicopter crashed into the Mull of Kintyre, killing 29 people. An investigation provided sufficient evidence to convince a House of Lords inquiry that it may have been caused by a software error in the aircraft’s engine control computer. In 2002, a study commissioned by the US Department of Commerce National Institute of Standards and Technology concluded that software errors are so prevalent and so detrimental that they cost the US economy an estimated $59 billion annually, or about 0.6 percent of the gross domestic product.

Those involved in minimising these errors, including software engineers and others involved in producing software, will benefit from our research through the static analysis tools built on its results. Errors in software are largely a consequence of human factors in the programming task. They arise from oversights or mutual misunderstandings made by a software team during specification, design, coding, data entry and documentation. Programs are written for a machine and as a result are not necessarily easy for a human to understand. On large software projects, many people have to co-operate, and this means having to understand each other’s code. As illustrated above, even small misunderstandings can lead to catastrophic results. Software first needs to be analysed in order to remove these errors safely; without this analysis, we are in danger of introducing new errors while removing the old ones. Changing programs can be very dangerous: it is very hard to know what impact altering even a single line of code can have.

This is one of the main tasks of static program analysis, so called because it analyses the program without actually executing it. (Analysis involving a program’s execution is known as dynamic analysis.) All methods for statically analysing programs are conservative: algorithms for performing such analyses will always include false positives; for example, non-existent dependencies between statements will be falsely highlighted. In some cases a particular approach may produce so many false positives that the analysis is rendered almost useless.
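To make the idea concrete, here is a minimal, hypothetical sketch (not taken from the original article) of a deliberately crude, flow-insensitive dependence analysis. It pairs any statement that reads a variable with any statement that may write it, ignoring statement order, and so reports a dependence that can never arise on a real execution.

```python
# Hypothetical sketch: a crude, flow-insensitive dependence "analysis".
# Each statement is recorded as (label, variables written, variables read).
program = [
    ("s1", {"x"}, set()),   # s1: x = input()
    ("s2", {"x"}, set()),   # s2: x = 0        (overwrites x on every path)
    ("s3", set(), {"x"}),   # s3: print(x)
]

def flow_insensitive_dependences(statements):
    """Report (reader, writer) pairs that share a variable, ignoring order."""
    deps = set()
    for reader, _, reads in statements:
        for writer, writes, _ in statements:
            if reader != writer and reads & writes:
                deps.add((reader, writer))
    return deps

print(sorted(flow_insensitive_dependences(program)))
# [('s3', 's1'), ('s3', 's2')]
# ('s3', 's1') is a false positive: s2 always overwrites x before s3 reads it,
# so s3 can never observe the value written at s1.
```

A flow-sensitive analysis would discard the ('s3', 's1') pair; a conservative tool that cannot afford the extra precision keeps it, and this is exactly the kind of noise the research aims to reduce.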

One aim of our research is to reduce the frequency of false positives significantly, thereby improving the accuracy of the analysis. Importantly, theory tells us that we cannot remove false positives altogether: all such approaches will be conservative. The question that we are interested in answering is, therefore, how far can we push the boundary? In other words, what are the theoretical limits of such analyses? Our previous work has already demonstrated that more accurate analysis is possible than current techniques allow. However, we do not yet fully understand where the theoretical boundaries lie.

There is also a practical side to our research. Once the theoretical limitations of such approaches are better understood, practical algorithms need to be developed and studied. It is important to determine whether efficient algorithms exist, and thus to study the complexity of the underlying problems. It is possible that at the theoretical limit the problems, though decidable, will in fact be intractable, i.e. too inefficient to be of practical use. Since static analysis of a program does not involve executing it, we can convert our programs to other forms which may be more amenable to analysis.

This is exactly the purpose of studying program schemas, the main topic of our research. The idea is to retain aspects of the program which lend themselves to static analysis and to abstract away all irrelevant details. The beauty of such an approach is that when we analyse programs in this way we are in fact not just analysing a single program but a whole class of programs which are structurally similar to the one we are considering. To some extent this happens implicitly with current technology, but we attempt to make it explicit.
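As an illustration only (this example is not from the original article), a schema replaces the concrete expressions of a program with uninterpreted symbols while preserving its control structure and the variables each statement reads and writes. The sketch below represents a small program and one possible schema for it as plain Python data:

```python
# Hypothetical sketch: a tiny program and one possible schema for it.
# Concrete program:           Schema (expressions abstracted to symbols):
#     y = x * x                   y = f(x)
#     while y > 10:               while p(y):
#         y = y - x                   y = g(y, x)

concrete_program = [
    ("assign", "y", "x * x"),
    ("while", "y > 10", [
        ("assign", "y", "y - x"),
    ]),
]

schema = [
    ("assign", "y", "f(x)"),          # the expression is now just "some function of x"
    ("while", "p(y)", [               # the loop test is just "some predicate on y"
        ("assign", "y", "g(y, x)"),   # the body writes y using y and x
    ]),
]

# Any dependence or slicing result established for the schema holds for every
# program obtained by substituting concrete expressions for f, p and g.
```

Choosing how much to abstract, that is, which expressions become symbols and which stay concrete, is precisely the design space discussed below.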

Schema theory has so far only been applied to simple, conventional imperative programming languages. There is no theoretical reason why the concepts of schema theory cannot be extended to handle modern, fully-fledged object-oriented programming languages. Abstracting such languages to the level of schemas makes complete sense: the resulting schemas would still have the same structure, but details unimportant for the purposes of static analysis would have been abstracted away. Much work is needed to extend the theory of schemas to handle such constructs. Once this has been done, it will be necessary to develop and assess new algorithms for the analysis of programs written in modern languages. Furthermore, any particular program can be represented by any one of a huge range of schemas, all varying in their degree of abstraction from the original program.

There is huge scope for choice in schematizing a program; in particular, in deciding which components should be abstracted away and which should remain. The key will be to keep the resulting schema as concrete as possible while keeping the analysis tractable. It is hoped that the results of our research will influence the design of new static analysis tools available in popular integrated program development environments.


Throwback Thursday: GreenInsight


This week’s delve into the recent history of Goldsmiths Computing looks at GreenInsight, a tool developed in 2010 from research by Goldsmiths’ Mark Bishop and Sebastian Danicic.

GreenInsight quickly provides the information required either to match the products that you purchase at item level, or to classify those items that could not be matched, in sufficient detail to calculate their carbon footprint.

This is then combined with publicly verified information for products and industries to build up an organisation’s environmental impact from the bottom up. GreenInsight enables customers to:

  • view the cost to the environment as items are purchased, and the carbon footprint of items purchased
  • evaluate their organisation’s spend and calculate the environmental cost of the spend in considerable detail, to product line level.
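Purely as an illustration of the bottom-up, spend-based calculation described above (this is a hypothetical sketch, not GreenInsight’s actual implementation, and the emission factors are invented), the idea is to prefer an item-level factor where a purchase can be matched, fall back to a broader category factor where it cannot, and sum the results:

```python
# Hypothetical sketch of a bottom-up, spend-based footprint calculation.
# Factors are illustrative only, in kg CO2e per pound spent.
item_factors = {"laser printer": 0.45, "copier paper": 0.90}
category_factors = {"office supplies": 0.60}

purchases = [
    # (item, category, spend in pounds)
    ("laser printer", "office supplies", 300.0),
    ("copier paper", "office supplies", 50.0),
    ("desk lamp", "office supplies", 40.0),   # no item-level match: use category
]

def footprint(item, category, spend):
    """Prefer an item-level emission factor; fall back to the category factor."""
    factor = item_factors.get(item, category_factors[category])
    return spend * factor

total = sum(footprint(*purchase) for purchase in purchases)
print(f"Estimated footprint: {total:.1f} kg CO2e")   # 204.0 kg CO2e
```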

Organic Systems at The Natural History Museum


Image: Isomorphogenesis No.3 by Gemma Anderson

William Latham and Gemma Anderson are facilitating an ‘Organic Systems’ drawing workshop at the Natural History Museum as part of ‘The Big Draw’, the world’s biggest drawing festival.

This is a rare opportunity to get hands-on experience of the Natural History Museum’s collections and to gain insight into evolutionary processes through drawing.

The event will take place on Sunday 19th October from 11am.

Latham began his career studying Printmaking at the Royal College of Art (1983-1985), where he developed the ‘FormSynth’ method. He then worked as Artist in Residence with IBM from 1987 to 1993, which led to the ongoing ‘Mutator’ project (http://latham-mutator.com). He is currently Professor of Art and Games at Goldsmiths, University of London.

Anderson also studied Printmaking at the RCA (2005-2007) and has been working in collaboration with scientists at the Natural History Museum and Imperial College since 2006. She is currently Associate Lecturer of Drawing at Falmouth University, where she is also completing her practice-based PhD (2011-2015): www.gemma-anderson.co.uk.

Anderson has adapted Latham’s rule-based (algorithmic) evolutionary drawing method ‘FormSynth’ to create ‘Isomorphogenesis’, an extended Organic Systems drawing process which relates directly to the Natural History Museum’s collections. In this workshop, Latham and Anderson will share their experimental drawing methods, which perform an analogue of morphogenesis.

Andy Lomas and Patrick Tresset award winners @ The Lumen Prize!

Image: Cellular Forms by Andy Lomas

Andy Lomas, Head of Computer Graphics at Framestore, is the winner of the Lumen Prize Gold for ‘Cellular Forms’. Andy regularly gives lectures and seminars at Goldsmiths, and his work will be included in the ‘Creative Machine’ exhibition opening on 6th November 2014.

Patrick Tresset, a visiting research fellow at Goldsmiths, also took third prize with his project ‘5 Robots Named Paul’.

Modelling a Community’s Health and Mobility Patterns with Mobile Phone Data


This Thursday at 3pm (16th October 2014), Kate Farrahi, Lecturer in Computing at Goldsmiths, University of London, will be giving a talk on ‘mobility patterns and interactions sensed by mobile phones’ at Cambridge University.

This data provides a new source for many applications in both research and industry. In this talk, she will discuss two data-driven applications using mobile-sensed data, one based on mobility patterns and the other on interaction patterns.

Human interactions sensed ubiquitously by cellphones can benefit many domains, particularly for monitoring the spread of disease. The flu patterns of a community of 72 people have been collected simultaneously with their interactions, as sensed by mobile phone Bluetooth logs. The focus of this work is to determine the accuracy of incorporating interaction data into dynamic epidemiology models for infection prediction.

Kate (Katayoun) Farrahi is a lecturer at Goldsmiths, University of London. Her research focuses on large-scale human behaviour modelling and mining, with special interest in data science, computational social sciences, mobile phone sensor data, and machine learning. Farrahi received her Ph.D. in Computer Science from the Swiss Federal Institute of Technology Lausanne (EPFL) and the Idiap Research Institute, Switzerland. She has spent time as an intern at MIT and is a recipient of the Google Anita Borg Scholarship and the Idiap research award.

This talk is part of the Computer Laboratory Systems Research Group Seminar series.

Smart Slums?

Image: Kibera slum

Our very own Dr Dan McQuillan has written an article for the Guardian’s Cities in Development series about the discussion around ‘smart slums’.

The article raises important questions about some of the negative impacts of ICT4D, and asks whether the push for ‘smart slums’ could be appropriated for social justice.

This in-depth article talks through alternative approaches currently being tested through ‘bottom-up citizen science’, which involve ‘citizens in using embedded sensor technology to answer their own questions about their environments’.

Follow @danmcquillan on Twitter.


Theseus Returned


Mark Bishop’s short story ‘Theseus Returned’ recently made the short list for CyberTalk Magazine’s Flash Fiction Competition.

The story was also published in ‘The Envelope: A Collection of Short Stories’ by Stephen Westland and Helen Disley, which is available now on Kindle.

It will appear in CyberTalk’s printed magazine later in the year.