Sketches vs Skeletons

Last month I was in Vancouver at the fantastic MOCO workshop presenting a couple of papers.

The first was called Sketches vs Skeletons: video annotation can capture what motion capture cannot. It was the outcome of a study we did as part of the Praise project about using technology to give music learners feedback on their movements and postures. It taught us a lot about the holistic, complex nature of movement, but also about research, being wrong, and how to stop being wrong.

We initially made what seemed the obvious choice of skeletal motion capture and created a prototype (a technology probe) using the Kinect. But when we worked with music teachers we discovered not only that the Kinect was not sufficient (not particularly surprising), but that the whole premise of using skeletal motion capture was misguided.

This really showed us the value of rapid prototyping: we were not particularly attached to our prototype and could recover from it quickly.

Anyway, here is the abstract:

Good posture is vital to successful musical performance and music teachers spend a considerable amount of effort on improving their students’ performance.
This paper presents a user study to evaluate a skeletal motion capture system (based on the Microsoft Kinect) for supporting teachers in giving feedback on learner musicians’ posture and movement. The study identified a number of problems with skeletal motion capture that are likely to make it unsuitable for this type of feedback: glitches in the capture reduce trust in the system, particularly as the motion data is removed from other contextual cues that could help judge whether it is correct or not; automated feedback can fail to account for the diversity of playing styles required by learners of different physical proportions; and, most importantly, the skeleton representation leaves out many cues that are required to detect posture problems in all but the most elementary beginners. The study also included a participatory design stage which resulted in a radically redesigned prototype, replacing skeletal motion capture with an interface that allows teachers and learners to sketch on video with the support of computer vision tracking.

and this is the full reference:

Sketches vs Skeletons: Video Annotation Can Capture What Motion Capture Cannot

Gillies, Marco, Brenton, Harry, Yee-King, Matthew, Grimalt-Reynes, Andreu and d’Inverno, Mark. 2015. ‘Sketches vs Skeletons: Video Annotation Can Capture What Motion Capture Cannot’. In: Proceedings of the 2nd International Workshop on Movement and Computing. Vancouver, Canada.

SIGCSE: Supporting Creativity and User Interaction in CS1 Homework Assignments

For the last day of SIGCSE 2015 I’m going to go back to one of our key beliefs at Goldsmiths: the importance of creativity in computing education. Tammy VanDeGrift at the University of Portland seems to share this belief and has written a paper, Supporting Creativity and User Interaction in CS1 Homework Assignments.

She describes the experience of introducing homework assignments to a CS1 course that encourage independence and creativity (core values at Goldsmiths), as well as requiring students to do user tests of interactive software. Success was evaluated with end-of-semester questionnaires. When asked what they liked about the course, creativity and open-endedness scored highest, which is a good sign. There is also some evidence that this approach helped create more intrinsic motivation: while grades scored highest in students’ reports of their motivation, the majority of students also listed intrinsic motivations. It is hard to say whether this was due to the creative elements, as there was no comparison to a “non-creative” version, but the results are at least encouraging for us creative computing types and do accord with our experience.

The main drawback of this approach, as contrasted to the previous papers I’ve discussed, is that marking this kind of creative work needs a lot of teacher intervention, as opposed to the automated techniques described by the last paper. It’s notable that this paper describes a class of 51, as opposed to the hundreds in the other papers.

Is there a way of allowing creativity that has some of the efficiency and capacity for frequent feedback that automation gives? One answer is to use both types of assignment: some automatically marked assignments to give fast feedback while learning key concepts, and some more creative assignments that need to be marked by hand.

I was wondering if there was something more we could do. One possibility is to ask students to write their own automated tests against which their assignments would be graded. Since automatic graders are pretty similar to unit tests, this would in fact also be teaching unit testing (or, to put it another way, teach unit testing and use it as a grading criterion). I quite like the mix of open-ended creativity with the rigour of unit testing. There would still need to be some human grading; if nothing else, the unit tests would need to be approved by someone, otherwise students could just pass themselves. But it would still allow fast feedback on certain technical criteria, while the creative aspects could be assessed at a slower pace. It would need a good unit testing framework where it is easy for first years to write tests, maybe a set of generic tests that accord with learning outcomes but are customisable by students. Something to think about, anyway.
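As a very rough sketch of what this might look like (everything here is hypothetical; no such framework exists in the paper or at Goldsmiths), a student could customise a standard unittest test case for their own project, and the grader would simply report the fraction of their own tests that pass:

```python
import unittest

def grade(test_case_cls):
    """Run a student's test case class and return the fraction of tests passed."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(test_case_cls)
    result = unittest.TestResult()
    suite.run(result)
    total = result.testsRun
    passed = total - len(result.failures) - len(result.errors)
    return passed / total if total else 0.0

# A hypothetical student submission: some function their project needs.
def student_sort(xs):
    return sorted(xs)

# The tests the student writes against their own submission, which a
# teacher would approve before they count towards the grade.
class StudentTests(unittest.TestCase):
    def test_empty(self):
        self.assertEqual(student_sort([]), [])
    def test_ordering(self):
        self.assertEqual(student_sort([3, 1, 2]), [1, 2, 3])

print(grade(StudentTests))  # fraction of the student's own tests that pass
```

The point is that the grading harness stays generic; only the test case changes per project, so fast automated feedback survives even when every project is different.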

SIGCSE: The Role of Automation in Undergraduate Computer Science Education

My third SIGCSE 2015 paper talks about an issue implicit in my last two posts: The Role of Automation in Undergraduate Computer Science Education. How can automated tools, feedback and assessment be used to teach computing? Can they improve teaching or will they always be inferior to human input from the teacher?

The author, Chris Wilcox at Colorado State University, paints a positive picture. He introduced a number of automated tools into his CS1 class, including automated grading of code, online peer instruction quizzes and online tutorials. He then analysed a range of student performance data, including exam grades and dropout rates. The results were positive: students scored better and there was less dropout than in previous years. There was also an obvious benefit in terms of saving instructor time, which he points out was not a cost saving but allowed TAs to spend more time with students (don’t tell senior management, they will see cost savings).

The main benefit seems to be that students made effective use of automated feedback on their assessments and changed their behaviour to benefit from it. They tended to submit exercises for grading earlier because they could do so at any time and get instant feedback. They also submitted several times, each time getting feedback to improve their performance. It would also have been possible to set more frequent exercises that carried feedback, though I’m not sure if that happened in this case.
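The submit-and-revise loop described above can be sketched in a few lines. This is not Wilcox's actual system, just a minimal illustration of the idea: the grader runs a set of checks, returns feedback instantly, and students can resubmit as often as they like. The exercise and check names are invented:

```python
def run_checks(submission, checks):
    """Return (score, feedback) for a submission against a list of named checks."""
    feedback = []
    passed = 0
    for name, check in checks:
        try:
            ok = check(submission)
        except Exception:
            ok = False
        if ok:
            passed += 1
        else:
            feedback.append(f"failed: {name}")
    return passed / len(checks), feedback

# Hypothetical exercise: write a function that doubles every element.
checks = [
    ("handles empty list", lambda f: f([]) == []),
    ("doubles values",     lambda f: f([1, 2]) == [2, 4]),
]

attempt_1 = lambda xs: [x * 2 for x in xs[1:]]   # buggy first submission
attempt_2 = lambda xs: [x * 2 for x in xs]       # revised after feedback

print(run_checks(attempt_1, checks))  # (0.5, ['failed: doubles values'])
print(run_checks(attempt_2, checks))  # (1.0, [])
```

The instant, specific feedback is what changes behaviour: the first attempt tells the student exactly which check failed, so the revision is targeted rather than guesswork.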

What are the drawbacks? Automated marking requires a very constrained exercise with clearly measurable outcomes, which goes against the student-driven, open-ended creative projects we set as part of our Creative Computing philosophy. I don’t feel automation can do everything, and you do lose flexibility. However, a mixed approach can work well; after all, in coding there is a lot that is clear cut and measurable, even if you are writing creative code. Following my last post, an interesting approach would be semi-automation, combining some aspects of automation with human judgement where it counts. I’m not entirely sure how that would work, but it is something to think about.

SIGCSE: Closing The Cyberlearning Loop

My second paper for day 2 of SIGCSE is Closing The Cyberlearning Loop: Enabling Teachers to Formatively Assess Student Programming Projects by Ashok Basawapatna and Alexander Repenning of AgentSheets and Kyu Han Koh of the University of Colorado Boulder.

This carries on the theme of yesterday’s paper with rich data from online programming environments. This time it tackles a harder problem: project based work. While small exercises can be relatively easy to assess automatically, more open ended projects are harder, but they do bring richer learning experiences and are an integral part of our teaching at Goldsmiths (see my post on Creative Computing).

Basawapatna et al. take a different approach from Spacco et al.: rather than looking for automated ways of detecting students who are struggling, they give teachers a data dashboard. Teachers can interpret these data with all their rich contextual knowledge of the students. Their trial seemed to indicate that teachers could indeed use it effectively to support students, and they gave some very positive feedback.

Could we do something similar at Goldsmiths? Certainly the focus on project-based work chimes with us, but I get the impression that their projects are still more constrained than the ones we set, and it might be hard to find consistent analytics for our very open-ended projects.


Thanks a lot to Kyu Han Koh for getting in touch about this post. Kyu is a co-author of the paper and the creator of Computational Thinking Pattern Analysis (the automated assessment tool) and the first version of REACT (the data dashboard).

He has replied to a number of the points I raised in the post:

I conducted research to illustrate how we can detect programming divergence in a classroom setting.

In this case, we set a norm, a tutorial, so we could compute the divergence from the norm. 

However, if your project or class is truly open ended, such that you cannot set any norm, then it would be a bit hard to provide the dashboard feedback that I devised.

Still, if you can set one project as a norm, then you can apply this assessment mechanism. 

My tool, Computational Thinking Pattern Analysis, is designed to compute semantic similarity between projects. So, for open-ended projects, you can compare each individual project to every other individual project to see their semantic similarity to each other. This means you will get a similarity matrix.

The paper quoted above presents a metric for measuring creativity in game development, based on divergence from an existing norm along a number of dimensions, each of which is defined as similarity to a canonical Computational Thinking Pattern. I think this approach is exciting, and computational support for assessing creativity could be enormously useful. Having said that, I do feel a bit uncomfortable assessing creativity automatically, as it is so hard to define effectively (if not impossible: truly creative acts are surely those that go beyond existing definitions, even of what it means to be creative). Where it could be very useful is as a support for human assessment, or as a semi-automated process (I think this is what Kyu and the other authors are actually proposing). Help in finding creativity would be greatly valued even if the final judgement is human.
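To make the similarity matrix idea concrete (this is a generic sketch, not the actual Computational Thinking Pattern Analysis algorithm; the feature vectors and project names are invented), you can represent each project as a vector of pattern counts and take pairwise cosine similarities:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def similarity_matrix(projects):
    """Pairwise similarity between every pair of projects."""
    names = list(projects)
    return {(p, q): round(cosine(projects[p], projects[q]), 2)
            for p in names for q in names}

# Invented feature vectors: counts of a few computational-thinking
# patterns (e.g. generation, absorption, collision) per project.
projects = {
    "alice": [4, 0, 2],
    "bob":   [3, 1, 2],
    "carol": [0, 5, 0],
}
m = similarity_matrix(projects)
print(m[("alice", "bob")], m[("alice", "carol")])  # 0.96 0.0
```

Divergence from a norm then falls out naturally: pick one project (or tutorial) as the norm and read off its row of the matrix; low similarity means high divergence.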

SIGCSE: Analyzing Student Work Patterns Using Programming Exercise Data

ACM SIGCSE is starting today. It is the major conference on computer science education. As someone who teaches beginner programming, I find it a real goldmine of research for inspiring practice. Unfortunately I can’t be there, so I will try to read a paper a day for the length of the conference.

The first paper that caught my eye was Analyzing Student Work Patterns Using Programming Exercise Data by an intercontinental team: Jaime Spacco, Paul Denny, Brad Richards, David Babcock, David Hovemeyer, James Moscola and Robert Duvall.

Among the many benefits of automated online programming exercises is that they allow you to collect a lot of data about students’ programming. The authors gathered a very detailed dataset from three universities in North America and New Zealand, down to the level of individual compiles, with the successes and failures of compiles and tests. This is the kind of data that can support years of research.

They present some straightforward results, like the fact that doing the exercises correlated with final exam score. The most interesting research question was whether you could detect failing students from patterns of activity, which could be very valuable, particularly in large classes. Unfortunately they conclude that straightforward measures don’t do very well. This doesn’t surprise me, as programming is complex and students are complex. But I hope they will persevere with the data, as it would be a fantastic result for all of us if they can find a predictor. Maybe some machine learning is in order (though I have my doubts about that as well).
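The paper doesn't build such a predictor, but for a flavour of what a learned one might look like, here is a tiny logistic regression in plain Python. The features (fraction of failed compiles, fraction of exercises attempted) and all the data are entirely invented for illustration:

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def train_logistic(rows, labels, lr=0.1, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression (weights + bias)."""
    n = len(rows[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Invented features per student: [fraction of failed compiles,
# fraction of exercises attempted]; label 1 = failed the course.
X = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.1, 0.8], [0.7, 0.4], [0.3, 0.7]]
y = [1, 1, 0, 0, 1, 0]
w, b = train_logistic(X, y)

print(predict(w, b, [0.85, 0.25]) > 0.5)  # flagged as at risk
print(predict(w, b, [0.15, 0.90]) > 0.5)  # not flagged
```

Of course, this is exactly the kind of thing I have doubts about: a model like this only works if the features actually capture struggle, and the paper's result suggests the obvious features don't.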

As well as all this, the paper introduced me to CloudCoder, an open source web-based platform for automatically marked programming exercises. I wonder if I can adapt it to work in Processing.

Creative Computing

I wanted to kick off my writing about computing education by talking about what is really the big idea that we’ve had at Goldsmiths: Creative Computing. As a department we have worked very hard at developing an interdisciplinary approach to computing, but particularly a creative approach, which takes cues from disciplines such as Fine Art, Music and Design. This has happened throughout the department, but in terms of undergraduate teaching this started with our BSc Creative Computing programme, which has since developed into a suite of programmes: Music Computing, Games Programming and most recently Digital Arts Computing.

All of these programmes treat computer programming as a creative discipline. We teach our students the technical aspects of programming but also how to use it creatively to develop innovative artistic projects.

This idea really comes from two directions. The first is enabling artists to use computer programming as a powerful new medium for their work, one that enables them to create procedural, generative and interactive work without the constraints that using pre-existing software brings. This is a really important part of our work, which has resulted in some fantastic, innovative student work, but in this post I want to focus on the other direction: how can creative disciplines inform the way we teach computing, and computer programming in particular?

We’ve based how we teach creative computing as much on how art is taught as on how computing is traditionally taught. Of course we teach the basic programming structures and suggest basic practical exercises to get used to them, but we also encourage students to develop their own practice, driven by their own creative ideas. Students work on projects of their own devising, to relatively open briefs. These projects are assessed on their technical challenge, but also on their creative outcomes: the quality of execution of the end result and the innovation in the concept. Teaching in these courses primarily happens in terms of feedback on the projects, both by teachers and by fellow students (peer feedback is going to be the topic of a future post), just as it would be in an art school “crit” session.

What do these art school methods bring to computing teaching? I would say there are lots of benefits. Some are really specific to doing artistic work with programming, but many can help inform any kind of computing teaching.

Creativity. Creative computing is designed around getting our students to be creative and to pay attention to the design aspects of their software. This is vital to artistic work, but also increasingly important in mainstream computing as it becomes more user-centric and design-focused in the era of the iPhone and the Web app. If nothing else, students who end up working in games or Web development, where mixed teams of coders and designers are the norm, will have experience of both types of work. This is really valuable because communication across those domains has traditionally been difficult.

Motivation. This links in to Mark Guzdial’s (let’s see if I can ever get through a computing ed post without mentioning him) idea of Media Computation, which is pretty close to Creative Computing in some ways. Traditional geeks might be intrinsically interested in programming, but most people are more interested in what you can do with it. Creative and artistic applications are simply more interesting to most people. I also think our approach adds something to media computation because we focus on independent creative work. Students work on their own projects, which they define and which they are (hopefully) passionate about. This makes them more engaged with the project; the more engaged they are, the more time they will spend programming, and the most important thing when learning programming is time spent programming.

Independence. The fact that students are working on projects of their own devising early on also means they have to be more independent. They are working on something that only they are doing, and that won’t necessarily only require elements they learn in class. This means that they will have quite a lot to figure out themselves (with our support, of course). This encourages them to be more independent both in terms of defining projects and solving problems, but most importantly it encourages them to be independent in their learning. They have to go out and figure out stuff (libraries, techniques, sometimes even languages) that we haven’t taught them. That is probably the single most important skill we, as university teachers, can help students learn, particularly in computing, where technology changes so rapidly that graduates will have to constantly learn new things to keep up to date.

ComputingEd – teaching programming

I’m starting a new sub-blog about Computing Education and teaching programming in particular. This will be a space for me to talk about my own practical experiences teaching programming and my research in the area (which is closely tied to my practice). It is also a place for me to discuss current research in the area and how it might be applied to the practical problems I see day to day. This will mostly be about university level teaching, as that is what I do, but that can’t really be separated from school level education (or continuing education, which might mean I have to mention our MOOC at some point).

In this first post I think I should start by name-checking two really important sources on computing education, particularly at university level. The first is ACM SIGCSE, the Special Interest Group on Computer Science Education. This is probably the most important international organisation in this area, and they run an annual conference which is one of my main sources for computing education research papers.

The other is one person: Mark Guzdial, and in particular his ComputingEd blog, which is really the best thing to read if you want to keep up to date on computing education research. Beyond the blog, Mark’s own research is really interesting, and I really like his “Media Computation” approach, which bears some resemblance to our creative computing. He was recently a co-author of “Success in Introductory Programming: What Works?”, which generated a lot of interest here at Goldsmiths.

Mark Guzdial’s blog is so good that there is almost no point in me starting my own, but I will try to put forward what we can learn from Goldsmiths’ approach and my own. I think it is also interesting to address some of these problems from a UK perspective. Our university system is very different from the US system that Mark and other writers teach in. An important example is that our students enrol in a single subject from the very beginning, unlike the US system, where students can take modules in many subjects before choosing a major. That means that, on the surface, the problem of students dropping computing after the first programming class is less visible, but the problems for students who don’t pick up programming effectively are much greater, because it is harder for them to switch away from computing (so our responsibility for supporting students is even greater!).