Archives for May 2017

May 11, 2017

Smart Alien

Project by: Alexander Tkaczyk-Harrison, and Bradley Griffiths.


Smart Alien is an educational tool designed to teach children about space. Children start learning about Earth and the solar system as early as Key Stage One, and we have developed an app that will make your children that much more interested in the universe we live in.


With Smart Alien you can jump into your rocket ship and travel through space looking at all the different planets in our solar system, learning about their distances, ages, sizes and orbit times, along with fun facts about each planet. As you use the directional buttons to fly around the solar system, you can get up close and personal with each planet, or zoom far off into the distance and see how beautiful our universe really is.


Our target audience for Smart Alien is children learning at Key Stage One, aged between five and seven years old. We have made sure to include in our app what children are learning about space at this stage of their lives. We realise that this age is one of the most important in the development of a child's brain, so we wanted to make the subject of space as interesting as possible for them. While researching the solar system [1] at Key Stage One level, we learned that a student should be studying the different planets, their orbit times and their distances, all of which you can find in our application.


During our research, we found that there has been a growth in children using mobile devices and playing games on tablets and computers. [2] Studies show that 20 percent of one-year-olds have their own tablet computer and 28 percent of two-year-olds can navigate a mobile device without help. With the growing pace of technology, we felt it would be a good idea to create more educational programmes that children could use on their computers and tablets for revision and study.

We considered our audience in our design by making the program very easy to use, with a simple layout suited to educational purposes.


Documentation of creative process:

When we began the creative process, we had a starting point: including WebGL in our project. We had to gain an understanding of the way WebGL works with p5.js, as it works quite differently from the regular p5 library.

We played around with primitive shapes such as boxes and spheres (as opposed to rects and ellipses) and, when we felt comfortable with the library, moved on to the next step.

Our original idea was to create a program which gives the user an Autonomous Sensory Meridian Response (ASMR) to what is being viewed on the screen. ASMR is that feeling you occasionally get when, for example, a shivering / tingling sensation goes down your spine because you hear particular sounds / see particular visuals. These can include people whispering, the sound of scrunching paper, etc.

Here is an example of a video designed to give you an ASMR response:


The intention was to tie the feeling of floating in space into the ASMR program we were creating.

To do this we first created a sphere, which was going to be a planet, and applied a texture of planet Earth to it. The next step was to create stars to make it look like the sphere was in space. We achieved this by making a constructor function that creates a small sphere at a random location. This constructor function was then called in a loop which runs 1000 times, pushing a new Star into an array of Stars on every iteration. This worked extremely well, as we now had 1000 stars at random locations. We then wanted to apply movement to the 3D space we had created, which we achieved by rotating everything using Perlin noise. This gives a free-flowing, yet random, feel to the rotation.
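The star-generation step can be sketched roughly as follows. This is our own reconstruction, not the project's actual code: the names Star, makeStars and the range value are assumptions, and the p5.js rendering calls are shown only as comments.

```javascript
// Each Star holds a random position; in the p5.js draw loop each star would
// then be rendered as a small sphere after a noise-driven rotation.
function Star(range) {
  this.x = (Math.random() * 2 - 1) * range;
  this.y = (Math.random() * 2 - 1) * range;
  this.z = (Math.random() * 2 - 1) * range;
}

function makeStars(count, range) {
  const stars = [];
  for (let i = 0; i < count; i++) {
    stars.push(new Star(range)); // push a new Star into the array of Stars
  }
  return stars;
}

const stars = makeStars(1000, 3000); // 1000 stars at random locations

// In the p5.js draw loop (sketch only):
//   rotateY(noise(frameCount * 0.001)); // Perlin-noise rotation
//   for (const s of stars) { push(); translate(s.x, s.y, s.z); sphere(2); pop(); }
```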

For this to work as intended, we would have needed to set it up as an installation in a large, empty, black room with no light. The visuals would then be projected onto one of the walls while the sound played loudly through speakers. We could not achieve this, however, as we would have needed to hire a large room and a lot of equipment.

This is where we changed our idea. We stuck with the solar system frame for the program, but changed the basis to be an interactive, educational program to be used in schools to teach children about the solar system.

We began by adding more planets into the program using a constructor function, which we called with specific locations and sizes, spacing the planets apart from each other and applying textures to them. One problem we encountered was that the stars we had originally created were now moving through some of the planets. This did not look realistic, so we made the stars significantly smaller than the originals and also created a giant box to surround the whole solar system. We applied a star texture to the box, which gave the desired effect of our solar system being part of a much bigger universe.

The next task was to be able to move the camera from one planet to the other. This was done very simply: we created a variable for the camera's Z position, then added or subtracted 50 from it when the left and right arrows were pressed. We also created a variable for the camera's X position so the user can zoom in on the planets.
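The movement logic boils down to a step of 50 per key press. Here is a minimal sketch of that idea (the function name is ours, and which arrow maps to which direction is an assumption; in p5.js the booleans would come from keyIsDown(LEFT_ARROW) / keyIsDown(RIGHT_ARROW) each frame):

```javascript
// Move the camera's Z position in steps of 50 depending on which arrow is held.
let camZ = 0;

function updateCamZ(z, leftPressed, rightPressed) {
  if (leftPressed)  z -= 50; // fly toward the next planet
  if (rightPressed) z += 50; // fly back the other way
  return z;
}

camZ = updateCamZ(camZ, true, false); // camZ is now -50
```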

One problem we encountered here was how to restrict the camera so the user cannot move it past a certain point, i.e. preventing the user from zooming through the planets or outside the box surrounding the universe.


We achieved this by using a series of if statements. Here is an example:

if (keyIsDown(DOWN_ARROW)) {
  camZ += 50;
}

// Undo the move if the camera has reached the boundary
if (keyIsDown(DOWN_ARROW) && camZ === 5950) {
  camZ -= 50;
}

Next we needed to create a way of displaying each planet's information, including its name, size, distance from the sun, and orbit time. This required research into planetary facts. The display was done by creating an extremely thin box and applying a custom texture to it, which we created in Adobe Photoshop. This worked perfectly.

After user testing, we concluded that we needed to add the distances between the planets. These obviously could not be to scale, as the distance between planets reaches 4.5 billion km at one point. To counter this, we spaced the planets evenly apart, then added another box with a custom texture stating how far the distance is to the previous planet. This appears to work well.
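The even spacing can be sketched like this. The gap value and function names are assumptions for illustration, not the project's actual code: each planet sits a fixed interval along the Z axis, with a distance-label box placed halfway between it and the previous planet.

```javascript
const GAP = 800; // hypothetical spacing between neighbouring planets

function planetZ(index) {
  return index * GAP; // planets spaced evenly along the Z axis
}

function labelZ(index) {
  // midpoint between planet `index` and planet `index - 1`,
  // where the distance-label box would go
  return planetZ(index) - GAP / 2;
}
```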


Link to live project:

May 11, 2017

Like Wind, Like Water

Like Wind, Like Water (2017) instigates a reconsideration of the veracity of digitally-acquired information and knowledge in our everyday lives. This information is often presented through the familiar and beguiling user interfaces of digital platforms.

The audience is presented with two different ways to interact with the work: via a desktop computer whose screen can be viewed and altered through a web application, and via that web application on the audience's mobile device. The desktop user is prompted to search for information through a search engine. While they try to access information, users of the web app interfere with the desktop screen, altering it with digital image processing techniques such as blurring and smudging. The act of warping the image by hand, rather than through another device such as a mouse, becomes a metaphor for the distortion of truth. The power to shape what we perceive is reinterpreted as the act of manipulating images. By allowing users both to disrupt the interface and to experience its disruption, the work blurs the positions of control and powerlessness one can hold.

By disrupting the digital interface, Like Wind, Like Water intends to disturb our belief in the authenticity of the information through the familiar and beguiling user interfaces of the digital platform.


Audience Engagement

This work intends to raise questions and instigate a reconsideration of people's perception of information and knowledge in our everyday lives. Living in an age of information sharing, people may take stated information for granted as truth/fact when it is presented aesthetically (both language- and image-wise) to appeal to them.

By allowing the audience to be in both roles of the disrupted and disruptor, I also wanted to bring attention to the indistinct positions of control and powerlessness one can have.

//Audience interaction

This piece involves two participants: the Computer User (CU), who uses the search engine on the desktop computer, and the Screen Manipulator (SM), who disrupts the CU's activity by blurring and pixelating the screen of the desktop computer remotely. (For simplicity, I will refer to the two participants by these acronyms from here onward.)


This user is the main participant of an interactive montage, which consists of searching for the same query with a series of different search engines. I expect the user to sit at the desk and see the prompt in the search text box to type in a query. The user will see the exact search result page from that particular search engine for a minute or so before it automatically switches to another search engine's index page. This process repeats until the user tires of it. In the midst of all this, their screen will also be blurred or pixelated in various places, hindering their search for information and likely causing confusion and frustration.


Because the CU's screen will likely be noticed before the tablet, this user will likely be able to infer that the screen seen on the tablet is a copy of the CU's screen. Presented with the words "Here is the power to distort", the user will be curious about what that means and start exploring by touching the tablet's screen, tapping and dragging, as the words imply that they can distort something. They will soon find out that their actions cause corresponding parts of the screen to be blurred or pixelated. Positioned where they can see the CU's reactions, they can infer that they are actually affecting the CU's screen.

I also expect most pairs of CUs and SMs to be acquaintances who communicate with each other about what is happening, which may help them understand my work as they discuss it.

Creative Research

Part of what influenced my work was personal belief and trust in everyday systems: pseudoscience and alternative medicine such as Traditional Chinese Medicine (TCM) and Fengshui, which are part of my family's lifestyle. While casually searching for more information about pseudoscience on the internet, I found it hard to accept or understand how other pseudosciences have believers. This made me question my belief in my family's practices of TCM and Fengshui, which are more traditional folk belief than 'science'. On social media, I once came across research by Masaru Emoto. The part of his research I saw was of experiment results depicting his conviction that water could respond to positive feelings and words, and that contaminated water could be cleansed by means of prayer and positive mental projection[1]. He pushed this idea through comparisons of ice crystals (fig. 1). While I find this hard to believe, and many would agree, Emoto committed more than two decades of his life to his research.


Fig. 1 Comparisons of water crystals from Emoto's experiments

The Great Pretenders (2009) is an art project by artist Robert Zhao, presented under an alias as if it were a study of leaf insects; it inspired me to work on this theme. He established a fictitious group of scientists known as the 'Phylliidae Study Group', which supposedly worked on merging the genes of insects and plants to construct new hybrid leaf insects that camouflaged so well they were virtually invisible in a certain habitat. The samples were purportedly submitted to an annual 'best new species' competition, and the fictional title was awarded to one of the researchers of the study group. Zhao presented a photograph (fig. 2) of the postulated leaf insect to people, and they always seemed able to spot the insect even though there was no insect in the photograph[2].

Fig. 2 ROBERT ZHAO RENHUI, Hiroshi Abe: Winner, 2008/09 Phylliidae Convention, Tokyo, Abe Morosus (Abe, 2006), 2009, from “The Great Pretenders” series

When we say that something is true, we mean that it is not made up - a fact we believe to be real - but truth is also subjective: it is truth because of our belief in it. The construction of this belief is done by apparatuses that determine whether the source of the truth is deemed believable or reliable by one's standards.

The internet and social media are everyday digital interfaces that provide access to information online, letting us learn about the world around us. Yet the interface gives the illusion that we have the power to know, when it could be controlling what we have access to. Despite the many obvious reasons people rely on the internet, the information presented to us is largely curated by many different search algorithms (fig. 3) that also take our online search habits into account. The Internet seems to give us the freedom to find out about whatever we want, but in actuality, the algorithms behind the user interface influence how and what we see.


Fig. 3 Example searches on different web search engines come up with different results

"How can we believe anything we see anymore? With today’s technology, we can literally do anything we want with images."[3] These words by Jerry Lodriguss led me to consider using digital image manipulation techniques as a device to represent the manipulation of search results by algorithms behind the veil of the user interface. Many images we see every day are actually products of image editing or manipulation software. When photography was first invented, people had faith in it because it recorded nature more realistically than any other art form in existence at the time. It was associated with “reality” and “truth”, but fake and manipulated photographs began circulating not long after. With today's technological advancements, it is easy for anyone to doctor a photograph, and it has become difficult to discern whether a photograph is real if the manipulation is done professionally.

Design Development


I decided to present my work in a comfortable, semi-personal setting, such as a home office or workspace, by adding props such as common stationery and a set of small potted plants. This set-up is centred around the CU, who is anchored to one location, unlike the SM, who is more of a free (roaming) agent.

In addition to that, the cacti and the prompt (which I later changed for the exhibition) displayed on the web app were an allusion to Fengshui practice. The connotation was lost on many people, as it was not obvious to those unfamiliar with Fengshui. As this was not the main focus of my work, I was happy for it to remain a whimsical side-note and part of the decoration.

//Interaction between the two users


Fig. 4 Exploration of Installation Setup

I considered a number of different ways to place the screens and users. Arranging the two participants to face each other (fig. 4, n.1, n.3) would better allow the audience to make the link between the two machines; if the devices were located on the same table or within the same area, it would be easy to understand that they are part of the same work. If they faced away from each other, it would better imply the anonymity of the internet. Placing them side by side (fig. 4, n.2) made the set-up look like an institute, a crammed office table or a school computing lab.

I decided to make the CU less aware of their counterpart, in the way that the algorithms producing the interface are hidden if one does not know how to look for them. By facing the user whose screen they are affecting, the SM would be able to make out the relation between that user and their current role. Hence, I arrived at this interaction set-up: the SM faces the CU while the CU faces elsewhere (fig. 4, n.4).

//The user and their device


Desktops are also the more popular choice over laptops for use in the home, as they are cheaper due to their low transportability. This also made the desktop a safer choice, as people would not easily be able to remove it from the installation.


A touchscreen device was most suitable because the tactile action on the SM's part creates a distinction from the CU, who traditionally uses a physical mouse and keyboard. As I also wanted to explore the way our digital and physical worlds coincide in everyday life, it was also a good way to reinforce the physicality of touching and affecting a digital surface.
I initially wanted many people to play the role of the SM and to be able to connect to the web app from wherever they were, even outside the gallery. I knew there was realistically not enough time for me to make that work, but I retained the medium of a web app as I wanted to imply that SMs could manipulate the screen from a position hidden from the CU.

//User Interface


I edited screenshots of the index pages of search engines to remove all elements of the page except the logo, search input and search button, so that users focus only on what is shown. The following screenshots show the linear sequence of how a user may use the program.



This screenshot is of the web client. In the middle is an HTML5 canvas element that displays a copy of the CU's screen. The small icon on the left side simulates a tool button that allows the SM to change the mode of distortion from blur to pixelate and vice versa. "Here is the power to disrupt" is both a prompt for the user and a subtitle for this part of the artwork. (More about this later on.)


Many users did not realise that they could change the mode of the distortion (from blur to pixelate and vice versa) with the button on the webapp. While I left the option of switching the modes available, I modified it to automate the mode change.


During user testing without any prompts, users had no idea how to proceed. Hence, I implemented prompts or instructions to instigate an action on the user's part. I did not want them to look like explicit instructions, so I tried to assimilate them into the user interface.


For the CU, I needed the user to type and search for something. What they searched for was not important, but the CU needed to search for the same phrase across the successive search engines. The prompts are displayed the way modern forms place labels within text boxes rather than, traditionally, to their left.

After a series of user tests, I realised I needed much more straightforward hints to get people to type the same query. I had also failed to consider that when the SM distorted the screen, it affected the visibility of the prompts, and the user had no idea what they were typing either. To solve these, I included their first query (the answer) as a hint, like this: "Did you mean '[query here]'?" The user can then also enter 'yes' to continue. I also cleared the area within the textbox in the shader program so the prompts remained readable.
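The hint check itself is simple. Here is a hypothetical sketch of it (the function name and exact matching rules are ours, not the actual code): the user may either retype their first query or simply enter 'yes' to continue.

```javascript
// Accept the input if it repeats the first query (case-insensitively,
// ignoring surrounding whitespace) or is simply 'yes'.
function acceptsQuery(input, firstQuery) {
  const t = input.trim().toLowerCase();
  return t === 'yes' || t === firstQuery.trim().toLowerCase();
}
```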


The first prompt that I used, almost like a placeholder, was "Be the invisible force to transform someone's life", but there was feedback that it was too vague and dramatic in a way that did not sit well with my work overall. After working through a few sentences, I stopped trying to sound poetic; what I think worked best was quite literal, short and simple: “Here is the power to disrupt.”

Technical Research and Build


I came up with different combinations of devices that could work:

  • Using two screens and one computer;
  • Two separate computers:
    • Both desktop computers;
    • One desktop computer and one mobile device:
      • Tablet;
      • Smartphone.

My first prototype was based on the first listed set-up. After presenting it, I realised that having one computer meant having only one cursor, hence there was a problem returning the cursor to the CU after it was intercepted by the SM. It was technically very challenging to make this work, and after trawling the web for solutions I doubted it was even possible. I was also unsure about having the manipulator steal the cursor from the CU, as it made the machine seem faulty. Therefore, there needed to be two machines instead of one.

I decided to use a desktop computer and a tablet because the two devices created a dichotomy of the immobile and mobile, big and small. I felt that a smartphone would be too small for the user experience to be effective.

//Live Streaming Desktop Screen

While working on the first prototype, I got a screen capture of the desktop by adapting code from the Windows API[4] examples and then translating the screen grab, an HBITMAP handle, into an ofPixels object which could then be manipulated in openFrameworks. The image of the screen was converted into a BITMAP BYTE array and then into an ofPixels array. I experienced some confusion while trying to assign values to the ofPixels array; something was incorrect in the way I was assigning them, which led to this output (window on the right side of the screenshot):


I later realised that bitmap colours were in BGRA format instead of the usual RGBA, and managed to get the correct output.
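The fix amounts to swapping the red and blue channels when copying pixels. A minimal sketch of that swap, shown in JavaScript for illustration (the original was C++/openFrameworks, and the function name is ours):

```javascript
// Convert a flat BGRA byte array into RGBA by swapping the B and R slots
// of every 4-byte pixel; G and A stay where they are.
function bgraToRgba(src) {
  const dst = new Uint8Array(src.length);
  for (let i = 0; i < src.length; i += 4) {
    dst[i]     = src[i + 2]; // R <- B slot
    dst[i + 1] = src[i + 1]; // G unchanged
    dst[i + 2] = src[i];     // B <- R slot
    dst[i + 3] = src[i + 3]; // A unchanged
  }
  return dst;
}
```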

The next step after the prototype was to get the image to appear on a web page which could then be accessed by the tablet’s web browser. I tried sending the screen grab to a NodeJs app over UDP to be displayed on an HTML5 canvas. This failed, and I learnt that UDP has a size limit for each datagram, so the data could not be sent in one piece. I got around this by splitting the image into pieces, but in the end it was not feasible because it worked too slowly for the artwork to be effective.


On the right: the actual web page; on the left: output of the web page on the web client at the same time
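For reference, the chunking workaround might have looked something like this on the Node side. This is an illustration of the approach that was tried and abandoned, not the original code; the payload size and function names are assumptions.

```javascript
// Split a captured frame into datagram-sized chunks; each chunk would be
// sent as its own UDP datagram and the frame reassembled on arrival.
const MAX_PAYLOAD = 60000; // stays under the ~64 KB UDP datagram limit

function chunkFrame(frame) {
  const chunks = [];
  for (let off = 0; off < frame.length; off += MAX_PAYLOAD) {
    chunks.push(frame.subarray(off, off + MAX_PAYLOAD));
  }
  return chunks;
}

function reassembleFrame(chunks) {
  return Buffer.concat(chunks); // assumes chunks arrive complete and in order
}
```

In practice UDP gives no ordering or delivery guarantees, which is part of why this route proved too slow and fragile.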

Moving on to another method, I followed Mick’s suggestion to use a VNC server to get the image of the desktop. I used RealVNC Open[5] as my VNC server and intended to adapt noVNC as a web client. As my VNC server did not support WebSockets connections, I needed a WebSockets-to-TCP-socket proxy. The included proxy, websockify, threw an error when I tried to run it. Before I knew about that OS-related issue, I tried other versions of websockify hoping that at least one would work, but to no avail. From this, I learnt to check the open issues on the git repository, as I spent a lot of unnecessary time on an issue that many others had already encountered but which still is not fixed. Luckily, I stumbled across a demo[6] which uses a different method, Remote Frame Buffer[7], to communicate with the VNC server. Although the code was quite old and I had to switch one or two NodeJs libraries, I managed to update it and make it work.

//Shaders for Real-time Image Manipulation

Using GLSL shaders with oF, I explored a few basic image manipulation techniques. Shaders allowed the distortion to persist even when the image changed. Before using shaders, I tried out a few effects by writing the code in oF to get a sense of what they would look like.


Smudge Effect

In addition to writing my own code for the pixelation effect, I adapted oF examples (fboAlphaMask and gaussianBlurFilter[8]) to write a program that applies the distortion effects to the parts the user draws over. As I could not figure out an algorithm for the smudge effect on constantly updating frames within the time I had set, I decided to work on it only if I had time after completing the necessary parts of the artwork (no, I did not manage to write it).


Error messages when trying to run the program on the desktop computer

The program could not run properly because the OpenGL version on the desktop computer was older than that of my laptop (which could run version 3.2), on which I had tested the program; version 2.1 shaders (GLSL #version 120) could run, so I checked the supported OpenGL version. The computer's graphics card supports OpenGL 3.0, so I first tried using GLSL #version 130 instead (also because there were fewer differences between versions 3.2 and 3.0 than with 2.1). The same errors occurred, so in the end I still had to alter it to OpenGL 2.1. The bottom line is to use the lowest possible version whenever you can.

After translating my shaders to GLSL #version 120, this was the output:


An Example of Glitch Art

It took me a while to figure out why this happened, because I did not have a working example shader in the same version to refer to. It turned out that variables had to be initialised with a zero value, whereas in the other versions the shader could work properly even without assigning an initial value.

I also thought of using WebGL to update the SM's copy of the CU's screen, but that could have slowed down performance, so I did not implement it; the time taken to update new content from the CU's screen on the web client was already quite good (only about a one-second delay from the SM's action to the distortion appearing on their screen).

//Summary of Software System

The rest of the system was quite straightforward and can be explained with this:


Diagram showing the data flow between computer and tablet

The following dependencies were used:

  • ofxAwesomium[9] for implementing a live webpage within the oF app.
  • Socket-io[10] for communication between the NodeJs server and the web client.
  • NodeJs's in-built dgram[11] module for sending data from the server to the oF app.
  • PngJs[12] for converting raw pixel data into PNG encoding to display on HTML5 canvas.
  • ofxNetwork[13] for the oF app to receive and send data to the NodeJs server.



During the exhibition, the delay between what was showing on the computer and what appeared on the tablet increased very significantly. Although the touch events from the tablet were still registered and the computer screen was continually getting distorted, users of the tablet were confused as to why the distortions had not registered on their screen. Only during the exhibition did I realise that I had failed to consider that many people would visit the gallery at the same time, most of them also using WiFi, which made the WiFi connection much slower than when I was testing my system with barely anyone in the venue.

Also, the VNC server I used did not capture the oF app in full-screen mode, so I had to change it to windowed mode. I maximised the window instead, but there was a chance of users clicking the close button. In my physical set-up, I needed to paste a post-it ("Player 1", indicating that the work needs two users) on the monitor, and I decided to paste it over the upper-right corner to keep the red ‘close’ button out of sight. I am not sure how well it worked, but only a few people closed the window by accident.

Despite that, the audience interaction generally went well, as people understood what the artwork was trying to show. Those who tried to interact with the work without a counterpart usually started with the desktop computer, so I became the other user instead. This is something to think about, as I feel that interactive works requiring at least two users can make solo users feel a little left out. Perhaps an AI could be activated to substitute for the other user when there are fewer than the required participants, but this is just food for thought.

After setting up in the gallery space, I felt that the tablet on the table was like a prop. Initially, I thought that leaving the tablet unmounted helped make the role of the SM seem more mobile, but for the exhibition I mounted it on a nearby column instead. When I was planning the set-up, the column did not seem significant to me, but when I was actually using the space on-site, I saw that the narrow, darkly-lit space behind the column was a good place for the SM to 'hide' in.

//Testing & Time Management

I managed to finish my work in time for the examination, but it was only fully complete after more users had tried it. I did not have time to test my work properly, seeing that I finished what I had planned on the last possible day, and I had pushed back user testing with people who did not know my work. While building the work, it was important to me that users experienced it in its entirety, but after this experience I learned that it is important to establish early on which parts of the work to test and how to present those parts to testers. I think this will help with workflow as well.

Process Sketches

Notes, ideas and sketches from my notebook while working on this project.




1 Emoto, M. (2010). What is Hado. Retrieved from Office - Masaru Emoto:

2 Tsai, S. (Jun, 2014). ArtAsiaPacific: Robert Zhao Renhui. Retrieved from ArtAsiaPacific:

3 Lodriguss, J. (2006). The Ethics of Digital Manipulation. Retrieved from Catching the Light:

//Code Cited

4 Windows API

6 js-vnc-demo-project by mgechev

8 openFrameworks example shaders in 09_gaussianBlurFilter

WebGL Fundamentals

//Dependencies and Software

5 RealVNC Open by RealVNC

7 RFB2 for NodeJs by sidorares

9 ofxAwesomium by mcpdigital

10 Socket.io for NodeJs

11 NodeJs dgram

12 PngJs by lukeapage

13 openFrameworks addon ofxNetwork

Gitlab Repository

Includes final code and code explorations.

May 09, 2017


Carillon

By Kevin Lewis & James Carty

Carillon was an interactive installation in which children and playful adults interacted with a series of digital musical instruments (DMIs). The instruments are bells which are projected onto suspended cardboard boxes. Users interact with the project by swinging the boxes.

Each box was to be hung from a railing at an angle which clearly presents two sides to the user. An image of the instrument was projected onto one side of each box, whilst a visualisation representing the sound it creates was projected on the other visible face.


As users swung the boxes, bell-like sounds were created, along with visualisations that coincided with the sounds. After experimenting with various different sounds, we found bells to be the instrument most naturally suited to the physical action of swinging the boxes. When played together, they would have formed a major chord.

Four LEDs were placed on each face of the box, one on each corner, and were used to track the position of the box. The projected instrument and visualisation followed the position of the physical box without user intervention.

Carillon was inspired by Bot & Dolly's "Box"1, Pomplamoose's Happy Get Lucky2 and Joe McAlister's The Watsons3. The idea that we could use projection mapping in an unusual context, bridging the gap between the 'digital world' and the 'physical world', was intriguing. Often, installations involving projection mapping are meticulously planned, and involve little user interaction. We wanted to allow our users to feel a sense of control around what was being shown and heard.


As you can tell from the past tense, we faced issues with this project that ultimately prevented it from working. Below we explore the reasons for this, along with our creative process, the challenges we faced and our attempts to fix them.


One recurring theme in our early research was the desire to build a digital musical instrument (DMI). At this point, we'd just completed a piece of work involving projection mapping using colour tracking, so decided it would be fun to build a DMI using a projection-mapped box.

The Ishikawa Watanabe Laboratory (University of Tokyo) has developed Lumipen [4], a low-latency projection-tracking system. However, it relies on a custom hardware and software rig (including a 1,000 fps camera and a saccade mirror array), and we didn't have the resources (time, money or skill) to produce something similar. That said, our project was much less ambitious, and we decided to try to build it with technologies we already knew (p5.js).

In our excitement about the idea, we decided to pursue the build before conducting further research, which on reflection was premature.


Our first few milestones involved creating visuals and synthesised sounds - we had decided early on that suspending and swinging boxes would be an appealing way of interacting with the piece, but weren't yet set on the A/V elements. Our first set of visuals can be seen here, driven by just a single changing variable value.

We'd just completed a separate piece of work involving computer vision, and used our code sample from that work, which tracked the most dominant bin of a given colour. However, this proved quite unreliable, and instead we based our project on Kyle McDonald's cv-examples PointTracking sketch [5].


Our first challenge was the PointTracking example's memory management. While the sketch works reliably, after running for more than a minute it slowed down significantly: the array of coordinate values only ever grew, and the values were stored in an unusual data structure we hadn't encountered before. After exploring the structure, we spliced it to keep only the latest four pairs of values, which helped with the latency issue.
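The trimming fix can be sketched in plain JavaScript. This is a hypothetical reconstruction rather than the project's actual code; `keepLatestPairs` and the flat `points` array are our own names standing in for the structure the tracker updates each frame.

```javascript
// Hypothetical sketch: trimming an ever-growing list of tracked
// coordinates so only the newest 4 (x, y) pairs survive each frame.

function keepLatestPairs(points, pairCount = 4) {
  // Each pair is two numbers (x and y), so keep pairCount * 2 values.
  const keep = pairCount * 2;
  if (points.length > keep) {
    // Remove everything before the last `keep` values, in place.
    points.splice(0, points.length - keep);
  }
  return points;
}

// Simulate many frames of appended coordinates.
const points = [];
for (let frame = 0; frame < 100; frame++) {
  points.push(frame * 2, frame * 2 + 1); // fake x, y values
  keepLatestPairs(points);
}
console.log(points.length); // stays bounded at 8
```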

This sketch also used bit shifting, a programming concept that was new to us and took time to understand. To work out why it was being used, we contacted Kyle for support. We found out that curr_xy[j << 1] was equivalent to curr_xy.x and curr_xy[(j << 1) + 1] to curr_xy.y. Learning this made the code much easier to work with.
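As a sketch of that indexing scheme (our own illustration, not code from the project): the tracker stores coordinates in one flat, interleaved array, so shifting `j` left by one bit doubles it to give point `j`'s x index, and the y value sits one slot later. Note that in JavaScript `<<` binds more loosely than `+`, so the y index must be parenthesised as `(j << 1) + 1`.

```javascript
// Interleaved coordinate array: [x0, y0, x1, y1, x2, y2, ...].
const curr_xy = [10, 20, 30, 40, 50, 60]; // three tracked points

function pointAt(arr, j) {
  // (j << 1) === j * 2, the x index; one slot later is the y value.
  return { x: arr[j << 1], y: arr[(j << 1) + 1] };
}

console.log(pointAt(curr_xy, 1)); // { x: 30, y: 40 }
```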

We still lacked understanding, so more conversations with Kyle were needed. Unfortunately, while he was lovely, he let us know that the inner workings of his computer vision examples were actually powered by JSFeat, a computer vision library for JavaScript [6]. While this was a great piece of learning, the JSFeat documentation left much to be desired: the PointTracking example was taken straight from it, and Eugene (the developer behind the library) doesn't go into much depth about how the library works.


Instead of spending more time trying to understand JSFeat, we worked around the limits of our understanding by manipulating the values it provided and creating new data structures that served our purpose. We believe this lack of understanding was one of our larger setbacks. We are both creatives who need to understand something at a low level, or we struggle to retain the knowledge. "It just works, so we'll leave it be" isn't an attitude we found easy to adopt, and we spent much time trying to understand a complex library that was the very backbone of our entire project.

Our next challenge was the lack of processing power available to browser-based tools, which resulted in an unacceptably high level of latency. For this project to work in a compelling way, the piece really needed to react immediately to users' interactions. Our chosen tools didn't allow for this.

To accommodate this limitation, we changed the scope of the project significantly. At this point, Carillon would be just a single face of a single box. Instead of swinging, it would move back and forth horizontally, like a slider on a mixing desk, and it would now make a constant sound that was manipulated as the box's position changed.
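That fader-style interaction could be sketched as a simple mapping from horizontal position to pitch. This is a hypothetical illustration: `mapRange` mirrors p5.js's built-in `map()`, and the pixel and frequency ranges are assumptions, not values from the project.

```javascript
// Linear interpolation from one range to another, like p5.js's map().
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Box x-position in camera pixels (0..640 assumed) -> tone frequency (Hz).
// The constant sound's pitch rises as the box slides to the right.
function positionToFrequency(x) {
  return mapRange(x, 0, 640, 220, 880);
}

console.log(positionToFrequency(0));   // 220
console.log(positionToFrequency(320)); // 550
console.log(positionToFrequency(640)); // 880
```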


Our piece works by taking in a video feed of the box and tracking the LED on each corner. We used p5.js in the browser to manually calibrate the position of the box relative to the projector, and then projected the graphic back out onto the box. If we moved the box 10cm to the left, the projected graphic also moved, but not nearly far enough. We believed this could be fixed by a manually calibrated scaling mechanism, but for this to work we needed to know how far an item moved between frames.

As we could only work with the coordinate points provided by JSFeat, which were updated per frame, we didn't have a direct way to get that difference. We changed our array splice to give us the latest eight points (two sets of four), then found the average x and y movement of the four points, but could never get it working quite right. This had us stumped for over 70 hours, and eventually we realised that, along with the high latency, our idea wouldn't come to fruition with our chosen tools.
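A hypothetical reconstruction of the per-frame delta we were after: given two consecutive frames of four interleaved corner points, average the x and y movement across the corners. The function name and data layout here are our own assumptions, not the project's code.

```javascript
// points: 16 interleaved values -- the previous frame's 4 corners
// (x0, y0 .. x3, y3) followed by the current frame's 4 corners.
function averageDelta(points) {
  const prev = points.slice(0, 8);
  const curr = points.slice(8, 16);
  let dx = 0, dy = 0;
  for (let j = 0; j < 4; j++) {
    dx += curr[j << 1] - prev[j << 1];             // x movement of corner j
    dy += curr[(j << 1) + 1] - prev[(j << 1) + 1]; // y movement of corner j
  }
  return { dx: dx / 4, dy: dy / 4 };
}

// Every corner shifted 5px right and 2px down between frames:
const prevFrame = [0, 0, 10, 0, 0, 10, 10, 10];
const currFrame = prevFrame.map((v, i) => (i % 2 === 0 ? v + 5 : v + 2));
console.log(averageDelta([...prevFrame, ...currFrame])); // { dx: 5, dy: 2 }
```

This average could then feed the scaling mechanism, multiplying the measured movement so the projected graphic keeps pace with the physical box.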

Our code, to the point we stopped development, can be found here (box).


Our evaluation framework looked at three core areas:

  • Is the project easily understood? Will users be able to approach the project and know what is expected from them?
  • Is the project reliable? Does it actually work as expected consistently and reliably?
  • Is the sentiment post-use positive? Based on a post-use interview, how did users feel when interacting with this project?

As we never got Carillon to a point where users could interact with it, we were unable to carry out our planned evaluation.


We spent several weeks creating visuals and audio before building a prototype of the projection-tracking system. It was three weeks before we were encouraged to focus our attention on the box itself, by which time we'd already invested enough resource (time and energy) into the idea that we were reluctant to change it. If we had spent more time initially researching by doing, we would have realised the low feasibility of this project and pivoted to work on something else.

We chose our tools because of our proficiency with them. Instead, we should have realised sooner that p5.js wasn't the right choice of technology, at which point we might have had enough time to learn an alternative framework (openFrameworks, for example) that could have made this project a reality.

This project ultimately failed because we spent too little time testing out the idea at the start of the creative process. If we had, we may have realised that what we were trying to achieve was quite demanding and either been more prepared, or worked on another interpretation of a physical DMI.


[1] Bot & Dolly, "Box", 2013

[2] Pomplamoose, "Pharrell Mashup (Happy Get Lucky)", 2014

[3] Joe McAlister, "The Watsons", 2016

[4] Ishikawa Watanabe Laboratory, "Lumipen 2", 2014

[5] Kyle McDonald, "cv-examples PointTracking", 2016

[6] Eugene Zatepyakin, "JSFeat", 2016