May 11, 2017

Like Wind, Like Water

Like Wind, Like Water (2017) instigates a reconsideration of the veracity of digitally-acquired information and knowledge in our everyday lives. Such information is often presented through the familiar and beguiling user interfaces of digital platforms.

The audience is presented with two different ways to interact with the work: via a desktop computer whose screen can be viewed and altered through a web application, and via that web application on the audience's mobile device. The audience member at the desktop computer is prompted to search for information through a search engine. While the desktop user tries to access information, users of the web app interfere with the desktop screen by altering it with digital image processing techniques such as blurring and smudging. The act of warping the image by hand, rather than through another device such as a mouse, becomes a metaphor for the distortion of truth. The power to shape what we perceive is reinterpreted as the act of manipulating images. By allowing users both to disrupt and to experience the disruption of the interface, the work blurs the positions of control and powerlessness one can occupy.

By disrupting the digital interface, Like Wind, Like Water intends to disturb our belief in the authenticity of information presented through the familiar and beguiling user interfaces of digital platforms.


lwlw02 lwlw03 lwlw04 lwlw05


Audience Engagement

This work intends to raise questions and instigate a reconsideration of people's perception of information and knowledge in our everyday lives. Living in an age of information sharing, people may take stated information for granted as truth or fact when it is presented aesthetically (in both language and imagery) to appeal to them.

By allowing the audience to take on both roles, the disrupted and the disruptor, I also wanted to bring attention to the indistinct positions of control and powerlessness one can have.

//Audience interaction

This piece involves two participants: the Computer User (CU), who uses the search engine on the desktop computer, and the Screen Manipulator (SM), who disrupts the CU's activity by blurring and pixelating the screen of the desktop computer remotely. (For simplicity, I will refer to the two participants by these acronyms from here onward.)

CU

This user is the main participant of an interactive montage which consists of searching for the same query with a series of different search engines. I expect the user to sit at the desk and see the prompt in the search text box to type in a query to search. The user will see the search result page from that particular search engine for a minute or so, and then it will automatically switch to another search engine's index page. This process repeats until the user tires of it. In the midst of all this, their screen will also be blurred or pixelated in various places, which will hinder their activity of searching and looking for information, and likely cause the user confusion and frustration.

SM

Because the CU's screen will likely be noticed before the tablet, this user will likely be able to infer that the screen seen on the tablet is a copy of the CU's screen. As they are presented with the words "Here is the power to disrupt", the user will be curious about what that means and start exploring by touching the tablet's screen with various actions such as tapping and dragging, as the words imply that the user can do something to disrupt something. They would soon find out that their actions cause corresponding parts of the screen to be blurred or pixelated. By being in a position where they can see the CU's reactions, they can infer that they are actually affecting the CU's screen.

I also expect most pairs of CUs and SMs to be acquaintances who communicate with each other about what is happening, and this may help them understand my work as they talk about it.


Creative Research

Part of what influenced my work was personal belief and trust in everyday systems. Pseudoscience and alternative medicine, such as Traditional Chinese Medicine (TCM) and Fengshui, are part of my family's lifestyle. While casually searching for more information about pseudoscience on the internet, it was hard for me to accept or understand how the other pseudosciences I found have believers. This made me question my belief in my family's practices of TCM and Fengshui, which are more traditional folk belief than 'science'. On social media, I once came across the research of Masaru Emoto. The part of his research I saw consisted of experiment results depicting his conviction that water could respond to positive feelings and words, and that contaminated water could be cleansed by means of prayer and positive mental projection[1]. He pushed this idea through comparisons of water crystals (fig. 1). While I find this implausible, and many would agree that the theory is hard to believe, Emoto committed more than two decades of his life to this research.

LWLW-hado

Fig. 1 Comparisons of water crystals from Emoto's experiments

The Great Pretenders (2009), an art project by artist Robert Zhao presented under an alias as if it were a study of leaf insects, inspired me to work on this theme. He established a fictitious group of scientists known as the 'Phylliidae Study Group' which worked on merging the genes of insects and plants to construct new hybrid leaf insects that camouflaged so well they were virtually invisible in a certain habitat. The samples were purportedly submitted to an annual 'best new species' competition, and the fictional title was awarded to one of the researchers of the study group. Zhao presented a photograph (fig. 2) of the postulated leaf insect to people, and they always seemed to be able to spot the insect even though there was no insect in the photograph[2].

Fig. 2 ROBERT ZHAO RENHUI, Hiroshi Abe: Winner, 2008/09 Phylliidae Convention, Tokyo, Abe Morosus (Abe, 2006), 2009, from "The Great Pretenders" series

When we say that something is true, we mean that it is not made up: a fact which we believe to be real. But truth is also subjective; it is truth due to our belief in it. The construction of this belief is done by apparatuses that determine whether the source of truth is deemed believable or reliable by one's standards.

The internet and social media are everyday digital interfaces that provide access to information online and let us learn about the world around us. Yet the interface gives an illusion that we have the power to know, when it could be controlling what we have access to. People rely on the internet for many obvious reasons, yet the information presented to us is largely curated by many different search algorithms (fig. 3) that also take into account our online search habits. The internet seems to give us the freedom to find out about whatever we want, but in actuality, the algorithms behind the user interface influence how and what we see.

LWLW-search

Fig. 3 The same example search on different web search engines returns different results

"How can we believe anything we see anymore? With today's technology, we can literally do anything we want with images."[3] These words by Jerry Lodriguss led me to consider using digital image manipulation techniques as a device to represent the manipulation of search results by algorithms behind the veil of the user interface. Many images we see every day are actually products of image editing or manipulation software. When photography was first invented, people had faith in it because it recorded nature more realistically than any other art form in existence at that time. It was associated with "reality" and "truth", but fake and manipulated photographs began circulating not long after. With today's technological advancements, it is easy for anyone to doctor a photograph, and it has become difficult to discern whether a photograph is real if the manipulation is done professionally.


Design Development

//Setting

I decided to present my work in a semi-personal setting, comfortable like a home office or workspace, using props such as common stationery and a set of small potted plants. This set-up is centred around the CU, who is anchored to a location, unlike the SM, who is more of a free (roaming) agent.

In addition to that, the cacti and the prompt (which I later changed for the exhibition) displayed on the web app were an allusion to Fengshui practice. The connotation was lost on many people, as it was not obvious to those who were not familiar with Fengshui. As this was not the main focus of my work, I was okay with it being a whimsical side note and part of the decoration.

//Interaction between the two users

20170512_124654

Fig. 4 Exploration of Installation Setup

I considered a number of different ways to place the screens and users. Arranging the two participants to face each other (fig. 4, n.1, n.3) would better allow the audience to make the link between the two machines, though if the devices were located on the same table or within the same area, it would already be easy to understand that the two devices are part of the same work. If they faced away from each other, it would better imply the anonymity of the internet. Placing them side by side (fig. 4, n.2) made the set-up look like an institution, a cramped office desk or a school computer lab.

I decided to make the CU less aware of their counterpart, much like how the algorithms that produce the interface are hidden if one does not know how to look for them. By facing the user whose screen they are affecting, the SM would be able to make out a relation between that user and their own current role. Hence, I arrived at this interaction set-up: the SM faces the CU while the CU faces elsewhere (fig. 4, n.4).

//The user and their device

CU

Desktops are also a more popular choice than laptops for use in the home, as they are cheaper, partly due to their low transportability. This also made the desktop a safer choice, as people would not be able to easily remove it from the installation.

SM

A touchscreen device was most suitable because the tactile action on the SM's part creates a distinction from the CU, who traditionally uses a physical mouse and keyboard. As I also wanted to explore the way our digital and physical worlds coincide in everyday life, it was also a good way to reinforce the physicality of touching and affecting a digital surface.
I initially wanted many people to play the role of the SM and to be able to connect to the web app from wherever they were, even outside a gallery. I knew that there was realistically not enough time for me to make that work, so I retained the medium of a web app, as it implied that SMs would be able to manipulate the screen from a position hidden from the CU.

//User Interface

CU

I edited screenshots of the index pages of search engines to remove all elements of the page except for the logo, search input, and search button, so that users would focus only on what is shown. The following screenshots show the linear sequence of how a user may use the program.

LWLW-CU1 2 3 4 5 6 7 8

SM

This screenshot is of the web client. In the middle is an HTML5 canvas element that displays a copy of the CU's screen. The small icon on the left side simulates a tool button that allows the SM to change the mode of distortion from blur to pixelate and vice versa. "Here is the power to disrupt" is both a prompt for the user and a subtitle for this part of the artwork. (More about this later on.)

LWLW-SM1

Many users did not realise that they could change the mode of the distortion (from blur to pixelate and vice versa) with the button on the web app, so I modified it to automate the mode change while leaving the option of switching modes manually available.

//Prompts

During user testing without any prompts, users had no idea how to proceed. Hence, I implemented prompts or instructions to instigate an action on the user's part. I did not want them to look like explicit instructions, so I tried to assimilate them into the user interface.

CU

For the CU, I needed the user to type and search for something. What they searched for was not important, but the CU needed to search for the same phrase across the successive search engines. The prompts are displayed the way many recent forms place labels within textboxes instead of, traditionally, to their left.

After several rounds of user testing, I realised that I needed much more straightforward hints to get people to type the same query. I had also failed to consider that when the SM was distorting the screen, it affected the visibility of the prompts, and the user had no idea what they were typing either. To solve these problems, I included their first query (the answer) as a hint like this: "Did you mean '[query here]'?" The user can then also enter 'yes' to continue. I also cleared the area within the textbox in the shader program so the prompts could be read by the user.

SM

The first prompt that I used, almost as a placeholder, was "Be the invisible force to transform someone's life", but there was feedback that it was too vague and dramatic in a way that did not sit well with my work overall. After working through a few sentences, I gave up trying to sound poetic; what I think worked best was quite literal, short and simple: "Here is the power to disrupt."


Technical Research and Build

//Hardware

I came up with different combinations of devices that could work:

  • Using two screens and one computer;
  • Two separate computers:
    • Both desktop computers;
    • One desktop computer and one mobile device:
      • Tablet;
      • Smartphone.

My first prototype was based on the first listed set-up. After presenting the first prototype, I realised that having one computer meant having only one cursor, hence there was a problem returning the cursor to the CU after it was intercepted by the SM. It was technically very challenging to make it work, and I doubted it was even possible after trawling the web for solutions. I was also unsure about having the manipulator steal the cursor from the CU, as it could make the machine seem faulty. Therefore, there needed to be two machines instead of one.

I decided to use a desktop computer and a tablet because the two devices created a dichotomy of the immobile and mobile, big and small. I felt that a smartphone would be too small for the user experience to be effective.

//Live Streaming Desktop Screen

While working on the first prototype, I got a screen capture of the desktop by adapting code from the Windows API[4] examples and translating the screen grab, which was an HBITMAP handle, into an ofPixels object that could then be manipulated in openFrameworks. The image of the screen was converted into a BITMAP BYTE array and then into an ofPixels array. Something was incorrect in the way I was assigning values to the ofPixels array, which led to this output (window on the right side of the screenshot):

lwlw_ss01

I later realised that bitmap colours were in the BGRA format instead of the usual RGBA and managed to get the correct output.
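
Roughly, the working conversion looked like the sketch below. The function signature and buffer names are illustrative rather than my exact code, and it assumes the HBITMAP has already been copied into a BGRA byte buffer (e.g. via GetDIBits).

    // Sketch: copy a BGRA screen grab into an ofPixels object for openFrameworks.
    // Assumes `bgra` already holds width * height * 4 bytes from the Windows bitmap.
    #include "ofMain.h"

    ofPixels screenGrabToPixels(const unsigned char* bgra, int width, int height) {
        ofPixels pixels;
        pixels.allocate(width, height, OF_PIXELS_RGBA);
        for (int i = 0; i < width * height; ++i) {
            // Windows bitmaps store channels as BGRA, so swap B and R while copying
            // into the RGBA layout that openFrameworks expects.
            pixels[i * 4 + 0] = bgra[i * 4 + 2]; // R
            pixels[i * 4 + 1] = bgra[i * 4 + 1]; // G
            pixels[i * 4 + 2] = bgra[i * 4 + 0]; // B
            pixels[i * 4 + 3] = 255;             // alpha from the bitmap is usually unused
        }
        return pixels;
    }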

The next step after the prototype was to get the image to appear on a webpage which could then be accessed by the tablet's web browser. I tried sending the screen grab via UDP to a NodeJs app to be displayed on an HTML5 canvas. This failed, and I learnt that UDP has a size limit for each datagram, so the full frame could not be sent. I got around this by splitting the image into pieces (a rough sketch of this follows the screenshot below), but in the end it was not feasible because it worked too slowly for the artwork to be effective.

LWLW-ss02

On the right: the actual web page; on the left: the output of that web page on the web client at the same time
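
For reference, the chunked sending on the openFrameworks side looked roughly like this. The chunk size, port and (missing) reassembly protocol are illustrative, and this is the approach I eventually abandoned.

    // Rough sketch of the abandoned chunked-UDP approach using ofxNetwork.
    // A real implementation would also tag each chunk with a frame/offset header
    // so the NodeJs receiver could reassemble the image.
    #include <algorithm>
    #include "ofMain.h"
    #include "ofxNetwork.h"

    void sendFrameInChunks(ofxUDPManager& udp, const ofPixels& frame) {
        const size_t CHUNK = 8192; // stay well under the maximum UDP datagram size
        const char* data = reinterpret_cast<const char*>(frame.getData());
        const size_t total = frame.getWidth() * frame.getHeight() * frame.getNumChannels();

        for (size_t offset = 0; offset < total; offset += CHUNK) {
            size_t len = std::min(CHUNK, total - offset);
            udp.Send(data + offset, (int)len); // one datagram per chunk
        }
    }

    // Usage (illustrative):
    //     ofxUDPManager udp;
    //     udp.Create();
    //     udp.Connect("127.0.0.1", 11999);
    //     sendFrameInChunks(udp, screenPixels);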

Hence, moving on to another method, I followed Mick's suggestion to use a VNC server to do the work of getting the image of the desktop. I used RealVNC Open[5] as my VNC server and intended to adapt noVNC as a web client. As I was using a VNC server without support for WebSockets connections, I needed a WebSockets-to-TCP-socket proxy. The included proxy, websockify, produced an error when I tried to run it. Before I knew that the issue was OS-related, I tried other versions of websockify and hoped that at least one of them would work, but to no avail. From this, I learnt to check the open issues on the git repository, as I spent quite a lot of time unnecessarily on an issue that many others had already encountered but that still is not fixed. Luckily, I stumbled across a demo[6] which uses a different method, the Remote Frame Buffer protocol[7], to communicate with the VNC server. Although the code was quite old and I had to switch one or two NodeJs libraries, I managed to update it to make it work.

//Shaders for Real-time Image Manipulation

Using GLSL shaders with oF, I explored a few basic image manipulation techniques. Shaders allowed the distortion to remain even when the underlying image changed. Before using shaders, I tried out a few effects by writing the code directly in oF to get a sense of what they would look like.

lwlw-smudge

Smudge Effect

In addition to writing my own code for the pixelation effect, I adapted oF examples (fboAlphaMask and gaussianBlurFilter[8]) to write a program that applies the distortion effects to the parts that the user draws over. As I could not figure out an algorithm for the smudge effect on constantly updating frames within the period of time I had set, I decided to work on it if I had time after completing the necessary parts of the artwork (no, I did not manage to write it).
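
The masking idea, condensed, is sketched below: the SM's touches are painted into an FBO, and a fragment shader applies the distortion only where that mask is opaque. This is a simplified illustration rather than my exact program; the class, uniform names, cell size and the pixelate-only shader are stand-ins, and it assumes ofDisableArbTex() so that texture coordinates are normalised.

    // Condensed sketch of the FBO-mask approach (adapted conceptually from the
    // fboAlphaMask and gaussianBlurFilter examples). Touches accumulate as white
    // blobs in maskFbo; the shader pixelates only where the mask is white.
    #include "ofMain.h"

    class MaskedDistortion {
    public:
        ofFbo maskFbo;   // accumulates the SM's touches
        ofShader shader; // GLSL #version 120 fragment-only shader

        void setup(int w, int h) {
            ofDisableArbTex(); // use normalised texture coordinates (sampler2D)
            maskFbo.allocate(w, h);
            maskFbo.begin(); ofClear(0, 0, 0, 255); maskFbo.end();

            std::string frag =
                "#version 120\n"
                "uniform sampler2D screenTex;\n"
                "uniform sampler2D maskTex;\n"
                "uniform vec2 cellSize;\n" // pixelation cell size in texture coords
                "void main(){\n"
                "    vec2 uv = gl_TexCoord[0].st;\n"
                "    vec4 mask = texture2D(maskTex, uv);\n"
                "    vec2 snapped = floor(uv / cellSize) * cellSize;\n"
                "    vec4 distorted = texture2D(screenTex, snapped);\n" // pixelated sample
                "    vec4 original  = texture2D(screenTex, uv);\n"
                "    gl_FragColor = mix(original, distorted, mask.r);\n"
                "}\n";
            shader.setupShaderFromSource(GL_FRAGMENT_SHADER, frag);
            shader.linkProgram();
        }

        void addTouch(float x, float y) { // called for each tap or drag point from the SM
            maskFbo.begin();
            ofSetColor(255);
            ofDrawCircle(x, y, 40);
            maskFbo.end();
        }

        void draw(ofTexture& screenTex) {
            shader.begin();
            shader.setUniformTexture("screenTex", screenTex, 0);
            shader.setUniformTexture("maskTex", maskFbo.getTexture(), 1);
            shader.setUniform2f("cellSize", 8.0f / screenTex.getWidth(),
                                            8.0f / screenTex.getHeight());
            screenTex.draw(0, 0);
            shader.end();
        }
    };

In the same spirit, the blur mode substitutes the pixelated sample with a sample from a Gaussian-blurred copy of the screen texture (as in the gaussianBlurFilter example).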

LWLW-error

Error messages when trying to run the program on the desktop computer

The program could not run properly because the OpenGL version on the desktop computer was older than that of my laptop (which could run version 3.2), on which I had tested the program; only version 2.1 shaders (GLSL #version 120) would run, so I checked the supported OpenGL version. The computer's graphics card supports OpenGL 3.0, so I first tried the corresponding GLSL version (#version 130) instead, also because there are fewer differences between versions 3.2 and 3.0 than between 3.2 and 2.1. The same errors occurred, so in the end I still had to alter the shaders to OpenGL 2.1. The bottom line is to target the lowest possible version whenever you can.
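
For context, the GL version an oF app runs against is decided in main.cpp; the sketch below shows the two paths (window size and class names are the usual oF template, not specific to my code).

    // Illustrative oF main.cpp: where the OpenGL context version is chosen.
    #include "ofMain.h"
    #include "ofApp.h"

    int main() {
        // Programmable-pipeline path (for GLSL #version 130/150 shaders):
        //     ofGLWindowSettings settings;
        //     settings.setGLVersion(3, 2);
        //     ofCreateWindow(settings);
        //     ofRunApp(new ofApp());

        // Fixed-pipeline GL 2.1 path, compatible with #version 120 shaders,
        // which is what the exhibition machine ended up using:
        ofSetupOpenGL(1024, 768, OF_WINDOW);
        ofRunApp(new ofApp());
    }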

After translating my shaders to GLSL #version 120, this was the output:

lwlw-shaderglitch3

An Example of Glitch Art

It took me a while to figure out why this happened because I did not have a working example shader in the same version to refer to. It turned out that variables had to be initialised with a zero value (e.g. vec4 sum = vec4(0.0); rather than just vec4 sum;), whereas in the newer versions the shader could work properly even without assigning an initial value.

I also thought of using WebGL to update the SM's copy of the CU's screen, but that could have slowed down performance, so I did not implement it; the time taken to update new content from the CU's screen on the web client was already quite good (only about a one-second delay from the SM's action to the distortion appearing on their screen).

//Summary of Software System

The rest of the system was quite straightforward and can be explained with the diagram below; a small sketch of the oF app's side of this data flow follows the dependency list.

lwlw_diagram

Diagram showing the data flow between computer and tablet

The dependencies used are:

  • ofxAwesomium[9] for implementing a live webpage within the oF app.
  • Socket.io[10] for communication between the NodeJs server and the web client.
  • NodeJs's in-built dgram[11] module for sending data from the server to the oF app.
  • PngJs[12] for converting raw pixel data into PNG encoding to display on HTML5 canvas.
  • ofxNetwork[13] for the oF app to receive and send data to the NodeJs server.
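
As a rough illustration of the oF app's receiving side, the sketch below polls for touch messages relayed by the NodeJs server and turns them into distortion positions. The "x,y,mode" payload, port number and class are made up for the sketch; the real protocol differs.

    // Sketch: poll the UDP socket for touch messages forwarded by the NodeJs
    // server (via dgram) and queue them as positions for the distortion mask.
    #include "ofMain.h"
    #include "ofxNetwork.h"

    class TouchReceiver {
    public:
        ofxUDPManager udp;
        std::vector<ofVec2f> pendingTouches; // positions waiting to be painted into the mask

        void setup() {
            udp.Create();
            udp.Bind(11998);          // port the NodeJs server sends to (illustrative)
            udp.SetNonBlocking(true); // so update() never stalls the draw loop
        }

        void update() {
            char buffer[256];
            int received = udp.Receive(buffer, sizeof(buffer) - 1);
            while (received > 0) {
                buffer[received] = '\0';
                // expected payload, e.g. "412,380,blur"
                std::vector<std::string> parts = ofSplitString(buffer, ",");
                if (parts.size() >= 2) {
                    pendingTouches.push_back(ofVec2f(ofToFloat(parts[0]), ofToFloat(parts[1])));
                }
                received = udp.Receive(buffer, sizeof(buffer) - 1);
            }
        }
    };

Going the other way, the same ofxNetwork connection carries the screen pixels out to the NodeJs server, which encodes them with PngJs and pushes them to the HTML5 canvas over Socket.io, as in the diagram above.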

Conclusion

//Exhibition

During the exhibition, the delay between what was showing on the computer and what appeared on the tablet increased very significantly. Although the touch events from the tablet were still registered and the computer screen was continually being distorted, users of the tablet were confused as to why the distortions had not appeared on their screen. Only during the exhibition did I realise that I had failed to consider that many people would be visiting the gallery at the same time, most of them also using the WiFi, which made the connection much slower than when I was testing my system with barely anyone in the gallery venue.

Also, the VNC server I used does not capture the oF app in full-screen mode, so I had to change it to windowed mode. I maximised the window instead, but there was a chance of users clicking the close button. In my physical set-up, I needed to paste a post-it note ("Player 1", indicating that the work needs two users) on the monitor, and I decided to paste it over the upper-right corner of the monitor to keep the red 'close' button out of sight. I am not sure if it really worked, but only a few people actually closed the window by accident.

Despite that, the audience interaction generally went well, as people understood what the artwork was trying to show. Visitors who tried to interact with the work without a counterpart usually started using the desktop computer first, so I became the other user instead. This is something to think about: I feel that many interactive works that require at least two users make solo visitors feel a little left out. Perhaps an AI could be activated to substitute for the other user when there are fewer than the required participants, but this is just food for thought.

After setting up in the gallery space, the tablet on the table felt like a prop. Initially, I thought that leaving the tablet unmounted helped make the role of the SM seem more mobile, but for the exhibition I mounted the tablet on a nearby column instead. When I was planning the set-up, the column did not seem significant to me, but when I was actually using the space on-site, I saw that the narrow, dimly-lit space behind the column was a good place for the SM to 'hide' in.

//Testing & Time Management

I managed to finish my work in time for the examination, but could only fully complete it as more users tried it. I did not have time to test the work properly, seeing that I finished what I had planned to do on the last possible day and kept pushing back user testing with people who did not know the work. While I was in the midst of building it, it was important to me that the user experienced the work in its entirety, but after this experience I learned that it is important to establish, early in the production period, which parts of the work to test and how to present those parts to testers. I think this will help with the workflow as well.


Process Sketches

Notes, ideas and sketches from my notebook while working on this project.

20170512_12412320170512_124022LWLW-notes1


References

//Bibliography

1 Emoto, M. (2010). What is Hado. Retrieved from Office - Masaru Emoto: http://www.masaru-emoto.net/english/hado.html

2 Tsai, S. (Jun, 2014). ArtAsiaPacific: Robert Zhao Renhui. Retrieved from ArtAsiaPacific: http://artasiapacific.com/Magazine/88/RobertZhaoRenhui

3 Lodriguss, J. (2006). The Ethics of Digital Manipulation. Retrieved from Catching the Light: http://www.astropix.com/HTML/J_DIGIT/ETHICS.HTM

//Code Cited

4 Windows API

6 js-vnc-demo-project by mgechev

8 openFrameworks example shaders in 09_gaussianBlurFilter

WebGL Fundamentals

//Dependencies and Software

5 RealVNC Open by RealVNC

7 RFB2 for NodeJs by sidorares

9 ofxAwesomium by mcpdigital

10 Socket.io for NodeJs

11 NodeJs dgram

12 PngJs by lukeapage

13 openFrameworks addon ofxNetwork


Gitlab Repository

Includes final code and code explorations.

Published by: qlim001 in catalyst 2017, yr3
