More selected projects

Negative Space

 

Jordan Wu

Negative Space is a virtual reality piece that explores the flow between the physical and virtual worlds, focusing on presence and attention in space.

The piece revolves around themes of negative space and draws attention to where focus falls: on people or on spaces. Using a virtual reality headset, the physical locations of people are displayed in a virtual room, blurring the line between the physical and virtual worlds by bringing human presence into a virtual space.


An Xbox One Kinect is used to capture the positions of people in the room and to roughly calculate their skeleton structures. This data is passed into the Unity game engine, and a model is mapped onto each skeleton so that people appear to become, and move as, the transparent model being used.

 

 

Audience Engagement

The piece was created with the intention of communicating to the audience a sense of flow between physical and virtual spaces, as well as the concept of negative space in human and environmental focus. Negative Space does this by highlighting the significance of focus within a scene.

I wanted to emphasise the way in which we focus on the significance of places and the environment, much as we do at landmarks and museums. The piece simultaneously reflects the opposite: the people around us being the centre of attention, as at concerts, social gatherings or cafés. These two concepts work together much as they do in a negative space illusion.

The piece uses the Oculus Rift virtual reality headset to fully immerse the user in a virtual environment. Rather than just displaying the changes in flow, the piece lets the audience experience and fully immerse themselves within them.

By reducing the audience's senses, virtual reality allows the piece to concentrate on sight, pouring the viewer's focus into what they are seeing. The use of the Kinect to map the physical positions of people in the room into a virtual recreation acts like a tether between the physical and virtual worlds. The real-world positions serve as a reminder that you are in the physical world while also standing in a virtual room that closely resembles it.

Using virtual reality should also make it easier to switch between focusing on human presence and focusing on the environment. The switching of scenes and the use of the same characters make the contrast between the two more obvious. Together, these factors should help express my ideas to the audience through an immersive experience, hopefully giving a sense of awe and wonder while also conveying the flow between spaces and an awareness of focus on either human presence or environmental space.

 

 

The Creative Process

The ideas behind Negative Space were originally inspired by Kim Asendorf's pixel sorting work and its visual elements. The effect of pixel sorting gives images a blurred, vague appearance. In the early stages of development, I applied this effect to people as well as the environment as a means of narrowing down the viewer's focus and spotlighting specific focal points, as in the examples below:

These images were used to test the effectiveness of using pixel sorting to highlight negative spaces.

Much of the early research was into ways I could represent human presence in the piece without taking too much attention away from the foreground. Art by Dorian Legret initially interested me because of the ways in which he represented human presence amongst chaotic backgrounds. However, during the creation of the piece, it became apparent that his brightly lit colours and distortions were too extreme.

Keijiro Takahashi was another artist I researched. Much like Legret's work, Takahashi's also features bright neon and fluorescent colours that did not quite fit the ideas of my final piece. His work mainly consists of Unity projects that combine motion capture data with particle systems and extravagant shaders; the colours and shapes are affected by the movements and positions of the users. This led me to explore the use of shaders as a means of presenting the users in my scene, as well as adding glitch-like effects to the camera.

Dans le Noir ? is a restaurant in London that serves its customers in pitch darkness. The thinking behind the restaurant is that customers taste the food and feel its textures without the experience being coloured by its visual appeal. This idea of reducing one sense in order to draw concentration towards the others intrigued me, relating back to the idea of drawing attention towards either human presence or the physical environment.

 

 

Technical Research

In order to express my initial themes, I began to experiment with Kim Asendorf's pixel sorting Processing sketch, modifying it to see what kinds of effects I could produce. With the help of Daniel Shiffman's pixel sorting tutorial I began to produce my own pixel sorting effects, writing sketches that did not just rely on rearranging images by hue, saturation or brightness. These led to the tube experiment images and the blurring of certain parts of the images, such as just the foreground or just the faces of the people.
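To illustrate the core technique, the sketch below is a minimal version of brightness-threshold pixel sorting, written here in C# against a Unity Color32 buffer rather than the original Processing code: within each row, contiguous runs of pixels above a brightness threshold are sorted, leaving the rest of the row untouched. The threshold value and helper names are illustrative.

```csharp
using System;
using System.Linq;
using UnityEngine;

// Minimal pixel-sorting illustration (not the original Processing sketch):
// within each row, sort runs of pixels whose brightness exceeds a threshold.
public static class PixelSorter
{
    // pixels: row-major Color32 buffer, e.g. from Texture2D.GetPixels32()
    public static void SortRowsByBrightness(Color32[] pixels, int width, int height, float threshold = 0.35f)
    {
        for (int y = 0; y < height; y++)
        {
            int x = 0;
            while (x < width)
            {
                // Skip ahead to the start of a bright run in this row.
                while (x < width && Brightness(pixels[y * width + x]) < threshold) x++;
                int start = x;
                // Extend the run while pixels stay above the threshold.
                while (x < width && Brightness(pixels[y * width + x]) >= threshold) x++;
                int length = x - start;
                if (length > 1)
                {
                    // Sort just this run by brightness, leaving the rest of the row intact.
                    var run = new Color32[length];
                    Array.Copy(pixels, y * width + start, run, 0, length);
                    var sorted = run.OrderBy(c => Brightness(c)).ToArray();
                    Array.Copy(sorted, 0, pixels, y * width + start, length);
                }
            }
        }
    }

    static float Brightness(Color32 c) => (c.r + c.g + c.b) / (3f * 255f);
}
```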

Although these produced interesting outcomes and effects that could be utilised later on, the images being produced were static, meaning I would have had to animate them frame by frame. For them to be truly useful they needed to be rendered in real time.

At this point I decided to reproduce these techniques in openFrameworks. Using a library called Fakeartist, which applied live pixel sorting to a webcam feed, I was able to test some ideas, creating an augmented reality piece in the virtual reality headset using live pixel sorting.

Unfortunately, this idea quickly fell through because of the physical construction the piece required. For it to work properly I would have needed to build a rig to adjust the distance between the cameras mounted on the headset, because the distance between each person's eyes is different. Placing two cameras on the headset would also have made it top-heavy, affecting the user's experience and breaking the immersion. Considering the amount of time I had, this was not a feasible option and it was dropped.

Alongside this research I was also experimenting with the Xbox One Kinect and creating test projects in the Unity game engine. Acquiring a Kinect and getting it to function properly was a task in itself. With its long list of requirements, complicated coding structure and my underpowered computer, what should have been an easy task became a time-consuming endeavour that would eventually affect the development of the final piece.

The Kinect SDK came with a useful basis for using the Kinect alongside Unity, supplying basic skeleton body recognition, depth and colour data, as well as a broken green screen shader example. From here I explored the skeleton recognition scripts and scenes and attempted to develop a way to map objects and shaders onto the skeleton structures, much as I had tried in previous experiments. I was able to map objects using the Kinect manager by reading the joint positions it stored and placing cubes exactly where the joints were on the body.

The examples ran through a Kinect manager that stored all the important values received from the Kinect. A Kinect view script was then applied to an object; this script drew its information from the manager. To gain access to the positions, a script must request the information directly from the manager and specify the type of data the object needs, whether that is depth, camera or joint data.
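The original scripts are not reproduced here, but the sketch below shows the manager/view pattern described above in a minimal form, assuming a manager component like the SDK sample's BodySourceManager that exposes the latest Body array each frame; the cube prefab and field names are illustrative.

```csharp
using System.Collections.Generic;
using UnityEngine;
using Windows.Kinect; // Kinect for Windows SDK 2.0 Unity plugin

// Sketch of the manager/view pattern: one cube per tracked joint,
// placed at the joint position read from the manager each frame.
public class JointCubeView : MonoBehaviour
{
    public GameObject bodyManager;   // object carrying the manager script
    public GameObject cubePrefab;    // small cube used to mark each joint

    private BodySourceManager _manager;
    private readonly Dictionary<JointType, GameObject> _cubes = new Dictionary<JointType, GameObject>();

    void Start()
    {
        _manager = bodyManager.GetComponent<BodySourceManager>();
    }

    void Update()
    {
        if (_manager == null) return;
        Body[] bodies = _manager.GetData();
        if (bodies == null) return;

        foreach (Body body in bodies)
        {
            if (body == null || !body.IsTracked) continue;

            foreach (KeyValuePair<JointType, Windows.Kinect.Joint> pair in body.Joints)
            {
                if (!_cubes.ContainsKey(pair.Key))
                {
                    GameObject cube = Instantiate(cubePrefab);
                    cube.transform.SetParent(transform, false);
                    _cubes[pair.Key] = cube;
                }

                // Kinect camera space is in metres; mirror X so the scene matches the room.
                CameraSpacePoint p = pair.Value.Position;
                _cubes[pair.Key].transform.localPosition = new Vector3(-p.X, p.Y, p.Z);
            }
            break; // only the first tracked body, for simplicity
        }
    }
}
```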

The Kinect SDK also supplied a green screen example, which worked much like the skeletal view: it also drew its information from a manager. However, the shader it came with was out of date and functioned incorrectly. Having previously worked with shaders in the project, I was able to correct the code.

Although I managed to fix it, the green screen data did not provide any depth information, making it less useful than the skeletal tracking. Even though I was able to display people in the room, the effect came off flat and uncoordinated, because it was simply information from the camera stream rather than the infrared sensor.

The green screen itself wasn't used, but the shader was modified to give a transparent backing that could be applied to screens, as shown below:

The shader was further modified so that the scene showed a silhouette rather than the camera feed of the people. The intention was to experiment with how people were represented in the scenes. A direct camera feed might have drawn too much attention away from the environment, but a silhouette might have been just enough for the audience to recognise the people in front of them while still directing their attention towards the environment.

This was later phased out during user testing, as people preferred the alternative used in the final piece. Although I was successful in mapping objects to the skeleton bodies in real time, I faced many other problems with the Kinect which consistently hindered my progress, both technically and in expressing my artistic concepts.

During this earlier development period I was unable to move the Kinect skeletons inside the Unity scenes, and the floor plane was slightly unstable. This meant the positions of the characters on screen could not be adjusted, and their movement was at incorrect angles when orientated inaccurately. Although not a permanent fix, I temporarily mapped the rest of the environment according to the positions of the bodies, shifting the scenes so the orientation synchronised. These problems persisted throughout the project, slowing progress on the creative areas of the piece.

The recreation of the room was an integral part of the project. The room acts as a link between the physical and virtual worlds: it gives the effect of being in both places at once, yet also in neither. If not done correctly, the audience would not be able to see the relation between the two places.

The simplicity of the room forces the user to look for the more minute details in the scene. Rather than an extravagantly decorated version of the room, users could focus on smaller, more delicate details such as pipes and screws; although perhaps less exciting and slightly bland in comparison, the room works on the idea of attention to detail. Adding decorations and objects would also have introduced hazards, especially since people already had the urge to get up and move around the room. Audience participation and moving around the room were fundamental to the piece; adding too many objects would have hindered movement as well as the effectiveness of the piece.

Although I had mapped the same room in the past, the 3D models I had used before were outdated and did not import correctly, with most of the textures broken and objects deprecated. To make sure the room and the virtual space synchronised properly, I measured the room and converted these measurements to produce a similarly sized model. Many photos of the room were taken, not just for reference but also for mapping onto objects as textures to give an extra dimension of realism: for example, the carpet, clock, radiators and even the screen stands. All details of the room were taken into consideration, including minute details such as pipe shapes and edges, where they were attached to the wall, and even the positioning of the mains sockets and switches.

Most of the objects were modelled within Unity; those that were not were either modelled in Blender, like the desk, or imported assets that would have taken too long to create, such as the outer space skyboxes. Many of the objects had to be grouped together or built from various different shapes so that applying C# scripts to them would be possible.

The final project consisted of two scenes, both containing a rendition of the physical room. Using C# scripts, the walls of the physical room fade out to reveal either a scene from outer space or a snowy mountain range. Both scenes also use particle systems to create floating asteroids or falling snow.

The aesthetic choice for these two scenes was to create a contrast in atmosphere: starting in a very plain but relatable room, with the user focusing on the transparent white models, the user is then suddenly transported to a location they would not normally be able to visit, a place only reachable through virtual space. Only the walls were faded in each scene, to give the feeling of still being related and grounded to the original, physical room. The drastic change was meant to further highlight the flow between the physical and virtual worlds by showing complete polar extremes.

Although the majority of the C# scripts I wrote were for getting the Kinect to function alongside Unity, many of the scripts in the project were for changing the render settings of certain objects and for switching scenes.

The script used for changing the scene worked on a timer that reset with the level and then loaded the new level. This later caused errors with destroying objects and reinitialising the Kinect sensor. It was paired with two other scripts.
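A minimal sketch of that timed switch is shown below; the scene name and duration are illustrative, and the project's actual script differed in detail.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of the timed scene switch described above. After 'secondsInScene'
// the next scene in the cycle is loaded; the timer effectively resets because
// this component starts again in the newly loaded scene.
public class TimedSceneSwitcher : MonoBehaviour
{
    public string nextSceneName = "SpaceScene"; // hypothetical scene name
    public float secondsInScene = 60f;

    private float _elapsed;

    void Update()
    {
        _elapsed += Time.deltaTime;
        if (_elapsed >= secondsInScene)
        {
            SceneManager.LoadScene(nextSceneName);
        }
    }
}
```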

One of these scripts reduced the alpha channel of the wall objects once enough time had passed in the recreated room, making the objects transparent and giving the effect of the walls disappearing. Being able to fade out the walls meant the render mode of these objects had to be set to 'fade' rather than 'opaque', which made the walls look partially transparent before they had even begun to fade. Another script was written to switch the rendering mode at runtime, but it was unsuccessful: once the render mode is set at the beginning of the scene, it cannot easily be altered. The solution was to place an identical, opaquely rendered wall in the same position and use a script to deactivate it just before the 'fade'-rendered wall started to disappear.
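The sketch below illustrates this workaround with illustrative field names: an opaque stand-in wall is deactivated just before a coroutine fades the 'fade'-rendered wall's alpha to zero (in this sketch the fade wall starts inactive and is switched on at the moment of the swap).

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the wall-fade workaround: an opaque duplicate wall hides the
// 'fade'-rendered wall until the fade begins, then is deactivated while the
// fade wall's alpha is reduced over time.
public class WallFade : MonoBehaviour
{
    public GameObject opaqueStandIn;   // identical wall using the Opaque render mode
    public Renderer fadeWall;          // wall whose material uses the Fade render mode
    public float delayBeforeFade = 30f;
    public float fadeDuration = 5f;

    IEnumerator Start()
    {
        yield return new WaitForSeconds(delayBeforeFade);

        // Swap walls just before fading so the fade material is never seen at full strength.
        opaqueStandIn.SetActive(false);
        fadeWall.gameObject.SetActive(true);

        Color c = fadeWall.material.color;
        for (float t = 0f; t < fadeDuration; t += Time.deltaTime)
        {
            c.a = Mathf.Lerp(1f, 0f, t / fadeDuration);
            fadeWall.material.color = c;
            yield return null;
        }
        c.a = 0f;
        fadeWall.material.color = c;
    }
}
```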

The character models used by the people on camera also needed to be hidden. The way the code was structured meant the orientation of the characters relied on the initial position of the 3D models, and each person appearing needed their own model. At first I tried to place the models far out of sight of the camera, but this was ineffective, and having all six models placed at the centre of the screen at once would have been a poor aesthetic choice, looking unprofessional and unfinished. The script I wrote set all the models, placed in the middle of the screen, as inactive until the PlayerIndex value (the number of bodies recognised by the Kinect) was greater than zero. At that point each model was numbered and activated or deactivated depending on how many bodies were recognised.
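A minimal sketch of that activation logic is given below, again assuming the manager component from the earlier sketch; the array of six models and the field names are illustrative.

```csharp
using UnityEngine;
using Windows.Kinect;

// Sketch of the model-activation logic: all six character models start
// inactive and are switched on only while a corresponding body is tracked.
public class ModelActivator : MonoBehaviour
{
    public GameObject bodyManager;
    public GameObject[] characterModels = new GameObject[6]; // one per possible body

    private BodySourceManager _manager;

    void Start()
    {
        _manager = bodyManager.GetComponent<BodySourceManager>();
        foreach (var model in characterModels)
            model.SetActive(false); // nothing visible until someone is tracked
    }

    void Update()
    {
        Body[] bodies = _manager != null ? _manager.GetData() : null;
        for (int i = 0; i < characterModels.Length; i++)
        {
            bool tracked = bodies != null && i < bodies.Length
                           && bodies[i] != null && bodies[i].IsTracked;
            if (characterModels[i].activeSelf != tracked)
                characterModels[i].SetActive(tracked);
        }
    }
}
```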

An excessive amount of time was spent coding the Kinect scripts and getting them to function alongside Unity. As mentioned above, although I was successful in mapping objects to the body joints, I could not alter their positions effectively.

Another key task was gaining access to the floor clip plane, which contains the position of the floor relative to the Kinect. Although it may seem simple to access, with my limited understanding of the Kinect's structure, getting hold of the value proved to be a challenge.

The point of obtaining this value was to run it through a matrix to rotate and translate the bodies so that they would always be orientated correctly. Although I had created the matrices and had the floor clip plane values, I was executing the commands in the wrong place. By the time I had gained access to the values, created the matrices and realised my mistakes, time was limited and I needed to move on.
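For reference, the sketch below shows one way the floor clip plane can be used for this correction: its x, y, z components give the floor normal in camera space and w the sensor height, so a single rotation maps the measured normal onto world up. This is a sketch of the idea rather than the matrices used in the project; the plane value itself would be read from the body frame and is simply passed in here.

```csharp
using UnityEngine;

// Sketch of a floor-clip-plane tilt correction for Kinect camera-space joints.
public static class KinectFloorAlignment
{
    // Returns a rotation that maps camera space into a space whose
    // Y axis is the true 'up' of the room.
    public static Quaternion TiltCorrection(Vector4 floorClipPlane)
    {
        Vector3 floorNormal = new Vector3(floorClipPlane.x, floorClipPlane.y, floorClipPlane.z);
        // Rotation that takes the measured floor normal onto world up.
        return Quaternion.FromToRotation(floorNormal, Vector3.up);
    }

    // Apply the correction to a joint position and offset it so the floor sits at y = 0.
    public static Vector3 AlignJoint(Vector3 cameraSpaceJoint, Vector4 floorClipPlane)
    {
        Quaternion correction = TiltCorrection(floorClipPlane);
        Vector3 aligned = correction * cameraSpaceJoint;
        aligned.y += floorClipPlane.w; // w = sensor height above the floor, in metres
        return aligned;
    }
}
```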

Although this may have been a fundamental tool in creating a stable piece, I had spent an immoderate amount of time working on this fraction of the technical side, neglecting the creative testing stages that were originally planned. Fortunately, I was able to find a Kinect Unity asset package that helped with precisely mapping models onto the Kinect bodies, as well as positioning them accordingly.

As useful as the asset package was, it came with its own problems. When combined with the scene-altering and scene-switching scripts, the Kinect sensor would no longer turn on; it was clear the package was not fully complete. Realising that some objects shared by both scenes were being destroyed and then duplicated, I applied a singleton pattern to make sure that the 3D models, Kinect manager and certain scripts present in both scenes were not destroyed every time a level was loaded.
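The pattern applied is essentially the standard Unity singleton, sketched below: the first instance survives scene loads via DontDestroyOnLoad and any later duplicates destroy themselves. The class name is illustrative.

```csharp
using UnityEngine;

// Sketch of the singleton pattern applied to objects shared by both scenes
// (Kinect manager, character models, shared scripts).
public class PersistentSingleton : MonoBehaviour
{
    private static PersistentSingleton _instance;

    void Awake()
    {
        if (_instance != null && _instance != this)
        {
            // A copy already exists from the previous scene; remove the duplicate.
            Destroy(gameObject);
            return;
        }
        _instance = this;
        DontDestroyOnLoad(gameObject); // keep this object when the level reloads
    }
}
```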

Throughout the project I continued to create various versions and builds to use as fail-safes, eighteen in total. Not all of these builds are in the repository: I had problems uploading Unity projects in general due to their size, and including every build would take up a ridiculous amount of storage space.

 

 

User Testing

After producing many different versions of the piece I began the user testing stage. Because of the nature of the piece, user testing was done in groups of two or more, so there would always be at least one person appearing on the screen. The very first user tests explored the limits of the Kinect, to see what happens when people leave and re-enter the room. The maximum number of people the Kinect can register is six, but this was less of a problem because of the size of the room: the area in which the Kinect could register people was limited, so there were never too many people on the screen. It was also useful to see how people reacted to appearing on screen as a different model.

The second set of tests was for determining how human presence would be represented. As discussed earlier, I had previously been experimenting with ways to display people entering the room. There were six options available: the green screen feed, silhouetted people, people represented as basic shapes, people represented as lights, 3D silhouette models using a transparent shader, and 3D sound.

Users experienced all six representations in the recreated room and were asked to rank them in terms of:

  • The most interesting.
  • The most relatable.
  • The most suited to the scene.

 

On the most interesting scale the 3D model ranked highest and the shapes lowest, perhaps because the shapes were bland and could easily be dismissed.

On the most relatable scale the green screen was at the top, because it was obvious who was standing in front of the user, and the 3D sound was lowest, mainly because it was too subtle and not fully understood.

Finally, the 3D model ranked highest again for most suited, and the green screen lowest. The 3D model may have ranked highest because it fit the aesthetic of the 3D-built room, while the green screen feed was flat and reminded users of people Photoshopped into architecture photos.

The aim of these questions was to rank the representations in terms of how much attention they would draw to or away from the environment. The most interesting model was meant to be contrasted with the least attractive scene, and vice versa, to highlight the contrast between the scene and the presence of people and further emphasise the theme of negative space. Although I did create a version in which the representation of people changed depending on the scene, the inconsistency when switching models affected the way the scenes flowed together, and it was harder on the Kinect, which had to switch between sensors during scenes. This idea took away from the piece aesthetically, making it seem jerky and too drastic a change. To fix this, the 3D model using the transparent shader was used in all scenes, not only for continuity but for aesthetic and hardware reasons.

The transparent models were created to be the main focus of the scene in the classroom. When the scene changed to one of the two more dramatic scenes, attention was drawn towards those scenes instead. The models, however, still act as an anchor to the physical world, with physical-world interactions visually affecting the virtual world. The classroom acts as a way of being in two places at once, a reminder that even though you are experiencing the virtual world you are still physically in that room.

After adding these changes to the models I tested the almost-finished piece on more users. At this point the piece consisted of the three scenes in the final piece as well as the transparent 3D models. The feedback was mostly positive, with the main suggested change being the addition of sound. Relaxing music was often suggested for the mountain scene and space-like techno for the other. A wind-like soundtrack was added to the snow scene and an eerie, empty soundscape to the space scene to add to the immersion. 3D sound was also added to the models in the space scene, so that you could hear people as they got closer to you. This was not added to the snow scene, as the effect did not fit the atmosphere. In the final piece the sounds were too quiet and had minimal effect.
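The spatial audio amounts to configuring a fully 3D AudioSource on each model, roughly as in the sketch below; the clip, distances and class name are illustrative.

```csharp
using UnityEngine;

// Sketch of the 3D sound attached to the models in the space scene: a looping
// AudioSource set to fully spatial, so people are heard approaching.
[RequireComponent(typeof(AudioSource))]
public class ModelPresenceSound : MonoBehaviour
{
    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.loop = true;
        source.spatialBlend = 1f;      // 1 = fully 3D, volume falls off with distance
        source.rolloffMode = AudioRolloffMode.Linear;
        source.minDistance = 0.5f;     // full volume within half a metre
        source.maxDistance = 5f;       // inaudible beyond five metres
        source.Play();
    }
}
```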

Other aspects changed included edits to the models, such as keeping the door ajar so that people walking past could be seen. The mountains in the distance visibly cut off where rendering stopped; this could not be changed directly, as Unity only renders objects within a certain distance for efficiency, so I increased the fog in the scene to make it less obvious.
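The fog adjustment is a one-off change to Unity's render settings, roughly as sketched below; the density and colour values are illustrative rather than the ones used in the piece.

```csharp
using UnityEngine;

// Sketch of the fog used to soften the visible edge of the distant mountains.
public class SceneFog : MonoBehaviour
{
    void Start()
    {
        RenderSettings.fog = true;
        RenderSettings.fogMode = FogMode.ExponentialSquared;
        RenderSettings.fogColor = new Color(0.85f, 0.88f, 0.92f); // pale, snow-like haze
        RenderSettings.fogDensity = 0.03f; // higher density hides the render cut-off sooner
    }
}
```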

Additional feedback concerned the immersion of the piece, including subtle changes to make the scene more realistic, such as setting the clock to real time and letting the user see their hands. The clock was created with a photo texture, which made it difficult to change, and programming a working clock was not possible in the limited time. To pick up the seated user, the Kinect would either have to be in a completely different spot or more than one sensor would need to run simultaneously. The hands, however, could have been displayed using a Leap Motion controller. Adding this hardware, even though it would contribute to immersion, would also affect the way people view the piece: it could pull attention away from the piece's focus, no longer on the people in the room or the scene itself, but on your own presence.

Being able to navigate around the scenes was also suggested. While this might be an interesting addition, I learnt from the advanced graphics course that navigation is the main cause of motion sickness in virtual reality. The benefit of the idea was not enough to risk the audience becoming motion sick and spoiling the experience overall.

Because of the unexpected humour of the piece, people who first tested it asked about having a way to view themselves, such as making the computer monitor visible. Being able to see what was happening in the seated user's headset would add another dimension to the piece, but overall it would affect the piece negatively: displaying the headset view for all to see would remove the curiosity about what the user was seeing, and it would also remove the experience of realising that the transparent models are actually the people in front of you.

When questioned about their interpretations of the work and the message being conveyed, the answers were surprisingly accurate. Personally, I believed the message could have been clearer and developed more constructively, but nevertheless the majority of the points got through to the audience.

Many viewers understood the importance of the flow between the physical world and virtual space, especially through the recreation of the physical room compared with the other scenes. Others grasped the focus on the people in the room, stating that they felt bound to the room and shifted their focus towards the presence of people inside it. One reason I believe the piece was better understood is that the title gave it context. Caspar Sawyer explained in his masterclass that the title of a piece can sometimes give a different field of view or way of thinking, and I think this was the case with Negative Space.

 

 

Evaluation

Overall the project succeeded in its purpose of demonstrating the flow between physical and virtual space and between human and environmental presence. Despite the fact that the majority of the audience understood the piece, the project did not turn out the way I expected, and personally I felt it was slightly underdeveloped in its conceptual areas.

Negative Space managed to express the crucial themes, but I felt they could have been conveyed more eloquently. This may have been because the design stages of development were somewhat rushed, with a hefty allotment of time spent on the technical problems with the Kinect. Frustratingly, this left little time to fully explore the palette of ideas I wanted to play with. Many of these ideas were in development, including live scenes and previously prepared 360-degree videos, such as a scene on the tube, but they often required more equipment and planning.

This experience has taught me not to underestimate the amount of time that must be allotted to learning and working with new and unfamiliar technology, and to factor in more time for the artistic development side, allowing the ideas to evolve.

An example of this is the way the environment was developed. The rapid alternation from scene to scene might have been too extreme; minor adjustments instead, changing the scene piece by piece (for example the floor turning into grass), could potentially have been more effective. It would also have brought an unusual normality to virtual reality.

Along with the rushed development, I was also swayed by the extravagant ideas achievable in virtual reality, fearing that making something too simple would reflect a lack of work and therefore weaken the piece. In hindsight I struggled with the balance between simplicity and substance and created something that could potentially be seen as a gimmick. Alongside this, I worried about being too obvious when conveying my ideas, and about the piece being too subtle.

With all this in mind I endeavoured to decrease the chances of these factors affecting the piece. I often tried to ground myself by revising my themes and elaborating on them, using brainstorms and mind maps as reminders. Finally, I realised that these development stages cannot be rushed and that vigorous testing must be carried out throughout the development of the piece.

When compared to my previous work, this piece was more successful in this particular area but there is still much room for improvement.

The reaction I received from the audience was surprisingly positive, with most of them understanding some, if not all, aspects of the piece. Even with my earlier worries, people were pleasantly surprised by the changes in the scenes, some in awe of the scenery. There was also a humorous aspect to the piece that I had not considered, with people realising they were on screen and jumping out at their friends. Negative Space also had the unusual property of turning virtual reality, a very singular and isolated experience, into something that brought interaction to everyone in the room, not just the person in the seat.

Creating Negative Space has been an enlightening experience, helping me to further understand the areas of my work that need improvement and to polish fundamental artistic values, while being successful in its own right. It has taught me invaluable skills, both technically and as a developing artist.

 

 

References and Bibliography

[1] Asendorf, Kim. Kim Asendorf. N.p., 2012. Web. 12 May 2017.

[2] Legret, Dorian. "Behance". Behance.net. N.p., 2015. Web. 12 May 2017.

[3] Takahashi, Keijiro. "Keijiro (Keijiro Takahashi)". GitHub. N.p., 2017. Web. 12 May 2017.

[4] "Kinect For Windows V2 Windows Runtime API Reference". Msdn.microsoft.com. N.p., 2014. Web. 12 May 2017.

Unity Assets:

3rd Person Controller + Fly Mode, Vinicius Marques: https://www.assetstore.unity3d.com/en/#!/content/28647

Free MatCap Shaders, Jean Moreno (JMO): https://www.assetstore.unity3d.com/en/#!/content/8221

Free Snow Mountain, ProAssets: https://www.assetstore.unity3d.com/en/#!/content/63002

Kinect v2 Examples with MS-SDK, RF Solutions: https://www.assetstore.unity3d.com/en/#!/content/18708

Planet Earth Free, headwards: https://www.assetstore.unity3d.com/en/#!/content/23399

Sky5X One, RKD: https://www.assetstore.unity3d.com/en/#!/content/6332

Snow Mountain, Svchost74: https://www.assetstore.unity3d.com/en/#!/content/24690

Vast Outer Space, Prodigious Creations: https://www.assetstore.unity3d.com/en/#!/content/38913

Kinect for Windows SDK 2.0: https://www.microsoft.com/en-gb/download/details.aspx?id=44561

 

Gitlab repository:

HTTP:  http://jwu011@gitlab.doc.gold.ac.uk/jwu011/CreativeProjectsThridYear.git

SSH: git@gitlab.doc.gold.ac.uk:jwu011/CreativeProjectsThridYear.git

 

Blog

https://www.tumblr.com/blog/artofwu