by: Dimitrios Kyriakoudis
Domain: Warped is an executable audiovisual composition that draws its main inspiration from the demoscene. Demos are small programs made up of a series of algorithms that, given the input of time, produce audio and visual output to form an artistic composition, or a demonstration of creative and programming skills. Unlike videos, which record information and then retrieve it for playback, demos execute calculations in real time to produce, display, and animate their content. Domain: Warped uses a ray marching algorithm that, in real time, calculates and displays imaginary spheres positioned in an imaginary three-dimensional space. These spheres, and the space they exist in, are then stretched, compressed, and deformed using sculpted noise.
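The core of a ray marcher can be sketched in plain C++ (openFrameworks ships glm vectors, but a minimal stand-in keeps the sketch self-contained); `sdSphere` and `rayMarch` are illustrative names, not the project's actual functions:

```cpp
#include <cassert>
#include <cmath>

// Minimal 3D vector for the sketch (the real project would use glm::vec3).
struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
};

// Signed distance from point p to a sphere of the given radius at the origin.
float sdSphere(Vec3 p, float radius) { return p.length() - radius; }

// March a ray from `origin` along unit direction `dir`; return the distance
// travelled to the surface, or -1 if nothing was hit.
float rayMarch(Vec3 origin, Vec3 dir, float radius) {
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        float d = sdSphere(origin + dir * t, radius);
        if (d < 0.001f) return t;  // close enough: surface hit
        t += d;                    // the distance field makes this step safe
        if (t > 100.0f) break;     // ray escaped the scene
    }
    return -1.0f;
}
```

In a fragment shader the same loop runs once per pixel, with the ray direction derived from the pixel's screen position.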
The accompanying music consists of a pre-recorded track, made with a polyrhythmic FM drum machine built in Max/MSP and mixed in Reaper. While the audio is being played back, each of the two channels is analyzed to extract a separate RMS value that acts as an approximation of that channel's perceptual loudness. This data is then passed to the algorithms producing the graphics, where it is used to animate various parameters such as object size or the amount of deformation applied. All these elements are brought together and managed by code written in C++ using openFrameworks.
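The mapping stage can be sketched as below, assuming a per-frame update; the smoothing factor and parameter ranges are illustrative, and `mapToRange` mirrors what openFrameworks' ofMap does:

```cpp
#include <cassert>
#include <cmath>

// One-pole smoothing so a visual parameter eases toward the incoming
// RMS value instead of jittering from frame to frame.
float smoothTowards(float current, float target, float amount = 0.9f) {
    return current * amount + target * (1.0f - amount);
}

// Linearly map a smoothed RMS value (roughly 0..1) onto a parameter
// range, e.g. sphere radius or warp amount.
float mapToRange(float v, float inMin, float inMax, float outMin, float outMax) {
    float t = (v - inMin) / (inMax - inMin);
    return outMin + t * (outMax - outMin);
}
```

Each frame, the current RMS of a channel would be smoothed and then mapped onto whichever parameter it drives, e.g. `radius = mapToRange(smoothedRms, 0, 1, 0.5f, 2.0f);`.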
Artistically, Domain: Warped aims to build an abstract audiovisual environment where subtle causal links are established between what the audience hears and what they see. By using procedural content generation rather than collections of data, the graphics can respond to the incoming audio in ways that feel natural and fundamental to the identity of the objects themselves.
Download it here.
My interest in audiovisual interactions was sparked when I first came across the video clip for Autechre's Gantz Graf, directed and animated by Alex Rutterford. Even though both the music and the visuals were abstract enough to bear no resemblance to the everyday life around me, the way they interacted with each other seemed as thorough and as natural as anything in the physical world. Using manually placed timepoints to trigger events and animate variables, Rutterford's clip managed to create the perceptual illusion of a deep, underlying connection between what I was hearing and what I was seeing.
Further research into similar kinds of video clips led me to discover the demoscene and procedural modelling. Shortly after, I came across the work of groups such as Farbrausch and ASD. More specifically, it was the work of Inigo Quilez, a prominent member of the scene, that served as inspiration, while his code and articles served as a textbook for both shaders and procedural graphics. I was impressed by how clean and at the same time complex the rendered scenes could be; interestingly enough, these are the same perceptual features that I attribute to FM synthesis in general.
The music of Autechre, the visuals of Inigo Quilez, and the interplay between them crafted by Alex Rutterford were the main influences that shaped both my artistic and programming interests at a fundamental level.
The element of demos that initially piqued my interest is their self-contained nature. Instead of borrowing textures and samples to make an audiovisual collage, both media types are synthesized from the ground up. This, I feel, enables the artist(s) to become the sole architects of the world their art projects. Having complete control over all aspects of one's raw materials allows for lower-level creative decisions concerning aesthetics, form and structure, or the underlying interactions between the various elements. This lent itself perfectly to my fascination with shader graphics, especially ray marching, and with FM synthesis, so there was no debate about the media synthesis toolkit I was going to use.
Even though most demos utilize sound and sound reactivity, either pre-programmed or in real time, I felt that the depth and complexity of those interactions are usually overlooked. In Domain: Warped I wanted to explore how sound can reach into an object's source code and modify it in more subtle ways, aiming to create a perceived sense of causality between what the ears hear and what the eyes see.
Over the development process I tried different ways of obtaining control data from sound, such as applying a Fast Fourier Transform to get frequency content or calculating the Root Mean Square to get an estimate of loudness. Even though I found both useful in their own ways, I ended up sticking with RMS for two reasons. First, while the FFT can be very accurate, I found that it requires a lot of processing of the resulting data to sculpt it into anything useful or perceptually accurate. Second, given my decision to keep the music mostly percussive, the FFT seemed like overkill and an inefficient path to follow.
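The RMS side of this can be sketched as follows, assuming interleaved stereo samples; `channelRMS` is a hypothetical helper, not code from the project:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Root Mean Square of one channel of an interleaved stereo buffer
// (samples alternate L, R, L, R, ...). channel is 0 for left, 1 for right.
float channelRMS(const std::vector<float>& interleaved, int channel) {
    float sum = 0.0f;
    int count = 0;
    for (std::size_t i = channel; i < interleaved.size(); i += 2) {
        sum += interleaved[i] * interleaved[i];  // accumulate squared samples
        ++count;
    }
    return count > 0 ? std::sqrt(sum / count) : 0.0f;
}
```

In practice this would run over a short window of recent samples each frame, yielding one loudness estimate per channel.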
Build Problems Encountered
As this was my first project implemented in C++, some build problems were to be expected. One of the earlier ones encountered was the program not being able to find files I had put in the data folder. After many hours of unsuccessful online searching, I tried printing the directory the Visual Studio project was referring to. To my surprise, I saw that it was not the current directory of the project but the directory of the empty project I had copied when starting out. I simply changed the path, and I have since been more conscious of the state of my project's files and directories.

Another side problem arose when I accidentally committed a load of unwanted files to the Git repository. Failing to see that the .gitignore file did not cover any of the unwanted files created by builds on Windows, I added all files, committed, and ended up with an unnecessarily large repo. To make matters worse, I went on to add a few more commits, so I could no longer just amend the last commit. After trying recursive branch filtering with mixed results, I decided it was best to make a new repo and make sure that everything, including the .gitignore file, was set up right from the beginning. Only after going through that process did I get more comfortable with Git's internal structure and start appreciating its paradoxical simplicity and utility. Even though I still struggle with it at some points, I can now see why it has become a standard, and I got a small taste of what working on a large project with numerous people would be like.
In addition, when I tried the per-channel RMS calculation, I realized that the method I had been using to play back the audio files actually played only the left channel of the stereo file I provided. After finding nothing in the documentation for the Maximilian addon, I looked at the source code and realized that the play() function returns only one sample at a time, so I worked around it by using a second player to achieve parallel playback of both channels. Looking through the source code for Maximilian gave me a better understanding of the inner workings of C++ audio and also showed me that, if nothing else is available, the source code can often be a valuable resource.
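The workaround can be sketched with a stand-in player type; `MonoPlayer` is hypothetical and only mimics the one-sample-per-call behaviour described above, not Maximilian's actual API:

```cpp
#include <cassert>
#include <vector>

// Stand-in for a sample player whose play() returns one sample per call.
// Two instances, each fed one channel of the stereo file, give parallel
// left/right playback.
struct MonoPlayer {
    std::vector<float> samples;
    std::size_t pos = 0;

    float play() {  // returns the next sample, looping at the end
        if (samples.empty()) return 0.0f;
        float s = samples[pos];
        pos = (pos + 1) % samples.size();
        return s;
    }
};
```

In the audio callback, each output frame would then be filled with `out[0] = left.play(); out[1] = right.play();`, keeping the two channels in lockstep.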
An issue I am still facing is the app crashing on exit. As far as I understand, since the audioOut function runs on a separate thread, better thread management is needed for the application to terminate smoothly. In addition, since my computer does not have a dedicated graphics card, it can only display so much before it slows down and becomes unusable. This also meant that any attempts at screen capturing would deliver unusable framerates or simply crash my computer.
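One common pattern for this kind of shutdown problem, sketched here with std::thread rather than openFrameworks' actual audio callback machinery, is an atomic flag that the audio loop polls and the main thread sets before joining:

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <thread>

// Shared flag: the audio thread checks it, the main thread clears it
// on exit and joins before tearing down any audio resources.
std::atomic<bool> running{true};

void audioLoop(std::atomic<int>& blocksProcessed) {
    while (running.load()) {
        ++blocksProcessed;  // stands in for filling one audio buffer
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}
```

The key point is ordering: stop the thread first, join it, and only then release the buffers and devices it was using; a crash on exit usually means the callback ran after its resources were freed.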
Going through the process of trying to fix all these build problems and rebuilding the project helped me gain a better intuition about the different aspects of a C++ project. At any given moment I am now more conscious of the file and directory structures in my projects and I am slowly developing a routine aimed at being more effective with file and code management. I have also realized the importance of establishing good file management and version control from the beginning.
The project's design has been through many different stages and iterations. Even though making an abstract audiovisual composition with visuals that react to the sound was the main goal from the beginning, the original plan involved a group collaboration and producing a video instead of a real-time executable. This was because the graphics processes we had chosen, such as pixel sorting, do not lend themselves to real-time processing due to their time complexity or other requirements. When the group collaboration proved unsuccessful, I had just discovered fragment shaders and the demoscene in general. Having never programmed computer graphics before, I felt that the mathematical modelling character of procedural shader drawing was more effective than the shapes and meshes of traditional computer graphics when it comes to drawing or manipulating complex scenes. Even though it seemed easier at the time to stick with simpler Processing-like graphics, I felt that procedural shaders were the better choice because of the unrivalled control and quality they offered. I also felt it would be a great opportunity to learn and apply some of the concepts I had been so impressed by when watching demos.
Representing visual information with mathematics, rather than with a disorganized pile of data, allows for much greater control over the way the image is constructed and displayed. This, in turn, has many implications for the kind of visual, and by extension artistic, results that the algorithm can offer. By maintaining mathematical control over the exact specifications of every element in the composition, from shape and position to colour and texture, a much finer and subtler manipulation of the image becomes possible.
The manipulations that Domain: Warped focuses on are domain warping and domain repetition. A perfect sphere drawn on a set of perfect axes is a simple sphere, but a perfect sphere drawn on axes that have been stretched, compressed, or deformed in any other way suddenly carries and projects those deformations. Similarly, if the sphere is expressed as a function of space and that space gets repeated, the sphere is going to repeat too. Combining the two, the result is an infinite sea of spheres, each carrying the deformation of its respective region of space.
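Both operations can be sketched directly on a signed distance function; the names are illustrative, and a sine wave stands in for the sculpted noise used in the actual shader:

```cpp
#include <cassert>
#include <cmath>

// Signed distance to a unit sphere at the origin, as a function of space.
float sdSphere(float x, float y, float z) {
    return std::sqrt(x * x + y * y + z * z) - 1.0f;
}

// Domain repetition: folding each coordinate into one cell repeats the
// sphere on an infinite grid with the given cell size.
float repeated(float x, float y, float z, float cell) {
    auto fold = [cell](float v) {
        float m = std::fmod(v + cell * 0.5f, cell);  // centre cells on origin
        if (m < 0.0f) m += cell;                     // keep the result positive
        return m - cell * 0.5f;
    };
    return sdSphere(fold(x), fold(y), fold(z));
}

// Domain warping: distorting the input coordinates before evaluating the
// field deforms the sphere itself.
float warped(float x, float y, float z, float amount) {
    return sdSphere(x + amount * std::sin(y * 3.0f), y, z);
}
```

Because both operators act on the coordinates rather than on the sphere, they compose freely: warping the folded coordinates yields the sea of deformed spheres described above.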
Looking back at where I was at the beginning of the project, I feel I have made great progress both in programming skills and in project management. I am satisfied with the quality of the visuals I was able to produce, and I learned a lot about computer graphics in general along the way. Considering that I had to rethink a large part of the project when the group collaboration ended, and that it was my first time working with computer graphics, I am pleased with both the tools and the skills I have acquired by working on Domain: Warped.
I believe the main shortcoming of the project to be its lack of artistic composition. When I realized I would have to produce the visuals as well as everything else that constitutes the project, I started investing a lot of time in learning how computer graphics work and behave, especially since I was inexperienced in graphics programming. I eventually ended up spending most of my time playing with shaders and tweaking parameter values, and found myself with little to no time left to actually put the composition together. I now feel that, when approaching composition for digital media, the actual composition process has to go hand in hand with the development of the software infrastructure. Since the computer can only do what it is programmed to do, one's ability to add certain forms of functionality to the piece depends on one's ability to implement them in software. I also realized I had grossly underestimated the time and effort required for parameter tweaking to achieve an envisioned effect. Especially when it comes to timing, I found that composing animation sequences requires a more analytical and deconstructive way of thinking about motion over time.
Another aspect of the project I am not completely satisfied with is the music. Coming from a predominantly musical background, I felt confident in my ability to compose and mix something in a short period of time. I thus felt safe postponing the production of the music in favor of gaining a better understanding, and hence better results, in the graphics domain. Looking back, I realize that working on the music earlier would not only have aided the mixed compositional process but would also have guaranteed a better musical result. For now, the music serves as a template upon which I can fine-tune and experiment with the ways the graphics react to sound.
As it stands, Domain: Warped is more of an interactive experiment in audiovisuals than a composition. No clear form or structure is specified, and the visuals do not unfold over time. Rather, mouse and keyboard controls are made available for the audience to play with. In addition, since the GLSL shader code has to be compiled every time the application runs, it is also available for anyone to read and alter in any way they see fit: simply open it with any text editor, save some changes, and press 'r' in the program to reload the shader.
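The reload step amounts to re-reading the shader source from disk and recompiling it. Below is a minimal sketch of the file-reading half in plain C++; in openFrameworks itself, ofShader::load performs both steps:

```cpp
#include <cassert>
#include <fstream>
#include <sstream>
#include <string>

// Read a shader file from disk into a string. Calling this again after the
// file has been edited picks up the changes; the result would then be
// handed to the shader object for recompilation.
std::string loadShaderSource(const std::string& path) {
    std::ifstream file(path);
    if (!file) return "";              // missing file: return empty source
    std::ostringstream contents;
    contents << file.rdbuf();          // slurp the whole file
    return contents.str();
}
```

Binding this to a keypress (the 'r' key above) makes the shader itself part of the interface: edit, save, reload, and the scene changes without restarting the program.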