
Visualising Machine Created Music Through Cellular Automata

by: Sebastian Smith and Milton De Paula

Description

Our project is a generative music system with FFT-based visuals, built to demonstrate the creative potential within systems. The music is generated from many different rulesets of Wolfram's Elementary Cellular Automata, with a built-in logic system that decides when to change ruleset and when to change the key of the music, so that the melodic content keeps evolving with every listen. For each row of cells, each active cell represents a note being turned on at the corresponding position in the current scale and key; these rows develop into a combination of different phrases expressed musically at different note lengths. The aim of the experiment was to test for creative potential in machines and systems and to see whether it was possible to create expressive music through these means.

Gitlab Repository

 

Background Research and Intended Outcomes

Sebastian

While researching different techniques for generative music, I originally planned to implement a simple Markov chain working over a single-note melody line. However, after hearing many examples of this online, I found the Markov chain approach resulted in quite bland music, so I altered my approach and turned towards generative systems to achieve a sense of natural progression in the music. Through this search I came across the computer scientist Stephen Wolfram, who developed his own class of cellular automata in which the cells are arranged in rows and the state of each row is determined by the cells of the previous row. Upon seeing images of the resulting generations, I could see how the active cells could map directly onto the MIDI score of a piece of music, and how its pyramid-like aesthetic could make for some interesting ascending and descending harmonies.

To get started, I followed Daniel Shiffman's video on creating Wolfram-style cellular automata in Processing and ported the code to openFrameworks so that the cell information could then be translated musically. [1]
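The core of that port is a single update step: each cell's left neighbour, itself, and right neighbour form a 3-bit index into the rule number, whose bit at that index gives the cell's next state. A minimal sketch of that step in plain C++ (independent of the project's actual openFrameworks code, with wrap-around edges assumed):

```cpp
#include <cstdint>
#include <vector>

// One generation of a Wolfram elementary cellular automaton. Each cell's next
// state is looked up from the rule number using the 3-bit neighbourhood
// (left, self, right) of the previous row.
std::vector<int> nextGeneration(const std::vector<int>& row, uint8_t rule) {
    std::vector<int> next(row.size(), 0);
    const std::size_t n = row.size();
    for (std::size_t i = 0; i < n; ++i) {
        int left  = row[(i + n - 1) % n];   // wrap around at the edges
        int self  = row[i];
        int right = row[(i + 1) % n];
        int neighbourhood = (left << 2) | (self << 1) | right; // value 0..7
        next[i] = (rule >> neighbourhood) & 1;                 // that bit of the rule number
    }
    return next;
}
```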

As an experiment in evolutionary music, the intended outcome is to help recognise creative potential within systems and to demonstrate musical expression as a product of randomness. Whether a listener actually recognises and/or enjoys this form of expression is entirely subjective; however, the system's ability to change key and cellular rules helps retain interesting melodic content that sounds different with every listen.

Creative Process

Sebastian

After fully implementing the translation of active cells to notes on a given scale, the initial music generated was already sophisticated enough that I decided to stick with this approach. Throughout the process I experimented with many different ways of deciding which rulesets to use, when to change key, and the range of notes that could be played at any time. Initially, rulesets that generated large numbers of active cells created incredibly dissonant sounds, since they were telling every note in a scale to play at once. This needed to be handled in a way that minimised the dissonance so the music could stay consistent, although I wasn't keen on the idea of simply discarding these kinds of sounds completely. I found that when these rulesets were active they gave the music more character: the contrast of switching between a row with few active cells and a neighbouring row with many created a sense of chaos and order, as if the computer were banging a load of notes together. One of the most intriguing examples of this used in the project is rule 65, which is shown below:

The nature of this ruleset gave the same effect mentioned before, but now with a descending phrase following every other note. This highlights an advantage of computer-made music that I think has not been widely discussed yet: the number of notes being played at once, with intricate patterns in between, which would obviously be impossible for a single musician. Granted, you could write arrangements to fill in the rest of the notes, but it would then become obvious that you are listening to multiple musicians, whereas this project's form of generative music makes the concept of an arrangement ambiguous, since all melodies are played within the same confines of the system.
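As a concrete illustration of the cell-to-note translation described above, here is a minimal sketch, assuming the current scale is held as a vector of frequencies; the names are illustrative rather than the project's actual classes:

```cpp
#include <algorithm>
#include <vector>

// Each active cell in a row switches on the note at the same index of the
// current scale. A row with many active cells therefore produces the dense,
// dissonant bursts described above.
std::vector<float> notesToPlay(const std::vector<int>& cells,
                               const std::vector<float>& scaleFrequencies) {
    std::vector<float> active;
    const std::size_t n = std::min(cells.size(), scaleFrequencies.size());
    for (std::size_t i = 0; i < n; ++i) {
        if (cells[i] == 1) {
            active.push_back(scaleFrequencies[i]); // this oscillator sounds on this beat
        }
    }
    return active;
}
```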

Milton

For interacting with the app we first looked at the Leap Motion. We thought it would be interesting to have the user interact with the music in an almost metaphysical way, as if they were touching the sound (a pretentious way of saying AR). Unfortunately, we couldn't quite find a way to make the interaction intuitive; we had many ideas, but none of them made it obvious how to use the app without first reading instructions or watching someone else use it, and this ultimately led us to scrap the idea of using the Leap Motion as an interface for the app.

The idea for the visuals came to me in a rather funny way. I got inspiration from many sources.

The colour palette I chose (#C23836, #642E68, #00BACC and #EEC053) was taken straight from the fourth issue of 'The Ride', a cycling magazine. The final idea itself came from a stroll during which I was thinking through a programming problem; for some reason outer space came to mind, in particular Saturn. I say 'for some reason', but that's not very accurate: I was trying to implement a particle system and wasn't sure how I was going to implement the physics of it all, and one thought led to another until I eventually arrived at outer space. When it came to the colours, the deep purple was chosen to evoke a sense of the cosmos. The red, I thought, would make a nice colour to evoke warmth, like the warmth portrayed in the magazine cover I took inspiration from. The yellow was chosen because I wanted a nice warm colour to represent stars.

The idea I settled on towards the end was to have a planet as the centrepiece of the work, pulsating according to the BPM of the generated sound; stars slowly drifting by in the background with a parallax effect, also pulsating to the beat; and, finally, the planet's outer rings drawn as sine waves reacting to the music. I also wanted to use a particle effect on the stars, not to represent the stars themselves but the warm glow surrounding them. I also took inspiration from Daniel Shiffman's video on FFT as well as this gif.
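One way the FFT data could drive the pulsation is to collapse the low end of the magnitude spectrum into a single energy value, smooth it, and scale the planet's radius by it. This is a minimal sketch under assumptions (the magnitudes might come from ofxMaxim's maxiFFT; the bin range and smoothing factor are illustrative, not the project's actual values):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Map an FFT magnitude spectrum to a planet radius: average the low bins into
// one energy value, low-pass it to avoid jitter, and push the radius outward.
float planetRadius(const std::vector<float>& magnitudes,
                   float baseRadius, float& smoothedEnergy) {
    if (magnitudes.empty()) return baseRadius;
    const std::size_t lowBins = std::min<std::size_t>(8, magnitudes.size());
    float energy = std::accumulate(magnitudes.begin(),
                                   magnitudes.begin() + lowBins, 0.0f) / lowBins;
    smoothedEnergy = 0.9f * smoothedEnergy + 0.1f * energy; // simple smoothing
    return baseRadius * (1.0f + smoothedEnergy);            // pulse with the beat
}
```

The same smoothed energy value could scale the star brightness and the amplitude of the sine-wave rings, so all three elements pulse together.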

Build Process

Sebastian

The final build of the project is based around the initial bundle of ideas for the audio from a project named "GenerativeTestCA" in my sub-folder, which is where most of the build on my side took place. Learning the basic ins and outs of Maximilian proved to be somewhat of a challenge at times, given the lack of reference material. Fortunately, I came across Mick Grierson's Advanced Audio-Visual Processing module on learn.gold, where he had recorded lectures teaching the basics of Maximilian and provided real-time examples of many different applications of the library. Once I had figured out how to output a sine wave at a given frequency and control its output, it was time to start organising the core musical logic of the system for easy manipulation afterwards.
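That first step looks roughly like the following sketch, assuming the ofxMaxim addon and a recent openFrameworks (0.10+) sound stream API; the frequency and gain values are illustrative rather than taken from the project:

```cpp
#include "ofMain.h"
#include "ofxMaxim.h"

// A single Maximilian oscillator at a fixed frequency, written into
// openFrameworks' audio callback.
class ofApp : public ofBaseApp {
public:
    maxiOsc osc;              // one sine oscillator
    double frequency = 440.0; // Hz
    double gain = 0.3;        // output level

    void setup() override {
        ofSoundStreamSettings settings;
        settings.numOutputChannels = 2;
        settings.sampleRate = 44100;
        settings.bufferSize = 512;
        settings.setOutListener(this);
        ofSoundStreamSetup(settings);
    }

    void audioOut(ofSoundBuffer& buffer) override {
        for (std::size_t i = 0; i < buffer.getNumFrames(); ++i) {
            double sample = osc.sinewave(frequency) * gain;   // one sample of the sine
            buffer[i * buffer.getNumChannels()]     = sample; // left channel
            buffer[i * buffer.getNumChannels() + 1] = sample; // right channel
        }
    }
};
```

Layering several such oscillators, one per active note, gives the stacked triangle/sine voices the project uses as its single instrument.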

I used a class labelled music to store a total of 129 notes and their frequencies, and implemented a method that returns the frequency corresponding to a note's letter and octave, e.g. "n2f(A, 4)" would return 440 Hz. Once this was done I made a separate class for handling scales, which takes a root note and an array of intervals describing how the root reaches its upper octave, then multiplies those frequencies by powers of two (up to 2^5) to give the scale an octave range of five. This covered a single key in a single scale, so to finish the implementation I created as many scale objects as possible, so that every scale in every key was stored and ready to go during the setup process of the program.
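A compact way to express the same two ideas, using the equal-temperament formula instead of the project's lookup table of 129 stored notes (names and defaults here are illustrative):

```cpp
#include <cmath>
#include <vector>

// Equal temperament: MIDI note 69 (A4) is 440 Hz, and each semitone is a
// factor of 2^(1/12).
float midiToFrequency(int midiNote) {
    return 440.0f * std::pow(2.0f, (midiNote - 69) / 12.0f);
}

// Build one key of a scale across several octaves from a root note and a set
// of semitone intervals (e.g. major: 0 2 4 5 7 9 11), mirroring the
// five-octave range described above.
std::vector<float> buildScale(int rootMidiNote,
                              const std::vector<int>& intervals,
                              int octaves = 5) {
    std::vector<float> frequencies;
    for (int octave = 0; octave < octaves; ++octave) {
        for (int interval : intervals) {
            frequencies.push_back(midiToFrequency(rootMidiNote + 12 * octave + interval));
        }
    }
    return frequencies;
}
```

Building every key of every scale up front, as the project does, then just means calling something like buildScale once per root note and interval set during setup.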

Then came structuring the music, which took a while to get absolutely right. Originally I calculated the time between beats from the BPM and the frame rate, which was a terrible idea in hindsight: if the frame rate dropped because the program was under heavy load, the tempo of the music would slow down, causing massive inconsistencies in its flow. This was solved by timing beats in milliseconds instead, and if the BPM was updated in any way, the program would pick up the change within the next frame, a negligible delay between tempo changes. All of the decisions about when to change ruleset, which ruleset to use and when to change key come down to random number generators, tweaked to give each of these changes a certain chance of happening for an optimal variety of music.
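A simplified sketch of a millisecond-based beat clock like the one described above (in openFrameworks the elapsed time would typically come from ofGetElapsedTimeMillis(); the structure and names here are illustrative, not the project's exact code):

```cpp
#include <cstdint>

// Schedules beats in elapsed milliseconds rather than counting frames, so a
// dropped frame only delays a trigger slightly instead of slowing the tempo.
struct BeatClock {
    double bpm = 120.0;
    uint64_t nextBeatMs = 0;

    // Call once per frame with the current elapsed time; returns true on a beat.
    bool tick(uint64_t nowMs) {
        if (nowMs >= nextBeatMs) {
            nextBeatMs = nowMs + static_cast<uint64_t>(60000.0 / bpm); // ms per beat
            return true;
        }
        return false;
    }

    // A BPM change is picked up when the next beat is scheduled.
    void setBpm(double newBpm) { bpm = newBpm; }
};
```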

Evaluation

Sebastian

Our initial project goals were clearly much too ambitious for us to achieve. We originally wanted the user to be placed in a 3D environment where the generated music would be affected by user input through hand gestures via a Leap Motion controller. In terms of the audio output, I believe I have demonstrated the original idea of our project's task of generative music. Towards the final push before the deadline I implemented a minimal graphical user interface so the user would have some form of interaction with the music, although we originally wanted the user to affect the progression of the music as well as its overall tone and feel (e.g. happy/sad, bright/dark), and then have the visuals reflect that. This would have required a logic system that was beyond my skill set, which is why I decided to stick to fleshing out the generative part of the music as much as I could, making it seem as natural and interesting to the user as possible.

I also planned to purchase Ableton Live and use the openFrameworks Ableton add-on to send MIDI information from our program, to improve the quality of the audio and music with custom synthesisers. However, since we were pressed for time, I felt this was not a priority: its purpose would simply have been to make everything sound nicer, and it would not have added to the core purpose of the project, which still stands as a demonstration of algorithmically generated music.

In essence, there is one instrument being played by the system: many triangle wave oscillators layered on top of each other. This single instrument helps highlight the musical effects of the cellular automata, although the original idea was to create full pieces of music, ideally with a rhythm section driving the main ideas. I attempted to implement this by adding drum samples as WAV files to the project, but for some odd reason Maximilian did not like this and would play each WAV file as a frenzy of noise. I experimented with exporting the drum samples at different bit depths, sample rates and in stereo/mono, but the result was never the clean signal it should have been, so I was forced to scrap the rhythm section completely.

Milton

I was not very successful at achieving my goals. As Sebastian mentioned, we were quite ambitious, which I don't think was a bad thing, but I did let my passion and enthusiasm for the project die. As the deadline approached, Sebastian and I quickly realised that we weren't going to meet our original ambitions for the project. We talked quite a bit about ideas, including having a 3D environment that users could somehow interact with. As previously mentioned, we wanted the user to interact with the app using the Leap Motion. Unfortunately, I found 3D very tricky, and I still can't implement much in 3D. I think I fell into the trap of the sunk cost fallacy, telling myself that I had spent so much time trying to understand 3D graphics that I just had to use it. This turned out to be a huge waste of time; I should have put my ego aside at the very beginning and moved on to something else. I did eventually move on from 3D, but I left it a bit too late. I then tried to figure out particle effects, which I got the basic concepts of but never developed to the point I wanted. Before the idea mentioned above, I wanted particle effects representing flames, with the flames representing the sound frequencies, similar to how it is in this video. Unfortunately I couldn't figure out how to model the flames in code. So, after much wasted time, I limited myself to working only in 2D for this project, by which point the deadline was fast approaching.

I also had much difficulty getting access to the sound frequency data being used to produce the audio output, which led me to use a bit of a hack solution (and not the good kind of hack). This caused some threading issues, which I eventually tracked down. The threading problem was one that just didn't want to go away, though; I had to change how I fetched the data a couple of times before it stopped breaking everything.
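The underlying issue is that in openFrameworks the audio callback runs on a different thread from update/draw, so sharing analysis data between them needs synchronisation. One generic pattern (a sketch, not the project's actual fix) is to copy the data under a lock:

```cpp
#include <mutex>
#include <vector>

// Hand analysis data from the audio thread to the draw thread safely: the
// audio callback writes into a shared buffer under a lock, and the draw loop
// takes a private copy under the same lock.
class SharedSpectrum {
public:
    // Called from the audio thread whenever a new analysis frame is ready.
    void write(const std::vector<float>& magnitudes) {
        std::lock_guard<std::mutex> lock(mutex_);
        latest_ = magnitudes;
    }

    // Called from the update/draw thread; returns a copy to render from.
    std::vector<float> read() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return latest_;
    }

private:
    mutable std::mutex mutex_;
    std::vector<float> latest_;
};
```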