I began considering the idea of intentional music back in the early '90s. I was into chaos theory and loved exploring fractal-generation software, and I spent a lot of time pondering the beauty of self-similarity. One thing that always stood out to me about the Mandelbrot set was its apparent simplicity and deceptive complexity. By taking a simple formula, running some numbers through it, plotting a color on the screen, then feeding the result back through, you could generate some beautiful artwork.

Of course, to make all of this work, you had to define some things up front to ensure the end result looked appealing. I spent a lot of time thinking of ways to create "fractal music" at the time, and even participated in the online forums on the topic. I tried the various experimental software packages back then and wasn't really happy with what I could produce. It just didn't sound good. Part of that was due to the limitations of the General MIDI nature of the tools, but mostly it was because fractal music just wasn't any good. What worked for visual art definitely did not work for audio art.

So I spent a lot of time thinking about what needed to change to make it work. I tried a lot of different approaches, but I could never get my skills to match my ambitions. I went back to my day job as a business software developer and kept the ideas in the back of my head.

Over the years, I acquired more musical software and hardware and practiced with it by making somewhat abstract "music", but again, I just wasn't happy with what I was producing.

After about three decades of experience in software development, I'm finally at a point where I can apply my skills to make the music that I want to hear. It doesn't hurt that modern tools have improved immensely, either.

So, I've been developing what I refer to as "Intentional Music Programming". I mentioned it in a previous blog post, but I didn't go into much detail and left the topic undefined.

So, what makes an Intentional Music Program?

Well, there are two primary structures that go into the program. Here they are, with a brief discussion of what each one consists of (a simplified Swift sketch follows the list):

  • The Song - Name, Duration, a Collection of Tracks, and an array of numbers that serves as the song's "seed". A seed can be thought of as the primary driver behind the song, defining the variability of notes, duration, and "openness to change". It's like an attitude for the song.
  • The Track - Name, Patch (instrument), Period (the note's length relative to the track, modified by the seed), and "affluence", the track's overall impact on the other tracks in the song. Tracks with a higher affluence value have a greater impact on the overall sound of the song. The last property of a track is the optional chord: an array of numbers that defines which relative notes the track will prefer when constructing chords. If no chord is defined, the track tends to stay with a single note per song event.
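To make that a little more concrete, here's a simplified sketch of what these two structures might look like in Swift. The property names and types below are illustrative stand-ins, not the exact definitions from my project:

```swift
import Foundation

// Illustrative sketch only; names and types are stand-ins for the
// real definitions.
struct Track {
    let name: String
    let patch: Int          // instrument (e.g. a MIDI patch number)
    let period: Double      // note length relative to the track, modified by the seed
    let affluence: Double   // this track's impact on the other tracks in the song
    let chord: [Int]?       // preferred relative notes for chords; nil means single notes
}

struct Song {
    let name: String
    let duration: TimeInterval   // seconds
    let tracks: [Track]
    let seed: [Int]              // drives variability of notes, duration, and "openness to change"
}

// Example: a slow pad that prefers major triads, driven by a short seed.
let pad = Track(name: "Pad", patch: 89, period: 4.0, affluence: 0.8, chord: [0, 4, 7])
let song = Song(name: "Drift", duration: 300, tracks: [pad], seed: [3, 1, 4, 1, 5])
```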

Once these properties have been defined, they are fed into the Intentional Music algorithm. This algorithm started life as a simple loop but has grown into a multi-step process that considers each variable and the interactions between variables, all driven by the seed. Without getting into too much detail, this is the actual "brains" of the whole thing.
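As a very rough illustration of the loop's shape only (the real algorithm has more steps, and tracks influence one another through their "affluence"), it looks something like this, building on the sketch above:

```swift
// Rough sketch of the generation loop's overall shape, not the real thing:
// walk each track's timeline, letting the seed nudge pitch and timing.
func generateEvents(for song: Song) -> [(time: Double, track: Track, notes: [Int])] {
    var events: [(time: Double, track: Track, notes: [Int])] = []
    for track in song.tracks {
        var time = 0.0
        var step = 0
        while time < song.duration {
            // The seed drives the variability at each step.
            let seedValue = song.seed[step % song.seed.count]
            let root = 60 + seedValue    // arbitrary base pitch for the sketch
            // A track with a chord preference builds it; otherwise, a single note.
            let notes = track.chord?.map { root + $0 } ?? [root]
            events.append((time: time, track: track, notes: notes))
            // The track's period, modified by the seed, sets the next event time.
            time += track.period * (1.0 + Double(seedValue) / 10.0)
            step += 1
        }
    }
    return events.sorted { $0.time < $1.time }
}
```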

When I was younger, I would sit and think about what Artificial Intelligence would look like. I had a Korg PolySix synthesizer that the previous owner had modified to accept an external input and process it through some of the synthesizer's circuits. I decided it would be a good idea to run the output of the synthesizer through a reverb box and back into the synthesizer, producing a reverb/synth processing feedback loop. In my mind at the time, the feedback loop was the key to an "intelligent" machine. By feeding information into a computer, processing it, then taking the processed output and feeding it back into the algorithm in a loop, somehow the computer would magically gain sentience.

Obviously, it's a bit more involved than that, but modern machine learning looks similar to my thought experiments from back then.

While I'm not applying machine learning to my Intentional Music algorithm, it isn't far from being able to take advantage of models trained on music I have created to produce music that I would like to hear. However, that's a bit beyond my current skill set, and my current computer seems to support only image and sound identification, not prediction.

I am now working on the algorithm to support more advanced track features, like variable note length per seed event, but right now I'm pretty happy with what it created after my first few days of development.

Here is the album Intentional Ambience on Bandcamp... written entirely in the Swift programming language, with post-processing in Apple's Logic Pro DAW.

Enjoy.