Adding Scales to Intentional Music

I spent the last few days adding another algorithm to my Intentional Music (algorithmic composition) arsenal. This time, I chose to focus on adding the ability to choose notes from musical scales. My first scale was the first octatonic (diminished) scale from jazz, as defined in the Wikipedia article on jazz scales.

It was quite a challenge at first, because my original note-choosing algorithm was essentially random. I controlled the notes that were played relative to a base note, but they weren't constrained to a specific scale.

It didn't take me long to determine that the octatonic jazz scale didn't work well with ambient music (at least not the ambient music I was trying to make), so I took it in a different direction and made some industrial-sounding soundscapes.

I first created an array with all of the notes from the first scale as listed in the article, stored as MIDI note numbers relative to note 0.

// Octatonic 1: semitone offsets relative to MIDI note 0
let JazzScale1: [Int] = [3, 5, 6, 8, 9, 11, 12, 14, 15]
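Stored this way, the scale's alternating whole-step/half-step structure is easy to verify. A quick sketch (not from the actual code) that derives the interval pattern:

// Difference between each entry and the one before it
let intervals = zip(JazzScale1.dropFirst(), JazzScale1).map { $0.0 - $0.1 }
// intervals == [2, 1, 2, 1, 2, 1, 2, 1] (the whole-half diminished pattern)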

I then expanded my track definition so that I could add the octave to it.

{ "name": "C", "octave": 1, "chord": [0,2,6], "offset": 5, "period": 15, "inPosition": 100, "affluence": 6 }

Octave values in the file are offset by 1 from MIDI octaves: I add 1 to cancel out the -1 octave that is in the MIDI definition (MIDI octave -1 becomes 0, 0 becomes 1, and so on).
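Putting that offset together with the scale array, computing a concrete MIDI note number is simple arithmetic. A hypothetical helper (the name and signature are mine, not the post's code):

// With "octave": 1 (MIDI octave 0) and JazzScale1[0] == 3,
// this yields MIDI note 1 * 12 + 3 = 15 (a D#/Eb).
func midiNote(octave: Int, scaleOffset: Int) -> Int {
    return octave * 12 + scaleOffset
}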

While processing the track, I grab a random number between 1 and 4 and use it to divide up the note's duration: if it's 4, I divide the duration by 4 and add 4 notes; if it's 2, I divide by 2 and add 2 notes (likewise for 1 and 3). It's arbitrary at the moment, and I need to work on adding some intelligence to this decision.

// `duration` and `startTime` come from the surrounding track-processing code
let subdivisions = Int.random(in: 1...4)
let newDuration = duration / Float(subdivisions)
for newPos in 0..<subdivisions {
    note = getJazzNote(song: song, track: trk)
    t.addNote(timeStamp: startTime + Double(newPos) * Double(newDuration),
              duration: newDuration,
              note: UInt8(note), velocity: 100, channel: channel)
}
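One way to add that intelligence might be to weight the subdivision counts instead of drawing them uniformly. A rough sketch of the idea (the weights are placeholders I made up, not anything from the actual code):

// Favor fewer subdivisions: 1 note is most likely, 4 notes least likely
func weightedSubdivision() -> Int {
    let weights = [1: 4, 2: 3, 3: 2, 4: 1]   // subdivision count -> relative weight
    let total = weights.values.reduce(0, +)
    var pick = Int.random(in: 1...total)
    for (count, weight) in weights.sorted(by: { $0.key < $1.key }) {
        pick -= weight
        if pick <= 0 { return count }
    }
    return 1
}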

I've refactored the code quite a bit and reduced the complexity and noise. My first iteration was spaghetti code, but it's getting cleaner now.

I'd like to spend more time exploring different scales to use in the algorithm and add the scale to the song definition JSON file. I also need to spend some time making sure that the notes the algorithm chooses sound good with all of the other notes being played. That's going to require some heavy work, as the tracks are currently rendered independently of one another; a unified approach will be needed, with an overall constraint at the song level.

Speaking of the song definition file, JSON configuration files were another update to the Intentional Music code that helped a lot. They've improved my workflow immensely.

You can hear the results of this new algorithm on my Bandcamp page.

Michael Earls

Montgomery, AL, USA
Michael has been a computer nerd since he was ten years old, when he begged his parents to buy him a computer for Christmas. In 1982, he was the proud owner of a TI-99/4A. He's been coding ever since.