I am a programmer at heart. It's what I do for a living and I enjoy it immensely. I also enjoy creating music in my spare time. However, I'm not a classically trained musician, so I struggle with the composition of my music.
That's why I've started to work on what I like to call "intentional music programming". It means I describe the music I intend to create using codified structures that a computer can interpret, so the machine can assist me in composing the music I want to hear.
To start, I need to point out that I'm creating ambient music. I would like to start with a specific key that my music will be in, so I need to set up a way to define the key signature.
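As a first sketch of what "defining the key signature" might look like in code, here is a minimal pitch-class model. The names (`build_scale`, `SCALE_INTERVALS`) are my own placeholders, not from any library, and I'm only handling major and natural minor for now:

```python
# Semitone offsets of the major and natural-minor scales from the tonic.
SCALE_INTERVALS = {
    "major": [0, 2, 4, 5, 7, 9, 11],
    "minor": [0, 2, 3, 5, 7, 8, 10],
}

# Twelve pitch classes, sharps only, to keep the sketch simple.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def build_scale(tonic: str, mode: str) -> list[str]:
    """Return the note names of the scale implied by a key signature."""
    root = NOTE_NAMES.index(tonic)
    return [NOTE_NAMES[(root + step) % 12] for step in SCALE_INTERVALS[mode]]

print(build_scale("D", "major"))  # ['D', 'E', 'F#', 'G', 'A', 'B', 'C#']
```

With the key defined this way, everything downstream can ask the scale for "legal" notes instead of me having to remember which notes carry sharps or flats.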
Then, I will define the overall length of the piece. My ambient pieces range anywhere between 6 and 20 minutes each. I like to make sure that my pieces have movement, so there needs to be some form of modulation, whether in the virtual instrument or in the choice of notes and note durations.
I would rather not create music that sounds like someone "put a brick on the keyboard and walked away", as someone once described it to me. My music needs to show movement, so I will ensure that the notes change every so many bars.
After I have defined the total length of the piece, I need to define the divisions. That is, I need to define how long each section of the song will last. My notes will be long-lasting and stay in the key that I chose.
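A rough sketch of how those divisions might be computed, assuming equal-length sections measured in bars (the function name and the 4/4 default are just illustrative choices):

```python
def divide_piece(total_minutes: float, bpm: int, bars_per_section: int,
                 beats_per_bar: int = 4) -> list[dict]:
    """Split a piece of the given length into equal sections measured in bars."""
    seconds_per_bar = beats_per_bar * 60 / bpm
    total_bars = int(total_minutes * 60 / seconds_per_bar)
    sections = []
    for start in range(0, total_bars, bars_per_section):
        sections.append({
            "start_bar": start,
            "end_bar": min(start + bars_per_section, total_bars),
        })
    return sections

# A 6-minute piece at 60 BPM in 4/4 is 90 bars; carved into 8-bar sections:
sections = divide_piece(6, 60, 8)
print(len(sections))  # 12 sections, the last one shortened to 2 bars
```

Each section then becomes a slot where something changes (a chord, a voicing, an instrument), which is exactly the "notes change every so many bars" rule from above.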
I would also like to be able to codify the chord progression for the song, so I need some way to do that. Maybe a 1,3,5,3,5,3,1 type of structure. I don't want to get caught up in the details of how to play each chord on a keyboard; I just want to define the key, define the chord progression, and let the program figure out the rest based on a template that I define up front.
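A degree-based progression like 1,3,5,3,5,3,1 could be expanded into actual notes by stacking diatonic thirds on each scale degree. This is a sketch under my own assumptions (major key only, triads only, `triad_on_degree` is a made-up name):

```python
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

def triad_on_degree(tonic_midi: int, degree: int) -> list[int]:
    """Build a diatonic triad (root, third, fifth) on a 1-based scale degree."""
    idx = degree - 1
    notes = []
    for offset in (0, 2, 4):  # stack thirds within the scale
        octave, step = divmod(idx + offset, 7)
        notes.append(tonic_midi + 12 * octave + MAJOR_STEPS[step])
    return notes

# In C major (tonic = MIDI 60), the 1,3,5 degrees become C, Em, and G triads:
print([triad_on_degree(60, d) for d in (1, 3, 5)])
# [[60, 64, 67], [64, 67, 71], [67, 71, 74]]
```

The nice part is that the same 1,3,5,3,5,3,1 template transposes to any key just by changing the tonic, which is exactly the separation I'm after: I define the shape, the program handles the note names.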
It will likely be defined in a simple text format, maybe JSON or a similar structure. That way, I can parse it with the computer and generate the MIDI file. Once the MIDI file is generated, I can put it in my DAW and work with the instrument, effects, and final mix.
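To make that concrete, here is a guess at what such a JSON definition and its expansion might look like. All the field names are placeholders I invented for this sketch; the output here is a plain list of note events, and a real MIDI file would then be written with a library such as mido:

```python
import json

spec = json.loads("""
{
  "key": "C",
  "mode": "major",
  "bpm": 60,
  "bars_per_chord": 8,
  "progression": [1, 3, 5, 3, 5, 3, 1]
}
""")

SCALE_INTERVALS = {
    "major": [0, 2, 4, 5, 7, 9, 11],
    "minor": [0, 2, 3, 5, 7, 8, 10],
}
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def expand(spec: dict) -> list[dict]:
    """Turn the spec into bar-aligned root-note events, one per chord."""
    steps = SCALE_INTERVALS[spec["mode"]]
    tonic = 60 + NOTE_NAMES.index(spec["key"])  # anchored near middle C
    events = []
    for i, degree in enumerate(spec["progression"]):
        octave, step = divmod(degree - 1, 7)
        events.append({
            "start_bar": i * spec["bars_per_chord"],
            "length_bars": spec["bars_per_chord"],
            "note": tonic + 12 * octave + steps[step],
        })
    return events

events = expand(spec)
print(events[0])  # {'start_bar': 0, 'length_bars': 8, 'note': 60}
```

From an event list like this, writing the actual MIDI messages is mostly bookkeeping, and the JSON file stays readable enough to tweak by hand between renders.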
I think this approach will lead to me creating music more in line with what I would like to hear, because I think in terms of code, not in terms of chords and scales. I also think it will position me to take advantage of the emerging realm of machine learning, with AI assisting me in my composition. Unfortunately, I have been unable to get the early prototypes of ML music generation to work on my current PC. I expect to have a new Mac Mini with the new M1 chip soon that will hopefully open the door to serious ML processing for me, but until then, I will focus on programming my music using basic structures. The ML will have to wait.