
Part 2

To some, the advent of AI and 'intelligent' composing tools offers potential for machines to contribute to the creative process. What are your hopes, fears, expectations and possible concrete plans in this regard?

Ha. For more than a decade I've been advising computer science senior theses that try to make AI compose, and it always seemed like a fun problem to me, but lately it's much less so.

The realm of AI music goes back so far, you can trace it to Max Mathews at Bell Labs or Marvin Minsky at MIT, or Raymond Scott, or other places, but it's not new. What does feel new is how it's become the hot topic for big tech, and so much money and political power is being wielded to make the most capitalistic version of the AI music dream come true. I use and create algorithmic composition and performance tools all the time, and I love it.

I don't have any interest whatsoever in typing in a text prompt and getting some ready-made music output. I completely understand the reason for the push toward that goal: uncreative executives want to be able to make bland music to go behind their ads and YouTube videos and whatnot without paying composers to write that bland music. And one could argue that composers don't actually want to write that stuff anyway - "hey, we need something that sounds like this song, but peppier and with acoustic guitar". But those jobs are actually how a lot of working composers put together a living - selling tunes to music libraries, or writing stuff for ads or things like that.

I find the current fad for large language model AI to be really disappointing, because it focuses so much energy on a huge-dataset, middle-of-the-road, black-box approach to AI generation rather than more dynamic AI tools that can be used creatively by feeding in training data and adjusting parameters.

Even though that stuff sucks, there are people working on AI music tools that are more about providing interesting material and keeping the human in the feedback loop, like Rebecca Fiebrink's Wekinator, for instance.



Have you used AI or generative music tools for your own productions? If so, in which way and what did they add?


You could argue that much of Loom is generative. A lot of the rhythmic material is generated by the MantaMate, using an algorithm I wrote that creates randomized repeating patterns that you can nudge around. I generate a lot of different patterns and improvise with them live, steering them this way or that.
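The MantaMate algorithm itself isn't documented here, but the idea of randomized repeating patterns that can be "nudged" rather than regenerated wholesale could be sketched roughly like this (all names and parameters below are hypothetical illustrations, not the actual firmware):

```python
import random

def make_pattern(length=16, density=0.5, seed=None):
    """Generate a repeating rhythmic pattern as a list of on/off steps.

    Hypothetical sketch: a fixed-length loop of randomized steps,
    where 'density' controls roughly how many steps are active.
    """
    rng = random.Random(seed)
    return [rng.random() < density for _ in range(length)]

def nudge(pattern, rng=None, flips=1):
    """Return a copy of the pattern with a few steps toggled at random.

    This is the 'nudging' idea: perturb the loop slightly while
    keeping most of its identity, instead of rolling a new one.
    """
    rng = rng or random.Random()
    new = list(pattern)
    for i in rng.sample(range(len(new)), flips):
        new[i] = not new[i]
    return new

p = make_pattern(length=8, density=0.5, seed=42)
q = nudge(p, rng=random.Random(1))  # differs from p in exactly one step
```

In live use, the appeal of this kind of design is that each nudge is small and reversible, so the performer can steer the pattern gradually rather than gambling on a whole new one.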

In my more experimental music, I use generative processes a lot, like the orbital mechanics I use in "Opposite Earth," or the pattern generation that is at the core of "Fictitious Forces."



I wrote a piece, "Substratum," for string quartet and pedal steel, where the main harmony of the chords is actually derived from sonifying IP addresses. In that case, the sonification isn't part of the piece's concept; I was just trying to get out of a harmonic rut, so I used a tool that a student had developed to rather arbitrarily map four-number IP addresses on the internet as four-note chords, and I just surfed the web until I found some interesting harmonies.



One interesting aspect of that was that large websites ended up having very related IP addresses across related pages, so I found a website that produced several neat chords that felt connected to each other due to common tones.
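The student tool's actual mapping isn't described, but one plausible version of the idea, assuming each octet (0-255) is scaled linearly into a MIDI pitch range, could look like this:

```python
def ip_to_chord(ip: str, low=36, high=84) -> list[int]:
    """Map a dotted-quad IP address to a four-note MIDI chord.

    Hypothetical sketch of the mapping described above: each of the
    four octets (0-255) is scaled linearly into the MIDI pitch range
    [low, high]. The real tool's mapping may differ.
    """
    octets = [int(part) for part in ip.split(".")]
    span = high - low
    return [low + round(octet * span / 255) for octet in octets]

# Nearby addresses share octets, so their chords share tones:
ip_to_chord("192.168.1.10")  # four MIDI note numbers
ip_to_chord("192.168.1.20")  # shares three notes with the chord above
```

This also illustrates the common-tone effect mentioned above: because related pages on a large site tend to differ only in the last octet, their chords keep three pitches in common and differ by one.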

Richie Hawtin, speaking about semi-modular equipment, has stated that a deeper understanding of sound synthesis can lead to “life lessons that go beyond what we can hear.” Can you relate to that statement?

I wonder what that meant to him.

I can relate in my own way, I think. I've loved making electronic music for a long time, but my path later in life has become somewhat interdisciplinary, with music and engineering mixing in many ways. When I went to grad school for music composition, I was at a point in my life where I realized that I wanted to do things musically that I couldn't really do with existing tools. I didn't know how to program a computer or make a circuit, so I set out to learn, to better be able to realize my musical ideas.

Now, many years later, I find myself dividing my time equally between the engineering concerns of building instruments and writing firmware and the artistic concerns of making music and performing. I am very glad that I get to do both of these things, and I find they complement each other and help me feel balanced as a human.

Making music can sometimes leave me feeling unmoored, where the lack of any real metric of whether something I did to a piece works or doesn't can be overwhelming. At those times, I can turn to a firmware programming task, where I have a clear goal, and I can test whether the code I wrote succeeded or failed. When the engineer side of things starts to feel too much like drudgery, I can switch over to improvising and composing to give myself a more immediate creative outlet and reaffirm that my instrument building work is worthwhile.

Beyond that practical use of the engineering knowledge I've gained, I think that getting a stronger understanding of what's going on under the hood in any situation always puts you in a better position to get the results you want.

