
Names: Kid Arrow and Markus Reuter
Occupations: Producer, composer, guitarist, educator (Markus Reuter); process-based music creator (Kid Arrow)
Nationalities: Japanese (Kid Arrow), German (Markus Reuter)
Recent release: The latest Kid Arrow album Without Boundaries is out via Bandcamp. It is the 17th collaboration between Markus Reuter and Kid Arrow, a process based on transformations of a single MIDI file.

If you enjoyed these thoughts by Markus Reuter and Kid Arrow and would like to find out more about their work, visit Markus's official homepage.

To keep reading, we also recommend our earlier Markus Reuter interview.

This interview is part of an ongoing series of conversations with Kid Arrow about the role of human composers in music. Read part 2. Read part 3.



Credits for the project read: Music by Kid Arrow / Produced by Markus Reuter. So who is Kid Arrow?

Kid Arrow is situated in a city in Japan and almost bodiless. There's no consideration of what it looks like. Is it thin? Is it fat? White? Black? It's a completely open, blank canvas. But one thing's certain: It is lonely. It is alone, working from some sort of vacuum.

The word “kid” obviously denotes a younger entity. It's like you're looking back at what spoke to you when you were young. As a kid, you're egocentric, but that also comes with a sense of solitude: If you believe that you are the centre of the universe, then that also means you're very alone.

What about the arrow?

It stands for direction, and trajectory. But also, you don't know where the arrow is going to land. And that's interesting to me.

Is Kid Arrow human?

Yeah, it's human, clearly human. It is the part that takes decisions in the process of what people call artificial intelligence. The part that is still required to run modern technology. It's the remainder of humanity in a way. All creative decisions are based on whatever intuitive or emotional response I am having towards the process.

So you have a totally intuitive, opinionated, human way of creating – egocentric, even – and then you impose that onto an automated process.

What was the point of departure, musically speaking?

Conceptually, the accompanying visual art and the music are based on the idea of a family and family relationships. There's a source, a gene pool of sorts, which stands at the beginning of this exploration. And then the gene pool gets permutated and brought to life in different ways. So all of these tracks are essentially the same piece of music. But it's getting filtered and altered in ways that create different personalities, different characters.

And so each release, each album or track, stands for one particular outcome of the same starting conditions.

The way I understood our earlier conversations, it all started with a single MIDI file.

Yes. I didn't play it on a keyboard because I didn't have a keyboard when I came up with the idea in Osaka, so I typed it in. But it's an actual composition with bass, chords, and a melody. And these elements still exist in every single one of the pieces. But they have all morphed and altered. Beyond recognition.

Before the algorithms get to work, there is a personal statement. So there is a human element embedded into the code.

Exactly. The MIDI file is just pure, stupid data. But the data can carry some sort of meaning in terms of pitches and rhythms.

The sound was originally modelled on the idea of creating 70s electronica, or even early 80s electronica. It would have been okay if it had turned out super cheesy. In fact, some tracks are super cheesy. But they also emotionally changed in a way that makes the cheese, or the potential cheese, really melt away. And it's just really touching.

You could say that pieces are elaborate, very elaborate, modular synth tracks, because everything is modular and literally modulating all the time.

How involved are you in the modulations?

Right now, I make the sonic decisions. But I'm trying to automate that as well, as much as possible. So I'm using modern compressors and equalisers, and some sort of machine learning algorithms as well. It's not that I'm spending a lot of time tweaking mixes. Actually, that's not even possible because the process is so involved. With my computer, it's almost impossible to listen to the pieces in real time. I have to bounce out little chunks to get an idea of what's actually happening. So it's not that I can really work with it like with a regular music production project.

Right now, the pieces are still all based on the same initial MIDI file. So the transformations that have been made have been made onto the same initial configuration. But that could also be changed. So the potential for creating a whole different mood, a whole different vibe is still within the gene pool. It's just that I haven't gone there yet, because I'm still amazed by what the current iterations yield: Surprising alterations which lead to many different moods.

If everything comes from a single MIDI file, how can the process come up with these extremely different results?

One of the main drivers is to exchange or swap out notes.

So let's say you start out with the C major scale, right? And then you take a G and turn the G into an A, and the A into a G, so just swapping out two notes. And then you realise that you're hearing the same rhythm and melody, but the line goes to a different note. And where it used to mostly sound like a happy major scale, now it sounds mostly minor. Just because a single note got switched around.
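To make the swap described above concrete, here is a minimal sketch in Python. The function name and the MIDI note numbers are illustrative, not taken from the project's actual code: the melody and rhythm stay identical, while two pitches are exchanged everywhere they occur.

```python
# A short line in C major as MIDI note numbers (C4=60, E4=64, G4=67, A4=69).
melody = [60, 64, 67, 69, 67, 64, 60]

def swap_notes(notes, a, b):
    """Return a copy of notes with every occurrence of pitches a and b exchanged."""
    return [b if n == a else a if n == b else n for n in notes]

variant = swap_notes(melody, 67, 69)  # swap every G4 with A4 and vice versa
print(variant)  # [60, 64, 69, 67, 69, 64, 60]
```

The rhythm and contour survive the swap; only the lines' target notes change, which is what shifts the perceived colour from major toward minor.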

What about the rhythm?

The rhythm is always the same, but I'm using some interesting tricks to subdivide the measures differently. It's what people call tuplets. So the underlying piece is all in four-four. And the chord changes happen on the downbeat of a four-four. But the rhythm that sits on top can maybe run in five, or 11, or something like that, or anything, really.

And then there are also pieces where different subdivisions are combined, triplets with something straight, or a seven over a four and stuff like that. It sounds technical when I talk about it, but it doesn't at all when you're listening to it.
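The arithmetic behind these tuplets can be sketched in a few lines of Python. This is a hypothetical illustration, not the project's code: chord changes stay on the downbeat of each 4/4 bar, while a surface rhythm divides the same bar into 5, 7, or 11 evenly spaced onsets.

```python
def tuplet_onsets(n, bar_len=4.0):
    """Onset times, in quarter notes, of n evenly spaced notes across one bar."""
    return [bar_len * i / n for i in range(n)]

print(tuplet_onsets(4))  # straight quarters: [0.0, 1.0, 2.0, 3.0]
print(tuplet_onsets(5))  # quintuplet layer:  [0.0, 0.8, 1.6, 2.4, 3.2]
```

Only the first onset of each layer (time 0.0) lines up with the downbeat, which is why the harmony stays anchored while the rhythm on top seems to float free of the grid.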

And of course, this inner child also explains the length of the pieces. As a kid, you grow up in an almost timeless space.

Yes, it's the space of play. You lose yourself, you play and you don't know if you've spent 20 minutes or three hours and that's exactly what it's about. The piece ends when it ends and the fact that they all fade out at the end is part of the concept. It might be complete or maybe it's not.

At the beginning of our conversation, you described your role as the remainder of humanity. Here's something interesting: I've edited roughly 2000 interviews for 15 Questions over the last two years. And if I compare those with the ones we conducted in the decade before that, the word “humanity” suddenly keeps coming up more and more. It's at the forefront of everyone's mind, more so than ever before.

It doesn't surprise me.

I think the question is: Where does humanity show itself? In the music? In the process? In the code?

Let me say something, possibly provocative. If we go back to the 80s, we have albums like David Sylvian's Brilliant Trees, or Secrets of the Beehive, where you suddenly have something really, really beautiful – and I'm using this word very consciously here: it really is the highest form of art.



But then a lot of artists don't really understand how intricate Sylvian's music actually was. They literally didn't understand what was happening. So they took the simplest elements of it and turned them into their own genre of music that's possibly beautiful, but also very uninformed, and naive in a negative way. It's like a child that stays where it is and never becomes an adult.

So what you end up with is this new neoclassical stuff, which is not about moving forward or being in the music. As a listener, you know exactly what's going to happen at all times.  

So what you're saying is: This human-made music actually sounds more like it was generated by a cheap AI. Whereas an AI, if set up properly and with human guidance, can come up with something far more human-sounding.

That's exactly what I'm trying to say.

But for great artists, for real artists who are using AI as a tool to try and generate new ideas, it doesn't make a difference if you use an old-fashioned screwdriver to put the screw in the wall or an electric one. It really is just a tool and you're, as it were, still putting up the same picture on the wall.