
Name: Pidgins
Members: Milo Tamez, Aaron With
Interviewee: Aaron With
Nationalities: American (Aaron), Mexican (Milo)
Occupations: Sound artist (Aaron), drummer, percussionist (Milo)
Current release: Pidgins' Refrains of the Day, Volume 1 is slated for release on October 29th, 2023 via Lexical. The third single off that album, "Still in Progress", is out now.

[Read our Milo Tamez interview about healing with music]
[Read our Milo Tamez interview about drumming]

If you enjoyed this Pidgins interview and would like to find out more about the duo and their music, visit their official homepage. They are also on Instagram. More on Aaron With specifically can be found on his personal homepage or in our interview with him about sound.



What are among your favourite spaces to record and play your music?

I really enjoyed recording our last album at Desierto Recording Studio, located in a natural reserve on the outskirts of Mexico City.

The room is mostly glass walls and faces out into the woods. I thought the glass might offer weird reflections, but the room sound was great, and playing to an immersive forest view was a major mood upgrade over my windowless home office recording environment. I use a lot of musicalized nature recordings within my sample instruments, and they were feeling especially appropriate during that recording.

The recording we did there of “Profit Shifting” was probably the best we’ve ever played it, because it was such a fitting environment for that sound palette of bird calls.

Over the course of your development, what have been your most important instruments and tools for achieving the sound you want?

I play on a Linnstrument. It’s a beautifully designed MIDI controller. I find that its smaller pads (relative to similar controllers like Push) lend themselves to faster finger percussion, and the larger reach across pads offers more harmonic potential. It’s also flexible for retuning and remapping the keys, so I can program various just intonations and microtonal tunings into intuitive visual mappings. I appropriate just and microtonal mappings generously offered in the free “Wilsonic” app, built on Erv Wilson’s work—a marvelous resource for just intonation explorers.
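None of this is from Aaron's setup, but the arithmetic behind retuning a controller to just intonation is simple enough to sketch. The example below uses a standard 5-limit just major scale (not one of his mappings) and shows each ratio in cents, plus how far it drifts from 12-tone equal temperament:

```python
import math

def ratio_to_cents(num, den):
    """Convert a just-intonation frequency ratio to cents above the root."""
    return 1200 * math.log2(num / den)

# A common 5-limit just major scale, as frequency ratios over the tonic.
just_scale = [(1, 1), (9, 8), (5, 4), (4, 3), (3, 2), (5, 3), (15, 8), (2, 1)]

for num, den in just_scale:
    cents = ratio_to_cents(num, den)
    nearest_et = round(cents / 100) * 100  # nearest 12-TET step, in cents
    print(f"{num}/{den}: {cents:8.2f} cents ({cents - nearest_et:+6.2f} from 12-TET)")
```

The just major third (5/4), for instance, lands about 14 cents flat of the equal-tempered third, which is the kind of offset a retunable grid controller lets you bake into the key layout.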

My sample instruments are created mostly in Kontakt and Omnisphere, hosted within Ableton Live. I build instruments from samples taken from a variety of sources, including field recording and foley libraries. I use Lemur for iPad to trigger mix and effects changes and launch crossfades between instruments—which lets me evolve my sound while keeping my focus on performing finger percussion patterns.

I run vocals through various outboard Eurorack and pedal effects—I’m especially fond of the 4ms SMR and the Intellijel Rainmaker’s comb filter.

Could you describe the process of creating this personal sound on the basis of one of your pieces, or live performances, please?

For every Pidgins piece, I use a different custom instrument combining various sample groups—usually a mix of pitched and unpitched sounds.

So for, say, “Search Optimization,” that instrument is built from a foley library of attack sounds recorded with contact mics. The attacks are actions like a stick striking a metal plate, or dropping a pebble on glass. Contact mics capture more of the pitched resonance of the surfaces they’re attached to, relative to the more inharmonic click of the attack, so they produce a nice tonal thud. I like percussion that is pitched, where you can feel melody in the patterns—so I loved this library.

First I listened to the entire library (a few hours of sounds) to find the clearest pitched attacks. These were edited into discrete files and organized by pitch quality. Then I mapped the files in Kontakt to a configuration playable from my Linnstrument, finding individual samples that roughly corresponded to certain pitches, but leaving in imperfections. Some samples required a little EQ to make the instrument feel consistent as a whole. From there, it’s just putting in the practice to learn the mapping as a new instrument—identifying finger patterns and shapes that create nice melodic lines.
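That mapping step can be sketched outside Kontakt as a nearest-pitch assignment. The filenames and detected pitches below are invented for illustration; the point is that each target key gets the sample whose rough pitch is closest, with no retuning, so the natural imperfections stay in:

```python
# Hypothetical sample pool: (filename, rough detected pitch as a MIDI note number).
samples = [
    ("plate_hit_01.wav", 47.8),
    ("pebble_glass_03.wav", 52.3),
    ("plate_hit_07.wav", 55.1),
    ("rod_tap_02.wav", 60.4),
]

def build_keymap(samples, keys):
    """For each target key, pick the sample whose rough pitch is closest.
    No retuning is applied, so each sample keeps its imperfect pitch."""
    return {key: min(samples, key=lambda s: abs(s[1] - key))[0] for key in keys}

keymap = build_keymap(samples, keys=[48, 52, 55, 60])
print(keymap)
```

A real Kontakt mapping would also set velocity layers and key ranges per zone, but the closest-pitch-wins idea is the core of it.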

When playing that instrument in Pidgins with Milo, we were gravitating towards a cacophonous style, identifying each other’s micropatterns and engaging in call and response. That was nice, but at some point I felt my instrument needed more flexibility, so I layered in various other sample groups. Halfway through this recording I layered in flute key click samples, another pitched percussive sound but with a stronger fundamental, to strengthen the melodicity as the noise cloud thickened.

The song is largely improvised (except for the vocals), so sometimes live I’ll combine different sample instruments that didn’t make it in this recording, depending on the energy that day. When we play it slower, I might layer in samples of cardboard tearing, which makes me focus on decays over attacks, inviting sparser patterns.

When working with sound, what guides your decisions?

When designing instruments, I like combining sounds that emulate the feeling of an actual acoustic event—harmonic resonance and saturation, inharmonic resonance, sympathetic vibrations, etc. This lets me pursue fantastical or fictional new sound ideas, but they can retain a familiar, real, nostalgic quality.

I’m also looking for attack clarity, layers within sustains, and velocity-responsive tone, such that playing a pattern responds to imperfect and unpredictable human performance dynamics and offers ongoing sonic input into the constantly evolving feedback loop of organic pattern evolution. Most electronic instruments do not provide this kind of human playability or variety, and electronic percussion instruments sadly are rarely played by humans at all, generally being triggered by pre-programmed single-bar beat loops, resulting in stagnant, predictable patterns and, more broadly, a cultural loss of percussive wisdom.

Beyond just avoiding using programmed beats, I want the sound of my instruments to forcibly guide my rhythmic playing to places I could never program if I tried.

Paul Simon said “the way that I listen to my own records is not for the chords or the lyrics - my first impression is of the overall sound.” What's your take on that and how would you define your personal sound?

I’m a little disappointed, because I always imagined Paul Simon listening to his records and being like “man … those are some great lyrics and chords”. But I can’t disagree.

Since we’re talking about lyrics, I’ll share a few thoughts about my vocal sound and the sound of words. I think most lyricists prioritize the sound of words as much as or more than their meaning. That’s been true for me across different types of vocal music I’ve made—from my spoken word music to this Pidgins record where I’m chanting these managerial class mantras.

I discarded a lot of mantras that I found funny or interesting but just couldn’t get to sound right when chanted. The way the formants resonate at a given pitch and how pliable certain syllables are for stretching both dictate a word’s musical potential.

If I only valued their sound, I could just sing gibberish lyrics, which I sometimes do … but usually lyrics do have to first meet some non-sound criteria for me. In the case of this record, the lyrics had to be technocratic expressions that capture something about popular PMC attitudes (technological utopianism, self-improvement focused on productivity, etc.), and needed a kind of asymmetry so that looping, reversing, or reordering their components created new meanings.

So that gives me a set of ideas to try, and then I’ll play with the sound of the phrases to see if they’re chantable and can find a good circular note loop.

After I have a solid chantable phrase, it then has to conform to the hybrid biotech sound I’m going for in Pidgins. I’d like the vocals to feel ambiguous, like you could maybe hear them as an AI voice built from human voice data, or maybe it’s a human voice with some sound-altering tech grafted onto the vocal cords in an elective surgery to achieve the sonic beauty standard of a culture you don’t know yet.

For vocal effects to feel intentional and non-ornamental, they have to closely consider the sound shape of the words. For phrases with sustain-worthy vowels (like "Data Driven"), tuned comb filters may activate the formant nicely. For phrases heavy on fricatives and sibilants (like "These Models Scale"), granular delays can nicely extract that friction sound.

But even nails on a chalkboard, maybe with a little EQ and reverb, and placed in the right context, can be perceived as pleasant.
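An aside for readers curious how a comb filter gets “tuned” to a sung pitch, as described above (this is general signal-processing background, not a detail from the interview): a feedback comb reinforces a fundamental and its harmonics when its delay time equals one period of that pitch. A minimal sketch of the arithmetic:

```python
def comb_delay_ms(freq_hz):
    """Delay time (in ms) that tunes a feedback comb filter to a fundamental:
    one full period of the target frequency."""
    return 1000.0 / freq_hz

# E.g., to tune a comb filter to a vowel sustained at A4 (440 Hz):
print(f"{comb_delay_ms(440.0):.3f} ms delay")
```

Hardware like the Rainmaker exposes this as a delay-time or pitch-tracking control; the underlying relationship is just period = 1/frequency.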