Name: Natasha Barrett
Nationality: British
Occupation: Composer, sound artist, researcher
Current Release: Natasha Barrett's new album Toxic Colour is out March 28th 2025 via Persistence of Sound.
Developments that take electronic music forwards: I think it’s exciting when festivals bring together all kinds of electronic music: mixing experimental, electroacoustic, or acousmatic music with more mainstream electronic genres, and vice versa. It creates opportunities for audiences to discover new sounds and for artists to meet, exchange ideas, and inspire each other. I think there should be much more of this kind of fusion!
For a deeper dive, read our earlier Natasha Barrett interview.
If you enjoyed these thoughts by Natasha Barrett and would like to stay up to date with her music, visit her official homepage. She is also on SoundCloud, Facebook, and Bandcamp.
Most genres of music make use of electronic production means. What does the term electronic music mean today, would you say?
This is an interesting question, and I believe the answer is shaped by the listener’s own preferences and habits.
Electronic music is, of course, not a single genre but rather a broad category encompassing all forms of music that rely wholly or partly on electronic sound production, perhaps with the exception of music that attempts to replicate known acoustic instruments. In practice, if you approach the question from a beat-based or ambient music perspective, then I would say electronic music generally refers to music made with synthesisers, sequencers, digital audio workstations, and effects.
However, from an experimental music standpoint, electronic music can just as easily encompass electroacoustic or acousmatic music—where acoustic sources are recorded in unconventional ways, manipulated with signal processing techniques, and the sonic result of these experiments takes the place of the synthesiser.
As a composer working in experimental music, I would traditionally describe what I do as electroacoustic or acousmatic music rather than electronic music. Some might also categorise certain pieces of mine as “noise music”. However, I also encounter situations where listeners unfamiliar with the kind of music I create find it more accessible when I refer to my work as electronic music, despite it not falling squarely under beat-based or ambient genres. This is something I wouldn’t have considered ten years ago, but it reflects the cross-fertilisation that has taken place over the past decade within music made using electronic means.
So, there is no straightforward answer—I think the definition ultimately depends on the listener.
I grew up mainly listening to electronic music but have lately been disappointed …
I agree, and I think this is an issue affecting all genres of electronic and experimental music, for a variety of reasons. The most obvious is that the tools for creating, controlling, and mixing sound have become much easier to use. In itself, this should not stifle “creative health”. However, this ease of use comes with some conditions:
Firstly, commercial software is, by nature, commercial—it attracts users by looking polished, being easy to use, and offering complete reliability. This convenience and reliability mean that you and thousands of other musicians are working within a framework imposed by the software manufacturer.
Older software, by contrast, was often unstable, allowing you to push the tools beyond their intended functions. It was less user-friendly and required greater technical knowledge. I began working with electronic music in 1992 (then referred to as computer music, as I was studying in an academic setting). Everything that could go wrong did go wrong! It was extremely frustrating! But in those moments, fascinating sounds and ideas would emerge.
The other issue with ease of use is the speed of use. When everything is immediate, real-time, and visually engaging, it’s tempting to improvise through sound and structure without pausing to reflect. I believe that compelling results require a balance of intuitive working methods, thought, and contemplation. Our tools today prioritise the first and neglect the other two.
The power of the digital audio workstation (DAW) has also evolved significantly—we can now mix vast numbers of tracks and rhythmic structures instantly, creating either extremely dense mixes or prolonged drones. When mixing eight mono tracks was a luxury, you had to be selective and focus on what you were trying to achieve.
I wouldn’t want to return to the limitations of eight-track mixing, but I’m conscious of the temptation to keep adding ‘just one more layer’ or ‘one more effect’. Sometimes, it’s necessary to remove, reduce, and let the material breathe—allowing it to reveal its true qualities—rather than building it into an overwhelming mass of sound.
Finally, I think the streaming market has played a significant role: quantity, rather than quality, seems to dominate when it comes to gaining market share.
What kind of musical/sonic materials and ideas are particularly stimulating for your work right now?
Right now I find it interesting to work with extreme contrasts of materials and approaches - not in the same composition but across projects that are being created in parallel. I think it keeps my ear in tune to the reality of the moment.
In one of these projects I’m returning to the outdoor landscape and exploring a challenge: when we are familiar with a specific location, such as where we live, we listen differently compared to a visitor. For example, we have sonic ‘friends’—sounds that are familiar and perhaps comforting in their presence—or we perceive spatial sound in a unique way, filtering out the noise while focusing on the features.
For a few years now I have been interested in exploring ways to capture this sensation in music without being explicit or relying on obvious sound-field recordings. I was originally inspired by this idea while developing the materials for Speaking Spaces I: Heterotopia in 2021.
But I am now revisiting the idea with a particular focus on how wind alters our experience of the places we know.
In this new project I have motion sensors hanging from a tree, which sway in the breeze and control sound as a real-time response to the environment. The work is designed for four loudspeakers fixed to four trees, along with the motion sensor and a live microphone, and runs on a Raspberry Pi. It will form part of an outdoor immersive sound installation called Talking Trees: A Nature-Responsive Grove, opening at the Momentum Biennale this summer, just outside of Oslo.
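The sensor-to-sound idea she describes can be sketched in a few lines. This is a hypothetical Python illustration, not her actual RNBO patch: it assumes a 3-axis accelerometer hanging from a branch, converts how far it sways from vertical into a normalised control value, and smooths that value so gusts don’t cause audible parameter jumps. All names and the particular mapping are illustrative assumptions.

```python
import math

def tilt_to_control(ax, ay, az, lo=0.0, hi=1.0):
    """Map a 3-axis accelerometer reading (in g units) to a lo..hi
    control value based on how far the sensor sways from vertical."""
    mag = math.sqrt(ax * ax + ay * ay + az * az)
    # Angle between the gravity vector and the sensor's z-axis:
    # 0 when hanging still, pi/2 when swung fully sideways.
    tilt = math.acos(max(-1.0, min(1.0, az / mag)))
    norm = min(tilt / (math.pi / 2.0), 1.0)  # clamp at 90 degrees
    return lo + norm * (hi - lo)

class Smoother:
    """One-pole low-pass smoothing, so a gust of wind becomes a slow
    glide in the audio parameter rather than a sudden jump."""
    def __init__(self, coeff=0.95):
        self.coeff, self.state = coeff, 0.0

    def step(self, value):
        self.state = self.coeff * self.state + (1.0 - self.coeff) * value
        return self.state
```

A control loop on the Pi would then read the sensor at a fixed rate, pass each reading through `tilt_to_control` and a `Smoother`, and send the result to the audio engine as, say, a filter cutoff or amplitude.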
A contrasting project combines noisy pure synthesis with materials created from very close recordings of objects being gently touched. The result sounds abstract and artificial, an extreme contrast to the reality of the outdoor landscapes in the other project.
How much potential for something “new” is there still in electronic music? What could this “new” look like?
I believe there will always be potential for discovering new sounds and musical structures, as well as new ways of creating and experiencing music—even blurring the line between creation and experience. The future of electronic music isn’t just about new tools; it’s about new ways of listening, feeling, and experiencing sound in dimensions we have yet to imagine.
For example, immersive environments and interactive compositions are already common in experimental electronic music. But what if they could adapt to their surroundings in real time? This could happen in simple ways, like the motion sensor and wind-based work I’m currently exploring, or through more complex interactions involving biofeedback that responds to the listener’s emotions.
We can also rethink our use of artificial intelligence. Right now, beyond simply training on (or appropriating without permission) copyrighted material, AI is used for sound design, assisted composition, studio tasks like mixing and mastering, and even live improvisation. However, it isn’t truly creating anything new—it’s primarily making processes faster and more efficient.
But what if AI could enable work that would be humanly impossible, even with infinite time, knowledge, or experience? Then it could absolutely expand artistic possibilities.
What were some of the recent tools you bought, used, or saw/read about which changed your perspective about production, performing, and making music?
Although I work freelance, I have a history of academic connections and have often been at the user end of what were once cutting-edge tools - mainly in spatial, spectral, temporal, and granular processing - before they entered the commercial market. I’ve also been involved in the development of spatial audio tools for 3D sound.
For better or worse, this means that new tools rarely change my perspective. However, when tools evolve to an extreme, that’s when things get particularly exciting!
As a composer working with what some might call unconventional sounds, I have found the latest mastering tools extremely useful when mastering stereo tracks, especially when the tracks have been reduced to stereo from a 3D original.
Also, MaxMSP (software that allows you to create almost anything you want) is not only continuously evolving but also remains a constant learning process for me, even after using it since the mid-’90s.
Right now, I’m building my own VST plugins using the RNBO extension of MaxMSP - not for anything overly complex, but to regain control over my workflow when working inside a DAW, which is now mainly Reaper. I’m also using RNBO in the Talking Trees installation I mentioned earlier, as it turned out to be the best tool for creating interactive audio processing software for the Raspberry Pi—which will be hanging outside, swinging from a tree, definitely not something you’d want to risk your best laptop for!
In terms of hardware, I’m particularly excited about the new 3D microphones, such as the MH Acoustics EM64 (a 6th-order 3D ambisonic microphone) and the Harpex SPCmic (an extremely low noise 3rd-order 3D ambisonic microphone). These allow me to capture 3D sound with a resolution close to actual human perception.
For a long time, I have composed exclusively in 3D sound using higher-order ambisonics in a 64-channel format, and these microphones have changed my workflow rather than my perspective. They enable me to explore ideas I had previously put on hold, anticipating the development of these new technologies.
Do you think that there is a limit to what can be done in sound design, and what defines these limits?
I don’t think there’s a limit if you embrace an expanded idea of time and space - or if you see sound design as a temporal-spatial phenomenon rather than just a short-term, mono object.
That said, there can be discrepancies between the artist and listener: a sound’s meaning or identity may unfold over a long time span while the listener is not engaged for that long, or spatial identity may be a key feature in an immersive setup while the listener only has access to stereo.
In fact, some studies suggest that not everyone perceives spatial differences clearly, and spatial hearing itself requires training - definitely a challenge for both artist and listener if you want to explore spatial sound design!
How would you say your live performances and your recording projects connect at the moment?
Right now, all my live performance projects revolve around 3D sound in space. I work with immersive materials that can be performed on simple horizontal 8-channel speaker systems as well as large 3D speaker domes with 74 speakers or more. Both setups are fun, but naturally, the larger arrays create a far more immersive experience.
Some of my performances also incorporate a mix of pre-made and real-time computer graphics, using motion tracking and basic computer vision, allowing the system to interpret and process visual information. Everything runs inside MaxMSP and Jitter.
When working on a live recording project, I usually collaborate with another performer. Their input not only influences the piece through the sounds they create but also through their physical movement. Motion tracking captures their gestures, which I then use to control spatial sound movement and real-time computer graphics. This creates a feedback loop - the performer reacts to how the system responds to them, shaping the performance in real time.
The real challenge, though, is translating all of this into a recording for home listeners. It’s essentially a one-way process - collapsing 3D sound into stereo or binaural (3D sound for headphones). It’s always a compromise, and sometimes I have to recompose or edit sections for the stereo version, as certain immersive elements simply don’t translate and end up feeling flat or uninteresting.
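The collapse from a 3D scene to two channels can be illustrated with a toy first-order example. This is a minimal sketch, assuming FuMa-normalised first-order B-format (W/X/Y channels) decoded through a virtual cardioid microphone pair; it is far simpler than the higher-order ambisonic decoders she actually works with, and the parameter choices are illustrative.

```python
import numpy as np

def bformat_to_stereo(w, x, y, azimuth_deg=45.0, pattern=0.5):
    """Decode first-order B-format (FuMa W/X/Y) to stereo by pointing
    two virtual microphones at +/-azimuth_deg in the horizontal plane.
    pattern=0.5 gives cardioids; pattern=0.0 gives figure-of-eights."""
    az = np.deg2rad(azimuth_deg)
    omni = pattern * np.sqrt(2.0) * w  # undo FuMa's -3 dB factor on W
    left = omni + (1.0 - pattern) * (np.cos(az) * x + np.sin(az) * y)
    right = omni + (1.0 - pattern) * (np.cos(az) * x - np.sin(az) * y)
    return left, right
```

A source panned hard left (W = s/√2, X = 0, Y = s in the FuMa convention) ends up louder in the left channel than the right, but everything outside the virtual mics’ pickup, including height, is simply lost, which is exactly why some immersive gestures fall flat in stereo and have to be recomposed.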