Published in: zweikommasieben Magazin #17
Renick Bell - Code Potential
Renick Bell is a computer musician—not the kind of computer musician that a lot of people are these days, but someone who uses the machine and its inherent potentialities for writing music. By using code and algorithms to compose sounds, he frees himself from elaborate interfaces. No Ableton needed, not in the studio and not on stage. The results can be heard most recently on the EP Empty Lake (UIQ, 2016), and they can also be seen, thanks to a website accompanying the release that streams music generated in code by Bell’s system.
Besides producing and performing electronic music, the Texas native is also a graduate of the doctoral program at Tama Art University in Tokyo, Japan, where he’s been living since 2006. The following conversation with Mathis Neuhaus took place in a small Indian restaurant in Tokyo’s Kanda district. It touches on everything from the perception of algorithms to live coding as a demystifying strategy to the difference between performing and producing electronic music.
Mathis Neuhaus: I want to start out this conversation by introducing my own perspective on algorithms in music. It is very different from your vantage point, but we’ll talk about that later. I went to New York two years ago to research a story called The Taste Machine, which deals with the question of whether algorithms are able to create taste.
RB: And what is your conclusion?
MH: Taste, of course, is a very vague term, but platforms like Spotify are definitely changing the ways people listen to music. Playlists, and therefore the algorithms behind these playlists, are doing what used to be the job of someone on the radio or in the record store: recommending new music you’ve never heard before. And, at least in the mainstream, this is playing a very important role by now. I know people who completely rely on their Spotify recommendations for discovering new music. You still have the selectors, diggers and music nerds, but that is not most people—and Spotify as a platform is scaled for most people, not for the niche. So my perspective on algorithms focuses on analyzing their cultural impact, while you think from the other side of the screen, so to speak.
RB: I mean, there is obviously a wide variety, with many types of algorithms that apply to many different things. But in the end, they all require programming to implement.
MH: How did you end up using algorithms for making music? Could you walk me through your history of composing and producing sounds? Were you interested in the potential of the machine from the very beginning, or were you also involved with conventional music?
RB: As a very young child I sang in boys’ choirs, and at nine I started to play the piano. During high school, I also started to play drums. I picked up guitar and did a lot of things. The first time I got into music done by a machine was with a Yamaha PortaSound PSS-560 synthesizer from 1990 that had this very simple synthesis section where you could adjust some parameters of the voice. It also had a drum machine, again very rudimentary, but you could boost the tempo to 260 BPM or something and get other strange things out of it. As a university student, I just wanted to do music and was looking for the right voice, but couldn’t really find it. I wanted to do all these things I mentioned at the same time. The closest I got to that was playing in bands. Back in Texas I was in hardcore and screamo bands and played all the instruments in different contexts, but bands are hard to manage. Meanwhile, I was building up a studio with a friend and we were doing production work for hip hop groups. We also started doing our own techno and trying to make things like what we were listening to at the time, which for me was releases on Mo’Wax and drum ‘n bass. I stuck with that for a long time, doing drum ‘n bass tracks and sending them to BBC 1Xtra and labels and trying to get them on 12”, but I also really wanted to perform electronic music live. But if I am in the studio, dragging snare samples around, that is about as far from live as you can get. So I asked myself: “how can I do the things I am doing live?”
MH: When was this?
RB: That was before Ableton, in the mid-1990s. In college I was studying electronic music, so I knew about and had experience with tools like CSound and Max/MSP and thought that I could maybe use these tools to perform my music live. In my master’s degree I built a piece of generative music software with a graphical user interface in SuperCollider. It worked fine, but in the end it was all buttons and sliders, and I had to manipulate everything with the mouse. It was like playing guitar with one finger. It wasn’t really satisfying, and I thought at the time that maybe it was a matter of working with the wrong tools, like the wrong graphical user interface toolkit, so I started looking at other programming languages in depth. Along the way, in 2004, I read an article by Alex McLean called Hacking Perl in Nightclubs. He was already doing this live coding stuff back then, long before me. I read the article at the time and thought, well, this is really cool, and it also seemed really ridiculous to me at the same time. Like, why would you ever do that? I didn’t get the point. But around 2006, I was doing all this programming in Haskell—the language which is still my main language at the moment—and testing things. I realized: “oh, wait. I’m making sounds here, directly in code, without a graphical user interface, and it’s kind of working.” And then it clicked for me what Alex McLean was talking about in his article. I gave up on trying to build a graphical interface and pursued using the code directly.
MH: You cut out the middle man.
RB: Right, and it actually worked and was the outlet I had been looking for. When I played piano, I was never really good at letting my two hands work independently. With my drum technique, I managed to play a trap set but I could never really do the amazing things. Guitar was the same. My technical skills were always lacking. But all that time I was programming, also. My dad had a TI-99/4A computer at the house already when I was five, so I had been using computers forever. I can type pretty well, you know, and I realized I have this technical skill that I am pretty good at, and by using this to make music directly, it all fell into place. From that point on, I followed this path and started to make my own tools for live coding. I decided that I didn’t want to sit and drag snare samples into exactly the right place, because I don’t have the time to do that. I want to generate the music in that moment rather than specify each detail by hand—though composing with algorithms does require exact specification in the code of each musical event, in some manner. Even a randomly-generated detail is precisely specified as random. The composer is ultimately responsible for nearly all of the details.
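Bell’s point that even a random detail is “precisely specified as random” can be sketched in Haskell, his language of choice. Everything below—the generator constants, the velocity range, the seed—is a hypothetical illustration of the idea, not Bell’s actual code:

```haskell
-- A minimal sketch of "precisely specified as random": the composer fixes
-- the distribution, the range, and the seed; the machine fills in the
-- instances. A hand-rolled linear congruential generator keeps the
-- example free of external packages.
lcg :: Int -> Int
lcg s = (1103515245 * s + 12345) `mod` 2147483648

-- An infinite stream of pseudo-random states from a starting seed.
states :: Int -> [Int]
states = tail . iterate lcg

-- Eight MIDI-style velocities: each one is "random", yet drawn from an
-- exactly specified range (40..127) and a fixed, composer-chosen seed.
velocities :: [Int]
velocities = take 8 [40 + s `mod` 88 | s <- states 2016]

main :: IO ()
main = print velocities
```

The composer never places any single velocity by hand, but every one of them is fully determined by decisions written into the code.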
MH: You said that you wanted to perform. Did you see the potential of writing the music directly into code also for producing?
RB: I was aiming at performing. That was my whole goal, but along the way I realized that I could use these generative tools to produce, too. And it’s true: I can produce so much more than I could have ever done before. The computer can generate all those details that I spent six months creating in the studio. I can get the machine to do that for me in a second. I saw that benefit, but the original purpose was to use this for performing. And this was in line with what Alex McLean and Nick Collins were doing from the early 2000s on. They are the true originators of this thing called “algorave,” where people are live coding on stage and other people are dancing to the music that is produced in realtime while being able to follow the coding process by watching a projection of what is on their screens.
MH: Would you therefore consider yourself more a performer than a producer?
RB: No, because before I found this outlet of live coding I was heavily involved as a producer of electronic music. I was performing before I was producing, but as an electronic musician I was producing before I was performing.
MH: The process of live coding resembles jazz a lot, I find.
RB: Definitely. Improvisation is a very important part. The process shares a lot of similarities with jazz. With my software, I can’t produce the same song twice. Even if I wanted to I couldn't do it. The feature is not built in. I am trying to change that a bit, but right now I cannot reproduce the same song. And the way I am doing things and the way my setup is designed, I cannot completely foresee how things are going to come out. I hear it for the first time at the same time the audience hears it. And if I don’t like it, then I change it. That is how I navigate a performance: letting things come out, and if they are cool and surprising, I let them play, and if not, I change to the next thing as quickly as possible.
MH: This is very debatable now, but I like to talk about failure as something to strive for or as a creative possibility. What is your stance on this? By doing things like you are doing them, can there be any failure in your performance?
RB: I was part of this live coding conference recently, and Shelly Knotts, who is part of this duo called Algobabez, was one of the keynote speakers. One section of her keynote speech dealt with failure and emphasized that it is a principal and accepted part of live coding. The live coding community in particular is fine with that, I think. Depending on the system you are using, it can always be hit-and-miss. It’s possible to shut everything down by accident, for example. That can happen because you used all your RAM or stuff like that. There is technical danger, but people are cool with that and don’t really mind. I’ve tried to design my system so that it is robust against that kind of technical failure, but sometimes, from a musical perspective, I of course get things that are weird or that I’m not into at all. But while I’m changing it, I sometimes find elements that I actually want to keep playing for a little bit longer. At those times, maybe what I tried failed my initial intention, but it worked in other ways.
MH: Is this approach very different to when you are producing music? Do you give yourself more time, for example, or do you lay out sketches?
RB: In the end, dance music is formulaic, to a certain extent. But there is one big difference for me between performing and producing: when I produce, I will usually have one sample set and one rhythm for a track, because I like that distinct sample set and distinct rhythm at that time, and then I will sit and improvise with it for an hour or so, and hopefully things I like will come out of that process. I will, say, let just the drums play, then just the synth parts, then just the melody, and I will record them one by one and come back to them and edit them together so that it makes sense. So I distill the hour-long process into three or four minutes. On the other hand, when I play live, I seldom let anything play for longer than a minute before I make changes to it. I try to move very quickly. That’s a big difference for me between performing live and producing a track.
MH: As I understand it, the visual element of live coding is as important as the music itself.
RB: Yes. There is usually a projection behind the performer where you can see what he or she is doing at that very moment. It’s about exposing the process.
MH: It becomes transparent and lifts the curtain.
RB: Yes, and that’s a very important issue for our community, also to make the process of coding more accessible. Another good example of people in the community trying to make our approach more accessible is the work of Shelly Knotts, who I mentioned before, and also Joanne Armitage. They are doing these workshops for women on how to use the tools of live coding. After a couple of days, the participants are all producing music, and it shows that what we are doing is maybe not extremely simple but also not that difficult. People like Shelly and Joanne are helping a lot to demystify the whole process. Everybody can do this stuff.
MH: Is the coding more important than the result?
RB: I’ve heard some people complain that the process is emphasized to the result’s detriment. Maybe sometimes that’s true. I hope in my case that it isn’t, because I’m very concerned with the final result. But the question has to be asked: what is the result? Just the audio? Because that is not the only and sole point of live coding. Most algorave performers want to have people dancing, and they want to make something that sounds cool, but they also want to show the process and build a community. It isn’t like the process is more important, because there isn’t a single important aim but rather a variety of them. And for skill or situational or other reasons, sometimes some aims are more successfully achieved than others.
MH: The algorave community seems to be very inclusive and anti-commercial, and algorithms obviously play a big role in this whole thing. I was wondering if you’d agree that there is a certain dialectic between this and the fact that algorithms are increasingly being used for very commercial purposes?
RB: Algorithms are just tools, and anybody can use any tool for any purpose. If somebody chooses to use the tools for electronic music creation, that’s one thing, and if someone else chooses to use the tools for commercial purposes, that’s another thing. There are so many algorithms, and they are just sets of instructions. But we can take algorithms and change their purposes. For example, one of the algorithms I use for rhythm patterns, the Lindenmayer system, was originally designed to describe the growth of plants. I have taken this and am using it to produce rhythms, so it isn’t connected to its original purpose anymore at all. The other way around, if we publish something, there’s nothing that stops someone from taking our code and turning it into some kind of commercial tool. One of my main concerns at the moment is this paranoia that surrounds the use of algorithms. If people were more aware of algorithms and of where and how they are being employed, they might judge differently. There is this misunderstanding or feeling that once algorithms are involved, it gets out of control. But, after all, it’s people making the commercially-driven choices. My concern is that this is not taken into consideration enough and that algorithms get rejected completely. I want to prevent the belief that algorithms are some evil thing that can’t be trusted. I would like the discourse to be more nuanced, and I hope that algoraves and live coding are things that give people that view.
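The Lindenmayer-system idea Bell describes can be sketched in a few lines of Haskell. The rewrite rules below are invented purely for illustration—an L-system simply rewrites every symbol of a string in parallel, generation by generation, and the result can then be read as hits and rests:

```haskell
-- A toy Lindenmayer system interpreted as a rhythm. The rules here are
-- hypothetical; the mechanism is what matters: every symbol expands at once.
rules :: Char -> String
rules 'x' = "x-x"  -- a hit expands into hit, rest, hit
rules '-' = "-x"   -- a rest expands into rest, hit
rules c   = [c]    -- anything else is left unchanged

-- Apply the rules n generations deep, starting from an axiom string.
grow :: Int -> String -> String
grow 0 s = s
grow n s = grow (n - 1) (concatMap rules s)

-- Read the string as a drum pattern: True is a hit, False a rest.
toRhythm :: String -> [Bool]
toRhythm = map (== 'x')

main :: IO ()
main = putStrLn (grow 2 "x")  -- "x-x-xx-x": already an uneven, usable pattern
```

Each generation grows geometrically while staying self-similar, which is why L-systems produce rhythms that feel patterned without being strict loops—and, as Bell notes, nothing about the algorithm remembers that it was designed for plants.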