A structured improvisation and meditation on the nature and meaning of live coding as a musical performance practice, this piece documents the first impressions of a beginner as they dive into the world of live coding and SuperCollider.
By examining the design of digital musical instruments through the discipline of string puppetry, the Electroacoustic Marionette is meant to uncover new approaches to sound synthesis and gestural mapping. In this report I describe the development of an early prototype of this instrument: the hardware, the sound synthesis algorithm, and the mapping between the two.
Unlike acoustic musical instruments, whose design and functionality are constrained by the physical nature of sound and the instruments’ construction, digital musical instruments (DMIs) can take any form and produce any sound (Magnusson 2010). For a designer and maker of DMIs, this poses the problem of deciding what form new instruments should take (the construction of an interface), what sounds they should produce (the choice or design of sound and/or music synthesis algorithms), and how the interface relates to the sounds (the mapping) (Nort et al.).
My objective with this project was to explore some of the issues involved with making a digital musical instrument which is based on an existing acoustic musical instrument. I was interested to explore the balance between giving a faithful digital recreation of the experience of playing the acoustic instrument and taking advantage of the possibilities that become available when the interface and the sound it produces become decoupled as they are in a digital musical instrument.
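To make the mapping problem concrete, here is a minimal illustrative sketch, not the instrument's actual implementation: a hypothetical one-to-many mapping in which a single sensor reading (say, the tension of one puppet string, normalized to 0..1) drives several synthesis parameters at once. The function name, parameter names, and curves are all invented for illustration.

```python
def map_string_tension(tension):
    """Hypothetical one-to-many mapping for a DMI: one normalized
    sensor value (0..1) controls several synthesis parameters.
    The specific parameters and curves here are illustrative only."""
    return {
        "amplitude": tension,                       # linear: louder as the string tightens
        "cutoff_hz": 200.0 + 4000.0 * tension ** 2, # squared curve brightens quickly at high tension
        "vibrato_rate_hz": 4.0 + 2.0 * tension,     # subtle speed-up of vibrato
    }
```

One-to-many mappings like this are one common answer to the decoupling of interface and sound: a single physical gesture can shape many aspects of the resulting timbre at once.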
I started studying electronic music last year in Concordia’s electroacoustic music composition program. My background is in jazz piano performance, so I was naturally drawn to live electronic music performance as a major field of interest, and it didn’t take me long to figure out that the interfaces available to me were inadequate for performing the kind of music I was learning about at Concordia.
Professor Ricardo Dal Farra
The most widespread interface for playing electronic music is probably the piano keyboard. While this is fine for playing music built up from notes, the keyboard is radically inadequate for the performance of electroacoustic music which, as Denis Smalley notes, embraces the musical nature of all sounds and their spectral evolution over time. With a keyboard, spectral evolution is not possible; once a key is down nothing can be done to change the sound. Smalley observes that traditional instruments “were conceived and developed for an harmonic music… The future of live performance must lie with new instruments” (1986).
A piece in sonata form made from the sound of washing the dishes. Rather than the more traditional tonic and dominant, the first theme is presented in the “key” of water, and the second in the “key” of metal pot, then in the “key” of water.
“This Is Just One Voice” was performed during Studio 7 at Concordia University, Montreal QC, on March 27th 2015 with four speakers surrounding the audience and the lights turned low.
The poetry for this piece was developed in collaboration with the users of Concordia’s Fine Arts Reading Room during my artist residency in February 2015. Gratitude to the Reading Room. Here is the text:
This is Just One Voice
by Travis West,
in collaboration with the users
of Concordia’s Fine Arts Reading Room
I invited people to abuse my gravity
but when many voices speak at once
their meaning tends to blur.
Written in 2013 while working cash at the local gas bar.
This is a composition using only sounds from the Doepfer Modular system. Early electronic music appropriated lab equipment designed for physics and psychology experiments and turned it toward artistic ends. Over the years, the essential tools for analog electronic synthesis have largely remained the same: there are sources, and there are processors. What has changed is the tools have gotten smaller and more affordable, so that more musicians than ever can build complex electronic instruments from reconfigurable modules.
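The source/processor split carries over directly to digital practice. As a purely illustrative aside (not part of the Doepfer piece itself), the sketch below stands a sine oscillator in for a source module and a one-pole low-pass filter in for a processor, “patched” together by function composition the way a cable patches modules.

```python
import math

SR = 48000  # sample rate in Hz

def sine_source(freq, n, sr=SR):
    """Source: a bare oscillator, loosely analogous to a modular VCO."""
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

def lowpass_processor(signal, alpha=0.1):
    """Processor: a one-pole low-pass, loosely analogous to a modular VCF.
    alpha sets how quickly the filter tracks its input (higher = brighter)."""
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)  # one-pole smoothing step
        out.append(y)
    return out

# "Patch" the source into the processor, as one would with a patch cable.
patch = lowpass_processor(sine_source(440.0, SR))
```

Complex instruments arise the same way in software as in hardware: by chaining and reconfiguring small sources and processors rather than building one monolithic synthesizer.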
A piece in three movements, “Tuesday Lab Experiments” represents my exploration of some of the sonic possibilities of analog modular synthesis. In each of the three movements, titled Spectral Glow, Dance, and Storm, I navigate the sounds of the Doepfer modular system.
This has been the most ambitious project I’ve undertaken in this genre; I’ve used the software at my disposal more during this assignment than ever before. I discovered all of the effects and techniques heard below while composing. I feel like I’ve learned a great deal about the possibilities of some of the software I own, but I know there are plenty more assignments to follow that will stretch my knowledge further.
I spent most of the composition process building the individual sound objects — I made about twenty, and I had a difficult time piecing them together into a single composition.
The source material for this assignment was any spoken passage (I recorded a short excerpt from “I Get a Hit”), from which only the unvoiced consonants could be used. The only processing allowed for this project was cut, copy, paste, and changing the amplitude. With these processes alone, a surprising variety of effects is possible.
At first, I experimented mostly with pasting whole consonant sounds and ordering them to make rhythmic motives. However, I quickly realized that, by taking a very small section of any given sound and pasting it repeatedly, I could make a buzzing tone with a perceptible pitch.
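The pitch of that buzz is set by the repetition rate: a slice pasted end to end every 5 ms produces a periodic signal with a fundamental of 1 / 0.005 s = 200 Hz. A minimal sketch of the effect, in Python with NumPy rather than the original editing software, and with synthetic noise standing in for a recorded unvoiced consonant:

```python
import numpy as np

sr = 48000                       # sample rate in Hz
rng = np.random.default_rng(0)

# Stand-in for an unvoiced consonant such as "s": a short burst of noise.
consonant = rng.standard_normal(sr // 2)

# Cut a very small section -- 5 ms -- and paste it end to end.
period = int(0.005 * sr)         # 240 samples = 5 ms
segment = consonant[:period]
buzz = np.tile(segment, 200)     # one second of the repeated slice

# Because the slice repeats every 5 ms, the result is periodic, so its
# spectrum collapses onto harmonics of 1 / 0.005 s = 200 Hz: a buzzing tone.
spectrum = np.abs(np.fft.rfft(buzz))
fundamental_hz = sr / period     # 200.0
```

The content of the slice is almost irrelevant to the pitch; any fragment, repeated fast enough, reads as a tone at the repetition frequency, with the fragment’s spectrum shaping only the timbre.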