The Cyber Slippers

Travis West


Professor Ricardo Dal Farra


The most widespread interface for playing electronic music is probably the piano keyboard. While this is adequate for music built up from discrete notes, the keyboard is radically inadequate for the performance of electroacoustic music, which, as Denis Smalley notes, embraces the musical nature of all sounds and their spectral evolution over time. With a keyboard, spectral evolution is not possible; once a key is down, nothing can be done to change the sound. Smalley observes that traditional instruments “were conceived and developed for an harmonic music… The future of live performance must lie with new instruments” (1986). The Cyber Slippers are a new interface meant to address this need.

Originally conceived to sonify the movement of a person walking, the Cyber Slippers incorporate eight pressure sensors, two flex sensors, and two 3-axis accelerometers into a pair of felt slippers, allowing the wearer to control a musical algorithm with his or her feet.

Overview of foot-centric interfaces

Sensor interfaces made to be attached to the feet are well documented in the literature, and have been used for a variety of applications including music. Hockman et al. (2009) used a 2-axis accelerometer to estimate a runner’s pace in order to synchronize music to their movement using a phase vocoder. Similarly, Moens et al. (2010) used a 3-axis accelerometer to estimate a runner’s pace and select a song from a BPM-aware playlist to be synchronized with the runner. Choi and Ricci (1997) built a pair of insoles with force sensitive resistors (FSRs) in the toe and heel to detect a user’s walking state in a virtual environment. Using a similar sensor interface to Choi and Ricci, Papetti et al. (2011) integrated FSRs in the toe and heel of a sandal, allowing it to be used to play percussive sounds from a computer. Papetti et al. also integrated audio-tactile feedback into the sandals. Paradiso et al. (2000) built a very robust system using a plethora of sensing technologies, including FSRs in the toe and heel, flex sensors, accelerometers, gyroscopes, magnetometers, dynamic pressure sensors, capacitive sensing, and sonar, for use in interactive dance.

The Cyber Slippers are very similar to the system by Choi and Ricci, having three FSRs at the toe and one at the heel. To these the Cyber Slippers add a 3-axis accelerometer near the heel, and a single flex sensor running from the tip of the toes toward the middle of the foot. The flex sensor allows bend in the sole of the foot, such as when standing on the tips of the toes, to be detected. This placement is similar to that of the flex sensors by Paradiso et al., but only one flex sensor is used in each foot instead of two.

The digital musical instrument

The design of digital musical instruments is an important and active field of study at the intersection of human-computer interaction and electronic music. It consequently attracts a very diverse group of researchers. The main publication in this field is the proceedings of the International Conference on New Interfaces for Musical Expression, also known as the NIME proceedings. NIME has been held every year since its inception in 2001.

Unlike an acoustic instrument, where the method of playing the instrument and its sound are both tied to its physical nature, a digital musical instrument’s interface can have as little or as much to do with its sound as the designer chooses. This decoupling of the interface from sound production means that every aspect of a digital musical instrument, from the way it looks to the way it is played to the way it sounds, becomes an aesthetic decision (Hinrichsen et al.). Unlike acoustic instruments, which are constrained by the causal interrelationship of their physical parts, a digital musical instrument is constrained only by the choices of its designer.

In particular, decisions regarding how a performer’s gestural input to a digital musical instrument’s interface should be mapped to its sonic output are of vital importance. Different mapping strategies have profound impact on how rewarding it is to play an instrument, its potential to foster virtuosity, and its impact on the audience, among other important considerations (Hunt et al. 2002).

The Cyber Slippers interface

The Cyber Slippers use four FSRs and one flex sensor in each slipper: three FSRs at the toe and one at the heel sense the pressure on the bottom of the foot, while the flex sensor detects bend in the toes, such as when standing on the tips of the toes. In addition, a small perforated prototyping board is attached to each foot near the ankle, which includes a Freescale MMA8451 3-axis accelerometer, an analog multiplexer, and a female header pin block. The multiplexer’s inputs 0–3 are connected to 6mm metal snap-on buttons sewn onto the board, which are also used to attach the board to the ankle. 0.6mm crimping beads are soldered onto both tabs of each FSR and flex sensor. Conductive thread is sewn through the crimp beads on each sensor to one of the snap-on buttons on the electronics board, forming a stable electrical connection between the board and the sensors.

An Arduino board is attached to the user’s belt, and connected to each of the small boards on the slippers with 24AWG wires via the female header pin blocks. This allows the slippers to be disconnected from the Arduino, which is very convenient when putting the slippers on and taking them off. The Arduino scans through the multiplexer channels to read the sensors, and collects data from the accelerometers using the I²C protocol.

Hunt et al. (2002) propose a three-layered approach to mapping performers’ gestures to sound. In the first of these layers, a number of functions are performed to derive more meaningful abstract information from the basic sensor inputs. In the Cyber Slippers, the signals from the three FSRs in the toe of each slipper are averaged and interpreted as overall toe pressure. The difference in pressure between the sensor on the inside of the toe area and the sensor on the outside gives the continuous roll of the foot from inside to outside. The difference between the overall toe pressure and the pressure at the heel is interpreted as a continuous pitch of the foot from front to back while the foot is on the ground. The absolute values of all three axes of each accelerometer are averaged to give the overall magnitude of acceleration in any direction.
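As a sketch, the feature derivations described above might look like the following. The function names, and the assumption that sensor readings arrive as plain numbers, are illustrative only; they are not taken from the actual implementation.

```python
# First mapping layer: derive abstract features from raw sensor values.
# All names and value ranges are assumptions for this sketch.

def toe_pressure(fsr_inner, fsr_middle, fsr_outer):
    """Average the three toe FSRs into one overall toe-pressure value."""
    return (fsr_inner + fsr_middle + fsr_outer) / 3.0

def foot_roll(fsr_inner, fsr_outer):
    """Inside-to-outside roll: difference between inner and outer toe FSRs."""
    return fsr_inner - fsr_outer

def foot_pitch(toe, heel):
    """Front-to-back pitch while grounded: toe pressure minus heel pressure."""
    return toe - heel

def accel_magnitude(ax, ay, az):
    """Overall acceleration: mean of the absolute values of the three axes."""
    return (abs(ax) + abs(ay) + abs(az)) / 3.0
```

Each function reduces several raw channels to one perceptually meaningful value, which is the point of this layer: the later layers never see individual FSRs, only toe pressure, roll, pitch, and acceleration magnitude.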

Tanaka (2010) suggests a tripartite mapping structure which can be applied to any single continuously varying sensor input. In the Cyber Slippers, first a number of thresholds are implemented to judge whether or not the foot is touching the ground, and if so which parts of it are touching the ground (i.e. toe and/or heel), to give a binary mapping. Whenever the toe or heel touches the ground, the magnitude of acceleration is sampled to give a basic parametric mapping. The continuously variable pressure and acceleration values are used for expressive mappings.
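A minimal sketch of this tripartite structure for one foot might look like the following. The threshold value and the state-tracking scheme are assumptions for illustration, not the instrument's actual parameters.

```python
# Tanaka-style tripartite mapping for one foot: a binary ground-contact
# decision, a parametric value sampled at the moment of contact, and the
# continuous values passed through for expressive mappings.

CONTACT_THRESHOLD = 50  # assumed ADC threshold for "touching the ground"

class FootMapper:
    def __init__(self):
        self.on_ground = False
        self.sampled_accel = 0.0  # parametric value, held between contacts

    def update(self, toe_pressure, heel_pressure, accel_magnitude):
        # Binary mapping: is any part of the foot touching the ground?
        touching = (toe_pressure > CONTACT_THRESHOLD or
                    heel_pressure > CONTACT_THRESHOLD)
        if touching and not self.on_ground:
            # Parametric mapping: sample acceleration at the contact onset.
            self.sampled_accel = accel_magnitude
        self.on_ground = touching
        # Expressive mapping: continuous values pass through unchanged.
        return touching, self.sampled_accel, (toe_pressure, accel_magnitude)
```

Holding the sampled value until the next contact onset means a single stomp yields one stable parametric value, while the continuous streams remain free for ongoing expressive control.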

The Cyber Slippers’ sound synthesis

The second layer of mapping in the model proposed by Hunt et al. is to map the sound production algorithm’s input parameters to more meaningful abstractions in the output sound. This is similar to the approach seen in some recent commercial music software, where the meaningful abstractions are often called macro knobs. In the Cyber Slippers, there are two sound generators: an FM algorithm creates an inharmonic percussive sound, and a granular synthesis algorithm creates high frequency textures reminiscent of swishing around broken glass. The FM instrument’s many parameters are mapped to just a few abstractions, such as brightness and duration. The granular synthesis algorithm is controlled by random number generators, whose ranges are mapped to a single volume/density abstraction and a pitch abstraction.
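The macro-knob idea can be sketched as a single abstract control fanning out to several synthesis parameters. The parameter names and ranges below are invented for illustration and do not reflect the actual FM patch.

```python
# Second mapping layer, sketched as a "macro knob": one abstract
# brightness value in [0, 1] fans out to several hypothetical FM
# synthesis parameters.

def fm_brightness_macro(brightness):
    """Map a 0-1 brightness macro to hypothetical FM synthesis inputs."""
    b = min(max(brightness, 0.0), 1.0)  # clamp to the macro's range
    return {
        "mod_index": 1.0 + 9.0 * b,         # deeper modulation = brighter
        "mod_ratio": 1 + round(3 * b),      # richer sideband spacing
        "filter_cutoff_hz": 500.0 + 7500.0 * b,
    }
```

The performer-facing layer only ever touches `brightness`; how it spreads across the underlying parameters is an internal design decision, which is exactly what makes the second layer a distinct concern.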

The final layer of mapping is between the abstract parameters of the interface and the abstract parameters of the synthesis algorithm. In the Cyber Slippers, the FM percussion is activated when the foot touches the ground, with its brightness modulated by the sample of the acceleration magnitude taken when the foot touches the ground. The duration of the sound is shortened by a combination of pressure on the heel and pressure on the toe. The granular synthesizer’s volume is proportional to the overall acceleration magnitude, and its pitch is modulated by acceleration parallel to the length of the foot. A frequency shifted delay line is controlled by the heel to toe pitch (amount of delay) and inside to outside roll (frequency shift) of the foot, as well as the bending of the toe (amount of feedback).
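Pulling these connections together, the third layer might be sketched as a single function from interface abstractions to synthesis abstractions. The scaling choices here are assumptions; the actual instrument's ranges are not documented in this text.

```python
# Third mapping layer: interface abstractions in, synthesis abstractions
# out. The 1/(1 + pressure) duration curve and the direct pass-throughs
# are illustrative assumptions.

def third_layer(accel_mag, toe_pressure, heel_pressure,
                foot_pitch, foot_roll, toe_bend, contact_accel):
    return {
        # FM percussion: brightness from acceleration sampled at contact;
        # duration shortened as combined foot pressure grows.
        "fm_brightness": contact_accel,
        "fm_duration": 1.0 / (1.0 + toe_pressure + heel_pressure),
        # Granular texture: volume follows overall acceleration magnitude.
        "grain_volume": accel_mag,
        # Frequency-shifted delay: pitch -> delay time, roll -> frequency
        # shift, toe bend -> feedback amount.
        "delay_time": foot_pitch,
        "freq_shift": foot_roll,
        "feedback": toe_bend,
    }
```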

The three-layered approach to mapping makes the correspondence between sensor input and sound output quite complex; most sensors are mapped to two or three different parameters. This kind of complex multi-parametric mapping is shown by Hunt and Kirk (2000) to promote a user’s ability to improve on an instrument, and to support better performance in more complex tasks.

Works Cited

Hunt, Andy, and Ross Kirk. 2000. “Mapping Strategies for Musical Performance.” In Trends in Gestural Control of Music, 231–58. Paris: IRCAM.

Hunt, Andy, and Marcelo Wanderley. 2002. “Mapping Performer Parameters to Synthesis Engines.” Organised Sound 7 (2): 97–108.

Smalley, Denis. 1986. “Spectro-Morphology and Structuring Processes.” In The Language of Electroacoustic Music, edited by S. Emmerson, 61–93. London: MacMillan Press.

Tanaka, Atau. 2010. “Mapping Out Instruments, Affordances, and Mobiles.” In Proceedings of the International Conference on New Interfaces for Musical Expression, 15–18.
