New Controllers, New Control

© Dan Phillips

This article first appeared in Electronic Musician magazine in 1992, under the unfortunate title "Control Freaks." Some of it is a bit dated as of this writing, in 2003 - but many of the concepts still seem novel, and have yet to make it into commercial products.

We're accustomed to new developments in electronic music research equating to new possibilities in sound production. New sounds, better sounds, more sounds; from the first video-game squawks and filter sweeps of early analog synthesis through FM, sampling, PCM/synth/effects combinations (L/A, AI et al.), vector synthesis, wave sequencing and so on, innovation in sound and increases in our timbral palette have been the key benefits of improved technology. There are certainly more timbres to discover, and research continues in this area; we can probably look forward, one day, to consumer instruments implementing synthesis techniques which are presently found only in academia, such as granular synthesis and physical modeling. The question remains, however: are we really getting the most out of the sounds that we already have? The answer: probably not.

A large amount of academic research is now focusing not on how sounds are produced, but on how we control them. Much of this activity investigates interaction between performer and machine, so that the performer(s) and computer(s) can share responsibility for the performance; rather than merely playing a sequence back, for instance, the computer might look at what the human performer is doing and base its output on those gestures. This broad area of research can be seen as seeking the answers to three questions: What do we control? How do we control it? And, lastly and perhaps most intriguingly: How do we know that we are controlling it?

What Do We Control?

So here's the Big Lie: synths give the performer ultimate expressive control over his/her music, because you can control all of the parameters of the sound. Twist a knob - zing! Grab a fader - thwooop! With all those parameters to play with, who could fail to make a joyous noise?

The problem with this idea is that we are usually expected to be traditional musicians, too. We still have to execute pitches and rhythms and dynamics, just like our counterparts on acoustic instruments; goodness knows, it takes a lot of practice just to get that part right. On top of that, then, how are we supposed to find the extra time, concentration, and for that matter hands, to wring out additional expressivity?

There are several partial solutions to this problem, of course. One is the Minimoog approach to synthesizer performance: the right hand does the notes, and the left hand does the controllers (this doesn't work quite as well for MIDI guitarists). The sacrifice here is obvious: you can only play half of the notes that a two-handed keyboardist can. An approach used in synth programming is to build as much expressivity into velocity and aftertouch as possible, allowing the keyboardist to keep both hands on the ivories. This still only gives you two axes of control, though, and only one for shaping the timbre over time; acoustic wind and string instruments can do better than that.

You can also use other parts of your body for expressive gestures, including breath controllers and foot pedals. Again, there are limits to the amount of skill a single human can bring to bear in real time; using both hands, pressure, breath control, and pedals at the same time leaves you with a lot to think about, probably too much. This isn't at all meant to suggest that it's pointless to perform on synthesizers, or that simple velocity and pressure can't make a line expressive; it's just that there seems to be so much untapped potential, new dimensions of malleability which remain unavailable to performers on conventional instruments.

Fortunately (could you see this coming?), some people have been working on this problem, and have made some interesting leaps of concept. Maybe they won't seem that strange if we first look again at the "Minimoog approach" above; with this technique, the player gives up playing half the notes to gain more control over the sound. Now - what if you gave up playing all of the notes?

Max Mathews' Conductor

A fairly simple and current example of this is Max Mathews' Conductor program. Mathews, a researcher at Stanford's Center for Computer Research in Music and Acoustics (CCRMA, pronounced "Karma") and one of the founding fathers of electronic music, has been working for over two decades on the concept of real-time performance with synthesizers. In addressing the subject of control, he points out that (at least in traditional Western music) a piece of music can be separated into two parts: those aspects dictated by the composer (usually including pitch), and those left up to the expressiveness of the performer. Mathews suggests that a computer can take care of making the composer-dictated, fixed parts happen while the performer is left with only the expressive parameters to worry about.

His Conductor program, which he uses in conjunction with the Radio Baton three-dimensional controller, does just that: the computer plays the notes of a piece, while the performer dictates the tempo or triggers each note, controlling volume and other parameters along the way. Mathews points out that this allows an amateur to interact with a piece of music, shape it and make it his/her own without need of learning a complex technique or fear of playing a wrong note.
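
To make the division of labor concrete, here is a minimal sketch of the general idea in Python (not Mathews' actual software; send_note below is an invented stand-in for a real MIDI output routine): the pitches are fixed in advance, and each conducting gesture supplies only the timing and the loudness.

    SCORE = [60, 62, 64, 65, 67, 69, 71, 72]   # composer-dictated pitches (a C major scale)

    def send_note(pitch, velocity):
        # Stand-in for a real MIDI output routine.
        print(f"note on: pitch={pitch} velocity={velocity}")

    class Conductor:
        def __init__(self, score):
            self.score = score
            self.position = 0

        def trigger(self, gesture_strength):
            # Called once per conducting gesture (e.g. a baton beat). The
            # performer never chooses the pitch, only when it sounds and
            # how strongly.
            if self.position >= len(self.score):
                return
            velocity = max(1, min(127, int(gesture_strength * 127)))
            send_note(self.score[self.position], velocity)
            self.position += 1

    # Simulated performance: eight gestures of varying strength set the
    # timing and dynamics of the fixed score.
    conductor = Conductor(SCORE)
    for strength in [0.4, 0.5, 0.6, 0.8, 1.0, 0.7, 0.5, 0.3]:
        conductor.trigger(strength)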

Gestural Control

This same concept can also be taken in the opposite direction: given the freedom from playing the notes, how much more attention could a virtuoso apply to expressiveness? It would be possible to do much more individual note-shaping, much more use of dynamic, continuous control. In fact, many sequencer users apply these concepts already, in recording a passage and then overdubbing pressure, mod wheel, etc.

The above techniques still make an assumption which may be slightly antiquated, at least with regard to the new potential of electronic control; namely, that the performer is working on a note-by-note basis. Some of the work on the leading edge of interactivity is challenging this idea. David Wessel, of UC Berkeley's Center for New Music and Audio Technologies (CNMAT, pronounced "Sin-mat"), is an aficionado of Don Buchla's Thunder controller. With Thunder's pressure- and position-sensitive strips and Opcode's Max software, Wessel uses small gestures to shape entire phrases. For instance, a phrase's tempo might be determined by the speed of the finger movement, while pressure controls its dynamics. The time frame of the gesture and the phrase can be very different, so a brief gesture could define a much longer phrase. The MIT Media Lab's Tod Machover has developed his Hyperinstrument concept on similar grounds, often using "real" instruments for gesture input.
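
As a rough illustration of gesture-to-phrase mapping (with invented numbers and scaling, not Wessel's actual Max patches), here is a sketch in which one brief swipe sets both the tempo and the dynamic contour of a much longer, pre-stored phrase:

    # Sketch: one brief gesture shapes an entire phrase. The phrase's tempo
    # comes from how fast the finger moved; its dynamics come from pressure.
    # The numbers and mapping are invented for illustration.

    PHRASE = [60, 64, 67, 72, 67, 64, 60, 55]   # pre-stored pitches

    def play_phrase(gesture_duration_s, gesture_distance, pressure_curve):
        # gesture_duration_s: how long the swipe lasted.
        # gesture_distance: how far the finger travelled (0..1).
        # pressure_curve: pressure samples (0..1) taken during the swipe.
        speed = gesture_distance / max(gesture_duration_s, 0.01)
        beat_seconds = 0.5 / max(speed, 0.1)          # faster swipe -> faster tempo
        events = []
        for i, pitch in enumerate(PHRASE):
            # Stretch the short pressure curve across the whole phrase.
            p = pressure_curve[int(i * len(pressure_curve) / len(PHRASE))]
            velocity = int(30 + p * 97)
            events.append((round(i * beat_seconds, 3), pitch, velocity))
        return events

    # A 0.4-second swipe defines a phrase lasting several seconds.
    for event in play_phrase(0.4, 0.3, [0.2, 0.5, 0.9, 0.6]):
        print(event)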

While the previous examples deal primarily with manipulating pre-determined material in one way or another, similar techniques can be applied to completely live interaction, in which the performer plays and the computer responds to the input in real time. Jean-Claude Risset, for instance, created his Duet for One Pianist using a Yamaha Disklavier MIDI piano and Opcode's Max software. As the pianist plays, his notes are sent via MIDI to Max, which then processes them and sends a response, in the form of new MIDI notes, back to the same Disklavier. The effects range from inverting around a specified pitch, to altered arpeggiations, to harmonic "stretching" of the performer's chords.
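
Two of those transformations are easy to sketch in ordinary code; the fragment below (plain Python rather than Max, and not Risset's actual patches) shows inversion around a specified pitch and a simple stretching of the intervals within a chord:

    # Sketch of two of the transformations described above: inversion around
    # a fixed pitch, and "stretching" the intervals of a chord. In the real
    # piece these are Max patches responding to live MIDI from a Disklavier.

    INVERSION_AXIS = 60   # invert around middle C

    def invert(pitch):
        # Reflect a pitch around the axis: notes above middle C come back
        # the same distance below it, and vice versa.
        return 2 * INVERSION_AXIS - pitch

    def stretch_chord(pitches, factor=1.5):
        # Widen each interval above the lowest note by the given factor.
        root = min(pitches)
        return [int(round(root + (p - root) * factor)) for p in pitches]

    played = [60, 64, 67]                 # pianist plays a C major triad
    print("response (inverted):", [invert(p) for p in played])
    print("response (stretched):", stretch_chord(played))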

How Do We Control It?

The problem of having too many synth parameters to keep track of can also be addressed from the controller side, by using a macro controller to affect multiple parameters at once. For instance, CCRMA's Perry Cook has developed a very complex model of the human voice, called SPASM (see the Computer Musician column from September 1991 for a more complete description of this program). SPASM features a very large number of programmable parameters, which make for an accurate model but are not optimized for real-time control.

To address this, he's developed a few tools which allow control of many related parameters with a single gesture, including the capability to crossfade between different vowels and consonants. The most impressive, however, is a single slider labeled Effort, which simultaneously controls brightness, volume, vibrato depth, and amount of randomness in vibrato; moving this slider up on a sustained note creates an amazingly realistic crescendo effect, which actually sounds like the singer is working harder. Defining this kind of musically useful grouping is the important part of designing macro controllers.
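
The essential mechanism is a one-to-many mapping: a single incoming value fans out to several related parameters, each scaled so that they move together in a musically plausible way. A minimal sketch, with invented parameter names and curves rather than SPASM's actual internals:

    # Sketch of a macro controller: one "effort" value (0..1) fans out to
    # several related synthesis parameters at once. The curves are invented;
    # the point is the one-to-many mapping, not the particular numbers.

    def effort_macro(effort):
        effort = max(0.0, min(1.0, effort))
        return {
            "volume":             effort,                     # louder with more effort
            "brightness":         0.2 + 0.8 * effort ** 1.5,  # brightness rises faster near the top
            "vibrato_depth":      0.5 * effort,
            "vibrato_randomness": 0.1 + 0.4 * effort,         # a harder-working voice wavers more
        }

    for e in (0.1, 0.5, 0.9):
        print(e, effort_macro(e))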

When learning how to play a new controller - be it a violin or Air Drums - the musician typically has to learn a whole new vocabulary of gestures particular to that instrument. Wouldn't it be great, suggests CNMAT's Wessel, if the instruments instead molded themselves to the musician's own gestures? He calls the concept "instruments that learn," and is presently working on implementing it using neural net technology. The nets "learn" a mapping between specified inputs and desired outputs, allowing them - for instance - to learn the gestures of a specific conductor, so that the system can tell the difference between a downbeat and an upbeat, or between beats two and three. Another application is tracking on MIDI guitars, a notorious problem; a net was able to successfully distinguish between a number of different motifs, and Wessel foresees the ability to analyze a particular player's style for more accurate tracking in general.
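
In outline, an instrument that learns is doing supervised learning on gesture data: collect labeled examples, train a network, then use it live. The toy below uses the simplest possible learner, a single perceptron, rather than anything resembling CNMAT's actual system, and trains it to tell a downbeat from an upbeat using two crude, invented features: the hand's net vertical and horizontal motion.

    # Toy "instrument that learns": a single perceptron learns to distinguish
    # a downbeat from an upbeat, given the hand's net vertical and horizontal
    # motion. The training data and features are invented for illustration.

    TRAINING = [
        # (vertical motion, horizontal motion) -> 1 = downbeat, 0 = upbeat
        ((-0.9,  0.1), 1), ((-0.7, -0.2), 1), ((-0.8,  0.3), 1),
        (( 0.8,  0.2), 0), (( 0.6, -0.1), 0), (( 0.9,  0.0), 0),
    ]

    weights = [0.0, 0.0]
    bias = 0.0

    def predict(features):
        total = bias + sum(w * x for w, x in zip(weights, features))
        return 1 if total > 0 else 0

    # Perceptron learning rule: nudge the weights whenever we guess wrong.
    for _ in range(20):
        for features, label in TRAINING:
            error = label - predict(features)
            if error:
                weights[:] = [w + 0.1 * error * x for w, x in zip(weights, features)]
                bias += 0.1 * error

    print(predict((-0.5, 0.2)))   # a new downward gesture -> 1 (downbeat)
    print(predict(( 0.4, 0.1)))   # a new upward gesture   -> 0 (upbeat)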

Robert Moog has, for some time now, been interested in the possibilities of adding new control dimensions to the traditional keyboard. These "multiple-touch-sensitive" keyboards have previously sensed the finger's 2-dimensional position on the key, as well as velocity and pressure. A new version prepared in collaboration with University of Chicago composer John Eaton may be ready soon, adding sensitivity to the area of the flattened fingertip. Although they would certainly demand a somewhat more refined technique, these instruments promise to combine the benefits of sophisticated polyphonic control found in keyboards with note-shaping capabilities normally reserved for monophonic instruments.

Mark Coniglio, of the California Institute of the Arts, has developed a truly unique control idea: the MIDI Dancer. Using flex sensors, the amount of bending at various parts of the dancer's body is detected and sent via radio to a receiver, which then converts the data to MIDI. Interactor, real-time MIDI processing software developed by Morton Subotnick and Coniglio (now published by Dr. T's), uses the dancer-data to modify playback of a score in any number of ways. Notes can be triggered and tempo determined, timbres and volumes changed, lighting altered, video disks controlled. And Mark made me promise to say - sorry, guys, but it's not a commercial product.
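
The data path itself is straightforward to sketch: each flex sensor's bend value becomes continuous-controller data, and all of the interesting decisions live in how the receiving software maps it. A rough illustration, with invented scaling and a print standing in for real MIDI output:

    # Sketch of the data path: one flex sensor's bend value (0 = straight,
    # 1 = fully bent) becomes a MIDI continuous-controller message. What the
    # receiving software does with it (tempo, volume, lighting) is up to the
    # mapping, which is where the musical decisions live.

    def flex_to_cc(bend, controller_number=1, channel=0):
        value = max(0, min(127, int(bend * 127)))
        status = 0xB0 | channel                 # control-change status byte
        return (status, controller_number, value)

    for bend in (0.0, 0.25, 0.5, 1.0):
        status, cc, value = flex_to_cc(bend)
        print(f"{status:02X} {cc:02X} {value:02X}   (bend = {bend})")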


Force-Feedback Controllers

I'll admit it - I was a hold-out for quite some time. When my concert-pianist girlfriend told me that my synths felt like toys, I'd counter that they were more responsive than a piano, offering much faster action and aftertouch to boot. Last fall, however, I couldn't refuse a bargain on a digital piano; soon after, my studio was disrupted for a period of time, and I ended up playing that piano a lot. When I finally got everything back together and started working with the synths again, they felt like...well...no, I can't say it. But the piano just feels better, darn it; more solid, more real (and it doesn't even have aftertouch)!

Even so, weighted-key controllers are not always optimal. For drum programming or other fast passagework, it really can be desirable to have a lighter touch; and how much sense do weighted keys make when playing a non-velocity sensitive organ sound? With these issues on my mind, I was introduced to CCRMA graduate student Brent Gillespie, who's been thinking about them a lot.

Gillespie is working on building a force-feedback system into a keyboard controller (similar work has also been done by Claude Cadoz et al. at France's ACROE). For starters, this will allow the feel of the keys to be programmable; when you call up a piano program, they will feel weighted, and when you then select an organ, the weight can disappear. This is almost incidental, however, to his real objective: the re-establishment of a physical relationship between the performer and the instrument.

It's been suggested that keyboardists appreciate weighted keyboards for purely anachronistic reasons, that they're simply accustomed to the way that a piano feels. Gillespie, however, maintains that it's not simply weight that we're looking for. He points out that in the real world, our senses often act to reinforce one another with redundant information; as I move my hand over the surface of a bottle, I feel the glass with my fingers, see my hand moving, and hear the bottle resonate slightly. When dealing with computers, on the other hand, the senses often work separately, with no tactile input at all. For instance, we may use a mouse to move a cursor on the screen, using our eyes for input (seeing the cursor) and our hand for output (moving the mouse). Wouldn't it be nice, he asks, to get some tactile information from the computer - so that, as you moved the mouse, you could somehow feel the borders of the windows, buttons, etc?

Similarly, he suggests that it is valuable to feel, as well as hear, the results of playing on an instrument. When an acoustic instrument such as a horn or string sings, it is resonating particularly well, and the performer can feel that tactile response and use it to help find and hold the sweet spots.

CCRMA's Chris Chafe, in fact, has put together a device that serves as a "lip controller" for a physical model of a brass instrument (physical modeling is a type of synthesis based on re-creating the actual physical properties of an instrument; for more information, see the Computer Musician column of September 1991). The controller is a narrow metal bar mounted on top of a small speaker driver; the speaker, in turn, is driven by the output of the brass model. Since the bar is directly connected to the speaker, you can feel as well as hear the output.

As you press down or let up to change various parameters of the "lip," the pitch of the model moves among its harmonics; between each harmonic, the model breaks up, making a blatty noise. To play the instrument well, you have to avoid settling on the break points and pass through to the sweet spots. Fortunately, the breaks generate a lot of subharmonics, which are much easier to feel than the normal brass tones, so that they stick out as much to your fingertip as to your ear; with touch and hearing working together, it's much easier to play well than by hearing alone.

So how would this work on a piano-style keyboard? When you press down on the keys, they will press back, according to programmable parameters. For instance, if you were playing a harpsichord program, the key might depress relatively freely down to a point, and then give a "pluck" sensation. As Gillespie puts it, "you don't just play it - it plays you!"

If you were playing a pad layered with a bell sound that came in only at high velocities, the response might be relatively mild unless the bell was played, at which point the keys would give the sensation of hitting something. With an acoustic bass sound, the key might push or vibrate slightly with the "woof" as the string settles. The age-old (or at least decade-old) question of when velocity ends and aftertouch begins could be answered by the key beginning to vibrate slightly when aftertouch was applied - and if aftertouch was activating an LFO, you might even have the key vibrate at that frequency.
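
A sketch of what the programmable part might look like: a function mapping key depth (and aftertouch) to a reaction force, with a different profile for each program. The shapes and numbers below are invented for illustration; a real system involves actuators, control loops, and far more careful physics than this.

    # Sketch: a programmable force profile for one key. Given how far the key
    # is depressed (0 = at rest, 1 = bottomed out), return the force pushing
    # back. The shapes are invented; the point is that the feel is just
    # another patch parameter.

    import math

    def key_force(program, depth, aftertouch=0.0, t=0.0):
        if program == "organ":
            return 0.1 * depth                      # light, nearly constant resistance
        if program == "piano":
            return 0.6 * depth                      # heavier, spring-like weighting
        if program == "harpsichord":
            # free travel, then a brief "pluck" bump partway down
            bump = 0.8 if 0.55 < depth < 0.65 else 0.0
            return 0.15 * depth + bump
        if program == "pad_with_lfo":
            # aftertouch makes the key vibrate at the LFO rate (say 5 Hz)
            return 0.4 * depth + 0.2 * aftertouch * math.sin(2 * math.pi * 5.0 * t)
        return 0.3 * depth

    # Sample the harpsichord profile across the key's travel.
    for d in (0.2, 0.4, 0.6, 0.8, 1.0):
        print(d, round(key_force("harpsichord", d), 3))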

Gillespie points out that our physical reflex time can be significantly faster than our mental response time. After some practice, the force-feedback controller might allow fingers to respond to some stimuli directly, such as avoiding the unwanted application of aftertouch. As sensory input is increased, learning is reinforced; in such instances as the brass model described above, one could become a better player, faster.

Aside from objective benefits, the force-feedback could introduce another major improvement into synthesizers: that elusive "realness." On a standard synthesizer keyboard, Gillespie notes, you get no physical feedback which directly relates to the sound parameters; with his keyboard, you could perform a gesture, and then immediately feel the results of that gesture. Being able to feel an exchange of energy with an instrument, appropriate to a specific synthesized sound - not just a piano - would give us literally a whole new sense (touch) of reality. The results could be quite dramatic.

More wild and futuristic applications of force-feedback systems could eventually involve extensions of touch-screen systems. Improvements in piezoelectric film, a material that changes shape when electricity is applied (and vice versa), might eventually allow a finely varied, configurable surface texture, so that you could in fact feel windows, graphic buttons and sliders, etc. This could in turn allow virtual mixing boards and synth control panels which you could re-arrange to suit your taste or current project, while actually maintaining the tactile sensation of physical controls.

Controlling the Future?

With advances like these, electronic instruments could finally begin to overtake acoustic ones in the arena of expressivity. No, we may not have achieved the perfect sound yet; on the other hand, an amateur (like me) scraping a bow across a violin doesn't sound all that great, either. What we really need to concentrate on is not raw sound power but the potential for control - on making truly expressive instruments, and on making the instruments that we already have truly expressive.

 

Dan Phillips does user interface design, documentation, and assorted other stuff for Korg Research and Development - and he really had fun going back to school for this article. Thanks to Julius Smith III for several mind-bending conversations which helped to solidify these concepts.