The Virtuoso Sequencist

Tips and Tales from Modern-Day Masters

© Dan Phillips

"The study of the sequencer is now-a-days so general, and good sequencists so numerous, that mediocrity on this instrument is no longer endured...In this volume will be found the exercises necessary for the acquirement of agility, independence, strength, and perfect evenness in the quantization, as well as suppleness of the controllers - all indispensable qualities for fine execution...After it has been thoroughly mastered, it may be repeated from time to time, and difficulties will disappear as if by enchantment, and that beautiful, clear, clean, pearling execution will have been acquired which is the secret of distinguished artists..."

- the introduction to C. L. Hanon's The Virtuoso Pianist, slightly revised

Well, unlike Hanon's classic book of piano exercises, this article can't promise you absolute mastery of your instrument (these days, truth-in-advertising laws are much more strict). What it can deliver on is the "secrets of distinguished artists" part, with a panel of experts brought together in a master class for your edification and inspiration. And who do we have today? Roll back the curtains, Miss Penelope, and please welcome: Thomas Dolby, a founder and icon of intellectual synth-pop; Vince Clarke, currently of Erasure but also composer/sequencist behind Yaz and the early Depeche Mode, seminal synth-pop projects all; Trent Reznor of Nine Inch Nails, a relative newcomer bringing his energy and considerable musicianship to the industrial/alternative scene; synthesist extraordinaire Larry Fast, well-known for his session work with Peter Gabriel and others, in addition to his own Synergy; Jan Hammer, longtime fusion/rock keyboard great, sequencing pioneer, and composer for film and TV; and Suzanne Ciani, pioneer of electronic music in advertising and Private Music recording artist. Here's what they all have to say...

Don't Get Trapped by Details

One of the greatest attractions of working with a sequencer is the ability to correct every single nuance of a performance. Given the necessary patience and time, a take that previously would have been kept as "good enough" can be tweaked to perfection. Not everyone, however, agrees that perfection is necessarily a good thing. And since this power is available, it's easy to get carried away, insisting that everything be just right - which is especially unproductive in the demo stages of a song. I know that I've sometimes spent an entire evening on four bars of a solo, instead of simply roughing the whole thing out - only to discover later that the solo didn't really fit there anyway. As Thomas Dolby points out, "I think that when you indulge in a lot of attention to detail, it's very easy to get protective of the work that you've done, and you can't really see the wood for the trees. Someone else could come in and make one big swipe and really improve your work, and you wouldn't see it because you're so wedded to those details."

To avoid this syndrome, Dolby will create a number of alternative versions of the same song. "I end up with different letters of the alphabet being different approaches to the song [using Vision's ability to call up a different sequence with each letter on the Macintosh keyboard]. Then, at the push of a button I can make comparisons...For example, with Pulp Culture, there are probably half a dozen different mixes, with slightly different grooves and drum patterns and things. At that point I had my Mac in my dining room, in the middle of my house, and I would just wander around and make some phone calls, hit button E, watch some TV, try Q, and over a period of time my instincts would tell me that F was the one."

Perfection vs. Imperfection

Once you've settled on the best overall approach, you can go about fine-tuning your tracks. Quantization, and other forms of rhythmic correction, tend to be a large part of that process. What you see on your sequencer screen and what comes out of your speakers may not always be exactly correlated, however. The MIDI response time of an instrument - the time that it takes to start a note after receiving a note-on event - is never instantaneous, and can sometimes be significant (upwards of 10 ms per note). For most purposes, this can be compensated for somewhat by shifting a track slightly ahead of the beat, but the delays are not always constant, making them hard to correct completely: they vary according to the number of oscillators being used and the number of notes played.
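
If your sequencer lets you get at the raw event list (or you enjoy rolling your own tools), the compensation amounts to nothing more than subtracting a measured delay from every event's time. Here's a minimal sketch in Python, assuming simple (tick, note, velocity) tuples and a made-up 8 ms response time - the numbers are illustrative, not measurements of any particular synth:

# Pre-compensating for a slow MIDI instrument by sliding its track earlier.
TICKS_PER_MS = 0.96           # 480 ppqn at 120 bpm works out to about 0.96 ticks per millisecond
MEASURED_DELAY_MS = 8         # hypothetical average response time of the slow synth

def pre_shift(events, delay_ms=MEASURED_DELAY_MS):
    """Move every event earlier by the instrument's average response time."""
    offset = round(delay_ms * TICKS_PER_MS)
    return [(max(0, tick - offset), note, vel) for tick, note, vel in events]

# A quantized kick pattern whose notes should now reach the ear on the beat.
track = [(0, 36, 100), (480, 36, 100), (960, 36, 100), (1440, 36, 100)]
print(pre_shift(track))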

Is this a problem? Vince Clarke believes that it is, and thus made an interesting decision for the latest Erasure album, Chorus: he abandoned MIDI. "I had felt, and people had also told me, that albums I'd done previously, maybe eight years ago, had a very different feel from the albums I was doing now. And I think one of the reasons for that change was the use of MIDI.

"If you were to build up like sixteen tracks of the same thing, doing the same beat on fours, the gaps between the sounds - the flamming - although minute, would be more and more as you build the tracks up. On previous albums, obviously we'd record using MIDI, everything in solo, and getting the timing as right as possible. But you can never get it exactly right because of the way that MIDI works... Basically, that means that in the last five years, no-one's actually made an album that's in time. I thought it would be nice to do one, you know?"

"So, the whole of the album was actually recorded off of the Roland MC-4 [a digitally controlled analog sequencer], because it is in time. First we arranged and pre-sequenced using the UMI sequencer for the BBC micro. Then, once all the arrangements and the sounds were set, we'd feed the MIDI information from the UMI sequencer into the Roland MPU-101 [a MIDI to CV/gate converter], and then take the CV and Gate from there into the MC-4 for playing to the tape. I don't think I'll ever go using MIDI again. Not for recording."

On the other hand, not all styles or tastes demand that every note be exactly quantized; roughness, whether intended or not, can form the basis of its own aesthetic. As Trent Reznor explains, "I'm not interested in making perfect, homogeneous groove beat records. I'd rather have that human element in there, even if it's created by a computer not functioning properly. Stuff that's not perfectly quantized, or accidentally shifted ahead or behind and then it's just out of sync...something like that. Although you don't shoot for that initially, sometimes you fall into realizing that, well, that really makes it stand out, you know? I was always a pro-drum machine kind of person, because you know, no hassles, no f--k-ups, perfect. But I start realizing, since I've worked with real drummers, that there's something cool about finding that groove, or being off time - how subtle things like that can really affect the way you listen to the music, so that subconsciously you start to pick up on how things fit together. So, I can simulate some of that electronically now, and just work with things that aren't perfect."

Dolby concurs. "There's one song on the new album [Astronauts & Heretics], called Cruel, which is very, very quiet. When I wrote the lyrics, it was late one night, and I was a bit worried about the neighbors. This was just after I'd bought Studio Vision, and so I recorded a rough vocal track right there, singing very softly. To save on hard disk space, I later stripped silence [a process which removes all the quiet space between audio events], and if you're not careful with the levels on that, the softer sounds can end up getting clipped. Months later, I spent about 4 days doing the real vocals (I thought) in the studio. And at the end of it, I thought, well I'll just double-check that there wasn't anything I did on that first night that was superior. And the whole of the first vocal was better than what I'd spent four days doing in the studio! So, I decided to try and use as much of it as I could - and it was very clipped, you know - it sounds like a vocal with a very heavy noise gate on it. But that was kind of part of the vibe of the whole thing. So, purists might raise their eyebrows when they hear it, because it does sound clipped, but it also has a very distinctive feeling to it."

Quite aside from the question of perfection, some tools (and musicians) go in the other direction, opting for the excitement of indeterminacy and the possibility of real-time interaction. One of the early ways to achieve this was with arpeggiators, and Jan Hammer is still finding new possibilities there. "I've been doing all kinds of things with the Oberheim Cyclone [a flexible, programmable arpeggiator module]. You throw something at it, and it throws something back at you. It's a very interesting, interactive thing - I like to be able to just play, and instinctively respond to the computer responding to you and so on, in a loop.

"I've been using it to work up all kinds of semi-classical parts, where I'll have a texture of some sort of a string orchestra, and then let's say do solo violin fills and runs that are being pulsed by the Cyclone, in all kinds of interesting polyrhythms. It's a lot of experimentation and trial and error, but when that sort of thing clicks, it's just magic. When a certain patch really works great, I might even record the output of the Cyclone into the sequencer as the final version - but I find it really interesting to just, every time the sequence plays, have it be something a little bit different. It's very exciting. So then you get into saying, well, was this the take? Let's try another - you'll be coaxing it, saying, "you can do it better - I know you've got that take in you."

Hammer, a Studio Vision user, also points out a unique possibility of that medium: grabbing only the middle of a recording take. "I may be an extremely sort of sub-mediocre guitar player, but I can basically play 6 or 8 measures of a rhythm part that sounds fairly decent. When I start playing I'll definitely sound very rough at the beginning, and then I'll get into some sort of a groove. So basically what I can do is to record into Studio Vision and just play along until it clicks, and then keep playing for a while, and then take that whole part where I was warming up and just throw it out, and go straight into the good part - which is something that would be basically impossible with tape. It's fantastic."

This trick is still completely valid with just MIDI tracks; if you find that you keep warming into a difficult part just as it's almost over with, make the section longer, copy the good part to the beginning, and then cut the extra measures out again.
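
If your sequencer treats a track as a plain list of events, the same edit boils down to dropping everything before the bar where the groove finally kicked in and sliding what's left back to bar one. A minimal sketch, again assuming (tick, note, velocity) tuples in 4/4 at 480 ppqn - the "good bar" is whatever your ears tell you it is:

TICKS_PER_BAR = 4 * 480       # one bar of 4/4 at 480 ppqn

def keep_the_good_take(events, good_bar):
    """Throw away the warm-up and slide the usable part back to bar one."""
    start = (good_bar - 1) * TICKS_PER_BAR
    return [(tick - start, note, vel) for tick, note, vel in events if tick >= start]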

New Ways to Create Rhythms

Industrial, rap, hip-hop, house, and other new music styles rely heavily on drum beats - or entire rhythm tracks - sampled from other recordings. While this can occasionally produce synergistic sonic collages, it more often has a tendency to degenerate into clichéd quotes. Sampling doesn't have to be a passive process, however; as Trent Reznor points out (and as his album Pretty Hate Machine proves), lifted samples can be transformed into something new and personal by processing them with effects or programs such as Digidesign's TurboSynth.

"I'd find, say, a drum loop, or a loop of percussion, or a music loop, or just anything. I'd TurboSynth it, or just distort it through the board, or eq everything out except one frequency, and experiment to see if I could get something that's kind of interesting but that now doesn't sound like what it originally was. Maybe tune it up or down, and then start it like on the second beat of the measure, and mix it in the background, and get something that you don't recognize as 'oh, that's a Public Enemy drum loop,' but which adds a kind of complex percussion part...

"One of the later things I did was "Get Down Make Love," the B-side of the "Sin" 12-inch. All of the loops on that particular track are the same sample, just run through different TurboSynth procedures and then started at different points in the measure. That got kind of a cool, clanky, chunky kind of a beat, and worked out quite well."

Of course, sequenced rhythms don't have to come from samples, or even from drums. Larry Fast, for instance, likes to create a rhythm out of timbral shifts on a synth patch. "For instance," he explains, "I'll use a pseudo-analog sound on the Yamaha SY-77, and set up a patch which allows me to access the filter over MIDI in real time, using continuous controllers. I can then create a rhythm by using the sequencer to modulate the filter in a sample-and-hold sort of effect [abruptly moving from one level to another], basically emulating what I used to do with a Moog modular system. Or, I can use a rhythmic Wave Sequence on the [Korg] Wavestation."

Even if a synth doesn't allow you to route a MIDI modulator to the filter, you can create timbral rhythms in a few different ways. For instance, you can design a patch in which velocity controls the filter cutoff very sensitively, but has little or no effect on amplitude. Then use this patch to record a rhythmic part, varying your velocity accents widely, for a simulated sample-and-hold effect.
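
For the curious, here's roughly what feeding such a part to a synth looks like from a script rather than from the keyboard - a minimal sketch using the Python mido library, in which the port name, the note, and the velocity pattern are all arbitrary stand-ins:

# One cycle of a 16th-note pulse at 120 bpm; the wide velocity swings become
# timbral jumps on a patch whose filter cutoff tracks velocity.
import time
import mido

velocities = [120, 40, 90, 30, 110, 50, 80, 35]
out = mido.open_output('My Synth')            # hypothetical port name

for vel in velocities:
    out.send(mido.Message('note_on', note=48, velocity=vel))
    time.sleep(0.125)                         # one 16th note at 120 bpm
    out.send(mido.Message('note_off', note=48))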

If your synth responds to MIDI System Exclusive in real time, and your sequencer can record, edit, and play back SysEx messages in the middle of a track, you can use this in place of bona-fide continuous controllers. It takes a little bit of hacking around in hexadecimal numbers, but the results can be worth the trouble. My Roland JX-8P, for instance, can send out System Exclusive messages whenever a parameter is edited; I simply call up the filter cutoff for editing, put my sequencer in record, and move the synth's data entry slider to generate a brief stream of SysEx. I can then pull out a few of these events, arrange them into a rhythm (e.g., every 16th note), and edit the cutoff value for each beat if necessary until the timbres create the desired pulse.
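
Once you've captured one of those parameter-change messages, the grid itself can also be built programmatically. The sketch below uses Python and mido to drop one message on every 16th note of a half bar; the data bytes shown are placeholders rather than the actual JX-8P format - in practice you'd reuse the bytes your own synth transmits when you move its slider:

import mido

def cutoff_sysex(value):
    """Hypothetical parameter-change message; substitute the bytes your synth actually sends."""
    return mido.Message('sysex', data=[0x41, 0x36, 0x00, 0x24, 0x20, 0x01, 0x21, value])

mid = mido.MidiFile(ticks_per_beat=480)
track = mido.MidiTrack()
mid.tracks.append(track)

# Eight 16th-note steps, each with its own cutoff value - edit these to taste.
for step, value in enumerate([96, 20, 64, 30, 110, 40, 80, 25]):
    track.append(cutoff_sysex(value).copy(time=0 if step == 0 else 120))   # 120 ticks = one 16th

track.append(mido.MetaMessage('end_of_track', time=0))
mid.save('cutoff_rhythm.mid')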

Working Within Limitations

Music, and musical instruments, have always presented musicians and composers with limitations of one sort or another. An instrument can only play so many notes at a time; chords and/or melodies don't fit together well; fingers can only stretch so far. We complain about them, but our attempts to work within or circumvent those limitations can get the creative juices flowing, as we're forced to find solutions which would ordinarily go undiscovered. They can also serve to provide a structure, keeping our possible choices down to a manageable number. Sometimes, it's even useful to create self-imposed limitations; witness Schoenberg's restrictive 12-tone system, intended to impose order on the staggering possibilities of atonality.

Vince Clarke took a big step along these lines with Erasure's latest album, Chorus: all of the instrumental tracks - including drums - were created solely with analog synthesizers. Moreover, all of the tracks were monophonic. "By not using chords," he notes, "it meant that you had to really think about the monophonic lines. You had to kind of orchestrate the piece, get more arrangement going on. By not having a wash of a chord sound, it's easier to make more interesting sounds, not have to go completely over the top to make the sounds stick out. Also, it gives the vocal more room, I think."

Usually, of course, you don't have the luxury of choosing your own limitations - you just run up against a problem, and need to find a way around it. The most common problem that I run into is the "not enough instruments" syndrome: I want to get one more timbre playing back, and all my available channels are already zinging away. One way to get around this is to create a keyboard split out of two or more of the sounds, so that more than one timbre can play back from a single MIDI channel. If the instrument allows you enough leeway in transposition, there's no reason that the lower part of a split needs to be in the bass range - you could just as well have a lead sound on the bottom, and a pad on the top.
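
In sequencer terms, setting this up is mostly a matter of transposing one of the parts into the zone where its sound now lives and merging the two onto a single channel. A minimal sketch of that bookkeeping, assuming (tick, note, velocity, duration) events and an arbitrary two-octave shift down into a hypothetical lower zone:

def merge_for_split(lead_events, pad_events, lead_transpose=-24):
    """Move the lead into the lower zone of the split (the synth zone is set to pull it back up), then merge it with the pad on one channel."""
    moved = [(tick, note + lead_transpose, vel, dur) for tick, note, vel, dur in lead_events]
    return sorted(moved + pad_events)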

Even with synths that are ostensibly monotimbral, you can create patches which sound very different at high and low velocities, using velocity to modulate filter cutoff (or modulator amplitude, on FM systems), attack time, and so on. With careful editing of velocity in the sequencer, you can use the same patch to simultaneously produce a pad and a percussive bass line, for instance, or mellow chords and a bright lead line.
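
As a batch edit, the idea is simply to push one register of the part down to a soft velocity and the other up to a hard one, and let the patch do the rest. A minimal sketch, assuming (tick, note, velocity, duration) events and an arbitrary split at middle C:

def separate_by_velocity(events, split_note=60, pad_vel=35, bass_vel=115):
    """Soft velocities above the split become the pad; hard velocities below it become the bright, percussive bass."""
    return [(tick, note, pad_vel if note >= split_note else bass_vel, dur)
            for tick, note, _vel, dur in events]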

Working Live with Sequencers (or not...?)

I remember seeing Howard Jones on what I believe was his first tour of the U.S., when live sequencing was still an exotic thing. His tour brochures made it clear that everything was being played "live," and not from tape; exactly how that was happening while he danced downstage, they didn't make clear. While Milli Vanilli-style taped vocals are definitely an embarrassment to the performing community, is there really any difference between playing something back from a tape or a sequencer? Trent Reznor suggests that there isn't, and that the benefits of tape even make it preferable.

"I'd recommend the way I approached the live set for any electronic band that's going on the road. When you're in the situation that we were in, when we started and we were basically nobody and we were opening for a variety of bands, you have to take into consideration that you might not have a soundcheck, you might not have the luxury of a couple of hours to plug in your amps and hook up a computer and keyboards make sure everything works - instead, you have about five minutes to get your shit on stage. So, the idea was a total low-tech system. I just dumped all of the bass and the loops and real fast sixteenth note parts down to a four-track cassette, a high-speed cassette with dbx; it sounded pretty good, and worked out all right, and then it came down to two things that can screw up: the tape can break, and the deck can mess up. This versus a computer, where any number of things can go wrong."

New Controllers

While MIDI was originally designed to carry human performance gestures on a controller, it can also be used to create gestures outside the capabilities of either the human player or the controller itself. Suzanne Ciani notes, "On my new album, Hotel Luna, I was using the new Roland RSS system [a spatial enhancer/"3-D Sound" processor]; I had a lot of fun using MIDI control of that particular item. On a piece like Rain, I could make movements discrete - you see, without MIDI, all you have are those rotation knobs on the RSS front panel, so it's very continuous, whereas if you go to MIDI you can jump all over the place, in ways that you can't turn knobs. So that was a lot of fun, that was great. It can be very dramatic and very specific; the more that you use these things, the more you want that control."

Another point is that the concept of the "controller" itself is getting increasingly fuzzy. Sure, we still have keyboards, wheels, and sliders; but we also have software such as Sybil and Opcode's Max which can transform simple human gestures into complex, multi-event streams, which can then be captured into a sequence and edited. Jan Hammer also points out that there are controllers lurking where we might not think to look. "I'll use anything for a controller," he laughs. "Sometimes, in Opcode's Galaxy, I'll use the little keyboard that's on the screen to play something, and record it as a MIDI file. Because…on that keyboard there's all kinds of things where you can control the repeat rate, or just gliss the cursor across the keys, and things like that, which you would not be able to play. And you can use performance gestures like that, record them into a sequencer, and then you can take them further, obviously. Or the same thing, say, with drum pads, mapping them out to some interesting note assignments, and then just going at it and playing it as drums, and you'll be amazed what comes out sometimes. It doesn't have to say 'controller' on it to work."

Go for It

That last thought is probably the key - it's the results that matter; nothing has to say anything on it, as long as it works. Practice with your sequencer, get to know its quirks and how to bend them to your needs, and eventually (or sooner) you, too, can acquire the "pearling execution" of the Virtuoso Sequencist.

Dan Phillips, a product specialist for Korg Research and Development, is presently brushing up on his quantization technique and transport control dexterity.