Ok, this is for the geeks and musicians out there. If you’re neither, this post may be of limited interest! (Ironically, I’ve got time to do this today because my main computer’s down–Darin is cloning the hard drive onto a 2nd machine so we have a backup for the road.)
My live show is running primarily in Logic Pro 7.2 on a Power Mac G5 dual 2.0GHz running OS X 10.4. I have a rack containing an uninterruptible power supply, a Firepod audio interface, a MOTU MIDI interface, and a Nord 3 rack synth. My three keyboard controllers are a CME 7, a Novation SL25 ReMOTE, and a Virus TI Polar (below). The Virus is the only one of the keyboards that actually generates its own sounds–it’s a new kind of hybrid, as it has its own plug-in/editor in Logic, so you get all those advantages without the CPU load or latency. The others don’t make sound; they’re just used as MIDI controllers.
I have an M-Audio TriggerFinger for samples, drum input, and switching tracks on and off. I also have various pieces of vintage field equipment, including 3 oscilloscopes and 3 signal generators (one below, a Simpson) that have been gutted and retrofitted for MIDI so that I can assign any knob or switch to any parameter in any softsynth.
Each song is a Logic ‘project’, and I pre-load all the songs before each show. I switch between songs from the Novation, where I’ve programmed 12 buttons to select each song by name. This makes it pretty easy to play the songs in any order, though in practice I don’t mess around much with the set list from gig to gig.
I run a custom MAX application called ZoneOut, which I designed with Peter Nyboer. This allows me to map the various keyboards into zones, so that different parts of the keyboard can output their own MIDI channels and therefore play different softsynths in Logic. This is something I probably could have done within Logic’s Environment, but the advantage of ZoneOut is that I have a visual representation of each keyboard and can draw in the zones I want, transpose by a given number of semitones, turn each zone’s sustain or pitchbend on and off, limit velocities, and so on. To do that in the Environment would be very laborious. I store each setup as a patch in ZoneOut, and Logic sends it a program change message. So each song might have 3 to 8 different mappings, according to what parts I need to play and where. All 3 keyboards are more or less interchangeable, except that for 2-handed piano-type parts I prefer the CME. For some parts where my lousy fingering is not up to it, I cheat by transposing the key or even moving notes around–so if you’re a keyboard player and you’re watching my fingers on the headcam, you might be a bit confused.
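For the real geeks: the zone logic boils down to something like the following Python sketch. This is just an illustration of the idea, not ZoneOut’s actual MAX code–the names and numbers are made up.

```python
# Illustrative sketch of keyboard "zones": each zone claims a range of
# keys and routes them to its own MIDI channel, with optional transpose
# and a velocity ceiling. Not ZoneOut's actual implementation.

from dataclasses import dataclass

@dataclass
class Zone:
    low: int            # lowest MIDI note number in the zone
    high: int           # highest MIDI note number in the zone
    channel: int        # MIDI channel the zone outputs on
    transpose: int = 0  # shift in semitones
    max_vel: int = 127  # velocity limit

def route_note(zones, note, velocity):
    """Return (channel, note, velocity) for every zone the key falls in."""
    out = []
    for z in zones:
        if z.low <= note <= z.high:
            out.append((z.channel, note + z.transpose, min(velocity, z.max_vel)))
    return out

# Example mapping: a bass synth on the bottom two octaves, and a pad on
# the rest of the keyboard, transposed up an octave and velocity-limited.
zones = [Zone(36, 59, channel=1),
         Zone(60, 96, channel=2, transpose=12, max_vel=100)]
```

Each stored ZoneOut patch is essentially one such list of zones per keyboard, recalled by the program change Logic sends.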
Most of the songs have some number of parts pre-sequenced, while I play a lot of the parts live over the top, still sending MIDI through Logic. Some of them I start off in ‘cycle record’ mode, so that I can play in each part until I have the groove built up, creating a kind of extended intro; then I pop it out of cycle mode using a button on one of my keyboards, and continue into the song. This method has two things going for it: (a) the audience knows they’re seeing something get built in real-time, I’m not just hitting ‘play’ on a tape machine (b) they hear each part in isolation, so they recognize it when it comes back in later in the song.
But it’s risky because it can all go horribly wrong. The first few times I attempted this I was terrified I’d trip up and it would all come to a grinding halt. A couple of nights at the beginning of ‘Hyperactive’, that’s exactly what happened, much to my embarrassment. And you know what? Afterwards several people said they thought I made it stop on purpose, because it was funny. Well, that loosened me up a lot, and since then I just adopt an attitude of que sera, sera. We’re all grownups. If I screw up and have to restart, you can wait a few seconds, then you’ll get to hear the intro again!
On ‘Airhead’ I go a stage further. I recently bought a plug-in called Stylus RMX. It’s basically a drum machine that plays REX loops. But it plays them in time with the host sequencer, like Ableton or Acid, except that when you trigger multiple loops with multiple keys on your keyboard or pads, it will wait until the next beat or 16th to trigger them. This means that you can load up a bunch of loops, and hold down any combination of keys to get different combinations. You don’t have to be very accurate, just hit the key a fraction ahead of the beat. So in ‘Airhead’ I just lay down a funky bass and guitar riff in cycle/record mode, then start jamming with loops over the top. The song can end up any length, I can sing the vocal or not, and add in, say, a piano solo over any groove texture that takes my fancy.
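If you’re curious what that beat-quantized triggering amounts to, here’s a tiny Python sketch of the principle–how a loop player can wait for the next 16th-note grid line before actually starting the loop you’ve keyed. This is my illustration of the behavior, not Spectrasonics code.

```python
# Illustrative sketch of beat-quantized loop triggering: a key press
# lands somewhere between grid lines, and the loop starts on the next
# 16th note, so you only need to hit the key a fraction ahead of the beat.

import math

def quantize_to_next_16th(press_pos_beats):
    """Given the beat position of a key press, return the beat position
    where the loop actually starts (the next 16th-note grid line)."""
    grid = 0.25  # a 16th note is a quarter of a beat
    return math.ceil(press_pos_beats / grid) * grid
```

So a key pressed at beat 3.8 starts its loop at beat 4.0, and several keys held down within the same 16th all land together on the grid–which is why sloppy timing still produces a tight groove.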
This technique would not be appropriate for many of my songs, which have complicated chord sequences, peculiar structures, and phrases with unusual numbers of measures. Most of my songs are not ‘chunk’-based, unlike a lot of modern dance/electronica music where everything is in neat 4s and 8s. That’s the main reason I work in Logic, which is a linear sequencer (left to right!), versus something like Ableton Live, which is great for DJs, remixers, and freestylers.
Some other favorite plug-ins: I think Logic’s native ones are pretty good, I especially like Sculpture, Space Designer, the Vocoder, and the ES2 synth. Most of my samples are in EXSP24. From third parties I use Arturia’s MiniMoog (below), RMIV drums, Slayer2 guitars, UltraFocus, and a nifty gating effect called Camelspace. I put the whole caboodle through the T-RackS mastering module (see above) for overall EQ, limiting, and warmth. I am looking forward to when all these are ported to Apple’s new Universal binary format, so I’ll be able to run the whole show on a MacBook laptop. Then I’ll have something to occupy all those hours on tour when I’m sitting around a hotel room or airport lounge.
Finally, my vocals: the EQ, effects, mix, and compression are really important to me, so they’re all programmed into the sequences. My Crown headset mic goes into an Art pre-amp thence via SPDIF into a Muse Receptor, which is basically a cage for plug-ins, so I don’t have to overload my CPU. I set up a patch for each section of a song–some songs have as many as 7 or 8 patches. Logic sends the Muse a program change. The mic feed is split back into Logic’s EVOC vocoder for ‘live’ backing vocals.
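For the technically minded, those patch switches are plain MIDI Program Change messages–two bytes on the wire. Here’s a small Python sketch of what the sequencer sends the Receptor at each section boundary (the helper and the section numbering are my own invention, just to illustrate the message format).

```python
# Illustrative sketch: building the raw MIDI bytes for a Program Change.
# Per the MIDI spec, the status byte is 0xC0 plus the channel (0-15),
# followed by one data byte: the patch number (0-127).

def program_change_bytes(channel, patch):
    """Return the two raw MIDI bytes for a Program Change message."""
    assert 0 <= channel <= 15 and 0 <= patch <= 127
    return bytes([0xC0 | channel, patch])

# Hypothetical example: one vocal-effects patch per song section.
sections = {"verse": 0, "chorus": 1, "bridge": 2}
msg = program_change_bytes(0, sections["chorus"])  # sent at the chorus
```

A song with 7 or 8 patches is just 7 or 8 of these messages placed at the right bars in the sequence.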
I use in-ear monitors (earphones) made by Sensaphonics, a company that has a network of audiologists around the country that squirt hot silicone into your ears to take a precise mold. They block out 90% of all sound, though at a gig I still feel the vibration. That means I can have my own cue mix, including dry vocals, click track to count in to a song, etc. The only downside is that the audience is now completely silent to me–so I have an ambient mic pointing at them and I mix a little bit in. (Occasionally I hear private conversations in graphic detail so watch what you say!) I have a little Shure mixer in my rack for the cue mix elements and ambient mic.
I have never calculated the cost of all this gear. If someone is feeling industrious, please add it up and post it here. I’ll tell you what though, it’s a lot of kit for the money when you consider my first Fairlight cost $120,000 in 1982 and did a hell of a lot less.
That’s probably as much detail as you need unless you’re some kind of FREAK