Basics of Signal Processors

What Is a Signal Processor?

Natural sound filters are everywhere. Your mouth and tongue modify the sound of your vocal cords to produce recognizable vowels. A concert hall alters and enhances the raw sounds coming from the instruments in a symphony orchestra. Even the racket coming from your neighbor's stereo, if you live in an apartment, sounds very different after it has passed through the wall. Digital signal processing (DSP) lets you emulate those natural effects, along with others that originated in classic analog synthesizers, and create newly imagined digital sonorities.

On the DVD: The included demo copy of Ableton Live will enable you to try the hands-on effects routing and audio processing examples in this chapter.


Anything that takes a sound input, modifies it, and returns a sound output is considered a signal processor. Many types of signal processors are available, and all of them impact one or more of the following basic elements of sound:

Processors and effects versus filters: In common usage, the term "effect" is often used instead of "signal processor" to refer to any process that takes a signal (input) and changes it in some way (output). The term "filter" can be theoretically defined in exactly the same way, as any process with an input and output. However, for the purposes of this book (and in common usage), "filter" refers to a device that boosts or reduces certain frequencies in a signal (see "Equalization," p. 212).


  • Frequency: The pitch and harmonic spectrum of a sound. Examples of effects that work in the frequency domain are equalizers and lowpass filters.

  • Dynamics: The changes in amplitude (loudness) of a sound, from soft to loud. Examples of effects that work in the amplitude domain include compressors, limiters, expanders, and gates.

  • Time: When a sound begins in relation to the rest of the music, and how often (and in what manner) it's repeated. Examples of effects that work in the time domain include reverb and delay lines.

Digital Signal Processing

Many processors we commonly use today are based on the designs of earlier analog equipment. The way they work, and even the labels for knobs and faders, come from a tradition of audio gear. That's a good thing: with the software plug-ins included in your audio software, you can use the same techniques heard in your favorite recordings of the past few decades. Thanks to Digital Signal Processing (DSP), musicians today have more control over traditional effects, and can also create new kinds of sounds that weren't possible before.

Ten Signal Processing Ideas

Before we get into the nitty-gritty details of how signal processors work, here are some common tasks for which you might use them, along with the page where each is described:

  • Reverse a track for a special effect (p. 203)

  • Use an equalizer to bring out the high frequencies of recorded cymbals (p. 212)

  • Use a DJ-style filter to sweep through a groove (p. 222)

  • Run an electric guitar through a wah effect (p. 236) and amp simulator (p. 248)

  • Use a compressor to add punch to a drum loop or vocals (p. 228)

  • Add a classic phaser effect to an electric piano (p. 253)

  • Give your track a sense of space by using a reverb (p. 242) on an effects send (p. 206)

  • Make your track sound "retro" or digital-gritty using digital distortion (p. 247)

  • Create a far-out spectral effect using an FFT-based processor (p. 246)

  • Tune your guitar (p. 263)


Most DSP is performed in real time using specialized processors or your computer's CPU. Other effects cannot be performed in real time. Instead, they require you to process audio first, modifying the actual audio file, before you can hear the results of the effect.

Destructive vs. Nondestructive Editing

Non-realtime effects are usually applied as destructive edits; that is, they modify the actual waveform itself, so that your audio file now irrevocably incorporates the added effect. That doesn't necessarily mean you can't change your mind later: many audio editors have multiple levels of undo, while others will make copies of the file each time you make a change, either by default or as an option. Real-time effects are by nature non-destructive: the audio file is unchanged; effects are simply added to the audio signal, not the waveform, during playback.

Destructive edits have some advantages. You can apply an effect or other edit to an audio file, then move the audio file to another application that doesn't have the effect, yet still hear it with the effect. Destructive edits don't require processing power after the edit is performed, since the effect isn't operating in real time. (Some newer waveform editors use non-destructive edits that can be saved destructively in the finished file; see Figure 7.1.)

Figure 7.1. Apple's Soundtrack Pro is an example of a waveform editor that applies effects and edits entirely in real time, non-destructively. By storing each effect as an "action layer," you can use Soundtrack to adjust effects, reorder effects processing, and make other changes as you go.


Destructive audio file edits can also be useful when batch-processing audio (Figure 7.2). For instance, if you wanted to perform noise removal and filtering on a whole batch of sound recorded for a video, you might set up basic settings for all the audio, batch-process them in an audio editor like Apple Soundtrack Pro or Adobe Audition, and then bring them into your DAW for later editing.

Figure 7.2. A waveform editor with batch processing capabilities can apply a string of effects destructively to a large group of files, an essential timesaver when you have a lot of audio to clean up, process, and convert. Steinberg's WaveLab editor is employed here for pre-processing a group of files.

Delay Compensation

Some effects appear to be functioning in real time, but introduce a slight delay between their input and output. This includes plug-ins with "look-ahead" features that fill a memory buffer with audio prior to processing, and hardware DSP systems like the Universal Audio UAD-1, TC Electronic PowerCore, and Digidesign hardware. If you're using a DAW with automatic delay compensation (sometimes called plug-in latency compensation), your software will automatically compensate for the slight delay introduced by feeding audio slightly early to plug-ins that require it. The compensation feature works invisibly in hosts that support it, requiring no interaction on your part. Delay compensation works only with recorded audio, not live inputs; because recorded tracks are stored on the hard disk, it's possible to feed the audio to the effect early.

Digidesign's DAE audio drivers include built-in delay compensation for use with TDM plug-ins. With other hosts and hardware DSP you'll have to check your DAW's specifications, though recent versions of Logic Pro, Digital Performer, SONAR, and Cubase all support this feature. If you're working with a DAW or plug-in that doesn't support automatic delay compensation, you may need to shift a track slightly in time when using such an effect in order to keep the effect's output in sync with other tracks. On the other hand, if you don't use plug-ins that require it, you may never need to worry about delay compensation.
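To make the bookkeeping concrete, here is a minimal sketch of the idea behind delay compensation, assuming each track's plug-in chain reports its latency in samples. The function name and the reporting mechanism are hypothetical, not any particular DAW's API; a real host typically feeds the latent track early rather than delaying the others, but the arithmetic is the same.

    import numpy as np

    def compensate_latency(tracks, latencies):
        """Align recorded tracks whose plug-in chains report different latencies.

        tracks:    list of 1-D float arrays (recorded audio, equal lengths)
        latencies: list of ints, the delay (in samples) each track's chain introduces
        """
        max_latency = max(latencies)
        aligned = []
        for audio, latency in zip(tracks, latencies):
            pad = max_latency - latency                  # lower-latency tracks wait longer
            delayed = np.concatenate([np.zeros(pad), audio])
            aligned.append(delayed)
        return aligned                                   # all outputs now start in sync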


Using Non-realtime Effects

You'll usually find effects that are applied to a sound file in non-realtime in waveform editors, either in the audio editor included in your DAW or in an independent waveform editor like Audacity (on the DVD). Usually, you'll simply select either all of an audio waveform or the part of the waveform to which you want to apply the effect, then use a menu command and wait as your software applies the effect permanently to your audio file.

Fades

A fade gradually ramps the loudness (amplitude) of a sound, either up from zero at its beginning (fade-in) or down to zero at its end (fade-out), using a straight line or curved shape. Cross-fades gradually fade out the amplitude of one sound file while fading in the amplitude of another, so that the resulting mix shifts from one sound to another. (For a comparison, see Figure 7.3.)

Figure 7.3. Three fades: a destructive fade-in, as applied in a waveform editor (top); a non-destructive real-time fade-in, as applied with the fade tool in a DAW (middle); and a cross-fade between two sound files (bottom).


Can it be done in real time? While you can use automation to create fades by adjusting the volume level of a track's fader (see Chapter 10 for information on automation), fades can usefully be applied as destructive edits. In addition to automated fades using track faders, most DAWs have the ability to apply non-destructive fades and cross-fades to audio via a fade tool or by automatically cross-fading sounds that are dragged on top of one another. The advantage of non-destructive fades is that they're easier to adjust after the fact, though you may prefer to use destructive fades for fine-tuned effects and batch processing.
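As a rough illustration of what a fade tool does to the samples, here is a minimal sketch of a linear fade-in and an equal-length cross-fade on mono floating-point sample arrays (assuming NumPy). Real editors also offer curved, equal-power fade shapes.

    import numpy as np

    def fade_in(audio, fade_samples):
        """Linear fade-in: ramp the first fade_samples from silence to full level."""
        out = audio.copy()
        ramp = np.linspace(0.0, 1.0, fade_samples)
        out[:fade_samples] *= ramp
        return out

    def crossfade(a, b, fade_samples):
        """Linear cross-fade: a fades out while b fades in over the overlapping region."""
        ramp = np.linspace(0.0, 1.0, fade_samples)
        overlap = a[-fade_samples:] * (1.0 - ramp) + b[:fade_samples] * ramp
        return np.concatenate([a[:-fade_samples], overlap, b[fade_samples:]])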

Speed, Tempo, and Pitch

Changing the speed impacts audio just as it would on a tape player or record player, modifying both pitch and tempo simultaneously. (In other words, if you speed up audio, you'll get the classic higher-pitched "chipmunk" effect; slow it down and you'll get a slower but lower-pitched "Darth Vader" sound.)

Thanks to digital processing, you can also modify pitch independently of tempo, or vice versa. Unfortunately, this process tends to introduce unpleasant sound artifacts, especially when you make more drastic changes (double-digit metronome-marking tempo changes or pitch shifts greater than a couple of semitones). Different sound material will behave differently, some plug-ins and settings will help compensate, and audio recorded at a higher sample rate can be processed more accurately, but you'll still want to use this process judiciously.

Can it be done in real time? Speed, tempo, and pitch shifters are all available in real-time versions, though they often make compromises in audio quality in order to be more processor-efficient.
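The coupling of pitch and tempo is easy to see at the sample level. Below is a minimal varispeed sketch (assuming NumPy): it simply resamples the audio, so playing the result back at the original sample rate changes pitch and duration together. Independent pitch-shifting and time-stretching require more elaborate techniques (granular or phase-vocoder processing) that aren't shown here.

    import numpy as np

    def change_speed(audio, factor):
        """Varispeed: play the samples back faster or slower, like a tape machine.

        factor > 1.0 speeds up (higher pitch, shorter duration);
        factor < 1.0 slows down (lower pitch, longer duration).
        Pitch and tempo change together because we are simply resampling.
        """
        old_positions = np.arange(len(audio))
        new_positions = np.arange(0, len(audio) - 1, factor)      # where we read from
        return np.interp(new_positions, old_positions, audio)      # linear interpolation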

Normalization

Normalizing amplifies an entire sound file (or a selected segment) so that its loudest peak reaches a specified amplitude level. The process analyzes the audio for the highest signal peak (the loudest sample in the audio), then amplifies the entire file so that peak sits at 100% signal level (0 dB) or some other level you determine.
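In code, normalization is a two-pass operation: an analysis pass to find the peak, then a single gain change applied to the whole file. A minimal sketch, assuming NumPy and floating-point samples between -1.0 and 1.0:

    import numpy as np

    def normalize(audio, target_db=0.0):
        """Scale an entire file so its loudest peak sits at the target level.

        target_db is relative to full scale (0 dB = maximum digital level).
        """
        peak = np.max(np.abs(audio))             # analysis pass: find the loudest sample
        if peak == 0:
            return audio                         # silence: nothing to scale
        target_linear = 10 ** (target_db / 20)   # convert dB to a linear multiplier
        return audio * (target_linear / peak)    # one gain change for the whole file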

While mixing, of course, one can simply turn up the fader of a channel for the same result (a louder sound), so normalization might seem unnecessary. It's useful, though, for matching different sound levels to be used in the same track prior to mixing, at least as a rough approximation. Usually some additional adjustment is necessary while mixing. Some sound engineers also normalize files in preparation for outputting to a CD or other medium, but this should be considered only a quick-and-dirty way of mastering and should be applied to the entire set of tracks, not individual tracks. (Applying normalization to individual tracks prior to mastering can actually interfere with the mastering process because it's a fairly inaccurate way of balancing the overall perceived loudness of audio; subjective adjustment with your ears is often preferable.) For more control over the dynamics of a sound, you'll need dynamics processors like compression and limiting; see p. 228.

Can it be done in real time? Normalization doesn't work in real time, since it requires an analysis of an entire audio file (see Figure 7.4).

Figure 7.4. Normalization increases the overall volume of a sound file so the loudest peak is amplified to a specified amplitude level (here -3 dB; sometimes 0 dB).


Highest peak before normalizing.

Highest peak after normalizing; amplified to -3 dB.

Reverse

This effect reverses an entire section of audio so that it plays backward, from the end to the beginning. Because we're used to hearing the way real-world sounds decay, the "sucking" effect of reversed audio decays can sound very strange (in a good way, if you like the sound). For instance, a picked guitar note, when reversed, will begin quiet (the end of the decay), gradually get louder, and then end with the attack of the pick. Reverse can be used as a special effect, or more subtly in small doses to add punch to the entrance of a note.
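At the sample level, reversing is the simplest effect of all. A minimal sketch (assuming NumPy), with a synthesized plucked-style note to show why the attack ends up at the end:

    import numpy as np

    def reverse(audio):
        """Play a (mono) clip backward: the decay now leads into the attack."""
        return audio[::-1]

    # Example: a short percussive note with a sharp attack and a long decay
    sr = 44100
    t = np.linspace(0, 1.0, sr, endpoint=False)
    note = np.sin(2 * np.pi * 220 * t) * np.exp(-5 * t)   # plucked-string-like envelope
    backward = reverse(note)   # starts quiet, swells, and ends on the attack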

Can it be done in real time? Reversing is impossible in real time with live audio input (though pre-recorded audio can obviously be played backward in real time). However, some plug-ins provide near real-time operation for short segments of audio by using a memory buffer to store live input for reversing. For instance, ndc Plugs' Tempo Sync Reverser (Windows/Mac VST plug-in) reverses a specified time segment of your live audio input, triggered by MIDI. Real-time reversing of buffer-recorded audio is especially useful in conjunction with a delay, as with Expert Sleepers' free Mac-only Meringue plug-in (http://www.collective.co.uk/expertsleepers/meringue.html).

Signal Routing with Real-Time Processors

Processors, by definition, can't function without an input, so you must first hook up the signal processor to a sound source. You'll likely use processors during one or more of the following stages:

  • Before and during recording: Sometimes effects are added directly to the sound source. For instance, you may be recording a guitar whose sound is being modified by stompbox effects pedals, or a hardware synth that has built-in effects that enhance its sound programs.

    Non-realtime Processing in Audacity

    Try basic processing for free using Audacity; select the part of a waveform you want to process and explore the options on the Effects menu. To fade the sound from zero at the beginning of the file up to the sound's full amplitude at the cursor position, click your cursor in the waveform where you want the fade to end, select Edit > Select... > Start to Cursor, and Effect > Fade In. To fade from a point of the sound down to zero at the end, click where you want the fade-out to begin, select Edit > Select... > Cursor to End, and Effect > Fade Out. For other effects that you would normally apply to the whole file, select Edit > Select All to modify the entire file at once, and try options like Effect > Change Pitch, Change Speed, and Change Tempo, Effect > Normalize, and Effect > Reverse.


  • After recording: When a track is recorded "dry" (without effects), effects can be used on playback to modify the sound.

  • On the final mix: Once the multitrack recording process is complete, mastering effects can be used (either by inserting them on your DAW's master output bus or by using specialized mastering software on the stereo mix after it has been created) to modify the sound of the entire mix. (See Chapter 10.)

The easiest way to work with signal processors, and the place you're likely to use them most often, is at the channel level. You'll access these processors via a user interface object called a channel strip.

What's a channel?

A channel is any path of audio from input (which could be either a live input or an input from a file previously recorded to the hard drive) to output. On a mixer, for instance, each fader is associated with a separate channel, and allows you to control the amount of signal that passes from the input to the mixer's master fader. An eight-channel mixer is one with eight input/output paths for signal, a little like an eight-lane automobile highway. Those eight outputs can be combined at the master fader so you can hear and mix them as one stereo mix; hence the term mixer.

What a channel strip is: The heart of any multichannel mixer; vertically groups settings and processing for a channel

How to use it: Set up inputs and outputs, then add effects (via inserts/sends)

When you'll use it: Any time you want to set up an input/output or software instrument for use, for setting individual channel level and pan, and for adding effects


For the purposes of applying real-time effects, we'll be working with channels, because by definition signal is routed into and out of an effect via a channel. Having channels is essential to organizing which effects will be applied to which signals: for instance, the signal on one channel might be a vocal, on another it might be a guitar solo. (Channels should not be confused with tracks: a track is a location in software or another storage medium into which signals can be recorded and on which sound files can be positioned for playback. For more on tracks, see Chapter 10.)

Channel strip

Traditional hardware mixers make it easy to evaluate and adjust settings for each channel of your mix, because they align faders and knobs visually in vertical columns. In one way or another, nearly all software that works with multiple audio channels has adopted this basic model, which is called a channel strip (Figure 7.5). At its heart is the simplest of all signal processors, the level fader, which does nothing other than change the level (gain) of the signal fed through it.

Figure 7.5. Channel strips group signal processors, routing, and level adjustments (as shown from left to right in Apple Logic Pro, Cakewalk SONAR, Digidesign Pro Tools, MOTU Digital Performer, and Ableton Live).

Callouts: input and output signal routings; pan; level; solo/mute/record buttons; effects inserts; effects sends.

Gain is the level (or strength) of a signal. In computer software or audio equipment, gain is the technically correct term for adjusting signal strength, not "volume," since the signal doesn't become actual sound until it reaches speakers or headphones.
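Under the hood, a fader setting in decibels is just a linear multiplier applied to the samples. A minimal sketch (assuming NumPy; the noise signal is a stand-in for one channel of audio, not real program material):

    import numpy as np

    def db_to_gain(db):
        """Convert a fader setting in decibels to a linear gain multiplier."""
        return 10 ** (db / 20)

    signal = np.random.randn(44100) * 0.1    # stand-in for one channel of audio
    quieter = signal * db_to_gain(-6.0)      # roughly halves the amplitude
    louder  = signal * db_to_gain(+6.0)      # roughly doubles it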


Channel strips also show effects routing via two ways of adding signal processors: inserts (for modifying sound on a channel) and sends (which can be used to route sound from multiple channels to a single effect).

Inserts

An insert is so named because it is "inserted" into the signal path. The entire audio signal passing through the channel strip is routed into the insert, is modified in some way, and is then passed on to the channel's fader and output bus. In other words, you use an insert to modify sound before it reaches the fader (Figure 7.6). With multiple inserted effects, a signal can be fed through multiple processors in sequence.

Figure 7.6. Signal starts at the electric piano source (1), is routed through two inserts (2, 3), and then arrives at the fader (4), as shown in Apple Logic Pro.

Inserts are most useful with processors that need to be applied to a specific channel in order to shape that channel's whole sound, such as filters, chorusing, and delay.

Wet/dry balance: Some effects let you choose how much of the dry (original source) sound you use versus the amount of wet (modified) sound you use; the wet/dry blend is routed to the effect's output.


Sends/returns and buses

Inserts process an entire signal at once, which is comparable to holding a colored filter up to the lens of a camera. If you want to process only a portion of a signal, so that you hear the original source on its own mixer fader while a processed sound is heard on a separate fader, you'll need to use a send for the effect rather than an insert. A send, short for "auxiliary send," routes the signal to an additional output, called a return. (This scheme is commonly called busing, and the send/return configuration a bus.) Both your original channel and the return are routed to the final mix, effectively allowing your signal to be in two places at once: the signal is present on the original channel, with any inserted effects you've added there, and is also routed, at a specified level (the send level), to a return channel (also called an auxiliary channel), where you can apply other effects. Since more than one channel can be fed to a send/return, the return channel can mix as many channels as you'd like, with the balance of each set by the send level on each channel feeding it (Figure 7.7).

Figure 7.7. Sends allow you to route multiple channels to the same effect(s) (on an effects bus), controlling the level of each signal being sent to the bus. Here, signals from a keyboard and drum machine are both fed to send bus 1, but the sends on the two channels are set to different levels (1, 2). A reverb is inserted on the send bus (3), and the output of the send bus is returned to the main output bus (4).

Using the return channel, you can process a signal from multiple channels through a single effect. One reason to do this is to give the impression that the sounds in the various channels coexist in a single acoustic "space." You might, for instance, want to make several instruments in a band sound as if they're playing in the same room. By routing portions of their sound to a send/return with a reverb on it, you could add the sound of the reflections of a virtual room to all the instruments simultaneously. Here's how to route a send/return (a minimal signal-flow sketch in code follows these steps):

  1. Send signal from one channel or multiple channels to the send bus; a send level knob or the equivalent will let you determine how much signal you want to process from each of the individual channels. You may be able to choose a pre- or post-fader send to determine whether your signal is tapped before or after the channel's fader and panpot. If you want the amount of signal going to the send to drop when the channel fader goes down, a post-fader send is what you need. But if you want to be able to lower the fader without affecting the send level, use a pre-fader send.

  2. Add one or more signal processors to the send bus, which is where the actual processing occurs. Unlike an insert effect, effects on a send bus are usually set to 100% wet, because the dry sound is being heard through the individual channel(s).

  3. After processing, the output of the signal processor feeds a return that (you guessed it!) returns signal to the main mix. You then have separate level controls for the dry signal (the source fader or faders) and the wet signal (the return fader), so you can adjust the mix however you like.
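Here is that minimal signal-flow sketch of the three steps (assuming NumPy; the reverb function passed in is hypothetical and stands for whatever 100%-wet effect sits on the bus):

    import numpy as np

    def mix_with_send(channels, send_levels, reverb, return_level=1.0):
        """Route several dry channels to one shared effect via a send bus.

        channels:     list of equal-length float arrays (the dry signals)
        send_levels:  how much of each channel feeds the bus (0.0 to 1.0)
        reverb:       hypothetical function taking the summed bus, returning a wet signal
        return_level: the return fader for the wet signal
        """
        dry_mix = sum(channels)                           # dry channels go straight to the mix
        bus = sum(level * ch for level, ch in zip(send_levels, channels))   # send bus sums the taps
        wet = reverb(bus) * return_level                  # one effect instance processes everything
        return dry_mix + wet                              # dry and wet balanced by separate faders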

What's the difference between applying a single effect via a send to multiple channels and simply adding that effect as an insert on each channel? Each time you add an insert effect, you're creating a separate instance of that effect. For example, if you apply a reverb plug-in called "Reverb-o-rama" as an insert to channel 1, and then add Reverb-o-rama inserts to channels 2 and 3, you have three instances, or copies, of that plug-in. (Adding an effect this way is also called "instantiating" it, because you're creating an instance of it.) Each reverb has individual parameters, and you'll use roughly three times the number of CPU cycles as with only one Reverb-o-rama. That's fine if you've got CPU power to spare and want completely different settings for each channel. More often, though, both for convenience and to conserve CPU cycles, you'll route channels 1 through 3 to a send, add Reverb-o-rama to that send bus, and then adjust the balance of wet reverberation to dry signal by adjusting the level fader on the return for that bus. (See Figure 7.8 for a diagram of this imagined scenario.)

Figure 7.8. Applying the same plug-in effect to multiple inserts creates multiple instances of that plug-in, as shown here with our imaginary "Reverb-o-rama" plug-in (top). Each will have separate controls, and will require additional CPU cycles. By applying a reverb to a send (bottom), you can apply reverb to all the tracks with just one instance of the same plug-in (though you'll lose individual control over plug-in parameters for those tracks).


Choosing between inserts and sends: If you're applying individual effects to individual channels, you'll probably use an insert. The main reason you'd want to use sends is the ability to apply a single effect to multiple channels at once, for creating a single acoustic "space" in which those channels are mixed together for processing (as with a reverb), or simply to save the extra CPU cycles required to add more effects to more channels. You can also use sends to adjust the wet/dry balance on effects that lack a wet/dry setting (not all software plug-ins have such a setting). For this application, you'd use the source channel as the "dry" signal and the send for the effect, adjusting the balance of the wet signal via the return's volume fader.


Side-chaining

Side-chaining lets you route audio directly into an effect from a second audio source, such as another channel, bus, effect, or instrument. Side-chaining offers additional capabilities, such as using the input from another channel as a modulator to generate rhythmic effects. Here are some possibilities, employing effects described later in the chapter:

  • Create a selective delay: Side-chain the input of a gate to a drum pattern, then send the output of the gate to a delay, so only the heaviest hits on the drum pattern are delayed.

  • Shape a track rhythmically: Side-chain a drum track (or other rhythmic source) to a noise gate on another track, such as a continuous pad you want to "break up," giving it the same rhythm as the drums, or a bass part or other instrument you want to tighten. The source will feed the gate, so that you hear the second track only when the amplitude of the first track crosses a threshold (as it does with each loud drum hit, for instance).

  • Make music "duck" under spoken word: A technique commonly used in broadcast radio and similar applications is to automatically reduce the volume of music as an announcer speaks. This is called "ducking." To produce this effect, place a compressor on the music track and route the voice-over to the compressor's side-chain input, leaving the compressor's output gain at 0 dB (or turning off the "make-up gain" feature). This will quiet the music each time the announcer speaks. (A minimal code sketch of this routing appears at the end of this section.)

Sometimes side-chain routings are labeled as such, while at other times you'll simply have an option for an external input, or a modulation labeled "via" that side-chains internal modules of the plug-in. Not all DAW software or effects allow side-chaining.
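As promised above, here is a minimal sketch of the ducking routing (assuming NumPy): the voice track drives the gain applied to the music but never enters the music's audio path. A real compressor would smooth the gain changes with attack and release times; this version simply switches between two gain values, and all of its parameter names are illustrative.

    import numpy as np

    def duck(music, voice, threshold=0.05, reduction_db=-12.0, window=2048):
        """Side-chain ducking: turn the music down whenever the announcer speaks.

        music, voice: equal-length 1-D float arrays; voice is the side-chain input.
        """
        # Rough amplitude envelope of the voice, one value per sample
        envelope = np.convolve(np.abs(voice), np.ones(window) / window, mode="same")
        # Reduce the music's gain wherever the voice envelope crosses the threshold
        gain = np.where(envelope > threshold, 10 ** (reduction_db / 20), 1.0)
        return music * gain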

Hands-on: Add Effects in Ableton Live


Insert Effects
  1. Pick an effect: For built-in Ableton effects and instruments, select the Live Device Browser (1). For non-Ableton VST/AU/DX plug-ins, choose the Plug-In Device Browser (2). Here, we've selected the built-in Chorus effect, which you might want to add to a keyboard or guitar part.

  2. Add the effect: Select the track you want, and drag your chosen effect to the Track View. (If you don't see a window that shows "Drop Audio Effects Here," just double-click the track title.) You can route effects in sequence by adding more effects; they're routed left to right.

  3. Turn it on/off: Turn effects on and off using the "power button" in the upper-left corner of each effect.

Send Effects
  1. Set your send: Make sure your sends/returns are visible (View > Sends and View > Returns). Pick an audio track, and adjust the send knob to determine how much of your signal to send.

  2. Add the effect: Drag your effect from the Device Browser (or drag it with settings intact from another track), and drop it on a Return track. (Here we've selected Live's built-in Reverb, a likely candidate for use as a send effect.)

    Use third-party plug-ins: To make sure Live finds your third-party AU, VST, and DX plug-ins, double-check your preferences. Use Preferences > Plug-In > Active Sources to turn on/off different formats and set a default VST folder. (On the Mac choose Live > Preferences and on Windows choose Edit > Preferences.)


  3. Choose pre/post: By default, Live's sends occur post-fader, so the level and panning of the send signal will be determined by the volume and pan settings of the channel fader(s). If you'd rather set volume and pan solely with your return, choose Pre. Either way, you can adjust volume and pan on your return track just as you would with any other Live track.

  4. Add more returns: If the default two sends/returns aren't enough for you, use Insert > Insert Return Track to add as many as you'd like; just remember not to overload your computer's processor!


