Will Pirkle — Designing Audio Effect Plug-Ins in C++ With Digital Audio Signal Processing Theory — 2012


Designing Audio Effect Plug-Ins in C++
With Digital Audio Signal Processing Theory
Will Pirkle
First published 2013
by Focal Press
70 Blanchard Road, Suite 402, Burlington, MA 01803
Simultaneously published in the UK
by Focal Press
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
Focal Press is an imprint of the Taylor and Francis Group, an Informa business
© 2013 Taylor and Francis
The right of Will Pirkle to be identified as author of this work has been asserted by him/her in accordance with
sections 77 and 78 of the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any
electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording,
or in any information storage or retrieval system, without permission in writing from the publishers.
Dedicated to
my father and mother
J.V. Pirkle
Introduction .....................................................................................................xvii
Chapter 1: Digital Audio Signal Processing Principles ..............................................1
1.1 Acquisition of Samples ..........................................................................................1
1.2 Reconstruction of the Signal ..................................................................................3
1.3 Signal Processing Systems ....................................................................................4
1.4 Synchronization and Interrupts ..............................................................................5
1.5 Signal Processing Flow ..........................................................................................6
1.6 Numerical Representation of Audio Data ..............................................................7
1.7 Using Floating-Point Data .....................................................................................9
1.8 Basic DSP Test Signals ........................................................................................10
1.8.1 DC and Step ...................................................................................................10
1.8.2 Nyquist ...........................................................................................................11
1.8.3 ½ Nyquist .......................................................................................................11
1.8.4 ¼ Nyquist .......................................................................................................12
1.8.5 Impulse ...........................................................................................................12
1.9 Signal Processing Algorithms ..............................................................................13
1.10 Bookkeeping ........................................................................................................13
1.11 The One-Sample Delay ........................................................................................15
1.12 Multiplication ......................................................................................................16
1.13 Addition and Subtraction .....................................................................................17
1.14 Algorithm Examples and the Difference Equation ..............................................18
1.15 Gain, Attenuation, and Phase Inversion ...............................................................18
1.16 Practical Mixing Algorithm .................................................................................19
Bibliography ...............................................................................................................20
Chapter 2: Anatomy of a Plug-In .........................................................................21
2.1 Static and Dynamic Linking ................................................................................21
2.2 Virtual Address Space and DLL Access ..............................................................22
2.3 C and C++ Style DLLs ........................................................................................24
2.4 Maintaining the User Interface ............................................................................25
2.5 The Applications Programming Interface ...........................................................27
2.6 Typical Required API Functions ..........................................................................29
2.7 The RackAFX Philosophy and API .....................................................................31
2.7.1 __stdcall .........................................................................................................31
Bibliography ...............................................................................................................34

Chapter 3: Writing Plug-Ins with RackAFX ..........................................................35

3.1 Building the DLL .................................................................................................35
3.2 Creation ...............................................................................................................36
3.3 The GUI ...............................................................................................................36
3.4 Processing Audio .................................................................................................37
3.5 Destruction ...........................................................................................................38
3.6 Your First Plug-Ins ...............................................................................................38
3.6.1 Project: Yourplugin .........................................................................................39
3.6.2 Yourplugin GUI ..............................................................................................39
3.6.3 Yourplugin.h File ............................................................................................39
3.6.4 Yourplugin.cpp File ........................................................................................40
3.6.5 Building and Testing ......................................................................................40

4.4.1 Project FeedBackFilter ...................................................................................89
4.4.2 FeedBackFilter GUI .......................................................................................89
4.4.3 FeedBackFilter.h File .....................................................................................89
4.4.4 FeedBackFilter.cpp File .................................................................................90
4.5 Observations ........................................................................................................94
4.5.1 General ...........................................................................................................94
4.5.2 Feed-Forward Filters ......................................................................................95
4.5.3 Feed-Back Filters ...........................................................................................95
Bibliography ...............................................................................................................95
Chapter 5: Basic DSP Theory .............................................................................97
5.1 The Complex Sinusoid.........................................................................................97
5.2 Complex Math Review ......................................................................................100

5.3 Time Delay as a Math Operator .........................................................................102
5.4 First-Order Feed-Forward Filter Revisited ........................................................103
5.4.1 Negative Frequencies ...................................................................................104
5.4.2 Frequencies Above and Below Nyquist .......................................................106
5.5 Evaluating the Transfer Function H(ω) .............................................................106
5.5.1 DC (0 Hz) .....................................................................................................107
5.5.2 Nyquist (π) ...................................................................................................108
5.5.3 ½ Nyquist (π/2) ............................................................................................109
5.5.4 ¼ Nyquist (π/4) ............................................................................................109
5.6 Evaluating e^jω ..................................................................................................112
5.7 The z Substitution ..............................................................................................114
5.8 The z Transform .................................................................................................114
5.9 The z Transform of Signals ................................................................................116
5.10 The z Transform of Difference Equations ..........................................................117
5.11 The z Transform of an Impulse Response ..........................................................118
5.12 The Zeros of the Transfer Function ...................................................................119
5.13 Estimating the Frequency Response: Zeros .......................................................121
5.14 Filter Gain Control .............................................................................................122
5.15 First-Order Feed-Back Filter Revisited .............................................................123
5.16 The Poles of the Transfer Function ....................................................................124
5.16.1 DC (0 Hz) ...................................................................................................128
5.16.2 Nyquist (π) .................................................................................................128
5.16.3 ½ Nyquist (π/2) ..........................................................................................129
5.16.4 ¼ Nyquist (π/4) ..........................................................................................130
5.17 Second-Order Feed-Forward Filter ....................................................................132
5.17.1 DC (0 Hz) ...................................................................................................139
5.17.2 Nyquist (π) .................................................................................................139
5.17.3 ½ Nyquist (π/2) ..........................................................................................140
5.17.4 ¼ Nyquist (π/4) ..........................................................................................140
5.18 Second-Order Feed-Back Filter .........................................................................142
5.18.1 DC (0 Hz) ...................................................................................................148
5.18.2 Challenge ....................................................................................................149
5.19 First-Order Pole-Zero Filter: The Shelving Filter ..............................................149
5.19.1 DC (0 Hz) ...................................................................................................155
5.19.2 Challenge ....................................................................................................155
5.20 The Bi-Quadratic Filter ......................................................................................157
Bibliography ...............................................................................................................162

Chapter 6: Audio Filter Designs: IIR Filters ........................................................163

6.1 Direct z-Plane Design ........................................................................................163
6.2 Single Pole Filters ..............................................................................................164
6.2.1 First-Order LPF and HPF .............................................................................164
6.3 Resonators .........................................................................................................165
6.3.1 Simple Resonator .........................................................................................165
6.3.2 Smith-Angell Improved Resonator ..............................................................168
6.4 Analog Filter to Digital Filter Conversion .........................................................170
6.4.1 Challenge ......................................................................................................178
6.5 Effect of Poles or Zeros at Infinity ....................................................................178
6.6 Generic Bi-Quad Designs ..................................................................................181
6.6.1 First-Order LPF and HPF .............................................................................182
6.6.2 Second-Order LPF and HPF ........................................................................183
6.6.3 Second-Order BPF and BSF ........................................................................184
6.6.4 Second-Order Butterworth LPF and HPF ....................................................184
6.6.5 Second-Order Butterworth BPF and BSF ....................................................185
6.6.6 Second-Order Linkwitz-Riley LPF and HPF ...............................................186
6.6.7 First- and Second-Order APF .......................................................................188
6.7 Audio-Specific Filters ........................................................................................188
6.7.1 Modified Bi-Quad ........................................................................................189
6.7.2 First-Order Shelving Filters .........................................................................189

7.2.1 Frequency and Impulse Responses...............................................................214
7.2.2 The Effect of Feedback ................................................................................218

7.3 Design a DDL Module Plug-In..........................................................................224

7.3.1 Project: DDLModule ....................................................................................225
7.3.2 DDLModule GUI .........................................................................................225
7.3.3 DDLModule.h File .......................................................................................226
7.3.4 DDLModule.cpp File ...................................................................................226
7.3.5 Declare and Initialize the Delay Line Components .....................................228
7.3.6 DDLModule.h File .......................................................................................230
7.3.7 DDLModule.cpp File ...................................................................................230

7.4 Modifying the Module to Be Used by a Parent Plug-In ....................................233

7.4.1 DDLModule.h File .......................................................................................233
7.4.2 DDLModule.cpp File ...................................................................................234

7.5 Modifying the Module to Implement Fractional Delay .....................................235

7.5.1 DDLModule.cpp File ...................................................................................238

7.6 Design a Stereo Digital Delay Plug-In ..............................................................239

7.6.1 Project: StereoDelay .....................................................................................239
7.6.2 StereoDelay GUI ..........................................................................................241
7.6.3 StereoDelay.h File ........................................................................................241
7.6.4 StereoDelay.cpp File ....................................................................................242

7.7 Design a Stereo Crossed-Feedback Delay Plug-In ............................................244

7.8 Enumerated Slider Variables ..............................................................................245
7.8.1 Constructor ...................................................................................................246
7.8.2 PrepareForPlay() ..........................................................................................246
7.8.3 UserInterfaceChange() .................................................................................246
7.8.4 ProcessAudioFrame() ...................................................................................247
7.9 More Delay Algorithms .....................................................................................248
7.9.1 Advanced DDL Module ...............................................................................248

7.9.2 Delay with LPF in Feedback Loop ..............................................................248

7.9.3 Multi-Tap Delay ...........................................................................................249
7.9.4 Ping-Pong Delay ..........................................................................................250
7.9.5 LCR Delay ....................................................................................................250
Bibliography .............................................................................................................251


Chapter 8: Audio Filter Designs: FIR Filters .......................................................253


8.1 The IR Revisited: Convolution ..........................................................................253


8.2 Using RackAFX's Impulse Convolver ...............................................................258

8.2.1 Loading IR Files ...........................................................................................258
8.2.2 Creating IR Files ..........................................................................................259
8.2.3 The IR File Format .......................................................................................261

8.3 Using RackAFX's FIR Designer .......................................................................262




8.7 Designing a Complementary Filter ....................................................................269


9.9 Bipolar/Unipolar Functionality ..........................................................................324
9.9.1 WTOscillator GUI ........................................................................................324
9.9.2 WTOscillator.cpp File ..................................................................................325
Bibliography .............................................................................................................326

Chapter 10: Modulated Delay Effects ................................................................327

10.1 The Flanger/Vibrato Effect ..............................................................................328
10.2 The Chorus Effect ............................................................................................331

10.3 Design a Flanger/Vibrato/Chorus Plug-In .......................................................334

10.3.1 Project: ModDelayModule .........................................................................335
10.3.2 ModDelayModule GUI ..............................................................................336
10.3.3 ModDelayModule.h File ............................................................................336
10.3.4 ModDelayModule.cpp File ........................................................................337
10.3.5 PrepareForPlay() ........................................................................................340
10.3.6 Challenge ....................................................................................................342

10.4 Design a Stereo Quadrature Flanger Plug-In ...................................................342

10.4.1 Project: StereoQuadFlanger .......................................................................342
10.4.2 StereoQuadFlanger GUI .............................................................................342
10.4.3 StereoQuadFlanger.h File ...........................................................................342
10.4.4 StereoQuadFlanger.cpp File .......................................................................343
10.4.5 Challenges ..................................................................................................345

10.5 Design a Multi-Unit LCR Chorus Plug-In ......................................................345

10.5.1 Project: StereoLCRChorus .........................................................................346
10.5.2 StereoLCRChorus GUI ..............................................................................346
10.5.3 StereoLCRChorus.h File ............................................................................346
10.5.4 StereoLCRChorus.cpp File ........................................................................347
10.6 More Modulated Delay Algorithms .................................................................350
10.6.1 Stereo Cross-Flanger/Chorus (Korg Triton) ...............................................350
10.6.2 Multi-Flanger (Sony DPS-M7) ..................................................................350
10.6.3 Bass Chorus ................................................................................................350
10.6.4 Dimension-Style (Roland Dimension D) ...................................................351
10.6.5 Deca-Chorus (Sony DPS-M7) ....................................................................354
Bibliography .............................................................................................................355
Chapter 11: Reverb Algorithms .........................................................................357

11.1 Anatomy of a Room Impulse Response ..........................................................358

11.1.1 RT60: The Reverb Time .............................................................................359
11.2 Echoes and Modes ...........................................................................................360
11.3 The Comb Filter Reverberator .........................................................................364
11.4 The Delaying All-Pass Filter Reverberator ......................................................368
11.5 More Delaying All-Pass Filter Reverberators ..................................................370
11.6 Schroeder's Reverberator .................................................................................372
11.7 The Low-Pass Filter–Comb Reverberator .......................................................373
11.8 Moorer's Reverberator .....................................................................................375
11.9 Stereo Reverberation ........................................................................................376
11.10 Gardner's Nested APF Reverberators ..............................................................377
11.11 Modulated APF and Comb/APF Reverb .........................................................381
11.12 Dattorro's Plate Reverb ....................................................................................382


12.7 Design a Stereo Phaser with Quad-Phase LFOs ..............................................446

12.7.1 Phaser GUI .................................................................................................446
12.7.2 Phaser.h File ...............................................................................................446
12.7.3 Phaser.cpp File ...........................................................................................447
Bibliography .............................................................................................................451
References.................................................................................................................451
Chapter 13: Dynamics Processing ......................................................................453

13.1 Design a Compressor/Limiter Plug-In .............................................................457

13.1.1 Project: DynamicsProcessor .......................................................................458
13.1.2 DynamicsProcessor: GUI ...........................................................................458
13.1.3 DynamicsProcessor.h File ..........................................................................459
13.1.4 DynamicsProcessor.cpp File ......................................................................460
13.1.5 DynamicsProcessor.cpp File ......................................................................465

13.2 Design a Downward Expander/Gate Plug-In ...................................................466

13.2.1 DynamicsProcessor.h File ..........................................................................466
13.2.2 DynamicsProcessor.cpp File ......................................................................466

13.3 Design a Look-Ahead Compressor Plug-In .....................................................468

13.3.1 DynamicsProcessor: GUI ...........................................................................469
13.3.2 DynamicsProcessor.h File ..........................................................................470
13.3.3 DynamicsProcessor.cpp File ......................................................................470

13.4 Stereo-Linking the Dynamics Processor .........................................................472

13.4.1 DynamicsProcessor: GUI ............................................................................472
13.4.2 DynamicsProcessor.cpp File ......................................................................473

13.5 Design a Spectral Compressor/Expander Plug-In ...........................................475

13.5.1 Project: SpectralDynamics .........................................................................476
13.5.2 SpectralDynamics: GUI .............................................................................476
13.5.3 Additional Slider Controls ..........................................................................477
13.5.4 Spectral Dynamics Buttons ........................................................................477
14.3 Design a Wave Shaper Plug-In ........................................................................497
14.3.1 Project: WaveShaper ...................................................................................498
14.3.2 WaveShaper: GUI .......................................................................................498
Bibliography .............................................................................................................500
Appendix A: The VST and AU Plug-In APIs .....................................................501


A.1 Compiling as a VST Plug-In in Windows .........................................................501

A.2 Wrapping Your RackAFX Plug-In ....................................................................503
When I started teaching in the Music Engineering Technology Program at the University of Miami, we were writing
digital signal processing (DSP) assembly language programs and loading them on to DSP evaluation boards for testing. We had
also just begun teaching a software plug-in programming class, since computers were finally
at the point where native processing was feasible. I began teaching Microsoft's DirectX in
1997 and immediately began a book/manual on converting DSP algorithms into C++ code for
the DirectX platform. A year later I had my first manuscript of what would be a thick DirectX
programming book. However, I had two problems on my hands: first, DirectX is bloated
with Component Object Model (COM) programming, and it seemed like the lion's share of the
book was really devoted to teaching basic COM skills rather than converting audio signal
processing algorithms into C++, creating a graphical user interface (GUI), and handling
user input. More importantly, developers had dropped DirectX in favor of a new, lean, cross-
platform compatible plug-in format called Steinberg VST, written in "straight" C++ without
the need for operating system (OS) specific components. After taking one look at the Virtual
Studio Technology (VST) applications programming interface (API), I immediately dropped
all DirectX development, wrote VST plug-ins all summer, and then switched to teaching it.
Those initial grad students helped shape the direction and flow of the book (perhaps without
knowing it). I wanted the book to be aimed at people with programming skills who wanted
to learn audio signal processing.
Chapter 9 introduces oscillator design, which is needed in Chapter 10's modulated delay
effects: flanger, chorus, and vibrato effects. Chapter 11 includes the analysis of a collection
of reverb algorithms. You are encouraged to upload your own plug-ins and videos as well; I can't wait to hear what you
come up with.
Will Pirkle
June 1, 2012
The first affordable digital audio devices began appearing in the mid-1980s. Digital
recordings first appeared in the early 1970s, but the technology did not become available
for widespread distribution until about 15 years later, with the advent of the compact
disc (CD). Sampling captures a sequence of data points from a continuous analog signal. The data points are sampled on a regular
interval known as the sample period or sample interval. The inverse of the sample period
is the sampling frequency. A compact disc uses a sampling frequency of 44,100 Hz, or 44.1 kHz. Figure 1.1 shows
the block diagram of the input conversion system with LPF, A/D, and encoder.
Violating the Nyquist criterion will create audible errors in the signal: frequencies above Nyquist are folded back into the audible
spectrum. This effect is called aliasing because the higher-than-Nyquist frequencies are
encoded "in disguise" or as an "alias" of the actual frequency. This is easiest explained
with a picture of an aliased signal, shown in Figure 1.2.
Figure 1.1: The input conversion system ultimately results in numerical coding of the band-limited input signal.
Figure 1.2: (a) The Nyquist frequency is the highest frequency that can be encoded with two samples per period. (b) Increasing the frequency above Nyquist but keeping the sampling interval the same results in an obvious coding error: the aliased signal is the result.
The sampling theorem states that a signal must be sampled at a rate of at least twice its highest frequency component in order to be reconstructed without aliasing.
Once the aliased signal is created, it can never be removed and will remain as a permanent
error in the signal. The LPF that band-limits the signal at the input is called the anti-aliasing
filter. Another form of aliasing occurs in the movies. An analog movie camera takes 30 pictures
(frames) per second. However, it must often film objects that are rotating at much higher rates
than 30 per second, like helicopter blades or car wheels. The result is visually confusing: the
helicopter blades or car wheels appear to slow down and stop, then reverse directions and
speed up, then slow down and stop, reverse, and so on. This is the visual result of the high-
frequency rotation aliasing back into the movie as an erroneous encoding of the actual event.
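For a concrete sense of the numbers: with a 44.1 kHz sampling rate the Nyquist frequency is 22.05 kHz, so an input at 30 kHz would fold back and be encoded as an alias at 44.1 kHz − 30 kHz = 14.1 kHz.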
1.2 Reconstruction of the Signal
The digital-to-analog converter (DAC or D/A) first decodes the bit-stream, then takes the
sampled data points or impulses and converts them into analog versions of those impulses.
The impulses are then passed through a low-pass reconstruction filter, which smooths them back into a continuous signal.
Figure 1.4: The ideal reconstruction filter creates a smeared output with a damped oscillatory shape. The amplitude of the sin(x)/x shape is proportional to the amplitude of the input pulse.
Figure 1.5: Block diagram of a DSP system: A/D and D/A converters with their control logic, program and data memory, user interface and controls, and a microcontroller or microprocessor.
1.4 Synchronization and Interrupts
There are two fundamental modes of operation when dealing with incoming and outgoing
audio data: synchronous and asynchronous. In synchronous operation, the input and
output data words are synchronized to the same clock as the DSP. These are typically simple
systems with a minimum of input/output (I/O) and peripherals. More-complex systems
involve asynchronous operation, where the audio data is not synchronized to the DSP.
Figure 1.6: A simple signal processing system. The algorithm in this case is inverting the phase of the signal; the output is upside-down.
Figure 1.7: Block diagram of a synthesizer.
Moreover, the audio itself might not be synchronous, that is, the input and output bit-streams
might not operate on the same clock. A purely synchronous system will be more foolproof,
but less flexible.
An asynchronous system will almost always be interrupt-based. In an interrupt-based
design, the processor enters a wait-loop until a processor interrupt is toggled. The processor
interrupt is just like a doorbell. When another device such as the A/D has data ready to
deliver, it places the data in a predesignated buffer, and then it rings the doorbell by toggling
an interrupt pin. The processor services the interrupt with a function that picks up the data,
and then goes on with its processing code. The function is known as an interrupt service
routine, or ISR. The interrupt-based system is the most efficient use of
processor time because the processor can't predict the exact clock cycle when the data will arrive.
Another source of interrupts is the UI. Each time the user changes a control, clicks a button,
or otherwise manipulates the interface, another interrupt is generated for the processor to service.
1.5 Signal Processing Flow
The flow in Figure 1.8 consists of a one-time initialization followed by a loop: wait for the interrupt (INTR), read the input audio data, read any control data from the UI, process the audio, write the output data, and loop back to wait for the next interrupt.
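A minimal sketch of that interrupt-driven loop in C++; the hardware-facing calls here are illustrative stubs, not functions from any particular DSP vendor's library:

#include <cmath>

// Illustrative stand-ins for driver- or hardware-specific calls.
static bool  running = true;
static float latestInput = 0.0f;

static void  waitForInterrupt() { /* block until the A/D raises INTR */ }
static float readInput()        { return latestInput; }
static void  readControls()     { /* service any UI (control) interrupts */ }
static float process(float x)   { return -x; }   // example algorithm: phase inversion (Figure 1.6)
static void  writeOutput(float) { /* hand the sample to the D/A */ }

void audioLoop()
{
    // one-time initialization would go here
    while (running)
    {
        waitForInterrupt();          // wait for the audio INTR
        float x = readInput();       // read the new input sample
        readControls();              // read any control data
        float y = process(x);        // run the signal processing algorithm
        writeOutput(y);              // write the output sample
    }
}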
For the sampling theorem to hold true, the audio data must be arriving and leaving on a strictly
timed interval, although it may be asynchronous with the DSP's internal clock. This means
that when the DSP does receive an audio INTR it must do all of the audio processing and
handle any UI interrupts before the next audio INTR arrives, one sample interval later. The
number of bits used to encode each sample determines the number of quantization levels.
Thus, an 8-bit system can encode 2^8 values, or 256 quantization levels. A 16-bit system can
encode 65,536 different values. Figure 1.9 shows the hierarchy of encoded audio data. As a
system designer, you must first decide if you are going to deal with unipolar (unsigned) or
bipolar (signed) data. After that, you need to decide on the data types.
Figure 1.8: The flowchart for an audio signal processing system.

Figure 1.9: The hierarchy of encoded audio data: unipolar or bipolar, and within those, integer, fractional fixed-point, and floating-point data types.
• Unipolar or unsigned data is in the range of 0 to the maximum value (+Max), plus the number zero (0).
• Bipolar or signed data varies from –Max to +Max and is the most common form today. It also includes the number zero (0).
• Integer data is represented with integers and no decimal place. Unsigned integer audio varies from 0 to 65,535 for 16-bit systems. Signed integer audio varies from –32,768 to +32,767, covering the same 65,536 quantization levels.
• Fractional data is encoded with an integer and fractional portion, combined as int.frac.
This slight skewing of the data range is unavoidable if you intend on using the number zero, and it
restricts the most negative audio data to the second most negative value. This is because phase inversion is
common in processing algorithms, either on purpose or in the form of a negative-valued
coefficient or multiplier. If a sample came in with a value of –32,768 and it was inverted,
there would be no positive version of that value to encode. To protect against that, the
negative limit is set to –32,767. The audio data that travels from the audio hardware adapter
(DSP and sound card) as well as that stored in WAV files is signed-integer based. However,
most processing algorithms, including the plug-ins in this book, use a floating-point representation.
1.7 Using Floating-Point Data
In many audio systems, the DSP and plug-in data is formatted to lie on the range of –1.0 to +1.0
(strictly, –1.0 to +0.9999). In fact, the plug-ins you code
in this book will all use data that is on that same range. The reason has to do with overflow.
In audio algorithms, addition and multiplication are both commonplace. With integer-based
data, multiplication can easily produce values that exceed the representable
limits. Addition and subtraction can cause this as well, but only for half the possible values.
Keeping the data as fractional values within ±1.0 means that multiplying any two samples always produces a result that is smaller than either, and still within range.
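As a small illustration of moving between the signed 16-bit integers stored in WAV files and the ±1.0 floating-point range, conversion helpers might look like this (a sketch; the exact scaling convention, 32,768 versus 32,767, varies between systems):

#include <cstdint>

// Convert a 16-bit signed sample to a float on the range [-1.0, +1.0).
inline float int16ToFloat(int16_t s)
{
    return static_cast<float>(s) / 32768.0f;
}

// Convert a float on [-1.0, +1.0] back to a 16-bit signed sample,
// clamping first so an out-of-range value cannot wrap around.
inline int16_t floatToInt16(float f)
{
    if (f >  1.0f) f =  1.0f;
    if (f < -1.0f) f = -1.0f;
    return static_cast<int16_t>(f * 32767.0f);
}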
1.8 Basic DSP Test Signals
You need to know the data sequences for several fundamental digital signals in order to begin analyzing and testing algorithms.

Figure 1.13: The ½ Nyquist sequence has four samples per cycle.
The ½ Nyquist input sequence in Figure 1.13 has four samples per cycle, twice as many as Nyquist. The ½ Nyquist sequence is {…+1, 0, –1, 0, +1, 0, –1, 0…}.
1.8.4 ¼ Nyquist
The ¼ Nyquist input sequence in Figure 1.14 represents the ¼ Nyquist frequency of the system, with eight samples per cycle. The ¼ Nyquist sequence is {…0.0, 0.707, 1.0, 0.707, 0.0, –0.707, –1.0, –0.707…}.
1.8.5 Impulse
The impulse shown in Figure 1.15 is a single sample with the value 1.0 in an infinitely long
stream of zeros. The impulse response of a DSP algorithm is the output of the algorithm after
applying the impulse input. The impulse sequence is {…0, 0, 0, 0, 1, 0, 0, 0, 0…}.
Figure 1.14: The ¼ Nyquist sequence has eight samples per cycle.
Figure 1.15: The impulse is a single nonzero sample value in a sea of zeros.
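These sequences are easy to generate in code for testing; a sketch (buffer lengths and starting phase are arbitrary choices here):

#include <vector>
#include <cstddef>

// Single 1.0 in a sea of zeros.
std::vector<float> makeImpulse(std::size_t n)
{
    std::vector<float> v(n, 0.0f);
    if (n > 0) v[0] = 1.0f;
    return v;
}

// Nyquist: {+1, -1, +1, -1, ...}, two samples per cycle.
std::vector<float> makeNyquist(std::size_t n)
{
    std::vector<float> v(n);
    for (std::size_t i = 0; i < n; ++i) v[i] = (i % 2 == 0) ? 1.0f : -1.0f;
    return v;
}

// ½ Nyquist: {+1, 0, -1, 0, ...}, four samples per cycle.
std::vector<float> makeHalfNyquist(std::size_t n)
{
    static const float cycle[4] = { 1.0f, 0.0f, -1.0f, 0.0f };
    std::vector<float> v(n);
    for (std::size_t i = 0; i < n; ++i) v[i] = cycle[i % 4];
    return v;
}

// ¼ Nyquist: eight samples per cycle.
std::vector<float> makeQuarterNyquist(std::size_t n)
{
    static const float cycle[8] = { 0.0f, 0.707f, 1.0f, 0.707f,
                                    0.0f, -0.707f, -1.0f, -0.707f };
    std::vector<float> v(n);
    for (std::size_t i = 0; i < n; ++i) v[i] = cycle[i % 8];
    return v;
}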
1.9 Signal Processing Algorithms
Figure 1.17 shows an input signal, x(n), starting from x(0). The x(0) sample is the
first sample that enters the signal processing algorithm. In the grand scheme of things, x(0)
will be the oldest input sample ever. Indexing the numbers with absolute position is going to
be a chore as the index values are going to become large, especially at very high sample rates.
Another problem with dealing with the absolute position of samples is that algorithms
do not use the sample's absolute position in their coding. Instead, algorithms use the
position of the current sample and make everything relative to that sample. On the next
sample period, everything shifts by one position.
1.11 The One-Sample Delay
Whereas analog processing circuits like tone controls use capacitors and inductors to alter the phase
and delay of the analog signal, digital algorithms use time delay instead. You will uncover the
math and science behind this fact later on in Chapters 4 and 5 when you start to use it. In algorithm
block diagrams, the one-sample delay appears as its own building block, as shown in Figures 1.18 through 1.21.
Figure 1.18: DSP algorithms use the current sample location as the reference location, and all other samples are indexed based on that sample. Here you can see the current state of the algorithm frozen in time at the current input sample x(n).
Figure 1.19: One sample period later, everything has shifted. The previous current sample x(n) is now labeled x(n – 1), and a new current sample x(n) has arrived.
1.13 Addition and Subtraction
Addition and subtraction are really the same kind of operation: subtracting is the addition of a
negative number. There are several different algorithm symbols to denote addition and subtraction.
Mixing two signals is really the mathematical operation of addition. Figure 1.23 shows several ways of displaying the addition and subtraction operation in block diagrams.
Figure 1.21: Three delay algorithms: (a) one-sample delay, (b) two one-sample delays cascaded, producing two different outputs; notice that (c) is functionally identical to (b).
Figure 1.22: The multiplication operator is displayed as a triangle and a coefficient.
Figure 1.23: Addition and subtraction diagrams for two input sequences; these are all commonly used forms of the same functions.
1.14 Algorithm Examples and the Difference Equation
By convention, the output sequence of the DSP algorithm is named y(n), and the mathematical
equation that relates the output to the input is called the difference equation. Combining the operations
of delay, multiplication, and addition produces the difference equation for a given algorithm.
Figure 1.25: Examples of simple multiplier algorithms. Notice the different notation with the coefficient placed outside the triangle; this is another common way to designate it. (a) Simple scalar multiplication by an arbitrary value "a" that is less than one. (d) Phase inversion turns the signal upside down by using a negative coefficient; a value of –1.0 perfectly inverts the signal.
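To make the notation concrete, here is a sketch of how a simple difference equation such as y(n) = a0·x(n) + a1·x(n – 1) maps to code; the stored sample m_xZ1 implements the one-sample delay, and the coefficient values are whatever the design calls for:

// A first-order feed-forward difference equation:
//     y(n) = a0*x(n) + a1*x(n-1)
// The member m_xZ1 holds x(n-1), the one-sample delay element.
class SimpleFeedForward
{
public:
    SimpleFeedForward(float a0, float a1) : m_a0(a0), m_a1(a1), m_xZ1(0.0f) {}

    float processSample(float xn)
    {
        float yn = m_a0 * xn + m_a1 * m_xZ1;  // evaluate the difference equation
        m_xZ1 = xn;                            // shift: the current x(n) becomes x(n-1)
        return yn;
    }

private:
    float m_a0, m_a1;
    float m_xZ1;   // one-sample delay storage
};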
1.16 Practical Mixing Algorithm
A problem with mixing multiple channels of digital audio is the possibility of overflow, or
creating a sample value that is outside the range of the system. You saw that by limiting
the data to ±1.0, multiplication of any of these
numbers always results in a number that is smaller than either, and always within the same
±1.0 range. However, addition of signals can easily generate values outside that range.
Figure 1.27: A mixing algorithm in which each channel is scaled by its own coefficient, a–d.
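A sketch of one practical approach: scale each input by a coefficient chosen so that the worst-case sum stays within ±1.0 (equal weights of 0.5 for a two-input mix), with a clamp as a final safety net. The function names here are illustrative:

// Mix two normalized inputs without overflowing the +/-1.0 range.
inline float mixTwo(float a, float b)
{
    return 0.5f * a + 0.5f * b;    // equal-weight mix keeps |y| <= 1.0
}

// Last-resort protection for sums that might still exceed the range.
inline float clampAudio(float x)
{
    if (x >  1.0f) return  1.0f;
    if (x < -1.0f) return -1.0f;
    return x;
}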
In the next chapter you will be introduced to the anatomy of a plug-in from a software point
of view. In Chapters 6 through 14, you will learn how DSP theory allows you to combine
filters, effects, and oscillators for use in your own plug-ins.
Bibliography
Ballou, G. 1987. Handbook for Sound Engineers, pp. 898–906. Indiana: Howard W. Sams & Co.
Jurgens, R. K., ed. 1997. Digital Consumer Electronics Handbook, Chapter 2. New York: McGraw-Hill.
Kirk, R. and Hunt, A. 1999. Digital Sound Processing for Music and Multimedia.
Anatomy of a Plug-In
A plug-in is a software component that interfaces with another piece of software called the client.
Figure 2.1: (a) In static linking the functions are compiled inside the client. (b) In dynamic linking the functions are located in a different file, the DLL.
The second case is called dynamic linking and is shown in Figure 2.1. The file that contains the precompiled functions is the
DLL. It is more complicated because extra steps must be taken during run-time operation
rather than relying on code compiled directly into the executable. The advantage is that if a
bug is found in the library, you only need to redistribute the newly compiled DLL rather than the entire client executable.
2.2 Virtual Address Space and DLL Access
Figure 2.2: Each process (A.exe, B.exe) gets its own virtual address space, running up to 0xFFFFFFFF on a 32-bit system; an in-process DLL is mapped into the client's own address space, while an out-of-process DLL lives in a different process.
While that topic could be the subject of another book, the main thing you need to know is
that typically when the client loads the DLL and begins the communication process, the
DLL is loaded into the same virtual address space as the client. This means that the client
code might as well have the DLL code compiled into it, since the addressing requires no
translation; this is called an in-process DLL. An out-of-process DLL can
be loaded into another process space or even onto another machine.
2.3 C and C++ Style DLLs
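As a rough sketch of the two approaches (the type and function names here are illustrative, apart from createObject(), which is the creation function RackAFX itself uses): a C-style DLL exports plain functions and passes data through a shared structure, while a C++-style DLL exports a factory function that returns a pointer to a newly created plug-in object, which the client then calls directly.

// --- C-style DLL interface (sketch) ---------------------------------
struct CustomDataStruct            // data passed between client and DLL
{
    float inputSample;
    float outputSample;
};

extern "C" __declspec(dllexport)
void processData(CustomDataStruct* pData)      // client calls this by name
{
    pData->outputSample = pData->inputSample;  // trivial pass-through
}

// --- C++-style DLL interface (sketch) -------------------------------
class IPlugIn                      // the interface both sides agree on (the API)
{
public:
    virtual ~IPlugIn() {}
    virtual float processSample(float x) = 0;
};

class MyPlugIn : public IPlugIn    // the DLL's concrete implementation
{
public:
    float processSample(float x) override { return x; }
};

extern "C" __declspec(dllexport)
IPlugIn* createObject()            // factory: the client keeps the returned pointer
{
    return new MyPlugIn;
}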
2.4 Maintaining the User Interface
Most plug-ins have an associated graphical user interface (GUI or UI) with controls for
manipulating the device. There are several schemes, but in general, when a new instance of
the plug-in is created, a new child window is created with the GUI embedded in it. Whenever
a GUI control changes state, a function on the plug-in is called. The client or GUI passes the
plug-in information about which control changed and the new value or state of the control.
The plug-in then handles the message by updating its internal variables to effect the change in
signal processing. Generally, the GUI appearance (the position of the sliders or knobs or the
states of switches) is controlled automatically by the client or the GUI code itself. There are
three different ways the GUI can be handled:
1. The client creates, maintains, and destroys the GUI and forwards control-change
messages to the plug-in.
2. The client creates, maintains, and destroys the GUI but the GUI communicates directly
with the plug-in.
3. The plug-in creates, maintains, destroys, and communicates directly with the GUI,
independent of the client.

Figure 2.3: In a C-style DLL, the client first calls an initialization function and thereafter communicates with the DLL by passing a custom data structure back and forth.
Figures 2.5 through 2.7 show the three basic GUI scenarios. The first difference is in who
creates, maintains, and destroys the GUI. In the first two scenarios the GUI is created as a child
window of the client, which has the benefit that if the client is minimized or closed,
the child windows will be hidden or destroyed accordingly. Therefore, the first two scenarios
adhere to the client-child configuration. The second difference is in how the communication flows:
indirectly routed through the client or directly from the GUI to the plug-in. The RackAFX
software uses the second paradigm; the client creates the GUI but the GUI communicates
directly with the plug-in.
Figure 2.4: In a C++ plug-in, the client calls a creation function and the DLL (server) returns a pointer to a newly created plug-in object. The client uses this pointer for future calls to the plug-in without having to bother communicating with the DLL. The client might also create multiple instances of the plug-in, each with its own pointer.
2.5 The Applications Programming Interface
In order for the client-server scheme to work, both the client and DLL/plug-in must agree on
the naming of the functions. This includes the creation function and all of the functions that
the client will be able to call on the plug-in object. The plug-in might have other functions
of its own, but the agreed-upon set must be implemented with exactly the names and prototypes the client expects.
Figure 2.6: In this scenario, the client maintains the GUI,
which communicates directly with the plug-in.
Figure 2.7: In another scenario, the plug-in maintains and communicates
directly with the GUI without interference from the client.
This agreed-upon set of function definitions is called the applications programming interface, or API. It is a definition of the functions an object must implement
or override. It defines the function prototypes and describes how the functions will be
called and used. The API is written by the client manufacturer and is made available to
developers who want to write plug-ins for that client.
2.6 Typical Required API Functions
Plug-ins are designed after the hardware devices that they replace. The audio processing
loop is the same as the hardware version you saw in Chapter 1. Figure 2.8 shows a software
variation on that flowchart: a one-time initialization, a call to prepare for audio streaming, and then a loop that reads the input audio data and any control variable data, processes it, and writes the output for the next stage.
Although the various plug-in APIs are different in their implementations, they share
the core operations listed in Table 2.1.
Table 2.1: The typical core operations that plug-in APIs share.
One-time initialization: Called once when the plug-in is instantiated; this function implements any initialization that needs to happen only once, such as setting up the GUI and allocating memory buffers dynamically.
One-time destruction: Called when the plug-in is to be destroyed; this function de-allocates any memory declared in the one-time initialization and/or in other functions that allocate memory. If there are any owned child-windows, the plug-in destroys them here.
Prepare for streaming: Called after the user has hit the play button or started audio streaming but before the data actually flows. This function is usually used to flush buffers containing old data or initialize any variables such as counters that operate on a per-play basis (not found in some APIs).
Process audio: The main function that does the actual signal processing. This function receives audio data, processes it, and writes out the result. This is the heart of the plug-in.
2.7 The RackAFX Philosophy and API
The fundamental idea behind the RackAFX software is to provide a platform for rapidly
developing real-time audio signal processing plug-ins with a minimum of coding, especially user interface code.
2.7.1 __stdcall
In the RackAFX code you will see the qualifier __stdcall preceding each function prototype as
well as its implementation. The __stdcall calling convention is there for future compatibility.
Here is part of the interface file for the CPlugIn object, plugin.h, which defines the core functions:
// 2. One Time Destruction
virtual ~CPlugIn(void);
// 3. The Prepare For Play Function is called just before audio streams
virtual bool __stdcall prepareForPlay();

// 4. processAudioFrame() processes an audio input to create an audio
output
virtual bool __stdcall processAudioFrame(float* pInputBuffer,
float* pOutputBuffer,
UINT uNumInputChannels,
UINT uNumOutputChannels);
// 5. userInterfaceChange() occurs when the user moves a control.
virtual bool __stdcall userInterfaceChange(int nControlIndex);
The five functions in Table 2.2 are the core RackAFX API: implement them and you
have a legitimate RackAFX plug-in. Best of all, the RackAFX plug-in designer will write
and provide default implementations of all these functions for you. You need only to go in
and alter them to change your plug-in's behavior. See Appendix A for a comparison of the
RackAFX API and other commercially available formats as well as notes on using RackAFX
plug-in objects inside API wrappers for other formats.
Table 2.2: The RackAFX API core functions.
RackAFX Function
Remarks
Bibliography
Apple Computers, Inc. 2011. The Audio Unit Programming Guide. https://developer.apple.com/library/ Accessed August 7, 2012.
Bargen, B. and Donnelly, P. 1998. Inside DirectX, Chapter 1. Redmond, WA: Microsoft Press.
Coulter, D. 2000. Digital Audio Processing, Chapters 7–8. Lawrence, KS: R&D Books.
(Note: you must create a free developer's account to download the API.) Accessed August 7, 2012.
Writing Plug-Ins with RackAFX
The RackAFX plug-in designer will help you write your plug-in. When you create a new
project, RackAFX creates the project files along with a C++ plug-in object derived from its base class.
3.2 Creation
When you load a plug-in in RackAFX, you are actually passing the system a path to the DLL
youÕve created. RackAFX uses an operating system (OS) function call to load the DLL into its
process space. Once the DLL is loaded, RackAFX first runs a compatibility test, then requests
the creation of a new plug-in object through the DLL's creation function.
need to perform more calculations or logic processing in addition to just changing the control
variable. So, in addition to changing and updating your internal GUI variable, RackAFX will
call your userInterfaceChange() method so that you can do any extra work that is required.
When audio streaming begins, audio file-specific data is passed to the plug-in in prepareForPlay().
Figure 3.3: The sequence of events during the play/process operation; audio data from the file is processed in the plug-in and sent to the audio adapter for monitoring.
3.5 Destruction
When the user unloads the DLL either manually or by loading another plug-in, the client first destroys the plug-in object and then unloads the DLL from its process space.
We'll start with the first type and make a simple volume control. After that, we'll design a
control that will require memory elements. You will need the
following installed on your computer:
• RackAFX
• Microsoft Visual C++ Express® 2008 or 2010 (both are free from Microsoft)
• Microsoft Visual C++ (the full version also works)
There is no advantage to having the full version of Visual C++ (aka VC++) for plug-in
programming unless you plan on using your own GUI resources. Make sure that Visual
C++ is installed before continuing. Once you get used to the flow of writing and
testing your plug-ins, you will find that you can move easily and swiftly through the rest of
the book's projects because they all follow the same design pattern and the design chapters
will use the same conventions for each project.
3.6.1 Project: Yourplugin
The first step will always be the creation of a new project. In this phase, RackAFX creates the project directory and files along with a derived class based on the project name.
3.6.2 Yourplugin GUI
Next, you lay out your GUI controls based on the algorithm you are following and decide
on the variable data types and names that will connect the GUI elements to your plug-in.
This generally starts with writing the difference equation(s) for the algorithm. Variables in
the difference equation will map to member variables and GUI controls in your plug-in.
3.6.4 Yourplugin.cpp File
Coefficients in a block diagram (or transfer function or algorithm) become float member variables in your plug-in code. Each slider or button control on the UI will map to and control a member variable in your plug-in.
• a = 0: Mute
• a = 1.0: Max volume
The output samples y(n) are a scaled version of the input x(n), and the scaling factor is named a. The value a is called a coefficient in the algorithm. The algorithm states that the output is the input scaled by the coefficient:

y(n) = a · x(n)
Figure 3.6: The menu and toolbar on the left handle most of your
plug-in development.
Figure 3.5: When you start RackAFX, it opens in prototype view. It features the control surface
and plug-in routing controls.
Figure 3.7: The toolbar buttons let you load, debug, and rebuild your plug-in.
The menu items include:
• File: Manage projects by creating, editing, or clearing the project.
• Modules: Built-in plug-ins that you can use for analysis and testing.
In the preferences you need to:
1. Choose your default folders for projects, WAVE files, and default WAVE files. You can
use whatever directory you want for your project folder and you can also open projects
from any other folder at any time; the default is simply for conveniently grouping all your
projects in one place.
In Visual C++ you will see a new project and solution
named "Volume." If you expand the Volume project then you can see the files that RackAFX
wrote for you. Your derived class is contained in Volume.h and Volume.cpp. Before continuing,
it's worth taking a peek into the RackAFXDLL.cpp file and locating the creation mechanism
createObject():

//RackAFX Creation Function
DllExport CPlugIn* createObject()
{
	CPlugIn* pOb = new CVolume; // ***
	return pOb;
}
Figure 3.10: The top section of the New/Edit Project window. Notice that your project name becomes the name of your plug-in object, so you will receive errors if you name the project in a way that produces illegal C++ syntax. Below this are more project options.
In RackAFX, you can see that all the sliders and buttons are disabled; the sliders don't move.
Each slider or button control on the UI will map to and control a member variable in your plug-in.
Figure 3.11: Right-click inside the bounding box of a slider and the slider properties window appears. This is how you configure the slider and link it to a variable. A dropdown list exposes the choices for the variable's data type.
variable in the object. You cannot edit this cell. Start with the control name and enter
"Volume." Hit Enter to advance to the next cell. For this version of the plug-in there are no
other properties that need to change.
Your plug-in code will use the index value 0 (uControlID in the properties dialog) to map to
the m_fVolume variable, which is controlled by the slider named "Volume" on the UI.
3.9.4 Volume.h File
Before we add the code, look around the plug-in object Þ les (volume.h and volume.cpp) to
As you add, edit, or remove controls from the main UI you will notice that RackAFX will ß
ash to
the compiler and back as it writes the code for you. You might use this ß ashing as a signal that
the code update is synchronized. If you donÕt like it, minimize the compiler and the ß
ashing will
not occur. There is a special check�-box in View Preferences to start the compiler minimized for
this very reason.
Writing Plug-Ins with RackAFX 49
// 1. One Time Initialization

CVolume();




// 7. userInterfaceChange() occurs when the user moves a control.
virtual bool userInterfaceChange(int nControlIndex);






// ADDED BY RACKAFX -- DO NOT EDIT THIS CODE!!! ----------------------- //

//
**--0x07FD--**


float m_fVolume;


// **--0x1A7F--**

// -------------------------------------------------------------------- //

};
Aside from the main plug-in functions we discussed in Chapter 2, you will see some more
commented areas of code. In the first part, marked
// Add your code here:
you can add more variables or function definitions just like you would in any .h file. Try to keep your code in
the denoted area to make it easier to find and read. The area below that says:
// ADDED BY RACKAFX -- DO NOT EDIT THIS CODE!!!
is very important: you will see your member variable m_fVolume declared in this area.
You will see an abbreviated-code notation frequently in the printed code as a reminder that
code has been cut out for easier reading.
RackAFX writes C++ code for you! But, you have to be careful not to alter the RackAFX C++
code in any way. You can always tell if the code is RackAFX code because there will be warning
comments and strange hex codes surrounding the RackAFX code. The RackAFX code is left for
you to see only as a backup to your debugging and should never be altered by anyone except
RackAFX itself.
In this case, check to verify that RackAFX added the float variable m_fVolume as you
anticipated. Next, move on to the volume.cpp file and have a look at it, starting from the top.
3.9.5 Volume.cpp File
Constructor and destructor
processAudioFrame()
Above the definition is a "hint" comment.
userInterfaceChange()
This function is called whenever the user changes a control on the control surface:
/* ADDED BY RACKAFX -- DO NOT EDIT THIS CODE!!! ----------------------------- //
   **--0x2983--**

	Variable Name                    Index
	-----------------------------------------------
	m_fVolume                        0
	-----------------------------------------------

   **--0xFFDD--**
// ----------------------------------------------------------------------------- */
// Add your UI Handler code here ----------------------------------------------- //
//
As with processAudioFrame(), there is a "hint" comment above the function definition which reminds you how RackAFX has mapped your variable to a control index. In this case, the m_fVolume variable is mapped to index 0.
bool __stdcall CVolume::userInterfaceChange(int nControlIndex)
{
	// <SNIP SNIP SNIP>
At this point, you have built a DLL which is designed to serve up CVolume objects when the client requests them.

You should always build and test your brand-new project first before modifying any code! You want to do this to make sure there are no C++ errors (you might have inadvertently hit a key or changed some code), as well as to make sure your audio system is working and you can hear the audio data.
processAudioFrame()
There are only three lines to modify: one for the first channel and another two for the other routing combinations. The modification is shown in the sketch below, where you are scaling the input by the volume coefficient. Can you see how this relates to the difference equation? If not, stop now and go back to figure it out. Now, rebuild the project and reload it into RackAFX. Try the slider and you will see that the volume does indeed change. Congrats on your first plug-in!
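The left-channel line ends up looking like the sketch below; the other two assignments follow the same pattern for the remaining channel-routing cases. This is only a sketch of the idea: the RackAFX-generated skeleton and channel checks around it stay exactly as written by the tool.

	// Do LEFT (MONO) Channel; there is always at least one input/one output
	pOutputBuffer[0] = pInputBuffer[0]*m_fVolume;

	// Mono-In, Stereo-Out (AUX Effect)
	if(uNumInputChannels == 1 && uNumOutputChannels == 2)
		pOutputBuffer[1] = pInputBuffer[0]*m_fVolume;

	// Stereo-In, Stereo-Out (INSERT Effect)
	if(uNumInputChannels == 2 && uNumOutputChannels == 2)
		pOutputBuffer[1] = pInputBuffer[1]*m_fVolume;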
What makes this plug-in so easy and quick to develop is that the slider volume control maps directly to a variable that is used in the processAudioFrame() function, as depicted in Figure 3.14. This means the data coming from the slider can be used directly in the algorithm. The data coming from the slider and controlling m_fVolume is said to be raw data. You use the raw value to affect the signal processing algorithm.
3.10 Design a Volume-in-dB Plug-In
This next example will show you how to cook your raw data to be used in the signal processing algorithm. The VolumedB plug-in will also be a volume control, but will operate in dB instead of using a raw multiplier. You may have noticed that your previous volume control didn't seem to do much in the upper half of the throw of the slider. This is because linear deflections of the slider do not correspond to linear changes in perceived loudness. To fix this, we'll design another plug-in that will operate in decibels (dB). The block diagram is identical to the first project, only the control range of values has changed.

• -96 dB: Mute
• 0 dB: Max volume

Now, the volume control is specified in dB, so you need a formula to convert the dB value to a scalar multiplier for the algorithm. You should memorize the dB equations now if you
Figure 3.14: The slider data flows directly into the m_fVolume variable used in processAudioFrame().
haven't already, since they will reappear over and over in audio plug-ins. This is the formula that will take the raw data from the slider (-96 to 0 dB) and cook it into a variable we can use in the algorithm:

    Volume = 10^(dB/20)        (3.2)
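As a quick numerical check (these example values are mine, not from the text): a slider value of -6 dB cooks to 10^(-6/20), roughly 0.5; -20 dB cooks to exactly 0.1; and the bottom of the range, -96 dB, cooks to about 0.000016, which is effectively mute.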
Coefficients in a block diagram (or transfer function or algorithm) become float member variables in your plug-in code. Each slider or button control on the UI will map to and control a member variable in your plug-in.
3.10.1 Project: VolumedB
Set up the volume slider with the properties in Table 3.1.

Table 3.1: The slider properties for the VolumedB project.

	Slider Property    Value
	Control name       Volume
	Variable type      float
	Variable name      m_fVolume_dB
3.10.3 VolumedB.h File
RackAFX has written the code and declared the variable float m_fVolume_dB but we still need to declare our second variable named m_fVolume, which stores the cooked data. Open the VolumedB.h file and declare the variable in the user declarations area:

// abstract base class for DSP filters
class CVolumedB : public CPlugIn
{
public: // plug-in API Functions
	<SNIP SNIP SNIP>

// Add your code here: ----------------------------------------------- //
	float m_fVolume;
// END OF USER CODE -------------------------------------------------- //
3.10.4 VolumedB.cpp File
• Cook and initialize the member variable.
• Use the pow() function.

CVolumedB::CVolumedB()
{
	// Added by RackAFX - DO NOT REMOVE
	// <SNIP SNIP SNIP>
processAudioFrame()

Implement the difference equation:

bool __stdcall CVolumedB::processAudioFrame(float* pInputBuffer, float* pOutputBuffer,
                                            UINT uNumInputChannels, UINT uNumOutputChannels)
{
	// output = input -- change this for meaningful processing
	//
	// Do LEFT (MONO) Channel; there is always at least one input/one output
	// (INSERT Effect)
	pOutputBuffer[0] = pInputBuffer[0]*m_fVolume;

	// Mono-In, Stereo-Out (AUX Effect)
	if(uNumInputChannels == 1 && uNumOutputChannels == 2)
		pOutputBuffer[1] = pInputBuffer[0]*m_fVolume;

	// Stereo-In, Stereo-Out (INSERT Effect)
	if(uNumInputChannels == 2 && uNumOutputChannels == 2)
		pOutputBuffer[1] = pInputBuffer[1]*m_fVolume;

	return true;
}
3.11 Design a High-Frequency Tone Control Plug-In
This example will show you how to implement the last of the digital signal processing (DSP) algorithm building blocks: the delay element (z^-D), where D is the amount of delay in samples. In this example, D = 1, so we are dealing with a one-sample delay, a fundamental building block. Figure 3.16 shows the block diagram for the filter. The design equation is as follows:

    y(n) = a0 x(n) + a1 x(n-1)

You already know that the coefficients a0 and a1 will become float member variables in our plug-in. But what about the delay element z^-1? In hardware, this would be a register to store the sample for one clock period. In software, it simply becomes another float member variable.
A DSP filtering algorithm is usually described in mono, or single-channel, format; that is, with one input, x(n), and one output, y(n).

Delay elements will become float member variables in your plug-in object. For single-sample-delay elements, you can simply assign separate variables. For multiple-sample-delay elements you may also use float arrays to store the data.
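A rough sketch of the two storage options just described (the names and the delay length D are mine, not from the text):

	// one-sample delay: a single float member holds x(n-1)
	float m_fZ1;

	// multi-sample delay: a hypothetical D-sample delay line stored in an array
	static const int D = 8;        // example delay length
	float m_fDelayLine[D];         // circular buffer of past samples
	int   m_nWriteIndex = 0;       // next write position

	// per-sample update (sketch only):
	// float xn_D = m_fDelayLine[m_nWriteIndex];   // oldest sample, x(n-D)
	// m_fDelayLine[m_nWriteIndex] = xn;           // overwrite with current x(n)
	// m_nWriteIndex = (m_nWriteIndex + 1) % D;    // advance and wrap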
3.11.1 Project: SimpleHPF
This plug-in is going to implement a very primitive HF (high frequency) tone control that behaves like a high-pass filter. It will attenuate low frequencies, leaving only the high frequencies. Set up a single slider with the properties in Table 3.2.

Table 3.2: The slider properties for the SimpleHPF project.

	Slider Property    Value
	Control name       a1
	Variable type      float
	Variable name      m_fSlider_a1
3.11.3 SimpleHPF.h File
To figure out what the CSimpleHPF object is going to have to do, first write the difference equation. Examine it and figure out which components are going to become coefficients and which are going to be memory locations. Also, figure out any intermediate variables you might need. You can figure out the difference equation by using the rules you learned in Chapter 1 to chart the input and output signals. Make sure you understand how this equation works before moving on. The difference equation is as follows:

    y(n) = a0 x(n) + a1 x(n-1)
Next, figure out which block diagram components become variables in your C++ code. Each of the coefficients a0 and a1 will become a float member variable in the code. Even though we might be tempted to share the coefficients, these are separate left and right algorithms, so each channel gets its own set. The z^-1 delay element will also need to become a member variable and we will definitely need one for each channel because these can never be shared. I named mine m_f_z1_left and m_f_z1_right.

The slider will only modify its own m_fSlider_a1 value. We will calculate the values for the other coefficients using it. We will need to modify the userInterfaceChange() function just like the preceding example to wire the slider into the algorithm. Jump to your C++ compiler and go to the SimpleHPF.h file to add your member variables. Notice the variable that RackAFX added in the code below:
// 5. userInterfaceChange() occurs when the user moves a control.
virtual bool userInterfaceChange(int nControlIndex);

// Add your code here: ------------------------------------------------------- //
	float m_f_a0_left;
	float m_f_a1_left;
	float m_f_a0_right;
	float m_f_a1_right;
	float m_f_z1_left;
	float m_f_z1_right;
// END OF USER CODE ---------------------------------------------------------- //

// ADDED BY RACKAFX -- DO NOT EDIT THIS CODE!!! ------------------------------- //
// **--0x07FD--**
	float m_fSlider_a1;
// **--0x1A7F--**
// --------------------------------------------------------------------------- //

Figure 3.17: The HF tone control block diagram with annotations showing the signal math.
3.11.4 SimpleHPF.cpp File
bool __stdcall CSimpleHPF::processAudioFrame(float* pInputBuffer, float* pOutputBuffer,
                                             UINT uNumInputChannels, UINT uNumOutputChannels)
{
	// Do LEFT (MONO) Channel
	//
	// Input sample is x(n)
	float xn = pInputBuffer[0];

	// READ: Delay sample is x(n-1)
	float xn_1 = m_f_z1_left;

	// Difference Equation
	float yn = m_f_a0_left*xn + m_f_a1_left*xn_1;

	// WRITE: Delay with current x(n)
	m_f_z1_left = xn;

	// Output sample is y(n)
	pOutputBuffer[0] = yn;
OK, now it's your turn to implement the other channel. Give it a try by yourself before moving on.
userInterfaceChange()

Modify userInterfaceChange() so that moving the slider recalculates the new a0 and a1 values.

Build and load the plug-in, then open the analyzer shown in Figure 3.18 (yours may look slightly different).
The analyzer is a powerful tool for checking your plug-in's performance. The basic controls are:
1. Scope/spectrum analyzer.
2. Basic graphing options.
3. Scope controls.

Flush out delay elements in preparation for each play event in the plug-in. You generally do not want old data sitting inside these storage registers. The only exceptions are delay-looping effects where you exploit the old data. This is done in prepareForPlay(); a sketch follows below.
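A minimal sketch of that idea, assuming the member names used in this chapter (the exact function skeleton comes from your RackAFX-generated project):

	bool __stdcall CSimpleHPF::prepareForPlay()
	{
		// flush the one-sample delay registers so no stale data
		// from the previous play event leaks into the new one
		m_f_z1_left  = 0.0;
		m_f_z1_right = 0.0;

		return true;
	}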
Figure 3.18: The audio analyzer.
Figure 3.19: A flat frequency response with a1 = 0.0.
3.12 Design a High-Frequency Tone Control with Volume Plug-In
This final example will show you how to deal with more than one slider control by simply adding a volume-in-dB control to the block diagram. The plan is to add another slider to the existing plug-in; the new slider will control the overall volume of the plug-in in dB. You already know how to implement both parts of it, so this exercise is really more about adding new controls to an existing project.
3.12.1 Project: SimpleHPF
Open your SimpleHPF project in RackAFX using the Open button or the menu/toolbar items.
3.12.2 SimpleHPF GUI
Add the new volume slider: Right-click on the second slider group and add a new slider configured like the VolumedB volume slider (variable m_fVolume_dB, range -96 to 0 dB, initial value 0.0).
// END OF USER CODE ------------------------------------------------------------ //

// ADDED BY RACKAFX -- DO NOT EDIT THIS CODE!!! -------------------------------- //
// **--0x07FD--**
	float m_fSlider_a1;
	float m_fVolume_dB;
// **--0x1A7F--**
// ----------------------------------------------------------------------------- //
3.12.4 SimpleHPF.cpp File
• Cook the volume data after the filter initializations.

CSimpleHPF::CSimpleHPF()
{
	// Added by RackAFX - DO NOT REMOVE
	<SNIP SNIP SNIP>

	m_f_a0_left = -1.0;
	m_f_a1_left = 0.0;
	m_f_a0_right = -1.0;
	m_f_a1_right = 0.0;
	m_f_z1_left = 0.0;
	m_f_z1_right = 0.0;

	m_fVolume = pow(10.0, m_fVolume_dB/20.0);
}
processAudioFrame()

Add the volume control scaling after the filtering operation.
	// Do LEFT (MONO) Channel; there is always at least one input/one output
	// (INSERT Effect)
	// Input sample is x(n)
	float xn = pInputBuffer[0];

	// READ: Delay sample is x(n-1)
	float xn_1 = m_f_z1_left;

	// Difference Equation
	float yn = m_f_a0_left*xn + m_f_a1_left*xn_1;

	// WRITE: Delay with current x(n)
	m_f_z1_left = xn;

	// Output sample is y(n)
	pOutputBuffer[0] = yn*m_fVolume;

	// Mono-In, Stereo-Out (AUX Effect)
	if(uNumInputChannels == 1 && uNumOutputChannels == 2)
		pOutputBuffer[1] = yn*m_fVolume;

	// Stereo-In, Stereo-Out (INSERT Effect)
	if(uNumInputChannels == 2 && uNumOutputChannels == 2)
	{
		// Input sample is x(n)
		float xn = pInputBuffer[1];

		// Delay sample is x(n-1)
		float xn_1 = m_f_z1_right;

		// Difference Equation
		float yn = m_f_a0_right*xn + m_f_a1_right*xn_1;

		// Populate Delay with current x(n)
		m_f_z1_right = xn;

		// Output sample is y(n)
		pOutputBuffer[1] = yn*m_fVolume;
	}
userInterfaceChange()

• Cook the volume data.
• Make sure you check your control ID values in case you chose different sliders than I did.
bool __stdcall CSimpleHPF::userInterfaceChange(int nControlIndex)
{
	// decode the control index
	switch(nControlIndex)
	{
		case 0:
		{
			m_f_a1_left = m_fSlider_a1;
			m_f_a1_right = m_fSlider_a1;
			m_f_a0_left = m_f_a1_left - 1.0;
			m_f_a0_right = m_f_a1_right - 1.0;
			break;
		}
		case 1:
		{
			m_fVolume = pow(10.0, m_fVolume_dB/20.0);
			break;
		}
		default:
			break;
	}

	return true;
}
3.13 The User Plug-In Menu in RackAFX
As you write more plug-ins, you will notice that they begin to automatically populate the user
plug-in menu item in RackAFX. By now, you should have three plug-ins in this menu. There
are a few things you need to understand about this menu.
• It allows you to play with the plug-ins without having to open your compiler and manually load and unload the DLLs.
• You can select different plug-ins from this menu while audio is streaming and they will automatically slot in and out, so you can audition or show off your plug-ins quickly.
• It allows RackAFX to behave just like other plug-in clients by loading all the DLLs it finds in its PlugIns folder at once when you start the software. This can be dangerous!
That last item poses a problem during the course of development: if you write a DLL that does bad things in the constructor, such as hold pointers with garbage values or try to access memory that hasn't been allocated, it may crash RackAFX when it first starts up. If your DLL behaves really badly, you might even wound the OS too. This is a difficult issue to avoid without complicated rules for commissioning and decommissioning the plug-in. Additionally, you will have the same problem if you are developing a commercial plug-in and you are using a third-party client; most of these are designed to first open all the DLLs in their plug-in folder and check to make sure the plug-ins can be instantiated. If you write a bad DLL, you might also crash these clients and/or the OS. In at least one commercially available client, if your plug-in crashes during startup, it will not be loaded again in future launches. When RackAFX loads your DLL, it does some error checking to try to make sure your plug-in is legal, but it can't check the validity of your construction code.
If RackAFX crashes each time you open it, remove the last DLL you were working on from the PlugIns folder. Alternatively, you can remove all the DLLs; you will want to copy them and restore them later when you find the bad DLL that caused the crashing. Writing a DLL is challenging and fun, but since you are writing a component, you can wind up with crashes like this.
During the course of this book you will learn how to implement the following effects:
• EQs/tone controls
• Delay
• Flanger/chorus
• Compressor/limiter/tremolo
• Reverb
• Modulated filters/phaser

The EQ/tone control theory is the most difficult of all the effects to explain in a simple way. These effects are based on DSP filter theory which involves complex algebra, that is, the algebra of complex numbers. Complex numbers contain real and imaginary parts. There are two basic ways to explain basic DSP theory. The first is intuitive and involves
Figure 4.2: the 1 kHz frequency is shifted by -45 degrees compared to the input. At very high frequencies, the phase shift approaches -90 degrees. This phase shift is not a side effect of the filtering but an integral part of how it works.

To understand the ramifications of the phase shifting, consider a complex waveform entering the filter. Fourier showed that a complex, continuous waveform could be decomposed into a series of sinusoidal components.
Figure 4.1: The fundamental analysis tool for a DSP filter is its frequency response plot. This graph shows how the filter amplifies or attenuates certain bands of frequencies.

Figure 4.3: A complex waveform is filtered into a smoothed output. The input and output are decomposed into their Fourier-series components.

Figure 4.4: The same information is plotted as frequency and phase responses.
4.1 First-Order Feed-Forward Filter
    y(n) = a0 x(n) + a1 x(n-1)
You can tell why it's called feed forward: the input branches feed forward into the summer. The signal flows from input to output. There is no feedback from the output back to the input.

Figure 4.6: What kind of filter is this? What are its frequency and phase responses?
Figure 4.7: On the first iteration the input sample is used in the difference equation, producing y(n) = (0.5)(0.0) + (0.5)(0.0) = 0.0; then the input is shifted into the delay register.
Apply the basic test signals from Chapter 1:
1. DC/step
2. Nyquist
3. ½ Nyquist
4. ¼ Nyquist
5. Impulse

For each audio sample that enters there are two phases to the operation:
1. Read phase: The sample is read in and the output is formed using the difference equation and the previous sample in the delay register.
2. Write phase: The delay element is overwritten with the input value; the sample stored in the register is effectively lost.

Start with the DC/step input and begin sequentially applying the samples into the filter shown in Figures 4.7 through 4.10.

Now, observe the amplitude and phase shift of the input versus output—the output amplitude
In a feed-forward filter, the amount of time smearing is equal to the maximum delayed path
through the feed-forward branches.
equals the input. However, there is a one sample delay in the response, causing the leading edge of the step-input to be smeared out by one sample interval. This time smearing is a hallmark of feed-forward filters.

Next, repeat the process for the Nyquist frequency (DC and Nyquist are the easiest, so we'll do them first). The filter behaves in an entirely different way when presented with Nyquist (Figures 4.11 through 4.14).

Now, make your observations about amplitude and phase. The amplitude at Nyquist eventually becomes zero after the one-sample-delay time. The phase is hard to tell because the signal has vanished. Why did the amplitude drop all the way to zero at Nyquist? The answer is one of the keys to understanding digital filter theory: the one-sample delay introduced

Figure 4.8: The process continues with each sample. Here the input 1.0 is combined with the previous input; the second output is y(n) = (0.5)(1.0) + (0.5)(0.0) = 0.5.
Delay elements create phase shifts in the signal. The amount of phase shift depends on the amount of delay as well as the frequency in question.

Figure 4.9: The sequence continues until we observe a repeating pattern; y(n) = (0.5)(1.0) + (0.5)(1.0) = 1.0 is repeating here.
exactly 180 degrees of phase shift at the Nyquist frequency and caused it to cancel out. In the case of Nyquist, the one-sample delay is exactly enough to cancel out the original signal.

Figure 4.11: The Nyquist sequence is applied to the filter. Notice how the delay element has been zeroed out. The output for the first iteration is y(n) = (0.5)(+1.0) + (0.5)(0.0) = 0.5.

Figure 4.10: The input and output sequences for the filter in Figure 4.6 at DC or 0 Hz.
What about ½ and ¼ Nyquist? They are a bit more laborious to work through but worth the effort.
Table 4.1: The manual labor continues as we work through the ½ Nyquist frequency.

Can you see how x(n) becomes x(n-1) for the next row? The delay register holds the one-sample-delayed version of the input x(n). The output on each row is formed from the difference equation.
Figure 4.12: The second iteration at Nyquist produces an output of y(n) = (0.5)(-1.0) + (0.5)(+1.0) = 0.0, and the alternating pattern keeps producing 0.0 from then on.
Next we observe the amplitude and phase relationship from input to output in Figure 4.15. At first it might seem difficult to figure out the sequence, but ½ Nyquist is also encoded with a repeating sequence of four values (0, 1, 0, -1). Work through ¼ Nyquist the same way (Table 4.2). The ¼ Nyquist frequency sequence is longer, repeating every eight samples.

Figure 4.13: Continuing the operation at Nyquist, we see that eventually the output settles to zero.
Table 4.2: ¼ Nyquist input/output.

The output is {…, 0.354, …}. Analysis of the output sequence reveals a phase-shifted and slightly attenuated version of the input; the attenuation is less than at ½ Nyquist. As you can see in Figure 4.16 there is also one sample of time smearing at the leading edge.
Finally, apply the impulse sequence and find the impulse response. The impulse response is the third analysis tool. The impulse response defines the filter in the time domain like the frequency response defines it in the frequency domain. The basic idea is that if you know how the filter reacts to a single impulse you can predict how it will react to a series of impulses of varying amplitudes. Take a Fast Fourier Transform (FFT) of the impulse response and you get the filter's frequency response.
Figure 4.15: The input/output relationship in time at ½ Nyquist. The ½ Nyquist output is shifted by 45 degrees. The leading edge of the first cycle is smeared out by one sample's worth of time.

Figure 4.16: The input/output relationship at ¼ Nyquist.
Table 4.3: The impulse response input/output relationship.

Here you can see that the impulse is flattened and smeared out. It is actually two points on a sin(x)/x-like curve, as shown in Figure 4.17. Now, you can combine all the frequency amplitude and phase values into one big graph, as shown in Figure 4.18.
Figure 4.18: Final frequency and phase response plots for the digital filter in Figure 4.6. Notice that this is a linear frequency plot since ½ Nyquist is halfway across the frequency axis.
4.2 Design a General First-Order Feed-Forward Filter
To illustrate this and crystallize it as a concept, modify your SimpleHPF filter so that two sliders control the a0 and a1 coefficients directly. Also, alter the range of values they can take on (-1.0 to +1.0). Then, experiment with the two values and watch what happens in the analyzer. How to do this is described next.

Open your SimpleHPF project and modify the user interface (UI). First, change the values for the a1 slider to match the new low and high limits. As usual, you right-click inside the slider's bounding box and alter the limits and initial value (shown in bold) as in Table 4.4.
Figure 4.19: Measured frequency and phase response plots for the filter you just analyzed by hand. These are plotted with a linear frequency base.

Figure 4.20: Measured frequency and phase response plots with log frequency base.
Table 4.4: The altered a1 slider properties.

	Slider Property    Value
	Control Name       a1
	Variable Type      float
	Variable Name      m_fSlider_a1
	Low Limit          -1.0
	High Limit         1.0
	Initial Value      0.0

Now add a new slider for the a0 coefficient just below the a1 slider (Table 4.5).

Table 4.5: The new a0 slider properties.

	Slider Property    Value
	Control Name       a0
	Variable Type      float
	Variable Name      m_fSlider_a0
	Low Limit          -1.0
	High Limit         1.0
	Initial Value      1.0
Change the userInterfaceChange() function to directly map the slider values to the coefficient values. The new slider has a control ID of 10; always check your nControlIndex value since it might be different depending on your UI.

	Variable Name                    Index
	-----------------------------------------------
	m_fSlider_a1                     0
	m_fVolume_dB                     1
	m_fSlider_a0                     10
	-----------------------------------------------
bool __stdcall CSimpleHPF::userInterfaceChange(int nControlIndex)
{
	switch(nControlIndex)
	{
		case 0:
			// direct map to the a1 Slider
			m_f_a1_left = m_fSlider_a1;
			m_f_a1_right = m_fSlider_a1;
			break;

		case 1:
			// cook the Volume Slider
			m_fVolume = pow(10.0, m_fVolume_dB/20.0);
			break;

		case 10:
			// direct map to the a0 Slider
			m_f_a0_left = m_fSlider_a0;
			m_f_a0_right = m_fSlider_a0;
			break;

		default:
			break; // do nothing
	}

	return true;
}
You can see the feed-back nature of the filter; the output y(n) is fed back into the summer element. Notice that the feedback coefficient has a negative sign in front of it and the difference equation reflects this with the -b1 y(n-1) term. The negative sign is for mathematical convenience and will make more sense in the next chapter when we take the z transform.

Figure 4.22: First-order feed-back filter block diagram.
4.4 Design a General First-Order Feed-Back Filter
4.4.1 Project FeedBackFilter
Table 4.6: The slider properties.

	Slider Property    Value
	Control Name       b1
	Variable Type      float
	Variable Name      m_fSlider_b1
	Low Limit          -1.0
	High Limit         1.0
	Initial Value      0.0

	Slider Property    Value
	Control Name       a0
	Variable Type      float
	Variable Name      m_fSlider_a0
	Low Limit          -1.0
	High Limit         1.0
	Initial Value      1.0

	float m_f_a0_right;
	float m_f_b1_right;
	float m_f_z1_left;
	float m_f_z1_right;
// END OF USER CODE ---------------------------------------------------------- //
4.4.4 FeedBackFilter.cpp File
• Initialize the internal a0 and b1 variables to match our GUI variables.
• Zero out the delay line elements.

CFeedBackFilter::CFeedBackFilter()
{
	// Added by RackAFX - DO NOT REMOVE
	<SNIP SNIP SNIP>
}
	// Difference Equation
	float yn = m_f_a0_right*xn - m_f_b1_right*yn_1;

	// Populate Delay with current y(n)
	m_f_z1_right = yn;

	// Output sample is y(n)
	pOutputBuffer[1] = yn;

Build and load the plug-in, then test it with the analyzer; the results are shown in Figures 4.24 through 4.26.
Figure 4.24: With a0 = 1.0 and b1 = …, the filter has a high-pass response and has gain above 11 kHz, while the impulse response shows slight ringing.

Figure 4.25: With a0 = 1.0 and b1 = …, the filter has a more extreme high-pass response than before and has more gain, while the impulse response shows considerable ringing.
What happened in that last filter? Why was there no frequency response? The filter became unstable and blew up. It blew up because the b1 coefficient was 1.0, which introduced 100% positive feedback into the loop. The output recirculated through the feedback loop forever, causing the infinite ringing seen in the step and impulse responses. Also notice that as the b1 variable was increased, the gain at Nyquist also increased.
4.5 Observations
In doing these exercises, you have made a lot of progress: you know how to implement both feed-forward and feed-back filters in a plug-in. You also have a good intuitive knowledge about how the coefficients control the filter type. Plus you got to blow up a filter, so that is always fun.
4.5.2 Feed-Forward Filters
• Operate by making some frequencies go to zero; in the case of our first-order filter, the Nyquist frequency went to zero; this is called a zero of transmission, a zero frequency, or just a zero.
• The step and impulse responses show smearing. The amount of smearing is exactly equal to the total amount of delay in the feed-forward branches.
• Don't blow up.
• Are called finite impulse response (FIR) filters because their impulse responses, though they may be smeared, are always finite in length.
4.5.3 Feed-Back Filters
• Operate by making some frequencies go to infinity; with b1 = +1.0 the Nyquist frequency went to infinity, and with b1 = -1.0 DC would go to infinity; this is called a pole of transmission, a pole frequency, or just a pole.
• The step and impulse responses show overshoot and ringing or smearing depending on the coefficients. The amount of ringing or smearing is proportional to the amount of feedback.
• Can blow up (or go unstable) under some conditions.
• Are called infinite impulse response (IIR) filters because their impulse responses can be infinite.

The problem now is that we want to be able to specify the filter in a way that makes sense in audio: a low-pass filter with a cut-off of 100 Hz, or a band-pass filter with a Q of 10 and a particular center frequency.
In Figure 5.1 you can identify the sine and cosine waveforms by their starting position. But what about the sine-like waveform that starts at an arbitrary time in the lower plot? Is it a sine that has been phase shifted backwards or a cosine that has been phase shifted forward? You have to be careful how you answer because sine and cosine have different mathematical properties; their derivatives are not the same and it usually becomes difficult when you try to combine them mathematically.
Euler's equation is shown below:

    e^(jωt) = cos(ωt) + j sin(ωt)

You can see that it includes both sine and cosine functions. The j term is the imaginary number, the square root of -1 (mathematicians use i, but since that represents current in engineering, we use j instead). The j is known as the phase rotation operator; it rotates the phase by 90 degrees.

Suppose you want to shift the phase of a waveform by 180 degrees, thereby inverting it. Mathematically, you can do this by multiplying the waveform by -1, inverting the values of all its points. Suppose you wanted to invert the waveform again (which would bring it back to its original shape): you would multiply by -1 again. But suppose that you only wanted to shift the phase by 90 degrees? Is there a number h that, multiplied by itself, gives -1? In other words, h · h = -1? You don't know what h is, but you can write h = √(-1) (or j).

Figure 5.1: Sine, cosine, and sinusoid signals.
This leads to Equation 5.3:

    j = √(-1)        (5.3)

So, you can perform a conceptual 90-degree phase shift by multiplying a waveform by j. A -90-degree phase shift is accomplished by multiplying by -j. Some other useful identities are j·j = -1 and 1/j = -j.

Euler's equation is complex and contains a real part (cos) and imaginary part (sin), and the addition in the equation is not a literal addition; you can't add real and imaginary numbers directly. You saw this in Figure 5.1 when you plot the sine and cosine in the same plane. So, we will reject using the sin() and cos() functions independently and adopt the complex sinusoid as a prepackaged unit. The reason is partly mathematical: as it turns out, e^(jωt) is simple to deal with mathematically. You only need to learn four rules (Equations 5.4 and 5.5) in addition to Euler's equation.
Euler's equation:

    e^(jωt) = cos(ωt) + j sin(ωt)

    e^(a+b) = e^a · e^b
    e^(a-b) = e^a / e^b        (5.4)

    d/dt e^(at) = a e^(at)
    ∫ e^(at) dt = (1/a) e^(at)        (5.5)

So, what Euler's equation is really describing is a sine and cosine pair of functions, coexisting in two planes that are 90 degrees apart. The word orthogonal is the engineering term for 90 degrees apart.
100 Chapter 5
The two equations in Equation 5.5 demonstrate that
behaves like a polynomial (
even when the argument is a function of time,
. Equation 5.5 also shows how simply it behaves
in calculusÑmultiple derivatives or integrations are done by simply multiplying the argumentÕs
constant (a or 1/a) over and over. Before we leave this topic, make sure you remember how
to deal with complex numbersÑyouÕll need it to understand where the frequency and phase
responses come from.
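If you want to check these identities numerically, a small C++ test (my own illustration, not from the book) makes the phase rotation operator concrete:

	#include <complex>
	#include <cstdio>
	#include <cmath>

	int main()
	{
		const std::complex<double> j(0.0, 1.0);   // the phase rotation operator
		const double PI = 3.141592653589793;

		// j*j = -1: two 90-degree rotations invert the signal
		std::complex<double> jj = j*j;
		printf("j*j = (%f, %f)\n", jj.real(), jj.imag());

		// Euler: e^(jw) = cos(w) + j sin(w), checked at w = pi/2
		double w = PI/2.0;
		std::complex<double> e = std::exp(j*w);
		printf("e^(j*pi/2) = (%f, %f), cos(w) = %f, sin(w) = %f\n",
		       e.real(), e.imag(), cos(w), sin(w));

		return 0;
	}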
5.2 Complex Math Review
Because a complex number has both real and imaginary parts, it cannot be plotted on a single axis. In order to plot a complex number, you need two axes: a real and an imaginary axis. The two axes are aligned at right angles, just like the x- and y-axes of a Cartesian plot.
notation is simple. Starting with a complex number in the form A + jB, you can find the resulting radius and angle from Equations 5.6 and 5.7:

    radius = sqrt(A² + B²)        (5.6)
    angle = arctan(B/A)        (5.7)

Equation 5.9 shows how to extract the magnitude and phase from a transfer function:

    |H(ω)| = |numerator| / |denominator|
    Arg(H(ω)) = Arg(numerator) - Arg(denominator)        (5.9)

Figure 5.3: Plotting 2 + j3 in polar form.

The frequency response plots of filters are actually magnitude responses of a complex function called the transfer function of the filter. The phase response plots are actually argument responses of this function. The transfer function is complex because it contains complex numbers.
5.3 Time Delay as a Math Operator
The next piece of DSP theory you need to understand is the concept of time delay as a mathematical operator. How does a delay of n seconds change the complex sinusoid equation? Since positive time goes in the positive direction, a delay of n seconds is a shift of -n seconds. In the complex sinusoid equation, you would then replace t with (t - n). In other words, any point on the delayed curve is the same as the nondelayed curve minus n seconds. Therefore, the delayed sinusoid is:

    Delayed sinusoid = e^(jω(t - n))        (5.10)

But, by using the polynomial behavior of e and running Equation 5.4 in reverse, you can rewrite it as shown in Equation 5.11:

    e^(jω(t - n)) = e^(jωt) · e^(-jωn)        (5.11)

It's a subtle mathematical equation but it says a lot: if you want to delay a complex sinusoid by n seconds, multiply it by e^(-jωn). This allows us to express time delay as a mathematical operator.

In the last two sections you've learned that phase rotation and time delay can both be expressed as mathematical operators.

Time delay can be expressed as a mathematical operator by multiplying the signal to be delayed by n seconds by e^(-jωn). This is useful because e^(-jωn) is not dependent on the time variable.

Figure 5.4: A complex sinusoid x(t) and another one delayed by n seconds.
5.4 First-Order Feed-Forward Filter Revisited
Being able to express delay as the mathematical operation of multiplication by e^(-jωn) means you can take the block diagram and difference equation for a DSP filter and apply a sinusoid x(t) = e^(jωt) directly, rather than having to plug in sequences of samples as you did in Chapter 4. Then, you can see what comes out of the filter as a mathematical expression and evaluate it for different values of ω (with ω = 2πf, f in Hz) to find the frequency and phase responses directly, rather than having to wait and see what comes out and then try to interpret it. Pushing x(t) = e^(jωt) through the first-order feed-forward structure gives

    y(t) = x(t) · a0 + x(t) · a1 e^(-jωT)
         = x(t) (a0 + a1 e^(-jωT))

where T is the one-sample delay time. Therefore the transfer function is

    H(ω) = y(t)/x(t) = a0 + a1 e^(-jωT)        (5.14)

Figure 5.5: Block diagram of a first-order feed-forward filter with signal analysis.
What is so significant about this is that the transfer function is not dependent on time, even though the input and output signals are functions of time. The transfer function (Equation 5.14) is only dependent on frequency, ω. Notice that the transfer function is complex.

But what values of ω are to be used in the evaluation? We know that ω = 2πf, but do we really care about the frequency in Hz? In Chapter 4 when you analyzed the same filter, you did not use any particular sample rate; you used fractions of the Nyquist frequency instead. This is called normalized frequency and is usually the way you want to proceed in DSP analysis (Figure 5.8); effectively we set T = 1 so that ωT becomes just ω. And it makes sense too. If

The transfer function of the filter is the ratio of output to input. The frequency response of the filter is the magnitude of the transfer function evaluated at different frequencies across its spectrum. The phase response of the filter is the argument (or angle) of the transfer function evaluated at different frequencies across its spectrum.

To produce the frequency and phase response graphs, you evaluate the function for various values of ω, then find the magnitude and argument at each frequency. The evaluation uses Euler's equation to replace the e^(-jω) term and produce the real and imaginary components.
you take an audio file and reverse it in time, then run it through a low-pass filter, the same frequency filtering still occurs.

For filter evaluation, ω varies over a 0 to 2π radians/second range, and one way to think about this range is as positive frequencies from 0 to Nyquist and negative frequencies from Nyquist back around to 0 (Figure 5.9).

Figure 5.6: The classic way of defining the frequency axis.

Figure 5.7: The classic way of showing a frequency response plot only shows the positive portion.

Figure 5.9: One way to divide the 2π range of frequencies includes both positive and negative frequencies.
5.4.2 Frequencies Above and Below Nyquist

Figure 5.10: Mapping the 0 to 2π range of frequencies across the 0 to fs range.
Use the filter coefficients a0 = a1 = 0.5. You can use Table 5.1 to help with the evaluation. Evaluate at the following frequencies:
• DC: 0
• Nyquist: π
• ½ Nyquist: π/2
• ¼ Nyquist: π/4

Evaluation is a two-step process for each frequency:
1. Use Euler's equation to convert the e^(-jω) terms into real and imaginary components.
2. Find the magnitude and argument of the complex equation (a numerical cross-check appears in the sketch below).
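A few lines of C++ (my own sketch, not from the book) perform the same two steps numerically for the coefficients a0 = a1 = 0.5, so you can cross-check the hand calculations that follow:

	#include <complex>
	#include <cstdio>
	#include <cmath>

	// Evaluate H(w) = a0 + a1*e^(-jw) and print |H| and Arg(H) in degrees.
	int main()
	{
		const std::complex<double> j(0.0, 1.0);
		const double PI = 3.141592653589793;
		const double a0 = 0.5, a1 = 0.5;
		const double freqs[] = {0.0, PI/4.0, PI/2.0, PI};   // DC, 1/4, 1/2 Nyquist, Nyquist

		for(double w : freqs)
		{
			std::complex<double> H = a0 + a1*std::exp(-j*w);
			printf("w = %5.3f  |H| = %5.3f  Arg(H) = %6.1f deg\n",
			       w, std::abs(H), std::arg(H)*180.0/PI);
		}
		return 0;   // prints magnitudes 1.0, 0.92, 0.71, 0.0
	}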
5.5.1 DC (0 Hz)
    H(ω) = 0.5 + 0.5(cos(0) - j sin(0))        (5.16)
         = 0.5 + 0.5(1 - j0)
         = 1.0

Figure 5.11: First-order feed-forward block diagram.

Table 5.1: Sine and cosine function evaluations at DC, ¼ Nyquist, ½ Nyquist, ¾ Nyquist, and Nyquist.

    |H(ω)| = sqrt[(1 + j0)(1 - j0)] = 1.0
    Arg(H(ω)) = arctan(0/1) = 0        (5.17)

Compare these mathematical results (Equations 5.16 and 5.17) with the graphical ones from the last chapter (Figure 5.12).
5.5.2 Nyquist (π)

    H(ω) = 0.5 + 0.5(cos(π) - j sin(π))        (5.18)
         = 0.5 + 0.5(-1 - j0)
         = 0

    |H(ω)| = sqrt[(0 + j0)(0 - j0)] = 0
    Arg(H(ω)) = arctan(0/0)        (5.19)

Figure 5.12: The graphical results show the same information. The magnitude is 1.0 and the phase shift is 0.
The inverse tangent argument is 0/0 and the phase, or Arg(H), is defined to be 0 under this condition. The C++ function atan2(im, re) performs the inverse tangent function; it will also evaluate to 0 in this case. Now, compare our results to the last chapter's graphical version (Figure 5.13).
5.5.3 ½ Nyquist (π/2)

    H(ω) = 0.5 + 0.5(cos(π/2) - j sin(π/2))        (5.20)
         = 0.5 + 0.5(0 - j1)
         = 0.5 - j0.5

    |H(ω)| = sqrt[(0.5 - j0.5)(0.5 + j0.5)] = sqrt(0.25 + 0.25) = 0.707
    Arg(H(ω)) = arctan(-0.5/0.5) = -45 degrees        (5.21)

Compare this to the last chapter's graphical results (Figure 5.14); the magnitude is 0.707 with a phase shift of -45 degrees, and the results agree.
5.5.4 ¼ Nyquist (π/4)

Figure 5.13: The graphical results show the same information at Nyquist: the magnitude is 0 and there is no phase shift since there is nothing there to shift.

    H(ω) = 0.5 + 0.5(cos(π/4) - j sin(π/4))        (5.22)
         = 0.5 + 0.5(0.707 - j0.707)
         = 0.853 - j0.353

    |H(ω)| = sqrt[(0.853 - j0.353)(0.853 + j0.353)] = sqrt(0.728 + 0.125) = 0.923
    Arg(H(ω)) = arctan(-0.353/0.853) = -22.5 degrees        (5.23)

Compare to the last chapter's graphical results (Figure 5.15); you can see how much more exact the direct evaluation is.
Figure 5.15: Graphical results from the last chapter at ¼ Nyquist.

Figure 5.16: The final composite frequency and phase response plots show the same results as the last chapter, but with a lot less work.
Table 5.2: The magnitude and angle of H(ω) from DC to Nyquist.

Figure 5.17: The positive frequencies map to the upper half of the unit circle.

You don't have to keep track of the real and imaginary parts. The evaluation at ω = π/4 is plotted on the curve. The circle this arc is laying over would have a radius of 1.0 and is called the unit circle. If you evaluate H(ω) over the negative frequencies that correspond to the lower half of the circle, you get the mirror image of the response.

Figure 5.18: The negative frequencies map to the lower half of the unit circle.
5.7 The z Substitution
Equation 5.24 defines the substitution:

    z = e^(jω)        (5.24)

This is just a substitution right now and nothing else. Making the substitution in Equation 5.24 and noting the resulting transfer function is now a function of z, not ω, we can write it like Equation 5.25:

    H(z) = a0 + a1 z^-1        (5.25)

The reason this is useful is that it turns the transfer function into an easily manipulated polynomial in z. In this case, the polynomial is a first-order polynomial (the highest exponent absolute value is 1) and this is the real reason the filter is named a first-order filter: it's the polynomial order of the transfer function.
5.8 The z Transform
The z substitution does a good job at simplifying the underlying polynomial behavior of the transfer function.

The order of a filter is the order of the polynomial in the transfer function that describes it. The order of the polynomial is the maximum absolute exponent value found in the equation.
Instead of thinking of the sample x(n) delayed by one sample, think of the whole signal delayed by one sample. The delayed signals are the result of the whole signal being multiplied by z^-1 terms:
• x(n) becomes X(z)
• x(n-1) becomes X(z) z^-1
• x(n-2) becomes X(z) z^-2
• x(n+1) becomes X(z) z^+1

Figure 5.19: Our book-keeping rules shown graphically.

The z transform changes a sequence of samples in n to a sequence of samples in z by replacing each sample x(n) with x(n) z^-n. This works because multiplication by z^-1 represents the operation of delay or time shift. The resulting transformed sequence is now a function of the complex frequency z, therefore it transforms things from the time domain into the complex frequency domain.
You can see that this concept relies on the ability to express delay as a mathematical operator. It not only lets us transform difference equations, it also lets us transform a whole signal x(n) based on Equation 5.26:

    X(z) = Σn x(n) z^-n        (5.26)
5.9 The z Transform of Signals
Remember, x(n) is the sequence of samples just like the ones you used for analysis. Figure 5.20 shows an example. This simple, finite-length signal consists of five samples. The remaining zero samples don't need to be counted.

The sequence x(n) could also be written {x(0), x(1), x(2), x(3), x(4)}, so using Equation 5.26 we transform x(n) and write Equation 5.27:

    X(z) = 0 + 0.25 z^-1 + 0.5 z^-2 + 0.75 z^-3 + …        (5.27)

You could read Equation 5.27 as follows: "The whole signal x(n) consists of a sample with an amplitude of 0 at time 0 followed by a sample with an amplitude of 0.25 one sample later and a sample with an amplitude of 0.5 two samples later and a sample with an amplitude of 0.75 three samples later and . . ." This should shed light on the fact that the transform really involves the book-keeping of sample locations in time and that the result is a polynomial. You can multiply and divide this signal with other signals by using polynomial math. You can mix two signals by linearly combining the polynomials.
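As a tiny illustration of that polynomial book-keeping (my own toy numbers, not from the text): if one signal is x1(n) = {1.0, 0.5} and another is x2(n) = {0.0, 0.25}, then X1(z) = 1.0 + 0.5 z^-1 and X2(z) = 0.25 z^-1. Mixing the two signals is just the sum of the polynomials, X1(z) + X2(z) = 1.0 + 0.75 z^-1, which is exactly the sample-by-sample sum {1.0, 0.75}.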
Figure 5.20: A simple signal for analysis.

Figure 5.21: The DC signal is infinite in length.

5.10 The z Transform of Difference Equations

The first-order feed-forward difference equation is:

    y(n) = a0 x(n) + a1 x(n-1)
Taking the z transform:

    Y(z) = a0 X(z) + a1 X(z) z^-1

and forming the ratio of output to input gives the transfer function:

    H(z) = Y(z)/X(z) = a0 + a1 z^-1

This is a really useful result; you got the final transfer function in just a handful of steps.
5.11 The z Transform of an Impulse Response
The z transform of a difference equation results in the transfer function. But what if you don't have the difference equation? Suppose you only have a black box that performs some kind of DSP algorithm and you'd like to figure out the transfer function, evaluate it, and plot the frequency and phase responses. It can be done without knowing the algorithm or any of its internals:

The z transform of the impulse response h(n) is the transfer function H(z) as a series expansion. Evaluate the transfer function to plot the frequency and phase responses.
Try this on the first-order feed-forward filter we've been working on; you already have the impulse response from the last chapter (Figure 5.22). The impulse response is {0.5, 0.5}. Applying the z transform yields Equation 5.32:

    H(z) = 0.5 + 0.5 z^-1        (5.32)

Notice that this is the identical result as taking the z transform of the difference equation, and that the filter coefficients (0.5, 0.5) are the impulse response {0.5, 0.5}.
5.12 The Zeros of the Transfer Function

When we used the coefficients (0.5, 0.5) we produced a first-order polynomial in z. One useful thing you can do with a polynomial is find its roots, that is, the values of the dependent variable that make the polynomial become zero. You can do this with the transfer function:
    H(z) = 0.5 + 0.5 z^-1 = 0.5 (1 + z^-1)
You can find the zero by inspection: it's the value of z that forces H(z) to be 0, and in this case it is z = -1.0. But what does it mean to have a zero at z = -1.0?
This is where the concept of evaluating z = e^(jω) comes into play. When you did that and plotted the various points, noting they were making a unit circle in the complex plane, you were actually working in the z-plane, that is, the plane of z = e^(jω). The location of the zero is -1.0 + j0, purely on the real axis and at Nyquist. In Figure 5.23 the zero is shown as a small circle sitting at the location -1.0.

There are several reasons to plot the zero frequencies. First, you can design a filter directly in the z-plane by deciding where you want to place the zero frequencies first, then figuring out the transfer function that will give you those zeros. Secondly, plotting the zeros gives you a quick way to estimate the frequency response.
5.13 Estimating the Frequency Response

An interesting property of the z-plane and z transform is that you can measure the frequency response graphically: for each evaluation frequency on the unit circle, the magnitude is proportional to the length of the vector drawn from that point to the zero.
5.14 Filter Gain Control
The last thing you need to do is remove the overall gain factor from the transfer function so that the overall filter gain can be controlled separately. This requires re-working the transfer function a bit. The idea is to pull out the a0 variable as a multiplier for the whole function. This way, it behaves like a volume knob, gaining the whole filter up or down. The way you do it is to normalize the filter by a0 (Equation 5.34):

    H(z) = a0 + a1 z^-1 = a0 (1 + (a1/a0) z^-1)        (5.34)

By normalizing by a0 and using the a1/a0 ratio you can produce a transfer function that looks basically the same in the polynomial but pulls a0 out as a scalar multiplier, a gain control. Where is the zero of the new transfer function in Equation 5.35?

    H(z) = a0 (1 + (a1/a0) z^-1)        (5.35)

If z = -a1/a0 then the function will become 0 regardless of the value of a0. This transfer function has a zero at z = -a1/a0. If you plug our values (0.5, 0.5) in, the ratio is 1.0 and the zero is at z = -1.0, exactly as before.
Figure 5.25: The first-order feed-back filter.

Figure 5.26: Pushing the input x(n) through the algorithm produces the z transform.

From here on we will speed things up by applying each analysis technique to the other algorithms. For example, we will dispense with the evaluation of e^(jωt) terms and start off directly in the z transform of the difference equations.
5.15 First-Order Feed-Back Filter Revisited
The difference equation is:

    y(n) = a0 x(n) - b1 y(n-1)

Step 1: Take the z transform of the difference equation

This can be done by inspection, using the rules from Section 5.8 (Figure 5.26). Therefore, the z transform is shown in Equation 5.37:

    Y(z) = a0 X(z) - b1 Y(z) z^-1        (5.37)
Step 2: Fashion the difference equation into a transfer function

Now apply some algebra to convert the transformed difference equation to H(z). The process is always the same: separate the X(z) and Y(z) variables, then form their quotient (Equation 5.38).

Separate variables:

    Y(z) + b1 Y(z) z^-1 = a0 X(z)
    Y(z) (1 + b1 z^-1) = a0 X(z)

Form the transfer function:

    H(z) = Y(z)/X(z) = a0 / (1 + b1 z^-1)        (5.38)

Step 3: Factor out a0 as the scalar gain coefficient

In this case, this step is simple since pulling a0 out is trivial, as in Equation 5.39. However, in more complex filters this requires making substitutions as you did in the last section.

    H(z) = a0 / (1 + b1 z^-1) = a0 [1 / (1 + b1 z^-1)]        (5.39)

5.16 The Poles of the Transfer Function

The next step in the sequence is to do a quick estimation of the frequency response. For the simple first-order case, finding the poles is done by inspection.

When the denominator of the transfer function is zero, the output is infinite. The complex frequency where this occurs is the pole frequency or pole.
Examining the transfer function, we can find the single pole in Equation 5.40:

    H(z) = a0 / (1 + b1 z^-1) = a0 [z / (z + b1)]        (5.40)

By rearranging the transfer function, you can see that the denominator will be zero when z = -b1, and so there is a pole at z = -b1. Following the same reasoning as in Equation 5.35, you can write Equation 5.41:

    Pole at z = -b1        (5.41)

The poles are plotted in the z-plane in the same manner as the zeros but you use an X to indicate the pole frequency. In this filter the pole sits at -b1 + j0 and so it is a real pole, and there is a trivial zero at z = 0 (Figure 5.28). In the future, we will ignore the trivial zeros or poles.

A pole or zero at z = 0 is trivial and can be ignored for the sake of analysis since it has no effect on the frequency response.
Thus, the mechanism is the same as for the one-zero case, except you take the inverse of the length of the vector. This means that as you near the pole, the vector becomes shorter, but the amplitude becomes larger: exactly the opposite of the zero case. You can see this in Figure 5.29.

Figure 5.27: The frequency response of the first-order feed-back filter with these coefficients.

Figure 5.28: The pole is plotted in the z-plane along with the trivial zero.
To work an example, choose a0 = 1.0 and b1 = 0.9:

    H(z) = a0 / (1 + b1 z^-1)        (5.42)

    H(z) = 1 / (1 + 0.9 z^-1)
    H(ω) = 1 / (1 + 0.9 e^(-jω))        (5.43)

Figure 5.29: Estimating the frequency response of the first-order feed-back design.
5.16.1 DC (0 Hz)
    H(ω) = 1 / (1 + 0.9 e^(-jω))
         = 1 / (1 + 0.9[cos(ω) - j sin(ω)])
         = 1 / (1 + 0.9[cos(0) - j sin(0)])        (5.45)
         = 1 / (1 + 0.9[1 - j0])
         = 1 / 1.9
         = 0.526

    |H(ω)| = 0.526
    Arg(H(ω)) = arctan(0/0.526) = 0        (5.46)
5.16.2 Nyquist (π)

    H(ω) = 1 / (1 + 0.9[cos(ω) - j sin(ω)])
         = 1 / (1 + 0.9[cos(π) - j sin(π)])        (5.47)
         = 1 / (1 + 0.9[-1 - j0])
         = 1 / 0.1
         = 10 + j0

    |H(ω)| = 10
    Arg(H(ω)) = arctan(0/10.0) = 0        (5.48)
5.16.3 ½ Nyquist (π/2)

    H(ω) = 1 / (1 + 0.9[cos(ω) - j sin(ω)])
         = 1 / (1 + 0.9[cos(π/2) - j sin(π/2)])        (5.49)
         = 1 / (1 + 0.9[0 - j1])
         = 1 / (1 - j0.9)

    |H(ω)| = |1| / |1 - j0.9| = 1 / sqrt(1 + 0.81) = 0.743
    Arg(H(ω)) = arctan(0/1) - arctan(-0.9/1) = +42 degrees        (5.50)
130 Chapter 5
5.16.4 ¼ Nyquist (
v
)
5
1
0.9
j
1
5
1
0.9
3
cos(
v
)
2
j
sin(
v
)
4

5
/4)
/4)
1
0.636
2
j
0.636

5

2
j
0.636

0
H
(
v
)
0
5
0
1
0
1.636
2
j
0.636
0

5
0.636

5
Ar
Ar
Ar
(0/1)
0.636/1.636)
Make a special note about how we have to handle the magnitude of a fraction with numerator
and denominator. You need to use the two equations in Equation 5.9 to deal with this. The
main issue is that the phase is the difference of the Arg(numerator)
Arg(denominator). If
the numerator was a complex number instead of 1.0, you would need to take the magnitude
of it separately then divide. The Þ nal composite frequency/phase response plot is shown in
5.31 ). Finding the impulse response by hand is going to be tedious. There
Figure 5.30: The final frequency and phase response plots.

Figure 5.31: The impulse response of the filter in question.

In Figure 5.32 we observe excellent agreement with our evaluation; the response is +20 dB at Nyquist (a gain of 10) and the phase is about 45 degrees at ω = π/2. You followed six steps in the evaluation of this filter:
1. Take the z transform of the difference equation.
2. Fashion the difference equation into a transfer function.
3. Factor out a0 as the scalar gain coefficient.
4. Estimate the frequency response.
5. Direct evaluation of the frequency response.
6. Take the z transform of the impulse response as a final check.
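Since the text notes that finding the feed-back filter's impulse response by hand is tedious, here is a short sketch (my own, not from the book) that simply runs the difference equation y(n) = a0 x(n) - b1 y(n-1) with a0 = 1.0, b1 = 0.9 on an impulse:

	#include <cstdio>

	int main()
	{
		const double a0 = 1.0, b1 = 0.9;
		double z1 = 0.0;                       // the y(n-1) storage register

		for(int n = 0; n < 8; n++)
		{
			double xn = (n == 0) ? 1.0 : 0.0;  // impulse input
			double yn = a0*xn - b1*z1;         // difference equation
			z1 = yn;                           // store y(n) for the next sample
			printf("h(%d) = %f\n", n, yn);
		}
		return 0;                              // prints 1, -0.9, 0.81, -0.729, ...
	}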
5.17 Second-Order Feed-Forward Filter
Analysis of the second-order feed-forward filter proceeds much like the first-order filters you've seen so far, but there's a bit more math we have to deal with. The topology of a second-order feed-forward filter is shown in the block diagram in Figure 5.33.

Figure 5.32: RackAFX's frequency and phase responses, which agree with the evaluation in Figure 5.30; notice these are plotted in dB rather than raw magnitudes.

Figure 5.33: Second-order feed-forward filter.
The difference equation is as follows:

    y(n) = a0 x(n) + a1 x(n-1) + a2 x(n-2)

Steps 1 & 2: Take the z transform of the difference equation and fashion it into a transfer function

We can combine steps to save time. The z transform can be taken by inspection, using the rules from Section 5.8, giving Equation 5.54:

    Y(z) = a0 X(z) + a1 X(z) z^-1 + a2 X(z) z^-2        (5.54)

Form the transfer function:

    H(z) = output/input = Y(z)/X(z) = a0 + a1 z^-1 + a2 z^-2

Step 3: Factor out a0 as the scalar gain coefficient:

    H(z) = a0 (1 + (a1/a0) z^-1 + (a2/a0) z^-2)
Step 4: Estimate the frequency response

First, this is a pure feed-forward filter, so you know there will only be nontrivial zeros; there are no poles to deal with. This transfer function is a second-order function; the highest-order component is the z^-2 term. In fact, this is a quadratic equation. In order to find the poles or zeros, you need to first factor this equation and find the roots. The problem is that this is a complex equation, and the roots could be real, imaginary, or a combination of both. The quadratic

    H(z) = a0 (1 + (a1/a0) z^-1 + (a2/a0) z^-2)
can be factored as

    H(z) = a0 (1 - Z1 z^-1)(1 - Z2 z^-1)

where the two roots are a complex conjugate pair:

    Z1 = R e^(+jθ)
    Z2 = R e^(-jθ)        (5.56)

This analysis results in two zeros, Z1 and Z2, located at complex conjugate positions in the z-plane. Figure 5.34 shows an arbitrary conjugate pair of zeros plotted in the z-plane. You can see how they are at complementary angles to one another with the same radii. Remember, any point in the z-plane can be written as Re^(jθ), and the outer rim of the unit circle is where the transfer function is evaluated for the frequency response.

Figure 5.34: A complementary pair of zeros in the z-plane.
Multiplying out the factored form and comparing with the original polynomial gives Equation 5.57:

    H(z) = a0 (1 + (a1/a0) z^-1 + (a2/a0) z^-2)
         = a0 (1 - 2R cos(θ) z^-1 + R² z^-2)        (5.57)

Equation 5.57 shows how the coefficients a1 and a2 create the zeros at the locations Re^(+jθ) and Re^(-jθ). Once again you see that the coefficients are what position the zeros. For this example the coefficients are:

    a0 = 1.0
    a1 = -1.27
    a2 = 0.81        (5.58)

Figure 5.35: The complementary pair of zeros in the z-plane at radii 0.9 and angles of ±45 degrees.

    R = sqrt(a2) = sqrt(0.81) = 0.9
    θ = arccos(-a1 / 2R) = arccos(0.705) = 45 degrees
Evaluating the frequency response of the complex pair is similar to before, but with an extra step. When estimating the frequency response with more than one zero:
• Locate each evaluation frequency on the outer rim of the unit circle.
• Draw a line from the point on the circle to each zero and measure the length of these vectors. Do it for each evaluation frequency.
• For each evaluation frequency, the magnitude of the transfer function is the product of the lengths of the two vectors to the zero pair.

Mathematically, this last rule looks like Equation 5.59:

    |H(ω)| = Π |U_i(ω)|        (5.59)
             i

where U_i(ω) is the vector from the point e^(jω) on the unit circle to the i-th zero.

Follow the progression in Figures 5.36 through 5.39 through the four evaluation frequencies, ending with the composite response in Figure 5.40.
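A quick C++ sketch of the same geometric rule (my own, not from the book), using the conjugate zero pair at radius 0.9 and ±45 degrees:

	#include <complex>
	#include <cstdio>
	#include <cmath>

	// |H(w)| for a feed-forward filter (a0 = 1) is the product of the distances
	// from the point e^(jw) on the unit circle to each zero.
	int main()
	{
		const std::complex<double> j(0.0, 1.0);
		const double PI = 3.141592653589793;
		const double R = 0.9, theta = PI/4.0;
		const std::complex<double> z1 = R*std::exp( j*theta);
		const std::complex<double> z2 = R*std::exp(-j*theta);
		const double freqs[] = {0.0, PI/4.0, PI/2.0, PI};

		for(double w : freqs)
		{
			std::complex<double> p = std::exp(j*w);          // point on the unit circle
			double mag = std::abs(p - z1)*std::abs(p - z2);  // product of vector lengths
			printf("w = %5.3f  |H| = %5.2f\n", w, mag);
		}
		return 0;   // about 0.54, 0.13, 1.29, 3.08: compare the direct evaluation below
	}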
Figure 5.36: The magnitude response at 0 Hz (DC) is the product of the two vectors drawn to each zero, or 0.49.

Figure 5.37: At ¼ Nyquist, the two vectors multiply out to 0.14.

Figure 5.38: At ½ Nyquist, the response reaches 1.26 as the vectors begin to lengthen.

Figure 5.39: At Nyquist, the product of the two vectors reaches about 3.1.
Step 5: Direct evaluation

Now you can evaluate the filter the same way as before, using Euler's equation to separate the real and imaginary components from the transfer function. Evaluate at the following frequencies:
• DC: 0
• Nyquist: π
• ½ Nyquist: π/2
• ¼ Nyquist: π/4

    H(z) = 1 - 1.27 z^-1 + 0.81 z^-2        (5.60)

Figure 5.40: The combined response reveals a band-stop (notch) type of filter. The minimum amplitude occurs at the zero frequency, where the vector product is the lowest; this is where the smallest vector is obtained when evaluating on the positive frequency arc.
Apply Euler's equation and evaluate for each of our four frequencies in Equations 5.61 through 5.68.

5.17.1 DC (0 Hz)

    H(ω) = 1 - 1.27[cos(ω) - j sin(ω)] + 0.81[cos(2ω) - j sin(2ω)]
         = 1 - 1.27[cos(0) - j sin(0)] + 0.81[cos(0) - j sin(0)]        (5.61)
         = 1 - 1.27[1 - j0] + 0.81[1 - j0]
         = 0.54

    |H(ω)| = 0.54
    Arg(H(ω)) = arctan(0/0.54) = 0        (5.62)
5.17.2 Nyquist (π)

    H(ω) = 1 - 1.27[cos(ω) - j sin(ω)] + 0.81[cos(2ω) - j sin(2ω)]
         = 1 - 1.27[cos(π) - j sin(π)] + 0.81[cos(2π) - j sin(2π)]        (5.63)
         = 1 - 1.27[-1 - j0] + 0.81[1 - j0]
         = 3.08

    |H(ω)| = 3.08
    Arg(H(ω)) = arctan(0/3.08) = 0        (5.64)
5.17.3 ½ Nyquist (π/2)

    H(ω) = 1 - 1.27[cos(π/2) - j sin(π/2)] + 0.81[cos(π) - j sin(π)]
         = 1 - 1.27[0 - j1] + 0.81[-1 - j0]
         = 0.19 + j1.27

    |H(ω)| = sqrt(0.19² + 1.27²) = 1.28
    Arg(H(ω)) = arctan(1.27/0.19) = +81.5 degrees

5.17.4 ¼ Nyquist (π/4)

    H(ω) = 1 - 1.27[cos(π/4) - j sin(π/4)] + 0.81[cos(π/2) - j sin(π/2)]
         = 1 - 1.27[0.707 - j0.707] + 0.81[0 - j1]
         = 0.11 + j0.08

    |H(ω)| = sqrt(0.11² + 0.08²) = 0.14
    Arg(H(ω)) = arctan(0.08/0.11) = +36 degrees
Step 6: Take the z transform of the impulse response

The impulse response is {1.0, -1.27, 0.81}, and its z transform is H(z) = 1 - 1.27 z^-1 + 0.81 z^-2. This is exactly what we expect. This should help you understand two more very important points.

In a pure feed-forward filter:
The coefficients {a0, a1, a2, …} are the impulse response, and the transfer function is the z transform of the coefficients.

Figure 5.41: Plots using RackAFX's z-transform of the impulse response.

Finally, I'll use RackAFX to verify the frequency and phase response from our analysis by using a plug-in I wrote for second-order feed-forward filters. Figure 5.41 shows the frequency and phase response plots.
5.18 Second-Order Feed-Back Filter
Analysis of the second-order feed-back filter starts with the block diagram and difference equation. Figure 5.42 shows the topology of a second-order feed-back filter.

The difference equation is as follows:

    y(n) = a0 x(n) - b1 y(n-1) - b2 y(n-2)
Steps 1 to 3: Take the z transform of the difference equation, fashion it into a transfer function, and factor out a0

    Y(z) = a0 X(z) - b1 Y(z) z^-1 - b2 Y(z) z^-2

Separate variables:

    Y(z) (1 + b1 z^-1 + b2 z^-2) = a0 X(z)

Form the transfer function:

    H(z) = Y(z)/X(z) = a0 / (1 + b1 z^-1 + b2 z^-2)

Factor out a0:

    H(z) = a0 [1 / (1 + b1 z^-1 + b2 z^-2)]        (5.71)

Figure 5.42: Second-order feed-back filter.
Step 4: Estimate the frequency response

Notice that the feed-back filter block diagrams have all the b coefficients negated. This puts the quadratic denominator in the final transfer function in Equation 5.71 in the same polynomial form as the numerator of the feed-forward transfer function. Thus, you can use the same logic to find the poles of the filter; since the coefficients b1 and b2 are real values, the poles must be complex conjugates of each other, as shown in Equation 5.72:

    H(z) = a0 / (1 + b1 z^-1 + b2 z^-2)
         = a0 / [(1 - P1 z^-1)(1 - P2 z^-1)]

where

    P1 = R e^(+jθ)
    P2 = R e^(-jθ)

and

    1 + b1 z^-1 + b2 z^-2 = 1 - 2R cos(θ) z^-1 + R² z^-2        (5.72)
This results in two poles, P1 and P2, located at complex conjugate positions in the z-plane. Figure 5.43 shows an arbitrary conjugate pair of poles plotted in the z-plane. You can see how they are at complementary angles to one another with the same radii.

To estimate we'll need some coefficients to test with. Use the following: b1 = -1.34 and b2 = 0.902. Now, calculate the location of the poles from Equation 5.72:

    R = sqrt(b2) = sqrt(0.902) = 0.95        (5.73)
    θ = arccos(-b1 / 2R) = arccos(0.707) = 45 degrees
Figure 5.44 shows the complex conjugate pair of poles plotted in the z-plane at angles of ±45 degrees and radii of 0.95. Evaluating the frequency response of the complex pair is similar to before, but with an extra step. When estimating the frequency response with more than one pole:
• Locate each evaluation frequency on the outer rim of the unit circle.
• Draw a line from the point on the circle to each pole and measure the length of these vectors. Do it for each evaluation frequency.
• For each evaluation frequency, the magnitude of the transfer function is the product of the inverse lengths of the two vectors to the pole pair.

Figure 5.43: A complementary pair of poles in the z-plane.

Figure 5.44: The poles of the filter.

Mathematically, this last rule looks like Equation 5.74:

    |H(ω)| = Π 1/|V_i(ω)|        (5.74)
             i

where V_i(ω) is the vector from the point e^(jω) on the unit circle to the i-th pole.

Thus, the process is the same as with the zeros, except that you take the inverse of the length of the vector to the pole.
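The same kind of quick numerical sketch used for the zero pair works here too (my own illustration, assuming a0 = 1 and the pole pair at radius 0.95 and ±45 degrees given above):

	#include <complex>
	#include <cstdio>
	#include <cmath>

	// |H(w)| is the product of the inverse distances from e^(jw) to each pole.
	int main()
	{
		const std::complex<double> j(0.0, 1.0);
		const double PI = 3.141592653589793;
		const double R = 0.95, theta = PI/4.0;
		const std::complex<double> p1 = R*std::exp( j*theta);
		const std::complex<double> p2 = R*std::exp(-j*theta);
		const double freqs[] = {0.0, PI/4.0, PI/2.0, PI};

		for(double w : freqs)
		{
			std::complex<double> pt = std::exp(j*w);
			double mag = 1.0/(std::abs(pt - p1)*std::abs(pt - p2));
			printf("w = %5.3f  |H| = %6.2f (%6.1f dB)\n", w, mag, 20.0*log10(mag));
		}
		return 0;   // about +5 dB at DC, +23 dB at 1/4 Nyquist, -2.6 dB at 1/2 Nyquist, -10.2 dB at Nyquist
	}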
For feed-forward filters:
• The closer the evaluation frequency is to the zero, the more attenuation it receives.
• If the zero is on the unit circle, the magnitude goes to zero at that point.

For feed-back filters:
• The closer the evaluation frequency is to the pole, the more gain it receives.
• If a pole is on the unit circle, the magnitude goes to infinity at that point.

Follow the estimation through Figures 5.45 through 5.48, ending with the composite response in Figure 5.49.

In a digital filter:
Zeros may be located anywhere in the z-plane, inside, on, or outside the unit circle, since that part of the filter is always stable; its output can't go lower than 0.0.
Poles must be located inside the unit circle.
If a pole is on the unit circle, it produces an oscillator.
If a pole is outside the unit circle, the filter blows up as the output goes to infinity.
Figure 5.45: The magnitude response at 0 Hz (DC) is the product of the inverses of the two vectors drawn to each pole, or (1/0.71)(1/0.71) = 1.98, about 5.9 dB.

Figure 5.46: The magnitude response at ¼ Nyquist is a whopping 23 dB; the inverse of 0.05 is a large number.

Figure 5.47: The magnitude response at ½ Nyquist, where the two distances are nearly balanced: 2.98 dB.
Step 5: Direct evaluation
No
valuate the Þ lter the same way as before using EulerÕs equation to separate the
real and imaginary components from the transfer function. Evaluate at the following frequencies:
¥ DC: 0
¥ Nyquist:
¥ ½ Nyquist:
¥ Å Nyquist:
z
)
5
a
1
b
b

5
Figure 5.48: The magnitude response at Nyquist is
10.1 dB.
Figure 5.49: The composite magnitude response of the Þ
lter shows that it is a resonant low-pass
lter; the resonant peak occurs at the pole frequency.

148 Chapter 5

v
)
5
Apply EulerÕs equation:
v
)
5
2
1.34
j
v
0.902
v
v
)
5
No
w evaluate for each of our four frequencies starting with DC.
5.18.1 DC (0 Hz)
v
)
5
2
1.34
3
cos(
v
)
2
1
2
j
0
v
)
4
1
0.902
3
cos(2
)
2
j
)
4

5
2
1.34
3
cos(0)
2
j
4
1
0.902
3
cos(2*0)
2
j
4

5
2
1.34
3
1
2
j
0
4
1
0.902
(5.76)

5
2
1.34
1
0.902

5

1
j
0
0
H
v
)
0
5
0
1
0
0.562
1
j
0
0

5
b

5
Ar
Ar
Ar
(0/1)
(0/0.562)
Remember that for magnitudes of fractions, you need to take the magnitude of the numerator
and denominator separately; also for phase, the total is the difference of the Arg(num) and
Basic DSP Theory 149
5.18.2 Challenge
Finish the rest of the direct evaluation calculations on your own. The answers are in Table 5.4 .
1)
Steps 1 to 3: Take the
Table 5.4: Challenge answers.
Frequency (
23.15 dB
dB
dB
dB
dB
dB
dB
d
kH
kH
kH
kH
kH
kH
kH
kH
kH
kH
+180.0
+120.0
kH
kH
kH
kH
kH
kH
kH
kH
kH
kH
150 Chapter 5
Form t

z
)
5
Y
(
z
)
z
)
5
a
a
1
b
Factor out
z
)
5
a
1
a
1
b


a
a
Figure 5.52:
lter.

Figure 5.51: RackAFXÕs frequency and phase responses are taken from the
-transform
of the impulse response.
I
R
Basic DSP Theory 151
Step 4: Estimate the frequency response
This transfer function has one pole and one zero and both are Þ rst order. Like the other Þ
order cases, we can Þ nd the pole and zero by inspection of the transfer function:
z
)
5
a
1
a
1
b
5
a
1
a
1
b
In the numerator, you can see that if
the numerator will go to zero and the transfer
function will go to zero. In the denominator, you can see that if
the denominator
will go to zero and the transfer function will go to inÞ nity. Therefore we have a zero
and a pole at
. For this example, use the following values for the
0.92, and so we now have
0 and a pole at
0. The pole/zero pair
are plotted in Figure 5.53 .
Evaluating the frequency response when you have mixed poles and zeros is the same as
before, but you have to implement both magnitude steps.
¥ Locate each evaluation frequency on the outer rim of the unit circle.
¥ Draw a line from the point on the circle to
each
zero and measure the length of these
vectors. Do it for each evaluation frequency.
¥ Draw a line from the point on the circle to
each
pole and measure the length of these
vectors. Do it for each evaluation frequency.
Figure 5.53: The pole and zero are both purely real and plotted on the real axis in the
-plane.
152 Chapter 5
1

(5.81)

N
5
t

U
the
v
)
i
th zero

the
v
)
i
th
ole
(5.82)

N
5
t

u
t
i
t
U
t
i
t
5.54 through 5.58 .
Step 5: Direct evaluation
You can evaluate the Þ lter the same way as before using EulerÕs equation to separate
the real and imaginary components from the transfer function. Evaluate at the following
frequencies:
¥ DC: 0
¥ Nyquist:
¥ ½ Nyquist:
¥ Å Nyquist:
1.75
1.30
0
–12
0.0
0.2
d
0
–12
1.7
1
1.7
0
–12
1.40
*
1
1.3
Basic DSP Theory 153
Figure 5.54: The magnitude response at DC is
11.1 dB. Look at the equation and
you can see the zero value bringing down the total while the pole value is trying to push it
Figure 5.55: The magnitude response at
distances are almost the same. The tug of war ends in stalemate here at 0.17 dB of gain.
Figure 5.56: With the pole slightly closer to the evaluation frequency, the magnitude response at
0.64 dB.
m
0
–12
*
1
1.7
½
Nyquis
154 Chapter 5
1
b

5
1
2
0.92
2
0.71
v
)
5
1
2
0.92
j
v
2
Figure 5.57:
/4 the pole/zero ratio favors the pole and the response perks up to
1.0 dB; notice that this is the frequency where the pole is clearly dominating,
but just barely.
Figure 5.58:
response, a useful Þ lter in audio.
Basic DSP Theory 155
Apply EulerÕs equation:
v
)
5
1
2
0.92
v
2
v
v
)
5
1
2
0.92
3
cos(
v
)
2
j
v
)
4
2
0.71
3
cos(
v
)
2
j
v
)
4

5.19.1 DC (0Hz)
v
)
5
1
2
0.92
3
cos(
v
)
2
j
v
)
4
2
0.71
3
cos(
v
)
2
j
v
)
4

5
1
2
0.92
3
cos(0)
2
j
4
2
0.71
3
cos(0)
2
j
4

5
1
2
0.92
3
1
2
j
0
4
2
0.71
3
1
2
j
0
4
(5.84)

5
0.08
1
j
0

0.29
1
j
0
0

5
"
b
b
5
"
Ar
Ar
Ar
(0/0.08)
(0/0.29)
15.19.2 Challenge
Finish the rest of the direct evaluation calculations on your own. The answers are in
Table
5.5 .
Table 5.5: Challenge answers.
Frequency (
1.00 dB
0.82 dB7.23
dB
d
dB
dB
dB
dB
dB
dB
dB
dB
dB
dB
dB
dB
kHz
kHz
kHz
kHz
kHz
kHz
kHz
kHz
kHz
Hz
Hz
kH
kHz
Hz
H
156 Chapter 5
Thus, once again the direct evaluation backs up the estimation from the
-plane. Because
we have a feedback path, extracting the impulse response will be tedious but we can use
RackAFXÕs pole/zero Þ lter module to analyze the impulse response. Figure 5.59 shows the
measured impulse response, while Figure 5.60 shows the frequency and phase responses.
Figure 5.59: Impulse response of the Þ
rst-order shelving Þ
lter.

Figure 5.60: Frequency and phase responses of the Þ
rst-order shelving Þ
lter.

Basic DSP Theory 157
A Þ
rst-order shelving Þ lter is a pole-zero design. The shelf will be located in the region
(
n
2
1)
2
2) (5.86)
Steps 1 to 3: Take the
(
n
2
1)
2
(
n
2
2)
Y
(
z
)
5
a
z
)
1
a
z
)
z
a
z
)
z
b
(
z
)
z
b
(
z
)
z
Se
Form trans

z
)
5
Y
(
z
)
z
)
5
a
a
a
1
b
b
Figure 5.61: The bi-quad.
R
P
ö
158 Chapter 5
Factor out
z
)
5
a
1
a
a
1
b
b

(5.87)

a
a
a
Step 4: Plot the poles and zeros of the transfer function
The bi-quad will produce a conjugate pair of zeros and conjugate pair of poles from the
numerator and denominator respectively. Calculating these locations is the same as in the
pure second-order feed forward and feed back topologies. All you need to do is plot them in
the same unit circle. The transfer function becomes (by simple substitution from previous
z
)
5
a
2
2
u
)
z
R
2
2
2
f
)
z
R
(5.88)
m
e
Basic DSP Theory 159
¥ Draw a line from the point on the circle to
each
pole and measure the length of these
vectors. Do it for each evaluation frequency.
z
1.00

"
1.00
an

a
2(1.00) cos(
68.6
Poles are calculated as follows:

0.88

"
Figure 5.63: Pole/zero plot for the example bi-quad Þ
lter.

dB
dB
dB
dB
dB
dB
dB
kH
kH
kH
kH
kH
kH
kH
kH
kH
kH
of
the
zero
of
the
pole
160 Chapter 5

2
52
0.78
2
5
0.78

f
)
5
0.78
(5.90)

arccos(0.414)

65.5
The poles and zeros are in close proximity to each other. The zero is directly on the unit
1.0), so we expect a notch to occur there. The pole is near the unit circle but not
touching it, so we expect a resonance there.
We are not going to go through the full response estimation or direct evaluation since itÕs just
Basic DSP Theory 161
4. Estimate the frequency response.
Direct evaluation of frequency response.
6.
transform of impulse response as a Þ
nal check.
1

(5.91)

N
5
t

U
the
v
)
i
th zero

the
v
)
i
th
ole
Arg(
(5.92)

N
5
t

t
i
t
U
the an
For direct evaluation, you simply plug in various values for frequency and crank through the
algebra. We applied this same routine to feed-forward, feed-back, and combination types of
algorithms, and then used RackAFX to check our results. We also classiÞ ed the Þ
lters into
IIR and Finite Impulse Response (FIR) types.
¥ Any Þ
lter with a feed-back path is IIR in nature, even if it has feed-forward branches
as well.
¥ The feed-back paths in IIR Þ lters produce poles in the
-plane and the poles cause gain to
occur in the magnitude response.
¥ An IIR Þ
lter can blow up when its output steadily increases toward inÞ nity, which occurs
when the poles are located outside the unit circle.
¥ If the IIR Þ
lter also has feed-forward branches it will produce zeros as well as poles.
¥ IIR Þ
lters can have fast transient responses but may ring.
¥ The FIR Þ
lter only has feed-forward paths.
¥ It only produces zeros in the
162 Chapter 5
¥ The FIR Þ
lter is unconditionally stable since its output can only go to zero in the
worst case.
¥ FIR Þ
lters will have slower transient responses because of the time smearing they do on
the impulse response.
¥ The more delay elements in the FIR, the poorer the transient response becomes.
Bibliography
Ifeachor, E. C. and Jervis, B. W. 1993.
Digital Signal Processing: A Practical Approach
, Chapter 3. Menlo Park,
CA: Addison-Wesley.
Kwakernaak, H. and Sivan, R. 1991.
, Chapter 3. Englewood Cliffs, NJ: Prentice-Hall.
Moore, R. 1990.
, Chapter 2. Englewood Cliffs, NJ: Prentice-Hall.
Oppenheim, A. V. and Schafer, R. W. 1999.
It’s time to put the theory into practice and make some audio  lters and equalizers (EQs).
In this  rst category of design techniques, you manipulate the poles and zeros directly in
-plane to create the response you want. You take advantage of the simple equations that
relate the coef cients to the pole/zero locations. Consider the bi-quad. Equation 6.1 shows the
numerator. Equation 6.2 shows the denominator.
z
)
5
a
1
a
a
2
5
a
2
2
u
)
z
R
2
a
2
u
)
a
R
(6.1)


0
164 Chapter 6
z
)
a
1
1
b
b
2
5
a
1
For the numerator or denominator, the
or
coef
cients are in direct control over the
6.2.1 First-Order LPF and HPF

, the corner frequency
Figure 6.1: The Þ
rst-order feed-back Þ
lter and difference equation.
enoug
clos
R
Audio Filter Designs: IIR Filters 165
Figure 6.2: The Þ
rst-order LPF has a single pole and zero on the real axis.

The design equations are as follows:
g
b
g
2
"
1
These simple 
rst-order 
lters work by moving the pole back and forth across the real axis
while holding a  xed zero at Nyquist. When the pole is on the right side of the unit circle,
low frequencies are gained up due to their close proximity to the pole. High frequencies
d
d
d
d
d
d
d
4kH
6kH
8kH
10kH
12kH
14kH
16kH
18kH
20kH
d
d
d
d
d
d
d
H
kH
166 Chapter 6
The difference equation is as follows:

(
n
2
1)
2

, center frequency

, 3 dB bandwidth; or
, quality factor
Figure 6.4: Second-order feed-back Þ lter block diagram.
Figure 6.3:
lter with

= 1 kHz.
b
contro
angl
b
o
I
Audio Filter Designs: IIR Filters 167
Figure 6.5: The location of the resonatorÕs conjugate poles are
The design equations are as follows:
2
4
1
b
u
a
(1
2
b

1
2
b
1
The resonator works by simple manipulation of the conjugate poles formed with the second-
d
d
d
d
d
d
d
H
kH
d
d
d
d
d
d
d
H
168 Chapter 6
low end is boosted ( Figure 6.6 ). The opposite happens ( Figure 6.7 ) when the pole moves to
(
n
2
1)
2
(
n
2
2)
(6.7)
2
0
Audio Filter Designs: IIR Filters 169

, center frequency

, 3 dB bandwidth; or
, quality factor
The design equations are as follows:
b
2
4
1
b
u

1
2
"
This design is also gain normalized with
R
+12.
0.
–12.
–24.
–36.
–48.
–60.
10H
1
kH
170 Chapter 6
6.4 Analog Filter to Digital Filter Conversion
A more widely used approach to  lter design is to  rst start with the classic analog designs
and then convert them into digital versions. There’s good reason to do this because there
are many excellent analog designs already done for us. We just need a way to make them
work in our digital world. While analog  lter design is outside the scope of the book,
Digital
 Uses a transfer function to relate I/O
 Delay elements create phase shift
 Uses the
frequency)
 Poles and zeros in the
 Nyquist limitation
 Poles must be inside the unit circle for
 Uses a transfer function to relate I/O
 Reactive components create phase shift
 Uses the Laplace transform (continuous
time to frequency)
 Poles and zeros in the
 All frequencies from
allowed
 Poles must be in the left-hand part of the
Figure 6.9:
The effect on the resonator shape with the added zeros to hold down DC and
Nyquist. This Þ lter has

= 44.1 kHz,

=
= 20

m
H
z
e
Nyquis
ù

Audio Filter Designs: IIR Filters 171
In Figure 6.10 you can see the
-plane on the right—it is also a complex plane. The
axis is the
frequency axis and it spans
rad/sec. The unit circle maps to the portion on the
oo
e
–½
–Nyquis
z
Nyquis
ù
R
e
oo
oo
oo
oo
oo
172 Chapter 6
Figure 6.11: Mapping the poles and zeros from the analog
-plane to the digital
-plane.
Figure 6.12: Mapping the inÞ nitely large left-hand plane into the Þ nite space inside the unit
circle and the right-hand plane into the exterior of the unit circle.
Audio Filter Designs: IIR Filters 173
We wish to convert an analog transfer function
valent
Effectively, this means we need a way to sample the continuous
-plane to produce the
-plane version. In other words, we need to create a sampled analog transfer function
) where the subscript
stands for “sampled.” We seek to  nd a function
) such that
Equation 6.9 holds true.
) (6.9)
Since the sample interval is
, then the term
would correspond to one sample in time.
So, if we evaluate
) (6.10)
To solve, we note that
nition of
1
1
1
1
a
z
2
1
1
1
b
1
a
z
2
1
1
1
b
1
a
z
2
1
1
1
b
...
d
Taking

s
5
2

2
1
(6.12)
This 
approximation
of the general solution is the bilinear transform. The bilinear
transform we use is Equation 6.13 :
The 2/
term washes out mathematically, so we can neglect it. This equation does the
mapping by taking values at the in nite frequencies and mapping them to Nyquist. So, a pole
.
oo
oo
174 Chapter 6
The bilinear transform maps analog frequencies to their corresponding digital frequencies
nonlinearly via the tan() function ( Equation 6.14 ):
d

v
f
(6.14)
The tan() function is linear at low values of
but becomes more nonlinear as the frequency
increases. At low frequencies, the analog and digital frequencies map closely. At high
frequencies, the digital frequencies become warped and do not map properly to the analog
counterparts. This means that a given analog design with a cutoff frequency
might have the
wrong digital cutoff frequency after the conversion.
The solution is to pre-warp the
analog

lter so that its cutoff frequency is in the wrong
location in the analog domain, but will wind up in the correct location in the digital
domain. To pre-warp the analog  lter, you just use the same equation ( Equation 6.14 )
applied to the cutoff frequency of the analog  lter. When you combine all the operations
Audio Filter Designs: IIR Filters 175
1. Start with an analog 
)—“normalized” means
3. Scale
the normalized 
lter’s cutoff frequency out to the new analog cutoff
by replacing
with
in the analog transfer function.
4. Apply the bilinear transform by replacing
with Equation 6.13 :
5.
Manipulate the transfer function
d
v
v
f
F
 Specify
and
, the lower and upper corner frequencies of the digital 
lter.
 Calculate the two analog corner frequencies with Equation 6.16 :

d
v
tan
c
v
f
d
v
2
v
5
v
v
(6.16)
176 Chapter 6
Next, scale the  lter with Equation 6.17 :
TypeSca
s
5
s
s
5
v
BPF
s
5
s
v
0
BSF
s
5
Ws
Convert the basic resistor-capacitor (RC) analog LPF in Figure 6.14 into a digital LPF. The
sample rate is 44.1 kHz and the desired cutoff frequency is 1 kHz.
v
RC
1
1
Step 2: Calculate the pre-warped analog cutoff:
rad/sec
(6283.2)(1/44100)
Figure 6.14:
A simple analog RC low-pass Þ
lter.

Audio Filter Designs: IIR Filters 177
Step 3: De-normalize the analog transfer function

s
)
5
1
1
s
5
s
s
)
5
1
/
v
1
5
1
/0.07136
Step 4: Apply the BZT:
1)/(
2
1
1
1
1
0.07136
5
z
1
1)
2
1
1
0.07136(
z
1
1)
5
1
0.07136
2
1
1
0.07136
1
0.07136
5
1
0.07136
1
0.07136
2
0.9286

(6.22)
1
0.07136
2
0.9286
1

z
)
1
0.0667
1
2
0.8667
1
a
1
1
b

Equation 6.22 is in the format that we need with the numerator and denominator properly
formed to observe the coef cients. From the transfer function, you can see that:

0.0667

0.0667

–0.8667
The difference equation is Equation 6.23 :
(
n
1
0.0
(
n
2
1)
1
0.8
(
n
2
1)
(6.23)
0.866
)
178 Chapter 6
s
)
5
(1/
The analog LPF has the following speci
cations:
the bilinear transform and some algebra to  nd the coef
cients. (Answer:
6.5 Effect of Poles or Zeros at InÞ
In the analog transfer function ( Equation 6.19 ) of the previous example, you can see that
1 since that would make the transfer function go to in
nity,
) to become 0.0. There is also a zero
. Interestingly, these two in nity values are in the same location because the reality
axes actually wrap around an in nitely large sphere and touch each other
. So, in this  rst-order case engineers only show the single zero at in nity and they
so this transfer function’s pole and zero would be plotted like
Figure 6.16 in the
-plane. For low-pass zeros at in nity, the bilinear transform maps the zero
at in nity to
Figure 6.17 ).
Figure 6.15:
The digital equivalent of the analog RC low-pass Þ
lter.

ù
ó
Audio Filter Designs: IIR Filters 179
Next consider the second-order analog low-pass  lter transfer function:
(1/
This transfer function has a pair of conjugate poles at locations
and
or (
. The bilinear transform maps the poles on the
Figure 6.16: The pole at


-plane.
Figure 6.17: The bilinear transform maps real zeros at inÞ nity to the Nyquist
-plane.
– bj
/4
zer
Analo
Digita
f

180 Chapter 6
-plane to locations inside the unit circle. Once again, it maps the zeros at
1 or the Nyquist frequency ( Figures 6.18 and
Figure 6.18: The bilinear transform maps imaginary zeros at inÞ
-plane.
Audio Filter Designs: IIR Filters 181
6.6 Generic Bi-Quad Designs
The following classical analog  lters are converted to digital and implemented as bi-quad
 LPF (low-pass 
lter)
 HPF (high-pass 
lter)
 BPF (band-pass 
lter)
 BSF (band-stop 
lter)
 Second-order Butterworth LPF and HPF
 Second-order Linkwitz
Riley LPF and HPF (good for crossovers)
 First- and second-order all-pass 
lters (APF)
Low-pass and high-pass  lters are characterized by their corner frequency
and (for second-order
or resonant peaking value. A
of 0.707 is the highest value
can assume
before peaking occurs. It is called the Butterworth or maximally  at response. With a
of 0.707
3 dB point of the  lter will be exactly at
. For these designs, once the
rises above 0.707,
it will correspond to the peak frequency and not the
3 dB frequency. Equations 6.26 through
6.29 relate the
, peak frequency,
3 dB frequency, and the peak magnitude values.
3dB
f
a
1
1
b
1

a
1
1
b
1

f
3dB
a
1
1
b
1

a
1
1
b
1
Peak gain
0.25
Q
2

(6.28)

Peak
20lo
2

(6.29)
(6.30)


(6.31)


(6.32)

–1
1
–1
2
182 Chapter 6
The block diagram is shown in Figure 6.20 .
The difference equation is as follows:

(
n
2
1)
2
2) (6.33)
6.6.1 First-Order LPF and HPF

, corner frequency; see Figure 6.21 for examples
The design equations are as follows:
1
sin
5
cos
u
1
sin
1
2
g
a
1
1
g
a
1
2
g
a
a
1
1
g
Figure 6.20: Generic bi-quad structure.
d
d
d
d
d
d
d
H
H
1kH
kH
H
kH
Audio Filter Designs: IIR Filters 183
6.6.2 Second-Order LPF and HPF

, corner frequency

, quality factor controlling resonant peaking; Figure 6.22 shows the effect of Q and
The design equations are as follows:

5
1
b
5
0.5
1
2
d
sin
u
1
d
sin
u
5
0.5
1
2
d
sin
u
1
d
sin
u
5
1
0.5
1
b
2
cos
5
1
0.5
1
b
2
cos
0.5
1
b
2
g
a
0.5
1
b
1
g
a
0.5
1
b
2
g
a
1
0.5
1
b
1
g
2
a
0.5
1
b
2
g
a
0.5
1
b
1
g
Figure 6.21 :

= 100 Hz, 250 Hz, 500 Hz, and 1 kHz.
10H
kH
184 Chapter 6
6.6.3 Second-Order BPF and BSF

, corner frequency

, quality factor controlling width of peak or notch
; Figure 6.23 shows the BSF
version
Note: These 
lter coef
cients contain the tan() function, which is unde
ned at
1
tan
1
u

2

5
0.5
2
tan
1
u

2
6.6.4 Second-Order Butterworth LPF and HPF

, corner frequency
Figure 6.22: Second-order LPF responses:

= 1 kHz,
= 0.707, 2, 5, 10.
rises above 0.707,

becomes the peak frequency.
BW=
Audio Filter Designs: IIR Filters 185
Butterworth low-pass and high-pass  lters are specialized versions of the ordinary second-
order low-pass  lter. Their
values are  xed at 0.707, which is the largest value it can
assume before peaking in the frequency response is observed.
The design equations are as follows:

p
f

5
tan(
p
f

a
1
1
"
1
C
1
1
"
1
C
2
2
a
a
2
2
C
b
2
C
1)
b
a
2
"
1
C
b
a
2
"
6.6.5 Second-Order Butterworth BPF and BSF

, corner frequency

, bandwidth of peak/notch
Butterworth BPF and BSF are made by cascading (BPF) or paralleling (BSF) a Butterworth
LPF and Butterworth HPF.
Note: These 
lter coef
cients contain the tan() function, which is unde
ned at
186 Chapter 6
The design equations are as follows:
p
f
C
5
tan(
p
f
D
5
2 cos(2
f
D
5
2 cos(2
f
a
1
1
C
a
1

6.6.6 Second-Order LinkwitzÐRiley LPF and HPF

, corner frequency (
6 dB)
Second-order Linkwitz–Riley LPFs and HPFs are designed to have an attenuation of
at the corner frequency rather than the standard
3 dB, shown in
lters are placed in parallel with the same cutoff frequency, their outputs sum exactly and
the resulting response is  at. They are often used in crossovers. We use them for the spectral
The design equations are as follows:
LPF
HPF
p
f
p
f
5
V
u
k
5
V
u
d
5
k
2
2
V
5
k
2
V
V
a
k
a
2
V
a
2
2
a
V
a
k
b
2
2
2
b
2
2
2
b
2
2
V
k
b
2
2
V
k


(6.39)

-4.
-6.
R
e

3ð/
ð
Angle
Audio Filter Designs: IIR Filters 187
All pass  lters have interesting designs that yield equally interesting results. Their frequency
responses are perfectly  at from DC to Nyquist. However, the phase responses are the
Figure 6.25: The Þ
rst-order APF has a ß
at frequency response but shifts

(
/2 in this example).
R
1/
-180°
Ð/4
Ð/2
188 Chapter 6
6.6.7 First- and Second-Order APF

, corner frequency

, steepness of phase shift at
(second-order only)
The design equations are as follows:
Secon
a
5
tan
1
p
f

2
1
1
p
f
c

1
1

5
tan
1
p
Q
2
1
1
p
Q
1
1
52
cos
a
a
a
1.0
b
1
1
2
a
2
a
0.0
a
1.0
b
a
b
1
1
2
a
2
b
0.0
(6.40)
6.7 Audio SpeciÞ
c Filters
The basic classical  lters provide many functions in audio and can be very musical
(e.g., resonant low-pass  lters in synthesizers) but you also need  lters that are very audio
speci c. These  lters are found all throughout plug-ins and have been used in mixing
textbooks because of their speci c audio-only functions. These 
lters include:
 Shelving 
lters
Audio Filter Designs: IIR Filters 189
These all require a slightly modi ed bi-quad structure. The reason is that these 
lters require
mixing the original, un ltered input directly with the output in a mix ratio. The ratio is
controlled by two more coef
and
.
6.7.1 ModiÞ
ed Bi-Quad
You can see that the  lter in Figure 6.27 is a bi-quad with two extra coef
and
kHz
kHz
Hz
Hz
dB
dB
dB
dB
dB
dB
dB
190 Chapter 6
The design equations are as follows:
H
(dB)/20
(dB)/20
1
m

5
1
1
m

5
b
1
u
2

5
b
1
u
2

5
1
2
d
1
d

5
1
2
d
1
d
a
1
2
g
a
1
1
g
a
1
2
g
a
a
1
1
g
Shelving 
lters are used in many tone controls, especially when there are only two, bass
and treble, which are almost always implemented as shelf types. The  lters have a corner
frequency and gain or attenuation value. Figure 6.28
shows a family of shelving 
response curves.
Figure 6.28:
lter responses. The low shelf frequency = 400 Hz,
1
1
10
-6.
d
-3.
d
0.
d
+3.
d
+6.
d
+9.
d
+12.
d
Audio Filter Designs: IIR Filters 191
kH
kH
H
192 Chapter 6
The design equations are as follows:
(dB)/20
1
m

5
0.5
1
2
z
tan
1
u
Q
2
1
z
tan
1
u
Q
2

5
1
0.5
1
b
2
cos
0.5
2
b
a
0.0
a
(0.5
2
b
b
2
b
2
c
m
2
1.0
d
1.0

(6.42)

1
1
-6.
-3.
0.
+3.
+6.
+9.
+12.
d
d
d
d
d
d
d
Audio Filter Designs: IIR Filters 193
The design equations are as follows:

boost
(dB)/20
K
1
K
1
1
1
Boost Cut
1
K
a
a
a
5
2
1
K
1
2

a
a
b
5
1
2
1
K
a
g
a
5
1
2
1
K
b
b
b
b
5
1
2
K
1
K
b
b
h
c
1.0
c
1.0

0.0
Figure 6.31: The constant-Q peaking Þ lter preserves the bandwidth over most
of the range of boost/cut values.

= 1kHz,
= 0.707.
194 Chapter 6
6.7.5 Cascaded Graphic EQ: Non-Constant-Q

, center frequency
 Gain/attenuation in dB
The graphic EQ is a 
xed
Num
Q
5
"
For a 10-band graphic EQ,
1.414, while for a 31-band (aka 1/3 octave) EQ,
The center frequencies of the bands are usually found with the following International
Organization for Standardization (ISO) standard equation:
kH
kH
H
H
1
=
6
H
2
3
=
25
1
Audio Filter Designs: IIR Filters 195
1000*2
5
0,
6
1,
6
2,
6
Figure 6.34:
At the prescribed constant-Q value of
= 1.414 we observe rippling and
an increased high-frequency response with all controls at maximum boost.
1
3
2
1
=
6
H
H
H
kH
kH
196 Chapter 6
The constant-Q graphic EQ follows the same design pattern ( Figure 6.35 ) except that you
use the constant-Q peaking  lter in each of the modules. The equations for  nding the center
are the same as above. Bohn (1986) recommends not rounding the ISO
center frequencies but rather use the exact values. Figure 6.36 shows the composite response
Figure 6.36: The 10-band constant-Q graphic EQ with the prescribed
= 1.414
with all controls at maximum boost.
Audio Filter Designs: IIR Filters 197
 le and the implementation in the pluginobjects.cpp  les, respectively. The CBiquad object
Table 6.2 .
6.8.1 Project: ResonantLPF
Create the project and name it “ResonantLPF,” then add the sliders for the graphical user
interface (GUI).
6.8.2 ResonantLPF GUI

6.37 shows what your  nal GUI will look like. You will need the controls shown in
Table
6.3 .
Table 6.2: The CBiquad object interface.
Member VariablesPurpose
protected:
float m_f_Xz_1
float m_f_Xz_2
float m_f_Yz_1
float m_f_Yz_2
Implements the four delay elements needed for the bi-quad:
 2) in these protected
float m_f_a0
float m_f_a1
float m_f_a2
float m_f_b1
float m_f_b2
lter coef cients
Member Functions
ushDelays()
oat doBiQuad(ß oat f_xn)
198 Chapter 6
6.8.3 ResonantLPF.h File
RackAFX provides you with the built-in bi-quad object named CBiquad and you don’t
need to do anything at all to add it to your project. You can just declare your bi-quad objects
directly in the .h  le like you would any other object:
CBiquad m_LeftLPF; // or whatever you like to name it
Here’s my .h  le with the left and right LPF objects declared:
// Add your code here: ------------------------------------------------------- //




// END OF USER CODE ---------------------------------------------------------- //
We’ll also need a function to calculate the bi-quad’s coef
cients (
, and
) and we
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Figure 6.37: The resonant LPF GUI.
H
Audio Filter Designs: IIR Filters 199

void calculateLPFCoeffs(” oat fCutoffFreq, ” oat fQ);

// END OF USER CODE ---------------------------------------------------------- //
6.8.4 ResonantLPF.cpp File
Write the calculateLPFCoeffs() function in the .cpp  le by using Equation 6.35 . I have used
the same intermediate coef cient names here too.
void CResonantLPF::calculateLPFCoeffs(” oat fCutoffFreq, ” oat fQ)
{
// use same terms as in book:

200 Chapter 6
// Add your code here:

m_LeftLPF.” ushDelays();


m_RightLPF.” ushDelays();

// calculate the initial values


shel
Nyquis
Nyquis
a
Gai
a
Low-Pas
Matchin
Poin
filte
hig
shel
abov
Audio Filter Designs: IIR Filters 201
kH
1kH
H
10 H
z
LP
202 Chapter 6
The design equations are as follows:
4
1
a


max
1
"


v
2
f
2
g
m

tan


V
1
g
g
1
2
g
2
(6.46)

g
1
1

a
1
g

a
g

b
5
2
1

a
a

a
a

a
0

b
b

Figure 6.39: The Massberg and unmodiÞ ed LPF responses with

= 5 kHz and
= 10.
The difference in the high-frequency response is evident.
Audio Filter Designs: IIR Filters 203
6.9.2 Second-Order Massberg LPF

, corner frequency

, quality factor controlling resonant peaking
The design equations are as follows:
a
2
2
a
"
1
a
2
u
depending on the value of Q:
"

"
g
2
1
v
u
2
2
1

1
2
4
4
u
1
2
1
V
tan
tan
V
u
1
2
g
1
V
g
g
1
b
V
min
1
V
Calculate the pole and zero frequencies (
v
2 arctan
1
V

v
2 arctan
a
1
2
a
v
1
a
v
u
g
a
1
2
a
v
1
a
v
u

g
g
g
g
g
1)
Q

g
g
g
g
g
1)
204 Chapter 6
1
a
"
g
a
2
1
V
g

b
2
1
V
1
2
a
"
g
b
1
(6.48)
a
a
a
b
b
a
b
b
Challenge: Modify your digital resonant LPF plug-in to add the Massberg  lter option, then
experiment with high- delity music and listen for the differences in sound.
Biblio graphy
Allred, R. 2003. Second-order IIR  lters will support cascade implementations: 5 part digital audio application
EE Times Design Guide
Audio Filter Designs: IIR Filters 205
Motorola, Inc. 1991.
Digital Stereo 10-Band Graphic Equalizer Using the DSP56001
. APR2/D. Schomberg, ON:
Oppenheim, A. V. and Schafer, R. W. 1999.
Ifeachor, Emmanuel C. and Jervis, Barrie W. 1993.
Digital Signal Processing: A Practical Approach
. Menlo Park:
Addison-Wesley. pp. 398–400.
Before we can start looking at some Þ nite impulse response (FIR) algorithms, we need to
deal with the concept of long delay lines or circular buffers. Not only are they used for the
delay effects, but also they are needed to make long FIR Þ lters. In this chapter weÕll take a
break from the DSP Þ lter algorithms and develop some digital delays. If you think back to the
inÞ nite impulse response (IIR) Þ lters youÕve worked on so far you will remember that after
implementing the difference equation, you need to shufß e the
delay element values. You
do this by overwriting the delays backwards, like this:
m_f_z1;
; //
is the input sample
Suppose you had a Þ lter that was higher than a second-order one and you had to implement

Delay Effects and
Circular Buffers
pBuffer[n
Buffe
Incremen
Buffe
Buffe
Buffe
pBuffer[
??
208 Chapter 7
LOO
pointe
b
+
sample
+
5
+
10
+
15
Delay Effects and Circular Buffers 209
Circular buffers are useful in audio signal processing. You can create circular buffers of audio
Figure 7.2: In a circular buffer, the pointer is automatically wrapped back to the top and
Figure 7.3: Basic DDL.
210 Chapter 7
From the difference equation in Equation 7.1 , you can see that the output consists of an input
*
). The
sequence of accessing the delay line during the processAudioFrame() function is as follows:
1. Read the output value, y(n), from the DDL and write it to the output buffer.
2. Form the product
*
3. Write the input value,
), into the delay line.
7.4
the last write access and just before we increment the pointer index.
If pBuffer is pointing to the current sample value
¥ Where is the
1) sample (the youngest delayed value)?
¥ Where is the oldest sample in the delay?

In Figure 7.5 the youngest sample,
1), is in the location just before pBuffer[i], that is
pBuffer[i
1]. The oldest sample is found by working backwards to the top of the buffer,
+
delaye
sampl
delaye
sampl
Delay Effects and Circular Buffers 211
wrapping back to the bottom, and locating the oldest sample written; it is at pBuffer[i
sample is just above
) and the oldest is just below it. It is easy to understand that the
1) but why is the oldest sample labeled
The answer to the question is that we overwrote the actual oldest sample,
), when we
). This is one of the reasons for our rule about always performing reads before
Figure 7.5: The youngest and oldest samples in the delay line.
n–
delaye
sampl
delaye
sampl
212 Chapter 7
3. Declare a ß
oat buffer for each channel, right and left: for very long delay lines this is
new
operator in the constructor of the plug-in.
backwards if needed, to locate the delayed sample.
You have declared two indices, m_nRead and m_nWrite, to use for buffer. During the
processAudioFrame() function you will need to do the following Þ
ve steps.
Step 1: Read out the delayed audio data,
) sample time; this value is
), the current output value ( Figure 7.7 ).
” oat yn = pBuffer[m_nRead];
Step 2: Form the input combination input + feedback * output:
” oat xn = pInputBuffer[0] + m_fFeedBack*yn;
Figure 7.6: The location of the oldest audio sample
(
Ð
).
sampl
offse
=
mBnWrit
–10
– D –
100)
sampl
offse
+
fb y
Delay Effects and Circular Buffers 213
m_fFeedBack is declared in your .h Þ le; this example code is for processing the left channel,
pInputBuffer[0].
Step 3: Write the input data into the delay line at the m_nWrite location ( Figure 7.8 ).
Figure 7.8: The delayed sample plus feedback is written into the current write location.
214 Chapter 7
Notice that we wrap if the incremented index hits nBufferLength because this references the
vent that the user changes the delay time, you need to recalculate the m_nRead
m_nRead = m_nWrite - nSamples;

// the check and wrap BACKWARDS if the index is negative



m_nRead += nBufferLength; // amount of wrap is Read + Length
7.9 .
The difference equation is as follows:
(
n
1
7.2.1 Frequency and Impulse Responses
Consider the basic delay with
The difference equation is as follows:
x
x
D
To Þ
nd the frequency response, Þ rst take the
transform of the difference equation and form
x
x
D
(
z
)
5
X
z
)
1
X
z
)
z
5
z
)(1
1
z
H
(
z
)
5
Y
(
z
)
z
)
5
1
z
(7.4)

Delay Effects and Circular Buffers 215
Next, calculate the poles and zeros of the transfer function. We can see that this is a pure
feed-forward Þ lter in its current state so there are only zeros. We need to Þ nd the roots of the
0
1
j
sin(
52
1

0
if
Q5
;
p
,
;
3
,
;
5
Figure 7.10: The simpliÞ ed DDL of
samples delay.
216 Chapter 7
Notice that both
, and so on. So the actual solution to Þ nd the roots becomes
Equation 7.7 :
D
v
)
52
1
1
j
0
if
5
;
p
,
;
3
,
;
5
, ...,
;
N
p
until
D
2
1
or
zeros at
v
5
;
k
p
After
1, the whole mathematical sequence repeats again, cycling through odd
. This means that there are
zeros spread equally around the unit circle. This
makes senseÑthe fundamental theorem of algebra predicts
roots for a polynomial of
. Now consider the simple case of
)
52
1
if
v
5
;
k
p

5
1, 3, 5, ...,
v
5
;
p
There are tw
-plane and you can see what the frequency response will be, shown in
7.11 . You
can see from Figure 7.11 that the zeros produce a notch (zero of transmission) at
fact, when the delay time is very small, your ears hear the effect as a frequency response
change; your ears cannot discriminate two samples that are only 23 uS apart as separate
echoes. Now consider what would happen if we increase the delay amount to four samples,
as in Figure 7.12 . Finally, what would happen if the delay is an odd value, like
( Figure 7.13 )?
D
v
)
52
1
1
j
0
cos(4
)
1
j
sin(4
)
52
1
1
j
0
v
5
;
p
k

5
1, 3, 5, ...,
v
5
;
p
,
;
3

(7.9)

R
-24.0d
-36.0d
-60.0d
I
I
R
+12.0d
-24.0d
-36.0d
-60.0d
2kH
R
-24.0d
Delay Effects and Circular Buffers 217
D
v
)
52
1
1
j
0
cos(5
)
1
j
sin(5
)
52
1
1
j
0
v
5
;
k
p

5
1, 3, 5, ...,
1
v
5
;
p
,
;
3
,
;
5
v
5
;
p
;
3
This kind of frequency response in
Figure 7.13 is called
inver
. As we add
more and more samples of delay, we add more and more notches to the response. You can use
Figure 7.11: The
-plane pole/zero plot and resulting frequency response.
Figure 7.12: The
-plane pole/zero plot and resulting frequency response for
= 4 samples.
Figure 7.13: The
-plane pole/zero plot and resulting frequency response for
= 5 samples.
-24.0d
2kH
+12.0d
10H
100H
1kH
10kH
z
218 Chapter 7
the built in module in RackAFX to experiment. Figures 7.14 and 7.15 show the frequency
response for 32 samples of delayÑitÕs an in
erse comb Þ lter with 16 zeros in the positive
frequency domain.

7.2.2 The Effect of Feedback
When you add feedback to the delay, two things happen: Þ rst, for long delays your ear will
Figure 7.14: Frequency response (linear) with
= 32 samples.
Delay Effects and Circular Buffers 219
or 1.0, as shown in the block diagram in Figure 7.17 ; even though we know this would result
in oscillation, it will make calculating the frequencies easier.
The difference equation is as follows:
(
n
1
f
To derive the difference equation, label the output of the delay line
) ( Equation 7.12 ):
Figure 7.16: Impulse response with 90% feedback, 32 samples of delay.
Figure 7.17: Block diagram of the DDL with feedback.
220 Chapter 7
Equation 7.13 into Equation 7.12 gives you the following difference equation:
(7.14)
To Þ
nd the transfer function, separate variables and take the
transform:
(
n
D
)
5
x
(
n
1
x
(
n
D
)
2
f
(
n
D
)

Y
(
z
)
2
fbY
(
z
)
z
D
X
z
)
1
X
z
)
z
1
2
fb
4

Y
(
z
)
3
1
2
fbz
D
5
X
z
)
3
1
1
z
D
fbz

(
z
)
5
Y
(
z
)
z
)
5
1
1
z
D
fbz
fbz
D
(
z
)
5
1
1
(1
2
fb
)
z
D
The new transfer function has both zeros (which we already calculated) and poles. The poles
z
)
5
2
z
D
5
1
2
z
D

5
z
1

1
v
1
App
cos(
D
v
)
1
j
sin(
D
v
)
5
1
cos(
1
j
sin(
5
1
1
j
0
if
U5
0,
;
2
,
;
4
,
;
6
R
+20.0d
+4.0d
-12.0d
-44.0d
-60.0d
2kH
Delay Effects and Circular Buffers 221
D
v
)
5
1
1
j
0
if
5
0,
;
2
,
;
4
,
;
6
, .
;
N
p
until
D
2
1
or
poles at
5
;
k
p
Equation 7.17 shows that the poles will occur at even multiples of
EulerÕs equation becomes 1
0; the analysis is nearly identical to the zero case. Consider the
D
v
)
5
1
1
j
0
cos(4
)
1
j
sin(4
)
5
1
1
j
0
v
5
;
p
k

5
0, 2, 4, 6, ...,
v
5
0,
;
p
,
;
4
v
5
0,
;
p
Figure 7.18 shows the effect of 100% feedback Ð the response is technically inÞ nite at the
2kHz 4kHz 6kHz 8kHz 10kHz
12kHz
14kHz 16kHz 18kHz
20kHz
2kHz 4kHz 6kHz
8kHz 10kHz
12kHz
14kHz
16kHz
18kHz
20kHz
222 Chapter 7
Figure 7.20: The
-plane pole/zero plot and resulting frequency response for
= 4 samples,
50% feedback.
Figure 7.19: The
-plane pole/zero plot and resulting frequency response for
= 4 samples,
75% feedback.
only dependent on the amount of delay. Consider the transfer function with a feedback value
z
)
5
1
1
(1
2
fb
)
z

D
z
)|
5
0.75
1
0.25
D
The poles will have a radius of 0.75, while the zeros will have a radius of 0.25. This will
-plane plot and frequency response in Figure 7.19 . You can see that the lowered
radius results in less gain at the pole frequencies. The peaks are now softer and the overall
gain is reduced down to about
8 dB from inÞ nity. If you continue to drop the feedback to
50% (0.5) then the poles and zeros will be distributed at equal radii, as shown in Figure 7.20 .

Delay Effects and Circular Buffers 223
z
)
5
1
1
(1
2
fb
)
z
fbz
D
z
)|
52
1
1
1.5
D
If you look at Equation 7.20 , you can Þ gure out that the pole frequencies are going to lie at
the zero frequencies (notice the signs of the coefÞ cients). The zeros will be at a radius of 1.5,
while the poles will be at 0.5. A frequency lying on the unit circle will be under the inß
For the four-sample delay, a feedback value of Ð62% will make the frequency response
perfectly ß at, but with Ð3 dB of attenuation seen in Figure 7.21 . The poles will have radii of
0.62 with the zeros at radii of 1.38. This means you can create a delay that has no comb/inverse
ltering, but only at this particular value. Other negative feedback values will give
varying degrees of cancellation. In practice, the poles will dominate and small peaks can appear
Figure 7.22: The effect of inverted feedback on the impulse response; feedback
is Ð90% here.
224 Chapter 7
7.3 Design a DDL Module Plug-In
In the previous projects, it was easy enough to simply declare left and right delay elements
and coefÞ cients for our simple Þ lters. However, as the Þ lters become more complex, this
becomes more tedious and is also bad coding practice since we have replicated some code.
Delay Effects and Circular Buffers 225
7.3.1 Project: DDLModule
7.1 .
We do not need a switch for the feedback option on the UI; it will only be needed by the
Figure 7.24: The DDL Module GUI.
Table 7.1: GUI controls for the DDL module
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
%
%
226 Chapter 7
Table 7.1: GUI controls for the DDL module (Continued)
Slider Property
Value
Control Name
7.3.3 DDLModule.h File
In the .h Þ le, add the cooked variables, m_fDelayInSamples, m_fFeedback, and
¥ Initialize variables.
Delay Effects and Circular Buffers 227
CDDLModule::CDDLModule()
{
&#xSNIP;&#x SNI;&#xP SN;&#xIP00;SNIP SNIP SNIP
// Finish initializations here

m_fDelayInSamples = 0;


m_fFeedback = 0;


m_fDelay_ms = 0;


m_fFeedback_pct = 0;


Cooking the feedback v
alue is easyÑjust divide by 100 to convert the percent to a raw
228 Chapter 7
CDDLModule::CDDLModule()
{
&#xSNIP;&#x SNI;&#xP SN;&#xIP00;SNIP SNIP SNIP
// Finish initializations here
&#xSNIP;&#x SNI;&#xP SN;&#xIP00;SNIP SNIP SNIP



7.3.5 Declare and Initialize the Delay Line Components
For a delay line, you will need the following variables:
¥ A ß
oat* which points to a buffer of samples
¥ An integer read index
¥ An integer write index
¥ An integer that is the size of the buffer in samples
Delay Effects and Circular Buffers 229
Add them to your .h Þ
// Add your code here: --------------------------------------------------------- //
” oat m_fDelayInSamples;
” oat m_fFeedback;
230 Chapter 7

7.3.6 DDLModule.h File
// Add your code here: --------------------------------------------------------- //
” oat m_fDelayInSamples;
” oat m_fFeedback;
Delay Effects and Circular Buffers 231
7.1 . Note: The delay in samples is cast to an integer using


m_nReadIndex = m_nWriteIndex - (int)m_fDelayInSamples; // cast as int!


check and wrap BACKWARDS if the index is negative


if (m_nReadIndex 0)


m_nReadIndex += m_nBufferSize; // amount of wrap is Read + Length

}
7.25 .
DDL>read
=
+
=
+
(1-wet) x(
fo
wra
232 Chapter 7
the write pointer and read pointer will be the same. This also occurs if the user selects the
delay value since we want to read the oldest sample before writing it. So, we need
to make a check to see if there is no delay at all and deal with it accordingly.

oat* pOutputBuffer,
UINT uNumInputChannels, UINT uNumOutputChannels)
{
// SYNC CODE: DO NOT REMOVE - DO NOT PLACE CODE BEFORE IT


Delay Effects and Circular Buffers 233
// DDL Module is MONO - just do a copy here too
// Stereo-In, Stereo-Out (INSERT Effect)
if(uNumInputChannels == 2 && uNumOutputChannels == 2)

pOutputBuffer[1] = pOutputBuffer[0]; // copy MONO!


// SYNC CODE: DO NOT REMOVE

7.4 Modifying the Module to Be Used by a Parent Plug-In
Declare the following new variables:

bool m_bUseExternalFeedback
; // ” ag for enabling/disabling

” oat m_fFeedbackIn
; // the user supplied feedback sample value
234 Chapter 7
” oat* m_pBuffer;
int m_nReadIndex;
int m_nWriteIndex;
int m_nBufferSize;


bool m_bUseExternalFeedback; // ” ag for enabling/disabling


” oat m_fFeedbackIn;




7.4.2 DDLModule.cpp File
processAudioFrame()
¥ Modify the function to allow the use of externally supplied feedback samples:

oat* pOutputBuffer,
UINT uNumInputChannels, UINT uNumOutputChannels)
{
// Do LEFT (MONO) Channel
// Read the Input

oat xn = pInputBuffer[0];

// Read the output of the delay at m_nReadIndex

oat yn = m_pBuffer[m_nReadIndex];

// if zero delay, just pass the input to output
if(m_fDelayInSamples == 0)

yn = xn;


// write the input to the delay




m_pBuffer[m_nWriteIndex] = xn + m_fFeedback*yn; // normal






Delay Effects and Circular Buffers 235
the module to make tw
o different plug-ins:
1. Stereo digital delay
2. Stereo crossed-feedback delay
7.5 Modifying the Module to Implement Fractional Delay
Before we work on the bigger projects, we need to take care of the problem of
7.26 you can see a graphic representation of
x
frac
Sampl
fra
236 Chapter 7
Here is a linear interpolation function you can use; it is already declared in your
le:

” oat dLinTerp (” oat x1, ” oat x2, ” oat y1, ” oat y2, ” oat x);

You give it a pair of data points (
1) and (
Figure 7.26: Linear interpolation of sample values.
Delay Effects and Circular Buffers 237
where we are interpolating across the wrap boundary (from the last sample in the buffer
to the Þ rst one). Suppose the user enters a delay time that corresponds to 2.4 samples of
delay. In the cookVariables() function, we locate the read index to be two samples before
the write pointer because we cast the value to an integer, stripping out the fractional part.
238 Chapter 7
It really only comes down to locating the sample 1 behind our current read index, then using
7.5.1 DDLModule.cpp File
processAudioFrame()
¥ Modify the code to do the interpolation.

oat* pOutputBuffer,
UINT uNumInputChannels, UINT uNumOutputChannels)
{
// Do LEFT (MONO) Channel
// Read the Input
” oat xn = pInputBuffer[0];

// Read the output of the delay at m_nReadIndex
” oat yn = m_pBuffer[m_nReadIndex];


Delay Effects and Circular Buffers 239




// write the intput to the delay
if(!m_bUseExternalFeedback)
m_pBuffer[m_nWriteIndex] = xn + m_fFeedback*yn; // normfInterpal
else
m_pB uffer[m_nWriteIndex] = xn + m_fFeedbackIn; // external feedback
sample
}
7.6 Design a Stereo Digital Delay Plug-In
In this project, we use two DDL modules in one parent plug-in. RackAFX makes it easy
to do this by allowing you to add other plug-in components (.h and .cpp Þ les) into a new
project. It will automatically #include the components too. However, if you use external
modules or other Þ les you might need to manually #include these. In
7.29 you can
ve two DDL modules declared as member objects of the new plug-in.
The plug-in implements its own interface of three sliders, which we use to control our
modules.
7.6.1 Project: StereoDelay
Create a project named ÒStereoDelay.Ó When you create the project, you have the option of
including other modules in your code, seen in Figure 7.30 . RackAFX Þ nds all of the existing
RackAFX projects in the default directory you supply and lists them here. You use the Add
button to move them into your project. If you have a project located in another directory that
is not the default, you will need to move the Þ les on your own (copy them to the new project
directory and #include them in the ne&#xproj;ìt0;w project.h Þ le and add them into the compiler).
RackAFX will automatically copy them and #include whichever modules you choose. In
When you use a plug-in as a module for another parent plug-in you must create and implement
not
expose their sliders to RackAFX, but you can
manipulate the UI variables. All other aspects of the child objects work as expected. In this
plug-in, we will implement another UI to control the modules. See
Appendix
A.2 for advanced
control of the UI variables.
fb_out(n
Z
Dr
We
fb
Dr
y(n
Feedbac
240 Chapter 7
Figure 7.30: Adding existing modules can be done programmatically through RackAFX.
Delay Effects and Circular Buffers 241


#include "plugin.h"

// abstract base class for DSP “ lters
class CStereoDelay : public CPlugIn
{
public: // Plug-In API Functions

// 1. One Time Initialization


7.6.2 StereoDelay GUI
In the .h Þ le, declare two member objects of type CDDLModule. Also, add a function
242 Chapter 7
7.6.4 StereoDelay.cpp File
Delay Effects and Circular Buffers 243
bool __stdcall CStereoDelay::prepareForPlay()
{

Lef
Righ
Lef
Righ
mSe
244 Chapter 7
Rebuild and test the project and you now have a stereo version of the previous project.
Hopefully, you ha

Properties
Control
Control
uControiType
Control
Initial
Delay Effects and Circular Buffers 245
7.8 Enumerated Slider Variables
You can see from Figure 7.31 that there is a new slider control for the GUI to select
246 Chapter 7
case, we only need NORM and CROSS. Go to your plug-inÕs .h Þ le to see the new
// ADDED BY RACKAFX -- DO NOT EDIT THIS CODE!!! -------------------------------- //
// **--0x07FD--**

” oat m_fDelay_ms;
” oat m_fFeedback_pct;
7.8.1 Constructor
¥ Initialize the delay type to NORM.
CStereoDelay::CStereoDelay()
{
&#xSNIP;&#x SNI;&#xP SN;&#xIP00;SNIP SNIP SNIP

// Finish initializations here
m_DDL_Left.m_bUseExternalFeedback = false;
m_DDL_Right.m_bUseExternalFeedback = false;

Nothing to do; we are using this as a direct control variable.
Delay Effects and Circular Buffers 247
7.8.4 ProcessAudioFrame()
¥ Use the enumerated variable in a switch/case statement to modify the feedback as
required. For CROSS operation:
fb
248 Chapter 7
// Stereo-In, Stereo-Out (INSERT Effect)
if(uNumInputChannels == 2 && uNumOutputChannels == 2)
// forward call to sub-object pInput, pOutput, 1 input ch, 1 output ch

m_DDL_Right.processAudioFrame(&pInputBuffer[1], &pOutputBuffer[1], 1, 1);

y
fb
y
Delay Effects and Circular Buffers 249
Figure 7.35: A four-tap multi-tap delay line.
Figure 7.34: An analog delay modeled with an LPF in the feedback loop.
LEF

Dry
We
Dr
Righ
250 Chapter 7
Figure 7.36:
The ping-pong delay builds on the cross-feedback delay by crossing the
inputs as well as the feedback paths to produce the back and forth ping-pong
effect. You will probably want to design the advanced DDL module
rst and use its input, output, and feedback ports.

™™
Delay Effects and Circular Buffers 251
Bibliography
Coulter, D. 2000.
Digital Audio Processing
, Chapter 11. Lawrence, KS: R&D Books.
DSP56KFAM/AD. Schomberg, ON: Motorola, Inc.
Korg, Inc. 2000.
InÞ
nite impulse response (IIR) Þ lters have several attractive properties:
¥ They only require a few delay elements and math operations.
¥ You can design them directly in the
¥ You can use existing analog designs and convert them to digital with the Bilinear
z-Transform (BZT) ; the fact that IIR topologies somewhat resemble the signal ß
ow in
analog Þ lters emphasizes their relationship.
8.1 ).

254 Chapter 8
Figure 8.1: The sin(
)/
If you know how a system affects one single impulse, you can exactly predict how it will affect
a stream of impulses (i.e., a signal) by doing the time domain overlay. If you have the IR of a
system, you have the
for the system coded in a single function.
In the time domain, you can see how the IR of each sample is overlaid on the others and that
52`
In this case,
and
are two generalized signals and neither of them has to be an IR.
Convolution is commutative, so that
*
*
, or Equation 8.3 :
52`
52`
signal
f and
=
0
Audio Filter Designs: FIR Filters 255
The operation this equation is describing is not simple. On the right-hand side of
Equation 8.3 the function
) is one signal while the function
) represents the
second signal reversed in time. The multiplication/summation of the two across
to
describes the process of sliding the two signals over each other to create overlapping
n
a
z
z
z
a
a
256 Chapter 8
). Thus convolution in the time domain is multiplication in the frequency (
domain ( Equation 8.4 ).
Figure 8.4 : The familiar FIR feed-forward structure expanded out to
delay taps with

1 coefÞ cients. It is important to see that there is one less delay element than coefÞ
is multiplied against the original undelayed signal.
sample
the
the
Audio Filter Designs: FIR Filters 257
Next, mentally rotate the structure so it looks like Figure 8.5 . In Figure 8.5 you can see that at
any given time, a portion of the input signal
) is trapped in the delay line. On each sample
coefÞ cients and the samples
). The words Òsliding, summation and productÓ are key
hereÑtheyÕre the same words used to describe convolution.
In Figure 8.6 , the input signal
) moves through the delay line being convolved with
Figure 8.6: You can also think of the coefÞ cients as being frozen in the
(
) buffer while the input
signal marches one sample to the right on each iteration.
258 Chapter 8
So, if an ideal LPF has an IR in the shape sin(
8.2 Using RackAFX’s Impulse Convolver
RackAFX has a built-in module to do impulse convolution and a directory of IR Þ les that you
can experiment with. The impulses are stored in a directory called IR1024 and they are all
1024-point IRs. Some of them came from RackAFX itselfÑyou can save IRs of any plug-in
you make, then load them into the convolver module. You will also learn to write your own
convolution plug-in and tell RackAFX that your software would like to receive IRs any time a
user loads or creates one using the built-in tools.
8.7 ), you will see a box full of the IRs in your
IR1024 directory. You might not have the exact same list as this one but you will have the
Þ le Òoptimal.64.sirÓ in the list. All the IR Þ les are named with the Ò.sirÓ sufÞ x and must
be created in RackAFX or loaded using the RackAFX IR Þ le format (see the website for
File
d
d
d
d
d
d
H
H
kH
kH
Audio Filter Designs: FIR Filters 259
At the bottom right, you will see the buttons for loading and saving IR Þ
les. The Þ rst two,
Save
), will save and load the .sir Þ les from the IR1024 directory. The lower
two buttons save and load the IR to the clipboard. The IR is actually C++ code, and you can
use the clipboard to paste this code directly into your own source code. You might do this to
Figure 8.9: The frequency response for the optimal.64.sir Þ

+10.0d
260 Chapter 8
controls to give you a unique shape. For example, I will make a highly resonant LPF by
Figure 8.11: The ringing IR of the resonant LPF.
Audio Filter Designs: FIR Filters 261
8.2.3 The IR File Format
The IR Þ le actually contains C++ code and you can quickly understand how it works by using
the clipboard functions. In the analyzer window that you still have pulled up, click on the
button
Clipboard and after the success message, open a text editor or a C++ compiler.
Optima
262 Chapter 8
8.3 Using RackAFX’s FIR Designer
All RackAFX plug-ins already have two default IR arrays declared as m_h_Left[1024] and
m_h_Right[1024] and another variable m_nIRLength that deÞ nes how much of the 1024 point
Figure 8.12: The FIR designer controls consist of three parts. The order slider and edit box
Audio Filter Designs: FIR Filters 263
number of coefficients
For
(
number of samples in frequency domain, starting at 0 Hz
For
even
y domain, starting at 0 Hz
/
frequency spacing, starting at 0 Hz
Calculate the Þ
lter coefÞ cients
to
with Equation 8.5 :
1
1)/2
d`d
c
H
(0)
1
2
a
/2
2
i
5
1
H
(
i
)|
c
2
i
c
n
2
N
2
1
d`d
Note:
This produces half the coefÞ cients; the other half are a mirror image, as shown in the
264 Chapter 8
Example: Design an LPF with a cutoff of 5.5 kHz,
1.
16, which produces eight sampled points in the frequency domain with a spacing
of 2.756 kHz.
2. Sample the plot, producing the magnitude response,
) (
8.13 ).
For this plot notice that:
8.14 ).

The resulting Þ lter is guaranteed to exactly match the desired frequency response at the
0.
–12.
–48.
0
2.
5.
8.
13.
16.
6
8
1
14
2
+12.
0.
–24.
–36.
–48.
–60.
2
4
6
1
1
1
1
1
2
Audio Filter Designs: FIR Filters 265
rippling in the pass band and stop band can occur as shown here. You can see that this is a
Figure 8.14:
lter magnitude response.
+12.0d
-36.
-48.
2
6
8
1
1
2
+12.
0.0
-12.
-24.
-48.
-60.
2
4
6
8
1
1
14
1
2
266 Chapter 8
8.5 Complementary Filter Design for Linear Phase FIR Filters
This technique results in a complementary Þ lter, rotated about the center of the Nyquist
bandwidth, that is, rotated about Nyquist/2. To convert an LPF to HPF or vice versa on a linear
phase FIR:
For
even
Multiply the even-numbered coefÞ
cients by
For
odd
Multiply the odd-numbered coefÞ
cients by
1.
This will rotate the frequency response around Nyquist/2 such that an LPF will become an
HPF. However, they will not share the same cutoff frequency, but will rather be mirror images
of each other. Thus, the Þ
rst Þ
lter design above with a cutoff point of 5.5 kHz would produce
an HPF with a cutoff frequency 5.5 kHz
. Table 8.1 shows the coefÞ
while Figure 8.17 shows the frequency response.
Figure 8.15: The same design with speciÞ cations relaxed; the slope is less steep.
Figure 8.16:
The relaxed magnitude response shows improved stop-band attenuation.
Now, the Þ
rst lobe in the stop band has moved to a magnitude of about
improvement of about 15 dB.
0.0 dB
kH
kH
kH
kH
kH
kH
kH
14kHz
Audio Filter Designs: FIR Filters 267
Table 8.1: The LPF and complementary HPF coefÞ cients for the current design.
Low-Pass FilterComplementary High-Pass Filter
H
H
1 kH
kH
H
1 kH
kH
268 Chapter 8
You will see two boxes on the red line, one at DC, the other at Nyquist. These points cannot
be removed. To enter points and move them, use the following rules:
¥ Right-click on the red line to add a new point.
¥ Click on the new point and drag it up or down.
Figure 8.19: The 64-tap FIR Þ lter produces marginal results with poor stop-band attenuation.
H
H
kH
kH
kH
kH
kH
kH
kH
kH
kH
kH
kH
Audio Filter Designs: FIR Filters 269
Play an audio Þ le through the new Þ lter and listen to the resonant LPF characteristics. Here
are some interesting things you can do in RackAFX while the audio Þ le is playing or looping:
¥ You can move the order control; the Þ lter will be updated in real time and you can hear
the results.
¥ You can add or remove points on the desired response or move them around, then hit
Calculate to update the Þ lter in real time, and you can also hear the results.
¥ You can save the IR as a Þ le, then load it into the Impulse Convolver module just as
8.7 Designing a Complementary Filter
You can convert any design into a complementary design by hitting the Complementary
button. With the current 164 th -order resonant LPF, Þ rst switch to the linear scale
( Figure 8.21 ).
Hit the Complement button to create the complementary HPF Þ lter. The original design
points are left to show you the complementary nature of the Þ lter. You can clearly see the
rotation about ½ Nyquist here ( Figure 8.22 ). You can perform this operation while audio
Figure 8.20: The 164th -order FIR Þ lter produces an excellent match to the speciÞ

Figure 8.21: The linear domain frequency response of the 1-kHz LPF.
d
0.
d
-12.
d
-24.
d
-36.
d
-48.
d
-60.
d
2
6
8
270 Chapter 8
Þ les are playing in RackAFX as well. If you look at the IRs in the .sir Þ le you can see the
operation of negating the odd-numbered coefÞ
(10

(8.6)
kH
H
H
rippl
Audio Filter Designs: FIR Filters 271
an equal deviation from the ideal in both bands. This is called an
design. The
H
H
1 kH
kH
10 H
z
H
1 kH
kH
272 Chapter 8
but the meaning of pass-band ripple and stop-band attenuation is the same for all Þ
lter types.
You can start with an LPF design by clicking on the Optimal button. Try the following Þ
¥ Type: LPF
¥ F_pass low: 1 kHz (the low edge cut-off frequency)
¥ F_stop low: 2 kHz (the lowest frequency that must receive the required stop band
¥ Filter order: 16
Now, use the Calculate button to generate the Þ lter. You can see from Figure 8.24 that the
Þ lter is not performing exactly to speciÞ cations. Although the pass band looks good, the stop
band does not. Next, begin increasing the order of the Þ lter using the slider or nudge buttons
0.50
0.00
-0.70
-1.00
0
10
20
30
40
51
61
71
81
91
Audio Filter Designs: FIR Filters 273
¥ Load a wave Þ
le and audition the Þ
lter.
¥ Try the other Þ
lter types (HPF, BPF, BSF).
¥ Adjust the order control noting when the Remez exchange algorithm fails to converge or
the Þ lter blows up.
¥ Try to Þ
nd the lowest possible Þ lter order to just match the desired speciÞ
cations.
¥ Save IR Þ
les with various Þ lters you design.
¥ Copy and paste the IR code into your own convolution plug-in.
8.10 Design a Convolution Plug-In
In order to implement the convolution (FIR) algorithm you need to use the delay line theory
from the last chapter. The Þ lters often need hundreds or thousands of delay elements and you
know that a circular buffer works perfectly for storing and updating a sequence of
),
1),
2)É In addition, the IR will need to be stored in a buffer and accessed sequentially with a
pointer like the input buffer. The convolution equation in Equation 8.2 accumulates from
to
which uses both past and future data. We can only use past data and so we only need half of
the equation. The generalized FIR convolution equation is Equation 8.7 :
n
5
(
n
1
(
n
2
1)
1
(
n
2
2)
1
...
1
)
The number of delay elements required is
1 since the Þ
rst term
) operates on the
from Chapter 7 that when we access a circular buffer and write the current input sample,
we are overwriting the oldest sample,
) but we can use this to our advantage in this
The Remez exchange algorithm is not guaranteed to converge. You will receive an error message
if it does not converge. Even if the algorithm does converge, the resulting IR is not guaranteed
Buffe
x
Buffe
h
Buffe
x
Buffe
h
Buffe
h
Buffe
x
Buffe
h
Buffe
x
274 Chapter 8
case by using a 64-element circular buffer to implement a 64-tap FIR and by writing in the
Þ rst sample before doing the convolution operation. This will give us a buffer with
1) lined up and ready for access. We will have an identically sized buffer
). During the convolution operation we will zip through both buffers at
the same time, accumulating the product of each operation. The only tricky thing is that the
IR buffer will be reading sequentially from top to bottom exactly once each sample period to
(0),
(1),
(2), and so on. The input buffer will be circular and reading
backwards
to create the sequence
),
2), and so on, shown graphically in
Figure 8.27 .
If you look at your base class Þ le, PlugIn.h, you will Þ nd the declarations of your built-in IR
buffers and variables:
// impulse response buffers!

” oat m_h_Left[1024];


” oat m_h_Right[1024];

// the length of the IR from 0 to 1024


Audio Filter Designs: FIR Filters 275
8.10.1 Project: Convolver
Create a new RackAFX project; I named mine ÒConvolver.Ó It has no GUI elements
Table
8.2 .
Table 8.2: IR variables
Variable
m_h_Left[1024]The IR buffer for the left channel
m_h_Right[1024]The IR buffer for the right channel
The length of the current convolution
m_bWantIRs
A ß ag to tell RackAFX to populate your IR
buffers automatically whenever the user
loads an IR Þ le or creates a Þ lter with the
276 Chapter 8
8.10.3 Convolver.cpp File
¥ Create the buffers.
Audio Filter Designs: FIR Filters 277
{
// free up our input buffers

8.27 :
¥ Read the current sample
) and write it into the buffer.
278 Chapter 8
¥ Process the second (right) channel the same way.
Increment the delay line write index and wrap if necessary.

oat* pOutputBuffer,
UINT uNumInputChannels, UINT uNumOutputChannels)
{
// Do LEFT (MONO) Channel; there is always at least one input/one output
// Read the Input

” oat xn = pInputBuffer[0];

�// write x(n) -- now have x(n) - x(n…1023)


Audio Filter Designs: FIR Filters 279
// Stereo-In, Stereo-Out (INSERT Effect)
if(uNumInputChannels == 2 && uNumOutputChannels == 2)
{
// Read the Input


�// write x(n) -- now have x(n) - x(n…1023)


280 Chapter 8
Increment the buffer write pointer after the end of the right channel processing. The reason
Audio Filter Designs: FIR Filters 281
This IR has 65 samples and will create a 65-tap FIR Þ lter. Compile the dynamic link
ave Þ le through it; itÕs an LPF at
1 kHz, so this will be easy to verify by ear. Open the analyzer window and look at the
frequency and IRsÑthese will also be identical to the original design. Now that you have

0.
0.
0.
0.
Z
-
-
Z
-
Z
-
Z
x
282 Chapter 8
The moving average interpolator (or MA Þ lter) in Figure 8.28 implements a sliding window
kH
1kH
z
H
10H
d
d
d
d
d
d
+12.0 d
B
Audio Filter Designs: FIR Filters 283






















}
Of course you could also write a function to calculate and populate the IR buffers, but this
one is short enough to code by hand if you want.
The frequency response ( Figure 8.29 ) of an MA Þ lter is always an LPF. The more samples
284 Chapter 8
8.11.2 Lagrange Interpolator
2
The Lagrange interpolator uses a polynomial of order
window of points that you give it. The window is of length
in the above equation. This is
a complex Þ lter because the coefÞ cients change every sample period and they are based on
the window of input values, x
to x
. This Þ
lter can be implemented as a pure math function
call. To facilitate your coding, a Lagrange interpolation function is implemented in your
/*
Function: lagrpol() implements n-order Lagrange Interpolation

double* x Pointer to an array containing the x-coordinates of the

input values
double* y Pointer to an array containing the y-coordinates of the

int n The order of the interpolator, this is also the length of

the x,y input arrays
double xbar The x-coorinates whose y-value we want to interpolate

8.11.3 Median Filter
number of points in window
odd
Acquire samples in windows of
values, then sort and choose the median value as the
lo
t
hig
an
fin
media
valu
0 1 1
2 2
0 1
0
:
1
1
:
2
:
1
3
:
2
(
Audio Filter Designs: FIR Filters 285
The median Þ lter ( Figure 8.30 ) is a very interesting and somewhat strange algorithm. It has
no IR or frequency response. It smoothes an input signal, which is an LPF type of operation,
but it preserves transient edges, which is very un-LPF in nature. It has applications in noise
reduction without losing high-frequency transients. Its central algorithm uses a sorting
mechanism to sort the window of data by amplitude. The median value is chosen from the
sort operation as the output. When the next sample arrives, the window is re-sorted and the
next median value is obtained. To understand how it smoothes a signal without affecting
high-frequency transients, consider the following example.
Example: Design a Þ ve-point median Þ lter and test with example.
Consider this input sequence:
Figure 8.31 .

Figure 8.30: The block diagram of a Þ ve-point median Þ
lter implementation.

Figure 8.31: The Þ rst four windows of the median Þ lter produce an output sequence {1,1,1,2}.
1 1
0 1 1 2
1 2 3
0 1 2
1 2 2
2 3
3 9
2 3
8 9
1 2
1 8
7 8
7 8
7 9
286 Chapter 8
You can see a transient edge where the signal jumps from 1 to 9 and then another transient
where it drops from 9 to 7 to 5. The Þ
rst window operates on the Þ
rst Þ
ve samples and
sorts them from low to high. Then, the median value is chosen as the output. The median
value is shown in a box in the center of each window. You can see the smoothing effect
immediatelyÑthe Þ rst three samples out of the Þ lter are all 1, even though the Þ
rst three
samples vary from 1 to 2. Figures 8.32 and 8.33 show the result of median Þ ltering the signal

FIR Þ
lters can be complicated to design and long convolutions in direct form are slow. You
can use the FIR design tools when you need to create linear-phase Þ lters with very steep
8
7
6
5
4
3
2
1

0
Sampl
Audio Filter Designs: FIR Filters 287
Bibliography
Ifeachor, E. C. and Jervis, B. W. 1993.
Digital Signal Processing, A Practical Approach
Park, CA: Addison-Wesley.
Kwakernaak, H. and Sivan, R. 1991.
, Chapters 3 and 9. Englewood Cliffs, NJ:
Adaptive and Digital Signal Processing
, Chapter 10. Miami: Steward & Sons.
Oppenheim, A. V. and Schafer, R. W. 1999.
Oscillators Þ
nd several uses in audio effects and plug-ins. The obvious use is as an audio test
signal like the one RackAFX provides on the main interface. Additive synthesis of musical
sounds uses multiple sinusoidal oscillators at harmonic frequencies to create complex

Since the pole radius is 1.0, then the
coefÞ
cient is 1.0 as well. The
coefÞ
cient is then
is the pole frequency from 0 to
CHAPTER 9
y(n)
è)
-1
1
2
290 Chapter 9
You can see that the block diagram in Figure 9.1 has no input. Oscillators do not have inputs;
instead they have
(
n
2
1)
2
Figure 9.1: The direct form sinusoidal oscillator
-plane and block diagram.
th
oscillato
start
her
thes
mus
hav
bee
previou
tw
sample
ou
o
¥
, desired oscillation frequency
The design equations are as follows:
2cos(
u
)
b
1.0
Initial conditions:
n
2
1)
5
sin(
2
1
)
Figure 9.2:

Initial conditions that would have produced a sinusoid whose Þ
rst output sample is 0.0.

Figure 9.3: The direct form oscillator is really the feedback side of the bi-quad structure.
292 Chapter 9
9.2 Design a Direct Form Oscillator Plug-In
In our Þ rst version of the direct form oscillator we are going to make it as simple as possible
by restarting the oscillator when the user changes the oscillator frequency. This means we
are going to recalculate the initial conditions as if the oscillator was starting from a phase
of 0 degrees and the Þ rst sample out would have a value of 0.0. After we have that up and
running, we will modify it to change frequency on the ß y, automatically back-calculating the
initial conditions for any given output sample. Here are the oscillatorÕs speciÞ
¥ Monophonic sinusoidal oscillator.
¥ We will need to implement a second-order feed-back block.
¥ We will need a slider for the user to control the oscillation frequency in Hz.
9.4 ).
9.2.2 DirectOscillator GUI
Slider Property
Value
Control Name
Frequency
Variable Type
Variable Namem_fFrequency_Hz
Initial Value
These assignable buttons will trigger your userInterfaceChange() function with their
alues of 50, 51, and 52. See the userInterfaceChange() function for more
Figure 9.5: The DirectOscillator GUI.
294 Chapter 9
9.2.3 DirectOscillator.h File
In the .h Þ le, declare the variables you need to implement the oscillator:
// Add your code here: --------------------------------------------------------- //
//
// coef“ cients, 2nd Order FB

” oat m_f_b1;


” oat m_f_b2;

// delay elements, 2nd Order FB

” oat m_f_y_z1;


” oat m_f_y_z2;


// ” ag to start/stop oscillator


9.2.4 DirectOscillator.cpp File
296 Chapter 9
// calc coeffs and initial conditions


// shuf” e memory





share
298 Chapter 9
n
2
1)
5
sin(
v
nT
Our problem is that we do know the new frequency,
known, then we also know
, but we donÕt know what sample interval
we are on.
However, we can Þ gure it out as follows:
¥ Take the inverse sin of the
1) delay element.
¥ Find the value of
by dividing it by
// coef“ cients according to design equations
m_f_b1 =
2.0*cos(f_wT);
m_f_b2 = 1.0;

q
(
–1
Z
y
Z
y
n
y
q
(
300 Chapter 9
delay element, there are no initial states to update when the frequency is changed; the single
w frequency the same way as in the direct form oscillator.
needs to be updated. A small amplitude variation is observed when the
oscillation frequency changes, but it is small enough to not cause clicks or pops in the output.
It sounds just as smooth as the direct form oscillator when the frequency is adjusted slowly.
The two outputs are labeled
) and
) where the ÒqÓ stands for quadrature. Therefore, there
are two difference equations. The difference equation for
) must be solved Þ rst because
is dependent on it. A GordonÐSmith oscillator block diagram is shown in Figure 9.8 .
The dif
ws:
y
2
P
2
¥
, desired oscillation frequency
The design equations are as follows:

/2)
n
2
1)
5
sin(
2
1
)
Figure 9.8: The GordonÐSmith Oscillator.
The C++ code for the GordonÐSmith oscillator looks as follows (two memory elements have
cookFrequency():
// calculate HS Epsilon

” oat f_wT = (2.0*pi*m_fFrequency_Hz)/(” oat)m_nSampleRate;



In processAudioFrame():
// form yq(n) “ rst

” oat f_yqn = m_f_yq_z - m_fGorSmithEpsilon*m_f_yn_z;


// y(n)

” oat f_yn = m_fGorSmithEpsilon*f_yqn + m_f_yn_z;


// shuf”
e delays




// write out




9.9 ).
Suppose you start at
0 and during each sample period, you read out one value and
advance to the next. At the end of the buffer, you wrap around and start all over. If you
did read out one value per sample period, what would be the resulting frequency of the
waveform?
The answer is
when the index increment is exactly 1.0 through the table. For a
1024-point wave table at a 44,100 Hz sample rate, the table frequency is 43.06 Hz. If you
i
th
Mus
0.
sampl
a
0.
= 0
302 Chapter 9
oating-point numbers with fractional parts. To make any frequency, you calculate the
alue with Equation 9.7 :
desired
is the table length and
Frequenc
Sin
Sa
Tr
Squar
Mod
Norma
Band-limi
Polarit
Bipola
Unipola
transcription out of the table. Linear and polynomial interpolation both overcome these
problems, though there is still distortion in the output. The industry standard is a fourth-order
Lagrange interpolation on the wave table, where the neighboring four points (two to the left
304 Chapter 9
First, add a frequency slider to the UI and connect it to a variable named m_fFrequency_
able
9.2 ). The
limits are chosen as such because they are close to the lower and upper fundamental
Slider Property
Value
Control Name
Frequency
Variable Type
Variable Namem_fFrequency_Hz
Initial Value
9.5.3 WTOscillator.h File
Declare the variables you need to implement the oscillator:
// Add your code here: ------------------------------------------------------- //

// Array for the Table

oat m_SinArray[1024]; // 1024 Point Sinusoid

// current read location

oat m_fReadIndex;

You can see that weÕve added the necessary ingredients (array, read index,
as well as two functions:
¥ cookFrequency() to update the
value when the frequency changes.
306 Chapter 9
bool __stdcall CWTOscillator::prepareForPlay()
{
// Add your code here:
bool __stdcall CWTOscillator::processAudioFrame(” oat* pInputBuffer, ” oat*

pOutputBuffer, UINT uNumChannels, UINT uNumOutputChannels)
{
// Do LEFT (MONO) Channel
// if not running, write 0s and bail


{
pOutputBuffer[0] = 0.0;
// Mono-In, Stereo-Out (AUX Effect)
if(uNumInputChannels == 1 && uNumOutputChannels == 2)
pOutputBuffer[1] = 0.0;
// Stereo-In, Stereo-Out (INSERT Effect)
if(uNumInputChannels == 2 && uNumOutputChannels == 2)
pOutputBuffer[1] = 0.0;
308 Chapter 9
if(uNumInputChannels == 2 && uNumOutputChannels == 2)
pOutputBuffer[1] = fOutSample;
9.6.2 WTOscillator.cpp File

// rising edge2:

” oat mt2 = 1.0/256.0;


” oat bt2 = …1.0;

// falling edge:

” oat mtf2 = …2.0/512.0;


” oat btf2 = +1.0;

// Sawtooth
// rising edge1:

” oat ms1 = 1.0/512.0;


” oat bs1 = 0.0;

// rising edge2:

” oat ms2 = 1.0/512.0;


” oat bs2 = …1.0;

310 Chapter 9


if(i 256)
m_TriangleArray[i] = mt1*i + bt1; // mx + b; rising edge 1
�else if (i = 256 && i 768)

m_TriangleArray[i] = mtf2*(i…256) + btf2; // mx + b; falling edge

else

m_TriangleArray[i] = mt2*(i…768) + bt2; // mx + b; rising edge 2




m_SquareArray[i] = i 512 ? +1.0 : …1.0;
}

able
Slider Property
Value
Control Name
Osc Type
Variable Type
Variable Name
m_uOscType
sine,saw,tri,square
Note
: Here, we use a radio button list with an enumerated string.
9.6.4 WTOscillator.h File
In the .h Þ le, you can see where RackAFX added the variable:
// ADDED BY RACKAFX -- DO NOT EDIT THIS CODE!!! -------------------------------//

**--0x07FD--**

oat m_fFrequency_Hz;

UINT m_uOscType;




// **--0x1A7F--**
// --------------------------------------------------------------------------- //
The m_uOscType is the switch variable and the enumerated list {sin, saw, tri, square}
312 Chapter 9
sin(
k
v
nT
)

5
c
sin
1
v
nT
2
2
1
sin
1
2
nT
2
1
1
sin
1
3
nT
2
2
1
The sa
w-tooth waveform has both even and odd harmonics scaled according to (1/
5
0
2
1
2

sin(3
nT
)
1
1
sin(5
v
nT
)
2
1
The triangle w
aveform has only odd harmonics. The (Ð1)
term alternates the signs of
the harmonics. The harmonic amplitudes drop off at a rate given by 1/(2
+ 1)
which is
exponential in nature.
SQUARE
5
0
1
1
sin((2
k
1
1)
v
nT
)
5
c
sin(
v
nT
)
1
1
sin(3
nT
)
1
1
sin(5
nT
)
1
1
(9.10)
ave is also composed of odd harmonics like the triangle wave. The harmonic
amplitudes drop off at a rate of 1/(2
+ 1), which is not as severe as the triangle wave.
Therefore, the square wave has higher-sounding harmonics and a more gritty texture.
Using these Fourier series equations, you can implement Fourier synthesis (or additive
with the fundamental plus the next Þ ve harmonics (also called partials) of the series given
through 9.10 and create our band-limited tables. The tables will therefore
you need to modify the plug-in to allow for another mode: normal or band-limited. You can
do this with another enumerated UINT data type, using either a slider or radio-button bank.
I will use another radio button bank in this example.
9.7.1 WTOscillator GUI
Start with the UI and add another enumerated UINT by right-clicking inside the second bank
of radio buttons and Þ lling out the properties in
able
9.4 .
Table 9.4:
Button Property
Value
Control Name
Variable Type
Variable Name
m_uTableMode
9.7.2 WTOscillator.h File
This new information appears in the .h Þ

// ADDED BY RACKAFX -- DO NOT EDIT THIS CODE!!! ------------------------------- //

**--0x07FD--**

oat m_fFrequency_Hz;

UINT m_uOscType;






// **--0x1A7F--**
// --------------------------------------------------------------------------- //
As long as we are in the .h Þ le, we need to add more arrays for our band-limited tables. We
want to keep these separate from the other tables to provide both modes of operation.
// Add your code here: ------------------------------------------------------- //

// Array for the Table
314 Chapter 9

oat m_SinArray[1024]; // 1024 Point Sinusoid

oat m_SawtoothArray[1024];
// saw

oat m_TriangleArray[1024];
// tri

oat m_SquareArray[1024];
// sqr
// band limited to 5 partials

oat m_SawtoothArray_BL5[1024];
// saw, BL = 5


oat m_TriangleArray_BL5[1024];
// tri, BL = 5


oat m_SquareArray_BL5[1024];
// sqr, BL = 5

// current read location

oat m_fReadIndex; // NOTE its a FLOAT!

9.7.3 WTOscillator.cpp File
¥ Initialize the tables according to the Fourier series equations. One of the problems with

else

m_SquareArray[i] = i 512 ? +1.0 :
1.0;

// zero to start, then loops build the rest

m_SawtoothArray_BL5[i] = 0.0;


m_SquareArray_BL5[i] = 0.0;


m_TriangleArray_BL5[i] = 0.0;



1)^g+1(1/g)sin(wnT)

for(int g=1; g=6; g++)


{


double n = double(g);


m_SawtoothArray_BL5[i] += pow((”
oat)…1.0,(” oat)(g+1))*




}



1)^g(1/(2g+1+^2)sin(w(2n+1)T)

// NOTE: the limit is 3 here because of the way the sum is constructed

// (look at the (2n+1) components

for(int g=0; g=3; g++)


{


double n = double(g);


m_TriangleArray_BL5[i] += pow((”
oat)…1.0, (” oat)n)*


(1.0/pow((” oat)(2*n + 1),
(”
oat)2.0))*




}


// square: += (1/g)sin(wnT)

for(int
g=1; g=5; g+=2)


{


double n = double(g);


m_SquareArray_BL5[i] += (1.0/n)*sin(2.0*pi*i*n/1024.0);


}


// store the max values

if(i == 0)


{


fMaxSaw = m_SawtoothArray_BL5[i];


fMaxTri = m_TriangleArray_BL5[i];


fMaxSqr = m_SquareArray_BL5[i];


}


else


{


// test and store

if(m_SawtoothArray_BL5[i] � fMaxSaw)


fMaxSaw = m_SawtoothArray_BL5[i];




if(m_TriangleArray_BL5[i] � fMaxTri)


fMaxTri = m_TriangleArray_BL5[i];

316 Chapter 9

if(m_SquareArray_BL5[i] � fMaxSqr)


fMaxSqr = m_SquareArray_BL5[i];


}




// normalize the bandlimited tables

for(int i = 0; i 1024; i++)




// normalize it


m_SawtoothArray_BL5[i] /= fMaxSaw;


m_TriangleArray_BL5[i] /= fMaxTri;


m_SquareArray_BL5[i] /= fMaxSqr;






break;

case tri:


if(m_uTableMode == normal) // normal











else
// bandlimited



fOutSample = dLinTerp(0, 1,











break;

case square:

if(m_uTableMode == normal) // normal

fOutSample = dLinTerp(0, 1, m_SquareArray[nReadIndex],








else // bandlimited


fOutSample = dLinTerp(0, 1, m_SquareArray_BL5[nReadIndex],







break;
// always need a default



fOutSample = dLinTerp(0, 1, m_SinArray[nReadIndex],



)
break;


// add the increment for next time

m_fReadIndex += m_f_inc;


9.11 shows a normal saw-tooth waveform. Figure 9.12 shows a band-limited saw-tooth
waveform.
9.7.5 Square Wave
Figure 9.13 shows a normal square wave. Figure 9.14 shows a band-limited square wave.
d
d
d
d
d
d
d
H
H
1kH
10kH
d
d
d
d
d
d
d
H
H
H
H
318 Chapter 9
Figure 9.12:

ve-harmonic band-limited saw-tooth waveform with

domain (top) and frequency domain (bottom). The aliasing is gone for this 1 kHz signal; it would
reappear when the frequency was raised to the point that the highest harmonic went outside the
Nyquist boundary. The fundamental plus the Þ
ve harmonic peaks are clearly visible.

Figure 9.11: The mathematically perfect saw-tooth waveform with

4 51
Figure 9.14: The Þ ve-harmonic band-limited square wave with

is gone for this 1 kHz signal. There are only two harmonics (the third and Þ
fth)
because the next harmonic would be outside our limit.
Figure 9.13: The mathematically perfect square wave with

320 Chapter 9
9.8 Additional Oscillator Features (LFO)
For the modulated delay lines in the next chapters, we will need to use LFOs with a couple of
additional properties. SpeciÞ cally, we need
¥ A quadrature phase output
¥ Unipolar or bipolar operation
¥ Option to invert the output by 180 degrees
// invert output


322 Chapter 9

oat m_SinArray[1024]; // 1024 Point Sinusoid

oat m_SawtoothArray[1024]; // saw

oat m_TriangleArray[1024]; // tri

oat m_SquareArray[1024];
// sqr

case saw:
if(m_uTableMode == normal) // normal
{


m_SawtoothArray[nReadIndexNext], fFrac);





)

}

{

fOutSample = dLinTerp(0, 1,
m_SawtoothArray_BL5[nReadIndex],
m_SawtoothArray_BL5[nReadIndexNext], fFrac);

fQuadPhaseOutSample = dLinTerp(0, 1,



fFrac);
}
break;


case tri:
324 Chapter 9
// write out




}
processAudioFrame()
¥ Use the doOscillate() function.
bool __stdcall CWTOscillator::processAudioFrame(” oat* pInputBuffer, ” oat*

pOutputBuffer, UINT uNumInputChannels, UINT uNumOutputChannels)
{
// Do LEFT (MONO) Channel
// if not running, write 0s and bail
if(!m_bNoteOn)
{

pOutputBuffer[0] = 0.0;
if(uNumOutputChannels == 2)
pOutputBuffer[1] = 0.0;
9.16 clearly shows the quad phase output: the sin() in the left
channel, cos() is in the right channel.
9.9 Bipolar/Unipolar Functionality
9.9.1 WTOscillator GUI
Add the bipolar/unipolar switch to the UI using the next bank of radio buttons with the
properties in
Table
9.5 . Edit it like you did with the previous buttons and give it the
enumerations. I named my variable Òm_uPolarityÓ and used ÒbipolarÓ and ÒunipolarÓ as my
string/enums. The default will be bipolar.
1.
1.
Table 9.5: The button properties for the polarity control.
Slider Property
Value
Control Name
Polarity
Variable Type
Variable Name
m_uPolarity
bipolar,unipolar
9.9.2 WTOscillator.cpp File
Add the bipolar/unipolar functionality to the very last part of doOscillate() to divide by 2 then
shift by 0.5 as follows:
void CWTOscillator::doOscillate(” oat* pYn, ” oat* pYqn)
{
&#xSNIP;&#x SNI;&#xP SN;&#xIP00;SNIP SNIP SNIP
// write out
*pYn = fOutSample;
*pYqn = fQuadPhaseOutSample;

// create unipolar; div 2 then shift up 0.5


Figure 9.16: The quadrature-phase outputs of the left and right signals. Note: You must put the
1.00
326 Chapter 9


*pYn /= 2.0;
*pYn += 0.5;
*pYqn /= 2.0;
*pYqn += 0.5;


}
Build and test the code; Figure 9.17 shows the output for a unipolar sinusoid.
Here are some projects to try:
Perhaps the most interesting kinds of audio effects, from both listening and engineering
standpoints, are the modulated delay effects. They include the chorus and ß
additionally, modulated delays are also found in some reverb algorithms. These time-varying
Þ lters are actually quite sophisticated and fun to implement, and we have gone to great
lengths to create useful sub-modules. These include the digital delay effect or DDL (digital
delay line) module and the wave table oscillator. If you can design, build, and implement
modulated delay effects, then you are well on your way to proÞ ciency in audio effects coding.
10.1b the amount of delay is constantly changing over
time. The system only uses a portion of the total available delay amount.

Modulated Delay Effects
Figure 10.1: (a) A static delay and (b) modulated delay effect.
–D
dept
–D
328 Chapter 10
The
relates to the size of the portion of the delay line that is being read.
is the speed at which the read index moves back and forth over the
modulated delay section. A low-frequency oscillator (LFO) is used to modulate the delay
amount and the LFO can be just about any kind of waveform, even noise. The most common
waveforms are triangle and sinusoid. In order to make a modulated delay effect, you Þ
rst need
to make a delay line, and then modify it to constantly change and update the read location on
each sample interval.
10.1 The Flanger/Vibrato Effect
The ß
10.2 shows the two states of the modulator with increasing and decreasing pitch
shifting due to the Doppler effect.
10.3 shows alternate ways of diagramming the same
modulator. The vibrato and ß anger are the same effect underneath the hoodÑitÕs only the mix
Modulated Delay Effects 329
Figure 10.2: (a) As the delay increases
, the pitch drops. At the end of the delay, the
the pitch; the maximum delay here is about 10 mSec at 44,100 Hz.
Figure 10.3:
(a) The simplest form of the f langer/vibr
fect. The dotted lines show the limits
of the delay modulation, from 0.0 (no delay) to the full range,
samples. (b) An alternate
version shows the LFO connection to the modulation index to the delay line.
330 Chapter 10
¥ Interpolate the delayed value using linear or polynomial interpolation.
¥ Use
all-pass Þ
lter to make the fractional delay (see the Bibliography).
The ß
anger/vibrato controls consist of:
¥ Modulation depth: how much of the available range is used.
¥ Modulation rate: the rate of the LFO driving the modulation.
¥ LFO type: usually sinusoidal or triangular, but may be anything.
¥ Feedback: as with the normal delay, the feedback has a big effect on the Þ
nal
product.
10.4 , the ß anger/vibrato technically always starts with 0.0 samples of
delay. Because of this, weÕd lik
e to have a LFO that can generate a unipolar output from
10.5 shows a stereo version of the ß anger. A common LFO modulates both delay
lines, but the phases are off by 90 degrees, putting them in quadrature or quad-phase. The
effect is interesting as the two LFOs seem to chase each other, one on the left and the other
The flanger technically modulates the delay right down to 0 samples. For analog
tape flanging, this occurs when the two tape machines come back into perfect
synchronization. This is called
through-zero flanging
or TZF. This means that for an
instant, the output is double the input value. For low frequencies and/or high feedback

Modulated Delay Effects 331
Table 10.1: Delay times for the ß anger and chorus effects.
Min Delay (mSec)
Max Delay (mSec)
Typically 0
10.2 The Chorus Effect
The chorus effect appeared in the late 1970s and was used on countless recordings during the
1980s, becoming a signature sound effect of that decade. The chorusing effect can produce
a range of effects from a subtle moving sheen to a swirling sea-sick de-tuned madness. Like
the ß anger, it is also based on a modulated delay line, driven by an LFO. Although different
manufacturers adopted different designs, the basic algorithm for a single chorusing unit is the
Table
10.1 .
Figure 10.5: A stereo quadrature ß
anger; the dotted lines at the LFO show its 90-degree
332 Chapter 10
¥ Analog chorus units typically do not use any feedback (because our DDL module already
has feedback, we can keep it for e
¥ The range of delay times over which the device operates is the most signiÞ
difference.
10.6 and 10.7 ). Also, the modulation depth and the location of the
center of the delay area vary across manufacturers ( Figure 10.8 ).

Figure 10.6: The basic chorus module. Note the feedback path is optional and not
found in typical analog chorus units. The gray area is the ß anger keep-out zone.
Figure 10.7:
e block diagram that describes the same chorus module.
Figure 10.8:
o different chorus modules with the same
maximum depth (dotted lines) but a different center of operation.


Modulated Delay Effects 333
Many variations on the basic chorus module exist, including:
¥
Multiple chorus modules with different centers of operation
¥ Different LFO frequency or LFO phases applied to different modules
¥ Series modules
¥ Parallel modules
¥ Bass modules with a low-pass Þ
lter/high-pass Þ
lter on the front end

10.9 shows the stereo quad chorus with the left and right LFOs out of sync by
90 degrees. In this permutation, the maximum depths of the left and right channels are
independent, as well as the ability to adjust them. The ß owchart for the basic modulated delay
effect for each processAudioFrame() is shown in Figure 10.10 .

Figure 10.9: A stereo quadrature chorus; feedback paths are optional.
Figure 10.10: The ß
chart for modulated delay effects.

Ou
I

Ou
I
v
dela
tim
o
LF
offse
ne
LF
334 Chapter 10
You will use the DDL module to do most of the low-level work and generate the delayed
values.
10.11 shows the block diagram.
We will provide several user interface (UI) components to control the plug-in and their
variables are available for a parent plug-in to use as well. We will use the existing DDL
module and wave table oscillator plug-ins. Since the DDL module has feedback path control,
there will be many ways to combine these modules into sophisticated units. The controls we
will provide for the user are discussed next. The plan of operation is as follows:
Modulated Delay Effects 335
¥ processAudioFrame()
Call the doOscillate() function on the LFO.
336 Chapter 10
Table 10.2 : The GUI components, variable names, and values.
Slider Property
Value
Control Name
Variable Type
Variable Namem_fModDepth_pct
Low Limit0
High Limit100
Initial Value50
Slider Property
Value
Control Name
Rate
Variable Type
Variable Namem_fModFrequency_Hz
Initial Value
0.18
Slider Property
Value
Control Name
Variable Type
Variable Namem_fFeedback_pct
Initial Value
Slider Property
Value
Control Name
Button Property
Value
Control Name
Mod Type
Variable Type
Variable Name
m_uModType
Enum StringFlanger, Vibrato, Chorus
Button Property
Value
Control Name
LFO
Units
Variable Type
Variable Name
m_uLFOType
10.3.2 ModDelayModule GUI
Table
10.2 .
10.3.3 ModDelayModule.h File
Add the member objects, variables, and function declarations to the .h Þ

// Add your code here: ------------------------------------------------------- //

CWTOscillator m_LFO; // our LFO


CDDLModule m_DDL;
// our delay line module

// these will depend on the type of mod

” oat m_fMinDelay_mSec;


” oat m_fMaxDelay_mSec;

Modulated Delay Effects 337
// functions to update the member objects




// cooking function for mod type


338 Chapter 10
able
Modulated Delay Effects 339

case Chorus:




m_fMinDelay_mSec = 5;


m_fMaxDelay_mSec = 30;


340 Chapter 10

Modulated Delay Effects 341
342 Chapter 10
10.3.6 Challenge
Add another radio button switch to turn on or off TZF that we employ. To turn off TZF,
10.5 and it consists of
two identical ß
anger delay lines running off of LFO values that are in quadrature phase.
Since our mod delay module has a built-in LFO and delay, we can assemble this plug-in
quickly.
10.4.1 Project: StereoQuadFlanger
Make sure to add all the existing modules when you create the new StereoQuadFlanger
¥ ModDelayModule.h.
¥ DDLModule.h (because ModDelayModule # includes it).
¥ WTOscillator.h (because DDLModule #includes it).
10.4.2 StereoQuadFlanger GUI
We will use a simpler UI consisting of the controls shown in
Table
10.4 .
10.4.3 StereoQuadFlanger.h File
In the .h Þ le, declare two instances of the ModDelayModule, one for the left and one for the
right. Also add one helper function to initialize and update the modules. These two member
variables and the one function are all that you need.
// Add your code here: ------------------------------------------------------- //

CModDelayModule m_ModDelayLeft;


CModDelayModule m_ModDelayRight
;

void updateModDelays();

// END OF USER CODE ---------------------------------------------------------- //
Modulated Delay Effects 343
Table 10.4 : The GUI elements for the StereoQuadFlanger.
Slider Property
Value
Control Name
Variable Type
Variable Namem_fModDepth_pct
Initial Value
Slider Property
Value
Control Name
Rate
Variable Type
Variable Namem_fModFrequency_Hz
Initial Value
0.18
Button Property
Value
Control Name
LFO
Variable Type
Variable Name
m_uLFOType


10.4.4 StereoQuadFlanger.cpp File
Add the one helper function updateModDelays(); this is also the function that forces the two
Slider Property
Value
Control Name
Variable Type
Variable Namem_fFeedback_pct
Initial Value
344 Chapter 10









// FLANGER!










// cook them










¥ Forward the call to the member objects.
¥ updateModDelays().
bool __stdcall CStereoQuadFlanger::prepareForPlay()
{
// Add your code here:
// call forwarding!




// dont leave this out … it inits and cooks


Modulated Delay Effects 345
Forward the processAudioFrame() function to the member objects to do the processing.
bool __stdcall CStereoQuadFlanger:: processAudioFrame(” oat* pInputBuffer, ” oat*
pOutputBuffer, UINT uNumInputChannels,
UINT uNumOutputChannels)
{
// Do LEFT (MONO) Channel

)

// Mono-In, Stereo-Out (AUX Effect)
if(uNumInputChannels == 1 && uNumOutputChannels == 2)


// Stereo-In, Stereo-Out (INSERT Effect)
if(uNumInputChannels == 2 && uNumOutputChannels == 2)

1, 1);
10.13 ). If we play it right, we can code this with a minimum of effort,
but we have to be very careful about book-keeping since we have many variables here.
The UI will also be more complicated, with three sliders per chorus module: depth, rate,
and feedback. The LFO type is Þ xed as a triangle for all units. L is in quad phase, R is in
inverse-quad phase, and C is normal.
346 Chapter 10
10.5.1 Project: StereoLCRChorus
Make sure to add all the existing modules when you create the new project:
¥ ModDelayModule.h.
¥ DDLModule.h (because ModDelayModule #includes it).
¥ WTOscillator.h (because DDLModule #includes it).
10.5.2 StereoLCRChorus GUI
Table
I
I
L
Ou
Ou
R
C
L
L
C
C
R
R
Modulated Delay Effects 347






// function to transfer out variables to it and cook


// END OF USER CODE ---------------------------------------------------------- //
10.5.4 StereoLCRChorus.cpp File
There is nothing to initialize in the constructor because we have no variables; the member
objects will initialize themselves at construction time. Implement the updateModules() function
Variable names:
0.18
Variable names:
m_fModFrequency_Hz_L
m_fModFrequency_Hz_C
m_fModFrequency_Hz_R
Variable names:
348 Chapter 10

// 1: quad phase

// 0: normal

// 1: quad phase





// this one is inverted



















// CHORUS!














// cook them












}
¥ Forward the calls to prepareForPlay() to the member objects.
bool __stdcall CStereoLCRChorus::prepareForPlay()
{
// Add your code here:
// call forwarding!






Modulated Delay Effects 349
// dont leave this out … it inits and cooks


350 Chapter 10
// sum to create Left Out


// Mono-In, Stereo-Out (AUX Effect)
if(uNumInputChannels == 1 && uNumOutputChannels == 2)


// Stereo-In, Stereo-Out (INSERT Effect)
if(uNumInputChannels == 2 && uNumOutputChannels == 2)


10.14 ).
10.6.2 Multi-Flanger (Sony DPS-M7
The DPS-M7 has some intensely thick modulation algorithms. This one has two ß
circuits that can be combined in parallel or series on either channel. The channels are also
10.15 ).
10.6.3 Bass Chorus
The bass chorus in Figure 10.16 splits the signal into low-frequency and high-frequency
components and then only processes the high-frequency component. This leaves the
fundamental intact. The comb Þ
ltering of the chorus effect smears time according to how
Modulated Delay Effects 351
much delay is being used. For the bass guitar, this results in a decreased fundamental with an
ambiguous transient edge or starting point. Because bass players need to provide a deÞ
this effect, use the LinkwitzÐRiley low-pass and high-pass Þ
lters to split the signal. Invert
ltersÑit doesnÕt matter which oneÑso that their phase responses
sum properly.
10.6.4 Dimension-Style (Roland Dimension D
This chorus unit (
chorus. Known for
its subtle transparent sound, it features a shared but inverted LFO and an interesting output
1. Dry
2. Chorus output
3. Opposite channel chorus output, inverted and high-pass Þ
ltered
Figure 10.14: A stereo cross-ß

I
I
Ou
Ou
352 Chapter 10
Figure 10.16: A bass chorus.
Figure 10.15: DPS M7 multi-ß
anger.
I
1
Ou
3
4
3
Ou
I
I
Ou
Modulated Delay Effects 353
Figure 10.17 : A dimension-style chorus.
The controls on the original unit consisted of four switches only; these were hardwired
I
I
Ou
Ou
354 Chapter 10
10.6.5 Deca-Chorus (Sony DPS-M7
The deca-chorus has 10 (deca) chorus units, 5 per channel. Each chorus has its own pre-delay,
10.18 ).
Figure 10.18: DPS-M7 deca chorus.
I
Ou
t
Ou
1
1
2
3
4
5
1
2
3
4
5
2
3
4
5
1
2
4
5
2
1
3
5
2
5
1
2
3
5
Out 1
Modulated Delay Effects 355
Bibliography
Cole, M. 2007. ÒRoland dimension C clone for Eventide 7000, Orville and H8000.Ó Accessed August 2012
from http://www.eventidestompboxes.com/forummedia/PATCHES/Orville/ProFXalgorithms/Roland%20
Coulter, D. 2000.
Digital Audio Processing
, Chapter 11. Lawrence, KS: R&D Books.
Dattorro, J. 1997. Effect design part 2: Delay line modulation and chorus.
Journal of the Audio Engineering
Reverb algorithms might represent the Holy Grail of audio signal processing. They have an
appeal that seems universal, perhaps because we live in a reverberant world. Our ears are
time-integrating devices that use time domain cues and transients for information, so we are
sensitive to anything that manipulates these cues. In this chapter, we discuss reverb algorithms
as applied mainly to room simulation. There are two general ways to create the reverberation
effect:
 Reverb by direct convolution—the physical approach.
 Reverb by simulation—the perceptual approach.
In the physical approach, the impulse response of a room is convolved with the input signal
in a large  nite impulse response (FIR)  lter. For large rooms, these impulse responses
might be 5 to 10 seconds. In the mid 1990s, Gardner developed a hybrid system for fast
convolution that combined direct convolution with block fast Fourier transform (FFT)
processing (Gardner 1995). Around the same time, Reilly and McGrath (1995) described
a new commercially available system that could process 262,144-tap FIR 
lters for

Reverb Algorithms
358 Chapter 11
across the decades. Most of this chapter is owed to their work in the  eld. We will focus on
nd computationally ef cient algorithms for interesting
reverberation effects.
Griesinger (1989) states that it is impossible to perfectly emulate the natural reverberation of
a real room and thus the algorithms will always be approximations. It seems that the area of
reverberation design has the most empirically derived or trial-and-error research of just about
any audio signal processing  eld. There is no single “correct” way to implement a reverb
algorithm, so this chapter focuses on giving you many different reverberator modules to
experiment with.
11.1 Anatomy of a Room Impulse Response
The 
rst place to start is by examining impulse responses of actual rooms. There are several
11.1 shows the impulse response plots for two very different spaces; a large concert
hall and a cathedral. The initial impulse is followed by a brief delay called the
pre-delay
. As
the impulse pressure wave expands, it comes into contact with the nearby structures—walls,
 oor, and ceiling—and the  rst echoes appear. These initial echoes, called
early reß ections
Reverb Algorithms 359
11.1.1
RT
: The Reverb Time
The most common measurement for reverb is the
reverb time. Reverb time is measured
by  rst stimulating the room into a reverberant state, then turning off the source and plotting
the pressure-squared level as a function of time. The amount of time it takes for this energy
decay curve to drop by 60 dB is the
reverb time
, or
. Sabine’s (1973) pioneering work in
this area leads to the following formula in Equation 11.1 :

volume of room in ft
surface area of room in ft
RAve
average absorption coefficient
Figure 11.1: The impulse responses of a large hall and cathedral.
Figure 11.2: A generalized model of a rev
5 18522
Concer
Hal
Reverberatio
Reflection
Reflection
Reverberatio
360 Chapter 11
Sabine measured and tabulated the absorption coef
arious materials. The units
are given in
. A room made of several materials is  rst partitioned to  nd the partial
surface area of each material type, then the average is found by weighting the areas with the
absorption coef cients and summed. The reverb time is probably the most common control,
found on just about every kind of reverb plug-in.
11.2 Echoes and Modes
In Schroeder’s early work, he postulates that a natural sounding arti cial reverberator has
both a large echo density and a colorless frequency response. The echo density is simply the
number of echoes per second that a listener experiences in a given position in the reverberant
environment ( Equation 11.2 ).

If the echo density is too lo
w the ear discerns the individual echoes and a  uttering sound is
volume of room
Reverb is often modeled statistically as decaying white noise, which implies that ideal
Reverb Algorithms 361

c

a
n
b
a
n
b
a
n
b
half wave numbers 0, 1, 2, 3...
l
,
,
1
x
2
, width
1
y
2
and hei
Abov
11.3 shows a  ctitious room example with the bell-shaped
resonances.
The
is the number of resonant peaks per Hz. Physicists call the resonant
eigenfrequencies
(note this is not an acoustics-speci
c term; an
eigenfrequency is the resonant frequency of any system). Schroeder’s second postulation is that
for a colorless frequency response, the modal density should be 0.15 eigenfrequencies/Hz or
one eigenfrequency every 6.67 Hz or approximately 3000 resonances spread across the audio
Amplitud
0.
–3.
Frequenc
362 Chapter 11
the modal density as it relates to the volume of the room and modal frequency. Equation 11.5
shows that the resonances b
y.
V

m
5
volume of the room

5
modal fre
The
energy decay r
plot (or EDR) shows how the energy decays over both frequency and
time for a given impulse response of a room.
11.4 shows a very simple 
ctitious EDR.
11.4 , notice that the frequency axis comes out of the page; low frequencies are in
the back. It also shows that this room has a resonant peak, which forms almost right away
Figure 11.4:
-axis), frequency (
-axis) and
amplitude (
-axis) of the energy decay of a room.
Schroeder’s rules for natural reverb:
Echo density: At least 1000 echoes/sec (Greisinger: 10,000 echoes/sec)
Modal density: At least 0.15 eigenfrequencies/Hz
In physical rooms we know that:
Echo density increases with the square of time.
Modal density increases with the square of frequency.
Amplitud
–20
–40
–60
–80
–10
–12
0.
2
D
A
Reverb Algorithms 363
in time. In an EDR, the modal density is shown across the
-axis (time). This simpli ed plot shows just one resonance and no
echo density build-up for ease of viewing.
11.5 shows the EDR of an actual room, in
this case a theater. The eigenfrequencies are visible as the ridges that run perpendicularly to
y axis.

11.6 shows the EDR for a cathedral. In this case, the echo pile-ups are clearly visible
running perpendicular to the time axis. Comparing both EDRs shows that both rooms have
high echo and modal densities; therefore, they should be good reverberant spaces. Both
EDRs show an initial high frequency roll-off and, especially in the theater’s case, the high
frequencies decay faster than the low frequencies. The high-frequency decay is a property of
the treatment of the room surfaces along with the fact that the air absorbs high frequencies
more easily than low frequencies. The high-frequency energy decay in the theater is caused
Fre
–10
–20
–30
–40
–50
–60
–80
–90
–10
0.
0.
0.
0.
0.
0.
0.
0.
0.
10
10
2
364 Chapter 11
unnatural way. The majority of the rest of the chapter is devoted to revealing, analyzing, and
explaining these b
with Schroeder’s reverb modules.
11.3 The Comb Filter Reverberator
One of the reverberator modules that Schroeder proposed is a comb  lter. Remember that we
are looking for modules that produce echoes and the comb  lter with feedback will do just
that. The comb  lter reverberator is familiar looking because it’s also the delay with feedback
7 . We derived the difference equation and transfer function
for the comb  lter in
11.7 in Chapter 7 :
x
D
D
H
(
z
)
5
z
D
We also performed a pole-zero analysis and generated frequency plots. We showed that the
feedback path caused a series of poles evenly spaced around the inside of the unit circle. The
each time through the loop.
Figure 11.6: An EDR for a cathedral.
-4
-6
-8
-10
-12
-14
2
3
4
5
6
4
10
3
10
2
Tim
Reverb Algorithms 365
While
11.8 might look simple, the results certainly trend in the right direction. The
f
f
D
5
Figure 11.7: The basic comb Þ
lter.
Figure 11.8: The poles in the
-plane produce the resonances. The feedback that
. In this example,
= 8 samples, feedback = 80%.
g
d
d
d
d
d
kH
kH
kH
kH
kH
kH
kH
kH
kH
kH
366 Chapter 11
So, the comb  lter produces
(or
/2 resonances from DC
/
. This is exactly what we found
11.9 shows an example of four comb  lters in parallel. Each has its own delay time
4) and gain (
4). However, care must be taken with the gain values. As you
7 , the value of

(11.8)


RT
1/
g
2
or
1/
10
DT
(11.9)
10
2
3
DT

Reverb Algorithms 367
This means we can control the reverb time by using the gain factor
the delay length
tradeoff is that if we increase
ND

D
a
i
5
0
or
(11.10)
368 Chapter 11
The echo density for the parallel combs with delay lengths close to each other is given in
Equation 11.11
i
th comb filter
E
N
D
Knowing the desired
and
you can then calculate the number of parallel comb 
lters
with average delay time
with Equation 11.12 .
"

E
Plugging Schroeder’s values of
into Equation 11.12 yields
12 and the average delay time
11.4 The Delaying All-Pass Filter Reverberator
Schroeder also proposed the delaying all-pass  lter (APF) as a reverberator unit. The impulse
Reverb Algorithms 369
Inspection of the block diagram in
11.10 reveals that this is a rather complex feed-
back/feed-forward structure. We need to extract the difference equation so we can synthesize
the reverb unit. To start, we label the nodes
) and
) in the block diagram, then fashion
) with respect to them:
Now, we expand out the internal nodes and use the familiar time and frequency shifting
transform to continue Equation 11.14 .
Examining the last term in Equation 11.14 ,
), we can rearrange Equation 11.13 to
gx
2
g
d
n
D
)
5
y
(
n
D
)
1
gx
(
n
D
)
2
Substituting
Figure 11.11: Impulse response of SchroederÕs APF reverberator; the frequency response is ß
The delay time was
= 35 mSec with
= 0.6.
g
g - g
370 Chapter 11
11.5 More Delaying All-Pass Filter Reverberators
There are multiple ways to synthesize the delaying APF.
11.12 shows another one.
D eriving the difference equation is easier for this version; we only need to de ne one extra
) in Equation 11.17 :

(
x
D
y
n
52
gw
(
n
1
w
(
n
D
)

g
3
x
n
1
gw
(
n
D
)
4
3
x
n
D
)
(
n
2
4
52
gx
n
2
g
n
D
)
n
D
)
n
2

n
n
D
)
n
D
)
In order to 
nish,  nd
) from the second line in Equation 11.18 and notice that this is
the same as the last two terms in Equation 11.17 .
gw
D
Thus, the last two terms in Equation 11.17 can be replaced by
), and the 
difference equation matches Schroeder’s APF:
gx
D
D

11.13 shows another version of the same APF. The proof that it has the same
difference equation is easy if you look at the node
) in Equation 11.20 :
x
D
y
Figure 11.12:
Another version of the delaying all-pass reverberator.

g
Reverb Algorithms 371
These are the  rst two lines in the derivation for the delaying APF above.
11.14 shows
x
Equation 11.21 is identical to Equation 11.20 , but care must be taken in the synthesis
of this in code—you must form the internal nodes  rst to avoid a zero-delay loop (the
) term above). Inverted APFs simply swap signs on the
coef
cients. This has
the effect of inverting the impulse response while keeping the frequency response



372 Chapter 11
11.6 Schroeder’s Reverberator
Schroeder combined a parallel comb  lter bank with two APFs to create his 
rst design. The
comb  lters produce the long series of echoes and the APFs multiply the echoes, overlaying
their own decaying impulse response on top of each comb echo. The resulting reverberation
unit, shown in
11.15 , sounds marginal but it is very simple to implement.
Schroeder suggests that the ratio of smallest to largest delay in the comb  lters should be
about 1:1.5 and originally speci ed a range of 30–45 mSec in total. The 1:1.5 ratio rule turns







Reverb Algorithms 373
Figure 11.16: The LPF and comb Þ
lter combination.
The APFs should have the following properties:
 Choose delays that are much shorter than the comb  lters, 1 mSec to 5 mSec.
11.6 (Moorer 1979). Placing a low-pass  lter (LPF) in the comb 
lter’s
f the high-frequency content of successive echoes exponentially,
which is what we want. The LPF–comb reverberator block diagram is shown in
11.16 .
)will introduce not only low-pass  ltering, but also its own impulse response into the echoes
11.17 the LPF is shown in the dotted box; notice
that it is turned around backward to follow the  ow of the feedback path.
To 
nd the difference equation, it is easier to start in the frequency domain with the
transforms of the comb and  rst-order feed-forward  lter, since we are already familiar with
them by now ( Equation 11.22
g
D
z
)
5
1
(11.22)
374 Chapter 11
Figure 11.17: The LPFÐcomb Þ
lter expanded.
Filtering the feedback loop in the frequency domain is done by simply multiplying the LPF
in
11.23 .
z
)
5
z
D
H
z
)
g
D
z
D
g
D
g
z
D
g
D
g
(
z
)
z
)
5
z
D
g
The next step is to separate variables and multiply out the denominator:

g
5
X
z
)
z
(
z
)
3
1
g
(
z
)
z
D
5
X
z
)
z
2
g
Y
(
z
)
(
z
)
z
(
z
)
z
X
z
)
z
D
g
z
)
z
D
(
z
)
(
z
)
z
(
z
)
z
X
z
)
z
D
z
)
z
D
2
(11.24)
Reverb Algorithms 375
Lastly, take the inverse
n
n
1)
(
n
D
)
5
x
n
D
)
n
D
2
1)
(11.25)
In order for the  lter combination to remain stable (after all, it is a massive feedback system)
and
can be related as in Equation 11.26 :
where
g
Because the
g
g
11.8 Moorer’s Reverberator
Moorer proposed a similar design to Schroeder’s reverberator which uses a parallel bank of
LPF–comb  lters. Because the LPFs remove energy from the system, more units are needed
11.18 you can see the differences from Schroeder’s reverb—there are more comb
units and only one all-pass on the output. The same care must be tak
Comb FilterDelay (mSec)
1500.460.4482
2560.470.4399
3610.4750.4350
4680.480.4316
5720.490.4233
6780.500.3735
All-Pass FilterDelay (mSec)
160.7–
Note:
2 seconds, total
0.83
376 Chapter 11
11.9 Stereo Reverberation
Conducting listening tests, Schroeder (1984) found that listeners overwhelmingly preferred
a stereo reverb to a mono version of the same algorithm. Both Schroeder and Moorer’s
reverbs can be adapted for stereo usage. In fact, this scheme can be used with just about any
reverb consisting of comb and APFs. The  rst thing to note is that mathematically, there is
no reason why you can’t place the APFs before the comb 
lters since their transfer functions
domain. Then, the individual outputs of the comb  lters can be combined
through a mixing matrix to provide the left and right outputs. The mixing matrix is an array
of weighting values for each comb  lter’s output. Jot proposed that the matrix should be
orthogonal for the most uncorrelated output. The mixing matrix is shown in Equation 11.28 .
The rows are the outputs of the comb  lters and the columns are left and right.
11.19
shows a mixing matrix for the left channel of a Schroeder reverb unit.
Figure 11.18: MoorerÕs reverb.
g

-g

g

g
g

g
g
g
Z
Z
Z
Z
Z
Z
Z

Reverb Algorithms 377
Figure 11.19:
A stereo implementation for a Schroeder reverbÑonly one channel is shown for
clarity; the right output is fashioned in a similar way. Notice the mix matrix values do not have to
1.0 but need to follow the orthogonality of alternating signs.

11.10 Gardner’s Nested APF Reverberators
Schroeder experimented with reverbs made only from delaying APF modules in series. The
abundant time smearing suggested that this might be a viable option. Unfortunately the
8 .
11.20 shows the same delaying APF structure as
shown in
Com
Com
+0.
Com
+0.

Com
+0.
Com
+0.
Com
-0.
Com
+0.
Com
-0.
APF
APF
Righ
Ou
APF
Lef
378 Chapter 11
The sequence of operation—reading before writing—is key for implementing this design.
Speci
1. Read the output of the last delay cell,
2. Read the value of the  rst delay cell,
3. Form the new value for the last cell
) and write it back into the cell as
).
4. Form the new value for the  rst cell
Gardner’s idea was to nest multiple APF structures inside each other so that they
shared
the same delay line. This would produce layers of embedded echoes.
11.20 shows a
delaying APF with a total delay time of
. Nesting another APF with a delay of
inside it
Gardner also de
w schematic representation of his nested  lter structures that
removes the clutter of the delay cells, gain, and summation components.
11.22 shows
a nested APF structure.
2 and gain
while the inner
1 and
as its values. Additionally, pre- and post-delays may be added to the
transversal delay line before and after the nested structure as shown in
11.23 .
Figure 11.20:
The delaying APF structure sitting across a transversal delay line.

Figure 11.21:
wo delaying APF structures sharing the same delay line.
z z
Reverb Algorithms 379
Consider the nested APF structure in
11.24 . It has an outer APF with a delay time of
0.3 and two inner APFs with delay and gain values of 22 mSec (
0.6). This is actually the  rst part of one of Gardner’s larger designs.
Figure 11.25 shows the impulse responses as each APF is added to the structure.
You can certainly see how the echo density begins to grow as you add each APF unit. We note
several things about the nesting system:
 In the example, the  rst nested APF is 22 mSec in length; because of the commutative
property of the delay operator, it doesn’t matter where the 22 mSec delay is placed within
the 35 mSec outer element.
 The 8.3 mSec APF is not nested inside the 22 mSec APF; it comes anywhere after it but
still inside the outer APF.
Figure 11.23: Pre- and post-delays of length
and
have been added to the nested APF
structure. The second diagram shows how Gardner would lay this out schematically.
Figure 11.22: A nested APF schematic; Gardner gave the delay times in mSec.
380 Chapter 11
 Echo density piles up as time increases.

The system can produce ringing or instabilities.
11.26 shows Gardner’s three reverb designs for small, medium, and large room
simulations. In each algorithm, the reverb time is controlled by the loop gain or “g” control
(notice there are two of them in the medium room algorithm).
Figure 11.25: (a) The output of the single outer APF with delay of 35 mSec and gain of 0.3
rst nested APF with delay of 22 mSec has
been added. (c) The second nested APF with delay of 8.3 mSec has been added to the Þ
rst two.

Figure 11.24:
lter and two nested units.
(0.3
(0.4
(0.6
13230 17640
Reverb Algorithms 381
Figure 11.26: From top to bottom these are GardnerÕs nested APF reverb algorithms for small,
two input locations. Also notice the use of pre- and post-delays on some of the nested modules.
11.10 Modulated APF and Comb/APF Reverb
The modulated delay line can be used to further increase time smearing and echo density
by using it in an APF module. The low-frequency oscillator (LFO) rate is kept low (around 1 Hz).
Dattorro's design also uses modulated comb filters (chorus modules) in addition to modulated
APFs to further reduce the overall complexity and memory requirements. The modulated APF is
shown in Figure 11.27.
Figure 11.27: A modulated APF.
Figure 11.28: Block diagram of Dattorro's reverb.
The modulated APF in Figure 11.27 modulates only the very end of the delay line, producing
a delay described by Equation 11.29. Dattorro's reverb in Figure 11.28 has a block diagram
that reveals its recirculating tank circuit. This figure-8 circuit could be applied to the other
reverb block diagrams to generate countless variations on this theme.
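As a rough sketch of how the very end of a delay line can be modulated (not the book's object code), the fractional read position can be swept by the LFO and read with linear interpolation; all names and the depth handling below are illustrative assumptions:

```cpp
// Sketch: LFO-modulated read tap on a circular delay buffer, with linear interpolation.
float readModulatedTap(const float* buffer, int bufferLen, int writeIdx,
                       float baseDelay, float depthSamples, float lfoValue /* -1..+1 */)
{
    float delay = baseDelay + depthSamples * lfoValue;    // D(n) = D + depth*LFO(n)
    float readPos = (float)writeIdx - delay;
    while (readPos < 0.0f) readPos += (float)bufferLen;

    int i0 = (int)readPos;
    int i1 = (i0 + 1) % bufferLen;
    float frac = readPos - (float)i0;
    return buffer[i0] + frac * (buffer[i1] - buffer[i0]); // interpolated x(n - D(n))
}
```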
Notice something odd about Dattorro's block diagram in Figure 11.28—it has no output y(n) node. In fact,
the left and right outputs are taken from various locations within the delay lines, marked a–f
in the diagram. This is a mono-in, stereo-out reverberator. The first LPF is marked "diffusion"
while the second pair (LPF2 and LPF3) are designated "damping." The first filter controls
the diffusion in the series APFs while the second pair controls the high-frequency roll-off
in the tank circuit. These LPFs are all DC-normalized single pole feedback filters shown in
Figure 11.29.
Table 11.2 gives the various values for the filters, followed by the equations that give the left
and right outputs. Values for fs = 44.1 kHz are calculated and given in the tables. Figure 11.30
shows the entire reverb algorithm and Table 11.3 lists the control ranges and defaults.
Table 11.2: Gain and delay values for Dattorro's plate reverb (delays are given in samples at
Dattorro's original sample rate and at fs = 44.1 kHz; only some rows survived extraction).
Fixed delays:        4217 -> 6241    4453 -> 6590
Modulated APF delays: 908 +/-8 -> 1343 +/-12    672 +/-8 -> 995 +/-12
Figure 11.29: (a) A one-pole feedback filter, easy to use in reverb algorithms, including the
damping filter and LPF type module. (b) This version merely reverses the effect of the slider
or control.
Figure 11.30: Dattorro's plate reverb algorithm.
Table 11.3: Control values for Dattorro's plate reverb.
Control       Range      Default
Decay         0.0–1.0    0.5
Bandwidth     0.0–1.0    0.9995
Damping       0.0–1.0    0.0005
The left and right outputs (Equation 11.30) are summed from points within the delay lines
marked a–f in Figure 11.28:

yL(n) = a[266] + a[2974] − b[1913] + c[1996] − d[1990] − e[187] − f[1066]
yR(n) = d[353] + d[3627] − e[1228] + f[2673] − a[2111] − b[335] − c[121]

At fs = 44.1 kHz the tap indices scale to:

yL(n) = a[394] + a[4401] − b[2831] + c[2954] − d[2945] − e[277] − f[1578]
yR(n) = d[522] + d[5368] − e[1817] + f[3956] − a[3124] − b[496] − c[179]    (11.30)

The generalized feedback delay network (FDN) is shown in Figure 11.31.
If you look at Figure 11.31 and think about Schroeder's parallel comb filter bank, you can see
that this is a variation, indeed a generalization, on the structure. In the general FDN, every
possible feedback path is accounted for. Additionally, each delay line has its own input and
output coefficients.
An equivalent way to look at Figure 11.31 is shown in Figure 11.32. You can see that the
two delay lines feed a matrix of feedback gain coefficients that routes every output back to
every input.
Figure 11.32:
Another version of the two-delay-line FDN.
It is a unitary matrix. This reverberator would ring forever because of the 1.0 gain values. Jot
proposed adding an absorptive coefficient gi to each delay line. For a colorless reverb, the
value for gi is given by Jot in Equation 11.32.
In Figure 11.33, each delay line undergoes the proper attenuation to keep the reverb colorless.
However, it does not include the frequency-dependent absorptive losses we noted from the
energy decay relief diagrams. To accomplish this, Jot then suggested inserting LPFs instead
Figure 11.33: The FDN with decay factor control.
of the static attenuators. The LPF magnitudes are cleverly chosen in relation to the frequency-
dependent reverb time, RT60(ω), in Equation 11.33:

20 log10 |Hi(ω)| = −60 · Di T / RT60(ω)    (11.33)

where |Hi(ω)| is the magnitude of the filter at some frequency ω, Di is the delay length in
samples, and T is the sample period. The resulting structure is shown in Figure 11.34.
Finally, Figure 11.35 shows a generalized version of the FDN. It does not include the absorptive
values or LPFs, for clarity. The feedback matrix is always square, N × N, where N
is the number of delay lines. Because it is square, it can be made to be unitary.
As an experiment, you can try to implement a four delay line version using the unitary matrix in
Equation 11.34 and gain coefficients of your choice.
You can also try to de-correlate the four delay line outputs by using Jot's orthogonal matrix.
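As a rough illustration of this experiment (not the book's code), here is a minimal four-delay-line FDN using a Householder-style unitary feedback matrix; the delay lengths, gain, and names are all illustrative assumptions:

```cpp
#include <vector>

// Sketch: four-delay-line FDN with A = I - (2/N)*ones(N,N), which is unitary.
struct SimpleFDN
{
    static const int N = 4;
    std::vector<float> delayLine[N];
    int writeIdx[N] = {0, 0, 0, 0};
    int delayLen[N] = {1687, 1601, 2053, 2251};   // illustrative, roughly mutually prime
    float g = 0.83f;                              // absorptive gain per Jot (illustrative)

    SimpleFDN() { for (int i = 0; i < N; i++) delayLine[i].assign(delayLen[i], 0.0f); }

    float process(float xn)
    {
        // read the oldest sample from each line; sum for the (mono) output
        float d[N], yn = 0.0f;
        for (int i = 0; i < N; i++) { d[i] = delayLine[i][writeIdx[i]]; yn += d[i]; }

        // Householder feedback: (A d)_i = d_i - (2/N)*sum(d); lossless when g = 1
        float sum = d[0] + d[1] + d[2] + d[3];
        for (int i = 0; i < N; i++)
        {
            float fb = d[i] - 0.5f * sum;
            delayLine[i][writeIdx[i]] = xn + g * fb;          // write input + attenuated feedback
            writeIdx[i] = (writeIdx[i] + 1) % delayLen[i];
        }
        return 0.25f * yn;
    }
};
```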
11.14 Other FDN Reverbs
Smith (1985) developed a variation on the FDN theme with his waveguide reverberators.
Each waveguide consists of two delay lines that move in opposite directions, connected at
scattering junctions. The network can be made to be lossless by adhering to mathematical
conditions involving the matrices' eigenvectors (Smith and Rocchesso 1994).
Dahl and Jot (2000) proposed another UFDN type of reverb algorithm based on a structure
they call the absorbent all-pass filter (AAPF). Figure 11.36 shows the block diagram of the
AAPF, which consists of a standard delaying APF with an LPF inserted in the signal path. The
complete algorithm combined an early reflections block that consisted of a multi-tapped delay
line with a late reverberation block. In their late reverberation model, they use series AAPFs in
a UFDN loop as shown in Figure 11.37. The block marked M is the unitary feedback matrix
that mixes and filters the delay line outputs.
Chemistruck, Marcolini, and Pirkle (2012) also experimented with FDNs for reverb, shown in
Figure 11.38.
Two different FDNs were used: the normal four-delay-line version and the delay-and-LPF-in-series
version (as in Figure 11.35, but without the correction filter on the output). An impulse
response was taken of each; Figure 11.39 shows the stereo impulse responses.
• Two comb filters in each bank are LPF–comb filters.
• Each parallel comb filter bank feeds an output LPF ("damping").
• The output LPF damping control also adjusts the LPF–comb filter values.
• Each output LPF feeds an output APF diffusion module.
• A single reverb time control sets all comb filter gain variables.
11.16 RackAFX Stock Objects
11.16.1 COnePoleLPF
Table 11.4 shows the object members.
Figure 11.41: The one-pole LPF.
Table 11.4: The COnePoleLPF object.
Member Variables       Purpose
float m_fLPF_g         The LPF g coefficient
float m_fLPF_z1        Register to hold the single sample delay, z^-1
Member Functions
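To make the table concrete, here is a minimal sketch of how a one-pole feedback LPF like this processes a sample. The member names follow Table 11.4, but the DC-normalized difference equation y(n) = (1 − g)x(n) + g y(n − 1) is my assumption of what Figure 11.41 depicts, not the library source:

```cpp
// Sketch: one-pole feedback LPF (assumed DC-normalized form).
float processOnePoleLPF(float xn, float m_fLPF_g, float& m_fLPF_z1)
{
    float yn = (1.0f - m_fLPF_g) * xn + m_fLPF_g * m_fLPF_z1;
    m_fLPF_z1 = yn;    // store in the single-sample delay register
    return yn;
}
```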
11.16.2 CDelay
Implements a delay of D samples with an output attenuator. This is also the base class for the
other delay-based objects. Table 11.5 shows object members.
Table 11.5: The CDelay object.
Member Variables              Purpose
float m_fDelayInSamples       The current delay in samples
float m_fOutputAttenuation    Output attenuation variable
float* m_pBuffer              Pointer to our dynamically declared buffer
int m_nReadIndex              Current read location
int m_nWriteIndex             Current write location
int m_nBufferSize             Max buffer size
int m_nSampleRate             Sample rate
float m_fDelay_ms             The current delay in mSec
Member Functions
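A circular-buffer delay of this kind typically reads before it writes each sample period. The sketch below uses the member names from Table 11.5, but the body is an illustration, not the stock object's source:

```cpp
// Sketch: one sample period of a circular-buffer delay with output attenuation.
float processDelay(float xn, float* m_pBuffer, int m_nBufferSize,
                   int& m_nReadIndex, int& m_nWriteIndex, float m_fOutputAttenuation)
{
    float yn = m_pBuffer[m_nReadIndex];      // read the delayed sample first
    m_pBuffer[m_nWriteIndex] = xn;           // then write the new input

    if (++m_nReadIndex  >= m_nBufferSize) m_nReadIndex  = 0;   // wrap both indices
    if (++m_nWriteIndex >= m_nBufferSize) m_nWriteIndex = 0;

    return yn * m_fOutputAttenuation;        // apply the output attenuator
}
```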
11.16.3 CCombFilter
Implements a D-sample comb filter with feedback coefficient g, shown in Figure 11.43.
Table 11.6 shows object members.
Table 11.6: The CCombFilter object.
Member Variables     Purpose
float m_fComb_g      The one and only feedback gain coefficient
Member Functions
11.16.4 CLPFCombFilter
Table 11.7 shows object members.
Figure 11.42: The delay with output attenuator.
Figure 11.43: The comb filter.
Table 11.7: The CLPFCombFilter object.
Member Variables     Purpose
float m_fComb_g      The one and only feedback gain coefficient
float m_fLPF_g       The LPF g coefficient
float m_fLPF_z1      Register for the one-pole LPF
Member Functions
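For orientation, here is a sketch of the comb and LPF–comb recursions these two objects imply. The delay-line access is abstracted (the caller supplies the sample read from the end of the D-sample line and receives the value to write back), and the exact forms are my assumptions rather than the library source:

```cpp
// Sketch: comb filter, y(n) = x(n-D) + g*y(n-D).
float processComb(float xn, float delayedSample, float comb_g, float& toWrite)
{
    float yn = delayedSample;
    toWrite = xn + comb_g * yn;        // feedback written into the delay line
    return yn;
}

// Sketch: LPF-comb; a one-pole LPF sits inside the feedback path.
// Keep the combined loop gain below unity or the loop will ring.
float processLPFComb(float xn, float delayedSample, float comb_g,
                     float lpf_g, float& lpf_z1, float& toWrite)
{
    float lpfOut = delayedSample + lpf_g * lpf_z1;
    lpf_z1 = lpfOut;
    toWrite = xn + comb_g * lpfOut;
    return delayedSample;
}
```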
11.16.5 CDelayAPF
Implements a D-sample delaying APF. Table 11.8 shows object members.
Table 11.8: The CDelayAPF object.
Member Variables     Purpose
float m_fAPF_g       The APF g coefficient
Member Functions
find on a reverb for the purpose of experimentation. Refer back to Figure 11.37, which shows
the block diagram for this design.
Slider Property
Value
Slider Property
Value
Control Name
Pre Delay
Control Name
Pre Dly Atten
Variable Type
Variable Type
 oat
Variable Namem_fPreDelay_mSec
Variable Namem_fPreDelayAtten_dB
Initial Value
Initial Value
Input Diffusion
Slider Property
Value
Control Name
Variable Type
Variable Namem_fInputLPF_g
Initial Value
Slider Property
Value
Slider Property
Value
Control Name
Control Name
Variable Type
Variable Type
 oat
Variable Namem_fAPF_1_Delay_mSecVariable Name
Initial Value
Initial Value
400 Chapter 11
Slider Property
Value
Slider Property
Value
Control Name
Control Name
Variable Type
Variable Type
 oat
Variable Namem_fAPF_2_Delay_mSecVariable Name
Initial Value
28.13
Initial Value
Parallel Comb Filter Bank 1
Slider Property
Value
Slider Property
Value
Control Name
Control Name
Variable Type
Variable Type
 oat
Variable Namem_fPComb_1_Delay_mSecVariable Namem_fPComb_2_Delay_mSec
Initial Value
31.71
Initial Value
37.11
Slider Property
Value
Slider Property
Value
Control Name
Control Name
Variable Type
Variable Type
 oat
Variable Namem_fPComb_3_Delay_mSecVariable Namem_fPComb_4_Delay_mSec
Initial Value
Initial Value
44.14
Parallel Comb Filter Bank 2
Slider Property
Value
Slider Property
Value
Control Name
Control Name
Variable Type
Variable Type
 oat
Variable Namem_fPComb_5_Delay_mSecVariable Namem_fPComb_6_Delay_mSec
Initial Value
Initial Value
Reverb Algorithms 401
Slider Property
Value
Slider Property
Value
Control Name
Control Name
Variable Type
Variable Type
 oat
Variable Namem_fPComb_7_Delay_mSecVariable Namem_fPComb_8_Delay_mSec
Initial Value
41.41
Initial Value
Input Diffusion and Damping
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Slider Property
Value
Slider Property
Value
Control Name
Control Name
Variable Type
Variable Type
 oat
Variable Namem_fAPF_3_Delay_mSecVariable Name
Initial Value
Initial Value
Slider Property
Value
Slider Property
Value
Control Name
Control Name
Variable Type
Variable Type
 oat
Variable Namem_fAPF_4_Delay_mSecVariable Name
Initial Value
Initial Value
1 D
I I
I I
I I
I I
~=~=
+ +
I I
~=~=~=
402 Chapter 11
Reverb Output
Slider Property
Value
Slider Property
Value
Control Name
Reverb Time
Control Name
Reverb Algorithms 403
// Add your code here: ………………………………………………………………………- //

// Pre-Delay Block



// input Diffusion






// parallel Comb Bank 1








// parallel Comb Bank 2










// damping




// output diffusion




// function to cook all member object's variables at once



// END OF USER CODE …………………………………………………………………………… //
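The member-object declarations themselves did not survive in this copy. Purely as a sketch of what the commented sections above imply (object types from Section 11.16; every name below is hypothetical, not the original source), the block might look like this:

```cpp
// Hypothetical reconstruction for illustration only.
// Pre-Delay Block
CDelay          m_PreDelay;

// input Diffusion
CDelayAPF       m_InputAPF_1;
CDelayAPF       m_InputAPF_2;
COnePoleLPF     m_InputLPF;

// parallel Comb Bank 1 (two plain combs, two LPF-combs)
CCombFilter     m_ParallelCF_1, m_ParallelCF_2;
CLPFCombFilter  m_ParallelCF_3, m_ParallelCF_4;

// parallel Comb Bank 2
CCombFilter     m_ParallelCF_5, m_ParallelCF_6;
CLPFCombFilter  m_ParallelCF_7, m_ParallelCF_8;

// damping
COnePoleLPF     m_DampingLPF_1, m_DampingLPF_2;

// output diffusion
CDelayAPF       m_OutputAPF_3, m_OutputAPF_4;

// function to cook all member object's variables at once
void cookVariables();
```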

11.17.4 Reverb.cpp
Write the only extra function, cookVariables():
// function to cook all variables at once





// Pre-Delay

404 Chapter 11

Reverb Algorithms 405
• Initialize all the objects with their max delay times; all delay times except the pre-delay
have maximum values of 100 mSec.
• Initialize the pre-delay for its maximum of 2 seconds.
406 Chapter 11

Reverb Algorithms 407

// Form our input = L + R (if there is a R)
float fInputSample = pInputBuffer[0];















// begin the series/parallel signal push

// Pre-Delay
float fPreDelayOut = 0;

// Pre-Delay Out -> fAPF_1_Out
float fAPF_1_Out = 0;

// fAPF_1_Out -> fAPF_2_Out
float fAPF_2_Out = 0;

// fAPF_2_Out -> fInputLPF
float fInputLPF = 0;

// comb filter bank
// variables for each output
float fPC_1_Out = 0;
float fPC_2_Out = 0;
float fPC_3_Out = 0;
float fPC_4_Out = 0;
float fPC_5_Out = 0;
float fPC_6_Out = 0;
float fPC_7_Out = 0;
float fPC_8_Out = 0;
float fC1_Out = 0;
float fC2_Out = 0;

// fInputLPF -> fPC_1_Out, fPC_2_Out, fPC_3_Out, fPC_4_Out

// fInputLPF -> fPC_5_Out, fPC_6_Out, fPC_7_Out, fPC_8_Out

// form outputs: note attenuation by 0.15 for each and alternating signs

// fC1_Out -> fDamping_LPF_1_Out
float fDamping_LPF_1_Out = 0;

// fC2_Out -> fDamping_LPF_2_Out
float fDamping_LPF_2_Out = 0;

// fDamping_LPF_1_Out -> fAPF_3_Out
float fAPF_3_Out = 0;

// fDamping_LPF_2_Out -> fAPF_4_Out
float fAPF_4_Out = 0;
See www.willpirkle.com for more example reverb algorithms and code.
11.18 Challenge
Design your own reverb. Start by implementing some of the classics (Schroeder, Moorer)
and some of the more recent versions (Gardner, Jot, Dattorro) combining different modules.
Or start with the reverb design here and modify it. For example, try replacing the APFs with
modulated APFs or experimenting with the comb filters. You can easily identify ringing and
oscillations using the impulse response tool, so keep it open as you experiment.
Bibliography
Griesinger, D. 1995. How loud is my reverberation?
CHAPTER 12
Modulated Filter Effects
Figure 12.1: A simple LFO-modulated LPF.
12.1 Design a Mod Filter Plug-In: Part I, Modulated fc
For our first effect design, we'll start with a modulated second-order LPF and modulate the
cutoff frequency with an LFO. Then, we can increase the complexity by adding another LFO to
modulate the Q, and even give the option to run the two LFOs in quadrature phase. We can use the
second-order digital resonant LPF you've already designed from Chapter 6 for the filter. Notice
that in this first version the Q is held constant at 2.0. And, we will introduce a
built-in RackAFX object to handle the LFO for us. The block diagram is shown in Figure 12.4.
We will use the bi-quad from Chapter 6 along with the built-in wave table oscillator
object to quickly implement the mod filter effect. This project will use two built-in objects:
1. CBiquad for the filter
2. The built-in wave table oscillator for the LFO
Figure 12.4: The mod filter block diagram.
You used CBiquad in Chapter 6; the flowchart for the process function is shown in Figure 12.5.
12.1.1 Project: ModFilter
Create the project; because we are using built-in objects for the filter and LFO there are no
extra files to add.
12.1.2 ModFilter GUI
For the initial design, you will need the slider and button controls in Table 12.2.
Figure 12.5: The flowchart for the mod filter process function.
Add the following to the .h file:
// Add your code here: ------------------------------------------------------- //
// BiQuad Objects
CBiquad m_LeftLPF;
CBiquad m_RightLPF;

// one LFO for the fc

Slider Property
Value
Control Name
Variable Type
Variable Name
LFO
m_uLFO_waveform
sine,saw,tri,square
Table 12.2: GUI controls for the ModFilter.
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Mod Rate fc
m_fModRate_fc
1.0
12.1.4 ModFilter.cpp File
• Initialize the rate min and max values. The objects are self-initializing upon creation.

CModFilter::CModFilter()
{
    // SNIP SNIP SNIP

    // Finish initializations here
}

Write the calculateLPFCoeffs() function using the design equations from Chapter 6:

void CModFilter::calculateLPFCoeffs(float fCutoffFreq, float fQ, CBiquad* pFilter)
{
    // use same terms as book

    pFilter->m_f_a2 = fAlpha;
// cook to calculate
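The cooking function that maps an LFO sample onto the cutoff range is not reproduced here. A plausible linear mapping, offered only as an assumption of how calculateCutoffFreq() might work (all names are illustrative), is:

```cpp
// Hypothetical sketch: map a unipolar LFO sample (0..1) onto the fc range,
// scaled by the depth control. calculateQ() would map onto the Q range the same way.
float calculateCutoffFreq(float fLFOSample,      // unipolar LFO output, 0..1
                          float fModDepthPct,    // e.g. m_fModDepth_fc, 0..100
                          float fMinFc, float fMaxFc)
{
    float fModAmount = (fModDepthPct / 100.0f) * fLFOSample;
    return fMinFc + fModAmount * (fMaxFc - fMinFc);
}
```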



12.2 Design a Mod Filter Plug-In: Part II, Modulated fc and Q
In the second design, we will modify the current plug-in to include the modulation of the
Q value. We will use a second, independent LFO for the Q, but it will share the same LFO
waveform type with the fc LFO. The block diagram is shown in Figure 12.6. Add the controls
in Table 12.3.
Table 12.3: Additional controls for the second LFO.
12.2.2 ModFilter.h File
We need to add the following new functions and variables, basically just duplicating the ones
for the fc modulation:
• An LFO object for Q modulation
• calculateQ() to calculate the new Q value from the LFO sample value
• Min and max Q values
// Add your code here: -------------------------------------------------------- //
CBiquad m_LeftLPF;
CBiquad m_RightLPF;
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Mod Rate Q
m_fModRate_Q
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Write the calculateQ() function, which behaves the same way as calculateCutoffFreq(),
mapping the LFO sample onto the Q range instead:

float CModFilter::calculateQ(float fLFOSample)
{
processAudioFrame()
• Calculate a new fc LFO value.
• Use the LFO value to calculate a new fc value.
• Calculate a new Q LFO value.
• Use the LFO value to calculate a new Q value.
• Use the fc and Q values to calculate new filter coefficients.
• Do the bi-quad routines on the input samples.

bool __stdcall CModFilter::processAudioFrame(float* pInputBuffer, float* pOutputBuffer,
                                             UINT uNumInputChannels, UINT uNumOutputChannels)
{
    // output = input -- change this for meaningful processing
    float fYn = 0;  // normal output
    float fYqn = 0; // quad phase output

    // call the fc LFO function; we only need first output
    m_fc_LFO.doOscillate(&fYn, &fYqn);

    // calculate fc
    float fc = calculateCutoffFreq(fYn);

    // call the Q LFO function
    m_Q_LFO.doOscillate(&fYn, &fYqn);

    // calculate the new Q
    float fQ = calculateQ(fYn);

    // use to calculate the LPF coefficients
    calculateLPFCoeffs(fc, fQ, &m_LeftLPF);
    calculateLPFCoeffs(fc, fQ, &m_RightLPF);

    // do the BiQuads
    pOutputBuffer[0] = m_LeftLPF.doBiQuad(pInputBuffer[0]);

    // Mono-In, Stereo-Out (AUX Effect)
    if(uNumInputChannels == 1 && uNumOutputChannels == 2)
        pOutputBuffer[1] = pOutputBuffer[0]; // just copy

    // Stereo-In, Stereo-Out (INSERT Effect)
    if(uNumInputChannels == 2 && uNumOutputChannels == 2)
        pOutputBuffer[1] = m_RightLPF.doBiQuad(pInputBuffer[1]);

    return true;
}
Modulated Filter Effects 423
noise through it, the modulation becomes clearly audible. We'll finish the plug-in by making
one more modification: the ability to place the right and left LPF modulation sources in
quadrature phase.
12.3 Design a Mod Filter Plug-In: Part III, Quad-Phase LFOs
In the third design iteration, we will modify the current plug-in to allow for quadrature phase
LFOs. The block diagram is given in Figure 12.7. Add the button control in Table 12.4. Note:
This is a direct control variable; there is nothing extra to add in userInterfaceChange() or
prepareForPlay() since we only use it in the processAudioFrame() function.
Figure 12.7: The mod filter with quad-phase LFOs.
12.3.2 ModFilter.cpp File
processAudioFrame()
• Check the enumerated variable and use the appropriate LFO output sample.

bool __stdcall CModFilter::processAudioFrame(float* pInputBuffer, float* pOutputBuffer,
                                             UINT uNumInputChannels, UINT uNumOutputChannels)
{
    // output = input -- change this for meaningful processing
    float fYn = 0;  // normal output
    float fYqn = 0; // quad phase output

    // call the LFO function; we only need first output
    m_fc_LFO.doOscillate(&fYn, &fYqn);

    // calculate both fc values (can be streamlined!)
    float fc = calculateCutoffFreq(fYn);
    float fcq = calculateCutoffFreq(fYqn);

Table 12.4: The quad-phase LFO button properties.
Button Property     Value
Control Name        LFO Phase
Variable Type       enum
Variable Name       m_uLFO_Phase
Enum String         NORM,QUAD
// do the BiQuads
pOutputBuffer[0] = m_LeftLPF.doBiQuad(pInputBuffer[0]);

// Mono-In, Stereo-Out (AUX Effect)
if(uNumInputChannels == 1 && uNumOutputChannels == 2)
    pOutputBuffer[1] = pOutputBuffer[0]; // just copy

// Stereo-In, Stereo-Out (INSERT Effect)
if(uNumInputChannels == 2 && uNumOutputChannels == 2)
    pOutputBuffer[1] = m_RightLPF.doBiQuad(pInputBuffer[1]);

The finished plug-in is shown in Figure 12.8.
The GUI controls are listed in Table 12.5.
only 63.2% of the edge. In digital systems, the value is usually 80% to 99%. The envelope
detector is shown in Figure 12.9.
In analog RC circuits, the cap charges and discharges exponentially, as shown in
Figure 12.12. The analog attack time is calculated as the time it takes to reach 63.2% of the
full charge, while the release time is the time to release from the fully charged state down to
36.8% of the full charge. Different combinations of R and C change these times. A digital
version (Figure 12.13) derives its attack and release coefficients from the desired times; the
release coefficient, for example, involves the factor 1/(release_in_mSec*SampleRate*0.001).
The code for the difference equation is:

if(fInput > m_fEnvelope)
    m_fEnvelope = m_fAttackTime * (m_fEnvelope - fInput) + fInput;
else
    m_fEnvelope = m_fReleaseTime * (m_fEnvelope - fInput) + fInput;
Compare this to the block diagram in Figure 12.13 and you can see how it implements the
difference equation.
Figure 12.12: The analog RC charging and discharging curves; the attack time constant is
measured to 63.2% of full charge and the release time constant down to 36.8%.
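One common way to turn attack and release times in milliseconds into the coefficients used by the difference equation above is the analog-style exponential mapping sketched below. The constant and exact formula are my assumptions (consistent with the 63.2%/36.8% charge points and the release factor quoted above), not necessarily the book's implementation:

```cpp
#include <math.h>

// Hypothetical sketch: exponential time-constant coefficient for the envelope detector.
static const float ANALOG_TC = -0.99967234f;   // ln(0.368), the analog charge point

float calcEnvelopeCoeff(float time_mSec, float sampleRate)
{
    return exp(ANALOG_TC / (time_mSec * sampleRate * 0.001f));
}

// usage (names illustrative):
// m_fAttackTime  = calcEnvelopeCoeff(m_fAttack_mSec,  m_nSampleRate);
// m_fReleaseTime = calcEnvelopeCoeff(m_fRelease_mSec, m_nSampleRate);
```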
Table 12.6:
The GUI controls for the envelope follower plug-in.
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Pre-Gain
m_fPreGain_dB
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Threshold
m_fThreshold
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Attack Time
m_fAttack_mSec
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Release Time
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Button Property
Value
Control Name
Variable Type
Variable Name
Time Constant
m_uTimeConstant
analog, digital
Button Property
Value
Control Name
Variable Type
Variable Name
Direction
m_uDirection
UP, DOWN
And, it needs controls for the modulated filter:
• The filter fc and Q ranges
• Direction of modulation (up or down)
EnvelopeFollower.h File
We need to add the following new functions and variables; most can be taken directly from
your ModFilter project. We will need:
• An LPF bi-quad for each channel (left, right)
432 Chapter 12
Modulated Filter Effects 433
float CEnvelopeFollower::calculateCutoffFreq(float fEnvelopeSample)
{
    // modulate from min upwards
434 Chapter 12
Modulated Filter Effects 435
processAudioFrame()
The envelope follower's processAudioFrame() function will operate in the sequence
shown in Figure 12.14.

bool __stdcall CEnvelopeFollower::processAudioFrame(float* pInputBuffer, float* pOutputBuffer,
                                                    UINT uNumInputChannels, UINT uNumOutputChannels)
{
    // Do LEFT (MONO) Channel; there is always at least one input, one output
    float fGain = pow(10, m_fPreGain_dB/20.0);
// do the BiQuads

pOutputBuffer[0] = m_LeftLPF.doBiQuad(pInputBuffer[0]);

to do the same. APFs have a delay response that is flat across the first one-third or so of the
band, making them an alternative to interpolation. The phaser didn't sound like a flanger at all.
The reason is that in a flanger, the notches in the inverse-comb filtering are mathematically
related as simple multiples of each other. This is not the case for the phaser. The phaser is still
used today in many effects units; the original circuit is shown in Figures 12.15 and 12.16. Even
if you can't read schematics, it's worth a moment to take a look at the APF stage detail in
Figure 12.17.
The phase of the signal is shifted by 90 degrees at the frequency given by Equation 12.2:

f90 = 1/(2πRC)    (12.2)

The positive control voltage is a triangle wave, which alters the resistance of the field effect
transistors and therefore the corner frequency of each APF stage.
Figures 12.15 and 12.16: the original phaser circuit; a low-frequency oscillator drives six APF
stages in series, with a depth control at the output.
Figure 12.17: Detail of one APF stage.
• Other phaser models also included a feedback path from the filtered output back to the
filter chain input to increase the intensity of the effect.
• The six APF modules have the frequency ranges in Table 12.7.
Because this is a complex design, we will break it down into two parts: first, we'll design an
abbreviated version in Figure 12.18 with only three APF modules and no feedback control.
The design equations for first-order APFs from Chapter 6 are given next.
Table 12.7: Minimum and maximum phase rotation frequencies for the phaser APFs.
(* We will have to cut off at ½ Nyquist instead.)
• fc, the corner frequency to shift by 90 degrees

α = (tan(π fc / fs) − 1) / (tan(π fc / fs) + 1)

a0 = α    a1 = 1.0    a2 = 0.0
b1 = α    b2 = 0.0
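A compact helper following these equations might look like the sketch below. The coefficient values come from the equations above, but the function shape and names are illustrative assumptions (in the project the values would be copied into the CBiquad members):

```cpp
#include <math.h>

struct APFCoeffs { float a0, a1, a2, b1, b2; };

// Sketch: first-order APF coefficients from the corner frequency and sample rate.
APFCoeffs calcFirstOrderAPFCoeffs(float fCutoffFreq, float fSampleRate)
{
    float t = tanf(3.14159265f * fCutoffFreq / fSampleRate);
    float alpha = (t - 1.0f) / (t + 1.0f);
    return { alpha, 1.0f, 0.0f, alpha, 0.0f };   // a0, a1, a2, b1, b2
}
```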
We'll need to implement the APF using the built-in BiQuad object. The built-in oscillator will
be used for the LFO; many aspects of the design and code chunks can be borrowed from your
ModFilter projects. Specific to this design is that it:
• Needs multiple APFs, each with its own range of frequencies
• Needs only one LFO to control all APFs
• Needs to calculate all APF cutoff frequencies and coefficients at the same time
Figure 12.18: The block diagram of the simplified, first part of the design.
Add the controls in Table 12.8 to your GUI.
Table 12.8: The phaser GUI elements.
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Rate
m_fModRate
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Button Property
Value
Control Name
Variable Type
Variable Name
LFO Type
m_uLFO_Waveform
sine, saw, tri, square
12.6.3 Phaser.h File
We will calculate all of the left or right APF coefficients in one pair of functions (one for left
and one for right). Because there is so much repetition, we also use a single helper function
for each bi-quad. You don't need to #include anything since these are built in. Add these
declarations to your .h file:
// Add your code here: ------------------------------------------------------- //
CBiquad m_LeftAPF_1;
CBiquad m_RightAPF_1;

CBiquad m_LeftAPF_2;
CBiquad m_RightAPF_2;

CBiquad m_LeftAPF_3;
CBiquad m_RightAPF_3;

// function to calculate the new fc from the Envelope Value
float calculateAPFCutoffFreq(float fLFOSample, float fMinFreq, float fMaxFreq);

// Two functions to calculate the BiQuad Coeffs: APF
void calculateFirstOrderLeftAPFCoeffs(float fLFOSample);
void calculateFirstOrderRightAPFCoeffs(float fLFOSample);

// helper function for APF Calculation
void calculateFirstOrderAPFCoeffs(float fCutoffFreq, CBiquad* pFilter);

// min/max variables
float m_fMinAPF_1_Freq;
float m_fMaxAPF_1_Freq;
float m_fMinAPF_2_Freq;
float m_fMaxAPF_2_Freq;
float m_fMinAPF_3_Freq;
float m_fMaxAPF_3_Freq;

// LFO Stuff
CPhaser::CPhaser()
{
    // SNIP SNIP SNIP

    // Finish initializations here
}



void CPhaser::calculateFirstOrderLeftAPFCoeffs(float fLFOSample)
{
    // APF1 - fc - Bi Quad
    float fCutoffFreq = calculateAPFCutoffFreq(fLFOSample, m_fMinAPF_1_Freq, m_fMaxAPF_1_Freq);
    calculateFirstOrderAPFCoeffs(fCutoffFreq, &m_LeftAPF_1);

    // APF2 - fc - Bi Quad
    fCutoffFreq = calculateAPFCutoffFreq(fLFOSample, m_fMinAPF_2_Freq, m_fMaxAPF_2_Freq);
    calculateFirstOrderAPFCoeffs(fCutoffFreq, &m_LeftAPF_2);

    // APF3 - fc - Bi Quad
    fCutoffFreq = calculateAPFCutoffFreq(fLFOSample, m_fMinAPF_3_Freq, m_fMaxAPF_3_Freq);
    calculateFirstOrderAPFCoeffs(fCutoffFreq, &m_LeftAPF_3);
}

void CPhaser::calculateFirstOrderRightAPFCoeffs(float fLFOSample)
{
    // APF1 - fc - Bi Quad
    float fCutoffFreq = calculateAPFCutoffFreq(fLFOSample, m_fMinAPF_1_Freq, m_fMaxAPF_1_Freq);
    calculateFirstOrderAPFCoeffs(fCutoffFreq, &m_RightAPF_1);

    // APF2 - fc - Bi Quad
    fCutoffFreq = calculateAPFCutoffFreq(fLFOSample, m_fMinAPF_2_Freq, m_fMaxAPF_2_Freq);
    calculateFirstOrderAPFCoeffs(fCutoffFreq, &m_RightAPF_2);

    // APF3 - fc - Bi Quad
    fCutoffFreq = calculateAPFCutoffFreq(fLFOSample, m_fMinAPF_3_Freq, m_fMaxAPF_3_Freq);
    calculateFirstOrderAPFCoeffs(fCutoffFreq, &m_RightAPF_3);
}
• Flush the delays in the APFs.
• Use the depth control to mix the output. Calculate the depth so that 100% depth gives a
50/50 mix ratio (done by dividing the depth by 200 instead of 100).

bool __stdcall CPhaser::processAudioFrame(float* pInputBuffer, float* pOutputBuffer,
                                          UINT uNumInputChannels, UINT uNumOutputChannels)
{
    // Do LEFT (MONO) Channel; there is always at least one input, one output
    float fYn = 0;  // normal output
    float fYqn = 0; // quad phase output

    // mod depth is at 100% when the control is at 50% !!
    float fDepth = m_fMod_Depth/200.0;

    // call the LFO function; we only need first output
    m_LFO.doOscillate(&fYn, &fYqn);

    // use the LFO to calculate all APF banks
    calculateFirstOrderLeftAPFCoeffs(fYn);
    calculateFirstOrderRightAPFCoeffs(fYn);

    // do the cascaded APFs
    float fAPF_1_Out = m_LeftAPF_1.doBiQuad(pInputBuffer[0]);
    float fAPF_2_Out = m_LeftAPF_2.doBiQuad(fAPF_1_Out);
    float fAPF_3_Out = m_LeftAPF_3.doBiQuad(fAPF_2_Out);

    // form the output
    pOutputBuffer[0] = fDepth*fAPF_3_Out + (1.0 - fDepth)*pInputBuffer[0];

    // calculate and do the right channel cascade
    fAPF_1_Out = m_RightAPF_1.doBiQuad(pInputBuffer[1]);
    fAPF_2_Out = m_RightAPF_2.doBiQuad(fAPF_1_Out);
    fAPF_3_Out = m_RightAPF_3.doBiQuad(fAPF_2_Out);

    // Mono-In, Stereo-Out (AUX Effect)
    if(uNumInputChannels == 1 && uNumOutputChannels == 2)
        pOutputBuffer[1] = pOutputBuffer[0];

    // Stereo-In, Stereo-Out (INSERT Effect)
    if(uNumInputChannels == 2 && uNumOutputChannels == 2)
        pOutputBuffer[1] = fDepth*fAPF_3_Out + (1.0 - fDepth)*pInputBuffer[1];

    return true;
}
12.7 Design a Stereo Phaser with Quad-Phase LFOs
Adding the extra APFs is just more of the same cut-and-paste operation; you only need
to change the range of frequencies and adjust the cooking and processing functions for
that. The addition of the feedback is simpleÑwe need another slider on the UI and a
storage device. The quadrature phase is easy to implement because our LFO
already produces this output for us; we only need a UI change and a branch statement in
the code.
12.7.1 Phaser GUI
Update the GUI with the new controls in
Table
12.9 .
Table 12.9: Additional controls for the quad-phase phaser plug-in.
12.7.2 Phaser.h File
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Intensity
Button Property
Value
Control Name
Variable Type
Variable Name
LFO Type
m_uLFO_Phase
NORM, QUAD
// function to calculate the new fc from the Envelope Value
float calculateAPFCutoffFreq(float fLFOSample, float fMinFreq, float fMaxFreq);
// SNIP SNIP SNIP

float m_fMinAPF_3_Freq;
float m_fMaxAPF_3_Freq;
float m_fMinAPF_4_Freq;
float m_fMaxAPF_4_Freq;
float m_fMinAPF_5_Freq;
float m_fMaxAPF_5_Freq;
float m_fMinAPF_6_Freq;
float m_fMaxAPF_6_Freq;

// LFO Stuff

Modify the cooking functions to calculate the new APF coefficients:
void CPhaser::calculateFirstOrderLeftAPFCoeffs(float fLFOSample)
{
    // SNIP SNIP SNIP
    // APF3 - fc - Bi Quad
    fCutoffFreq = calculateAPFCutoffFreq(fLFOSample, m_fMinAPF_3_Freq, m_fMaxAPF_3_Freq);
    calculateFirstOrderAPFCoeffs(fCutoffFreq, &m_LeftAPF_3);

    // APF4 - fc - Bi Quad
    fCutoffFreq = calculateAPFCutoffFreq(fLFOSample, m_fMinAPF_4_Freq, m_fMaxAPF_4_Freq);
    calculateFirstOrderAPFCoeffs(fCutoffFreq, &m_LeftAPF_4);

    // APF5 - fc - Bi Quad
    fCutoffFreq = calculateAPFCutoffFreq(fLFOSample, m_fMinAPF_5_Freq, m_fMaxAPF_5_Freq);
    calculateFirstOrderAPFCoeffs(fCutoffFreq, &m_LeftAPF_5);

    // APF6 - fc - Bi Quad
    fCutoffFreq = calculateAPFCutoffFreq(fLFOSample, m_fMinAPF_6_Freq, m_fMaxAPF_6_Freq);
    calculateFirstOrderAPFCoeffs(fCutoffFreq, &m_LeftAPF_6);
}

void CPhaser::calculateFirstOrderRightAPFCoeffs(float fLFOSample)
{
    // SNIP SNIP SNIP
    // APF3 - fc - Bi Quad
    fCutoffFreq = calculateAPFCutoffFreq(fLFOSample, m_fMinAPF_3_Freq, m_fMaxAPF_3_Freq);
    calculateFirstOrderAPFCoeffs(fCutoffFreq, &m_RightAPF_3);

    // APF4 - fc - Bi Quad
    fCutoffFreq = calculateAPFCutoffFreq(fLFOSample, m_fMinAPF_4_Freq, m_fMaxAPF_4_Freq);
    calculateFirstOrderAPFCoeffs(fCutoffFreq, &m_RightAPF_4);

    // APF5 - fc - Bi Quad
    fCutoffFreq = calculateAPFCutoffFreq(fLFOSample, m_fMinAPF_5_Freq, m_fMaxAPF_5_Freq);
    calculateFirstOrderAPFCoeffs(fCutoffFreq, &m_RightAPF_5);

    // APF6 - fc - Bi Quad
    fCutoffFreq = calculateAPFCutoffFreq(fLFOSample, m_fMinAPF_6_Freq, m_fMaxAPF_6_Freq);
    calculateFirstOrderAPFCoeffs(fCutoffFreq, &m_RightAPF_6);
}
Flush the new APF buffers.

float fYn = 0;  // normal output
float fYqn = 0; // quad phase output

// mod depth is at 100% when the control is at 50% !!
float fDepth = m_fMod_Depth/200.0;
float fFeedback = m_fFeedback/100.0;

// call the LFO function; we only need first output
m_LFO.doOscillate(&fYn, &fYqn);

// use the LFO to calculate all APF banks
calculateFirstOrderLeftAPFCoeffs(fYn);

// do the cascaded APFs
float fAPF_1_Out = m_LeftAPF_1.doBiQuad(pInputBuffer[0] + fFeedback*m_fFeedbackLeft);
float fAPF_2_Out = m_LeftAPF_2.doBiQuad(fAPF_1_Out);
float fAPF_3_Out = m_LeftAPF_3.doBiQuad(fAPF_2_Out);
float fAPF_4_Out = m_LeftAPF_4.doBiQuad(fAPF_3_Out);
float fAPF_5_Out = m_LeftAPF_5.doBiQuad(fAPF_4_Out);
float fAPF_6_Out = m_LeftAPF_6.doBiQuad(fAPF_5_Out);

// for next sample period
m_fFeedbackLeft = fAPF_6_Out;

pOutputBuffer[0] = fDepth*fAPF_6_Out + (1.0 - fDepth)*pInputBuffer[0];

// choose the LFO output for the right channel
if(m_uLFO_Phase == QUAD)
    calculateFirstOrderRightAPFCoeffs(fYqn);
else
    calculateFirstOrderRightAPFCoeffs(fYn);

// do the cascaded APFs
fAPF_1_Out = m_RightAPF_1.doBiQuad(pInputBuffer[1] + fFeedback*m_fFeedbackRight);
fAPF_2_Out = m_RightAPF_2.doBiQuad(fAPF_1_Out);
fAPF_3_Out = m_RightAPF_3.doBiQuad(fAPF_2_Out);
fAPF_4_Out = m_RightAPF_4.doBiQuad(fAPF_3_Out);
fAPF_5_Out = m_RightAPF_5.doBiQuad(fAPF_4_Out);
fAPF_6_Out = m_RightAPF_6.doBiQuad(fAPF_5_Out);

// for next sample period
m_fFeedbackRight = fAPF_6_Out;

// Mono-In, Stereo-Out (AUX Effect)
if(uNumInputChannels == 1 && uNumOutputChannels == 2)
    pOutputBuffer[1] = pOutputBuffer[0];

// Stereo-In, Stereo-Out (INSERT Effect)
if(uNumInputChannels == 2 && uNumOutputChannels == 2)
    pOutputBuffer[1] = fDepth*fAPF_6_Out + (1.0 - fDepth)*pInputBuffer[1];
Dynamics processors are designed to automatically control the amplitude, or gain, of an
audio signal and consist of two families: compressors and expanders. Technically speaking,
compressors and expanders both change the gain of a signal after its level crosses a
predetermined threshold.
Figure 13.2 shows the input/output transfer functions for the expander family. The ratios
are reversed. You can see that as the signal falls below the threshold, it is attenuated by the
ratio amount. For example, below the threshold on the 1:2 line, every decrease of 1 dB at the
input produces a 2 dB decrease at the output. On the gate line, when the input falls below
the threshold it receives infinite attenuation, that is, it is muted. This version of the device is
called a gate (or noise gate) and represents the most extreme form of downward expansion.
Perhaps the most important aspect of both families is that their gain reduction curves are
linear for logarithmic axes. These devices operate in the log domain. The two families of
dynamics processors yield four combinations: compression and limiting, and expansion and
gating. We will implement all four of them.

Figure 13.3 shows the feed-forward and feedback topologies for dynamics processors. There
is some debate as to which is best; we'll stay out of that argument and design the feed-forward
version first. Then, you can design a feedback version for yourself. Both designs include the
envelope detector from Chapter 12.
Figure 13.2: The generalized input/output transfer curves for a downward expander at
various reduction ratios.
Figure 13.1: The generalized input/output transfer curves for a compressor at various
reduction ratios.
Figure 13.4 shows the block diagram of the first dynamics processor plug-in, with the input
and control signals (including the detected input level in dB and the calculated gain value)
labeled.
Compressor gain (dB):
G(dB) = CS(Threshold − Input(dB))    (13.1)

Downward expander gain (dB):
G(dB) = ES(Threshold − Input(dB))    (13.2)

where CS and ES are slope values computed from the compression and expansion ratios, and
Input(dB) is the detected input level in dB.
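As a concrete illustration of the hard-knee case, here is a gain-computation sketch. The slope formulas used (CS = 1 − 1/ratio for the compressor, ES = 1/ratio − 1 for the downward expander) are common hard-knee conventions and are my assumption of how Equations 13.1 and 13.2 are defined; the detector value and threshold are in dB and the functions return a linear gain multiplier:

```cpp
#include <cmath>
#include <algorithm>

// Sketch: hard-knee gain computers (assumed forms of Eqs. 13.1 and 13.2).
float calcCompressorGain(float fDetector_dB, float fThreshold_dB, float fRatio)
{
    float CS = 1.0f - 1.0f/fRatio;                       // compressor slope
    float yG_dB = CS * (fThreshold_dB - fDetector_dB);   // gain reduction in dB
    yG_dB = std::min(0.0f, yG_dB);                       // only act above the threshold
    return std::pow(10.0f, yG_dB / 20.0f);
}

float calcDownwardExpanderGain(float fDetector_dB, float fThreshold_dB, float fRatio)
{
    float ES = 1.0f/fRatio - 1.0f;                       // one common expander slope convention
    float yG_dB = ES * (fThreshold_dB - fDetector_dB);
    yG_dB = std::min(0.0f, yG_dB);                       // only act below the threshold
    return std::pow(10.0f, yG_dB / 20.0f);
}
```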



The point in the input/output plot where the signal hits the threshold, thereby engaging the
device, is called the knee of the compression curve. With a hard knee the signal is either
above the threshold or below it and the device kicks in and out accordingly. The soft-knee
compressor allows the gain control to change more slowly over a width of dB around the
threshold (the knee width). In this way, the device moves more smoothly
into and out of compression or downward expansion. Figure 13.5 compares hard- and
soft-knee curves.
Figure 13.5: Hard and soft-knee compression curves.
There are several approaches to generating the soft-knee portion of the curve when
calculating the gain reduction value. Curve fitting by interpolating across the knee width is a
simple one; Figure 13.6 shows an example. The knee width is 10 dB with a 40 dB threshold.
For a compression ratio of 4:1, CS is 0.75. Therefore, the soft-knee region interpolates the
slope from 0 at the bottom of the knee up to 0.75 at the top.
You'll notice a new feature in this plug-in: the ability to monitor a signal and display it on a
metering control. Table 13.1 shows the graphical user interface (GUI) slider properties.
Table 13.1: The GUI controls for the dynamics processor plug-in.
Slider Property
Value
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Slider Property
Value
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Attack Time
m_fAttack_mSec
Control Name
Variable Type
Variable Name
Initial Value
Release Time
Slider Property
Value
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Control Name
Variable Type
Variable Name
Initial Value
Dynamics Processing 459
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Button Property
Value
Control Name
Variable Type
Variable Name
Processor
m_uProcessorType
COMP, LIMIT, EXPAND, GATE
Button Property
Value
Control Name
Variable Type
Variable Name
Time Constant
m_uTimeConstant
Digital, Analog
13.1.3 DynamicsProcessor.h File
We need to add the following new functions and variables; most can be taken directly
from your ModFilter project. There’s not much to the basic device. You will need the
following:
13.1.4 DynamicsProcessor.cpp File
We need to implement Equation 13.1 and also need to add the code to provide the Lagrange
interpolation for the soft knee. The gain calculations simply follow Equation 13.1. See the
Lagrange interpolation function in the pluginconstants.h file for documentation. You provide it
with x and y arrays for the end points along with the order (two, since we have two endpoints)
and the x value at which to calculate the interpolated y value.
// gain calc
float fGn = 1.0;

// branch
Figure 13.7: The routing of the left and right channels through the dynamics processor.
13.2 Design a Downward Expander/Gate Plug-In
We'll add on to the current project. Here is what is needed:
• A function to calculate downward expander gain
• Branching in the processAudioFrame() function to implement all four dynamics
processor operations
13.2.1 DynamicsProcessor.h File
Add the following function declaration for the downward expander calculation:
// Add your code here: ------------------------------------------------------ //

else if(m_uProcessorType == GATE)
13.3 Design a Look-Ahead Compressor Plug-In
Recall from Chapter 12 that the detector output takes time to respond to the input; this is the
motivation for the look-ahead technique. We can't look ahead into the future of a signal, but
we can delay the present. If we insert a delay line in the forward signal path (not the
side-chain), we can make the gain reduction line up with, or even precede, the audio that
triggered it, as shown in Figure 13.9.
You can use the CDelay object that comes as a "stock reverb object" and add it to your project
now for our look-ahead pre-delay. Do this by using the Edit button and edit the project. Check
the box marked "Stock Reverb Objects" and then hit OK. Your compiler will ask you if you
want to reload the project, so answer yes. You will then have the reverb objects included in
your project. We'll add a single new slider to control the look-ahead time. Remember when
using the CDelay-based objects that you need to make sure you initialize the object to have
the same maximum delay time as your slider. The look-ahead delay is set with the slider in
Table 13.3.
Table 13.3: The look-ahead slider properties.
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
Look-Ahead
13.3.2 DynamicsProcessor.h File
Declare the delays for the look-ahead:
// Add your code here: ------------------------------------------------------ //
Dynamics Processing 471
userInterfaceChange()
472 Chapter 13
13.4 Stereo-Linking the Dynamics Processor
You can see from the block diagram and your own code that our stereo dynamics processor
shares all of the controls: gains, attack, release, threshold, and ratio. However, the ultimate
value of the dynamic gain factor G(n) really depends on the audio content of each channel, so
the two channels can be attenuated by different amounts. Figure 13.10 shows how the right
channel's gain computation has been removed since it now shares the left channel's side-chain
path. In order to modify the existing processor, we need to do two things: modify the UI to
add a stereo link switch and add more branching in processAudioFrame().
13.4.1 DynamicsProcessor: GUI
For a change, I'll use a slider control in enum mode to make a switch for the stereo link.
Remember that you can use sliders to make enumerated-list variables in addition to the
buttons. Use Table 13.4.
Figure 13.10: The stereo-linked dynamics processor.
Table 13.4: The stereo link slider properties.
Slider Property
Value
Control Name
Variable Type
Variable Name
Stereo Link
m_uStereoLink
13.4.2 DynamicsProcessor.cpp File
There are many different ways to do this branching and you should definitely try to
figure out your own way. My code is for education purposes and not very streamlined.
13.5 Design a Spectral Compressor/Expander Plug-In
In this project, you will design a spectral dynamics processor. A spectral processor splits
up the input signal into two or more frequency bands, then applies signal processing
independently to each band. Finally, the processed  ltered outputs are summed to produce the
 nal output. In this way, you can apply signal processing to only one band of frequencies, or
apply different types of processing to different bands. In this design, we will create a two-
band spectral compressor/expander. We will use complementary low-pass  lter (LPF) and
high-pass  lter (HPF) units to split the incoming audio into two bands: low-frequency (LF)
and high-frequency (HF). Then, we will process each band independently and recombine
the outputs as shown in
13.11 . You can compress, say, only the HF content to squash
cymbals and other sibilance. Or you can smooth out the bass by compressing it slightly, or
y combination of both that you like.
Take a look at
13.8 and check out the features. The input LPF and HPF are adjusted
control. This means their cut-off points are always overlapping. The outputs
are processed through two independent dynamics processors with independent make-up
6 .
Figure 13.11: A two-band spectral compressor.
O
476 Chapter 13

The second-order LPF and HPF coefficients for the band-split filters are calculated as follows:

θc = π fc / fs
Ωc = π fc
κ = Ωc / tan(θc)
δ = κ² + Ωc² + 2κΩc

LPF:  a0 = Ωc²/δ    a1 = 2Ωc²/δ    a2 = Ωc²/δ
HPF:  a0 = κ²/δ     a1 = −2κ²/δ    a2 = κ²/δ
Both: b1 = (−2κ² + 2Ωc²)/δ
      b2 = (−2κΩc + κ² + Ωc²)/δ
Because these are second-order filters, we can use RackAFX's built-in bi-quad to do the
filtering; all we need to do is supply the calculation of the coefficients. Thus, we're going to need
to design a stereo dynamics processor with independent attack, release, threshold, ratio, and
gain make-up controls. We will need a shared fc control for the two filters. Although this plug-in
shares most of the features of the previous project, for simplicity there are a few modifications:
• This design will not include the look-ahead function.
• The knee width will be a common shared control that both bands will use.
13.5.1 Project: SpectralDynamics
Create a new project named “SpectralDynamics.” Since we are not implementing the look-
ahead feature you don’t have to add the stock objects.
13.5.2 SpectralDynamics: GUI
The GUI is basically a double version of the previous design without the look-ahead function.
However, we also need to add a cutoff frequency slider to control the point where the bands
NOTE: Unlike the last plug-in, the processor will be hard-coded in stereo link mode. We can
Dynamics Processing 477
split and a knee-width control.
Tables
13.5 and 13.6 show all the controls you will need.
In
Variable Name
Variable Name
Slider Property
Value
Slider Property
Value
Control Name
Variable Type
Variable Name
Initial Value
m_fFilterBankCutoff
Control Name
Variable Type
Variable Name
Initial Value
13.5.3 Additional Slider Controls
478 Chapter 13
13.12 shows the
GUI after all the controls have been added.
13.5.6 SpectralDynamics.h File
Because the bulk of the work is in the UI code, you only need a few more components to
Sen
d
m
∆ttac
Re
Rati
Gai
d
Gai
d
Rati
Re
m
∆ttac
m
Thres
Sen
d
Widt
Dynamics Processing 479

float fb1_Num = -2.0*k_squared + 2.0*omega_c_squared;
float fb2_Num = -2.0*k*omega_c + k_squared + omega_c_squared;

// the LPF coeffs
float a0 = omega_c_squared/fDenominator;
float a1 = 2.0*omega_c_squared/fDenominator;
float a2 = a0;
float b1 = fb1_Num/fDenominator;
float b2 = fb2_Num/fDenominator;
bool __stdcall CSpectralDynamics::prepareForPlay()
{
    // Add your code here:
    // Flush the filters
    m_LeftLPF.flushDelays();
    m_LeftHPF.flushDelays();
    m_RightLPF.flushDelays();
    m_RightHPF.flushDelays();

    // calculate the Coeffs all at once!
userInterfaceChange()

// Time Constant
// split the signal into m_Left LF and HF parts
float fLeft_LF_Out = m_LeftLPF.doBiQuad(fLeftInput*fLFGain);
float fLeft_HF_Out = m_LeftHPF.doBiQuad(fLeftInput*fHFGain);

// invert ONE of the outputs for proper recombination
fLeft_HF_Out *= -1.0;
Build and test the plug-in. This is a complex device with many interactions. It can sound a lot
like a multi-band compressor because it is a very special case of one. Be careful with gain and
threshold settings.
Reiss describes a variation that moves the detector's timing controls into the side-chain;
Figure 13.13 shows this configuration. Reiss notes that this has the effect of allowing the
release part of the envelope to fade out when the input drops below the threshold. If you want
to try to implement this one, you will need to make changes to the detector code.
A second variation called log-domain, shown in Figure 13.15, places the timing controls
after the gain computer but before the conversion to the linear domain. It could also be
implemented with similar modifications.
Bibliography
Ballou, G. 1987. Handbook for Sound Engineers, pp. 850–860. Carmel, IN: Howard W. Sams & Co.
Floru, F. 1998. "Attack and Release Time Constants in RMS-Based Feedback Compressors." Journal of the Audio Engineering Society.
There are a few effects left over that didn’t have an exact chapter to reside in, so they are
presented here. Interestingly, they are some of the simplest to implement but can have a
massive sonic impact on the audio they process. These effects include
• Tremolo
• Auto-panning
• Ring modulation
• Wave shaping
14.1 Design a Tremolo/Panning Plug-In
The tremolo is a modulated amplitude effect that uses a low-frequency oscillator (LFO) to
directly modulate the output. The LFO waveform is usually triangular or sinusoidal. If the
LFO is a square wave, it produces a gapping effect, where intermittent chunks of audio are
alternatively muted then restored to unity gain. An auto-panning algorithm pans the signal
from left to right following an LFO. Since they are both amplitude-based effects we can
combine them into one plug-in. The block diagrams are shown in Figure 14.1.
modulate the amplitude of the input signal. The only tricky part is to decide how far below
unity the gain should drop; we define the depth as follows:
• Depth = 0%: the gain stays at unity and there is no effect.
• Depth = 50%: the gain varies between 0.5 and 1.0.
• Depth = 100%: the gain varies between 0.0 (mute) and 1.0.
For auto-panning, a depth of 0 yields no effect while 100% pans the signal through the
full left–right stereo  eld. The panning is calculated using the constant power pan rule
( Equation 14.1 ). In fact, any plug-in you design that has a pan control on it needs to use the
same calculation. Instead of linearly adjusting the left and right gains, they follow the curves
of the  rst quarter cycle of a sin/cos pair. This way, they overlap at the 0.707 points. It should
be noted there are other panning schemes, but this one seems to be the most common. The
LFO must be in bipolar mode for this to work easily.
GainL(n) = cos( (π/4)(LFO(n) + 1) )
GainR(n) = sin( (π/4)(LFO(n) + 1) )    (14.1)

where the LFO is bipolar, sweeping between −1 and +1.
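A small sketch of Equation 14.1 in code (names are illustrative; the mapping assumes the bipolar LFO value p described above):

```cpp
#include <math.h>

// Sketch: constant-power pan gains from a bipolar pan/LFO value p in [-1, +1].
// At p = 0 both gains are 0.707 (the equal-power crossover point).
void calcPanGains(float p, float& fGainL, float& fGainR)
{
    const float kQuarterPi = 0.25f * 3.14159265f;
    fGainL = cosf(kQuarterPi * (p + 1.0f));
    fGainR = sinf(kQuarterPi * (p + 1.0f));
}
```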
14.1.1 Project: TremoloPanner
Create the project and name it “TremoloPanner.” There are no other objects or options to add.
14.1.2 TremoloPanner: GUI
We need rate and depth controls for the LFO and buttons to change the LFO type and the
operational mode of the plug-in, shown in Table 14.1.
TremoloPanner.h File
We need to add a wave table object and two gain calculation functions, one for the tremolo
Miscellaneous Plug-Ins 491
Table 14.1: TremoloPanner graphical user interface (GUI) controls.
Slider Property
Value
Slider Property
Value
Control Name
Mod Rate
Control Name
Variable Type
Variable Type
ß oat
Variable Name
m_ModRate
Variable Name
Initial Value
Initial Value
Button PropertyValue
Control NameMode
Variable Typeenum
Variable Namem_uMode
Enum StringTremolo, Panner
Button PropertyValue
Control NameLFO
Variable Type
Variable Namem_uLFO_Waveform
Enum Stringsine, saw, tri, square
TremoloPanner.cpp File
The tremolo gain calculation will:
• Multiply the LFO sample by depth/100.0.
• Add the value 1.0 - depth/100.0 to the result.
That provides the mapping we need for the effect.

float CTremoloPanner::calculateGainFactor(float fLFOSample)
{
    // first multiply the value by depth/100
    float fOutput = fLFOSample*(m_fModDepth/100.0);

    // then add the value (1 - m_fModDepth/100.0)
    fOutput += 1.0 - m_fModDepth/100.0;

    return fOutput;
}
processAudioFrame()
• Calculate the LFO value.
• Calculate the channel gain values according to the mode.
In processAudioFrame() we need to generate a new LFO value, calculate the new gain factors,
and apply them to each channel.

bool __stdcall CTremoloPanner::processAudioFrame(float* pInputBuffer,
                                                 float* pOutputBuffer,
                                                 UINT uNumInputChannels,
                                                 UINT uNumOutputChannels)
{
    // Do LEFT (MONO) Channel; there is always at least one input/one output
    float fYn = 0;  // normal output
    float fYqn = 0; // quad phase output

    // call the LFO function; we only need first output
    m_LFO.doOscillate(&fYn, &fYqn);
else
{
    // do right channel, its value
    pOutputBuffer[1] = pInputBuffer[1]*fGnR;
}

Add the controls shown in Table 14.2.
Figure 14.2: The ring modulator block diagram.
Table 14.2: RingModulator plug-in GUI controls.
Slider Property
Value
Slider Property
Value
Control Name
Mod Freq
Control Name
Variable Type
Variable Type
ß oat
Variable Namem_fModFrequencyVariable Name
Initial Value
Initial Value
14.2.3 RingModulator.h File
We only need to add a single wave table object to implement the carrier oscillator.
// Add your code here: -------------------------------------------------- //
// the Carrier Oscillator
496 Chapter 14
bool __stdcall CRingModulator::userInterfaceChange(int nControlIndex)
{
Miscellaneous Plug-Ins 497
14.3 Design a Wave Shaper Plug-In
Figure 14.3 shows the arctangent wave shaping function y = f(x), where k controls the
amplitude of the input value and thus the amount of nonlinear processing applied. The exact
equation is shown in Equation 14.2, where a normalization factor has been added to restrict
the output to the range of −1 to +1.
You can see in Figure 14.3 that for k = 1, the input/output relationship is nearly linear. As k
increases, the S-shaped curve emerges and adds gain. For example, with k = 5 an input of
0.2 produces an output of about 0.6.

y(n) = arctan(k·x(n)) / arctan(k)    (14.2)
inversion. We will design a plug-in that will implement the arctangent wave shaping and
allow control over:
• The k-value for both the positive and negative halves of the input signal
• The number of stages in series, up to 10
• Inverting or not inverting every other stage when cascaded
14.3.1 Project: WaveShaper
Create a project and name it "WaveShaper." There are no other objects or options to add.
14.3.2 WaveShaper: GUI
Add the controls in
Table
14.3 to implement the GUI we will need for the WaveShaper
Table 14.3: WaveShaper plug-in GUI controls.
Slider Property
Value
Slider Property
Value
Control Name
+Atan Exp
Control Name
ÐAtan Exp
Variable Type
Variable Type
ß oat
Variable Name
m_fArcTanKPos
Variable Name
m_fArcTanKNeg
0.10
0.10
Initial Value
Initial Value
Slider Property
Value
Slider Property
Value
Control Name
Stages
Control Name
Invert Stages
Variable Type
Variable Type
Variable Name
m_nStages
Variable Name
m_uInvertStages
Initial Value
OFF,ON
WaveShaper.h File
There's nothing to do in the .h file as we require no additional variables, memory storage,
or functions.

bool __stdcall CWaveShaper::processAudioFrame(float* pInputBuffer,
                                              float* pOutputBuffer,
                                              UINT uNumInputChannels,
                                              UINT uNumOutputChannels)
{
    // Do LEFT (MONO) Channel; there is always at least one input/one output
    // (INSERT Effect)
    float f_xn = pInputBuffer[0];

    // cascaded stages
    for(int i=0; i<m_nStages; i++)
    {
        if(f_xn >= 0)
            f_xn = (1.0/atan(m_fArcTanKPos))*atan(m_fArcTanKPos*f_xn);
        else
            f_xn = (1.0/atan(m_fArcTanKNeg))*atan(m_fArcTanKNeg*f_xn);
    }
Build and test the plug-in. Note that in order to fully replicate a tube circuit you will need
much more than just the distortion component, as tube circuits are amplitude sensitive and
band limited. However, this plug-in is a good place to start for experimenting with nonlinear
Bibliography
Black, H. S. 1953. Modulation Theory. New York: Van Nostrand-Reinhold.
Dodge, C. and T. Jerse. 1997. Computer Music Synthesis, Composition and Performance, Chapter 4. New York: Schirmer.
Roads, C. 1996. The Computer Music Tutorial, Chapter 2. Cambridge, MA: The MIT Press.
U.S. Navy. 1973. Basic Electronics, Rate Training Manual NAVPES 10087-C. New York: Dover Publications.
APPENDIX A
The VST and AU Plug-In APIs
Once you learn how to design plug-ins with RackAFX you will find that learning other
plug-in application programming interfaces (APIs) is fairly straightforward. The main
reason is that most of the plug-in APIs share very similar structuring and function calls as
RackAFX. An analogy might be learning to fly: once you have the fundamentals of solo
flight internalized, learning to fly different types of airplanes is a matter of understanding
the peculiarities of each one. You don't need to worry about the principles of lift, drag,
pitch, and yaw after you've mastered them. The current popular plug-in APIs include
Steinberg's Virtual Studio Technology (VST), Apple Computer's Audio Units (AU), and
Avid's Avid Audio eXtension (AAX) formats. If you want to write for these APIs, you can
start by making your RackAFX project VST compatible, as shown in Figure A.1. The
resulting DLL will work with both RackAFX and any Windows VST client. Just copy the
DLL from your /PlugIns folder to the designated VST folder for the client.
The GUI options are as follows:
• Use VST client's GUI: VST clients will provide you with a default GUI whose appearance
varies from a simple bank of sliders to more elaborate GUIs. The client will query your
plug-in for control information; you do not have to provide any extra code for this option.
• Use RackAFX custom GUI: If you have the proper version of RackAFX, this button
will be enabled. Choosing this option will allow the GUI you create with GUI
designer (Appendix B) to be used by the VST client. It will look and work identically
to the one you create in RackAFX. You do not have to provide any extra code for
this option.
• I will write my own GUI: If you have the skills to do this, then you can check this box.
See Appendix B for more information about writing your own GUI.
Figure A.1: The VST Compatible switch reveals three options for the GUI.
We will see this in Section A.3 when we compare the different API function calls. However,
you need to know a bit more about how the plug-in functions are organized first.
you are really creating a new C++ object that is added to a linked list. The linked list is a
list of CUICtrl objects. A CUICtrl object can represent one of the following:
• Slider
• Button bank
• Radio button bank
RackAFX automatically sorts the GUI component objects by type. The sliders and radio
button banks are compared with the other APIs in Table A.2.
Table A.1: Comparison of base classes in the APIs.
VST    AudioEffect, AudioEffectX
AU     AUBase, AUEffectBase, AUKernelBase
RackAFX automatically groups all the legal VST controls at the top of the control list. It
Table A.2: Comparison of some common functions in the APIs.
Function                 VST                 AU
Instantiation            AudioEffectX()      AUEffectBase(), AUKernelBase()
Destruction              ~AudioEffectX()     ~AUEffectBase(), ~AUKernelBase()
prepareForPlay()         resume()
userInterfaceChange()    updateDisplay()

VST and AU also allow you to design your own GUI. VST provides a GUI class of objects
that you can optionally use in your coding that allows for cross-platform compatibility. AU
only runs on the Mac and does not provide a cross-platform GUI library.
A.4.1 Default GUI
First, you define an enumerated list of unsigned integer-type (UINT) constant values in your
Plugin.h file, where "Plugin" is the name and you have derived it from the AudioEffectX
base class. These constants are the control ID values the host will use when querying about,
getting, and setting your plug-in's parameters.
All default VST controls use values from 0.0 to 1.0 regardless of how they map in your actual
plug-in. You must write the cooking and uncooking functions that will convert your internal
variable values to and from values from 0.0 to 1.0.
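A minimal sketch of such cooking/uncooking helpers is shown below; the linear mapping is an assumption (log or other mappings are also common), and the names are illustrative:

```cpp
// Sketch: convert between the host's normalized 0.0-1.0 value and a real-world range.
float cookParameter(float fNormalized, float fMin, float fMax)
{
    return fMin + fNormalized * (fMax - fMin);     // host value -> plug-in value
}

float uncookParameter(float fValue, float fMin, float fMax)
{
    return (fValue - fMin) / (fMax - fMin);        // plug-in value -> host value
}
```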
A.4.2 Signal Processing
You will notice that you've already had to write a lot of code just to deal with the UI; you
have not written any code to do any meaningful processing. In RackAFX, you alter three
functions:
1. Constructor
2. prepareForPlay()
3. processAudioFrame()
Constructor
• Init all variables.

The VST
and AU
Plug-In APIs 511
from each channel. The frames arrive as pointers to buffers. In VST, the audio is processed
buf
of frames, rather than single frames.
A.3 shows a two-channel in and
Input
inputs[0
inputs[1
Righ
outputs[0
outputs[1
Righ
512 Appendix A
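For orientation, here is a rough sketch of a VST 2.x-style processReplacing() that walks the stereo buffers one frame at a time and hands each frame to a per-frame routine (such as a wrapped RackAFX plug-in object). The structure and names are illustrative assumptions, not a complete SDK example:

```cpp
// Sketch: buffer-of-frames processing in the VST 2.x style.
void processReplacing(float** inputs, float** outputs, int sampleFrames)
{
    float* inL  = inputs[0];
    float* inR  = inputs[1];
    float* outL = outputs[0];
    float* outR = outputs[1];

    for (int frame = 0; frame < sampleFrames; frame++)
    {
        float inFrame[2]  = { inL[frame], inR[frame] };
        float outFrame[2] = { 0.0f, 0.0f };

        // per-frame processing would go here, e.g.:
        // m_Plugin.processAudioFrame(inFrame, outFrame, 2, 2);

        outL[frame] = outFrame[0];
        outR[frame] = outFrame[1];
        // the loop counter above is the "frame counter" being incremented
    }
}
```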
// inc frame counter

You can now see the similarities and differences in RackAFX and VST:
• There's more overhead for dealing with the GUI: even if you write your own GUI, you
must provide the default implementation functions because not all VST clients have
custom GUI capability.
A.5 VST Plug-In with RackAFX Wrapping
To wrap the RackAFX plug-in you first need to create a new AudioEffectX derived class.
Then, you make one of the member variables a RackAFX plug-in object. Suppose you have a
delay plug-in named "CStereoDelay" and you want to embed it into the VST plug-in. In the
.h file you simply declare an instance of it: CStereoDelay m_Delay.
A.5.1 Default GUI
You still need to fill in those five VST functions, but you will be using your plug-in to supply
the underlying values; the GUI designer (Appendix B) sliders and radio button banks map
over directly.

CDelay::CDelay(audioMasterCallback audioMaster)
    : AudioEffectX(audioMaster,

A.6 AU Overview
GUI. AU plug-ins are another order of magnitude in complexity. The actual audio processing
is handled in the kernel object.
A.6.1 Default GUI
Suppose we have a simple volume plug-in with just one volume control. AU calls a control a
parameter.

A.6.2 Signal Processing
The AU processing function is simply named Process(). The audio data moves in buffers. The
buffers can contain any arbitrary number of channels, with the simplest systems being 1 × 1
or equal input/output counts. Although the inNumChannels variable is passed to the function,
it is not needed for these simple cases.

// call your processing function
doProcess(inputs, outputs, inFramesToProcess);

In this case, your processing function is just like the VST processReplacing(): it is taking
pointers to buffer pointers as arguments, then it would split out the data as in the VST version.
APPENDIX B
More RackAFX Controls and GUI Designer

RackAFX has several optional graphical user interface (GUI) controls that were not covered in the main chapters. The LCD control is shown in Figure B.2 with three controls embedded. You can edit the controls, remove them, or move them up and down the list.

You treat these embedded slider controls just as any other slider, so there's nothing to add to your normal operational code. When you run your plug-in, use the alpha wheel to select one of the controls and the value knob to adjust its value. Note there is currently no edit control connected to the LCD for manual entry of control values.

Figure B.1: The New button pops up the slider properties to add a control to the LCD.
Figure B.2: The LCD is loaded with three controls.
Figure B.3 shows the LCD control in use. An alternative way to use the LCD control is for global operations (settings that affect the effect as a whole).
The joystick's mix ratios are not linear but rather exponential. In RackAFX, you can use the vector joystick to generate these mixes. Figure B.5 shows the vector joystick and its surrounding controls. The joystick field is arranged diagonally, with its four corners labeled A, B, C, and D in clockwise order; this is in keeping with Dave Smith's original lingo. As the joystick moves around, a new set of mix values is generated. In Figure B.7, the AC-mix is 0.25 and the BD-mix is 0.75.

By sweeping around the field, you can generate many different combinations.
/* joystickControlChange
   Indicates the user moved the joystick point; the variables are the relative
   mixes of each axis; the values will add up to 1.0

            B
            |
        A   x   C
            |
            D

   The point in the very center (x) would be:
       fControlA = 0.25
       fControlB = 0.25
       fControlC = 0.25
       fControlD = 0.25

   AC Mix = projection on X Axis (0 - 1)
   BD Mix = projection on Y Axis (0 - 1)
*/
Figure B.6: The exponential mix ratios are evident in the vector joystick.
Figure B.7: This position results in the 50/50 mix of A/B only. AC-mix = 0.25, BD-mix = 0.75.

bool __stdcall CVolume::joystickControlChange(float fControlA,
                                              float fControlB,
                                              float fControlC,
                                              float fControlD,
                                              float fACMix,
                                              float fBDMix)
{
    // add your code here
    return true;
}
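As one illustration of what could go in that body, a hypothetical four-source morphing plug-in might use the corner mixes directly as output gains (the CVectorSynth class and gain members are assumptions):

    bool __stdcall CVectorSynth::joystickControlChange(float fControlA, float fControlB,
                                                       float fControlC, float fControlD,
                                                       float fACMix,    float fBDMix)
    {
        // use the relative corner mixes as gains for four sources
        m_fGainA = fControlA;
        m_fGainB = fControlB;
        m_fGainC = fControlC;
        m_fGainD = fControlD;
        return true;
    }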
In Figure B.5 you can also see four drop list controls to the right of the joystick. I have linked each drop list to one corner of the joystick; in Figure B.5 the A-corner is the "SINE" corner. The drop lists also allow the user to change the meaning of each apex in the plug-in by selecting different values.
RackAFX is updated regularly with new features, so always check the latest additions at the website. Depending on which version you have, the GUI designer may differ slightly from that shown in Figure B.8. The flow of operations is as follows:
1. In prototype view (the main RackAFX view), you assemble your plug-in. You create controls, give them min, max, and initial values, connect them to variables, and so on. Because this is a development mode, you will probably change some controls, add or remove others, and iterate on the design.
2. When the plug-in is working, you switch to the GUI designer view shown in Figure B.8.
3. You drag and drop controls from the left side and arrange them however you like on the surface. Because they have transparent backgrounds, they can be overlapped.
4. For the slider, radio button, and knob controls, you must connect the control with the variable in your plug-in that it will control.
Figure B.9 shows the GUI designer after one knob control has been placed and mapped to the pre-delay time. A second knob, intended for the damping control, has just been dropped next to it. Clicking on the knob itself pops up another customization box. Here you can do the following:
• Connect the knob to a variable via the drop list.
• Change the appearance of the knob and dot combination.
• Change the edit box behavior, hide it, or change its colors.
These customization dialogs are shown in Figures B.10 and B.11. The current GUI designer rules are as follows:
• For sliders, you link to a control variable, just like the knobs. You can customize many aspects of the slider control's appearance.
• When you run out of continuous controls (sliders), you can't drag any more knob or slider controls onto the surface.
Figure B.9: The GUI designer after one knob control has been placed and mapped to the pre-delay time.
Figure B.11: The Radio and Assignable button GUI customization dialog boxes.
Figure B.12: The finished reverb GUI.

Figure B.12 shows my finished reverb GUI. I chose the white background just to make it easier to see in print. After you are done, go back to the prototype view and rebuild your plug-in. You need this rebuild to embed the new GUI information into your code. The GUI styles are coded into UINT arrays in your control variables as well as the plug-in object.
If you have enabled your plug-in to be compiled as a Virtual Studio Technology (VST) plug-in (Appendix A), then your RackAFX custom GUI will be available in any Windows VST client that supports custom GUIs (which is just about all of them). You launch it in whatever way the client digital audio workstation (DAW) requires.
Figure B.13: The reverb GUI in action!