This Makes Equalizing Vocals A Lot Easier

Tip 1: Each Processor Equalizes

EQ is thought of as the only processor that equalizes; however, each processor in a vocal chain, including saturation, compression, reverb, delay, and so on, alters the frequency response.

Saturators and compressors introduce harmonics, or multiples of the fundamental frequencies, as well as some distortion that’s not related to the fundamental.

These harmonics add energy to the range they occupy, effectively amplifying it.
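If you want to see this in numbers, here’s a minimal Python sketch (numpy only) that runs a sine through a tanh soft clipper as a stand-in for a saturator, then reads the levels at the fundamental’s multiples. The 220 Hz fundamental and the drive amount are arbitrary assumptions for illustration, not values from any specific plugin.

```python
import numpy as np

sr = 48000                         # sample rate
t = np.arange(sr) / sr             # one second of audio
f0 = 220.0                         # assumed fundamental (roughly A3)

clean = np.sin(2 * np.pi * f0 * t)
saturated = np.tanh(3.0 * clean)   # simple symmetric soft clip (saturation stand-in)

spectrum = np.abs(np.fft.rfft(saturated)) / len(saturated)
freqs = np.fft.rfftfreq(len(saturated), 1 / sr)

# Level at the fundamental and its first few multiples (harmonics).
for n in range(1, 6):
    idx = np.argmin(np.abs(freqs - n * f0))
    level_db = 20 * np.log10(spectrum[idx] + 1e-12)
    print(f"{n} x {f0:.0f} Hz = {n * f0:.0f} Hz: {level_db:.1f} dB")
```

A symmetric shaper like tanh adds mostly odd harmonics; asymmetric saturation adds even ones too. Either way, the new energy lands at multiples of whatever note is being sung.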

Reverb creates multiple taps of the original signal that decline in amplitude over time.

The earliest taps, the ones closest to the original signal, reinforce the same frequency range as the original signal. Delay does the same thing. Of course, these taps can be equalized, but you get the idea.
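As a rough picture of what those taps are doing, here’s a tiny Python sketch of a multi-tap delay. The tap times and gains are arbitrary assumptions, and a real reverb would use far more taps plus diffusion, but the principle is the same.

```python
import numpy as np

def multi_tap(dry, sr, tap_times_s, tap_gains):
    """Sum delayed, attenuated copies of the dry signal onto itself."""
    wet = dry.copy()
    for time_s, gain in zip(tap_times_s, tap_gains):
        delay = int(time_s * sr)
        tap = np.zeros_like(dry)
        tap[delay:] = dry[:len(dry) - delay] * gain
        wet += tap
    return wet

sr = 48000
t = np.arange(sr) / sr
vocal_stand_in = np.sin(2 * np.pi * 220 * t)   # stand-in for a sung note

# Earlier taps sit close to the dry signal and stay loud; later taps decay away.
wet = multi_tap(vocal_stand_in, sr,
                tap_times_s=[0.03, 0.07, 0.15, 0.30],
                tap_gains=[0.7, 0.5, 0.3, 0.15])
```

Each tap carries the same spectral content as the dry vocal, just quieter and later, so the loud early taps reinforce exactly the ranges the vocal already emphasizes.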

If you’re equalizing at the beginning of your vocal chain and can’t understand why so much of that work seems undone by the end of the chain, that’s why.

Each processor affects the frequency response, just not as purposefully as an EQ and often in a way that’s harder to predict.

Watch the video to learn more >

Tip 2: Why EQ at the Beginning of a Chain?

With what I just said in mind, why even equalize at the beginning of a vocal chain? If it’s going to be undone, then what’s the point of starting with EQ?

In short, the first EQ in a vocal chain is used to control what is fed into the subsequent processors.

Although you can’t fully balance the vocal with the first EQ, you can ensure that certain ranges aren’t exacerbated by saturation, compression, reverb, and so on.

For example, say the original vocal has some plosives and rumble, or other unmusical and unrelated content. The first EQ lets you attenuate those aspects with an HP filter set below the fundamental frequencies.
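Here’s a quick sketch of that move, assuming scipy is available; the 80 Hz cutoff is an assumed value for a typical vocal, not a rule, and should sit just below the lowest sung fundamental.

```python
from scipy.signal import butter, sosfilt

def rumble_filter(vocal, sr, cutoff_hz=80.0):
    """Gentle 12 dB/oct high-pass below the sung fundamentals
    to clear out rumble and plosive energy."""
    sos = butter(2, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, vocal)
```

The placement is the important part: because the cutoff sits below the lowest fundamental, only the unrelated low end is removed and the sung notes pass through untouched.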

Or, maybe the low mids are way too high in amplitude - a bell filter attenuating this range will help balance them before compression.

You can also boost some frequencies that you want more of before subsequent processors. For example, if I have a compressor after the first EQ, I could apply a small boost between 2kHz and 5kHz.

This way the compressor is working a little harder on this range. When I use makeup gain after compression, that range will be boosted a little more and may have a slightly more processed sound.
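Here’s a small arithmetic sketch of why that pre-compression boost behaves this way; the threshold, ratio, boost, and makeup values are arbitrary assumptions used only to show the shape of the math.

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor curve: anything over the threshold is reduced by the ratio."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

band = -18.0      # 2kHz-5kHz band level in dB before the first EQ
boost = 2.0       # small pre-compression boost from the first EQ
makeup = 4.0      # makeup gain applied after the compressor

print(compress_db(band) + makeup)           # -15.5 dB without the boost
print(compress_db(band + boost) + makeup)   # -15.0 dB with the boost
```

The boosted range gets compressed harder (3 dB of gain reduction instead of 1.5 dB in this toy example), yet after makeup it still ends up a touch hotter, which is that slightly more processed character.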

So, the first EQ isn’t a remedy for everything that needs to be addressed with the vocal, it’s an opportunity to shape it before you build your chain.

Watch the video to learn more >

Tip 3: Dynamic EQ vs. Hz-Specific Compression

Early on in the chain, de-essing is a useful insert. You don’t want to amplify the sibilance with other processors and then have to combat a more aggressive version of it later in the chain.

I’d recommend either following your first EQ with a de-esser, or if it’s an option, using dynamic EQ to control sibilance.

In terms of CPU, adding a dynamic bell filter is comparable to adding another processor for de-essing, so what are other reasons for picking one over the other?

In short, dynamic equalization lets you pinpoint the offending frequencies more easily; however, it doesn’t let you control the reaction as well as some de-essers.

Usually with a multiband compressor or a de-esser you have some control over the attack and release, whereas this varies from one dynamic EQ to another.

Personally, I’ve found that a de-esser works better, but if you like using a dynamic EQ for de-essing, I’d recommend using 2 filters.

One narrow filter with a bandwidth of about 200Hz to 300Hz - this one can attenuate more aggressively. Then, situate a broader, more gradual bell filter in the general area of the sibilance to control it to a lesser extent.

Adjust the center frequency of the bands until you find just the right area for the sibilance. The more accurately you position the filters, the less you’ll have to attenuate to achieve the desired effect.
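If you want to picture what those two bells are doing, here’s a minimal static sketch using the standard Audio EQ Cookbook peaking biquad. A real dynamic EQ would only apply these cuts while the sibilance band is over a threshold, and the 7 kHz center plus the cut amounts are assumptions for illustration.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, sr, center_hz, gain_db, q):
    """Apply a single peaking (bell) biquad, per the Audio EQ Cookbook."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * center_hz / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

def de_ess(vocal, sr, sibilance_hz=7000.0):
    # Narrow bell: roughly 250 Hz bandwidth (Q = center / bandwidth), deeper cut.
    narrow_q = sibilance_hz / 250.0
    out = peaking_eq(vocal, sr, sibilance_hz, gain_db=-6.0, q=narrow_q)
    # Broad, gentle bell around the same area for a lighter overall cut.
    out = peaking_eq(out, sr, sibilance_hz, gain_db=-2.0, q=1.5)
    return out
```

Note how the narrow filter’s Q falls out of the 200Hz-300Hz bandwidth idea (Q = center / bandwidth), which is why it ends up so surgical at sibilance frequencies.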

Let’s listen to traditional de-essing and compare it with the technique I just covered.

Watch the video to learn more >

Tip 4: Understand Vocal Ranges

Each vocal is different, but there are patterns you can keep in mind that help a lot with decision making.

I conceptualize a vocal as having 7 parts, or distinct ranges, that hold true regardless of the vocal.

First, there’s the fundamental range.

This includes the fundamental frequency for each performed note.

The fundamental frequencies are the building blocks or foundation for everything higher in frequency.

When a fundamental frequency is distorted slightly, which is always the case with a vocalist since no one sings perfectly formed sine waves, overtones are created.

Once you understand the fundamental range, note that everything below it is not musically related. It can be mic rumble, plosives, ambient noise, or other unrelated signal.

The 2nd and 3rd harmonics are associated with fullness, warmth, and if too high in amplitude, muddiness. These are the 2nd and 3rd multiples of the fundamental.

These harmonics are high in amplitude and make up a lot of the vocal’s low mids.

Then we have our 4th, 5th, 6th, and 7th order harmonics.

The 4th and 5th typically include vowel articulation, and depending on the vocalist, can also contain the start of nasal resonances that can make a vocalist sound congested.

The 8th, 9th, 10th, and potentially 11th harmonics are another vocal formant, or a cluster that contains vowel and consonant information.

This is in the range that we’re most sensitive to, or 2-5kHz. Amplifying this range will increase perceived clarity, since the vowel and consonant info in this range will be amplified, making it easier to discern what’s being sung.
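To put rough numbers on these groups, here’s a quick sketch for one assumed fundamental of 250 Hz (roughly B3); sing a different note and every multiple shifts with it.

```python
f0 = 250.0   # assumed fundamental in Hz (roughly B3)

groups = {
    "fundamental":                   [1],
    "2nd-3rd (fullness/warmth)":     [2, 3],
    "4th-7th (vowels, nasal onset)": [4, 5, 6, 7],
    "8th-11th (clarity formant)":    [8, 9, 10, 11],
}

for name, orders in groups.items():
    lo, hi = orders[0] * f0, orders[-1] * f0
    span = f"{lo:.0f} Hz" if lo == hi else f"{lo:.0f}-{hi:.0f} Hz"
    print(f"{name}: {span}")
```

For this note the 8th through 11th multiples land at 2000-2750 Hz, right at the bottom of that 2-5kHz clarity range; a higher sung note pushes them further up into it.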

Between 5-12kHz, we have the percussive aspects of the vocal. Unlike lower frequency harmonics, these aren’t tied to the fundamental. They’re the unique aspect of the language being sung that contains consonants like T and K, esses, and other percussive information.

It’s difficult to tie these percussive elements to a note, so they’ll rarely be in key with the performance; however, they still make up a large portion of the vocal.

They’re important for articulation, but quickly become unpleasant, especially in a musical context in which the vocal plays more of a tonal role.

Lastly, we have air - everything above the percussive aspects of the vocal. These may be very high order overtones of the fundamental. They might be room reflections or any other delicate and easily masked part of the vocal.

So from low to high frequency these ranges include:

The unrelated and unmusical.

The fundamental range.

2nd and 3rd order harmonics.

Mid-order harmonics.

High-order harmonics.

Percussive, language-specific aspects.

And lastly, air.

It helps to note that from the fundamental range up through the high-order harmonics, everything is related. Almost every frequency is tied to the original sung note and its fundamental. When the note changes, they all move in unison, because they are inherently connected.

If you were to sing a purely tonal note, that is, a note without any percussive aspects, like an “ooh” sound, that’s all you’d have.

It’s the addition of consonants and sibilance that generates a lot of the 5-12kHz range, and the combination of the vocalist and the recording environment that generates both the unmusical and unrelated content and a lot of the air range.

Let’s take a listen to a vocal as I isolate and affect these ranges.

Notice how each one affects the vocal in the intended way, since we’re not dealing with strict frequency ranges but instead addressing what’s actually included in the performance.

Watch the video to learn more >