Before we EQ, let’s just observe the vocals - vocals are complex since they cover a wide array of frequencies.
In the lows, we have the fundamental, or the vocal’s lowest note - typically between 75Hz and 300Hz, depending on which octave the vocal is singing in. Anything below this fundamental is not needed and can be attenuated, but we’ll cover that in a moment.
From this fundamental, we have multiple overtones or harmonics - these are whole-number multiples of the fundamental. For example, if the fundamental is an A2 note - which has a frequency of 110Hz - then the 2nd order harmonic, or A3, has a frequency of 220Hz. The 4th harmonic lands on A4 at 440Hz, the 8th on A5 at 880Hz, and so on.
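Since harmonics are just multiples of the fundamental, the arithmetic above is easy to sketch. Here's a minimal illustration using the A2 example - the values come straight from the note frequencies, not from any plugin:

```python
# Harmonic series of an A2 fundamental (110 Hz): each harmonic
# is a whole-number multiple of the fundamental.
fundamental = 110.0  # A2, in Hz

harmonics = [fundamental * n for n in range(1, 9)]
# [110.0, 220.0, 330.0, 440.0, 550.0, 660.0, 770.0, 880.0]

# The octaves named above (A3, A4, A5) are the harmonics whose
# multiple is a power of two: 2x, 4x, 8x the fundamental.
octaves = [fundamental * 2 ** k for k in range(4)]
# [110.0, 220.0, 440.0, 880.0]  -> A2, A3, A4, A5

print(harmonics)
print(octaves)
```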
So, these musical or tonal aspects of the vocal make up a significant amount of the frequency range - but not all aspects of the vocal are musical. We’ll have sibilance or ess sounds and consonants, such as Tee and Kay sounds, which occupy the high-frequency ranges. Additionally, we’ll have vowel articulation, which occurs around 500-1000Hz.
Nasal congestion creates a resonance between 700Hz and 1.3kHz, while the chest cavity of the singer creates a resonance in the low-mids.
All of these elements vary depending on the singer, making equalizing vocals something that will vary from performance to performance. But knowing where each element is located is a good starting point that will help when we begin to EQ.
Let’s quickly listen to the vocal so that you can hear what we’re working with.
Watch the video to learn more >
Masking is the interaction between frequency ranges - one range covering up, or masking, another. So, within our vocal, the low end can mask, or cover up, the higher, more clarifying ranges.
This is especially true of 250Hz, which can quickly lower the perceived level of roughly 2.5 - 5kHz.
This masking doesn’t just occur within the vocal but also between the vocal and other instruments. For example, if the bass guitar has a high level of 250Hz, this will mask or cover up 2.5 - 5kHz in the vocal when both are present at the same time.
That said, we should never equalize a vocal while it's soloed - we need to monitor the interaction between the vocal and other instruments.
Typically, a great starting point for equalizing vocals is attenuating lows. So, within the context of the full mix, I’ll use a HP filter to attenuate everything that’s below the vocal’s fundamental. Then, I’ll center a bell filter on 250Hz and subtly attenuate it until the vocal sounds clearer and more balanced.
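As a rough sketch of that starting point, here are the two filters as standard RBJ-cookbook biquads - a high-pass below an assumed 100Hz fundamental and a subtle -3dB bell at 250Hz. The sample rate, cutoff, gain, and Q values are illustrative, not a prescription:

```python
import math

fs = 48000  # assumed sample rate

def highpass(f0, q):
    """RBJ-cookbook high-pass biquad coefficients."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    c = math.cos(w0)
    b = [(1 + c) / 2, -(1 + c), (1 + c) / 2]
    a = [1 + alpha, -2 * c, 1 - alpha]
    return b, a

def bell(f0, gain_db, q):
    """RBJ-cookbook peaking (bell) biquad coefficients."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    c = math.cos(w0)
    b = [1 + alpha * A, -2 * c, 1 - alpha * A]
    a = [1 + alpha / A, -2 * c, 1 - alpha / A]
    return b, a

def gain_db_at(b, a, f):
    """Magnitude response of a biquad at frequency f, in dB."""
    z = complex(math.cos(2 * math.pi * f / fs), -math.sin(2 * math.pi * f / fs))
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20 * math.log10(abs(num / den))

# HP filter below a 100 Hz fundamental, plus a gentle -3 dB bell at 250 Hz
hp_b, hp_a = highpass(100, 0.7071)
bell_b, bell_a = bell(250, -3.0, 1.0)

print(round(gain_db_at(hp_b, hp_a, 50), 1))       # well below the fundamental: strongly cut
print(round(gain_db_at(bell_b, bell_a, 250), 1))  # -3.0 dB at the bell's center
print(round(gain_db_at(bell_b, bell_a, 2500), 2)) # near 0 dB: the clarity range is untouched
```

Note how the bell's attenuation is confined around 250Hz - by 2.5kHz the response is essentially flat, which is exactly why a cut there can unmask the clarity range.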
Referencing the notes from the last chapter, we’ll adjust as needed - for example, if the vocal is sounding nasally, I could find those tones between 700 - 1.3kHz and attenuate as needed. Or if the sibilance is hitting too hard, we could attenuate some of that with a narrow bell filter in the highs.
Depending on the notes sung and the overtones or harmonics, we can find an in-key note between 2.5 and 5kHz, and amplify it to increase clarity while retaining the vocal’s musicality.
Before we move on, let’s listen to the vocal’s isolated ranges and compare them to the full vocal - notice how masking affects the perceived level of particular ranges.
Then, we’ll listen to the vocal soloed and then in the context of the mix, noticing how much interference and masking alter our perception of the vocal.
Watch the video to learn more >
One incredibly important idea that doesn’t get covered enough is how other processors equalize a signal. For example, a de-esser - which is really a frequency-specific compressor, will attenuate the sibilance of a vocal. Granted, it does it dynamically or whenever the sibilance is loud enough, whereas an EQ is static or constant, but it affects the frequency response nonetheless.
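To make that concrete, here's a toy de-esser in numpy: a synthetic "vocal" (a 220Hz tone) with a burst of 6kHz "sibilance", and a frequency-specific compressor that attenuates only the 5 - 8kHz band, and only when that band gets loud. The signal, band edges, threshold, and ratio are all made-up illustration values, not a real de-esser's internals:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
voice = 0.5 * np.sin(2 * np.pi * 220 * t)   # tonal body of the "vocal"
ess = 0.4 * np.sin(2 * np.pi * 6000 * t)    # stand-in for sibilance
ess[: fs // 2] = 0.0                        # sibilance only in the second half
x = voice + ess

# Frequency-specific compression: per 10 ms block, measure the 5-8 kHz
# band and attenuate only that band when it exceeds the threshold.
hop = fs // 100                             # 10 ms blocks
threshold, ratio = 0.1, 4.0
y = x.copy()
for start in range(0, len(x), hop):
    spec = np.fft.rfft(x[start:start + hop])
    freqs = np.fft.rfftfreq(hop, 1 / fs)
    band = (freqs >= 5000) & (freqs <= 8000)
    level = np.abs(spec[band]).max() * 2 / hop      # peak band amplitude
    if level > threshold:
        over_db = 20 * np.log10(level / threshold)
        gain = 10 ** (-over_db * (1 - 1 / ratio) / 20)  # 4:1 above threshold
        spec[band] *= gain
        y[start:start + hop] = np.fft.irfft(spec, n=hop)

# The sibilant band is cut only while it is loud - the frequency
# response changes, like an EQ cut, but dynamically.
```

The first half of the signal (no sibilance) passes through untouched, while the loud 6kHz content in the second half is pulled down - dynamic, frequency-specific attenuation, exactly the behavior described above.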
Another example: if I use a compressor like the Distressor with a quick attack and release, I’ll distort the vocal’s transients. When they distort, high frequencies are generated, again altering the frequency response of the vocal.
Or, I could use an LA-2A compressor, which smoothly attenuates the vocal - its gain reduction lingers, attenuating the transients that follow and, in turn, reducing the high-frequency range.
With this in mind, I want to listen to a vocal being compressed with the 3 processors I just mentioned - first a de-esser, then a Distressor, and then an LA-2A, again in the context of the full mix. I’ll match the input and output levels so that we can hear how each compressor type alters the frequency response of the vocal.
Watch the video to learn more >
Earlier, we covered harmonics and overtones and how these make up a large portion of our vocal’s frequency response.
Saturation uses a signal's fundamental to generate additional harmonics, making the signal sound fuller but also equalizing it through the introduction of these harmonics.
For example, say I saturate a vocal, and the saturator introduces a 2nd order harmonic. This will increase the vocal’s lows to low-mids.
Or, imagine I use a frequency-specific saturator that uses a higher frequency as the fundamental - the harmonics it generates would fall in the high mids and highs.
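A quick numpy sketch of that idea: running a pure A2 sine through a simple asymmetric waveshaper generates a 2nd order harmonic at 220Hz that wasn't in the input. The x + 0.2·x² curve is purely illustrative - it is not any particular saturator's transfer function:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 110 * t)  # a pure A2 fundamental

# Asymmetric waveshaper: the squared term treats the positive and
# negative halves of the wave differently, which generates even
# harmonics - loosely in the spirit of a warm tube-style stage.
y = x + 0.2 * x ** 2

spec = np.abs(np.fft.rfft(y)) / len(y)
def level(freq_hz):
    return spec[int(round(freq_hz * len(y) / fs))]

print(level(110))  # the fundamental, ~0.5
print(level(220))  # a new 2nd order harmonic, ~0.05
print(level(330))  # this curve adds no 3rd harmonic, ~0
```

Because the new energy lands at 220Hz, the saturated vocal measures louder in the lows to low-mids than the input - saturation acting as EQ.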
Different saturator types and emulations create differing harmonic formations. For example, a warm tube emulation will create the 2nd order harmonic I mentioned a moment ago, whereas tape saturation will focus more on the low mids and mids.
Exciters create harmonics in the high-frequency range, which will make the vocal much brighter by amplifying its highs.
So let’s do this - let’s use the plugin Saturn 2 - first I’ll saturate with a warm tube setting and isolate it to the lows and low mids. Then, I’ll use a clean tape setting and isolate it to the mids. Lastly, I’ll use a clean tube setting and isolate it to the high mids and highs.
Notice that as I enable each saturation band, the frequency response of the vocal is altered - showing how saturation can and should be used to affect a vocal’s frequency response.