For the first 2 chapters, let’s cover something that isn’t discussed too often - masking.
Masking, particularly spectral masking, occurs when lower, more powerful frequencies mask or cover up higher ones - that said, it’s easy to remedy by dipping some of these low frequencies, typically between 200 and 300Hz, and boosting some of the ones that were covered up, usually 2 - 3kHz.
If we look quickly at this graph and observe the white lines, we can see how a frequency at 250Hz can cover up other frequencies. So let’s listen to the changes we made with our EQ, and notice how the vocal becomes clearer.
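To make those two moves concrete, here’s a minimal sketch of the remedy as peaking filters, using the well-known RBJ cookbook biquad formulas: a gentle cut near 250Hz and a boost near 2.5kHz. The exact center frequencies, gains, and Q values here are illustrative assumptions, not prescriptions.

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(f0, gain_db, q, fs):
    """RBJ-cookbook peaking-EQ biquad: returns normalized (b, a) coefficients."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

FS = 48000
# Dip the masking region (~250Hz) and lift the masked region (~2.5kHz).
cut_b, cut_a = peaking_eq(250, -3.0, 1.4, FS)      # illustrative -3dB cut
boost_b, boost_a = peaking_eq(2500, 2.0, 1.0, FS)  # illustrative +2dB boost

def gain_db_at(f, b, a, fs=FS):
    """Filter magnitude in dB at a single frequency f (Hz)."""
    _, h = freqz(b, a, worN=[2 * np.pi * f / fs])
    return 20 * np.log10(np.abs(h[0]))

print(round(gain_db_at(250, cut_b, cut_a), 2))      # -3.0
print(round(gain_db_at(2500, boost_b, boost_a), 2))  # 2.0
```

A peaking biquad reaches exactly its specified gain at the center frequency, which is what the two printed values confirm.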
Sometimes your vocal is being masked or is masking other instruments - we can more easily observe this with the Pro-Q 3 by selecting the competing signal as the external side chain. In the analyzer section, we’ll enable collision detection, letting us observe where excessive frequency overlap is occurring.
If we want our vocal to sit back in the mix more, we can attenuate it at these points of overlap. If we want it to stick out, we could amplify the vocal at these frequencies.
Let’s attenuate this vocal and then amplify it at these frequencies, and notice how it gets pulled back, and then brought forward.
Although we don’t often consider music theory when equalizing, frequencies can be treated as notes on a scale. If we know the key of our song, or if we can observe our fundamental frequency, we can amplify in-key frequencies, making the vocal more musical and in tune.
Let’s listen to a vocal that’s had its in-key frequencies accented, and notice if it has a slightly more musical sound to it.
Just like the previous chapter in which we found in-key notes, we can also find out-of-key notes that can make the vocal sound out of tune or less musical. I’m not great at music theory, so I typically use an instrument to test specific notes.
Then I log which ones sound out of key with the song, and I dip some of these slightly on the vocal. Let’s listen to a vocal with both in-key notes accented and out-of-key notes attenuated, to see if it improves the vocal.
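If you’d rather not test notes on an instrument, the note-to-frequency bookkeeping can be sketched in a few lines using the 12-tone equal-temperament formula f = 440 × 2^((n − 69) / 12), where n is the MIDI note number and 69 is A4. C major and the note range below are just example choices.

```python
A4_MIDI, A4_HZ = 69, 440.0
MAJOR_STEPS = {0, 2, 4, 5, 7, 9, 11}  # semitone offsets of a major scale

def midi_to_hz(note):
    """12-TET: each semitone is a factor of 2**(1/12)."""
    return A4_HZ * 2 ** ((note - A4_MIDI) / 12)

def key_frequencies(root_pc, lo=48, hi=84):
    """Split MIDI notes lo..hi (C3-C6 here) into in-key and out-of-key
    frequencies for a major key whose root pitch class is root_pc (C=0, C#=1, ...)."""
    in_key, out_key = [], []
    for n in range(lo, hi + 1):
        bucket = in_key if (n - root_pc) % 12 in MAJOR_STEPS else out_key
        bucket.append(round(midi_to_hz(n), 2))
    return in_key, out_key

# C major: candidate frequencies to accent vs. dip.
boost_these, dip_these = key_frequencies(0)
print(boost_these[:3])  # [130.81, 146.83, 164.81] - C3, D3, E3
```

Once logged, the in-key list gives candidate center frequencies to accent with narrow bells, and the out-of-key list gives candidates to dip, as described above.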
Resonances are clusters of frequencies that can sound good, like harmonics - but they can also be out of key or too powerful for the vocal to sound balanced. These resonances often change frequency, so it helps to try a dynamic resonance attenuator, like Soothe 2.
With it, I’ll select a range of frequencies and utilize subtle settings to reduce resonances without affecting the timbre. Let’s listen and consider how this plugin achieves what a typical EQ wouldn’t be capable of.
If we want to address masking in our vocals in a more accurate and dynamic way, we could use Soundtheory’s Gullfoss EQ. I’ll isolate the processing to the low-mids through high-mids, and subtly increase the Recover and Tame controls until I feel that I have a balanced sound.
Although this plugin and Soothe 2, which I mentioned in the last chapter, are great, please don’t feel like you need them in order to create balanced vocals.
Let’s listen to this plugin and notice how it balances the sound.
When we record vocals, we’re often singing into 1 of 4 main microphone capsules, each with a unique frequency response. These are the K67, the K47, the CK12, and the less common M7, all of which can be emulated using an EQ if we know the frequency response.
This can be helpful if we want our vocals to have a specific and widely recognizable sound. That said, we need to keep in mind that our microphone is going to have its own frequency response, so we’ll need to compensate for that when possible.
Let’s listen to emulations of the K47 and K67 and see if the vocal becomes more indicative of classic or more modern recordings.
Although the capsule is the most influential factor in shaping the frequency response, the mic’s circuit, body, and other elements also play a role. For example, even though the U87 uses a K67 capsule, its circuit includes a de-emphasis EQ that attenuates some high frequencies, giving it a unique sound.
Similarly, the widely popular Sony C800 uses a CK12 capsule but imparts small changes. So let’s emulate the Neumann U87 and then the Sony C800, and consider how you could use these settings on your vocals for various purposes.
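As a rough illustration of the de-emphasis idea, a top-end rolloff like the one described can be approximated with a high-shelf cut. This is an RBJ-cookbook high-shelf sketch; the corner frequency and depth are made-up example values, not a measured U87 response.

```python
import numpy as np
from scipy.signal import freqz

def high_shelf(f0, gain_db, fs):
    """RBJ-cookbook high-shelf biquad (shelf slope S = 1), normalized (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    cosw, alpha = np.cos(w0), np.sin(w0) / 2 * np.sqrt(2)
    b = np.array([
        A * ((A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha),
        -2 * A * ((A - 1) + (A + 1) * cosw),
        A * ((A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha),
    ])
    a = np.array([
        (A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha,
        2 * ((A - 1) - (A + 1) * cosw),
        (A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha,
    ])
    return b / a[0], a / a[0]

FS = 48000
# Hypothetical de-emphasis: ease the top end down by 3dB above ~8kHz.
b, a = high_shelf(8000, -3.0, FS)
_, h = freqz(b, a, worN=[np.pi])  # response at Nyquist (24kHz here)
print(round(20 * np.log10(abs(h[0])), 2))  # -3.0
```

The shelf reaches its full specified gain at the top of the spectrum, so the printed Nyquist value matches the -3dB setting.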
For these last 2 chapters, let’s cover important vocal frequencies, and look at a helpful EQ curve.
First off, we can cut everything up to the fundamental, since anything below it will be unrelated to the voice - then we can boost or dip our fundamental and 2nd harmonic to create a warmer or clearer sound, respectively. Next, let’s boost or cut the harmonic around 500Hz to affect vowel pronunciation.
Amplifying it will bring the vocal forward, while attenuating it will make it sound a little further back. Nasal resonances or a nasal tone will be found between 700Hz and 1.3kHz depending on the singer - odds are you’ll want to dip this if it’s present.
Our ears are most sensitive between 3 and 5kHz, since they evolved to prioritize the human voice, so if we want the vocal to cut through a mix, we can boost this region.
Sibilance sits between 5 and 10kHz, but should probably be left to a de-esser.
Lastly, we can amplify above 12kHz to add some air.
Let’s listen to the vocal with this curve.
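Collected as data, the curve above might look like the sketch below. The band centers derived from the fundamental follow this chapter; the gain and Q-free placeholder values are assumptions to adjust by ear, and the 150Hz fundamental is just an example.

```python
def vocal_eq_bands(f0):
    """Sketch of the chapter's vocal EQ curve as a band list for a
    fundamental f0 (Hz). Gain values are illustrative starting points."""
    return [
        {"type": "highpass",  "freq": f0},                     # cut below the fundamental
        {"type": "peak",      "freq": f0,      "gain_db": 1.5},  # fundamental: warmth (dip for clarity)
        {"type": "peak",      "freq": 2 * f0,  "gain_db": 1.0},  # 2nd harmonic: clarity
        {"type": "peak",      "freq": 500,     "gain_db": 1.0},  # vowels: forward (cut to push back)
        {"type": "peak",      "freq": 1000,    "gain_db": -1.5}, # nasal dip, 700Hz-1.3kHz, if present
        {"type": "peak",      "freq": 4000,    "gain_db": 2.0},  # presence: 3-5kHz sensitivity
        {"type": "deesser",   "freq": 7500,    "gain_db": 0.0},  # sibilance 5-10kHz: leave to a de-esser
        {"type": "highshelf", "freq": 12000,   "gain_db": 1.5},  # air
    ]

for band in vocal_eq_bands(150):  # example fundamental of 150Hz
    print(band)
```

Keeping the curve as data like this makes it easy to re-derive the fundamental-dependent bands for each singer while leaving the fixed regions alone.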
All of the important frequencies we mentioned in the last chapter apply when equalizing a dialogue recording. The main difference will be in how we treat sibilance, since attenuating it directly makes more sense - also, a boost to our fundamental can be even more aggressive.
Let’s listen to a dialogue recording I made without any processing, and then add this EQ.