When mixing frequencies, it helps to understand how various processors, from EQs to saturators, and even compressors, will affect the frequency response of the signal. It’s also helpful to know the important frequencies of various instruments and instrument groups - such as drums and lead vocals.
For the first 3 chapters, let’s look at some lesser-discussed technical aspects of mixing frequencies.
Masking occurs when powerful lower frequencies cover up, or cause phase cancellation with, less powerful higher frequencies. With this in mind, we can amplify or attenuate some of these lower frequencies to either increase or decrease the effect masking is having on our higher frequencies.
For example, I could dip 200Hz on my vocal to reduce masking in the high frequencies, or I could amplify the higher frequencies to achieve a similar effect.
Let’s listen to how dipping 200Hz in a vocal suddenly makes the higher frequencies open up.
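If you'd like to see that dip outside the DAW, here's a minimal Python sketch using NumPy and SciPy. It assumes a standard RBJ-cookbook peaking (bell) filter; the -4dB depth and Q of 1 are illustrative choices for this demo, not a rule:

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(fs, f0, gain_db, q):
    """RBJ cookbook peaking (bell) biquad. Returns normalized (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
b, a = peaking_eq(fs, f0=200, gain_db=-4, q=1.0)  # -4 dB dip at 200 Hz
w, h = freqz(b, a, worN=[200, 3000], fs=fs)
print(f"gain at 200 Hz: {20 * np.log10(abs(h[0])):.2f} dB")  # ~ -4 dB
print(f"gain at 3 kHz:  {20 * np.log10(abs(h[1])):.2f} dB")  # ~ 0 dB
```

The response drops by the requested 4dB at the dip's center while 3kHz is essentially untouched - the highs stay put, they're just no longer being masked.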
Whenever we use a zero-latency equalizer, we need to keep in mind how it will in turn affect the phase, which subsequently affects the frequency response. For example, if we observe the phase response of a zero-latency filter, we'll notice that it shifts around the frequencies we've altered.
This isn't a bad thing, but it helps to know that unexpected changes are being made. Let's listen to filters in zero-latency mode, compare them to linear phase, and see if we can notice a difference.
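To put numbers on that phase difference, here's a small SciPy sketch comparing the group delay (the frequency-dependent delay caused by phase shift) of a minimum-phase IIR high-pass - the zero-latency case - against a symmetric, linear-phase FIR. The 100Hz cutoff and filter orders are arbitrary assumptions for the demo:

```python
import numpy as np
from scipy.signal import butter, firwin, group_delay

fs = 48000
# Zero-latency (minimum-phase) high-pass: an IIR Butterworth filter
b_iir, a_iir = butter(4, 100, btype="highpass", fs=fs)
# Linear-phase high-pass: a symmetric FIR (odd length, so type I)
taps = firwin(1001, 100, pass_zero=False, fs=fs)

freqs = np.array([200.0, 1000.0, 5000.0])
_, gd_iir = group_delay((b_iir, a_iir), w=freqs, fs=fs)
_, gd_fir = group_delay((taps, [1.0]), w=freqs, fs=fs)
print("IIR group delay (samples):", np.round(gd_iir, 2))  # varies with frequency
print("FIR group delay (samples):", np.round(gd_fir, 2))  # constant 500 samples
```

The FIR delays every frequency by the same 500 samples, while the IIR's delay changes with frequency near its cutoff - that frequency-dependent phase shift is the "unexpected change" zero-latency filters introduce.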
If we consider the Fletcher-Munson curves and compare them to the response of most modern mixes, we can begin to see a pattern in how frequencies are mixed. Let's say we're listening to music at a normal loudness - our ears would need roughly 10 to 15% more bass than high mids if we want to hear both at the same perceived loudness. If we observe our analyzer, we'll notice a similar downward slope from left to right, in which the bass frequencies have been mixed louder, and the high mids a little lower.
Although each mix is different, this downward slope will typically exist due to how we perceive frequencies. Let's use a tilt EQ to shift the spectrum in the opposite direction, and notice how quickly the mix starts to sound unbalanced.
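As a rough illustration of what a tilt EQ does, here's a frequency-domain sketch in Python. The 632Hz pivot (roughly the geometric center of the audible band) and the 3dB-per-octave slope are assumptions for the demo, not a standard:

```python
import numpy as np

def tilt_eq(x, fs, pivot_hz=632.0, db_per_octave=3.0):
    """Frequency-domain tilt EQ sketch: gain rises by db_per_octave
    above the pivot and falls below it (a negative slope darkens)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    f[0] = f[1]  # dodge log2(0) on the DC bin
    gain_db = db_per_octave * np.log2(f / pivot_hz)
    return np.fft.irfft(X * 10 ** (gain_db / 20), n=len(x))

fs = 48000
t = np.arange(fs) / fs
rms = lambda s: np.sqrt(np.mean(s ** 2))
changes = {}
for hz in (100, 4000):
    tone = np.sin(2 * np.pi * hz * t)
    changes[hz] = 20 * np.log10(rms(tilt_eq(tone, fs)) / rms(tone))
    print(f"{hz} Hz: {changes[hz]:+.1f} dB")  # 100 Hz is cut, 4 kHz is boosted
```

With a positive slope the bass is pulled down and the highs pushed up - the opposite of the natural downward slope, which is why it sounds unbalanced so quickly.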
Next, let’s spend some chapters covering more practical info.
When mixing vocal frequencies, know that you can set a high-pass filter just below the vocal's fundamental. Roughly 200Hz is where you get the vocal's warmth, 500Hz is where you'll find vowel pronunciation, 750Hz to 1kHz is where nasally tones sit, and 2kHz to 3kHz will be the vocal's presence.
Above that, from 5 to 12kHz, we have potential vocal sibilance, and anything higher is considered air, which can sound unnatural when boosted, but pleasant nonetheless.
Let’s take a listen to a vocal and adjust these areas.
A kick drum's sub will be between 40 to 60Hz, its most percussive aspect at 80 to 100Hz, and the beater and snap will be between 1.5kHz and 3.5kHz. For the snare, 150Hz to 250Hz is the body, 1.2kHz to 3kHz is the snap, and above that is buzz and air.
Cymbals will vary but are typically higher in frequencies, and toms are usually tuned to specific notes, with their fundamentals and overtones in the mids to high mids.
Let’s take a listen to a full drum loop with these areas affected.
Bass and kick can be difficult to mix; however, there's a lot of room that we can make for the two. Kick frequencies are typically static, so the kick's fundamental will remain the same, but basses change notes, and their fundamental changes whenever a new note is played.
A good way to blend the two is to dip the kick's fundamental on the bass track, and then add some harmonic distortion to the bass to psycho-acoustically replace that frequency.
Let's take a listen to this strategy and see if it improves the balance between the two.
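Here's a toy version of that strategy in Python, assuming a 60Hz kick fundamental and a bass stand-in built from two sine tones - real material is messier, but the mechanics are the same:

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 48000

def tone_level(x, hz):
    """Magnitude at hz, measured over the last half second (steady state)."""
    spec = np.abs(np.fft.rfft(x[-fs // 2:]))
    return spec[hz // 2]  # bins are spaced 2 Hz apart

# Bass stand-in: a 60 Hz fundamental plus a 120 Hz overtone
t = np.arange(fs) / fs
bass = 0.8 * np.sin(2 * np.pi * 60 * t) + 0.4 * np.sin(2 * np.pi * 120 * t)

# 1) Dip the kick's fundamental (assumed 60 Hz here) on the bass track
b, a = iirnotch(60, Q=8, fs=fs)
ducked = lfilter(b, a, bass)

# 2) Saturate so new harmonics psycho-acoustically imply the low end
saturated = np.tanh(2.5 * ducked)

print("60 Hz before/after: ", tone_level(bass, 60), tone_level(saturated, 60))
print("360 Hz before/after:", tone_level(bass, 360), tone_level(saturated, 360))
```

The notch clears out the bass's 60Hz content where the kick lives, while the saturation generates brand-new harmonics above it - those harmonics are what let our ears "fill in" the missing fundamental.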
If you want an instrument to have a lo-fi quality, or maybe to sound like classic equipment, all you need to do is isolate the mid frequencies by attenuating the lows and highs. For example, the transducer in an old telephone can't reproduce high or low frequencies, so let's cut both.
This same concept can be applied to emulate the sound of any old or lo-fi equipment. The reason being, what makes it sound lo-fi is an inability to capture the full frequency range, for one reason or another.
Let's listen to this filter, with a little added distortion, and see if it's reminiscent of an older telephone.
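As a sketch of the telephone effect, here's one way to do it in Python: a band-pass covering roughly the classic 300Hz to 3.4kHz telephone band, followed by a gentle tanh clip for transducer grit. The exact corners, filter order, and drive amount are assumptions to taste:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000

def telephone(x):
    """Keep roughly the 300 Hz - 3.4 kHz telephone band, then soft-clip."""
    sos = butter(4, [300, 3400], btype="bandpass", fs=fs, output="sos")
    return np.tanh(2 * sosfilt(sos, x))

rng = np.random.default_rng(0)  # white-noise stand-in for program material
processed = telephone(rng.standard_normal(4 * fs))

spec = np.abs(np.fft.rfft(processed))
f = np.fft.rfftfreq(len(processed), 1 / fs)
for lo, hi in [(20, 100), (500, 3000), (8000, 20000)]:
    print(f"{lo}-{hi} Hz average level: {spec[(f >= lo) & (f < hi)].mean():.2f}")
```

The mids survive while the lows and highs fall away, which is most of the "old telephone" illusion; the clipping just roughs it up a little further.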
Distortion creates harmonics, which are frequencies generated above the fundamental - sometimes these are called overtones. When we use distortion to add these harmonics or overtones in, we’re amplifying the frequencies at which these harmonics are located.
So if the fundamental is 100Hz, a second-order harmonic would be at 200Hz. As a result, 200Hz would be amplified by this harmonic.
Let’s take a listen to second-order harmonic distortion and notice how it amplifies frequencies above the fundamental.
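We can verify this with a few lines of Python: feed a 100Hz sine through a simple asymmetric waveshaper (the squared term is what produces even-order harmonics - a stand-in for real saturation, not any particular plugin) and inspect the spectrum:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
fundamental = np.sin(2 * np.pi * 100 * t)  # 100 Hz fundamental

# Asymmetric waveshaper: the x**2 term creates even-order harmonics
driven = fundamental + 0.3 * fundamental ** 2

spectrum = np.abs(np.fft.rfft(driven)) / (len(t) / 2)
print(f"100 Hz: {spectrum[100]:.3f}")  # fundamental, 1.000
print(f"200 Hz: {spectrum[200]:.3f}")  # new 2nd harmonic, 0.150
```

Sure enough, a brand-new component appears at exactly 200Hz, one octave above the fundamental - that's the amplification the distortion contributes at the harmonic's frequency.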
Just like how equalization can cause or alleviate masking, compression can reveal or cover up certain frequencies. For example, if I compress a snare that's particularly strong at 200Hz, the resulting attenuation will likely open up the mids to high mids, simply because 200Hz is now less prevalent.
The point is, all processing, be it compression, distortion, or even things like stereo imaging and panning, will in some way or another affect the frequency response.
Let's experiment with this concept by compressing 200Hz on a full mix and paying attention to the higher frequencies.
Last up, let's talk about how certain compression settings will dynamically amplify frequencies - when we set a quick attack and release, under 10ms and 50ms respectively, we create distortion whenever the compressor is triggered. If I set a <1ms attack and a <10ms release, I'd achieve significant dynamic distortion.
As we covered earlier, this distortion is really harmonics that amplify specific frequencies. So in short, with these settings I'm attenuating frequencies when compression occurs, but also adding frequencies due to the distortion.
Let’s take a listen to this and notice how the timbre of the compressed signal changes.
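To see this interaction in code, here's a deliberately simplified peak compressor in Python - a toy model, not how any particular plugin works - with attack and release set well inside one cycle of a 100Hz tone:

```python
import numpy as np

fs = 48000

def compress(x, threshold=0.3, ratio=4.0, attack_ms=0.5, release_ms=5.0):
    """Toy peak compressor. With attack/release shorter than one cycle
    of the input, the gain rides the waveform itself and distorts it."""
    a_att = np.exp(-1 / (fs * attack_ms / 1000))
    a_rel = np.exp(-1 / (fs * release_ms / 1000))
    env, out = 0.0, np.empty_like(x)
    for i, s in enumerate(x):
        coeff = a_att if abs(s) > env else a_rel  # envelope follower
        env = coeff * env + (1 - coeff) * abs(s)
        gain = (env / threshold) ** (1 / ratio - 1) if env > threshold else 1.0
        out[i] = s * gain
    return out

t = np.arange(fs) / fs
x = 0.9 * np.sin(2 * np.pi * 100 * t)
y = compress(x)

spectrum = np.abs(np.fft.rfft(y)) / (len(t) / 2)
print(f"100 Hz fundamental:    {spectrum[100]:.3f}")  # attenuated below 0.9
print(f"300 Hz (3rd harmonic): {spectrum[300]:.3f}")  # absent in the input
```

Because the gain modulates within each cycle, the output picks up a third harmonic at 300Hz that wasn't in the input, while the 100Hz fundamental is turned down - the attenuate-and-add behavior in one picture.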