Once You Know This, Mixing Music is So Much Easier

A More Logical Way to Divide the Frequency Response

The most popular way to conceptualize a frequency response is as the sub, low, low mid, mid, high mid, high, and air ranges.

But these ranges are too broad to accurately pinpoint frequencies.

As a result, some engineers divide these ranges even further in the hopes of making the idea more useful, but that only makes it more convoluted.

And to make everything more frustrating, subjective terms like warm, muddy, thin, clean, boomy, etc., are used as descriptors for each range.

So, do me a favor and put that idea aside for now, so we can start from scratch.

The Frequency Response of an Individual Instrument or Vocal Can Be Divided into 4 Parts.

1. The Fundamental Frequency
2. Harmonious Overtones
3. Disharmonious Overtones
4. Unmusical/Unrelated

So, let’s look at an example.

Say I pluck a bass guitar, and I play the note A1.

The fundamental, the foundation of the note A1, is 55Hz. On an analyzer, you'll see a high-amplitude signal at 55Hz, and it's the highest-amplitude frequency in the overall frequency response.

The fundamental frequency is just that - 1 frequency; it can, of course, have some modulation above and below, but the fundamental is centered on a single frequency.

So, this is part 1—it’s the simple, single-frequency building block of the overall frequency response.
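If you're ever unsure what a note's fundamental actually is, the equal-temperament relationship pins it down. Here's a minimal Python sketch, assuming the standard A4 = 440Hz tuning reference and MIDI note numbering (the helper name is just mine for illustration):

    def note_to_hz(midi_note):
        # Equal temperament: each semitone is a factor of 2^(1/12),
        # referenced to A4 (MIDI note 69) at 440Hz
        return 440.0 * 2 ** ((midi_note - 69) / 12)

    print(note_to_hz(33))  # A1 -> 55.0Hz
    print(note_to_hz(45))  # A2 -> 110.0Hz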

Moving on to part 2 - we have all harmonious overtones.

All harmonious overtones are created from the fundamental frequency. What does this mean?

It means the instrument also produces frequencies at whole-number multiples of the fundamental - these are called harmonics.

The 1st-order harmonic is the fundamental - in our example, 55Hz. The 2nd-order harmonic is 110Hz, or 55Hz times 2.

The 3rd-order harmonic is 165Hz, or 55Hz times 3.

The amplitude of harmonics generally decreases the further away they are from the fundamental; just as importantly, all of these harmonics will sound musical because they are direct multiples of the fundamental.

For example, the 2nd-order harmonic in this example is 110Hz, or A2, which is 1 octave higher than A1.
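To make the multiples concrete, here's a quick sketch that lists the first few harmonics of our A1 - the numbers line up with the 110Hz and 165Hz figures above (stopping at 5 harmonics is an arbitrary choice on my part):

    fundamental = 55.0  # A1

    # By this numbering, the 1st-order harmonic is the fundamental itself,
    # and each higher order is a whole-number multiple of it.
    for order in range(1, 6):
        print(f"{order}: {fundamental * order:.1f}Hz")

    # 1: 55.0Hz   2: 110.0Hz (A2, one octave up)   3: 165.0Hz
    # 4: 220.0Hz  5: 275.0Hz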

Part 3 covers disharmonious overtones.

As the name suggests, this includes all overtones that are not multiples of the fundamental frequency.

Sticking with the example of our plucked bass note, this mainly includes the percussive snap of the string. It could also include the scrape of a finger on the fretboard. Furthermore, the bass guitar's body could resonate at a frequency other than A1 or a multiple of A1.

So, it’s every part of the instrument’s sound that still relates to the performance but isn’t a multiple of the fundamental frequency.

And lastly, Part 4 is the unmusical and unrelated.

This includes rumble, which could be caused if our bass guitar were accidentally hit against something. Or hum, which could come from a ground loop and typically sits at the mains frequency (50 or 60Hz) and its harmonics.

You can’t always separate these unmusical and unrelated elements from a performance, but some useful changes can be made, which we’ll cover in a moment.

Already, we can see how thinking of the frequency response in this way dramatically narrows the potential frequencies we need to affect.

If the bass guitar plays the notes A1 and E2, I know that the fundamental frequencies are 55Hz and roughly 82.4Hz.

Instead of a low range that spans hundreds of frequencies, I now have 2 exact frequencies that comprise the bass guitar’s lows.

Furthermore, I know the exact frequencies of high-amplitude harmonics, like 110Hz and 165Hz for the A note, or roughly 165Hz and 247Hz for the E note.

Meaning I know exactly where to center a bell filter if I want to affect that area. Or, I know the exact range of frequencies to affect if I want to use fewer filters.
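As a rough sketch of that workflow - tabulating exact EQ targets from the notes being played - something like this works, assuming the note_to_hz helper from earlier and my own choice to stop at the 3rd-order harmonic:

    def note_to_hz(midi_note):
        return 440.0 * 2 ** ((midi_note - 69) / 12)

    # The two bass notes from the example: A1 and the E a fifth above it
    played = {"A1": 33, "E2": 40}

    for name, midi in played.items():
        fundamental = note_to_hz(midi)
        targets = [round(fundamental * order, 1) for order in (1, 2, 3)]
        print(name, targets)

    # A1 [55.0, 110.0, 165.0]
    # E2 [82.4, 164.8, 247.2]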

Watch the video to learn more >

Let’s See this Idea in Practice.

Let's use a more complex signal, like a vocal in which multiple notes are sung.

If I were using the more familiar and common idea of frequency ranges, I’d introduce an HP filter and set it somewhere in the low range without much of an idea as to why.

But now that I know how the fundamental is the building block for the note, I can set the HP filter just below the lowest-frequency fundamental.

As the singer transitions notes, we can observe how the fundamental transitions in tandem.

By determining the lowest note sung, I can find the lowest frequency fundamental - meaning I can be confident that my HP filter is attenuating only the unmusical and unrelated information that exists below the musical performance.
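If you wanted to automate that check, here's a hedged sketch using SciPy - the 0.9 safety factor and the gentle 2nd-order slope are my own assumptions, not a rule:

    from scipy.signal import butter, sosfilt

    def highpass_below_fundamental(audio, sample_rate, lowest_fundamental_hz):
        # Place the cutoff a little below the lowest sung fundamental so the
        # note itself passes untouched and only sub-fundamental content is attenuated
        cutoff_hz = 0.9 * lowest_fundamental_hz
        sos = butter(2, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
        return sosfilt(sos, audio)

    # e.g. if the lowest note sung is A2 (110Hz), the cutoff lands around 99Hz:
    # cleaned = highpass_below_fundamental(vocal, 48000, 110.0)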

If the lows sound too high in amplitude, it’s not a predetermined frequency range but the fundamental and the 2nd and/or 3rd-order harmonics that are causing this issue.

So now I know exactly where to cut with bell filters, or I can create a higher bandwidth filter to address all of the 2nd and 3rd-order harmonics of the collective performance.

Furthermore, if I only notice this issue when a particular note is sung, I can center a bell on the exact harmonic that's causing the issue. For example, if the lows are too high in amplitude whenever the note A2 is sung, I know the problem frequencies are 110Hz, 220Hz, and perhaps 330Hz.

If I were trying to fix this issue using pre-determined EQ ranges like what’s typically taught, I’d probably resort to using a dynamic filter to attenuate only when the range hits a particular amplitude.

But now, I know the exact frequencies that need to be attenuated, so I just need a couple of subtle bell filters.
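For anyone curious what one of those subtle cuts looks like under the hood, here's a minimal sketch of a peaking (bell) biquad based on the widely used Audio EQ Cookbook formulas - the -2dB depth and Q of 4 are placeholder values, not recommendations:

    import math
    from scipy.signal import lfilter

    def bell_coeffs(center_hz, gain_db, q, sample_rate):
        # Peaking-EQ biquad; a negative gain_db gives a cut at center_hz
        a_lin = 10 ** (gain_db / 40)
        w0 = 2 * math.pi * center_hz / sample_rate
        alpha = math.sin(w0) / (2 * q)
        b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
        a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
        return [x / a[0] for x in b], [x / a[0] for x in a]

    # Subtle cuts on the 2nd- and 3rd-order harmonics of A2:
    # for f in (220.0, 330.0):
    #     b, a = bell_coeffs(f, gain_db=-2.0, q=4.0, sample_rate=48000)
    #     vocal = lfilter(b, a, vocal)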

Or, say I’m having trouble discerning what the vocalist is saying - I could find a multiple of the fundamental or fundamentals around the vocal’s 3rd formant and boost to help with intelligibility. I could blindly boost a high-mid range and hope it works, or I can pinpoint the exact frequency that needs to be addressed and center my bell filter on it.

If the vocal sounds nasal, I know this is caused by a higher-order harmonic mixed with nasal cavity resonance. So again, I can pinpoint the exact offending frequencies by using a multiple of the relevant fundamental frequency.

Most instruments are made up primarily of parts 1 and 2 - the fundamentals and harmonics of the performance.

Basically, any instrument that contains a resonant quality, like guitar, vocals, kick drums, snares, woodwinds, string instruments, pianos, and so on, allows you to use the fundamental frequency as a way to find specific higher frequencies that need to be amplified or attenuated.

When using EQ, the main way to address part 4, the unmusical and unrelated, is with an HP filter. That said, more specific processors are needed to remove hum, buzz, etc., and ideally, this is minimized during tracking.
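As one example of such a processor, here's a hedged sketch of mains-hum removal with notch filters - the 60Hz mains frequency, the number of harmonics, and the Q value are all assumptions you'd adjust to the actual recording:

    from scipy.signal import iirnotch, filtfilt

    def remove_hum(audio, sample_rate, mains_hz=60.0, num_harmonics=3, q=30.0):
        # Ground-loop hum sits at the mains frequency and its low-order
        # harmonics, so notch each of them in turn
        for order in range(1, num_harmonics + 1):
            b, a = iirnotch(mains_hz * order, q, fs=sample_rate)
            audio = filtfilt(b, a, audio)
        return audio

    # cleaned = remove_hum(bass_di_track, 48000)  # use 50.0 in 50Hz regions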

This just leaves us with disharmonious overtones that don’t adhere to a strict structure. For example, vocal sibilance, the snap of a snare or percussive instrument, and so on.

So, instead of having 6 - 12 ranges that might be relevant or might not, we have a system for addressing the majority of exact frequencies and exact frequency ranges. The only remaining variable that may require more trial and error is disharmonious overtones.

Maybe this seems complicated, but thinking this way and getting accustomed to it has cut my mix time in half.

And this video doesn't even address how knowing this helps with compression and saturation that generates harmonics, let alone arrangement when recording instruments.

The more you move away from the antiquated and simplified idea of “this range is responsible for this” and begin understanding the frequency structure of an instrument or vocal, the easier mixing becomes.

Watch the video to learn more >