We don’t think of EQs as altering the dynamic range - but they do in a few ways. The first way is easiest to understand with an example.
Say I’m EQing a kick. Its overall dynamic range runs from -72dB to -8dB, with the fundamental causing the peak. If I amplify the fundamental with a bell by 3dB, the peak should now be about -5dB.
The noise floor of the affected region will also be amplified by roughly 3dB, with some variation depending on the Q value, the bell's shape, etc.
The noise floor elsewhere - that is, across most of the kick - won’t be altered. So before, our range was -72dB to -8dB, or 64dB in total; now it’s about -71dB to -5dB, or 66dB in total, since only part of the noise floor was amplified.
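To see why the floor rises by less than the full 3dB, remember that decibels combine as powers, not linearly. A minimal Python sketch (the 26% fraction below is a hypothetical figure chosen to match the numbers above, not a measurement):

```python
import math

# Ranges from the example above: dynamic range is peak minus floor.
print(-8 - (-72))   # 64 dB before
print(-5 - (-71))   # 66 dB after

# Why the floor rose only ~1 dB: decibels add as powers. If a fraction f of the
# total noise power sits inside the boosted region, the combined floor rises by:
def floor_rise_db(f, boost_db=3.0):
    return 10 * math.log10(1 + f * (10 ** (boost_db / 10) - 1))

print(round(floor_rise_db(0.26), 2))  # ~1 dB when about a quarter of the noise power is boosted
print(round(floor_rise_db(1.0), 2))   # the full 3 dB only if ALL of it is boosted
```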
This is all interesting and good to know, but admittedly, there is a much more hidden, rarely discussed way in which EQs mess with dynamics and fail to equalize the way we think they do.
Right now, we’re looking at a sine wave with a variable amplitude. It’s quiet, then loud, then quiet again. When we introduce a processor, we can observe any alterations to its level.
Let’s introduce a high-pass filter at 40Hz with a slope of 24dB/octave. Note that the sine wave used by Plugindoctor currently has a frequency of 2kHz, so this filter should not impact the sine wave - but look at what happens.
It’s subtle, but introducing the filter causes a modulating spike when the sine wave’s amplitude increases, and an equal and opposite modulation when the signal’s amplitude decreases.
If we increase the filter’s slope, the alteration becomes more aggressive. As I raise the cutoff frequency closer and closer to the sine wave’s 2kHz frequency, the distortion and the resulting alteration to the dynamics become more significant, with the biggest change occurring around 1.8kHz.
Additionally, notice that it takes roughly 20ms before the amplitude settles back to the intended value - definitely long enough to be noticeable.
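A rough way to see that 20ms timescale in code (a sketch, not a recreation of Plugindoctor's test): the carrier sits far above the cutoff, but the amplitude change itself is a low-frequency event. Feeding just the envelope step into the same kind of minimum-phase filter shows the filter reacting to it, then taking tens of milliseconds to settle. I'm assuming a SciPy Butterworth design as a stand-in for whatever filter your EQ actually uses:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48_000
n = fs                                           # 1 second of samples
env = np.where(np.arange(n) < n // 2, 0.1, 1.0)  # quiet -> loud step at 0.5 s

# 40 Hz high-pass, 4th order (24 dB/oct), minimum phase
sos = butter(4, 40, btype="highpass", fs=fs, output="sos")
y = sosfilt(sos, env)

k = n // 2
print(y[k])                                      # the step passes almost at full height...
print(np.max(np.abs(y[k + int(0.060 * fs):])))   # ...then dies away within tens of ms
```

In other words, the filter "sees" the amplitude change even though the tone itself is nowhere near the cutoff, and its reaction rings out on a timescale set by the cutoff frequency.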
But, as I’ve come to expect, this issue is even more pronounced at low frequencies.
If I change the sine wave’s frequency to 88Hz, then with the cutoff set to around 60Hz, we’ll observe alterations to the amplitude just like before; however, instead of taking 20ms to return to the accurate amplitude, it takes close to 0.5s.
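The settling time scales with both the cutoff frequency and the steepness of the slope. A small sketch comparing how long two hypothetical high-pass designs keep ringing after an impulse (again using Butterworth filters as stand-ins; the exact times depend on the particular filter design):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48_000

def ring_ms(order, fc, thresh=1e-3):
    """Milliseconds until the high-pass filter's impulse response stays below
    `thresh` of its peak - a proxy for how long it keeps modulating the signal."""
    sos = butter(order, fc, btype="highpass", fs=fs, output="sos")
    imp = np.zeros(fs)
    imp[0] = 1.0
    h = np.abs(sosfilt(sos, imp))
    last = np.nonzero(h > thresh * h.max())[0].max()
    return 1000 * last / fs

print(ring_ms(4, 40))    # gentler slope: rings for tens of milliseconds
print(ring_ms(16, 60))   # much steeper slope: rings considerably longer
```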
And watch what happens as the cutoff approaches the sine wave’s frequency.
Again, with the cutoff at 80Hz - still below the sine wave’s 88Hz - we’d expect to see no alteration to its dynamics, but the changes are significant and only get weirder.
With the cutoff at 90Hz, notice this gradual amplification from the quiet section to the loud. Additionally, notice this massive amplitude change when transitioning from the loud section to the quiet.
What do you think will happen if I move the high-pass filter way past the sine wave’s frequency? If you think there should be no signal left, I thought the exact same thing - it makes sense.
With the cutoff at 1kHz, the 88Hz tone is still present and modulating whenever the signal’s amplitude would have transitioned from quiet to loud. And it’s not until the cutoff approaches 23kHz that the signal is finally fully attenuated. If I asked you, 'How do I get rid of signal around 90Hz?', I can’t imagine your answer, or anyone’s, would be to set an HP filter around 23kHz.
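This effect is reproducible outside of Plugindoctor. In the sketch below (a Butterworth design as a stand-in, with the amplitude step deliberately placed near a carrier peak so the jump is large), a steep 1kHz high-pass annihilates the steady 88Hz tone, yet the amplitude jump sails through at close to full level:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48_000
n = fs
t = np.arange(n) / fs
# Step the 88 Hz sine's amplitude near a carrier peak so the jump is large.
k = round(fs * 44.25 / 88)                  # ~0.503 s, i.e. 44.25 cycles of 88 Hz
env = np.where(np.arange(n) < k, 0.1, 1.0)
x = env * np.sin(2 * np.pi * 88 * t)

# Steep (96 dB/oct) high-pass a full decade ABOVE the 88 Hz tone
sos = butter(16, 1000, btype="highpass", fs=fs, output="sos")
y = sosfilt(sos, x)

steady = y[int(0.8 * fs):int(0.9 * fs)]
print(np.sqrt(np.mean(steady ** 2)))        # the steady tone: essentially gone
print(np.max(np.abs(y[k:k + 100])))         # but the amplitude jump passes right through
```

The sudden amplitude change contains broadband energy well above the cutoff, so the filter lets it through even while it removes the sustained tone entirely.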
So, is this fixed by using linear-phase filters? No, not really - the amplitude changes persist whether the cutoff is set below the signal’s frequency or above it. In fact, in addition to post-ringing, we now have pre-ringing.
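Pre-ringing falls straight out of the math: a linear-phase filter's impulse response is symmetric in time, so once its delay is compensated, half of its response lands before the event. A minimal illustration with a hypothetical FIR high-pass (an isolated click standing in for a sudden amplitude change):

```python
import numpy as np
from scipy.signal import firwin

fs = 48_000
numtaps = 4097                                     # odd length, symmetric taps -> linear phase
h = firwin(numtaps, 1000, pass_zero=False, fs=fs)  # FIR high-pass at 1 kHz

# An isolated click at 0.5 s stands in for a sudden amplitude change.
x = np.zeros(fs)
k = fs // 2
x[k] = 1.0
y = np.convolve(x, h)[numtaps // 2 : numtaps // 2 + fs]  # compensate the group delay

pre = np.max(np.abs(y[:k - 10]))    # energy BEFORE the click even happens: pre-ringing
post = np.max(np.abs(y[k + 10:]))   # mirrored energy after it: post-ringing
print(pre, post)
```

Because the taps are mirror-symmetric, the pre- and post-ringing are mirror images of each other - the filter smears the event both backward and forward in time.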
Well, I’ve been using a really steep slope for a while; what if we switch it to 6dB per octave?
It’s better, but we still have weird modulation to the signal’s amplitude occurring during any transition of the original sine wave’s amplitude. And to cover all bases, this happens with bell filters, notch filters, obviously low pass filters, and more.
Maybe this all seems purely technical, but this issue has massive implications for how something sounds.
To test this, I took a track and attempted to attenuate everything from about 70Hz down with high-pass filters. Using a null test between the original and the processed signal, I should be able to hear the difference between the two.
If the filter is incredibly accurate and the processing does not make any other unintended alterations to the signal, then I should only hear 70Hz and below.
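The same null test is easy to sketch numerically. With a two-tone test signal (50Hz plus 1kHz, hypothetical stand-ins for the track's low and mid content), a minimum-phase 70Hz high-pass leaves a lot of 1kHz in the null because of its phase rotation, while a delay-compensated linear-phase FIR nulls far more cleanly:

```python
import numpy as np
from scipy.signal import butter, sosfilt, firwin

fs = 48_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 1000 * t)  # low tone + mid tone

def band_rms(sig, f0, width=20):
    """RMS of `sig` within +/- width Hz of f0, measured via the FFT."""
    spec = np.fft.rfft(sig) / len(sig)
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    band = (freqs > f0 - width) & (freqs < f0 + width)
    return np.sqrt(2 * np.sum(np.abs(spec[band]) ** 2))

# Null the original against a minimum-phase 70 Hz high-pass...
sos = butter(4, 70, btype="highpass", fs=fs, output="sos")
resid_min = x - sosfilt(sos, x)

# ...and against a delay-compensated linear-phase FIR high-pass.
taps = firwin(8191, 70, pass_zero=False, fs=fs)
resid_lin = x - np.convolve(x, taps)[8191 // 2 : 8191 // 2 + fs]

mid = slice(fs // 4, 3 * fs // 4)        # skip edge transients at both ends
print(band_rms(resid_min[mid], 1000))    # minimum phase leaks plenty of 1 kHz into the null
print(band_rms(resid_lin[mid], 1000))    # linear phase nulls far more cleanly
```

Ideally both residuals would contain only the 50Hz tone; the 1kHz energy left over in the minimum-phase null is purely a by-product of the filter's phase rotation.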
So, let’s listen to the differences between the original signal and the processed one when I use a minimum phase HP filter, a linear phase HP filter, and, lastly, the cleanest method I’ve found, which I’ll show you how to do in a moment.
Watch the video to learn more >
The minimum phase filter was not accurate in the slightest. When we look at the waveforms, note that very little phase rotation occurs at lower amplitudes, but once the amplitude increases, aggressive changes occur, just like what we observed with the sine wave.
This continues throughout the song. The polarity is completely inverted in spots; the waveforms are shaped and seemingly have different amplitudes and behaviors. All in all, this simple filter completely alters the signal.
Imagine the implications of this when processing multiple instruments with various filters, differing dynamics, etc. It appears that seemingly simple EQ filters have the power to alter a lot more than we think they do.
Meanwhile, the linear phase filter performed significantly better. The waveform doesn’t give us any obvious indications of being altered in any way we didn’t intend.
However, if you listened very carefully during the demo, you could hear quiet, high-frequency information - indicating that the filter is causing some unexpected changes to the signal.
The most accurate, though, is this next method.
The RX platform isn’t supposed to be an EQ - it’s meant for surgical changes to audio, but its design offers a unique opportunity when mixing.
With it, it’s possible to highlight various aspects of the track over time and affect them.
I cannot say this for certain, but my best guess is that when we process a track within this platform, we’re not running it through filters like an EQ does - we’re editing the underlying audio data directly, via RX’s time-frequency (spectrogram) representation of the signal.
So, if I highlight or select all of the information below 70Hz and use the Gain module to attenuate the highlighted region, the platform deletes the information associated with that frequency range.
By deleting information instead of filtering, the processed signal is not subjected to whatever unintended changes a filter may cause.
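RX's internals aren't public, so take this only as an illustration of the general idea: select a time-frequency region and zero it, rather than run the signal through a filter. A crude numpy/scipy sketch of spectral-domain deletion (not iZotope's actual algorithm):

```python
import numpy as np
from scipy.signal import stft, istft

fs = 48_000
n = 2 * fs
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

# STFT, zero every bin below 70 Hz, resynthesize - "select and delete" rather than filter.
f, _, Z = stft(x, fs=fs, nperseg=4096)
Z[f < 70, :] = 0
_, y = istft(Z, fs=fs, nperseg=4096)
y = y[:n]

# Inspect the central second, away from edge effects.
seg = y[fs // 2 : 3 * fs // 2]
spec = np.abs(np.fft.rfft(seg)) / len(seg)   # bin spacing is 1 Hz, so index = frequency
print(spec[40])      # the 40 Hz tone: heavily attenuated
print(spec[3000])    # the 3 kHz tone: untouched
```

Even this crude version isn't perfect - window leakage smears a little of the low tone into neighboring bins - but nothing above the selection is phase-rotated or otherwise touched, which is the property that makes the null test above so clean.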
So, say I want to amplify 3kHz on the vocal by 2dB. With the vocal file in RX, I’d use the frequency select tool to highlight the region.
If I need to isolate a more specific range, I can zoom in using the vertical slider, scroll until I find the right range, highlight, and then use the gain module to affect the area.
I’ll show you how to do this quickly in a moment, but let’s listen to this method used on a vocal to cut frequencies below 80Hz, reduce a narrow band of nasally tones by 5dB, boost the range around 3kHz by 3dB, and add 3dB of air above 11kHz.
Watch the video to learn more >
In short, RX can be set as the default editor in your DAW. In Logic, I enabled the advanced settings, went to audio, then audio editor, and selected RX as the external editor.
Now, if I want to use it to process a clip, I’ll select the clip, hit shift+W, and RX pops up.
After I make the changes I want, I can select File, then Overwrite Original File, and close out of RX. Logic then replaces the original clip with the processed one.
If I decide I need to revert to the original, I can open RX again with Shift+W, click the initial state, overwrite the file, and close out, so this process isn’t destructive to your files in any way.
Keep in mind that the changes applied with this method will occur before any of the inserts used in the DAW.
For that reason, I think this method is best suited for aggressive filters, such as notch filters and HP and/or LP filters.
I’m going to perform a simple mix using this method for any aggressive filters I would normally have used. We’ll A/B it against the same mix done with conventional filters - let me know if one sounds better to you.