1. Know Everything about Your Plugins
2. Match the Vocals’ Loudness
3. Utilize Various Compression Types
4. Oversampling is Important
5. Don’t Ignore De-essing
6. Align Phase Before Processing
7. Experiment with M/S Routing
8. Know the Frequency Response of Your Monitoring
9. If Compressing, Try RMS Detection
This first one seems a bit obvious, but we often don’t know a lot of the changes our plugins are making to a signal - the more we know what they’re doing, the more control we have over our processing. For example, many emulators change the frequency response.
Even compressor plugins may alter the amplitude of certain frequencies. Knowing what's happening behind the scenes helps you avoid redundant or conflicting processing.
If you’re working on multiple tracks at once, like when mastering an EP or album, use clip gain to match their levels based on the vocals’ loudness. Even though this doesn’t match their LUFS, the vocal is often the forefront of the song and determines a listener’s perceived loudness.
So even if the surrounding instrumentation doesn’t match, or the songs vary in style - matching the level of the vocal before mastering is a great way to establish consistency.
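To make the idea concrete, here’s a minimal Python sketch of the underlying math. The function names are my own, and I’m using RMS as a simple stand-in for perceived loudness (a full loudness meter would use LUFS weighting); a DAW’s clip gain applies the same kind of dB offset.

```python
import math

def rms_db(samples):
    """Average (RMS) level of a block of samples, in dBFS."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_square)

def vocal_match_gain_db(reference_vocal, target_vocal):
    """Clip gain, in dB, to apply to the target track so its vocal
    sits at the same level as the reference track's vocal."""
    return rms_db(reference_vocal) - rms_db(target_vocal)

# Two hypothetical vocal snippets: same material, different levels.
reference = [0.50 * math.sin(2 * math.pi * k / 100) for k in range(1000)]
quieter = [0.25 * math.sin(2 * math.pi * k / 100) for k in range(1000)]

gain = vocal_match_gain_db(reference, quieter)
print(f"apply {gain:+.2f} dB of clip gain")  # about +6 dB (double the amplitude)
```

In practice you’d measure a vocal-forward section of each song rather than the whole file, then dial that offset in as clip gain before any mastering processing.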
Various compressor types have very different characteristics with which you can achieve different things. For example, a tube compressor has slower attack and release times, resulting in longer gain reduction and a subsequently smoother sound. So if I wanted my master to have a full sound, I could use a tube compressor for my parallel compression.
Or say I want a very clean sound - I could use a digital compressor like the Pro-C 2 and select the mastering option - this creates virtually no distortion so it’s great for a clean-sounding master.
I always talk about this topic, but I want to show it - first, I’m going to use a sine wave at 18 kHz. Then I’m going to reduce the sampling rate of my signal from 44.1 kHz, to 22.05 kHz, to 11.025 kHz, and so on using a down-sampler.
Notice that as I reduce the sampling rate, the sine wave is reflected down the frequency spectrum. When I lower the sampling rate, I lower the highest frequency a signal can occupy, causing reflections and harmonics - known as aliasing distortion.
So, in short, use a compressor with oversampling to avoid this when mastering.
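You can verify the folding arithmetic yourself. This sketch decimates an 18 kHz sine from 44.1 kHz to 22.05 kHz with no filtering (pure Python, no audio libraries; the frequencies are the ones from the example above):

```python
import math

fs = 44100      # original sample rate
f = 18000.0     # sine frequency from the example above
n_samples = 64

# Sample the 18 kHz sine at 44.1 kHz, then throw away every other
# sample (decimation with no low-pass filter). The effective sample
# rate is now 22.05 kHz, so Nyquist drops to 11.025 kHz -- below 18 kHz.
x = [math.sin(2 * math.pi * f * n / fs) for n in range(n_samples)]
decimated = x[::2]

fs_new = fs // 2       # 22050 Hz
alias_f = fs_new - f   # the tone folds down to 4050 Hz
alias = [-math.sin(2 * math.pi * alias_f * n / fs_new)
         for n in range(len(decimated))]

# Sample for sample, the decimated 18 kHz tone is indistinguishable
# from a phase-inverted 4050 Hz tone: that's aliasing.
max_err = max(abs(a - b) for a, b in zip(decimated, alias))
print(f"alias lands at {alias_f:.0f} Hz, max sample difference {max_err:.2e}")
```

This is the same mechanism by which nonlinear processing (saturation, clipping, compression) creates aliasing: harmonics generated above Nyquist fold back down, which is why oversampled plugins push Nyquist up before doing the nonlinear work.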
One of the bigger mistakes you can make when mastering is to skip de-essing - granted, de-essing is typically used on vocals, but it has a place when mastering. If the track has too much vocal sibilance, I’ll use a de-esser with oversampling to compress it.
Often, I’ll listen intently to the song to find specific points where de-essing is needed, then for those sections only, I’ll automate the de-esser in. This way de-essing only occurs when needed, and won’t affect unrelated sections.
Before you begin mastering a track, realign the phase of the signal using iZotope RX. Use the Phase module and select Suggest to measure the phase rotation of the audio file - then click Render to fix any misalignment; if we click Suggest again, we can see the new value.
This doesn’t change the amplitude of your track, nor does it alter anything about the signal from a sonic perspective, but it will help your metering more accurately measure signal peaks. As a result, your track will be less susceptible to clipping distortion.
Most engineers master a stereo file as a left and right signal, but try this if you want to experiment; I’ll duplicate the master, then insert the MSED plugin by Voxengo on both - mute the side on one and the mid on the other. Now my master is mid and side.
Together, these two signals contain exactly the same information as the original file, and they let me process my mid and side signals separately. This will give you a lot more control over your stereo image, and the overall timbre of your master.
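The routing trick above boils down to a few lines of arithmetic. This sketch uses the common convention mid = (L+R)/2, side = (L−R)/2 (plugins like MSED may scale the channels differently, so treat the exact factors as an assumption):

```python
def encode_ms(left, right):
    """Split an L/R stereo pair into mid (sum) and side (difference)."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def decode_ms(mid, side):
    """Rebuild L/R from mid and side -- a lossless round trip."""
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

# A few stereo samples: decode(encode(L, R)) returns the original pair.
left, right = [0.1, 0.5, -0.3], [0.2, -0.4, 0.3]
mid, side = encode_ms(left, right)
left2, right2 = decode_ms(mid, side)
```

Because the round trip is lossless, anything you do only to `mid` (say, EQ on the vocal and bass) or only to `side` (widening, high-shelf air) folds back into a normal stereo file when the two duplicates are summed.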
Most monitors aren’t exactly flat; for example, Beyerdynamic DT 770 Pro Headphones have some pretty drastic shifts in the high-frequency range, making it difficult to accurately determine how these frequencies will translate to other speakers. Although this can’t be fixed entirely, it definitely helps to keep this in mind.
If I’m using headphones that I know boost the high frequencies, I’ll have to compensate accordingly.
When compressing, we’ve gotten pretty used to top-down or peak-detected compression; however, RMS or root-mean-square detection is a great way to achieve more natural-sounding compression. Instead of detecting the signal from the peaks, it measures the average loudness and compresses when that level crosses the threshold.
Our ears do something very similar in that they compress loud sounds after they’ve been loud for an extended period of time - not exactly at the peak of the soundwave.
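As an illustration of the difference, here is a minimal RMS-detecting compressor in Python. There is no attack/release smoothing and the window length is an arbitrary choice, so this is a sketch of the detection idea, not a production design:

```python
import math

def rms_level_db(samples, n, window):
    """RMS level, in dB, of the `window` samples ending at index n."""
    block = samples[max(0, n - window + 1):n + 1]
    mean_square = sum(s * s for s in block) / len(block)
    return 10 * math.log10(max(mean_square, 1e-12))  # clamp avoids log(0)

def compress_rms(samples, threshold_db, ratio, window=64):
    """Reduce gain when the sliding RMS level crosses the threshold.
    A peak detector would react to every transient; the RMS window
    only responds once the signal has *stayed* loud for a while."""
    out = []
    for n, s in enumerate(samples):
        over = rms_level_db(samples, n, window) - threshold_db
        gain_db = -over * (1 - 1 / ratio) if over > 0 else 0.0
        out.append(s * 10 ** (gain_db / 20))
    return out

# A steady full-scale sine: its peaks hit 0 dBFS, but its RMS level is
# about -3 dBFS, so the detector reacts to the average, not the peaks.
tone = [math.sin(2 * math.pi * n / 16) for n in range(1000)]
squashed = compress_rms(tone, threshold_db=-10.0, ratio=4.0)
```

Notice that a short burst inside the window barely moves the RMS level, so brief transients pass through mostly untouched - which is exactly why RMS detection tends to sound more natural on program material.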