De-essers are the most convenient way to control sibilance, but there are other methods that sound better.
For example, multi-band compression allows for greater control over the attack and release of the attenuation.
Dynamic EQ allows for more precise attenuation.
Clip gain is better for clean reduction of sibilance, even if it’s less accurate.
But by far the best way to attenuate sibilance is with an FFT editor like iZotope RX.
It’s time-consuming, but with the vocal imported into the program, we can find the areas that contain the offending sibilance and highlight them.
Then we can reduce the amplitude of each region in a way that introduces fewer artifacts than EQ or compression.
Clip gain is still the cleanest; however, this method is the best combination of accuracy and clean processing.
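If you’re curious what that spectral edit actually does, here’s a minimal Python sketch of the idea using scipy’s STFT. The file name and the time/frequency bounds of the “s” sound are hypothetical placeholders - in a program like RX you’d select the region by eye on the spectrogram.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

# Hypothetical input; assumes a mono vocal WAV.
rate, vocal = wavfile.read("vocal.wav")
vocal = vocal.astype(np.float64)

# View the vocal as a time-frequency grid (a spectrogram).
f, t, Z = stft(vocal, fs=rate, nperseg=2048)

# Hypothetical selection: an "s" around 1.20-1.35 s, mostly 4-10 kHz.
region = np.ix_((f >= 4000) & (f <= 10000), (t >= 1.20) & (t <= 1.35))

# Turn just that rectangle down by about 6 dB.
Z[region] *= 10 ** (-6 / 20)

# Resynthesize the vocal with the sibilant region reduced.
_, processed = istft(Z, fs=rate, nperseg=2048)
```

Because only the highlighted rectangle is turned down, the rest of the performance passes through untouched - which is why this approach introduces fewer artifacts than a broadband de-esser clamping down on the whole signal.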
So you can certainly still use de-essers if you want - I do when I don’t have time to edit the vocal’s sibilance.
But just know that if you do have some extra time, there are better methods.
Let’s listen to the difference between a de-essed vocal, and the same vocal controlled with an FFT program.
Watch the video to learn more >
This certainly can be true, but the generalization gives mixing engineers the wrong idea.
Saturation can be used to create a more aggressive, forward vocal; however, it can also have the opposite effect.
When the soft clipping that most saturators introduce reshapes the transients, especially as the input level is raised, the result is similar to aggressive compression.
Transients are often what give a vocal its aggressive, percussive, cutting sound, so retaining them is important if that’s the goal.
Additionally, if the low frequencies trigger the saturator, then the harmonics will likely form in the low mids. If this region is amplified too greatly, it will mask the higher frequencies that are associated with clarity and intelligibility.
If you want to use a saturator for an aggressive, forward vocal, it’s best to use frequency-specific saturation and consider a saturation shape closer to hard clipping.
Squaring the wave is more audible and reads as aggressive, and it introduces higher-order harmonics that reinforce the regions associated with clarity.
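To make the shape argument concrete, here’s a small numpy sketch comparing soft clipping to hard clipping on a 100 Hz test tone, printing the upper harmonics relative to the fundamental. The drive amount and frequency are arbitrary; the point is that the harder, more square-like shape leaves much stronger higher-order harmonics.

```python
import numpy as np

rate = 48000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 100 * t)  # 100 Hz test tone, 1 second

soft = np.tanh(3 * tone)             # smooth soft clipping (typical saturator)
hard = np.clip(3 * tone, -1.0, 1.0)  # hard clipping, closer to squaring the wave

for name, sig in (("soft", soft), ("hard", hard)):
    spectrum = np.abs(np.fft.rfft(sig)) / len(sig)
    # Symmetric clipping of a sine produces odd harmonics: 300, 500, 700 Hz...
    rel = [20 * np.log10(spectrum[100 * k] / spectrum[100]) for k in (3, 5, 7)]
    print(name, [f"{level:+.1f} dB" for level in rel])
```

(In a real saturator you’d oversample to control aliasing, but the relative harmonic balance is the relevant part here.)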
Let’s take a listen to saturation that doesn’t result in a more forward vocal, and keep in mind that saturation doesn’t always have the same effect; it depends on which range is saturated and the waveshaping involved.
Watch the video to learn more >
A chorus effect does create a stereo signal by introducing variation between the left and right channels; however, plenty of other processing does the same.
In fact, any effect that introduces differing processing between the left and right channels will do this.
For example, if I place a stereo reverb on a mono vocal, reflections will differ between the left and right channels, creating a stereo signal with a side image.
The same could be said for delay and any other temporal effect.
If I equalize a vocal with a filter on the left but not the right, vice versa, or simply vary the filters, I’ll create info in the side image.
Even unlinked compression, in which the left and right channels are treated differently, will cause the variation needed to create a side image.
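Here’s a minimal numpy/scipy sketch of that idea: duplicate a mono signal into left and right, filter only the left channel, and measure what lands in the side signal. White noise stands in for the vocal, and the filter choice is arbitrary.

```python
import numpy as np
from scipy.signal import butter, lfilter

rate = 48000
mono = np.random.default_rng(0).standard_normal(rate)  # stand-in for a mono vocal

left, right = mono.copy(), mono.copy()

# EQ the left channel only: a gentle low-pass at 5 kHz (hypothetical choice).
b, a = butter(2, 5000, btype="low", fs=rate)
left = lfilter(b, a, left)

# Mid/side decomposition: any L/R difference lands in the side signal.
side = (left - right) / 2

print("side RMS, identical channels:", np.sqrt(np.mean(((mono - mono) / 2) ** 2)))
print("side RMS, left filtered     :", np.sqrt(np.mean(side ** 2)))
```

Any mismatch between the two channels - EQ, delay, compression - ends up in that left-minus-right term, and that difference is all a side image really is.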
So although chorusing is seen as the main way to make a vocal wider or create differing info in the left and right channels, just about any processor can do it.
Let’s take a listen to a mono vocal quickly becoming stereo if processing is added to one channel but not the other.
Watch the video to learn more >
A lot of engineers think you need to duck the instrumental in some way if you want the vocal to sit on top of the mix.
The idea is that the surrounding instrumentation is dipped every time the vocal is present.
Whether you do this with a compressor or something like Soothe 2, know that it isn’t needed at all.
All you need to do is make the lead vocal equal in loudness to the instrumental.
The majority of pop tracks create this equal relationship between the lead and all other instrumentation.
For what I work on, that always seems a little too loud, but I can’t argue with the fact that this is how the majority of finished productions sound.
So, next time your vocal is buried under other instrumentation, try sending all instrument tracks to a bus and measuring its loudness.
Then measure the loudness of the lead vocal with its effects included.
If the vocal is lower than the instrumental, simply turn it up.
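If you want to measure rather than eyeball a meter, here’s a minimal sketch using the soundfile and pyloudnorm packages (the latter implements the ITU-R BS.1770 integrated loudness that LUFS meters report). The file names are hypothetical bounces of the instrumental bus and the wet lead vocal.

```python
import soundfile as sf
import pyloudnorm as pyln

# Hypothetical bounces: the summed instrumental bus and the vocal with effects.
inst, rate = sf.read("instrumental_bus.wav")
vocal, _ = sf.read("lead_vocal_wet.wav")

meter = pyln.Meter(rate)  # ITU-R BS.1770 integrated loudness
inst_lufs = meter.integrated_loudness(inst)
vocal_lufs = meter.integrated_loudness(vocal)

print(f"instrumental: {inst_lufs:.1f} LUFS | vocal: {vocal_lufs:.1f} LUFS")
if vocal_lufs < inst_lufs:
    print(f"raise the vocal fader by about {inst_lufs - vocal_lufs:.1f} dB")
```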
Alternatively, just use your ears. If the vocal sounds too quiet, turn it up.
If it’s too quiet only on occasion, then you need to address the dynamics, either through compression or editing.
Let’s take a listen to a vocal set to the same loudness as the instrumental. Notice that no ducking is needed - just the right level.
Watch the video to learn more >
Pre-delay can be useful, but it’s not the best way to introduce reverb to a vocal while retaining clarity.
All it does is push the initial reflections back, but this doesn’t keep the reflections from interfering with subsequent peaks or vocal passages.
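That’s easy to see if you think of pre-delay as nothing more than a time shift. As a minimal sketch (the 80 ms figure is arbitrary), applying pre-delay to a reverb’s impulse response just pads it with silence:

```python
import numpy as np

def add_predelay(impulse_response, rate, predelay_ms=80.0):
    # Pre-delay simply shifts the entire reverb later in time;
    # every reflection still lands on whatever follows the first transient.
    pad = np.zeros(int(rate * predelay_ms / 1000.0))
    return np.concatenate([pad, impulse_response])
```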
That said, if vocal phrases are spaced out a good amount, then it will work. But if you have a vocal with a faster cadence, I’d recommend reverb ducking instead.
This is basically compression on the reverb, with the original vocal as the trigger.
Some plugins offer ducking as a built-in function - for example, this free emulation of the Bricasti M7 includes one.
However, if you use a different reverb and want to do this, it’s not too difficult.
Just send the vocal to an aux track and insert the reverb on that parallel aux channel. Then insert a compressor after the reverb and set the original vocal track as its external side-chain.
Ensure the vocal is triggering the compression, then dial in settings that keep the vocal clear while still letting the reflections come through the way you want.
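If you’d rather see the signal flow spelled out, here’s a minimal Python sketch of reverb ducking, assuming you’ve rendered the dry vocal and the 100%-wet reverb return as float arrays. The attack, release, threshold, and depth values are hypothetical starting points.

```python
import numpy as np

def envelope(x, rate, attack_ms=5.0, release_ms=150.0):
    """One-pole envelope follower over the rectified trigger signal."""
    att = np.exp(-1.0 / (rate * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (rate * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = att if s > level else rel
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level
    return env

def duck_reverb(reverb_return, dry_vocal, rate, threshold=0.05, depth_db=12.0):
    """Dip the reverb return whenever the dry vocal is above the threshold."""
    env = envelope(dry_vocal, rate)
    # A real compressor applies a ratio; a fixed dip keeps the sketch simple.
    gain = np.where(env > threshold, 10.0 ** (-depth_db / 20.0), 1.0)
    return reverb_return * gain
```

Because the dry vocal is the trigger, the reverb pulls back only while the vocal is actually sounding, then blooms back in the gaps - which is exactly what pre-delay alone can’t do on faster phrases.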
Let’s compare reverb with pre-delay to reverb ducking on the same vocal passage.
Notice how much clearer the ducked reverb is after the first transient or peak.