When mixing a song, start with routing: group similar instruments together, send each group to its own bus, and route those busses to your stereo output. Then you can attenuate frequencies, establish levels, compress, saturate and excite, process your busses, add temporal effects, and more.
I’ve covered this idea a couple of times, but it really does make a huge difference when mixing - first, group like instruments together, color code them, then change their output from Stereo Output to a Bus. All of my drums will go to one bus, all of the synths to one, etc.
So the signal is routed from each individual track, down through the channel fader, then these multiple signals go to the auxiliary track or bus, and then these multiple bus signals collect at the master output.
This gives us multiple points at which we can process signals. We can affect the individual instrument, or that instrument bus depending on what we’re trying to accomplish.
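Those two processing points show up clearly in a minimal sketch of this routing - the track and bus names here are hypothetical examples, not from any particular session:

```python
# Hypothetical track and bus names illustrating the routing described above:
# each track feeds an instrument bus, and every bus feeds the stereo output.
routing = {
    "Kick": "Drum Bus",
    "Snare": "Drum Bus",
    "Lead Synth": "Synth Bus",
    "Pad": "Synth Bus",
    "Vocal": "Vocal Bus",
}

def signal_path(track):
    """The points at which we can process this track's signal."""
    return [track, routing[track], "Stereo Output"]

print(signal_path("Snare"))  # ['Snare', 'Drum Bus', 'Stereo Output']
```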
With everything routed correctly, let’s start finding frequencies in each individual track that we don’t like - these might be plosives on the vocals or some scraping on a guitar - it really depends. Essentially, find any frequency you don’t want to amplify and attenuate it with an EQ.
High-pass filters work well for attenuating plosives and hum, bell filters for a moderately sized frequency range, and notch filters for very specific problem frequencies, like maybe click bleed.
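To make the high-pass idea concrete, here’s a toy first-order high-pass in plain Python - the 80 Hz cutoff and 48 kHz sample rate are assumed example values, and a real mixing EQ uses steeper, more sophisticated filters than this:

```python
import math

fs = 48000  # sample rate in Hz (assumed)

def one_pole_highpass(x, fc):
    """First-order high-pass (6 dB/octave) with cutoff fc in Hz."""
    a = 1.0 / (1.0 + 2.0 * math.pi * fc / fs)
    y, prev_x, prev_y = [], 0.0, 0.0
    for s in x:
        prev_y = a * (prev_y + s - prev_x)  # y[n] = a*(y[n-1] + x[n] - x[n-1])
        prev_x = s
        y.append(prev_y)
    return y

def gain_db(fc, freq):
    """Steady-state gain at a test frequency, in dB, measured on a sine."""
    x = [math.sin(2 * math.pi * freq * i / fs) for i in range(fs)]
    y = one_pole_highpass(x, fc)
    peak = max(abs(s) for s in y[fs // 2:])  # skip the settling transient
    return 20 * math.log10(peak)

print(round(gain_db(80, 30), 1))    # 30 Hz rumble: clearly attenuated
print(round(gain_db(80, 1000), 1))  # 1 kHz content: passes nearly unchanged
```

The same structure, with the cutoff tuned and the slope steepened, is what sits behind the "low cut" on a channel strip.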
When determining levels, and when mixing in general, a lot of engineers overuse the Solo function - which can cause big issues for a mix. In short, how an instrument sounds alone is rarely if ever how it will sound with other instruments due to phase relationships between the multiple signals.
With that said, use your faders to achieve a decent balance between your various instruments - this will be changed later but it’ll give you a decent idea of what’s important in the mix, and what is more a support instrument or signal.
Now that our levels are roughly balanced, we can hear if one instrument is getting buried in some sections of the song but cutting through in others - this is a good indication compression is needed. Vocals are the most common case, and they’re typically the first instrument that’ll need compression.
With that in mind, let’s quickly control the vocal - I’ll introduce roughly 4dB of attenuation - using a moderate knee and a 50ms release to ensure the vocal stays detailed. Granted, compression is a lot more complex than this, so look into some of our other videos for more details.
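For intuition, the static side of a compressor (threshold, ratio, soft knee) can be sketched as a gain curve - the threshold and ratio below are hypothetical values chosen so a loud vocal moment lands near the ~4dB of attenuation mentioned above; the time-domain side (attack, the 50ms release) isn’t modelled here:

```python
def gain_reduction_db(level_db, threshold_db=-18.0, ratio=4.0, knee_db=6.0):
    """Static compressor curve: dB of attenuation for a given input level.
    Threshold, ratio and knee are hypothetical example settings."""
    over = level_db - threshold_db
    if over <= -knee_db / 2:
        return 0.0                     # below the knee: untouched
    if over >= knee_db / 2:
        return over * (1 - 1 / ratio)  # above the knee: full 4:1 ratio
    # inside the soft knee: a quadratic blend between the two slopes
    x = over + knee_db / 2
    return (1 - 1 / ratio) * x * x / (2 * knee_db)

# A peak 6 dB over threshold is attenuated by 4.5 dB - close to the
# roughly 4 dB of gain reduction dialled in on the vocal.
print(gain_reduction_db(-12.0))  # 4.5
```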
With the frequencies balanced and our dynamics controlled, let’s start amplifying aspects of the mix with saturation, distortion, and some exciters - these all have harmonic generation in common. Depending on the plugin or hardware, harmonics of different orders will form across the low or high frequencies, amplifying aspects of the signal.
For example, an exciter creates high-order harmonics, making the high frequencies of the signal you introduce it on louder, in turn increasing clarity.
In this mix, I’ll use some tube emulation saturation to create a strong second-order harmonic, which is great for achieving a warm sound.
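Here’s a toy illustration of why that works - a squared term in the transfer curve creates a second-order harmonic, which we can verify by checking the spectrum of a saturated sine wave. The drive amount is arbitrary, and real tube emulations use far more involved curves than this:

```python
import cmath, math

def tube_saturate(x, drive=0.3):
    """Toy waveshaper: the s**2 term adds an even (second-order) harmonic,
    the kind associated with a 'warm' tube sound. drive is arbitrary."""
    return [s + drive * s * s for s in x]

n = 1024
x = [math.sin(2 * math.pi * 8 * i / n) for i in range(n)]  # 8 cycles of a sine
y = tube_saturate(x)

def bin_mag(sig, k):
    """Magnitude of DFT bin k, i.e. the level of one harmonic."""
    return abs(sum(s * cmath.exp(-2j * math.pi * k * i / n)
                   for i, s in enumerate(sig))) / n

print(round(bin_mag(x, 16), 3))  # dry signal: 0.0 - no second harmonic
print(round(bin_mag(y, 16), 3))  # saturated: 0.075 - second harmonic appears
```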
At this point, I’m happy with my dynamics and the frequency response of the individual instruments, so let’s make it all sound better by processing the busses. Since we’re affecting all instruments in a group at once, this will create a cohesive sound amongst the instrumentation.
For example, I’m going to add tape emulation to my drum bus - I’ll drive the input a little to achieve some compression and mild distortion. Also, this tape plugin affects the frequency response, so it will shape the sound of the drums as a whole as well.
I typically find I’ll also use a mid-side EQ to perform some stereo imaging and get better control of my bus’s frequencies.
Using the bus sends from your instrument busses (sorry, I know the naming makes this convoluted), we’ll set up a couple of reverbs. The first is a room reverb - this will emulate room reflections, be it a studio, practice room, etc.; the second will be an ambient reverb.
I’ll send each bus to both reverbs and blend in the signal using the channel faders. Then I’ll time each reverb to my BPM to make them sound more musical. Using the formula 60000/BPM, I find that a quarter note is 500ms.
So I’ll make my room reverb 500ms or half a second, and my ambient reverb 1 second.
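That timing calculation is easy to generalize. A 500ms quarter note implies the song is at 120 BPM - a value inferred from the numbers above, not stated explicitly:

```python
def note_ms(bpm, fraction=1/4):
    """Length of a note value in milliseconds; 60000/BPM is one quarter note."""
    return 60000 / bpm * fraction * 4

print(note_ms(120))        # 500.0 - quarter note, the room reverb time
print(note_ms(120, 1/2))   # 1000.0 - half note, the ambient reverb time
```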
We can use additional sends for delays and any other modulation effects we need. I’ll then reassess individual tracks and see if any - maybe the vocal - need some additional or longer reverb.
The effects we used most likely changed the volume of our various instruments, so unless we compensated for the change of each effect, we’ll need to rebalance our mix. The busses come in handy here, since if we want more or less of one group, we can easily accomplish this.
Additionally, we may find that some of our processing reveals other unwanted frequencies, or doesn’t bring out enough of others, so we can alter some settings at this point in the mix.
Lastly, to add some creativity to the mix, let’s introduce some automation to our effects - for example, let’s increase the wet/dry amount for one of our saturators in a particular section of the song on the drum bus. This will temporarily make the drums have more saturation.
This is a simpler example, since you can automate just about any function of any plugin you used - so if you wanted a bell filter on your EQ to sweep from low to high, you can do that.
Or if you want your reverb to become 20 seconds long before jarringly cutting down to 1 second, that’s also a possibility.