7 Tips for Better Mixes

Mix All Tracks at the Same Time

One mistake I see engineers make, and admittedly one I sometimes make myself, is mixing with an instrument or group soloed.

A mix is more about interaction than it is about individual elements. You could make the drums sound great on their own, but when they’re combined with the rest of the mix, those interactions will absolutely change how they sound.

So, if you want to make a great mix, you always need to mix an instrument, vocal, or instrument group within the context of everything else. Now, if you’re trying to hear something very specific, maybe there’s some unwanted resonance somewhere, then by all means, solo instruments until you find it.

But, again, if you have multiple parts that have to work together, then they need to be mixed together, not individually.

Let’s listen to a mix that already has some processing and notice that when I solo the drum bus, it sounds a good deal different than when I play everything together.

Watch the video to learn more >

Don’t be Afraid to Skip Around

As engineers, it’s easy to get caught up in strict, super-logical thinking. This makes sense because we’re dealing with multiple variables, all coming together simultaneously, so thinking about things logically is definitely helpful.

But flexibility is just as important, if not more so. For example, say I start a track’s chain with an EQ. By the end of the chain, I’m hearing a frequency that just isn’t working or is too prominent.

The worst thing I can do in this situation is be stubborn and think, ‘No, the EQ has already been set and established - let me add another processor or try to make it work by changing the latest insert instead.’

There’s nothing wrong with going back to that first insert and changing it.

The same could be said about jumping between instruments - you don’t need to finish an instrument group before working on another. Any processing you add to one instrument will interact with the processing on another - as the relationships between these instruments change, you need to be flexible enough to adjust previous forms of processing and levels to accommodate that change.

To illustrate this idea, let’s listen to an example in which I’ve purposefully made my first insert cause a problem. Then, I’ll fix the issue by making the needed changes instead of trying to avoid them or introduce more processing.

Watch the video to learn more >

The Best Way to Avoid Over-processing

Over-processing is definitely different from using a lot of processing. I used to think, ‘Well, if I want to avoid over-processing, I need to use less processing.’

But that only left my mixes sounding unfinished. Instead, the best way to avoid an over-processed sound is to ensure that processors are not conflicting with one another.

Let me give you an obvious example just to show what I’m talking about here.

Say I want to boost 2 kHz on the vocal by 3 dB. The right way to do this would be to insert an EQ and boost that range by 3 dB. A not-so-great way to do this would be to insert an EQ and boost 2 kHz by 8 dB, and then insert another EQ and attenuate the range by 5 dB.

The first way accomplishes the job with one processor; the second method needs two processors and redundant processing to accomplish the same task.
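To make the arithmetic explicit, here’s a minimal sketch in plain Python: stacked EQ gains at the same center frequency multiply in linear terms, which means they add in decibels, so the two-EQ route lands at the same +3 dB while running an extra stage. And the cancellation is only this clean if both bands share the same frequency and bandwidth - another reason the redundant version is riskier.

```python
import math

def db_to_linear(db):
    # Convert a decibel gain into a linear amplitude multiplier.
    return 10 ** (db / 20)

def linear_to_db(gain):
    # Convert a linear amplitude multiplier back into decibels.
    return 20 * math.log10(gain)

one_eq = db_to_linear(3)                      # single EQ: +3 dB at 2 kHz
two_eqs = db_to_linear(8) * db_to_linear(-5)  # +8 dB boost, then a -5 dB cut

print(round(linear_to_db(one_eq), 2))   # 3.0
print(round(linear_to_db(two_eqs), 2))  # 3.0 - same net gain, twice the processing
```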

Although this example is a really straightforward one, many processors and processing types interact with one another in unexpected ways.

Let’s look at a more complex example. Say I excite the vocal to get a bright sound. Now the sibilance is too harsh, so I have to de-ess heavily. I can’t get the de-esser to balance the sound without it becoming noticeable, so I add a resonance reducer. I don’t set the resonance reduction thoughtfully, so now my vocal doesn’t sound as full, and I reach for a saturator.

The saturator adds harmonics to the highs, and now the sibilance sounds harsh again. You get where I’m going with this - it’s easy to get caught in a vicious cycle of ‘I want this, but now that’s wrong; I fixed that, but now this is off,’ and so on.

All I had to do in this example was insert the de-esser before the exciter - the sibilance is controlled, and then the vocal is brightened. That’s a lot easier, less CPU-intensive, and sounds far less processed than using a bunch of processors in a roundabout way.
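To make the routing point concrete, here’s a rough Python sketch of the two chain orders. Both processors are deliberately crude stand-ins - the de-esser is a simple split-band gain dip and the exciter is just a high-passed tanh stage blended back in - and every band, threshold, and drive value is an assumption of mine, not taken from any particular plugin.

```python
import numpy as np
from scipy import signal

def deess(x, sr, lo=5000.0, hi=9000.0, threshold=0.05, cut=0.5):
    # Crude split-band de-esser: isolate the sibilance band, track its level,
    # and turn that band down only while it's loud.
    sos = signal.butter(2, [lo, hi], btype="bandpass", fs=sr, output="sos")
    band = signal.sosfilt(sos, x)
    env = signal.lfilter([0.001], [1, -0.999], np.abs(band))  # simple envelope follower
    gain = np.where(env > threshold, cut, 1.0)
    return (x - band) + gain * band

def excite(x, sr, cutoff=3000.0, drive=4.0, mix=0.2):
    # Crude parallel exciter: high-pass, saturate, blend back with the dry signal.
    sos = signal.butter(2, cutoff, btype="highpass", fs=sr, output="sos")
    return x + mix * np.tanh(drive * signal.sosfilt(sos, x)) / drive

# Example wiring (white noise standing in for a vocal buffer):
sr = 48000
vocal = np.random.randn(sr).astype(np.float32) * 0.1
good = excite(deess(vocal, sr), sr)  # sibilance controlled first, then brightened
bad = deess(excite(vocal, sr), sr)   # brightened sibilance, then a fight to tame it
```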

So, use as many processors as needed to achieve the sound you want, but make sure they’re ordered and set up so they’re not working against each other.

Let’s take a listen to an example of a vocal that’s over-processed due to improper routing and conflicting processing. Then, we’ll listen to the vocal with processing introduced thoughtfully.

Watch the video to learn more >

Exciters are Your Friend

Exciters don’t get the love they deserve, at least in my opinion. We often associate exciters with an artificially bright sound, but when they came out in the 70s, an era of production known for a natural and smooth sound, exciters were used on almost everything.

They work by first splitting the signal into the original and a parallel channel - the parallel channel runs through a high-pass filter, after which harmonic distortion is applied. Then, the affected signal and the original signal are combined.

Because the parallel path is high-passed, the lowest content reaching the saturation stage is already well up the frequency range - the harmonics generated from it land even higher, so the result is new high-frequency harmonics.
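Here’s that signal flow laid out step by step - a minimal sketch expanding on the crude excite() function from earlier. The filter order, cutoff, drive, and mix amounts are all assumptions of mine; real exciters differ in their filter slopes and saturation curves.

```python
import numpy as np
from scipy import signal

def excite(x, sr, cutoff_hz=3000.0, drive=4.0, mix=0.2):
    # 1. Parallel path: high-pass so only the upper range feeds the saturator.
    sos = signal.butter(2, cutoff_hz, btype="highpass", fs=sr, output="sos")
    highs = signal.sosfilt(sos, x)

    # 2. Harmonic distortion: a soft clipper (tanh here) generates harmonics
    #    above the already-high content it receives.
    saturated = np.tanh(drive * highs) / drive

    # 3. Recombine: the dry signal plus a modest amount of the excited path.
    return x + mix * saturated

# Quick check: excite a 4 kHz tone. It passes the high-pass filter, and the
# saturator adds new harmonics at 12 kHz, 20 kHz, and so on.
sr = 48000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 4000 * t)
bright = excite(tone, sr)
```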

These harmonics are really helpful for a few reasons - first, harmonics create a psychoacoustic effect that makes lower frequencies easier to hear, since our ears infer a fundamental from its harmonics. So, say the kick’s higher register is being excited; this means that when someone listens on a speaker that can’t reproduce the kick’s fundamental, they’ll still perceive the kick as being there.

Next, they add signal to an often sparse frequency range, making the highs complex and full. And since the added harmonics raise the level of specific ranges, the effect also works a bit like a high-shelf EQ boost.

Lastly, generating high-frequency harmonics improves the perceived signal-to-noise ratio by masking the noise. Noise and hiss usually sit in the high frequencies and are a huge pain to get rid of, but the added harmonics contain no noise of their own, so they shift the balance between noise and musical signal in favor of the music.

So, let’s listen to a mix with exciters disabled and then enabled, and notice how the presence range becomes clearer and the overall sound more balanced.

Watch the video to learn more >

Know What Should Stay Mono

Mono compatibility is important, but it shouldn’t be the primary focus of a mix. In short, there are elements that should stay primarily mono or in the mid-image; however, don’t be afraid to move elements around - either through hard panning, psychoacoustic delay effects, or binaural processing.

To keep things simple, three elements of a mix should almost always stay mono or centered: the kick, the bass, and the lead vocal. Everything else can be moved around the stereo image as needed.

For example, guitars can be panned left and right. BGVs can be delayed to move into the 180-degree field.

A supporting synth can be placed with binaural panning to make it sound like it’s behind the listener. All of these things will make these instruments less mono-compatible, but if the mix is listened to on stereo speakers, headphones, earbuds, car speakers, or even most TV speakers, it’ll sound fine.
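As one example of the psychoacoustic delay trick mentioned above, here’s a minimal sketch of a Haas-style widener in Python with numpy. The 18 ms delay and the channel choice are arbitrary starting points, not a rule, and the noise buffer is just a stand-in for a real background vocal.

```python
import numpy as np

def haas_widen(mono, sr, delay_ms=18.0, delayed_side="right"):
    # Delay one channel by a few tens of milliseconds: the un-delayed channel
    # is heard as the source direction, and the short offset widens the image.
    delay_samples = int(sr * delay_ms / 1000)
    padded_dry = np.concatenate([mono, np.zeros(delay_samples)])
    padded_wet = np.concatenate([np.zeros(delay_samples), mono])

    if delayed_side == "right":
        left, right = padded_dry, padded_wet   # source perceived toward the left
    else:
        left, right = padded_wet, padded_dry   # source perceived toward the right

    # Stereo buffer of shape (num_samples, 2). Summing it back to mono causes
    # comb filtering - the mono-compatibility trade-off discussed above.
    return np.stack([left, right], axis=1)

# Example: nudge a background vocal off-center (noise stands in for the BGV).
sr = 48000
bgv = np.random.randn(sr).astype(np.float32) * 0.1
wide_bgv = haas_widen(bgv, sr, delay_ms=18.0, delayed_side="right")
```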

As music playback is shifting to earbuds, more creative stereo imaging can be introduced. Just be sure that the super important instruments, like the lead vocal, kick, and bass, stay in the center. Aside from making the track more mono-compatible, having these elements centered keeps the mix driving.

Let’s listen to a mix in which mono compatibility was unnecessarily prioritized, and then, we’ll listen to one in which creative panning was used. Notice how the mix is augmented by taking advantage of the full 180-degree stereo field.

Watch the video to learn more >

Avoid Mix Bus Processing

I think the biggest mistake someone can make when mixing is to mix with processors on the stereo output or mix bus.

I understand the idea; people want to hear how their mix and the effects they’re using will sound when everything is turned up and compressed. But doing so gives you an entirely unrealistic depiction of how the mix actually sounds.

Now, some engineers like to mix and master at the same time, and that’s totally fine if that’s how you like to work - in that case, the processors on the mix bus can stay. If you’re sending the track to another engineer, though, they need to be taken off first.

So, I really recommend making a mix sound as great as you can without stereo bus processing.

Instrument bus processing is a completely different story - adding processors here is incredibly helpful and will give instrument groups a collective sound - but again, avoid processors on the mix bus.

Let’s take a listen to a mix that was done with a limiter on the output throughout the entire session. Notice that when it’s removed, the mix falls apart.

Watch the video to learn more >

Keep Emulations Era-Specific

Plugin emulations of gear are great - they give us access to processors that would either be difficult to route or too expensive to own outright.

But, with all of these emulations comes a strange mishmash of eras in production.

For example, say I add an 1176 to my kick, but then on the snare, I use an intelligent compressor from Ozone or something. On paper, this doesn’t seem like an issue, and you may disagree with me on this. That’s totally cool, but I’m convinced that to the listener, something subconsciously does not feel right.

The vast majority of listeners will not be able to tell you what compressor you’re using on a particular instrument - most won’t even be able to tell if compression is happening - but they will either enjoy the sound or they won’t.

This is why I make an attempt to keep my processing within a particular era of production. There’s some flexibility here, but I want to avoid using a classic-sounding processor with a much more modern and digital-sounding one.

The same goes for reverb and other temporal effects - I’m not going to use a classic-sounding plate reverb emulation and then a digital-sounding algorithmic one on another instrument.

Now, the one caveat to this is with a processor that’s incredibly transparent. For example, if I use a Weiss compressor from the 90s and an 1176 from the 70s, the former will sound like natural dynamic control, and the latter will sound distinctly compressed - but in this instance, there isn’t much conflict between the sounds.

So, let’s listen to a mix in which processors are kept within a certain time frame - and then we’ll listen to one in which an unlikely combination of processors is used. The latter won’t sound bad, but let me know if something feels off to you in the comments.

Watch the video to learn more >