Fixing Phase

Explaining Frequency, Phase, and Interference

Before we cover fixing phase issues, we need to understand phase itself, some of the terminology, and the general concepts. If you’re familiar with these ideas, feel free to skip ahead - there are chapter markers in the video - but if you need a refresher or this is all new to you, I’d recommend watching this chapter.

So, let’s start by looking at the waveform of a sine wave. A sine wave is just an elementary single-frequency signal - so it keeps things easy while we’re learning the concepts.

So, every waveform has a peak and a trough. Just think of a peak as having a positive value and a trough as having a negative value, like a + for the peak and a - for the trough.

Although we’re dealing with a digital sine wave right now, in the physical world, these peaks and troughs are created by compressions and rarefactions - compressions are when air molecules are pressed close together, represented here as the peak.

Rarefactions are when the molecules spread out, represented here as the trough. The concept is very similar to a wave in water - when the water’s molecules are close together, the wave peaks, and after the peak crashes, everything spreads out.

So, this is the general shape of a waveform. When a sound wave is of a higher frequency, we have more of these peaks and troughs per second.

For example, a 1Hz wave is one peak and one trough within a period of 1 second. Expanding on that idea, a 20Hz wave has 20 peaks and 20 troughs per second, and so on. A complete peak-trough cycle is also called an oscillation - so a 20Hz wave is 20 complete oscillations within 1 second.
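To make the frequency-oscillation relationship concrete, here’s a quick sketch in pure Python (the function names are just illustrative) that generates a sine wave and counts its cycles:

```python
import math

def sine_wave(freq_hz, duration_s=1.0, sample_rate=1000):
    """Generate a digital sine wave as a list of samples."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate) for t in range(n)]

def count_oscillations(samples):
    """Count complete peak-trough cycles via upward zero crossings."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a <= 0 < b)

wave = sine_wave(20)             # a 20Hz wave over 1 second
print(count_oscillations(wave))  # 20 complete oscillations
```

Swap in 100 for the frequency and you’d count 100 oscillations - frequency is simply cycles per second.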

Now that we have the general idea of a sound wave, let’s talk about how these waves interact.

So let’s say I have one 20Hz wave - then, I have another 20Hz wave playing at the same time.

If we observe the oscillations, we’ll notice that they align - so one track’s peaks match up with the other’s peaks, and troughs match with troughs.

If both signals were played simultaneously, we’d have constructive interference.

Constructive interference occurs when two or more signals add to one another. So look at the output - if I play one 20Hz wave by itself, the overall amplitude is lower than if I play both together.

Since they’re aligned, they add to one another, causing a higher amplitude signal when combined.

But let’s look at what happens when they aren’t aligned.

If I invert the phase of one of the signals, the peaks of one waveform line up with the troughs of the other. If we play both, we’ll notice that we have no signal.

This is because we have perfect destructive interference. The amplitude of the two waveforms, the frequency, and timing are all identical; however, the troughs cancel out the peaks, the peaks cancel out the troughs, and we’re left with complete phase cancellation.

This is also sometimes called a null signal or one that is completely nullified.
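Here’s a small sketch of both behaviors in pure Python (illustrative names, not any plugin’s code) - summing two identical 20Hz waves, then summing one against an inverted copy:

```python
import math

def sine(freq_hz, n=1000, sample_rate=1000, inverted=False):
    """A 20Hz-style test tone; inverted=True flips peaks and troughs."""
    sign = -1.0 if inverted else 1.0
    return [sign * math.sin(2 * math.pi * freq_hz * t / sample_rate) for t in range(n)]

def mix(a, b):
    """Sum two tracks sample by sample, as a DAW bus does."""
    return [x + y for x, y in zip(a, b)]

constructive = mix(sine(20), sine(20))   # aligned: peaks add to peaks
peak = max(constructive)                 # roughly double one wave's amplitude

destructive = mix(sine(20), sine(20, inverted=True))  # polarity flipped
null = max(abs(x) for x in destructive)  # complete cancellation: no signal left
```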

Let’s go back to having the two signals identical - but this time, I will shift the timing of the second waveform. So, in this example, some parts of the peaks and troughs are aligned, but some are not.

Where they’re aligned, we should get constructive interference - where they aren’t, we’ll notice destructive interference.

Again, let’s go back to having the two signals identical, but this time, I’m going to change the amplitude of one, making it, say, 6dB louder than the other.

Since they’re aligned, we’ll have constructive interference - but with a higher amplitude at our output than in the previous constructive interference example.

If I were to invert the phase of one track again, we’d notice that the signal is not completely null. Everything that matches is canceled or nullified. However, one signal is still 6dB louder than the other - so now we know that both timing and amplitude affect phase interference.
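We can sketch that amplitude mismatch too - here, one copy is boosted by 6dB (a linear gain of roughly 2) before the quieter copy is inverted and summed:

```python
import math

def db_to_gain(db):
    """Convert a decibel change to a linear gain factor."""
    return 10 ** (db / 20)

sr = 1000
quiet = [math.sin(2 * math.pi * 20 * t / sr) for t in range(sr)]
loud = [db_to_gain(6) * x for x in quiet]   # same wave, ~6dB louder

# Invert the quieter track and sum: timing matches, amplitude doesn't,
# so cancellation is only partial and a residual signal remains.
residual = [l - q for l, q in zip(loud, quiet)]
residual_peak = max(abs(x) for x in residual)   # ~1.0, not a complete null
```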

Let’s look at one more common cause of phase interference: frequency. So, again, I’ll use a 20Hz wave for the demo, but let’s now include a 100Hz wave.

Now, the relationship becomes more complex - there are multiple points at which we have constructive and destructive interference.

What I want you to keep in mind is that due to the complexity of phase relations between various signals, there’s absolutely no way we can eliminate destructive interference - unless we want all of our music to be single-frequency sine waves, but that doesn’t seem like a great option.
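As a rough illustration of why this gets complicated, here’s a sketch where a single time shift cancels one frequency while barely touching another (pure Python, illustrative names):

```python
import math

sr = 1000

def two_tone(offset_s=0.0):
    """A signal with 20Hz and 100Hz components, optionally shifted in time."""
    return [
        math.sin(2 * math.pi * 20 * (t / sr + offset_s))
        + math.sin(2 * math.pi * 100 * (t / sr + offset_s))
        for t in range(sr)
    ]

# Shift the second copy by 5ms: half a cycle at 100Hz, only 1/10 of a cycle at 20Hz.
combined = [a + b for a, b in zip(two_tone(), two_tone(0.005))]

# The 100Hz components cancel almost completely (near-zero projection onto 100Hz)...
residual_100 = sum(s * math.sin(2 * math.pi * 100 * t / sr) for t, s in enumerate(combined))
# ...while the 20Hz components mostly reinforce, so the peak stays high.
peak = max(abs(s) for s in combined)
```

One shift, two completely different outcomes at two frequencies - and real music contains thousands of frequencies at once.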

Let’s look at one more conceptual topic before we move on to more practical ideas.

Watch the video to learn more >

Explaining Phase Rotation vs. Phase Interference

So, whereas phase interference has to do with the relationship between multiple signals, phase rotation has to do with the orientation of peaks and troughs within an individual signal or track. For example, inverting the phase, like we did in the last chapter, is a complete phase rotation in which we entirely flip the peaks and troughs.

Phase rotation is also relevant when we discuss asymmetrical waveforms, or waveforms that lean more toward the positive or negative side.

This happens with instruments that have aggressive compressions - for example, a trumpet typically emphasizes compressions over rarefactions; that is, the positive peaks over the negative troughs.

There’s nothing wrong with this, really - it won’t affect the sound of the signal or track. Still, it will likely affect how our processors interact with it, so if possible, it’s best to remedy an asymmetrical waveform by using phase rotation. The only processor I know of that can adjust phase rotation is iZotope’s RX.

Its phase module will measure the asymmetry and rotate the phase as needed - again, it doesn’t affect the sound, the amplitude, or anything like that. It only affects the relationship of peaks and troughs - but all the information is still present.

So, to clarify, whereas phase interference, or what I personally sometimes call phase relations, has to do with two or more competing signals, phase rotation refers to the adjustment of the peaks and troughs within a single signal or track. Now, phase rotation can cause phase interference, but we’ll cover that in more detail later in the video.

Now that we know the concepts, let's discuss how we can attempt to minimize destructive interference - starting with recording.

Watch the video to learn more >

Phase Interference During Recording

When recording, there are two main causes of phase interference - the first being multiple microphones being used on a sound source.

For example, if I use a stereo pair to mic an acoustic guitar, the waves that hit one mic will vary slightly from those that hit the other - either in frequency, amplitude, time, or compression and rarefaction.

The second contributing factor is the room. As the performance occurs, the room will reflect and refract sound waves - sometimes this sounds great, sometimes it doesn’t - it depends on the recording environment.

A heavily insulated or “Dry” room helps minimize this type of interference by absorbing sound waves before they can travel back to the microphone. Still, as I’m sure you know, sometimes the reflections can help augment the sound, add some life to a recording, and so on.

So, this means we need to be concerned about two things when recording, at least when considering phase relations.

The first is our microphone placement, and the second is our recording environment - and, of course, these two elements can play heavily into one another.

Starting with mic placement - we should keep any two or more microphones the same distance from a sound source. Although this won’t be perfect, it’ll help minimize differences in timing between the multiple signals.

For example, if we’re recording a drum set and using a spaced pair overhead technique - it helps to measure the distance of each from the snare or kick. Ideally, these two will be placed the same distance from the kit. This will help to lessen differences in the recorded frequencies and amplitudes.

If the levels still don’t match, we could apply more gain to the microphone with the lower level - or we could fix this during mixing. Also, as you’d imagine, since each microphone uses different capsules and circuitry, resulting in varying frequency and dynamic responses, it helps to use a matched pair to keep the signals as similar as possible.
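To get a feel for how sensitive timing is to placement, here’s a back-of-the-envelope sketch - the speed of sound (~343 m/s at room temperature) and the 48kHz sample rate are both assumptions for illustration:

```python
SPEED_OF_SOUND = 343.0   # meters per second, assumed room temperature
SAMPLE_RATE = 48000      # samples per second, assumed session rate

def placement_delay_samples(extra_distance_m):
    """How many samples the farther mic's signal lags behind the closer one."""
    return extra_distance_m / SPEED_OF_SOUND * SAMPLE_RATE

# Even a 10cm mismatch between two overheads shifts timing by ~14 samples.
mismatch = placement_delay_samples(0.10)
print(round(mismatch))
```

This is why measuring each overhead’s distance from the snare or kick pays off - a few centimeters already matters at the sample level.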

Lastly, the room is harder to control. Most production, especially for drums, is digital nowadays, so you don’t need to worry about this too much - but if you’re recording in a room, try to dampen the reflections unless the room has been designed with the proper architecture and materials to create a pleasant sound.

With these ideas covered, let’s take a more practical approach and discuss how to fix these issues when mixing.

Watch the video to learn more >

Manually Fixing Phase Interference with Polarity

As you might have noticed during the first chapter, I can use some utility plugins to affect the polarity of a signal.

In Logic Pro, the Gain utility plugin offers phase inversion, but many other DAWs include similar stock plugins.

Another plugin I’m going to use is a correlation meter - which I’ll place on the stereo output.

So, what we’re going to use is something called the Left-Right method - which I came across when watching a video from a channel called ‘Brass Palace.’ So, a big thank you to that engineer - I believe his name is Chris - for sharing this really helpful idea.

In short, whenever we have a multi-tracked instrument, like a drum set, we’ll use phase inversion and observe our correlation meter until all tracks have the best possible relationship.

So, let's start with the drum overheads and pan those completely to the left. Then, we’ll bring in our kick and pan that completely to the right. So right now, both are soloed, the overheads are panned left, and the kick right, and we’re observing our correlation meter.

If we have a positive value with the correlation meter, that’s generally a good sign; if it’s negative, it’s generally a bad sign.

With the Gain Utility plugin inserted on the kick, and while observing the correlation meter, I’ll invert the phase and see how this affects the correlation.

If the inversion improves the correlation, I’ll keep it inverted. If the original phase rotation was better, I won’t invert the polarity - it’s really that simple. Once I’ve measured the kick and decided which polarity is best, I’ll keep it soloed and then pan it to the left with the overheads.
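Conceptually, the decision the Left-Right method is making can be sketched like this (pure Python, with a Pearson correlation as a stand-in for the correlation meter - illustrative, not any plugin’s actual code):

```python
import math

def correlation(a, b):
    """Pearson correlation - roughly what a correlation meter reports (+1 to -1)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def best_polarity(reference, track):
    """Keep whichever polarity of `track` correlates better with `reference`."""
    flipped = [-x for x in track]
    if correlation(reference, flipped) > correlation(reference, track):
        return flipped
    return track

# Toy example: a kick mic whose polarity is inverted relative to the overheads.
overheads = [math.sin(0.02 * t) for t in range(1000)]
kick = [-0.8 * x for x in overheads]
aligned = best_polarity(overheads, kick)  # the flipped copy wins
```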

Next, I’ll work on the snare - so the kick and overheads are panned to the left and the snare to the right. Again, I’ll use the Gain utility plugin on the snare while observing the correlation meter. Once again, I’ll see which polarity improves the relationship.

When the snare is done, I’ll pan it to the left and go through the other drum multi-tracks like the toms, hi-hats, and any other related tracks and repeat the process.

Once I have everything inverted or kept as it was originally, according to what results in the best correlation, I’ll put all the tracks back into the middle.

Let’s listen to a live recorded drum set. We’ll do a before and after and notice how the drums sound more focused after the method has been introduced. That said, keep in mind that you may like the more spread-out sound that comes with more phase cancellation, so using your ears is always important.

Watch the video to learn more >

Manually Fixing Phase Interference with Delay

Inverting the phase isn’t the only way to improve phase relations between multi-tracked instruments.

We can also use sample delay to adjust the timing of a track until its waves better align with another recording of that same instrument.

So let’s go back to our drums - I’m going to remove all of the utility plugins, so basically, everything is back to how it was originally.

Instead of the utility plugin, this time, I’m going to use this sample delay plugin called ‘Sound Delay’ by Voxengo - it’s completely free and works with lots of DAWs on multiple operating systems, so I’d definitely recommend it. Also, it helps to know that this doesn’t delay the signal like a stylistic or creative temporal delay - instead, imagine it simply pushing back the full signal by the number of samples or milliseconds introduced.

Then we’re going to use the same method that we used last time - that is, put a correlation meter on the stereo output, solo the overheads and the kick, with the overheads panned to the left and the kick to the right.

Then, I’ll slowly increase the number of samples by which the kick is delayed. As I adjust the kick’s timing, I’ll keep an eye on the correlation meter. Whenever the correlation is as positive as I can possibly get it, I’ll keep the settings, close the plugin, pan the kick to the left with the overheads, and then move on to the next track that’s part of this multi-track instrument.

Just like before, this would be the snare - I’d delay it until it correlates as best as possible, then repeat this process for all of the drum tracks.
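The delay-based version of the method boils down to a search like this sketch (again, Pearson correlation stands in for the meter, and the names are illustrative):

```python
import math

def correlation(a, b):
    """Pearson correlation over the overlapping length of two tracks."""
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb) if va and vb else 0.0

def best_delay(reference, track, max_samples=64):
    """Sweep sample delays and keep the one with the highest correlation."""
    return max(
        range(max_samples + 1),
        key=lambda d: correlation(reference, [0.0] * d + track),
    )

# Toy example: the close mic picks up the hit 12 samples before the overheads do.
overheads = [math.sin(1.0 * t) for t in range(2000)]
close_mic = overheads[12:]
delay = best_delay(overheads, close_mic)
```

A plugin like Voxengo’s Sound Delay is doing the delaying; our ears and the meter are doing this search by hand.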

Keep in mind that this doesn’t need to be done for separately recorded instruments - for example, say after I recorded drums, I used one microphone to record a tambourine or shaker. I would not consider this as part of the multi-tracked drums.

With that said, let’s take a listen to the drums having their phase adjusted with this method. Again, we’ll do a full A/B, and keep in mind that you may prefer the non-adjusted or original recording.

Watch the video to learn more >

Phase Interference within a Processor

So far, we’ve considered how phase interference can occur between multiple signals - be it constructive or destructive interference.

Another thing that’s important to consider is how a plugin can alter the phase of a signal simply by processing it.

This occurs most notably with equalizers - since they utilize multiple internal filters to create amplitude changes, and those filters shift the phase to make that happen.

To show what I mean, take a look at this Bertom EQ Analyzer - with it, we can monitor the phase changes that occur when an equalizer alters the frequency response.

In most instances, this doesn’t make a big difference - but you’ll notice that if we use a high-pass filter, this shift becomes pretty aggressive.

If the slope is increased to 18dB per octave or greater, this will cause the phase to shift 180 degrees - practically speaking, this affects the overall frequency response.
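As a rough model (an idealized analog filter built from cascaded first-order sections - a simplification, not the plugin’s exact math), we can compute how much phase a high-pass rotates:

```python
import math

def highpass_phase_deg(freq_hz, cutoff_hz, order=1):
    """Phase shift of a high-pass modeled as `order` cascaded first-order
    analog sections (order 3 roughly corresponds to an 18dB/octave slope)."""
    return order * (90.0 - math.degrees(math.atan(freq_hz / cutoff_hz)))

first = highpass_phase_deg(100, 100)            # ~45 degrees at the cutoff
third = highpass_phase_deg(100, 100, order=3)   # ~135 degrees: steeper slope, more rotation
deep = highpass_phase_deg(10, 100, order=3)     # well past 180 degrees below the cutoff
```

The takeaway matches what the analyzer shows: the steeper the filter, the more aggressively the phase rotates around the cutoff.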

In the instance of a high-pass filter, a small-to-moderate resonant peak will be created right above the cutoff frequency.

More importantly, though, let’s consider how equalization can affect other signals when we deal with a multi-tracked signal.

Watch the video to learn more >

Multi-tracked Instruments and EQ

If we have multiple signals, all containing info or waveforms of the same instrument, equalizing one track but not another can have very interesting consequences.

So, let’s go back to our drum example - again, we’ll use a correlation meter on the output, but this time, let’s keep the correlation or phase relationships that we achieved with the delay-based phase alignment.

To showcase this, I’ll solo the overheads, kick, and snare and use the same method as before. So right now, the drums have pretty good phase relations - everything correlates pretty well, but let’s see what happens when I add a high pass filter to the snare.

We’ll notice that altering the frequency response affects the correlation in a much more significant way than we’d expect.

Because the EQ affects the phase rotation, this shifts the peaks and troughs of the affected snare, in turn altering the phase relationship between the close-miked snare and the other signals that also include the snare, however faint that inclusion may be.

There is a way to avoid this while still being able to equalize the snare, but we’ll cover this in the next chapter. For now, let’s listen to EQ being used on a snare that has already been adjusted to correlate with other multi-tracked instruments - and notice how the EQ has a negative impact on the correlation.

Watch the video to learn more >

Linear Phase and Phase Correction

So, the way we fix the issue from the last chapter is to use a linear phase filter. Many equalizers include an option to use a linear phase setting - this will correct the phase rotation caused by the equalizer, in turn avoiding the phase interference caused by the rotation.

Just to check this, let’s look at the Bertom EQ Analyzer again (and by the way, this plugin is free if you want to test plugins for yourself), and we’ll notice that the phase stays at 0 degrees - in other words, no phase rotation occurs.

Next, let’s look at the correlation meter and perform the same tests from the last chapter - we’ll notice that the correlation doesn’t change when we introduce the same high-pass filter if the EQ is set to a linear phase mode. Again, this is because we’re not introducing phase rotation and subsequently avoiding any changes to the phase relationships.

Although linear phase filters are incredibly useful for this reason, they do have a drawback called pre-ringing distortion. This ringing is generated as the signal passes through the processor; however, when our DAWs correct the timing due to the latency caused by the linear phase setting, the small distortion gets centered directly on the original signal - meaning part of the ringing now lands just before each transient.

This can affect transients and create a strange ringing sound in the low frequencies; however, the effects are very mild, so the pros in this situation definitely outweigh the cons.
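To see where pre-ringing comes from, here’s a sketch of a linear phase (windowed-sinc FIR) lowpass - the taps mirror around the main tap, so ringing exists before the peak as well as after it (illustrative code, not any EQ’s implementation):

```python
import math

def linear_phase_lowpass(num_taps=63, cutoff=0.1):
    """Windowed-sinc FIR lowpass. Symmetric taps are what make the phase linear."""
    center = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        x = n - center
        ideal = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        hamming = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))  # window
        taps.append(ideal * hamming)
    return taps

taps = linear_phase_lowpass()
mid = len(taps) // 2
# The impulse response mirrors around the main tap...
symmetric = all(abs(taps[i] - taps[-1 - i]) < 1e-9 for i in range(len(taps)))
# ...so ringing appears *before* the peak as well as after it: pre-ringing.
rings_early = any(abs(t) > 1e-4 for t in taps[:mid])
```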

So let’s take a listen to the same HP filter being used, but this time, using a linear phase EQ. We’ll notice that the correlation doesn’t change like it did when using a minimum phase or zero latency filter.

Watch the video to learn more >

Parallel Processing and Phase Interference

Last up, I want to quickly talk about a big issue to avoid - especially if you’re concerned about the phase relations of your mix, or in this case, mix or master.

Parallel processing is a useful technique, but it can quickly cause the same issues we covered in Chapter 8. So, let’s say we want to add parallel EQ to the drum bus - I know this isn’t a common form of processing, but bear with me here for a moment.

Say I wanted to isolate the mids of the drums for the sake of processing just the mid frequencies but with a parallel setup.

Well, we’ll notice that this does something very similar to our snare in chapter 8. Since we’re adjusting the peaks and troughs of the signal via phase rotation, when the two signals are combined, we’ll notice destructive phase interference.

Except this time, it’s a lot worse. Since the signals were originally identical, shifting the phase rotation of the parallel track with an EQ has a huge impact on the phase relations. Think back to Chapter 1 - when I adjusted the timing of one of the two 20Hz waves, it greatly affected the relationship between them because, in all other ways, they were identical.

So when we combine a signal with an identical signal, like with parallel processing, but change the phase rotation or invert the phase, we can expect severe cancellation.

Notice that when we observe the output, or the combination of the original signal and the parallel processed signal, we have huge notches at the cutoff frequencies - which makes sense, because the cutoffs are where we see the largest changes in phase rotation.
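As an analogy (a pure sample delay standing in for the phase shift an EQ introduces - not the same mechanism, but the same underlying math), here’s how a phase-offset parallel copy notches one frequency while reinforcing another:

```python
import math

SR = 48000
DELAY = 24   # samples - half the period of 1kHz at 48kHz

def tone(freq_hz, n=4800):
    return [math.sin(2 * math.pi * freq_hz * t / SR) for t in range(n)]

def parallel_mix(signal, delay):
    """Sum the dry signal with a phase-offset copy of itself (the parallel path)."""
    return [s + (signal[i - delay] if i >= delay else 0.0) for i, s in enumerate(signal)]

# At 1kHz the parallel copy arrives 180 degrees out of phase: a deep notch.
notch_peak = max(abs(s) for s in parallel_mix(tone(1000), DELAY)[DELAY:])
# At 2kHz the same offset is one full cycle: the copies reinforce instead.
boost_peak = max(abs(s) for s in parallel_mix(tone(2000), DELAY)[DELAY:])
```

The same signal, the same parallel path - yet one frequency vanishes while another doubles, which is exactly the notched response we see at the output.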

Again, the remedy for this is avoiding phase rotation by using a linear phase EQ.

So let’s listen to two examples - one in which a parallel signal is being equalized with minimum phase filtering and one in which it’s being equalized with linear phase filtering. Notice how we don’t have issues when we use the linear phase filter type.

Watch the video to learn more >