How to Expertly Edit Your Tracks

The Biggest Editing Mistake I See

Sitting in on other sessions, I notice this all the time.

There’s nothing wrong with crossfades, but using crossfades without listening to the edit often causes a strange overlap between the 2 signals.

The thinking makes sense: blending one track into the next, and vice versa, should result in a smooth transition.

But if there's any variation between those 2 tracks, which there almost always will be, you'll blend 2 signals that don't match up and end up with an unnatural, smeared sound.

Granted, this is why the crossfade can be adjusted, but let’s be honest: most of the time this is a set-it-and-forget-it technique used to move through a session quickly.

I’ll cover better methods in a moment but for now, let’s listen to a crossfade between 2 different takes and notice how the blend doesn’t sound natural, even though the 2 performances are intended to be as close as possible.

Watch the video to learn more >

How to Cut and Fade Tracks

Knowing how to cut a track properly results in the need for fewer fades and smaller crossfades.

The best place to cut a signal is right where the waveform is at 0 degrees. By 0 degrees I’m referring to phase rotation, not phase shift.

Semantics aside, it’s the point where the waveform crosses the center line between the peak and the trough.

When editing, you can edit down to the sample, meaning if you zoom in enough you can find the exact moment to split the clip.
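This isn’t literally how your DAW works, but if you’re curious, the idea of snapping a cut to the nearest 0-degree point can be sketched in a few lines of Python. The function name and the sample-list representation here are my own, just for illustration:

```python
import math

def nearest_zero_crossing(samples, index):
    """Return the sample index closest to `index` where the
    waveform crosses zero (a 0-degree point)."""
    # Walk outward from the requested index in both directions.
    for offset in range(len(samples)):
        for i in (index - offset, index + offset):
            if 0 < i < len(samples):
                # A sign change between neighboring samples
                # marks a zero crossing.
                if samples[i - 1] <= 0.0 <= samples[i] or samples[i - 1] >= 0.0 >= samples[i]:
                    return i
    return index  # no crossing found; fall back to the request

# A 100 Hz sine at 48 kHz crosses zero every 240 samples,
# so a cut requested at sample 500 snaps back near sample 480.
sine = [math.sin(2 * math.pi * 100 * n / 48000) for n in range(2000)]
cut = nearest_zero_crossing(sine, 500)
```

The point is the same one I make in the video: the “right” cut is a measurable property of the waveform, not a guess.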

Or if you need to shorten a clip from either direction, this is a great place to make that edit.

If you’ve ever heard a click or a pop after making a cut, this is why. The signal is high in amplitude when the cut is made, causing a noticeable difference between the signal and silence, or the signal and another signal with a differing amplitude.

So, if you want to cut a track into silence, or go from silence into a clip, cutting at a 0-degree point is the best option.

If you’re going from silence into the clip, and the change sounds like it happens too quickly, move the edit slightly back from the intended start point, and again center the cut at a 0-degree point.

Then use a small fade-in that ends right at the intended start point.

If you’re going from a clip into silence, and it sounds like it’s ending too quickly, move the cut slightly after the intended stop point, again placed at a 0-degree point, and use a small fade-out that begins at the intended stop point.
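Under the hood, a fade is just a gain ramp applied to the samples. Here’s a rough Python sketch of the linear fades described above; the function names are hypothetical, not your DAW’s actual API:

```python
def fade_in(samples, fade_len):
    """Linearly ramp the first `fade_len` samples up from silence."""
    out = list(samples)
    for n in range(min(fade_len, len(out))):
        out[n] *= n / fade_len  # gain rises from 0 toward 1
    return out

def fade_out(samples, fade_len):
    """Linearly ramp the last `fade_len` samples down to silence."""
    out = list(samples)
    for n in range(min(fade_len, len(out))):
        out[-1 - n] *= n / fade_len  # gain falls to 0 at the end
    return out
```

Because the ramp starts or ends at zero gain, the clip meets the silence at zero amplitude, which is exactly why the click disappears.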

Lastly, if you’re trying to blend one track into another, a crossfade is the best option, but there’s a catch.

As you might have guessed from the first section of the video, this crossfade can’t, or at least shouldn’t, be done arbitrarily.

Again, zoom in and find a point at, or as close as possible to, 0 degrees. Notice I’m dragging the end points of the tracks around while observing the waveform’s rotation.

Once I find a good point where they overlap well, I’ll create the crossfade. If it’s a stereo track, it may be more difficult, since you’ll need to find a good compromise if there are no points where both channels overlap at 0 degrees.

Maybe the left channel does but the right doesn’t, or vice versa; use your best judgment and find the best compromise.

You might be thinking: well, just use your ears. This seems annoyingly over the top, like I’m mixing with my eyes.

You can kind of use your ears for this, in that either you’ll hear a pop or you won’t. However, you can’t hear a change of less than 1ms and think, that’s not aligned properly, I need to move it 20 samples to the left or right. Unless you have superhuman hearing.

I don’t, so, I look to see where that alignment will work best.

Once you find the best cut, use a small crossfade.

The bigger the crossfade the more unrelated signal you’ll blend together.

A tiny crossfade placed at the best point will always sound better than a big crossfade if you’re going for a seamless edit.
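For what it’s worth, a crossfade is just two overlapping gain ramps. Here’s a conceptual Python sketch using equal-power curves; the function name and list-based audio are my own illustration, not any DAW’s implementation:

```python
import math

def crossfade(tail, head, fade_len):
    """Blend the end of `tail` into the start of `head` over
    `fade_len` samples using equal-power gain curves."""
    mixed = []
    for n in range(fade_len):
        t = n / fade_len
        # Equal-power curves keep perceived loudness steady
        # across the overlap.
        g_out = math.cos(t * math.pi / 2)
        g_in = math.sin(t * math.pi / 2)
        mixed.append(tail[len(tail) - fade_len + n] * g_out + head[n] * g_in)
    return tail[:-fade_len] + mixed + head[fade_len:]
```

One design note: equal-power curves suit material that differs between the two clips; for two nearly identical takes, a plain linear crossfade sums more cleanly. Either way, the shorter the overlap and the better the alignment, the less mismatched signal gets blended.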

So, let’s listen to some edits real quick. I’ll cut clips at 90 or -90 degrees, and you’ll notice how audible the edit is.

Then, I’ll make the edit at 0 degrees, and you’ll notice how much smoother the transition is.

Then, we’ll compare a crossfade done poorly, with one done with the method I showed here.

Watch the video to learn more >

Using LUFS Normalization

This technique doesn’t work in all circumstances, but if you have a performance that varies greatly in loudness yet is supposed to be consistent, it works well.

For example, if a vocal performance during the chorus varies between sections, you can isolate the clip and use LUFS or loudness normalization to bring it up, or turn it down.

First, find a section that’s at the intended loudness, or determine what you want the loudness to be. Typically you’ll want the peak to be around -5dB so you can add some additional processing.

With that value determined, isolate sections that fall below or go too far above it, and normalize them to that LUFS value.

You might be wondering, why not just use peak normalization?

Peaks can vary greatly and aren’t a good indication of consistency - for example, maybe the vocalist sang something too loudly in a section, and now peak normalization is using that part to determine how much gain is needed. If that’s the case, which is pretty common, peak normalization isn’t a good option.

Once you perform LUFS normalization on a larger section of the performance, go back and listen to fine-tune the changes. LUFS normalization uses clean gain to make changes, so it’s easy to bring the gain up or down further as needed.
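To show what “clean gain to a loudness target” means, here’s a simplified Python sketch. Note the big assumption: I’m using plain RMS level as a stand-in for LUFS, whereas a real LUFS meter (per ITU-R BS.1770) also applies K-weighting and gating. The function names are my own:

```python
import math

def rms_db(samples):
    """Loudness of a clip as RMS level in dB -- a rough stand-in
    for LUFS, which additionally applies K-weighting and gating."""
    mean_sq = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_sq) if mean_sq > 0 else float("-inf")

def normalize_to(samples, target_db):
    """Apply clean gain so the clip measures `target_db`."""
    gain_db = target_db - rms_db(samples)
    gain = 10 ** (gain_db / 20)  # dB to linear gain
    return [s * gain for s in samples]
```

Because this is pure multiplication, nothing about the performance’s dynamics within the clip changes; only its overall level moves, which is why it’s so easy to nudge afterward.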

It can be slightly time-consuming, but it’s a great method if you want consistent performances and you’re trying to avoid processing like compression as much as possible.

Let’s take a look at this editing method, and let me know in the comments if you plan on trying it.

Watch the video to learn more >

Editing Exact Notes

So, say you’re editing a vocal, bass, guitar, piano, and so on, and you want to isolate a note.

Maybe you want to use it creatively, or fix a mistake in the performance, etc. How can you be sure that you’ve properly isolated the note?

Fortunately, this is easy to do, again by observing the waveform.

First, you can use your ears to find the note.

Then, zoom in.

In this example I have a bass part and multiple notes occur in a short period of time.

By looking closely you can see where the notes change. The denser the oscillations, the higher the frequency.

So, say I’m going from a lower pitch note to a higher one.

I can find exactly where the notes change by finding where the oscillations change from more spread-out oscillations to more dense oscillations.

I could also do the same if it’s going from high pitch to low pitch. You’ll easily be able to see a difference. Again, once you find where that occurs create the cut at the 0-degree point.

This is also super helpful for unmusical aspects of a performance.

Say the vocal has annoying sibilance. I could de-ess, but the sibilance is so aggressive that the de-esser has to work too hard and the processing becomes audible.

By zooming in I can find the points at which the performance changes from a note, to higher frequency sibilance.

It’s easy to see - you’ll have oscillations that are a bit more spread out and indicative of a lower frequency, and then a super dense cluster. The dense cluster is the sibilance.
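The “denser oscillations mean higher frequency” observation can even be measured. A common way is the zero-crossing rate: how often the waveform changes sign. Here’s a small Python sketch (my own illustration, not a tool from the video):

```python
import math

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs that change sign --
    dense oscillations (high pitch, sibilance) score higher."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:])
        if (a < 0 <= b) or (a >= 0 > b)
    )
    return crossings / (len(samples) - 1)

# A low sung note vs. sibilant-like high-frequency content,
# both modeled as sines at a 48 kHz sample rate.
low = [math.sin(2 * math.pi * 200 * n / 48000) for n in range(4800)]
ess = [math.sin(2 * math.pi * 6000 * n / 48000) for n in range(4800)]
```

The sibilant-like content crosses zero far more often per sample, which is exactly the dense cluster you see when you zoom in on an “s.”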

And of course you can double check this by listening to the section.

Once I cut at the 0-degree points around the sibilance, I can use clip gain to attenuate it.

The process will be slow at first, but like everything, the more you do it the easier and more second nature it becomes.

So, let’s listen to a vocal track that I’ve designed to artificially have annoying levels of sibilance using frequency-specific expansion.

Then, I’ll carefully isolate the sibilance and attenuate with clip gain for a more balanced sound.

Watch the video to learn more >