How to Prepare for an Audio Mix

[Image: Mixing Tracks View]

Your home studio software or ‘Digital Audio Workstation’ (DAW) will undoubtedly produce audio directly from any instrument you play into it. This means that for the simplest productions, you may be able to get by without mixing and mastering your creations. But to ensure the best quality for your music, mixing is an essential process. This article describes how to prepare for an audio mix.

Before you even touch a mixing fader, there are some preparations to make. These are the basis of a good mix. They will save you much time and heartache later on and make it much more likely that you will get to a professional-sounding mix first time. My DAW of choice is Cakewalk Sonar, as I compose in MIDI and that is one of its many strengths. But the principles described here will apply to most platforms. This is just the way I do it, worked out over years of study and trial. It’s not gospel. When it comes to any creative process, there are never really any rights or wrongs.


Your composition may consist of several sound sources. You may have vocals or purely acoustic instruments recorded through a microphone. Or perhaps direct-input recordings, played live straight into an audio interface to a channel in your DAW. Or perhaps you have composed in MIDI as I often do – in my case, to make up for the fact that I can play so few instruments well enough for live capture! Doesn’t matter which source – to prepare an audio mix, we will treat them all the same.

There are four stages to this preparation: checking the outputs, bouncing to audio, refining the audio, and routing. And each of those stages involves more than one process. Here we go.

1. Check the Outputs

For all your audio tracks as they stand now, and for all your outputs from synths and samplers – check their output volume on the output track’s VU meter. You want a nice clean signal at close to 0dB (also called ‘unity’). You don’t want them too quiet, or you may later struggle to get them to take their place beside the rest of the band. And you don’t want them running hot above unity, because that means you will be losing information from the signal.


This loss of information is known as ‘clipping’ because an output that is too loud may overwhelm your system’s audio circuitry and so clip the peak off the audio wave, introducing distortions. It is true that when it comes to music, loud is good – but not at this stage. That’s a matter for the mix, and ultimately the mastering process. Right now, what we need is accuracy, not volume, and certainly not distortion.
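
To see what clipping does to a waveform, here is a small illustrative Python sketch (the 440 Hz tone, the 44.1 kHz sample rate and the 1.5 peak level are arbitrary example values, not anything your DAW will show you): any sample beyond full scale is simply flattened, and that flattening is the distortion you hear.

```python
import math

# Full scale (0 dBFS): the loudest level the converter can represent.
FULL_SCALE = 1.0

# A sine wave "recorded" 50% too hot - its true peak is 1.5.
samples = [1.5 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(100)]

# The converter flattens anything beyond full scale: the peak is clipped off.
clipped = [max(-FULL_SCALE, min(FULL_SCALE, s)) for s in samples]

lost = sum(1 for s in samples if abs(s) > FULL_SCALE)
print(f"{lost} of {len(samples)} samples lost their true value to clipping")
```

The clipped samples can never be recovered, which is why we fix levels now rather than in the mix.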

For recorded tracks, adjust the track output. For synths and samplers, adjust the volume within the instrument itself, where possible.


Set all pan pots (‘pot’ being short for ‘potentiometer’) to centre. We’ll worry about their place in the soundstage later when we get to mixing. We do this because some synths and samplers try to save composers’ time and avoid some elements of mixing by offering panned stereo outputs. But that’s not our purpose here – we’re not looking for shortcuts, but for the eventual perfect production.

And it’s for that same reason that with a few exceptions, we set all instrument outputs to mono. The exceptions are, for example, full string sections, which can really only ever be stereo. Or perhaps a swirling synthesizer. Or maybe any instrument such as an organ or guitar being played through a Leslie rotating speaker. Strings, stereo; individual violin, mono. Backing vocals, stereo; lead vocalist, mono. Grand piano, stereo; Rhodes piano, mono.
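
Folding a stereo output down to mono is nothing more exotic than averaging the two channels, sample by sample. A minimal Python sketch (the sample values are invented, chosen to be exact in binary floating point):

```python
# Hypothetical left/right channel samples.
left = [1.0, 0.5, -0.25, 0.0]
right = [0.5, 0.25, -0.75, 0.5]

# Mono fold-down: average the channels so neither dominates.
mono = [(l + r) / 2 for l, r in zip(left, right)]
print(mono)  # [0.75, 0.375, -0.5, 0.25]
```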

Even the drums should be mono for now, at least individual drums and cymbals. There is an argument that if your drum sampler has an overhead mic channel, that should be stereo. But not all drum samplers have that luxury. In any case, an overhead mic is easy to replicate later by routing a send from the drum bus through a stereo reverb, but that’s again a question for the mix, not the preparation.

2. Bounce to Audio

[Image: Sonar Track Folders]

In readiness for this, I make use of track folders. I put all my MIDI tracks in one, and all my synth and sampler output audio tracks in another. I make another folder – called ‘Bounced’ – where I will place all the tracks I am about to render.

Whether you use folders or not, the rest of the process still applies. Go to an output track, and first ensure that its output is not passing through any buses. This is to make sure that the render will be pure, and not processed by any effects or volume controls. I tend to point the output at the audio interface’s speaker channel, just to be sure.

Then solo the audio track, which should also solo any associated MIDI track. Now bounce it to a new audio track, in your ‘Bounced’ folder if you have one. Switch off the solo. Repeat until everything is bounced.

Finish the bounce by switching off all MIDI and synth output tracks – it’s called ‘archive’ in Sonar, and you can do it with one click at track-folder level. Finally, collapse the MIDI and Audio folders. What you have left are your bounced tracks of pure audio, either mono or panned to centre and ready for the next stage.

3. Refine the Audio

Set all your bounced tracks to a clean, non-bussed output as we did earlier. Now it’s time to refine what we have. Again, we have a repetitive process to follow, almost the same for each track. If your DAW has a console view (like the channel strips on a mixer desk), use that rather than a view of the track itself.

[Image: Sonar Channel Strip]

Gain Staging

Set the bounced track to loop-play at its loudest point. Now watch the track’s VU meter. We are going to use the track’s input gain pot (not its output fader) to adjust its volume. The reason we do this is to ensure we have room in the track’s gain for effects we will add later. Adjust the gain pot until the meter is peaking at about -16dB.

This is the first part of what is called ‘gain-staging’. When eventually we start mixing, we will do this across the mix to make sure nothing clips and to avoid unwanted distortions.
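
The arithmetic behind that gain adjustment is simple decibel maths. A sketch in Python, assuming a hypothetical track that currently peaks at -3 dBFS:

```python
import math

def db_to_linear(db):
    # 0 dB -> 1.0, -6 dB -> roughly 0.5
    return 10 ** (db / 20)

def linear_to_db(amplitude):
    return 20 * math.log10(amplitude)

current_peak_db = -3.0   # hypothetical: where the track peaks now
target_peak_db = -16.0   # where we want it, for mixing headroom

# The gain pot setting is just the difference between the two.
gain_db = target_peak_db - current_peak_db
print(f"gain pot: {gain_db:+.1f} dB")  # gain pot: -13.0 dB

# Applying that gain to the peak sample lands it on target.
new_peak = db_to_linear(current_peak_db) * db_to_linear(gain_db)
print(round(linear_to_db(new_peak), 6))  # -16.0
```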


[Image: Sonar ProChannel Compressor]

Now add a compressor to the track. We’re going to make sure the track’s voice will get heard in the mix, so we’ll add mild compression to make sure we’re putting most of the instrument’s energy into its heart rather than its fringes. Add about 4dB of gain reduction at a ratio of around 4:1, and adjust the compressor’s makeup gain to restore any lost volume.

We do this for all tracks, although there are a couple of exceptions. One is large string sections, which I tend to find do not benefit much from this form of early compression. Another is a bass instrument, where the instrument’s essence is so important – in this case, we might use a much higher compression ratio, say 20:1.
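
Those numbers translate directly into a compressor’s static curve: above the threshold, every dB of input produces only 1/ratio dB of output, and makeup gain then restores the lost level. An illustrative Python sketch (the -20 dB threshold is a made-up setting; the article fixes only the ratio and the rough amount of gain reduction):

```python
def compressed_db(level_db, threshold_db, ratio):
    """Static curve of a simple downward compressor, dB in -> dB out."""
    if level_db <= threshold_db:
        return level_db  # below threshold: untouched
    return threshold_db + (level_db - threshold_db) / ratio

THRESHOLD_DB = -20.0  # hypothetical threshold setting
RATIO = 4.0           # the mild 4:1 ratio suggested above

peak_in = -16.0       # track peak after gain staging
peak_out = compressed_db(peak_in, THRESHOLD_DB, RATIO)
reduction = peak_in - peak_out  # dB of gain reduction at the peak
makeup_db = reduction           # makeup gain restores the lost volume

print(peak_out, reduction)  # -19.0 3.0
```

A bass instrument’s 20:1 ratio would flatten that slope much harder – anything over the threshold barely rises at all.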


The next refinement may or may not be needed, depending on the quality of your sampled instruments or of the original recording. If there are glitches, imperfections, harshness or dullness in the instrument, we can try to fix that now with a few touches of EQ. We strap an equalizer to the track and set it playing.

To get rid of harshness or imbalance in the instrument’s sound spectrum, we will use one or more ‘notch filters’. Take a band of the equalizer and turn its gain all the way up. This will show up on any graphical EQ as a mound, depicting the range of frequencies affected. Now adjust the mound’s ‘Q’ value until its base is as narrow as it will go. Sweep that peak across the audio spectrum until you find the harshness. When you do, turn the band’s gain all the way down until you’ve turned the peak upside down. That’s it. Your notch filter has removed the offending frequency.
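
For the curious, the notch itself is usually a ‘biquad’ filter under the hood. This Python sketch builds one from the well-known Audio EQ Cookbook formulas (the 3 kHz ‘harsh’ frequency and the Q of 10 are made-up example values) and checks its response: a deep cut at the centre frequency, almost no effect well away from it.

```python
import math

def notch_coefficients(center_hz, q, sample_rate):
    """Biquad notch coefficients (Audio EQ Cookbook form), normalised so a[0] == 1."""
    w0 = 2 * math.pi * center_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def magnitude_db(b, a, freq_hz, sample_rate):
    """Filter magnitude response at freq_hz, in dB."""
    w = 2 * math.pi * freq_hz / sample_rate
    z = complex(math.cos(w), math.sin(w))
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    mag = abs(num / den)
    return 20 * math.log10(max(mag, 1e-12))  # floor avoids log10(0) at the null

b, a = notch_coefficients(3000, q=10, sample_rate=44100)
print(f"at 3000 Hz: {magnitude_db(b, a, 3000, 44100):.0f} dB")  # deep cut
print(f"at  500 Hz: {magnitude_db(b, a, 500, 44100):.2f} dB")   # nearly untouched
```

The high Q is what keeps the base of the mound narrow, so the cut removes the offending frequency without dulling everything around it.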

You can also use EQ to enhance parts of the instrument’s frequency range where it sounds at its best. This will also aid mixing later on, because the essence of the instrument will provide its focus, and that instrument will by its nature sound different to others. Thus, it will be easier to find its place in the mix.

However, if enhancement is what you seek, remember mixing’s two golden rules. One is that “you cannot polish a turd”. If the instrument or recording sounds bad, then replace it rather than waste time trying to make it do something it cannot. The other rule is that ‘less is more’. This sometimes translates into the use of subtractive rather than additive EQ. To highlight a frequency band, you can either increase the gain of that band (‘additive’) or decrease the gain of the surrounding frequencies (‘subtractive’).

4. Routing

Up to now, all our tracks have been pointing at the audio interface’s speaker outputs. Now it’s time to bring them under collective, rather than individual control. We do this by creating one or more buses and routing our audio through them. A ‘Master’ bus is essential, aimed at the speakers.

We can also use buses for grouping instruments together. Say you have a flute, an oboe and a bassoon. Point all these at a bus you call ‘Wind’, and route the Wind bus to the Master. Now adjust the volume of each of those wind instruments, so that they sit well with one another.
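
Conceptually, a bus is just a summing point with its own gain, so the whole routing scheme can be sketched in a few lines of Python (the track names and sample values are invented; one sample per track keeps it short):

```python
def db_to_linear(db):
    return 10 ** (db / 20)

def mix_bus(tracks, gain_db=0.0):
    """Sum tracks sample-by-sample, then apply the bus fader's gain."""
    gain = db_to_linear(gain_db)
    return [gain * sum(samples) for samples in zip(*tracks)]

# Hypothetical wind section, one sample each for brevity.
flute, oboe, bassoon = [0.2], [0.3], [0.1]

# One 'Wind' bus fader controls the whole section at once...
wind = mix_bus([flute, oboe, bassoon], gain_db=-6.0)

# ...and the Wind bus feeds the Master, which feeds the speakers.
master = mix_bus([wind])
print(master)
```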

Do the same with, say the drums in a drum bus, or the guitars in a guitar bus. Thus, when it later comes to the mix, you can adjust the volume of a whole section of the band with just a single bus fader.

This has been about how to prepare for an audio mix. When later the actual mixing begins, many engineers will also use buses to host common effects, such as reverb, compression and audio mastering tools. But that’s for another article.
