Sound Recording Basics

04 MAY 2017

written by Mike

SOUND RECORDING BASICS

 

 

Over the years, capturing sound has evolved dramatically: from the phonograph to the microphone in a mobile phone, from analog to digital.

People still use analog recording, but I will focus on a few aspects of digital recording. Digital recording is the most common, cheapest and easiest method of capturing the sounds you need.

Sound recording can be a fun, exciting, hectic, tiresome, laborious and unforgiving gig. But with a few guidelines and some basic knowledge, the difference between an amateurish and a professional-sounding production can be huge.

Just try to remember the last Internet video that you watched.

Was the picture quality good?

What about the sound?

How many times do you have to play with your volume control when switching between videos?

How noisy are the recordings of that famous vlogger you follow?

The forgotten art of quality sound recording is what separates a wannabe Internet star from a professional.

 


RECORDING EQUIPMENT

The simplest setup for recording sound would be a microphone, a cable/lead and a sound recorder. Connect the microphone to the recorder via the cable and voila!

Of course, there is a lot more to it, and professional recording sessions are much more complicated. But the basic principles stay the same.

Let’s have a quick look at the basic three components of the setup.

Microphone

There are plenty of big, heavy books on microphones alone. But for a good basic understanding of the subject we can distinguish two types of microphones: dynamic and condenser.

Condenser microphones are a bit more sensitive than dynamic ones. You can use them to record vocals in the studio, or wide-range instruments such as a piano or violin. DPA Microphones debunks some of the myths here.

 

Cables/Leads

The most common cables used to connect a microphone to a recorder are balanced cables with XLR connectors. They can carry the sound over a long distance without picking up unwanted noise.

USB cables that connect a microphone to the computer are also popular.

Sound Recorder

The subject of sound recorders is as wide as the sea, but just try to think about it for a second. Anything that can capture a sound is a sound recorder. A mobile phone is the most common one; a simple stereo recorder like the Zoom H4N can be handy too. At the professional end, there are many different kinds of sound recorders.

Small, portable ones are used for interviews; medium-sized ones for recording dialogue on a movie set. Recorders from Sound Devices have a good reputation.

For a beginner, a simple, direct USB microphone will do, but even a basic setup through an audio interface will always get you superior quality.

 

 


TECHNIQUES

The techniques of recording audio are an art in themselves. The choice of the correct microphone, its placement, recording levels and settings: these are only a few of the variables a good sound engineer has to take into consideration. It is also important to research the techniques that someone else used for the kind of recording you want to do.

Using an unusual placement or setup can lead to unexpected and often exciting results. Like using a “trash mic” for example. Every recording requires a different approach. It is important to have an open mind but also a good knowledge of basic procedures.

Have your standard setup in place and then another one as an experiment. If you are just starting out, that will often be the case.

COMMON RULES

As in everything, experimenting and learning from mistakes is a great thing. But there are a few standard rules that you should apply if you want your recording to sound awesome.

Be wise when choosing the microphone

– it can make a great difference to the general sound of your recording.

Use intelligent microphone placement

– remember the last time you had to raise the volume to the maximum to listen to that famous vlogger? Or maybe you had to turn it right down? Where you place the microphone decides how much signal, room and noise you capture.

Know your set up

– microphone, cable, and recorder. Using USB microphones is fine but even with the most basic audio interface connected to your computer the results will be much better.

Know your volumes

– record too quietly and you will have to boost the signal later, bringing the noise floor up with it. Turn the volume up, but record too loud and the distortion will ruin your work.
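
To make this rule concrete, here is a minimal sketch that reports the peak level of a take in dBFS. It assumes NumPy and a recording already loaded as float samples between -1 and 1; both are my illustrative assumptions, not tools this article prescribes.

    import numpy as np

    def peak_dbfs(samples):
        """Peak level of a float recording (values in -1..1), in dBFS."""
        peak = np.max(np.abs(samples))
        return 20 * np.log10(peak) if peak > 0 else float("-inf")

    # A healthy take peaks comfortably below 0 dBFS; a take that touches
    # 0 dBFS has almost certainly clipped and distorted.
    quiet_take = 0.05 * np.sin(2 * np.pi * 440 * np.linspace(0, 1, 48000))
    print(peak_dbfs(quiet_take))  # about -26 dBFS; boosting this later also boosts the noise floor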

Always record more than you need

– you will have more options to choose from and also a backup if something happens to the original recording.

Do a test and listen back to it

– going back to the placement and choice of the equipment. It is always better to get it right at the beginning rather than trying to correct it later on.

Know your basics

– audio recording can be a complicated subject. The basic knowledge of recording, compression and EQ will make a big difference to your final project.

Have fun!

– Experiment and have fun with the process. The more you learn hands on, the better your projects will sound in the future.

It doesn’t matter if you are working on your Internet video channel, making a family holiday video or recording an interview at work. Follow these simple rules and each one of your productions will be better in the end.

Next time when you watch something, focus on listening. Not only on music but also on dialogue and ambience.


Frequency of Sound

08 APRIL 2017

written by Mike

FREQUENCY OF SOUND

 

 

What is the first thing that comes to your mind when you think about waves?

How do they travel?

What are the lowest pressure levels humans can hear?

How can we surf them?

None of the above, and I've only tried surfing once, so I can't advise on that. I'm going to give away the answer: it's the hertz.

The hertz, named after the physicist Heinrich Rudolf Hertz, indicates how often particles vibrate when sound energy passes through a medium: air, water, steel and so on. Remember that the vibrating particles don't travel with the wave; they pass the energy forward, sort of like an audience wave during a football match.

A hertz is a unit of frequency.

1 Hz = 1 vibration / 1 second

What you need to remember is that the particles will always vibrate at the same frequency. So, for example, say you strike an awesome guitar note at 1000Hz. As the sound wave travels through the air, the particles interact with each other (creating compressions and rarefactions), but the frequency stays the same: 1000Hz.

And every particle along the way will vibrate at 1000Hz. The energy from the source will have the same frequency when it reaches your ear.

Easy to remember. It stays the same.

Now, one complete vibration is called a period. A period is measured either peak to peak or trough to trough, and it indicates one complete cycle of vibration.

So when the Moon travels around the Earth, it completes one cycle; around 27 days is the period of the Moon's orbit. The next important thing to remember is the relationship between frequency, pitch and directionality.

High frequency will have a high pitch and will be more directional.

Low frequency will have a lower pitch and will be less directional.

So something like the bass in an EDM track can sit below 200Hz, with a low, pumping “thump” and an omnidirectional flow. It travels in every direction possible. That is why you hear it through your neighbour's wall the most. It is also because low-frequency waves are much longer than high-frequency waves. So in the case of a distant explosion, you are more likely to hear the low rumble rather than the full-frequency blast.
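
As a rough sketch of why low frequencies behave this way, here are the period (T = 1/f) and wavelength (lambda = v/f) relationships in a few lines of Python, assuming a speed of sound in air of about 343 m/s:

    SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C; an assumption for this example

    def period_s(freq_hz):
        return 1.0 / freq_hz  # the time of one complete cycle, in seconds

    def wavelength_m(freq_hz):
        return SPEED_OF_SOUND / freq_hz  # lambda = v / f

    for f in (50, 1000, 10000):
        print(f, round(period_s(f), 5), round(wavelength_m(f), 3))
    # 50 Hz gives a ~6.9 m wave; 10 kHz only ~3.4 cm. Low frequencies really are much longer.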


 

 From the acoustics point of view, it is quite hard to control low frequencies. The most common solution is installing so-called “bass traps” at the end of your room, often in the corners. Their job is to absorb and dampen the low-frequency nightmare.

On the other hand, you have high frequencies. A scream of a small child will be much more directional, shorter and easier to control. From the acoustics point of view of course.

High frequencies travel in short waves, and installing a few diffusers and absorbers will do a decent job of stopping them. High frequencies can also add ‘air’ or ‘breath’ into the mix, but more often than not a nasty sibilance will drive you mad.

We also have the mid-range: the middle frequencies. These are the frequencies our ears are most sensitive to. That's why all the instruments and vocals will fight for a place in your mid-range. I will talk more about mid frequencies when we get to the EQ overview. For now, just remember that a clashing mid-range will “muddle” your mix. I know, it's a super scientific term to describe it.

Ok, so when we talk about frequency, we usually talk about ranges. We tend to divide frequencies into low, mid and high ranges. It helps to know these guidelines when we get to EQ.

The human ear can detect a lot of frequencies. Our listening device is so sensitive that we can detect a frequency difference of just 2Hz. I'm talking here about people trained in music, but most of us can still detect small frequency changes.

To generalise, our frequency range is around 20Hz to 20kHz.

That is in our prime age, too. As we get older, we detect fewer and fewer high frequencies, so the top of the range can fall to 17kHz or lower. That's why it is important to always take care of your hearing: take breaks and wear earplugs when necessary.

That doesn't only apply when you work with live sound. Working as a re-recording mixer for twenty or thirty years will take a toll on your hearing too. Do you know how many times I have had a conversation with a mixer that went something like this:

“What do you mean it’s saturating? I can’t hear anything there!”

So yeah, it can be quite interesting.

Frequencies below our range of hearing (20Hz) we call infrasound. Using special devices, scientists can detect geophysical changes and monitor the activity of volcanoes, earthquakes or avalanches.

Frequencies above our hearing range (20kHz) we call ultrasound. You may recognise the name from pregnancy or other medical tests.

High frequencies can create an image of the organs inside our body, or an image of a baby, using a sonogram. Sonar in submarines also uses ultrasound to detect underwater objects: it sends off a signal that bounces off anything that interrupts its travel, just like bats do.

It's important to note that animals don't perceive sounds in the same way as we do. Elephants can go as low as 5Hz, dogs detect sounds from 50Hz to 45kHz, and cats can reach around 85kHz. Other animals can go extremely high, such as bats (120kHz) or dolphins (200kHz). In contrast, blue whales are known to use infrasound to communicate over long distances underwater.

It must be quite handy for them as sound travels much faster in water too.


 

Ok, let's go back to the differences between frequencies. As you know, it's quite rare to hear a single frequency. Most sounds are made of low, medium and high frequencies, and they are all different.

Some frequencies, when played together, sound nice; others can be a cacophony. These relationships are the basis of musical systems and musical intervals.

Nice-sounding frequency combinations are called consonant; horrible ones we call dissonant.

Let’s have a look at the ratios and frequency relationships in music intervals.

Octave – 2:1 – 512Hz/256Hz

Third – 5:4 – 320Hz/256Hz

Fourth – 4:3 – 341Hz/256Hz

Fifth – 3:2 – 384Hz/256Hz
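
These ratios are easy to play with in code. A small sketch that reproduces the table above using just-intonation ratios over the 256Hz root:

    from fractions import Fraction

    ROOT = 256  # Hz, the reference note used in the examples above

    INTERVALS = {
        "octave": Fraction(2, 1),
        "third":  Fraction(5, 4),
        "fourth": Fraction(4, 3),
        "fifth":  Fraction(3, 2),
    }

    for name, ratio in INTERVALS.items():
        print(f"{name}: {float(ROOT * ratio):.1f} Hz")
    # octave: 512.0 Hz, third: 320.0 Hz, fourth: 341.3 Hz, fifth: 384.0 Hz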

So as you can see two sound waves played at the same time can create a pleasant sound.

Not just intervals, but chords, solos and music scales are all built on frequency relationships. You don’t need to know precise frequencies to play the piano, but the knowledge becomes handy when you want to record and mix it.

Here is an awesome tool that lets you input your pitch data and shows you the exact frequency of that note:

Pitch to frequency calculator
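
A calculator like that essentially applies the standard equal-temperament formula, f = 440 x 2^(n/12), where n is the number of semitones away from A4. A minimal sketch:

    NOTE_OFFSETS = {"C": -9, "C#": -8, "D": -7, "D#": -6, "E": -5, "F": -4,
                    "F#": -3, "G": -2, "G#": -1, "A": 0, "A#": 1, "B": 2}

    def pitch_to_freq(note, octave, a4=440.0):
        """Equal-temperament frequency from a note name and octave number."""
        semitones = NOTE_OFFSETS[note] + 12 * (octave - 4)  # distance from A4
        return a4 * 2 ** (semitones / 12)

    print(round(pitch_to_freq("A", 4)))     # 440
    print(round(pitch_to_freq("C", 4), 1))  # 261.6, "middle C"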

Before we finish, let’s quickly look at the other characteristic of a sound wave – its power. Decibel or dB is a unit used to measure the intensity of a sound (sound pressure level – SPL).

0dB is near silence, the quietest audible sound.

A 3dB increase will double the power of the signal; a 3dB decrease will cut it in half.

We can describe the levels of signal with a power of ten.

10dB is 10 times as powerful as 0dB.

20dB is 100 times more powerful than 0dB.

30dB is 1000 times more powerful than 0dB.

A whisper will be around 15dB, but normal chitchat around 60dB. A jet engine we can describe with around 120dB, a gunshot with around 140dB.

All of this we measure as SPL, sound pressure level. SPL is the acoustic pressure built up within a defined atmospheric area. Moving away from the sound source and doubling the distance will reduce the level of the signal by 6dB; moving closer and halving the distance will increase it by 6dB.
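
Both rules of thumb, the powers of ten and the 6dB per doubling of distance, fit in a few lines. The distance formula below assumes a free field and a point source, which real rooms only approximate:

    import math

    def db_to_power_ratio(db):
        return 10 ** (db / 10)  # +10 dB is 10x the power; +3 dB is roughly 2x

    def spl_at_distance(spl_ref, d_ref, d):
        """Inverse-square estimate: the level drops ~6 dB per doubling of distance."""
        return spl_ref - 20 * math.log10(d / d_ref)

    print(round(db_to_power_ratio(3), 2))        # ~2.0: a 3 dB rise doubles the power
    print(round(spl_at_distance(100, 1, 2), 1))  # 100 dB SPL at 1 m -> ~94 dB at 2 m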

I will go into more depth on sound pressure and waveform characteristics in future articles. Just like with the rest of these topics, the basics are important; you can easily find deeper analysis of them on the Internet.


What is Mixing?

06 APRIL 2017

written by Mike

WHAT IS MIXING?

 

Why does every audio engineer want to be a mixer?

And why do people get Oscars for it?

Sound mixing is the art of combining many sounds into one. There is a simple metaphor that may explain it better.

A lot of mixers say it is like cooking: you add different ingredients to create the perfect dish. Of course, add too much or too little of something and your mix is not as tasty as you wanted it to be.

For many sound engineers becoming a mixer is the Holy Grail. The best of the best are legends in the industry with a lot of money and prestigious awards.

The reason sound mixing is a respected skill is that it requires a lot of technical knowledge, as well as good hearing, a subtle touch, a creative mind and, of course, a lot of experience.

Doing a good mix for your YouTube video or another small project is not going to need that much dedication. But it is important to have fundamental knowledge of the craft. There are many different kinds of audio mixing, such as music mixing, dynamic game mixing or live mixing. In this segment, I am going to focus on linear movie mixing and show how to approach it in your project.

 

TOOLS

 

The days of analog mixing are pretty much gone. And yes, there are still people who will fight for it, but the world has gone digital, and the art of mixing sound followed.

Your basic tools for mixing will be a good computer, the software of your choice and maybe a control surface that acts as a mixing desk.

The control surface will usually not affect your sound at all; all the processing happens inside the software. Its sets of faders and knobs correspond to controls in your program and make the process much easier than working with a mouse and keyboard.

The software of your choice can be anything that you feel comfortable working with, especially if you are using third-party plugins. Remember, the most important tools are your hands and your ears. Well, your eyes too, as a lot of the work happens on the computer screen.

 


GROUND WORK

 

Preparation is everything. As a mixer, you will work with a client, be it a director, a producer or an independent filmmaker.

There will be someone looking over your shoulder. That is exactly why good communication from the beginning is important. It will help you decide on the right approach and the right tools for the project.

Will it be a loud action movie?

Maybe it is a subtle drama where the sound drives the story?

Or maybe its purpose is to be in the background, a delicate soundscape.

Sometimes you will have a different idea than your client. Hence, a healthy conversation is important as both of you want to do the best job possible. There is also the other side of preparation that sometimes gets forgotten.

A good exchange of information with the sound engineer and the sound editor is important and can make a lot of difference.

Try to develop a healthy dialogue between you and the other people on the team. Do that at the early stage of the project and you will receive the sounds just the way you like them.

And it will make the whole process much more enjoyable.

 

TECHNIQUES

 

Techniques of mixing are a topic for a large book, and everyone has their opinion on the subject.

But it is important to understand the fundamentals.

Volume, panning, EQ, and compression are your basic tools when it comes to mixing.

Volume control

is important as you will have to decide which sounds will take the priority over the others. Loud action scenes can be great, but sometimes a moment of silence can have an even bigger impact.

Panning

stands for panorama, and it means placing the sounds around you. A dog's barking may come from the left; the sound of a helicopter from above your head. And the main character stays in the center.
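
One common recipe for placing a sound between two speakers is a constant-power pan law. The sketch below is one way to do it, not necessarily how your DAW of choice does:

    import math

    def pan_gains(pos):
        """Constant-power pan: pos runs from -1.0 (hard left) to +1.0 (hard right)."""
        angle = (pos + 1) * math.pi / 4  # map the position to 0 .. pi/2
        return math.cos(angle), math.sin(angle)  # (left gain, right gain)

    left, right = pan_gains(0.0)  # dead center
    print(round(left, 3), round(right, 3))  # 0.707 0.707, i.e. -3 dB on each side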

EQ

represents equalization, and it is a sound frequency tool. Each sound has a frequency spectrum that you can adjust to your liking.

Does a guitar have too much low end? You can cut it out from its spectrum and create space for other sounds.

Compression

we can explain as making quiet sounds louder and loud sounds quieter. It is the most important tool when it comes to controlling the dynamics of your mix. And it can take years before you grasp the value of compression.
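
The core idea can be sketched as a static gain curve. Real compressors add attack and release smoothing on top; the threshold and ratio below are arbitrary example values, and the function only shows how the overshoot above the threshold gets scaled down:

    def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0):
        """Gain (in dB) that a simple downward compressor applies to a given level."""
        if level_db <= threshold_db:
            return 0.0  # below the threshold, the signal passes untouched
        over = level_db - threshold_db
        return (over / ratio) - over  # squash the overshoot by the ratio

    # A -8 dB input is 12 dB over a -20 dB threshold; at 4:1 only 3 dB of
    # that remains, so the compressor turns the signal down by 9 dB.
    print(compressor_gain_db(-8.0))  # -9.0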

PRINT

To print your mix means to record it into stems and masters.

So, for example, if you are creating a 5.1 surround mix, your master will be divided into six mono audio tracks: Left, Centre, Right, Left Surround, Right Surround and Sub (the low-frequency track).

Besides printing your full mix, it is also important to record the other stems, such as dialogue stems, music stems, effects stems, vocal stems and so on. These stems will be part of your final deliveries.

To talk about the art of sound mixing is like trying to explain techniques of painting. Everyone has a different style and approach, and you have to develop your own. It takes years to develop a great ear and sense of a great mix.

And the only way to do that is practice, practice, practice.


Sound Advice On Mix Stems

01 APRIL 2017

written by Mike

SOUND ADVICE

ON MIX STEMS

 

 

Let’s have a look at elements of the mix.

I will focus on movie mix stems, as that is what I know best. The final mixes and elements are called deliverables. In the past, people worked with physical film or tape; today we have digital files.

Deliverables for films tend to fall into two categories:

Domestic and International.

What is the difference between the two?

Domestic means a native-language production. So if a movie is produced in America or England, the domestic mixes will be in English. In the deliverable package, you can find the original Atmos mix, Atmos objects, 7.1, 5.1 and stereo mixes, dialogue stems, the DCP and so on. Anything that will go to a cinema, or to other releases such as TV.

International is a bit different.

The international material will go to the countries that dub their movies. If you are unsure what dubbing is, see my article that describes it: https://mikemigas.com/26653/

In the international package, you can find some extra audio stems such as options and helpers. They give you control over the original mix and make the dubbing process easier.

Below I explain some of these elements in detail.


STEMS

What are they?

Stems are sub-mixes: groups of similar audio mixed together, so that it is easier to change, fix and remix the material in the future. Stems tidy up a project and give you some control over it too. In movies, stems are used for dubbing and updates. In music, the most popular use is for remixes.

There can be a lot of different kinds of stems, but today let’s have a look at the most common ones.

FX STEM

Movies, games. Some crazy music maybe. Anything that you create during sound design will be in this stem. Explosions, spaceships, gunshots. Remember that the stems are already mixed. They will include EQ, compression, automation, and panning.

FOLEY STEM

Anything that was recorded as part of Foley will go into that stem.

Footsteps, cloth and door creaks will be in there. Sometimes production sound can end up in the Foley stem too, as it can be quite difficult to separate it from a live recording.

FX and Foley stems combined make up the Effects. I know it's quite confusing right now, but it will be easier to understand once we get to M&E.

MX STEM

Music. No effects, just the score. The music stem is great to listen to on its own, but I still prefer listening to the M&E.

M&E

Music and Effects. This stem combines music, FX and Foley. Everything is mixed and balanced and the only thing missing is dialogue. Why?

Simple answer – dubbing. When a movie is dubbed into, for example, German, the mixing engineers will take the German dialogue and the M&E and marry them together. Of course, the process is a bit more complicated than playing Tetris, but you can see what I mean.

DST STEM

Dialogues. An important part of the mix as it gets changed and updated most often.

Added lines, changed lines, and extra ADR will result in dialogue stem updates and fixes.

Remember that a dialogue stem can be in any language, not just the native one. German will have its own, Russian and French too. That makes updates easier, as the M&E will often stay the same.


OPTIONS/NEUTRALS

This kind of stem is specific to dubbing. Options will include any neutral sound that all countries can use. Examples?

Screams, fight noises, grunts, breaths, even some low background chatter.

So, for example, you are dubbing a movie into German. You don't want to pay the actor to redo all the grunts and efforts, so you get him or her just to record the German lines. All the rest of the neutral material will come from this options stem.

Options are handy, and they get used all the time.

HELPERS

Helper stems are just for guidance. Sometimes the M&E will have some background chatter, shouts or efforts mixed in. That means they cannot be removed, and they will stay in every dubbed mix.

What is the difference between Options and Helper?

With Options, you get a choice. Where your Italian or Spanish actor recorded a shout, you can choose whether to use the neutral one from the stem or the recorded one. With a shout in the M&E, you cannot.

Helpers show you what is in the M&E and help you avoid the mistake of doubling up. Imagine there is a shout in the M&E, but your dubbing actor also recorded one. If you don't mute the latter, the shouts will double up in the mix.

Helpers help you to avoid that.

VOCAL STEMS

If you are working on a musical, or anything that has a song, you will see a vocal stem. Vocals will not be included in the DST stem, as these are two separate elements.

OTHER STEMS

There may be more or fewer stems depending on the project. Vocal stems can be split into Choirs and Leads, DST into Crowd and Main stems. It is important to remember that when you combine all of them, you get the Full Mix.
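
That last rule can be written down directly. A sketch with NumPy and placeholder silent stems; the names and shapes are illustrative, not a delivery spec:

    import numpy as np

    length, channels = 48000, 6  # one second of 5.1 audio at 48 kHz, for illustration
    dst   = np.zeros((length, channels))  # dialogue stem
    mx    = np.zeros((length, channels))  # music stem
    fx    = np.zeros((length, channels))  # effects stem
    foley = np.zeros((length, channels))  # Foley stem

    m_and_e  = mx + fx + foley  # everything except dialogue
    full_mix = m_and_e + dst    # combining all the stems recreates the Full Mix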

So that is it. I hope this helps you understand the elements of the mix a bit better, and that next time you are working on a new project you will see straight away what the deal is with all these crazy audio tracks.


Sound Advice On Mix Formats

29 MARCH 2017

written by Mike

SOUND ADVICE

ON MIX FORMATS

 

When we think about sound, we tend to see it as music in our headphones and movies on our laptops. Maybe a surround sound setup at home if you are a grown-up. It's only when you start to dig deeper into the world of sound engineering that you discover there is a lot more to standard formats and mixes.

Today I will share with you some advice and an overview of the most popular formats, mixes and mix elements.

It is important to recognise them and to understand why we use one over another. This is only a surface scratch: your area of expertise and choice of focus will require learning more about some formats, while others may be useless to you.

But like with everything else it is good to know the fundamentals, so you have something to talk about with other audio nerds. Excuse me, audio engineers.

Ok, so where is this stuff used?

Everywhere. I mean TV, radio, cinema, games, the Internet and everything in between. And the best thing is, every platform requires different formats and conversions.

Fun!

No, not really. It requires a lot of technical knowledge and a lot of work.

Let’s start with the basics.


Mono

It is one signal. When you record your voice with a microphone, it’s a mono signal.

You can pan it (move it) to the left or the right, but it's still going to be just one signal. If you duplicate it and pan one copy to the left and the other to the right, that's not going to make it stereo; it will only make it louder (by about 3dB, that is, double the sound intensity).

So remember: while a mono signal can be fed through a pair of headphone speakers, that won't make it stereo. It is still the same single signal.

Stereo

Stereo means having two input signals. For example, if you are recording a guitar with two microphones, it will be stereo. One signal goes to the left, the other to the right. This is only the simplest description, because the subject of stereophonic sound is very deep.

For example, there is a difference between stereo and true stereo, which can apply to convolution reverbs. In true stereo, two input signals are split into four and spread to the L/R outputs.

There can be other types such as mono to stereo, joint stereo, intensity stereo, mid/side stereo.

For now let's keep it simple. Two input signals, two output signals. Stereo.

LtRt

LtRt, meaning Left Total/Right Total, is another two-track element with Left and Right audio, but encoded with a Dolby Surround matrix.

What does it mean?

In normal stereo, you get two outputs, left and right. We call that mix Lo/Ro. In LtRt, you have four outputs: L C R S (left, center, right, mono surround).

The four tracks are “downmixed”, or encoded, into two tracks (left and right), but they can be played back as mono, stereo or LCRS. You will need special software plugins or dedicated hardware to encode/decode this mix.
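
As a rough idea of what such a matrix does, here is a simplified sketch. Real Dolby encoders also apply a 90-degree phase shift to the surround channel, which this example skips, so treat it as an illustration only:

    import numpy as np

    def encode_ltrt(l, c, r, s):
        """Fold LCRS into two totals; 0.707 is the usual -3 dB coefficient."""
        lt = l + 0.707 * c - 0.707 * s
        rt = r + 0.707 * c + 0.707 * s
        return lt, rt

    # A centered voice lands equally in both totals, while the surround ends up
    # out of phase between them, which is what the decoder looks for.
    lt, rt = encode_ltrt(np.zeros(4), np.ones(4), np.zeros(4), np.zeros(4))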

It’s not often used, but clients still ask for it.

Why?

Some old venues that can't handle standard 5.1 may need it, and airline playback and TV can use it too.

R128 EBU

R128 is the latest European Broadcasting Union recommendation for volume and peak levels, used for TV broadcast. In post-production, you may be asked to produce both 5.1 R128 and 2.0 R128 mixes, both set to standardised rules.

What rules?

LUFS, which stands for Loudness Units relative to Full Scale, must be set to -23 (with +/- 1 of margin), with the peak set to -10dB. For feature films, the peak is set to -3dB. ViSLM is a great tool to measure and correct these settings.

We build R128 mixes from separate audio stems.
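
Checking a print against the target can also be scripted. A sketch assuming the third-party soundfile and pyloudnorm packages and a hypothetical file name:

    import soundfile as sf        # pip install soundfile pyloudnorm
    import pyloudnorm as pyln

    data, rate = sf.read("mix_20_r128.wav")     # hypothetical 2.0 R128 print
    meter = pyln.Meter(rate)                    # BS.1770 meter, which R128 builds on
    loudness = meter.integrated_loudness(data)  # integrated loudness in LUFS

    if abs(loudness - (-23.0)) > 1.0:           # target -23 with +/- 1 of margin
        print(f"Out of spec: {loudness:.1f} LUFS")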


5.1

5.1 is now the common name for surround sound. The signal is split into six channels, L C R Ls Rs LFE (left, center, right, left surround, right surround, sub), and it is the most common setup in home cinemas.

Most DVD and Blu-ray systems will decode a TV broadcast, clips from YouTube or audio from your phone into surround. But it is only when you listen to a true surround mix that you can appreciate the art of it.

Games and Blu-ray movies will always have a true surround mix, and that's why they sound better than a TV broadcast or streamed shows. There is much less compression on these discs.

7.1

7.1 is another type of surround sound. The difference between 5.1 and 7.1 is two extra channels of audio. The signal is split into L C R Ls Rs Lsr Rsr LFE (left, center, right, left surround, right surround, left rear surround, right rear surround, sub).

7.1 mixes are much less popular than 5.1 mixes, and movies get a 7.1 cinema release only in selected countries. It depends on the cinema's sound setup.

A 5.1 mix can be derived from a 7.1 mix, and that is what mixers still do on most dubbing stages.
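
One simple way to express that fold-down is to sum each rear surround into its side surround. Actual dubbing stages may apply gain offsets and limiting, so this is only a sketch:

    import numpy as np

    def fold_71_to_51(ch):
        """ch maps 7.1 channel names to equal-length NumPy arrays of samples."""
        return {
            "L": ch["L"], "C": ch["C"], "R": ch["R"],
            "Ls": ch["Ls"] + ch["Lsr"],  # side + rear left surrounds
            "Rs": ch["Rs"] + ch["Rsr"],  # side + rear right surrounds
            "LFE": ch["LFE"],
        }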

Atmos

Atmos is the newest format from Dolby. It is still unknown to the general public, and only around 2000 cinemas in the world are capable of playing an Atmos mix at full scale.

What is it?

It's quite difficult to explain in words; you need to experience a demo or a movie mixed in Atmos to understand the idea behind this new system. The Atmos system divides the mix into objects. It has up to 128 separate channels that the system sends to up to 64 speakers.

I am only familiar with Atmos mixes for movies so take my explanation of the system as only a small part of the technology.

The industry is leaning towards mixing movies in Atmos and then deriving everything else from it. That means 7.1/5.1/stereo and everything in between will be a “downmix” from Atmos.

Some mixers still hesitate and prefer to mix in 7.1 and only then do a quick Atmos pass. It is quite an expensive venture and not as effective. Movies you watch in Atmos will have sound effects flying all around you: rain coming from above, shouts from the left, explosions from behind.

It is quite something. I may write an article dedicated just to Atmos.

For now, just remember – speakers all around you mean Atmos.

Binaural

Binaural recording gives you the sense of being “in the room”.

What does it mean, and how can you achieve it?

And how do you experience it?

Let’s start with the last. Binaural is a headphone mix. I mean you can sort of experience it with surround sound, but headphones are the way to go.

Imagine a band standing in a circle with you inside it. That's what binaural is. It gives you the sense of sounds around you, moving as they pass you.


How To Use Your Ears

09 MARCH 2017

written by Mike

HOW TO USE YOUR EARS?

 

I know that we sound engineers are proud of our listening devices (ears), and we are keen to offer listening advice to other people. I guess it comes with the job. One of my tutors, an old-school guy, said that he was worried about us.

He said that the new generation of producers, composers and engineers tends to spend a lot of time looking at the computer screen.

And it can distract us from the “real” job.

He told us that we should approach sound just as an artist looks at a painting: he or she needs to step back to appreciate it at full scale. I agree with his ideas, but on the other hand I thought to myself, “I like looking at the screen too!”

Where is the correct answer then?

Just like with sound, in balance.

For a period, I was responsible for quality control (QC) of mixes in a post production department. I had a team of people under my supervision, and our job was to listen to stuff that came out of dubbing theatres. And to find any errors and blunders.

The way we worked was with the picture on one screen and the edit window on the other. I would listen for mistakes but also watch the waveforms for possible errors. After a while, I found myself catching most of the errors by looking. I could recognise a click, a drop-out or a reverb cut.

It reminded me of the scene from The Matrix where the guy says he doesn't see the numbers anymore.

Once, my boss asked me what I thought about the idea of QC without looking at the screens.

“Oh dear…” I thought to myself.

His suggestion was inspired by the “old” school. That was how they learnt back then.

“Mhmmm... ok... I mean, I look at waveforms as helpful guidance, but let's ask the rest of the team at the next meeting. Let's see what they say.”

I tried to be as diplomatic as I could.

To cut to the chase, we had our meeting and the idea was dropped in a flash. The truth is, we had adopted looking and listening as one. To us, breaking them apart would make our job harder and more prone to error.

  


 

LISTENING DURING RECORDING

 

Listening and recording at the same time is a tough cookie. It will be stressful, as you want to get it right the first time. And in live recording, you won't have the luxury of a second take.

What can you do to make the whole process stress-free?

Look at the big picture.

During recording, it’s easy to get caught up in details. You start to listen to individual elements and spend time on one thing while neglecting everything else. If you spend a whole day getting your drums to sound “right”, you will not have enough time to get great vocals.

Another classic example of losing the “big picture” is adding new elements. It is so easy to add another layer of bass or pad.

Just one more plugin. 

All you need is a bit of extra RAM and a good enough CPU. But too much is too much. It doesn't matter that you can run three hundred tracks in your DAW without any problems. Sometimes four tracks are enough.

Listen in balance.

Always do a quick mix during the recording to get a feel for the final product. A good monitor balance will make the whole process smooth, and it can help you make a decision on adding another element to the mix.

That lead melody sounds great on its own?

Listen to it in balance with everything else. Do you still need it?

Listening in context will help you to answer all these questions. It is also less stressful when you know that you won’t have to do as much “fix in the mix” stuff later on.

Make the headphone mix as good as possible.

There is a simple rule: if the headphones sound good, the musicians will sound good too. It's not only a practical matter. Yes, the drummer must hear the bass and vice versa; it helps. But it is also a psychological trick. If the recording already sounds pretty good, just imagine how awesome the final mix will be!

A listening job is not just for the engineer; correct balance during overdubbing is crucial too.

And a few more quick tips:

Headphone bleed can be a pain. Invert the polarity to cut it off.

A vocalist needs good headphone balance to pitch with the tracks. Sometimes one ear off during the recording can help.

You can affect the vocalist's pitch with simple foldback tricks.

If the singer is flat, turn the vocal monitor down in the headphones.

If the singer is sharp, turn the vocal monitor up in the headphones.

Moving a bass track a few milliseconds back or forth can help to find a groove “on the go.” 

 


 

LISTENING DURING EDITING

 

Editing comes before mixing. You don’t need to be in a dubbing theatre to do a good editing job (it would be nice though!). When you edit, you prepare sounds for a mix. That is why how you listen to them is relevant too.

Watch that screen.

I know, I have just talked about the relevance of listening, and the first point is to look at the screen. Why? Well, with editing looking at the screen is as important as listening. You need to see where to cut the audio, where to move it.

If you need to draw out some clicks, zooming in on waveforms helps.

I would say it’s 50-50 for me. It’s a draw.

A quiet environment is essential, but headphones are ok too.

Working in a peaceful environment is great, but with editing you don't need to be as strict. I mean, yes, it will be hard to hear lip smacks or clicks if you are working near a building site. But normal house conditions will be all right.

Worst case, you can also do a little editing on your headphones.

It’s not ideal, but it’s better than nothing.

Listen in context.

We editors tend to dwell on small stuff. Everything needs to be perfect! Guess what. You won’t be able to hear most of that stuff in the full mix.

Listen in context when possible. Do a little temp mix. Balance the tracks so they resemble the final mix. It will give you an idea of what you should focus on.

What does the mixer want?

And the last one. And if only I knew the answer.

Jokes aside, the communication between the parties is necessary. Maybe the mixer doesn’t care about your fades, but they want you to color the clips. Maybe they don’t need your work with clip gain at all.

Find out what they need in the session. Do it at the beginning. It can save you a lot of time.

The last point is still about listening. To the other person.

LISTENING DURING MIXING

 

Mixing sounds together is all about listening, right? You can just go with the feel of the moment.

Yes and no.

Mixing is an art; I do agree. Every mixer works in a different way. But there are a few helpful tips that I want to suggest.

Listen on different systems.

So you got your expensive monitors, you soundproofed your bedroom.

Are you sitting in the perfect listening position?

Good, but guess what. No one is going to listen to your mix in that way. Most people that will hear your work will be listening on their phones, in their cars or at home while doing the dishes.

Buy a pair of cheap USB speakers; listen to your mix from your phone, with and without headphones. Play it on your laptop. If you want your work to be good, it needs to sound great through all these systems. If you can only appreciate it on a high-end studio monitor, well, you have got a problem.

One more thing.

Mixers from the older generation will sometimes say, “no one mixes on headphones!” That may be true, but everyone listens on headphones. Have a couple of different pairs ready. Test your mixes on them.

Listen in different environments.

You got the previous point; that's great. Now it is time to shake things up. Go and listen to the mix in a car. Go outside your room and listen through the door. Take your phone with you to the gym and listen while you work out.

These are only a few ideas; try to come up with other weird scenarios. Just think where other people listen to stuff.

Read a book and listen at the same time?

It won’t hurt to try.

Turn the volume down.

Everyone wants to hear their work loud, on the biggest speakers. And yes it can be good if you want to EQ some stuff.

But to analyse the balance, it is best to turn the volume down. My sound production teacher used to say, “If it sounds good at a low level, then it's a start.”

The other good thing about low volume is that your room acoustics won't play such a big role. And that is an important point for all the bedroom mixers out there. Smaller monitors and a lower level; you can't go wrong with that.

Listen musically and sonically.

So I understand it in this way. When I want to listen to my mix musically, I close my eyes, and I try to hear it as a whole. I may not pick up the details, but I will hear if something is off balance.

Try it for yourself. Computer screens tend to lie when it comes to sound. A second method is sonic listening. Bring up your meters and frequency analysers.

Is everything in balance? Spectrum looking good?

Use your eyes, ears and mouse. Make all these plugins work. I try to jump back and forth between these two approaches. What you can't hear, you will see. And hopefully, vice versa.

Have a reference material ready.

We all have mixes that inspire us. All-time favourites. The ones we want to copy. Have them near you. When in doubt about your work, put your cherished piece on.

In a heartbeat, you will know where you need to improve.

Watch that bass.

The unloved child.

Why do you sound so good in my mixing room, but when I play you on a TV system you betray me?

Why?

Get the bass under control. It’s easier said than done, but with some basic acoustic treatment, you should be all right. After a while, you will learn your speakers, and you will know how to tame the beast. Also, there is nothing better than a clear and strong bass in your mix.

  

LISTENING DURING QUALITY CONTROL

 

So, as I mentioned before, I used to be responsible for Quality Control in a post house. The listening there is somewhat different, as you only focus on mistakes. Even if a particular mix is rubbish, it's not your problem. You are there to point out blunders only.

Know the guidelines.

Before you start work, you need to know what you are looking for.

What are the most common mistakes? What gets fixed and what doesn’t?

Every project will be different, but once you know the protocol you are off to a better start.

Watch (sync) and listen.

With quality control, at least in movies, you need to both listen to the mix and watch the screen. Why?

Mistakes such as sounds out of sync, missing Foley or low dialogue are harder to spot if you are just listening. If there is a missing sound effect for a dog, the only way you will notice it is if you see the dog on the screen. It can get a little tricky, because you will also need to watch the waveforms, as some things you won't necessarily hear.

Example?

Let’s say there is a two-second drop out in the left surround channel. When you listen to a loud mix blasting from L-C-R, you won’t hear the error. The only way to spot it is to see it on the screen. So, yeah.

You need to watch waveforms on one screen, the picture on the other and also listen to the mix. At the same time.
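
That kind of drop-out is exactly what a script can flag for you. A sketch with NumPy that scans a single channel for runs of near-silence; the threshold and minimum length are arbitrary choices you would tune per project:

    import numpy as np

    def find_dropouts(channel, rate, threshold=1e-4, min_len_s=0.5):
        """Return the start times (in seconds) of silent runs in one channel."""
        quiet = np.abs(channel) < threshold
        starts, run = [], 0
        for i, q in enumerate(quiet):
            run = run + 1 if q else 0
            if run == int(min_len_s * rate):  # the run just reached the minimum length
                starts.append((i - run + 1) / rate)
        return starts

    # e.g. find_dropouts(left_surround, 48000) -> a list of timestamps to audition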

Don’t obsess about the small stuff, listen to the Final Mix.

“There is a little sound click in left surround, audible when you play the isolated channel at half speed.”

Don't be that guy. Believe me, that was me when I started. I felt like a hero for noting these little flaws, until people much smarter than I am put me in my place. Try to see the big picture.

Can you hear this minor issue in a final mix?

Will fixing it make the product better?

No?

Then it is probably ok. After a while, you will learn what is and what isn’t relevant.

  

 

LISTENING DURING DELIVERABLES

  

Ok, so to finish I have something unusual: listening during deliverables.

What? Why would you listen to stuff when you are just prepping files?

Try to see it as the last stage of Quality Control. And it is better to be safe than sorry, especially once you have clicked the “send” button.

What do you deliver?

Is it a 5.1 mix? Stereo? Or just the stems?

Have a quick listen and make sure you included the right material. A good practice is to spot-check the mix. Play a few sections at random.

Listen to different elements and individual channels.

Check the beginning and the end.

Check the fade in at the beginning and fade out at the end. When you work with a lot of data on a daily basis, it is easy to cut something short by mistake.

If the start and the end are both in the right place, the rest should be all right too.

Unusual places to check.

What are the unusual places you can check?

Maybe listen to LFE in isolation. Or check the overlaps between the reels of a movie. These are the places where mistakes happen often.

Why?

It’s because no one checks them. Every project will have an extra element or something different. Don’t forget to listen to these exceptions too.

Is everything in sync?

Make sure that all the tracks and all the stems are in sync with each other. Downmixes tend to have an induced delay, so make sure you move them back into the right place. An unfortunate mouse click can knock the whole mix out of sync, so make sure that everything is intact before you send.
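
A quick way to sanity-check that delay is to cross-correlate the downmix against the reference mix. A sketch with NumPy, meant for a short mono section rather than a whole movie, since np.correlate grows quadratically with length:

    import numpy as np

    def estimate_offset_ms(reference, downmix, rate):
        """Estimate the downmix delay against the reference, in milliseconds.
        Positive means the downmix lags and needs moving earlier."""
        corr = np.correlate(downmix, reference, mode="full")
        lag = int(np.argmax(corr)) - (len(reference) - 1)
        return 1000.0 * lag / rate

    # Anything more than a fraction of a video frame is worth investigating.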

All right, that’s it for today!

I hope you now have a better understanding of how to use your ears at the different stages of sound engineering.

And remember: take care of your listening devices. You only have one pair!
