Intro to Microphones


20 MAY 2017

written by Mike




In this article, I want to share an overview of recording microphones. Microphones, or mics, are the basic instruments for capturing sound. You have one inside your laptop, your phone and your camera. Even in a smartwatch.

Before we dig deeper into microphone placement techniques and various recording tips, you need to learn more about mics.

What is a condenser? Dynamic?

When you work with professional and experienced sound engineers, you will learn that choosing the right microphone is the number one thing on the list. Every audio professional will have their preferred mic, and a least favourite too. Whatever you want to do, you will need to understand the basic characteristics of a recording microphone. So, let’s start with that.

When it comes to elemental features of a mic, three things matter.

The transducer, the frequency response and the directionality.

I. Transducer

A transducer in a microphone transforms acoustic energy (e.g. your voice) into electrical energy. How a microphone registers sound depends on the type of transducer. The two main ones are dynamic and condenser.


Dynamic microphones are robust and quite cheap to build.

So how do they work?

A dynamic mic operates as a small electrical generator built from a diaphragm, a voice coil and a magnet. Let’s say you are recording yourself for a YouTube channel. The force of your voice, as a sound wave, makes the diaphragm vibrate. The diaphragm is a thin membrane hidden behind the microphone’s metallic mesh.

At the rear of the diaphragm is the voice coil, a coil of wire, which vibrates with it. A small magnet forms a magnetic field around that wire. Physics. The movement of the coil within the magnetic field generates an electrical signal that corresponds to the force of your voice. Because dynamic microphones can survive the toughest environments, they are the number one choice for live performance.

It is almost impossible to overload a dynamic microphone. Good examples are the Shure SM58, which we use for live sound, and the Shure SM57, another classic and cheap microphone. If you don’t know which one to buy, get the SM57. It will do the job.

Shure SM7B is a classic dynamic microphone used by sports commentators and radio presenters.

Have you ever wondered how it is possible that they shout their heads off and the sound stays clear?

In most cases, the Shure SM7B is the answer. My favourite dynamic microphone would be the Beyerdynamic M201, a smooth-sounding mic that works great on a snare but also on some louder singers.






Condenser microphones are a bit more complicated than dynamics, more sensitive and more expensive (well, it depends…).

The basics of a condenser mic lie in a capacitor.

The force of your voice resonates a thin metal or metal-coated membrane that sits in front of a rigid backplate. The gap between the two changes as the membrane vibrates, and that motion produces an electrical signal.

Now, the biggest difference between a dynamic and a condenser is that the latter requires additional power to run. There are two ways to power up your condenser microphone.

The first is with batteries; the second we call phantom power. Phantom power runs through the microphone cable from the interface, e.g. a mixing desk or audio interface. Condenser microphones are sensitive and delicate, and they also produce more noise than their dynamic siblings. Their maximum sound level specification means that if you shout into a condenser, there is a high probability that the recording will distort.

Good condensers are great at capturing a wide dynamic and frequency range. Try recording an acoustic guitar with a condenser and then with a dynamic microphone: you will hear the condenser capture the smallest nuances and movements of the guitar.

A classic pair of condenser mics would be AKG 414. Sound engineers often use them as overheads for drums and choirs.

The Neumann U87 is a classic studio microphone used for vocals, and a first choice for ADR recording and dubbing. Recording sound on set also requires the sensitivity of a condenser. Microphones such as the Sennheiser MKH-416 combine the subtlety of a condenser transducer with the robustness of a dynamic microphone. Remember also to buy a pop shield and keep an eye on the noise level.

II. Frequency response

Frequency response is the reason every music producer, sound engineer or foley recordist has a preferred microphone. The transducer decides how the sound is captured; the frequency response decides what is captured.

Let’s say you recorded your dog. If your recording sounds 100% the same as your dog in real life, it means that the microphone you used has a flat frequency response: it didn’t change the sound. Microphones with a flat response are used for measuring the acoustics of a space and can be quite expensive. Also, you don’t want to use them on your recordings.

Why not?

Well, the sound of a microphone can make your recording better. It can add depth and warmth. It can capture smooth low frequencies or sharp high frequencies. It can omit frequencies that you don’t want. Some microphones will add punch to your drums or presence to vocals. Other times you may wish to use a microphone with a detailed response.

You don’t want to omit anything when recording a wide-frequency instrument such as a piano. Before using a microphone, check its frequency response and its intended use. It’s also good to experiment with different settings.




III. Directionality

The last one on our list is directionality.

Directionality describes the most sensitive side of a microphone. Polar patterns describe how a microphone will pick up sound and what the best position for it is. There are quite a few polar patterns to choose from, but today I will focus on the three main ones.


An omnidirectional microphone will register sound at all angles; its polar pattern covers 360 degrees. It will pick up sound from the back as well as from the front, with the same intensity. This polar pattern is great if you want to capture the ambience of a place, something like the inside of a cave.

Another use is to leave an omni in the room as a so-called ambient mic. You can then add this additional layer to your mix later on.


As you probably guessed, unidirectional microphones register sounds from one particular direction more than from others. The most popular is the cardioid, a heart-shaped polar pattern. It picks up less ambient sound than an omnidirectional microphone, and it works great when you want focus.

For example, if you wish to record dialogue on set, you don’t want to capture the technical crew chatting in the corner. Unidirectional microphones are made for this kind of job.


Bidirectional microphones are sensitive at the front and back but reject sound from their sides. They are great for vocal duets and for stereo recording techniques such as mid-side (M-S).

This polar pattern is used when you want to dismiss unwanted sources of sound. As I mentioned before these are helpful on movie sets, during live music recordings or any environment with more than one sound source. Correct microphone placement is a skill in itself, and I will share with you some advice on that in another article.
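As a side note on mid-side: an M-S pair is turned into an ordinary left/right pair with a simple sum and difference. A minimal sketch (some implementations also scale the result by 0.5):

```python
def ms_to_lr(mid, side):
    """Decode one mid-side sample pair to left/right.

    The left channel gets mid plus side, the right gets mid minus side,
    so a sound with no side component sits dead centre."""
    return mid + side, mid - side

ms_to_lr(0.5, 0.0)   # centred source: equal in both channels
ms_to_lr(0.5, 0.25)  # side energy pushes the image towards the left
```

One nice property of recording M-S rather than plain left/right is that the stereo width can be changed after the fact, simply by scaling the side signal before decoding.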

To know your equipment is essential.

How does it all work, and why do you want to use it?

These are the questions that you need to ask yourself before making any decision. Microphones are everywhere. You don’t have to know all the details and technical specs of their build, but don’t be ignorant. When it comes to selecting the right gear, ignorance is not bliss.

Liked the article? Follow me! 🙂

Subscribe for the latest updates

Sound Recording Basics


04 MAY 2017

written by Mike




Throughout the years, capturing sound has evolved dramatically. From the phonograph to the microphone in a mobile phone. From analog to digital.

People still use analog recording, but I will focus on a few aspects of digital recording: the most common, cheapest and easiest method of capturing the sounds you need.

Sound recording can be a fun, exciting, hectic, tiresome, laborious and unforgiving gig. But with a few guidelines and some basic knowledge, the difference between an amateurish and a well-sounding production can be huge.

Just try to remember the last Internet video that you watched.

Was the picture quality good?

What about the sound?

How many times do you have to play with your volume control when switching between videos?

How noisy are the recordings of that famous vlogger you follow?

The forgotten art of quality sound recording is what separates a wannabe Internet star from a professional.




The simplest setup for recording sound would be a microphone, a cable or lead, and a sound recorder. Connect the microphone via the cable to the recorder and voila!

Of course, there is a lot more to it, and professional recording sessions are much more complicated. But the basic principles stay the same.

Let’s have a quick look at the basic three components of the setup.


Microphone

There are a lot of big, heavy books on microphones alone. But for a good understanding of the subject, we can distinguish two types of microphones: dynamic and condenser.

Condenser microphones are a bit more sensitive than dynamics. You can use them to record vocals in the studio, or wide-range instruments such as a piano or violin. DPA Microphones debunks some of the myths here.



Cable

The most common cables used to connect a microphone to a recorder are balanced cables with XLR connectors. They can carry the sound over a long distance without picking up unwanted noise.

USB cables that connect a microphone to the computer are also popular.

Sound Recorder

The subject of sound recorders is as wide as the sea, but just think about it for a second: anything that can capture sound is a sound recorder. A mobile phone is the most common one; a simple stereo recorder like the Zoom H4N can be handy too. At the professional end, there are many different kinds of sound recorders.

Small, portable ones we use for interviews; medium ones for recording dialogue on a movie set. Recorders from Sound Devices have a good reputation.

For a beginner, a simple, direct USB microphone will do, but even a basic setup through an audio interface will always get you superior quality.





The techniques of recording audio are an art in themselves. The choice of the correct microphone, its placement, and the recording levels and settings are only a few of the variables that a good sound engineer has to take into consideration. It is important to research the techniques others have used for the kind of recording you want to make.

Using an unusual placement or setup can lead to unexpected and often exciting results. Like using a “trash mic” for example. Every recording requires a different approach. It is important to have an open mind but also a good knowledge of basic procedures.

Have your standard setup in place and then another one as an experiment. And if you are just starting out, that will often be the case.


As with everything, experimenting and learning from mistakes is a great thing. But there are a few standard rules that you should apply if you want your recording to sound awesome.

Be wise when choosing the microphone

– it can make a great difference to the general sound of your recording.

Use intelligent microphone placement

– remember the last time you had to raise the volume to the maximum to listen to that famous vlogger? Or maybe you had to turn it right down? A microphone placed too far from, or too close to, the source is often to blame.

Know your set up

– microphone, cable and recorder. Using USB microphones is fine, but even the most basic audio interface connected to your computer will give much better results.

Know your volumes

– record too quietly and you will end up with a noisy recording once you turn the level up. Push the volume up, but record too loud and the distortion will ruin your work.

Always record more than you need

– you will have more options to choose from, and also a backup if something happens to the original recording.

Do a test and listen back to it

– this goes back to placement and the choice of equipment. It is always better to get it right at the beginning than to try to correct it later on.

Know your basics

– audio recording can be a complicated subject. The basic knowledge of recording, compression and EQ will make a big difference to your final project.

Have fun!

– Experiment and have fun with the process. The more you learn hands on, the better your projects will sound in the future.
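The "know your volumes" rule above can be put into numbers. A toy sketch of what happens at digital full scale (the ±1.0 range matches floating-point audio; the names are illustrative):

```python
def record(samples, gain):
    """Apply a gain, then hard-clip anything beyond digital full scale (±1.0)."""
    return [max(-1.0, min(1.0, s * gain)) for s in samples]

wave = [0.0, 0.4, 0.8, 0.4, 0.0, -0.4, -0.8, -0.4]

record(wave, 1.0)  # safe level: the wave passes through untouched
record(wave, 2.0)  # too hot: the 0.8 peaks hit the ceiling and distort
```

The flattened peaks are the distortion you hear when a recording clips; a quiet recording avoids that, but turning it up later also turns up the noise floor with it.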

It doesn’t matter if you are working on your Internet video channel, making a family holiday video or recording an interview at work. Follow these simple rules and each of your productions will turn out better in the end.

Next time when you watch something, focus on listening. Not only on music but also on dialogue and ambience.


Frequency of Sound


08 APRIL 2017

written by Mike




What is the first thing that comes to your mind when you think about sound waves?

How do they travel?

What are the lowest pressure levels humans can hear?

How can we surf them?

None of the above, and I’ve only tried surfing once, so I can’t advise on that. I’m going to give away the answer: it’s the hertz.

The hertz, named after the physicist Heinrich Rudolf Hertz, indicates how often particles vibrate when sound energy passes through a medium. A medium can be air, water, steel and so on. Remember that the vibrating particles themselves don’t travel; they pass the energy forward, sort of like an audience wave during a football match.

A hertz is a unit of vibration.

1 Hz = 1 vibration / 1 second

What you need to remember is that the particles will always vibrate at the same frequency. So, for example, you strike an awesome guitar note at 1000Hz. As the sound wave travels through the air, particles will interact with each other (creating compressions and rarefactions), but the frequency stays the same: 1000Hz.

And every particle along the way will vibrate at 1000Hz; the energy from the source will have the same frequency when it reaches your ear.

Easy to remember. It stays the same.

Now, one complete vibration is called a period, measured either peak to peak or trough to trough. The period indicates one complete cycle of vibration.
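Frequency and period are reciprocals, so the bookkeeping is a single division. A minimal sketch:

```python
def period_seconds(frequency_hz):
    """One complete cycle of a vibration takes 1/f seconds."""
    return 1.0 / frequency_hz

period_seconds(1000)  # the 1000 Hz guitar note: 0.001 s per cycle
period_seconds(1)     # 1 Hz = 1 vibration per second
```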

So when the Moon travels around the Earth, it completes a cycle, and around 27 days is the period of the Moon’s orbit. The next important thing to remember is the relationship between frequency, pitch and directionality.

High frequency will have a high pitch and will be more directional.

Low frequency will have a lower pitch and will be less directional.

So something like the bass in an EDM track can be below 200Hz, with a low, pumping “thump” and an omnidirectional spread: it travels in every direction possible. That is why it is what you hear most through your neighbour’s wall. It is also because low-frequency waves are much longer than high-frequency waves. So in the case of a distant explosion, you are more likely to hear the low rumble rather than the full-frequency blast.
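The "longer waves" point is easy to check: wavelength is the speed of sound divided by frequency. A minimal sketch, assuming roughly 343 m/s for sound in air:

```python
SPEED_OF_SOUND_AIR = 343.0  # metres per second, at around 20 degrees C

def wavelength_m(frequency_hz):
    """Wavelength in metres for a given frequency in air."""
    return SPEED_OF_SOUND_AIR / frequency_hz

wavelength_m(200)     # EDM bass: about 1.7 metres
wavelength_m(10000)   # high frequency: a few centimetres
```

A 200Hz wave is metres long, which is why it wraps around obstacles and walls, while a 10kHz wave is short enough for a small absorber panel to catch.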



 From the acoustics point of view, it is quite hard to control low frequencies. The most common solution is installing so-called “bass traps” at the end of your room, often in the corners. Their job is to absorb and dampen the low-frequency nightmare.

On the other hand, you have high frequencies. A scream of a small child will be much more directional, shorter and easier to control. From the acoustics point of view of course.

High frequencies travel as short waves, and installing a few diffusers and absorbers will do a decent job of stopping them. High frequencies can also add ‘air’ or ‘breath’ to a mix, but more often than not a nasty sibilance will drive you mad.

We also have the mid-range, the middle frequencies. These are the frequencies our ears are most sensitive to; that’s why all the instruments and vocals will fight for a place in your mid-range. I will talk more about mid frequencies when we get to the EQ overview. For now, just remember that a clashing mid-range will “muddle” your mix. I know, it’s a super scientific term.

OK, so when we talk about frequency, we always talk about ranges. We tend to divide frequencies into low, mid and high ranges. It helps to know these guidelines when we get to EQ.

The human ear can detect a lot of frequencies. Our listening device is so sensitive that we can detect a frequency difference of just 2Hz. I’m talking here about people trained in music, but most of us can still detect small frequency changes.

To generalise, our frequency range is around 20Hz to 20kHz.

That is in our prime, too. As we get older, we detect fewer and fewer high frequencies, so the top of the range can fall to 17kHz or less. That’s why it is important to always take care of your hearing, take breaks and wear earplugs when necessary.

That doesn’t only apply to working with live sound. Working as a re-recording mixer for twenty or thirty years will take a toll on your hearing too. Do you know how many times I’ve had a conversation with a mixer that went something like this:

“What do you mean it’s saturating? I can’t hear anything there!”

So yeah, it can be quite interesting.

Frequencies below our range of hearing (20Hz) we call infrasound. Using special devices, scientists can detect geophysical changes and monitor the activity of volcanoes, earthquakes and avalanches.

Frequencies above our hearing range (20kHz) we call ultrasound. You may recognise the name from pregnancy or other medical tests.

High frequencies can create an image of the internal organs of our body, or an image of a baby, using a sonogram. Sonar in submarines also uses ultrasound to detect underwater objects: it sends off a signal that bounces off anything that interrupts its travel, just as bats do.

It’s important to note that animals don’t perceive sound the same way we do. Elephants can go as low as 5Hz, dogs detect sounds from 50Hz to 45kHz, and cats can reach around 85kHz. Other animals can go extremely high, such as bats (120kHz) or dolphins (200kHz). In contrast, blue whales are known to use infrasound to communicate over long distances underwater.

It must be quite handy for them as sound travels much faster in water too.



Ok, let’s go back to differences between frequencies. As you know, it’s quite rare to hear a single frequency. Most sounds are made of low, medium and high frequencies and they are all different.

Some frequencies, when played together, sound nice; others can be a cacophony. These relationships are the basis of the music system and musical intervals.

Nice-sounding frequency combinations are called consonant; horrible ones we call dissonant.

Let’s have a look at the ratios and frequency relationships in music intervals.

Octave – 2:1 – 512Hz/256Hz

Third – 5:4 – 320Hz/256Hz

Fourth – 4:3 – 341Hz/256Hz

Fifth – 3:2 – 384Hz/256Hz
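The ratios above are plain multiplication against the 256Hz base; a quick sketch that reproduces the table:

```python
BASE_HZ = 256.0

INTERVALS = {          # interval name: frequency ratio
    "octave": (2, 1),
    "third":  (5, 4),
    "fourth": (4, 3),
    "fifth":  (3, 2),
}

for name, (top, bottom) in INTERVALS.items():
    print(f"{name}: {BASE_HZ * top / bottom:.1f}Hz / {BASE_HZ:.0f}Hz")
```

The fourth lands at roughly 341.3Hz; the other three intervals come out as whole numbers.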

So as you can see two sound waves played at the same time can create a pleasant sound.

Not just intervals: chords, solos and musical scales are all built on frequency relationships. You don’t need to know the precise frequencies to play the piano, but the knowledge becomes handy when you want to record and mix it.

An awesome tool lets you input your pitch data and shows you the exact frequency of that note:

Pitch to frequency calculator
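If you want the maths behind such a calculator: in equal temperament with A4 tuned to 440Hz, each semitone step multiplies the frequency by the twelfth root of two. A minimal sketch using MIDI note numbers (69 is A4):

```python
def note_to_hz(midi_note, a4_hz=440.0):
    """Equal-temperament frequency of a MIDI note; one semitone = 2**(1/12)."""
    return a4_hz * 2 ** ((midi_note - 69) / 12)

note_to_hz(69)  # A4: 440.0 Hz
note_to_hz(60)  # middle C: about 261.6 Hz
```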

Before we finish, let’s quickly look at another characteristic of a sound wave: its power. The decibel, or dB, is a unit used to measure the intensity of a sound (sound pressure level, SPL).

0dB is near silence, the least audible sound.

A 3dB increase will double the power of the signal; a 3dB decrease will cut it in half.

We can describe signal levels in powers of ten.

10dB is 10 times as powerful as 0dB.

20dB is 100 times more powerful than 0dB.

30dB is 1000 times more powerful than 0dB.

A whisper will be around 15dB, normal chitchat around 60dB. A jet engine we can describe with around 120dB, a gunshot with around 140dB.

All of this we measure as SPL, sound pressure level: the acoustic pressure built up within a defined atmospheric area. Moving away from the source and doubling the distance will reduce the level of the signal by 6dB. Moving closer and halving the distance will increase it by 6dB.
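The power-of-ten rule and the distance rule both fit in a line of code. A minimal sketch (the distance rule assumes free-field, inverse-square conditions, which is why it comes out as roughly 6.02 dB rather than exactly 6):

```python
import math

def power_ratio(db):
    """How many times more powerful a level is than the 0 dB reference."""
    return 10 ** (db / 10)

def level_change_db(old_distance, new_distance):
    """SPL change when moving between two distances from the source."""
    return 20 * math.log10(old_distance / new_distance)

power_ratio(30)            # 1000 times as powerful as 0 dB
level_change_db(1.0, 2.0)  # doubling the distance: about -6 dB
```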

I will go into more depth on sound pressure and waveform characteristics in future articles. Just like with the rest of these topics, the basics are important; you can easily find deeper analysis on the Internet.


What is Mixing?


06 APRIL 2017

written by Mike



Why does every audio engineer want to be a mixer?

And why do people get Oscars for it?

Sound mixing is the art of combining many sounds into one. There is a simple metaphor for mixing that may explain it better.

A lot of mixers say it is like cooking, you add different ingredients to create the perfect dish. Of course, add too much or too little of something and your mix is not as tasty as you wanted it to be.

For many sound engineers becoming a mixer is the Holy Grail. The best of the best are legends in the industry with a lot of money and prestigious awards.

The reason sound mixing is a respectable skill is because it requires a lot of technical knowledge. Also – good hearing, subtle touch, creative mind and, of course, a lot of experience.

Doing a good mix on your YouTube video or another small project will not need that much dedication, but it is important to have a fundamental knowledge of the craft. There are many different kinds of audio mixing, such as music mixing, dynamic game mixing or live mixing. In this segment, I am going to focus on linear movie mixing and present how to approach it in your project.




The days of analog mixing are pretty much gone. And yes, there are still people who will fight for it, but the world has gone digital, and the art of mixing sound followed.

Your basic tools for mixing will be a good computer, the software of your choice and maybe a mixing controller that acts as a mixing desk.

The mixing control desk will usually not affect your sound at all; all the processing happens inside the software. Sets of faders and knobs correspond to your program and make the process much easier than working with a mouse and keyboard.

The software of your choice can be anything that you feel comfortable working with, especially if you are using third-party plugins. Remember, the most important tools are your hands and your ears. Well, your eyes too, as a lot of the work happens on the computer screen.





Preparation is everything. As a mixer, you will work with a client. Be it a director, producer or an independent filmmaker.

There will be someone looking over your shoulder. That is exactly why good communication from the beginning is important. It will help you decide on a right approach and also right tools for the project.

Will it be a loud action movie?

Maybe it is a subtle drama where the sound drives the story?

Or maybe its purpose is to be in the background, a delicate soundscape.

Sometimes you will have a different idea than your client; hence, a healthy conversation is important, as both of you want to do the best job possible. There is also another side of preparation that sometimes gets forgotten.

A good exchange of information with the sound engineer and the sound editor is important and can make a lot of difference.

Try to develop a healthy dialogue between you and the other people on the team. Do that in the early stages of the project and you will receive the sounds just the way you like them.

And it will make the whole process much more enjoyable.




The techniques of mixing are a topic for a large book, and everyone has their own opinion on the subject.

But it is important to understand the fundamentals.

Volume, panning, EQ, and compression are your basic tools when it comes to mixing.

Volume control

is important, as you will have to decide which sounds take priority over the others. Loud action scenes can be great, but sometimes a moment of silence can have an even bigger impact.


Panning

stands for panorama, and it means locating sounds around you. A dog’s barking may come from the left; the sound of a helicopter is above your head. And the main character stays in the centre.


EQ

represents equalization, and it is a sound-frequency tool. Each sound has a frequency spectrum that you can adjust to your liking.

Does a guitar have too much low end? You can cut it out from its spectrum and create space for other sounds.


Compression

we can explain as making quiet sounds louder and loud sounds quieter. It is the most important tool when it comes to controlling the dynamics of your mix. And it can take years to grasp the full value of compression.


To print your mix means to record it into stems and masters.

So, for example, if you are creating a 5.1 surround mix, your master will be divided into six mono audio tracks: Left, Centre, Right, Left Surround, Right Surround and Sub (the low-frequency track).

Besides printing your full mix, it is also important to record the other stems, such as dialogue stems, music stems, effects stems, vocal stems and so on. These stems will be part of your final deliveries.

To talk about the art of sound mixing is like trying to explain the techniques of painting. Everyone has a different style and approach, and you have to develop your own. It takes years to develop a great ear and a sense for a great mix.

And the only way to do that is practice, practice, practice.


Sound Advice On Mix Stems


01 APRIL 2017

written by Mike





Let’s have a look at elements of the mix.

I will focus on movie mix stems, as that is what I know best. The final mixes and their elements are called deliverables. In the past, people worked with physical film or tapes; today we have digital files.

Deliverables for films tend to fall into two categories:

Domestic and International.

What is the difference between the two?

Domestic means the production’s native language. So if a movie is produced in America or England, the domestic mixes will be in English. In the deliverable package, you can find the original Atmos mix, Atmos objects, 7.1, 5.1 and stereo mixes, dialogue stems, the DCP and so on: anything that will go to a cinema or to other releases such as TV.

International is a bit different.

The international material goes to countries that dub their movies. Go to my article that describes dubbing if you are unsure what it is.

In the international package, you can find some extra audio stems, such as options and helpers. They give you control over the original mix and make the dubbing process easier.

Below I explain some of these elements in detail.



What are they?

Stems are sub-mixes. Groups of similar audio are mixed together so it is easier to change, fix and remix the material in the future. It tidies up a project and gives you some control over it too. In movies, stems are used for dubbing and updates. In music, the most popular use is for remixes.

There can be a lot of different kinds of stems, but today let’s have a look at the most common ones.


FX Stem

Movies, games. Some crazy music, maybe. Anything you create during sound design will be in this stem: explosions, spaceships, gunshots. Remember that the stems are already mixed. They will include EQ, compression, automation and panning.


Foley Stem

Anything that was recorded as part of Foley will go into this stem.

Footsteps, cloth and door creaks will be in there. Sometimes production sound can end up in the Foley stem too, as it can be quite difficult to separate it from a live recording.

The FX and Foley stems combined make up the Effects. I know it’s quite confusing right now, but it will be easier to understand once we get to M&E.


Music Stem

Music. No effects, just the score. The music stem is great to listen to on its own, but I still prefer listening to the M&E.


M&E Stem

Music and Effects. This stem combines music, FX and Foley. Everything is mixed and balanced, and the only thing missing is the dialogue. Why?

The simple answer: dubbing. When a movie is dubbed into, for example, German, mixing engineers will take the German dialogues and the M&E and marry them together. Of course, the process is a bit more complicated than playing Tetris, but you can see what I mean.


Dialogue Stem (DST)

Dialogues. An important part of the mix, as it gets changed and updated most often.

Added lines, changed lines, and extra ADR will result in dialogue stem updates and fixes.

Remember that a dialogue stem can be in any language, not just the native one. German will have its own, Russian and French too. It makes updates easier, as the M&E will often stay the same.



Options Stem

This kind of stem is specific to dubbing. Options include any neutral sounds that all countries can use. Examples?

Screams, fight noises, grunts, breaths, even some low background chatter.

So, for example, you are dubbing a movie into German. You don’t want to pay the actor to do all the grunts and efforts, so you get him or her to record just the German lines. All the rest of the neutral material will come from the options stem.

Options are handy, and they get used all the time.


Helper Stem

Helper stems are just for guidance. Sometimes the M&E will have some background chatter, shouts or efforts mixed in. It means they cannot be removed, and they will stay in every dubbed mix.

What is the difference between Options and Helpers?

With options, you get a choice. Where your Italian or Spanish actor recorded a shout, you can choose whether to use the neutral one from the stem or the recorded one. With an M&E shout, you cannot.

Helpers show you what is in the M&E and help you avoid the mistake of doubling up. Imagine there is a shout in the M&E, but your dubbing actor also recorded one. If you don’t mute the latter, the shouts will double up in the mix.

Helpers help you to avoid that.


Vocal Stem

If you are working on a musical or anything that has a song, you will see a vocal stem. Vocals are not included in the DST stem, as these are two separate elements.


There may be more or fewer stems depending on the project. Vocal stems can be split into choirs and leads, DST into crowd and main stems. It is important to remember that when you combine all of them, you get the Full Mix.
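That last point, combining stems back into the Full Mix, is literally sample-by-sample addition. A toy sketch, assuming each stem is an equal-length list of mono samples (the stem names here are just examples):

```python
def combine_stems(*stems):
    """Sum the corresponding samples of every stem into one full mix."""
    return [sum(samples) for samples in zip(*stems)]

dialogue = [0.25, 0.125, 0.0]
music    = [0.125, 0.125, 0.125]
effects  = [0.0, 0.25, 0.5]

full_mix = combine_stems(dialogue, music, effects)
```

This is also why a dub works: swap the dialogue stem for a German one, sum again, and the music and effects are untouched.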

So that is it. I hope this will help you understand the elements of the mix a bit better, and next time you are working on a new project, you will see straight away what the deal is with all these crazy audio tracks.

