Simple Audio Techniques for Podcasts


30 JUNE 2017

written by Mike


Most creative projects have three stages:

pre-production

production

post-production

The same principle applies to podcasting: you get planning, recording and post-production, and depending on the podcast, each stage will look slightly different. Today I will look at one element of the post-production process – editing.

Sound editing is a crucial aspect of a well-produced show. It doesn’t matter if it’s a scripted podcast, an interview-style show or a comedy audition. The complexity of the editing will depend on the nature of the podcast, but let’s have a look at the most commonly used functions. This post focuses on ‘simple’ editing, as I will describe tools and solutions available in most audio sequencers. Editing with specialised software such as iZotope products will be featured in future articles.

What kind of sound editing can you expect while working on podcasts?

Based on my experience with Casefile – a scripted show – and multiple other non-scripted podcasts, let me list a few things that should help you during the process.

Importing files

First of all, you will need to import your audio files into the audio sequencer of your choice. Most audio will be recorded at 44.1kHz; however, I always convert to 48kHz. If I were in charge of the recording, I would also select 48kHz at the source.

It’s the standard for motion picture sound, and it will give you more headroom to work with and better quality.
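If you would rather do the conversion outside your sequencer, here is a minimal sketch of the idea in Python. It assumes the soundfile and librosa libraries and made-up file names – my own illustration, not a prescribed workflow; most DAWs will do the same conversion on import.

```python
# Convert a 44.1 kHz file to 48 kHz before editing (sketch, not production code).
import soundfile as sf
import librosa

audio, sr = sf.read("interview_44k.wav")      # sr will typically be 44100
if audio.ndim > 1:                            # soundfile returns (frames, channels)
    audio = audio.T                           # librosa wants channels first

resampled = librosa.resample(audio, orig_sr=sr, target_sr=48000)
sf.write("interview_48k.wav",
         resampled.T if resampled.ndim > 1 else resampled, 48000)
```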

Once you import the files, you will realise that most will be submitted or recorded in STEREO. If you are recording yourself, then you can do a MONO recording from the start, but when working with others, I guarantee that more often than not it will be stereo.

MONO – one single audio track

STEREO – two audio tracks, usually panned to left & right channels

If you receive the dialogue as STEREO, use a function to split it into MONO and keep it as a single, centred audio track. For the most part, dialogue should always sit in the centre. Yes, there are exceptions, like binaural recordings, but those are exactly that – exceptions to the rule.
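Every sequencer has its own split-to-mono function, but if you are curious what it actually does, here is a rough sketch using numpy and soundfile (the file names are placeholders): the two channels are simply averaged into one centred track.

```python
# Collapse a stereo dialogue file into a single centred mono track (sketch).
import numpy as np
import soundfile as sf

audio, sr = sf.read("host_dialogue_stereo.wav")   # stereo -> shape (frames, 2)
mono = audio.mean(axis=1) if audio.ndim == 2 else audio
sf.write("host_dialogue_mono.wav", mono, sr)
```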

Keep multiple dialogue recordings on separate audio tracks; don’t stick them all on just one.


Grouping & Colours

I always work with colours when it comes to editing. That means colour-coding tracks so you can recognise them visually. With Casefile, the voice of the Host is green and other audio clips are blue. When I was working on other interview-style podcasts, I had person one coloured green, person two on a separate track coloured blue, and so on.

You are working with both ears and eyes, so use that to your advantage.

Markers

Markers are an essential element of your editing process. They will help you note down any mistakes, important parts of the podcast and musical cues.

For Casefile I tend to use markers to note down musical cues and significant moments in the story. In other podcasts, I used them to mark sections that were possibly getting cut.

Pauses

Creative breaks are a big part of the process when I edit the Casefile podcast. I usually leave a long pause between sentences for dramatic impact, or cut the breaks shorter during more tense moments.

To create breaks, just move the audio around, but don’t forget to fill the space with background room noise for consistency.

For non-scripted podcasts, pauses are also helpful. A few seconds between two people speaking, or after a question, will give the listener a chance to catch up. Sometimes a person just speaks too fast, and adding a few artificial pauses will help.
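To make the idea concrete, here is a hedged sketch with the pydub library: cut the clip at a chosen point and fill the gap with a slice of room tone instead of dead silence. The file names, cut point and pause length are made-up placeholders you would set by ear.

```python
# Insert an artificial pause filled with room tone, not digital silence (sketch).
from pydub import AudioSegment

voice = AudioSegment.from_wav("episode_raw.wav")
room_tone = AudioSegment.from_wav("room_tone.wav")

cut_ms = 12_000                      # where the pause should go (milliseconds)
pause_ms = 1_500                     # how long to hold it

filler = room_tone[:pause_ms]        # background noise keeps the bed consistent
edited = voice[:cut_ms] + filler + voice[cut_ms:]
edited.export("episode_with_pause.wav", format="wav")
```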


Breaths

How you deal with breaths will depend on the style of the podcast. I always try to minimise them, either by removing them entirely (Casefile) or by lowering their volume.

Some people say that cutting breaths makes a podcast sound unnatural; I would say it depends on the show.

If you decide to remove all the breaths from the podcast, you will need to use a tool such as De-breather or Strip Silence. Doing it manually will take too much time.
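As a rough illustration of what a Strip Silence style tool does under the hood, here is a pydub sketch that finds the quiet stretches (where breaths and room noise usually live) and pulls them down rather than deleting them. The thresholds are guesses you would tune by ear, not settings from any particular plugin.

```python
# Find quiet spans and attenuate them by ~12 dB (Strip-Silence-style sketch).
from pydub import AudioSegment
from pydub.silence import detect_silence

audio = AudioSegment.from_wav("narration.wav")
quiet = detect_silence(audio, min_silence_len=250,
                       silence_thresh=audio.dBFS - 16)   # relative threshold

processed, cursor = AudioSegment.empty(), 0
for start, end in quiet:
    processed += audio[cursor:start]        # keep the speech untouched
    processed += audio[start:end] - 12      # drop the quiet span by 12 dB
    cursor = end
processed += audio[cursor:]
processed.export("narration_debreathed.wav", format="wav")
```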

Uhms and Aahs

It’s easier with scripted podcasts, as you won’t have to deal with many ‘filler’ phrases. With live recordings, it won’t be that easy. Most people are not professional speakers and will use some kind of filler when they speak.

These will need to be cut manually, but make sure you don’t end up making the podcast sound fake and robotic.

It’s easy to go overboard with the editing, so use your ears and your best judgement.

Background Noise

Any background noise, particularly between sentences, will need cutting and replacing with a neutral room tone. Noise such as hiss and rumble will need special tools like a de-noiser, but anything else can easily be cut manually or with functions such as Strip Silence.

Other simple audio editing techniques will depend on the nature of the podcast.

It can include pitching the audio up or down, or even slowing it down (or speeding it up). Only a few weeks ago I had to slow a recording down by a few per cent because the speaker was talking a bit too fast. With any of these treatments, you need to be careful not to introduce artificial-sounding artefacts or change the sound too much.
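For the curious, here is a small sketch of that kind of speed change with librosa – a rate below 1 slows the speech down, above 1 speeds it up. The file names are placeholders, and a dedicated tool will usually sound cleaner than this.

```python
# Slow a too-fast speaker down by roughly 3% with a time-stretch (sketch).
import librosa
import soundfile as sf

y, sr = librosa.load("fast_talker.wav", sr=None)       # keep the original rate
slower = librosa.effects.time_stretch(y, rate=0.97)    # <1.0 = slower
sf.write("fast_talker_slowed.wav", slower, sr)
```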

Simple editing will be enough if you are working on a hobby project from home. Just listen to it, tidy it all up and make it sound a bit tighter.

When it comes to more complicated productions, you will need to use specialised tools, but I will get to them another time.


Podcasting and Marketing


09 JUNE 2017

written by Mike


Should I start a podcast?

The question that every marketer asks, at least these days.

Why is that?

Well, if you google the future of podcasting, it’s easy to see that everyone seems to be bullish on the medium. From Copyblogger’s The Astounding Growth of Podcasting to articles in Forbes and Business Insider, we can see that the podcasting trend is going up and up.

And yet, it is still quite a niche industry.

When I analyse Casefile statistics, over 56% of listens happen in the US, followed by Australia with 15%, then the UK, Canada and others. Podcasting is growing, but it’s still quite a few years away from becoming an established content medium like YouTube or Instagram. That is exactly why I think the time to start one is now.

You can have a look at people like Gary Vaynerchuk and see why he is pushing audio content so much. Love him or hate him, he knows a thing or two about internet marketing, and he knows how to follow the attention. He knows how to market himself.

Another trend I have noticed is that most internet personalities are jumping on the podcasting bandwagon. Go and look at the iTunes top 100 charts and see how many celebrities you can recognise. I bet it’s going to be quite a few.

 

What does it mean?

It means that if you want to be successful, you should follow what successful people do. Apart from that, we also have growth in the audiobook industry, and we can already see it spilling into podcasting. Only recently, Audible announced a 5 million dollar fund for playwrights to write audio dramas. They wouldn’t be investing if they didn’t expect some kind of return.

There is no secret to podcasting; it can help you to establish expertise in the industry. Let’s say you are a graphic designer.

Who will be able to attract better clients and higher rates?

A designer who works from home and attracts business on word of mouth only?

Or someone who also runs a design blog, podcast, course, book and others?

Podcasting is just another medium that can help you market the business and establish that expertise. The difference is that it is still a niche with low barriers to entry. Yes, there is some competition, but not as fierce as in other places on the internet.

It’s getting harder every day, so better not to wait too long. It’s a simple question: do you think it’s easier to start a popular YouTube channel now or ten years ago?

The podcasting industry is still not regulated; there isn’t one big corporation that rules them all. But with time that will happen – there will be rules, schemes, guidelines. If you start early enough, you can be one of the people who help write the rules and shape the industry.

So don’t dismiss the medium.

Of course, I want to finish with a disclaimer – podcasting is not for everyone.

Take me, for example: I know how it works, and I produce a popular one.

Why didn’t I start a podcast myself?

Self-awareness is the key here. If you don’t feel comfortable behind a microphone, then don’t force it. But if you think that you can give it a try, do it.

The best time to start was ten years ago; the second best is today.


Intro to Microphones


20 MAY 2017

written by Mike


In this article, I want to share with you an overview of recording microphones. Microphones, or mics, are the basic instruments for capturing sound. You have one inside your laptop, your phone and your camera. Even in a smartwatch.

Before we dig deeper into microphone placement techniques and various recording tips, you need to learn more about mics.

What is a condenser? Dynamic?

When you work with professionals and experienced sound engineers, you will learn that choosing the right microphone is the number one thing on the list. Every audio professional has a preferred mic, and a least favourite too. Whatever you want to do, you will need to understand the basic characteristics of a recording microphone. So, let’s start with that.

When it comes to the elemental features of a mic, three things matter:

the transducer, the frequency response and the directionality.

I. Transducer

A transducer in a microphone transforms acoustic energy (e.g. your voice) into electrical energy. How a microphone registers sound depends on the type of transducer. The two main types are dynamic and condenser.

Dynamic

Dynamic microphones are quite cheap to build and robust.

So how do they work?

A dynamic mic operates as a small electrical generator built from a diaphragm, a voice coil and a magnet. Let’s say you are recording yourself for a YouTube channel. The force of your voice, as a sound wave, makes the diaphragm vibrate. The diaphragm is a thin membrane hidden behind the microphone’s metallic mesh.

At the rear of the diaphragm is a voice coil, a coil of wire, which also vibrates. A small magnet forms a magnetic field around that wire. Physics. The movement of that coil within the magnetic field generates electrical signals that correlate to the force of your voice. Because dynamic microphones can survive in the toughest environments, they are the number one choice for live performance.

It is almost impossible to overload a dynamic microphone. Good examples are the Shure SM58, which we use for live sound, and the Shure SM57, another classic and cheap microphone. If you don’t know which one to buy, get the SM57. It will do the job.

The Shure SM7B is a classic dynamic microphone used by sports commentators and radio presenters.

Have you ever wondered how it is possible that they shout their heads off, and the sound stays clear?

In most cases, the Shure SM7B is the answer. My favourite dynamic microphone would also be the Beyerdynamic M201, a smooth-sounding mic that works great on a snare but also on some louder singers.

 


Condenser

Condenser microphones are a bit more complicated than dynamics, more sensitive and more expensive (well, it depends…).

The basics of a condenser mic lie in a capacitor.

The force of your voice vibrates a thin metal or metal-coated membrane that sits in front of a rigid backplate. The gap between the two contracts and expands, and that motion produces electrical signals.

Now, the biggest difference between dynamic and a condenser is that the latter requires additional power to run. There are two ways to power up your condenser microphone.

The first is with batteries; the second we call phantom power. Phantom power runs through the microphone cable from the device the mic is plugged into, e.g. a mixing desk or audio interface. Condenser microphones are sensitive and delicate. They also produce more noise than their dynamic siblings, and they have a maximum sound level specification, which means that if you shout into a condenser, there is a high probability that the recording will distort.

Good condensers are great at capturing a wide dynamic and frequency range. Try recording an acoustic guitar with a condenser and then with a dynamic microphone. You will hear that the condenser captures the smallest nuances and movements of the guitar.

A classic condenser is the AKG 414; sound engineers often use a pair as overheads for drums and choirs.

The Neumann U87 is a classic studio microphone used for vocals. It is the first choice for ADR recording or dubbing. Recording sound on set also requires the sensitivity of a condenser. Microphones such as the Sennheiser MKH-416 combine the subtlety of a condenser transducer with the robustness of a dynamic microphone. Remember also to buy a pop shield and keep an eye on noise levels.

II. Frequency response

Frequency response is the reason every music producer, sound engineer or foley recordist has a preferred microphone. The transducer decides how the sound is captured; the frequency response decides what gets captured.

Let’s say you recorded your dog. If your recording sounds 100% the same as your dog in real life, it means that the microphone you used has a flat frequency response. It didn’t change the sound. Microphones with a flat response are used for measuring the acoustics of a space and can be quite expensive. Also, you don’t want to use them on your recordings.

Why not?

Well, the sound of a microphone can make your recording better. It can add depth and warmth. It can capture smooth low frequencies or sharp high frequencies. It can omit frequencies that you don’t want. Some microphones will add punch to your drums or presence to vocals. Other times you may wish to use a microphone with a detailed response.

You don’t want to omit anything when recording a wide-frequency instrument such as a piano. Before using a microphone, check its frequency response and its intended use. It’s also good to experiment with different settings.

 


III. Directionality

The last one on our list is directionality.

Directionality describes the most sensitive side of a microphone. Polar patterns describe how a microphone picks up sound and what the best position for it is. There are quite a few polar patterns to choose from, but today I will focus on the three main ones.

Omnidirectional

An omnidirectional microphone will register sound from all angles; its polar pattern covers 360 degrees. It will pick up sound from the back as well as from the front, with the same intensity. This pattern is great if you want to capture the ambience of a place, something like the inside of a cave.

Another use is to leave an omni in the room as a so-called ambient mic. You can then add this additional layer to your mix later on.

Unidirectional

As you probably guessed, unidirectional microphones register sound from one particular direction more than from others. The most popular is the cardioid, a heart-shaped polar pattern. It will pick up less ambient sound than an omnidirectional microphone, and it works great when you want focus.

For example, if you wish to record dialogue on set, you don’t want to capture the technical crew chatting in the corner. Unidirectional microphones are made for exactly this kind of job.

Bidirectional

Bidirectional microphones are sensitive at the front and back but reject sound from the sides. They are great for vocal duets and for particular stereo recording techniques such as mid-side (M-S).

This polar pattern is used when you want to reject unwanted sources of sound. As I mentioned before, these are helpful on movie sets, during live music recordings or in any environment with more than one sound source. Correct microphone placement is a skill in itself, and I will share some advice on that in another article.

Knowing your equipment is essential.

How does it all work, and why do you want to use it?

These are the questions that you need to ask yourself before making any decision. Microphones are everywhere. You don’t have to know all the details and technical specs of their build, but don’t be ignorant. When it comes to selecting the right gear, ignorance is not bliss.


What is Sound Wave


12 MAY 2017

written by Mike


Let me ask you a few questions.

What is a sound wave?

What is the difference between transverse wave and longitudinal one?

And why does it matter?

It matters because this is fundamental knowledge about sound. You can’t write a book without knowing the alphabet, right? Well, I guess you could. But it’s much easier if you know the letters. Anyway, let’s start with something basic, a waveform.

What is a waveform, and how do we describe it in the modern, digitised world?

[Image: a sine-wave graph representing a sound wave]

Let’s start with physics. So the picture represents a sound wave. Now, notice that I said “represents” a sound wave. It doesn’t really look like it. A sound wave is a disturbance of particles within a medium. A medium can be anything like air, water or steel.

Sound is a mechanical wave, and to better understand it, imagine a slinky. The movement of a sound wave is a bit like the coil movement of a slinky: it either compresses or spreads apart.

If you have one nearby, grab the first coil and move it back and forth. It will create a disturbance. The first loop will push and pull on the second coil, which then displaces the third one, and so on.

So the energy introduced into the first coil is carried by a medium, our slinky, from one location to another. It is important to remember that.

Why?

Remember the picture?

It is only a representation of sound. We use a sine wave to portray the sinusoidal nature of the pressure-time variations. In simpler words, it’s easier to imagine a wave this way. It does, sort of, look like a wave. In reality, sound passes through a medium and disturbs particles in a longitudinal, linear motion.

[Image: a longitudinal representation of a sound wave – medium and particle interaction, with compressions and rarefactions]

If you need to remember one definition of a sound, memorise this:

Sound is a longitudinal wave with compressions and rarefactions.

Let’s go back to the pictures and have a look at some of the details. First, we have the source of the wave. We can describe the source as a body that disturbs the first particle of the medium. A source can be anything that vibrates. So if you play the guitar, the string vibrates. If you sing, your vocal cords do the job.

What about YouTube videos? Are they the source?

Yes, your computer makes the diaphragm of a speaker vibrate. Now we have created a disturbance in a medium. But what is a medium exactly? It is just a term for a bunch of particles bundled together. They stay near each other, and they interact with each other.

So, as I said before, a medium can be anything that will carry the energy – the particle disturbance – forward. It can be anything, but air is usually our first bet.

Now, under the picture, I wrote medium – particle interaction.

Why is that?

I want you to understand why we define a sound wave as a mechanical wave. The energy, our sound wave, is moved from one place to another by particle interaction in a medium. Let’s say the medium is air.

The first bunch of particles moves from its position and either pulls or pushes the next group of particles from theirs. This neighbourhood disturbance is mechanical and carries on through the air.

Ok, but what is a transverse wave?

And what is the difference?

In a transverse wave, the oscillations happen at a right angle to the direction the energy travels. If you picture Indiana Jones and his whip, the wave travels along the whip while the whip itself moves from side to side, not forwards with the wave. Light is a transverse wave, and so is a ripple in a pond.

Let’s go back to the characteristics of our sound wave.

Wavelength

We describe the wavelength of a wave as one complete wave cycle – a repeat of the pattern. In a sinusoidal graphic representation of a wave (my first picture), we measure the wavelength from peak to peak or from trough to trough. In a longitudinal representation of a wave (the second picture), we measure it from compression to compression or from rarefaction to rarefaction. Both images represent the same thing: a repeating pattern, the wave.

Peak

The peak is the highest point in the wave. Sort of the loudest moment in a wave. Sound engineers often say that something is “peaking” or “clipping”. It means that the sound of the recording is trying to go beyond the loudness limit. You can see it on your meters; they will go red.

Or in your waveform: instead of a nice, round one, you will have a squashed square. You can see (and hear) it in a lot of modern music and movie mixes. Everyone wants it loud and big. Last week I went to see something in IMAX. The trailers were so loud that people in the audience were covering their ears. A bit of dynamic range would not hurt.

Anyway, remember that this is only a graphic representation of a sine wave; sound does not look like that.

Trough

The trough is the lowest point in the wave, or the quietest moment in a wave. The wave is always in motion, from peak to trough. If you want to calculate the stretch from peak to trough, it is always twice the amplitude of the wave. Amplitude is, in a sense, the strength of a wave; it can represent loudness.

RMS

RMS, or root mean square, is the average amplitude of a sound wave. When you listen to a song on YouTube, your ears and your head apply a little compression to the sound. They soften loud sounds a bit and protect your hearing.

For example, if you are in a dance club, the music won’t sound as loud after a while. It can still hurt your hearing, though. Earplugs are the answer. You can describe RMS as “what your ears hear” – not a mathematical snapshot of the sound, but a human one. Remember that our ears can pick up the softest sounds and the loudest bangs.

All that is mechanical action and our “defence systems” help to make sense of all these crazy sounds around us.
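If you like seeing the maths, here is a tiny numpy sketch of the root mean square idea: square the samples, average them, take the square root. The 440Hz tone is just a made-up example signal.

```python
# RMS of a sine wave: square, average, square root (sketch).
import numpy as np

sr = 48000
t = np.arange(sr) / sr
signal = 0.5 * np.sin(2 * np.pi * 440 * t)   # one second, peak amplitude 0.5

rms = np.sqrt(np.mean(signal ** 2))
print(f"peak: {signal.max():.3f}  rms: {rms:.3f}")   # rms of a sine ≈ peak / √2 ≈ 0.354
```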

Compressions

In a longitudinal description of a wave, compressions are particles bundled together. These are the regions of high pressure – sort of like a train during rush hour, where we, the people, represent the particles.

Rarefactions

Rarefactions are opposite to compressions. The pressure is low, particles are spread apart, and there is a lot more room for activities.

 

The physics of waves is fundamental to your understanding of sound, music, recording and many other aspects of your life. From now on, when you think of sound, remember to keep two pictures in your head: one a sine wave, the other a mechanical, longitudinal wave.


Sound Recording Basics


04 MAY 2017

written by Mike


Over the years, capturing sound has evolved dramatically. From the phonograph to the microphone in a mobile phone. From analog to digital.

People still use analog recording, but I will focus on a few aspects of digital recording. Digital recording is the most common, cheapest and easiest way of capturing the sounds you need.

Sound recording can be a fun, exciting, hectic, tiresome, laborious and unforgiving gig. But with a few guidelines and some basic knowledge, the difference between an amateurish and a well-sounding production can be huge.

Just try to remember the last Internet video you watched.

Was the picture quality good?

What about the sound?

How many times do you have to play with your volume control when switching between videos?

How noisy are the recordings of that famous vlogger you follow?

The forgotten art of quality sound recording is what tells a wannabe Internet star from a professional.

 


RECORDING EQUIPMENT

The simplest setup for recording sound would be a microphone, cable/lead and a sound recorder. Connect the microphone via cable to the recorder and voila!

Of course, there is a lot more to it, and professional recording sessions are much more complicated. But the basic principles stay the same.

Let’s have a quick look at the basic three components of the setup.

Microphone

There are big, heavy books written on microphones alone. But to get a good understanding of the subject, we can distinguish two types of microphones: dynamic and condenser.

Condenser microphones are a bit more sensitive than dynamics. You can use them to record vocals in the studio and wide-range instruments such as a piano or violin. DPA Microphones debunks some of the myths here.

 

Cables/Leads

The most common cables used to connect a microphone to a recorder are balanced XLR cables. They can carry the signal over long distances without picking up unwanted noise.

USB cables that connect a microphone to the computer are also popular.

Sound Recorder

The subject of sound recorders is as wide as the sea, but think about it for a second: anything that can capture sound is a sound recorder. A mobile phone is the most common one; a simple stereo recorder like the Zoom H4N can be handy too. At the professional end, there are many different kinds of sound recorders.

Small, portable ones we use for interviews; medium ones for recording dialogue on a movie set. Recorders from Sound Devices have a good reputation.

For a beginner, a simple, direct USB microphone will do, but even a basic setup through an audio interface will get you superior quality.

 

 


TECHNIQUES

Recording audio is an art in itself. The choice of the right microphone, its placement, recording levels and settings – these are only a few of the variables a good sound engineer has to take into consideration. It is worth researching the techniques others have used for the kind of recording you want to do.

Using an unusual placement or setup can lead to unexpected and often exciting results – like using a “trash mic”, for example. Every recording requires a different approach. It is important to have an open mind but also a good knowledge of basic procedures.

Have your standard setup in place and then another one as an experiment. If you are just starting out, that will often be the case.

COMMON RULES

As in everything, experimenting and learning from mistakes is a great thing. But there are a few standard rules that you should apply if you want your recordings to sound awesome.

Be wise when choosing the microphone

– it can mean a great difference to a general sound of your recording.

Use intelligent microphone placement

– remember the last time you had to raise the volume to the maximum to listen to that famous vlogger? Or maybe you had to turn it right down?

Know your set up

– microphone, cable and recorder. Using a USB microphone is fine, but even the most basic audio interface connected to your computer will give much better results.

Know your volumes

– record too quietly and the recording will turn noisy once you bring the level up; record too loud and the distortion will ruin your work.

Always record more than you need

– you will have more options to choose from and also a backup if something happens to the original recording.

Do a test and listen back to it

– going back to the placement and choice of the equipment. It is always better to get it right at the beginning rather than trying to correct it later on.

Know your basics

– audio recording can be a complicated subject. The basic knowledge of recording, compression and EQ will make a big difference to your final project.

Have fun!

– Experiment and have fun with the process. The more you learn hands on, the better your projects will sound in the future.

It doesn’t matter if you are working on your Internet video channel, making a family holiday video or recording an interview at work. Follow these simple rules and each one of your productions will be better in the end.

Next time you watch something, focus on listening – not only to the music but also to the dialogue and ambience.


Frequency of Sound


08 APRIL 2017

written by Mike


What is the first thing that comes to your mind when you think about the waves?

How it travels?

What are the lowest pressure levels humans can hear?

How can we surf them?

None of the above – and I’ve only tried surfing once, so I can’t advise on that. I’m going to give away the answer: it’s the hertz.

The hertz, named after the scientist Heinrich Rudolf Hertz, indicates how often particles vibrate when sound energy passes through a medium – air, water, steel and so on. Remember that vibrating particles don’t travel with the wave; they pass the energy forward, sort of like an audience wave during a football match.

A hertz is a unit of vibration frequency:

1 Hz = 1 vibration / 1 second

What you need to remember is that the particles always vibrate at the same frequency. Say, for example, you struck an awesome guitar note at 1000Hz. As the sound wave travels through the air, the particles interact with each other (creating compressions and rarefactions), but the frequency stays the same: 1000Hz.

And every particle on the way will vibrate at 1000Hz. Energy from the source will have the same frequency when it gets into your ear.

Easy to remember. It stays the same.
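If you want to hear that idea, here is a small numpy/soundfile sketch that synthesises one second of a 1000Hz tone – one thousand vibrations every second, start to finish. The file name is a placeholder.

```python
# Generate one second of a 1000 Hz sine tone (sketch).
import numpy as np
import soundfile as sf

sr = 48000                                   # samples per second
t = np.arange(sr) / sr                       # one second of time stamps
tone = 0.3 * np.sin(2 * np.pi * 1000 * t)    # 1000 cycles per second = 1000 Hz
sf.write("tone_1khz.wav", tone, sr)
```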

Now, the time of one complete vibration is called a period. A period is measured either from peak to peak or from trough to trough – one complete cycle of vibration.

So when the Moon travels around the Earth, it completes a cycle; around 27 days is the period of the Moon’s orbit. The next important thing to remember is the relationship between frequency, pitch and directionality.

High frequency will have a high pitch and will be more directional.

Low frequency will have a lower pitch and will be less directional.

So something like the bass in an EDM track can sit below 200Hz, with a low, pumping ‘thump’ and an omnidirectional flow. It travels in every direction possible. That is why it’s the part you hear most through your neighbour’s wall. It is also because low-frequency waves are much longer than high-frequency waves. That’s why, in the case of a distant explosion, you are more likely to hear the low rumble rather than the full-frequency blast.
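You can check the “longer waves” claim with a one-line formula: wavelength equals the speed of sound divided by frequency. A quick sketch, taking roughly 343 m/s for air:

```python
# Wavelength = speed of sound / frequency (rough figures for air at ~20 °C).
speed_of_sound = 343.0   # metres per second

for freq in (50, 200, 1000, 10000):
    print(f"{freq:>6} Hz  ->  {speed_of_sound / freq:.2f} m")
# 50 Hz is nearly 7 m long; 10 kHz is under 4 cm.
# That is why bass leaks through walls while highs are easy to absorb.
```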


From an acoustics point of view, it is quite hard to control low frequencies. The most common solution is installing so-called “bass traps” at the end of your room, often in the corners. Their job is to absorb and dampen the low-frequency nightmare.

On the other hand, you have high frequencies. A scream of a small child will be much more directional, shorter and easier to control. From the acoustics point of view of course.

High frequencies travel in short waves, and installing a few diffusers and absorbers will do a decent job of stopping them. High frequencies can also add ‘air’ or ‘breath’ to a mix, but more often than not a nasty sibilance will drive you mad.

We also have the mid-range – the middle frequencies. These are the frequencies our ears are most sensitive to. That’s why all the instruments and vocals fight for a place in your mid-range. I will talk more about mid frequencies when we get to the EQ overview. For now, just remember that a clashing mid-range will ‘muddle’ your mix. I know, it’s a super scientific term to describe it.

OK, so when we talk about frequency, we usually talk about ranges. We tend to divide frequencies into low, mid and high ranges. It helps to know these guidelines when we get to EQ.

The human ear can detect a lot of frequencies. Our listening device is so sensitive that we can detect a frequency difference of 2Hz. I’m talking here about people trained in music, but most of us can still detect small frequency changes.

To generalise, our frequency range is around 20Hz to 20kHz.

That is at our prime, too. As we get older, we tend to detect fewer and fewer high frequencies, so the top of the range can fall to 17kHz or less. That’s why it is important to always take care of your hearing: take breaks and wear earplugs when necessary.

That doesn’t only apply when you work with live sound. Working as a re-recording mixer for twenty or thirty years will take a toll on your hearing too. Do you know how many times I have had a conversation with a mixer that went something like this:

“What do you mean it’s saturating? I can’t hear anything there!”

So yeah, it can be quite interesting.

Frequencies below our range of hearing (under 20Hz) we call infrasound. Using special devices, scientists can detect geophysical changes and monitor the activity of volcanoes, earthquakes or avalanches.

Frequencies above our hearing range (20kHz) we call ultrasound. You may recognise the name from pregnancy or other medical tests.

High frequencies can create an image of the organs inside our body, or an image of a baby, using a sonogram. Sonar in submarines also uses ultrasound to detect objects underwater: it sends out a signal that bounces off anything that interrupts its travel, just like bats do.

It’s important to note that animals don’t perceive sounds in the same way as we do. Elephants can go as low as 5Hz, dogs detect sounds from 50Hz to 45kHz, and cats can reach around 85kHz. Other animals can go extremely high, such as bats (120kHz) or dolphins (200kHz). In contrast, blue whales are known to use infrasound to communicate over long distances underwater.

It must be quite handy for them as sound travels much faster in water too.


OK, let’s go back to the differences between frequencies. As you know, it’s quite rare to hear a single frequency. Most sounds are made of low, medium and high frequencies, and they are all different.

Some frequencies, when played together, sound nice; others can be a cacophony. These relationships are the basis of our music system and musical intervals.

Nice-sounding combinations of frequencies are called consonant; horrible ones we call dissonant.

Let’s have a look at the ratios and frequency relationships in music intervals.

Octave – 2:1 – 512Hz/256Hz

Third – 5:4 – 320Hz/256Hz

Fourth – 4:3 – 341Hz/256Hz

Fifth – 3:2 – 384Hz/256Hz

So as you can see two sound waves played at the same time can create a pleasant sound.

Not just intervals, but chords, solos and musical scales are all built on frequency relationships. You don’t need to know the precise frequencies to play the piano, but the knowledge comes in handy when you want to record and mix it.
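Here is a tiny sketch of those ratios applied to a 256Hz base note, if you want to check the numbers yourself:

```python
# Apply the interval ratios above to a 256 Hz base note.
base = 256.0
intervals = {"octave": (2, 1), "fifth": (3, 2), "fourth": (4, 3), "third": (5, 4)}

for name, (num, den) in intervals.items():
    print(f"{name:>7}: {base * num / den:.1f} Hz")
# octave: 512.0 Hz, fifth: 384.0 Hz, fourth: 341.3 Hz, third: 320.0 Hz
```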

Here is an awesome tool that lets you input a pitch and shows you the exact frequency of that note:

Pitch to frequency calculator
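Under the hood, a calculator like that only applies one equal-temperament formula: every semitone multiplies the frequency by 2^(1/12), with A4 pinned to 440Hz. A small sketch of the idea (the note names and reference pitch are the usual conventions, not anything specific to that particular tool):

```python
# Convert a note name + octave to its equal-tempered frequency (A4 = 440 Hz).
OFFSETS = {"C": -9, "C#": -8, "D": -7, "D#": -6, "E": -5, "F": -4,
           "F#": -3, "G": -2, "G#": -1, "A": 0, "A#": 1, "B": 2}

def note_to_frequency(name: str, octave: int) -> float:
    semitones_from_a4 = OFFSETS[name] + (octave - 4) * 12
    return 440.0 * 2 ** (semitones_from_a4 / 12)

print(round(note_to_frequency("A", 4), 1))   # 440.0
print(round(note_to_frequency("C", 4), 1))   # 261.6 (middle C)
print(round(note_to_frequency("E", 2), 1))   # 82.4  (low E on a guitar)
```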

Before we finish, let’s quickly look at another characteristic of a sound wave – its power. The decibel, or dB, is a unit used to measure the intensity of a sound (sound pressure level – SPL).

0dB is near silence, the quietest sound the ear can detect.

A 3dB increase will double the power of the signal; a 3dB decrease will cut it in half.

We can describe signal levels with powers of ten.

10dB is 10 times more powerful than 0dB.

20dB is 100 times more powerful than 0dB.

30dB is 1000 times more powerful than 0dB.

A whisper will be around 15dB, normal chitchat around 60dB. A jet engine is around 120dB, and a gunshot no less than 140dB.

All of this we measure as SPL, sound pressure level – the acoustic pressure built up within a defined area. Moving away from the sound and doubling the distance will reduce the level of the signal by 6dB; moving closer to the source and halving the distance will increase it by 6dB.
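Those ratios all come from the same 10·log10 relationship for power. A quick sketch if you want to verify the numbers:

```python
# Decibel-to-power ratios, plus the 6 dB-per-doubling-of-distance rule of thumb.
import math

def power_ratio(db: float) -> float:
    return 10 ** (db / 10)

for db in (3, 10, 20, 30):
    print(f"+{db:>2} dB  ->  {power_ratio(db):,.1f}x the power")
# +3 dB ≈ 2x, +10 dB = 10x, +20 dB = 100x, +30 dB = 1000x

print(f"double the distance: {20 * math.log10(0.5):.1f} dB")   # about -6 dB
```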

I will go into more depth on sound pressure and waveform characteristics in future articles. Just like the rest of these topics, the basics are important; you can easily find deeper analysis of them on the Internet.
