Mixing on Headphones

14 JULY 2017

written by Mike

During my studies, one rule was always passed on to us by the teachers: never mix on headphones!

The mix should always be done in an acoustically treated room with expensive monitors. It’s the only way to make the piece sound good in every environment.

After my studies, I joined a private school for music production, taught by working professionals – same deal there. The mix must be done on SSL desks, with Dynaudio speakers, in a room designed for a quarter of a million pounds.

“Well, it is what it is,” I thought to myself.

After that, I got a job in a sound department at a movie studio. Eight mixing theatres, two with Dolby Atmos sound. Safe to say – everything sounded fantastic there. I worked with some of the most talented dialogue mixers in the country, real veterans of audio mixing.

The rule – never mix on headphones!

Professional mixes must be done in deluxe rooms with expensive set-ups. Even my editing studio had a surround system of calibrated Dynaudios. I got used to that comfy chair and top-of-the-range editing and mixing system.

Then, I left my job.

I knew I wanted to go freelance and work from home. I cleaned the dust off my Focusrite interface and Adam monitors and was ready. The problem was that my setup sits in the bedroom – no acoustic treatment, no high-end studio design.

Mixing gigs were out of the window.

Or was it?


After 18 months with Casefile (and other projects), I learned that rules can be broken and reshaped.

I’m proud to say that I mix on headphones.

Yes, I said it. Get over it.

I figured that most people listen to podcasts on their phones, on cheap in-ear headphones. So the number one goal should be to make it sound as good as possible on that platform. Casefile needs a good mix, a good balance of score and narration. I can’t lie, it is tricky, and I still make mistakes. But so far this approach has worked quite well for the podcast and my production practice.

I do the first edit on speakers. The first edit is cutting out mistakes, working with creative breaks and pauses, shaping the narration into a whole.

I do the second edit on headphones. This takes place in iZotope RX, and it is an in-depth cleaning process. I’m not able to hear every little lip smack on the monitors, and Sony MDR-7506 headphones are brilliant at revealing details.

When I write music, I do it on monitors.

When I mix the cues, it’s all on headphones.

Then the first mix – I do the first run on the Sony MDRs. I try to balance the score and narration, but the issue is that these headphones are closed-back.

They cut out external noise and give an amazing, however not a realistic, representation of the mix.


Why not realistic?

Well, it’s only a small percentage of people who listen to the podcast on these kinds of headphones. Most use in-ears with their phones. Plus, the listening is usually done during the commute, at the gym or at work.

That’s why there is a second pass on the mix. And that’s when I use cheap in-ears. I have a few pairs, as each sounds slightly different, and I swap them during the mix. I make the adjustments to the score and narration.

And that finalises it.

Yes, I will still check the mix on the monitors and on other mediums, but the primary goal is to make it sound good on cheap in-ears.

There is also the issue of exporting to MP3 format. The mix will sound different when played back as a compressed MP3 compared with what I’ve done in Pro Tools. So I keep that in mind during the mixing process too.
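
If you want to hear the MP3 version early, you can bounce a quick preview outside your session. Here’s a minimal Python sketch, assuming pydub (which needs ffmpeg installed); the file names are just placeholders:

```python
# A quick MP3 preview bounce to hear how the mix survives compression.
# Assumes pydub + ffmpeg are installed; file names are hypothetical.
from pydub import AudioSegment

mix = AudioSegment.from_wav("episode_mix.wav")
mix.export("episode_preview.mp3", format="mp3", bitrate="128k")
```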

Is it a perfect process? Of course not, but as the saying goes ‘if it sounds good, then it’s good’.

The point I want to make is that times are changing, and technological progress means that bedroom producers now have much more power than in the past. Yes, it’s great to have a dedicated room for your work, acoustically designed for high-end systems. But a laptop and a pair of headphones will work too, and it shouldn’t stop you from trying.

Of course, let’s not forget that it’s the mastery of skills that matters the most. Don’t worry about the setup as much; improve where you can, but most importantly – get to work!

To learn more about headphones, check out The Big Difference Between DJ Headphones from Home DJ Studio.


Simple Audio Techniques for Podcasts

30 JUNE 2017

written by Mike

Most creative projects have three stages:

pre-production

production

post-production

The same principle applies to podcasting. You get planning, recording and post-production, and depending on the podcast, each stage will be slightly different. Today I will look at one element of the post-production process – editing.

Sound editing is a crucial aspect of a well-produced show. It doesn’t matter if it’s a scripted podcast, an interview-style show or a comedy production. The complexity of audio editing will depend on the nature of the podcast, but let’s have a look at the most-used functions. This post is about ‘simple editing’, as I will try to describe tools and solutions that are possible in most audio sequencers. Editing with specialised software such as iZotope products will be featured in future articles.

What kind of sound editing can you expect while working on podcasts?

Based on my experience with Casefile – a scripted show – and multiple other non-scripted podcasts, let me list a few things that should be helpful to you during the process.

Importing files

First of all, you will need to import your audio files into the audio sequencer of your choice. Most audio will be recorded at 44.1 kHz; however, I always convert to 48 kHz. If I were in charge of recording, I would also select 48 kHz at the source.

It’s the standard for motion picture sound, and it will give you more headroom to work with.
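
If you’d rather do the conversion outside your sequencer, a short script can batch it. A minimal sketch, assuming librosa and soundfile are installed; the file names are placeholders:

```python
# Resample a 44.1 kHz file to 48 kHz.
# Assumes librosa and soundfile are installed; file names are hypothetical.
import librosa
import soundfile as sf

# librosa.load resamples to the requested rate (and folds to mono by default)
audio, sr = librosa.load("episode_44k1.wav", sr=48000)
sf.write("episode_48k.wav", audio, sr)
```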

Once you import the files, you will realise that most will be submitted/recorded as STEREO. If you are recording yourself, then you can do a MONO recording from the start, but working with others, I guarantee that more often than not it will be stereo.

MONO – one single audio track

STEREO – two audio tracks, usually panned to left & right channels

If you receive the dialogue as STEREO, use a function to split it into MONO and leave it as one centre audio track. For the most part, the dialogue should always sit in the centre. Yes, there are exceptions, like binaural recordings, but they are just that – exceptions.

Multiple dialogue recordings should be kept on separate audio tracks; don’t stick them all on just one.
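
If you prefer to do the stereo-to-mono fold-down before importing, a few lines of Python will do it. A minimal sketch, assuming pydub is installed; “interview.wav” stands in for your file:

```python
# Fold a stereo dialogue recording down to a single mono track.
# Assumes pydub is installed; "interview.wav" is a hypothetical file.
from pydub import AudioSegment

stereo = AudioSegment.from_wav("interview.wav")
mono = stereo.set_channels(1)  # sums left and right into one centre channel
mono.export("interview_mono.wav", format="wav")
```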


Grouping & Colours

I always work with colours when it comes to editing. It means colour-coding tracks so you can recognise them visually. With Casefile, the voice of the Host will be green and other audio clips blue. When I was working on other interview-style podcasts, I had person one coloured green, person two on a separate track coloured blue, and so on.

You are working with both ears and eyes, so use that to your advantage.

Markers

Markers are an essential element of your editing process. They will help you note down any mistakes, important parts of the podcast and musical cues.

For Casefile I tend to use markers to note down musical cues and significant moments in the story. In other podcasts, I used them to mark sections that were possibly getting cut.

Pauses

Creative breaks are a big part of the process when I edit the Casefile podcast. I usually leave a long pause between sentences for dramatic impact or cut the breaks shorter during more tense moments.

To create breaks, just move the audio around, but don’t forget to fill the space with background room noise for consistency.

For non-scripted podcasts, pauses are also helpful. A few seconds between two people speaking or after asking a question will give the listener a chance to catch up. Sometimes a person just speaks too fast, and adding a few artificial pauses will help.
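
Inserting such a pause is easy to sketch in code too. An illustration only, assuming pydub and two hypothetical files, “narration.wav” and “room_tone.wav”; the cut point is made up:

```python
# Insert a 1.5-second pause filled with room tone (not digital silence).
# Assumes pydub is installed; file names and the cut point are hypothetical.
from pydub import AudioSegment

narration = AudioSegment.from_wav("narration.wav")
room_tone = AudioSegment.from_wav("room_tone.wav")

cut_point = 12_000           # ms: where the pause should go
pause = room_tone[:1500]     # 1.5 s of room tone keeps the background consistent

with_pause = narration[:cut_point] + pause + narration[cut_point:]
with_pause.export("narration_paused.wav", format="wav")
```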


Breaths

How you deal with breaths will depend on the style of the podcast. I always try to minimise them, either by eradicating them (Casefile podcast) or by lowering their volume.

Some people say that cutting breaths will make the podcast sound unnatural; I would say that it depends on the show.

If you decide to remove all the breaths from the podcast, you will need to use a tool such as De-breather or Strip Silence. Doing it manually will take too much time.
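
If you don’t own a de-breather, a rough, homemade alternative is to find the quiet spans and duck them rather than cut them. A minimal sketch, assuming pydub; the threshold and timings are guesses you would tune by ear:

```python
# Find quiet spans (breaths, gaps) and duck them by 12 dB instead of cutting.
# Assumes pydub is installed; "voice.wav", the threshold and the timings
# are hypothetical and need tuning by ear.
from pydub import AudioSegment
from pydub.silence import detect_silence

voice = AudioSegment.from_wav("voice.wav")

# "Quiet" = at least 200 ms sitting 16 dB below the file's average level.
quiet_spans = detect_silence(voice, min_silence_len=200,
                             silence_thresh=voice.dBFS - 16)

out = AudioSegment.empty()
cursor = 0
for start, end in quiet_spans:
    out += voice[cursor:start]      # keep the speech untouched
    out += voice[start:end] - 12    # lower the breath/gap by 12 dB
    cursor = end
out += voice[cursor:]

out.export("voice_debreathed.wav", format="wav")
```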

Uhms and Aahs

It’s easier with scripted podcasts, as you won’t have to deal with many ‘filler’ phrases. With live recordings, it won’t be that easy. Most people are not professional speakers and will use some kind of filler when they speak.

These will need to be cut manually, but make sure you don’t end up making the podcast sound fake and robotic.

It’s easy to go overboard with the editing, so use your ears for best judgement.

Background Noise

Any background noise, in particular between sentences, will need cutting and replacing with a neutral room tone. Other noise, such as hiss and rumble, will need special tools like De-Noise. But anything else can be easily cut manually or with functions such as Strip Silence.

Other simple audio editing techniques will depend on the nature of the podcast.

It can include pitching the audio up or down, or even slowing it down (or speeding it up). Only a few weeks ago I had to slow a recording down by a few percent because the person speaking was talking a bit too fast. With any of these kinds of treatments, you need to be careful not to introduce artificial-sounding artefacts and change the sound too much.
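
For the curious, this kind of tempo change without a pitch shift is what ffmpeg’s atempo filter does. A minimal sketch, assuming ffmpeg is installed and on your PATH; the file names are hypothetical:

```python
# Slow speech down by ~3% without shifting pitch, via ffmpeg's atempo filter.
# Assumes ffmpeg is installed and on PATH; file names are hypothetical.
import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-i", "fast_talker.wav",
     "-filter:a", "atempo=0.97", "slowed.wav"],
    check=True,
)
```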

Simple editing will be enough if you are working on a hobby project from home. Just listen to it, tidy it all up and make it sound a bit tighter.

When it comes to more complicated productions, you will need to use specialised tools, but I will get to them another time.


Podcasting and Marketing

09 JUNE 2017

written by Mike

Should I start a podcast?

It’s the question every marketer asks, at least these days.

Why is that?

Well, if you google ‘future of podcasting’, it’s easy to see that everyone seems to be bullish on the medium. From Copyblogger’s The Astounding Growth of Podcasting to articles from Forbes and Business Insider, we can learn that the podcasting trend is going up and up.

And yet, it is still quite a niche industry.

When I analyse Casefile statistics, over 56% of listens happen in the US, followed by Australia with 15%, then the UK, Canada and others. Podcasting is growing, but it’s still quite a few years away from becoming an established content medium like YouTube or Instagram. That is exactly why I think the time to start one is now.

You can have a look at people like Gary Vaynerchuk and see why he is pushing audio content so much. Love him or hate him, he knows a thing or two about internet marketing, and he knows how to follow the attention. He knows how to market himself.

Another trend I have noticed is that most internet personalities are jumping on the podcasting bandwagon. Go and look at the top 100 iTunes charts and see how many celebrities you can recognise. I bet it’s going to be a few.

 

What does it mean?

It means that if you want to be successful, you should follow what successful people do. Apart from that, we also have growth in the audiobook industry, and we can already see the spill into podcasting. Just recently, Audible announced a five-million-dollar fund for playwrights to write audio dramas. They wouldn’t be investing if they didn’t expect some kind of return.

There is no secret to podcasting; it can help you to establish expertise in the industry. Let’s say you are a graphic designer.

Who will attract better clients and higher rates?

A designer who works from home and attracts business on word of mouth only?

Or someone who also runs a design blog, a podcast, a course, a book and more?

Podcasting is just another medium that can help you to market the business and establish expertise. The difference is that it is still a niche with low barriers to entry. Yes, there is some competition, but not as fierce as in other places on the internet.

It’s getting harder every day, so better not wait too long. It’s a simple question: do you think it’s easier to start a popular YouTube channel now or ten years ago?

The podcasting industry is still not regulated; there isn’t one big corporation that rules them all. But with time it will happen – there will be rules, schemes, guidelines. If you start early enough, you can be one of those who help to write the rules, who help to shape the industry.

So don’t dismiss the medium.

Of course, I want to finish with the disclaimer – podcasting is not for everyone.

Take me, for example – I know how it works, and I produce a popular one.

Why didn’t I start a podcast myself?

Self-awareness is the key here. If you don’t feel comfortable behind a microphone, then don’t force it. But if you think that you can give it a try, do it.

The best time to start was ten years ago; the second best is today.


Intro to Microphones

20 MAY 2017

written by Mike

In this article, I want to share with you an overview of the recording microphone. Microphones, or mics, are the basic instruments for capturing sound. You have one inside your laptop, your phone and your camera – even in a smartwatch.

Before we dig deeper into microphone placement techniques and various recording tips, you need to learn more about mics.

What is a condenser? Dynamic?

When you work with professionals and experienced sound engineers, you will learn that choosing the right microphone is the number one thing on the list. Every audio professional will have their preferred mic, and a least favourite too. Whatever you want to do, you will need to understand the basic characteristics of a recording microphone. So, let’s start with that.

When it comes to elemental features of a mic, three things matter.

A transducer, frequency response and directionality.

I. Transducer

A transducer in a microphone transforms acoustic energy (e.g. your voice) into electrical energy. How a microphone registers sound depends on the type of transducer. The two main ones are dynamic and condenser.

Dynamic

Dynamic microphones are robust and quite cheap to build.

So how do they work?

A dynamic mic operates as a small electrical generator built from a diaphragm, a voice coil and a magnet. Let’s say you are recording yourself for a YouTube channel. The force of your voice, as a sound wave, makes the diaphragm vibrate. The diaphragm can be described as a thin membrane hidden behind the microphone’s metallic mesh.

At the rear of the diaphragm is a voice coil, a coil of wire, which also vibrates. A small magnet forms a magnetic field around that wire. Physics. The movement of that coil within the magnetic field generates electrical signals that correlate to the force of your voice. Because dynamic microphones can survive in the toughest environments, they are the number one choice for live performance.

It is almost impossible to overload a dynamic microphone. Good examples are the Shure SM58, which we use for live sound, and the Shure SM57, another classic and cheap microphone. If you don’t know which one to buy, get the SM57. It will do the job.

The Shure SM7B is a classic dynamic microphone used by sports commentators and radio presenters.

Have you ever wondered how it is possible that they shout their heads off and the sound stays clear?

In most cases, the Shure SM7B is the answer. My favourite dynamic microphone would be the Beyerdynamic M201 – a smooth-sounding mic that works great on a snare but also on some louder singers.

 


Condenser

Condenser microphones are a bit more complicated than dynamics, more sensitive and more expensive (well, it depends…).

The basics of a condenser mic lie in a capacitor.

The force of your voice will resonate a thin metal or metal-coated membrane that sits in front of a rigid backplate. The space between the two contracts and expands, and that motion produces electrical signals.

Now, the biggest difference between a dynamic and a condenser is that the latter requires additional power to run. There are two ways to power up your condenser microphone.

The first is with batteries; the second we call phantom power. Phantom power runs through the microphone cable from the device the mic is plugged into, e.g. a mixing desk or an audio interface. Condenser microphones are sensitive and delicate. They also produce more noise than their dynamic siblings. The maximum sound level specification means that if you shout into a condenser, there is a high probability that the recording will distort.

Good condensers are great at capturing a wide dynamic and frequency range. Try recording an acoustic guitar with a condenser and then with a dynamic microphone. You will hear that the condenser captures the smallest nuances and movements of the guitar.

A classic pair of condenser mics would be AKG 414s. Sound engineers often use them as overheads for drums and choirs.

The Neumann U87 is a classic studio microphone used for vocals. It is the first choice for ADR recordings and dubbing. Recording sound on set also requires the sensitivity of a condenser. Microphones such as the Sennheiser MKH 416 combine the subtlety of a condenser transducer with the robustness of a dynamic microphone. Remember also to buy a pop shield and keep an eye on the noise level.

II. Frequency response

Frequency response is the reason every music producer, sound engineer or foley recordist has a preferred microphone. The transducer decides how the sound is captured; the frequency response decides what is captured.

Let’s say you recorded your dog. If your recording sounds 100% the same as your dog in real life, it means that the microphone you used has a flat frequency response. It didn’t change the sound. Microphones with a flat response are used for measuring the acoustics of a space and can be quite expensive. Also, you don’t want to use them on your recordings.

Why not?

Well, the sound of a microphone can make your recording better. It can add depth and warmth. It can capture smooth low frequencies or sharp high frequencies. It can omit frequencies that you don’t want. Some microphones will add punch to your drums or presence to vocals. Other times you may wish to use a microphone with a detailed response.

You don’t want to omit anything when recording a wide-frequency instrument such as a piano. Before using a microphone, check its frequency response and its intended use. It’s also good to experiment with different settings.

 


III. Directionality

The last one on our list is directionality.

Directionality describes the most sensitive side of a microphone. Polar patterns describe how a microphone will pick up sound and what the best position for it is. There are quite a few polar patterns to choose from, but today I will focus on the three main ones.

Omnidirectional

The omnidirectional microphone will register sound from all angles. The polar pattern covers 360 degrees. It means it will pick up sound from the back as well as from the front, with the same intensity. These polar patterns are great if you want to capture the ambience of a place, something like the inside of a cave.

Another use is to leave an omni in the room as a so-called ambient mic. You can then add this additional layer to your mix later on.

Unidirectional

As you probably guessed, unidirectional microphones will register sounds from one particular direction more than from others. The most popular is the cardioid, a heart-shaped polar pattern. It will pick up less ambient sound than an omnidirectional microphone, and it works great when you want focus.

For example, if you wish to record dialogue on set, you don’t want to capture the technical crew chatting in the corner. Unidirectional microphones are made for this kind of stuff.

Bidirectional

Bidirectional microphones are sensitive at the front and back but reject sound from their sides. They are great for vocal duets and particular stereo recording techniques such as mid-side (M-S).

This polar pattern is used when you want to reject unwanted sources of sound. As I mentioned before, these are helpful on movie sets, during live music recordings or in any environment with more than one sound source. Correct microphone placement is a skill in itself, and I will share some advice on that in another article.

Knowing your equipment is essential.

How does it all work, and why do you want to use it?

These are the questions that you need to ask yourself before making any decision. Microphones are everywhere. You don’t have to know all the details and technical specs of their build, but don’t be ignorant. When it comes to selecting the right gear, ignorance is not bliss.


What is a Sound Wave

12 MAY 2017

written by Mike

Let me ask you a few questions.

What is a sound wave?

What is the difference between transverse wave and longitudinal one?

And why does it matter?

It matters because this is fundamental knowledge of sound. You can’t write a book without knowing the alphabet, right? Well, I guess you could. But it’s much easier if you know the letters. Anyway, let’s start with something basic, a waveform.

What is a waveform, and how do we describe it in the modern, digitised world?

[Picture 1: a sine wave representation of a sound wave]

 

Let’s start with physics. The picture represents a sound wave. Now, notice that I said “represents” a sound wave. It doesn’t really look like one. A sound wave is a disturbance of particles within a medium. A medium can be anything: air, water or steel.

It is a mechanical wave, and to better understand it, imagine a slinky. A movement of a sound wave is kind of like the coil movement of a slinky. It either compresses or spreads apart.

If you have one nearby, grab the first coil and move it back and forth. It will create a disturbance. The first loop will push and pull on the second coil, which then displaces the third one, and so on.

So the energy introduced into the first coil is carried by a medium, our slinky, from one location to another. It is important to remember that.

Why?

Remember the picture?

It is only a representation of sound. We use a sine wave to portray the sinusoidal nature of the pressure-time variations. In simpler words, it’s easier to imagine a wave this way. It does, sort of, look like a wave. In reality, sound passes through a medium and disturbs particles in a longitudinal, linear motion.

[Picture 2: a longitudinal wave with compressions and rarefactions, labelled “medium – particle interaction”]

 

If you need to remember one definition of a sound, memorise this:

Sound is a longitudinal wave with compressions and rarefactions.

Let’s go back to the pictures and have a look at some of the details. First, we have a source of the wave. We can describe the source as a body that disturbs the first particle of the medium. A source can be anything that vibrates. So if you play the guitar, the string will vibrate. If you sing, your vocal cords do the job.

What about YouTube videos? Are they the source?

Yes, your computer makes the diaphragm of a speaker vibrate. Now we have created a disturbance in a medium. But what is it exactly? A medium is just a term for a bunch of particles bundled together. They stay near each other, and they interact with each other.

So as I said before, a medium can be anything that will carry the energy, particle disturbance, forward. It can be anything, but the air is always our first bet.

Now, under the picture, I wrote medium – particle interaction.

Why is that?

I want you to understand why we define a sound wave as a mechanical wave. The energy, our sound wave, is moved from one place to another by particle interaction in a medium. Let’s say the medium is air.

The first bunch of particles moves from their position, and they either pull or push the next group of particles from their position. This neighbourhood disturbance is mechanical and carries on through the air.

Ok, but what is a transverse wave?

And what is the difference?

In a transverse wave, the oscillations happen at a right angle to the movement of the energy. So if you know Indiana Jones and his whip, you will remember that the crack travels along the whip while the whip itself flicks at a right angle to that travel. Light is a transverse wave, and so is a ripple in a pond.

Let’s go back to the characteristics of our sound wave.

Wavelength

We describe the wavelength of a wave as the length of one complete wave cycle. A repeat of the pattern. In a sinusoidal graphic representation of a wave (my first picture), we measure wavelength from peak to peak or from trough to trough. In a longitudinal representation of a wave (second picture), we measure wavelength from compression to compression or from rarefaction to rarefaction. Both images represent the same case of a repeating pattern: the wave.
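
To put a number on it: wavelength is the speed of sound divided by frequency. A quick worked example in Python (343 m/s is the speed of sound in air at roughly 20°C):

```python
# Wavelength = speed of sound / frequency.
speed_of_sound = 343.0  # m/s in air at ~20 °C
frequency = 440.0       # Hz, concert pitch A

wavelength = speed_of_sound / frequency
print(f"{wavelength:.2f} m")  # ≈ 0.78 m
```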

Peak

The peak is the highest point in the wave. Sort of the loudest moment in a wave. Sound engineers often say that something is “peaking” or “clipping”. It means that the sound of the recording is trying to go beyond the loudness limit. You can see it on your meters; they will go red.

Or in your waveform. Instead of a nice, round one, you will have a squashed square. You can see (and hear) it in a lot of modern music and movie mixes. Everyone wants it loud and big. Last week I went to see something in IMAX. The trailers were so loud that people in the audience were covering their ears. A bit of dynamic range would not hurt.

Anyway, remember that this is only a graphic representation of a sine wave; sound does not look like that.

Trough

The trough is the lowest point in the wave. Or the quietest moment in a wave. The wave will always be in motion, from peak to trough. If you want to calculate the stretch from peak to trough, it is always twice the amplitude of the wave. Amplitude is a sort of strength of a wave. It can represent loudness.

RMS

RMS, or root mean square, is the average amplitude of a sound wave. So, when you listen to a song on YouTube, your ears, and your head, apply a little compression to the sound. They soften loud sounds a bit and protect your hearing.

For example, if you are in a dance club, the music won’t sound as loud after a while. It can still hurt your hearing, though. Earplugs are the answer. You can describe RMS as “what your ears hear” – not a mathematical representation of sounds, but a human one. Remember that our ears can pick up the softest sounds and the loudest bangs.

All that is mechanical action, and our “defence systems” help to make sense of all these crazy sounds around us.
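
If you like to see the maths, here is a tiny sketch contrasting peak and RMS for a pure sine, using numpy; the numbers are only illustrative:

```python
# Peak vs RMS of a sine wave, using numpy. Illustrative values only.
import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)   # one second at 48 kHz
x = 0.5 * np.sin(2 * np.pi * 440 * t)          # 440 Hz sine at half full scale

peak = np.max(np.abs(x))          # the single loudest sample: 0.5
rms = np.sqrt(np.mean(x ** 2))    # the average level; for a sine, peak/sqrt(2)

print(f"peak = {peak:.3f}, rms = {rms:.3f}")   # peak = 0.500, rms ≈ 0.354
```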

Compressions

In a longitudinal description of a wave, compressions are regions where particles are bundled together. These are the regions of high pressure. Sort of like a train during rush hour. We, the people, represent the particles.

Rarefactions

Rarefactions are the opposite of compressions. The pressure is low, particles are spread apart, and there is a lot more room for activities.

 

The physics of waves is fundamental to your understanding of sound, music, recording and many other aspects of your life. From now on, when you think of sound, remember to keep two pictures in your head: one as a sine wave and the other as a mechanical, longitudinal wave.


Sound Recording Basics

04 MAY 2017

written by Mike

Throughout the years, capturing sounds has evolved in a dramatic way. From the phonograph to the microphone in a mobile phone. From analogue to digital.

People still use analogue recording, but I will focus on a few aspects of digital recording. Digital recording is the most common, cheapest and easiest method of capturing the sounds you need.

Sound recording can be a fun, exciting, hectic, tiresome, laborious and unforgiving gig. But with a few guidelines and some basic knowledge, the difference between an amateurish and a well-sounding production can be huge.

Just try to remember the last internet video that you watched.

Was the picture quality good?

What about the sound?

How many times do you have to play with your volume control when switching between videos?

How noisy are the recordings of that famous vlogger you follow?

The forgotten art of quality sound recording tells the difference between a wannabe internet star and a professional.

 


RECORDING EQUIPMENT

The simplest setup for recording sound would be a microphone, cable/lead and a sound recorder. Connect the microphone via cable to the recorder and voila!

Of course, there is a lot more to it, and professional recording sessions are much more complicated. But the basic principles stay the same.

Let’s have a quick look at the basic three components of the setup.

Microphone

There are a lot of heavy, big books on microphones alone. But for a good understanding of the subject, we can distinguish two types of microphone: dynamic and condenser.

Condenser microphones are a bit more sensitive than dynamics. You can use them to record vocals in the studio and wide-range instruments such as a piano or violin. DPA Microphones debunks some of the myths here.

 

Cables/Leads

The most common cables used to connect a microphone to a recorder are balanced XLR cables. They can carry the signal over a long distance without picking up unwanted noise.

USB cables that connect a microphone to the computer are also popular.

Sound Recorder

The subject of sound recorders is as wide as the sea, but just try to think about it for a second. Anything that can capture a sound is a sound recorder. A mobile phone is the most common one; a simple stereo recorder like the Zoom H4n can be handy too. At the professional end, there are many different kinds of sound recorders.

Small, portable ones we use for interviews; medium ones we can use for recording dialogue on a movie set. Recorders from Sound Devices have a good reputation.

For a beginner, a simple, direct USB microphone will do, but even a basic setup through an audio interface will always get you superior quality.

 

 


TECHNIQUES

Techniques of recording audio are an art in themselves. There is the choice of the correct microphone, its placement, recording levels and settings. These are only a few of the variables that a good sound engineer has to take into consideration. It is important to research the techniques that others have used for the kind of recording you want to do.

Using an unusual placement or setup can lead to unexpected and often exciting results. Like using a “trash mic” for example. Every recording requires a different approach. It is important to have an open mind but also a good knowledge of basic procedures.

Have your standard setup in place and then another one as an experiment. And if you are just starting out, experimenting will often be the case.

COMMON RULES

Like in everything, experimenting and learning from mistakes is a great thing. But there are a few standard rules that you should apply if you want your recording to sound awesome.

Be wise when choosing the microphone

– it can make a great difference to the general sound of your recording.

Use intelligent microphone placement

– remember the last time you had to raise the volume to the maximum to listen to that famous vlogger? Or maybe you had to turn it right down?

Know your set up

– microphone, cable and recorder. Using USB microphones is fine, but even with the most basic audio interface connected to your computer, the results will be much better.

Know your volumes

– record too quietly and you will end up with a noisy recording once you turn the volume up; record too loud and the distortion will ruin your work.

Always record more than you need

– you will have more options to choose from and also a backup if something happens to the original recording.

Do a test and listen back to it

– going back to the placement and choice of the equipment. It is always better to get it right at the beginning rather than trying to correct it later on.

Know your basics

– audio recording can be a complicated subject. The basic knowledge of recording, compression and EQ will make a big difference to your final project.

Have fun!

– Experiment and have fun with the process. The more you learn hands-on, the better your projects will sound in the future.

It doesn’t matter if you are working on your internet video channel, making a family holiday video or recording an interview at work. Follow these simple rules and each of your productions will be better for it.

Next time when you watch something, focus on listening. Not only on music but also on dialogue and ambience.
