FetHead Gain Booster for SM7B


22 MARCH 2021

written by Mike

Beginner podcasters are best off with a simple USB microphone – one you connect to your computer and you're ready to go.

That’s not just my opinion; many people who were starting their streaming channels or podcasts grabbed a Blue Mic (Yeti or Snowball) or a similar USB microphone.

There was one problem, though – these are condenser microphones.

Let’s back up a little bit.

There are two main types of microphones – condensers and dynamics – and the difference lies in how they capture sound.

Dynamic microphones use a voice coil and a magnet; they are plug-and-play and don't need external power to run.

Condenser microphones use a capacitor – a thin, vibrating diaphragm sitting close to a fixed backplate.

Condensers need external power to work – either batteries or 48V phantom power supplied by an audio interface.

What we have to understand about condenser microphones is that usually, these are sensitive mics. They will pick up every little detail around you.

They are awesome for studio recording, but for home – well, a lot of people vented their frustrations about how everything from neighbours, cars or whatever was picked up by the microphone.

That's why I always recommended dynamic microphones, even USB ones. They are less sensitive, usually have a cardioid polar pattern and are overall best for voice recording at home.

Stop!

What does all this have to do with the FetHead gain booster?

We’ll get to it in a second.

With time, podcasters, YouTubers and online personalities started to upgrade their gear and moved from USB mics to standard XLR microphones, the ones that need an audio interface to work.

Of course, the winner and the most popular mic was Shure SM7B – the legendary vocal microphone used by broadcasters and studios around the world.

If you watch any YouTube videos, I'm sure you're familiar with this microphone – everyone from Joe Rogan on down uses it.

And there is a reason for it – the microphone sounds great, it is relatively cheap, and you don't need a pop shield because the capsule sits a safe distance from the top of the grille. It's versatile, and it can take a lot of volume without distorting the sound.

However! 

There is an issue with it, and that’s its gain level. 

It is a quiet microphone, and it needs a lot of gain on the input. When I record on it, I need to have my iD4 interface on 9, almost full 10, to get a decent volume. And I still have to boost it in post-production.

I had this microphone for years, and it served me well. I didn't record much in the past, so I was okay with boosting the volume afterwards. However, as I started my YouTube channel and used the microphone more, the gain started to get annoying.

When we record to camera, we use a Rode boom mic, a condenser; when we then switch to the SM7B, the volume is much lower. After recording a few videos like that, I knew I needed a gain booster.

What’s a gain booster? 

It's a device that cleanly boosts your microphone's signal, making it much louder at the source.

When it comes to boosters, the one I always recommended to people was, of course, the Cloudlifter CL-1 – a classic.

I went online and put the Cloudlifter into the basket. However, I have to warn you, the CL-1 is quite pricey – over £100, leaning towards £150 in some stores.

I'm always on the lookout for a bargain and a discount, so I thought to myself, 'I'm sure there are alternatives.' The thing is, I'd never checked and always defaulted to the Cloudlifter.

I emptied the basket and started looking online – lo and behold, there are many cheaper alternatives.


I looked at Sub Zero, SE DM1 Dynamite, Klark Teknik CT 1 and FetHead.

These ranged in price, but one thing was sure, they were much cheaper than the original Cloudlifter!

All of them had good reviews, and I also noticed that some connect directly to the microphone, meaning I wouldn't need an extra XLR cable like with the Cloudlifter, which is an external box.

After doing a bit of research, I ordered the FetHead from TritonAudio for £65 from the Studiospares online shop.

Let's look at the specs:

TritonAudio products are made in Holland, and the FetHead is advertised as a low-noise gain booster, or mic preamp, with an extra 27dB of amplification.
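To put that 27dB figure in context: decibels map to a linear voltage factor as 10^(dB/20), so the FetHead's boost works out to roughly a 22x voltage gain. A quick back-of-the-envelope check in Python (my own illustration, not anything from TritonAudio):

```python
def db_to_voltage_ratio(db: float) -> float:
    """Convert a gain in decibels to a linear voltage ratio."""
    return 10 ** (db / 20)

# The FetHead's advertised +27dB is roughly a 22x voltage boost.
print(round(db_to_voltage_ratio(27), 1))  # → 22.4
```

By the same formula, the Cloudlifter CL-1's quoted gain of up to +25dB comes out at about 17.8x.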

One thing to remember is that these gain boosters need phantom power, just like condenser microphones. Even though your SM7B doesn't need extra power, you need to switch on phantom power when you connect it through the FetHead.

FetHead arrived in a small cardboard tube, hidden in a pouch inside. It was much smaller than expected, which was fantastic as I was worried it wouldn’t fit at the end of my SM7B, sitting on the arm.

It did fit without any issues.

If you want to hear my recording without the FetHead and with it, you can view my video review on YouTube.

In short, the FetHead does what it's supposed to do. It boosts the SM7B's gain without affecting the sound, allowing the mic gain on my iD4 interface to stay at a good level.

And I don’t need to boost anything in post-production anymore!

Would I recommend it?

If you have an SM7B or another dynamic (or ribbon) microphone and you find yourself lacking gain – then of course. Especially if you are podcasting or recording YouTube videos.

It boosts the signal transparently and cleanly, and you don’t have to drive your input gain to the top.

There are even cheaper alternatives. I didn't want to go with the cheapest option, which had a few negative reviews, but the FetHead is definitely worth the money, and it costs less than half as much as the original Cloudlifter. Plus, you don't need an extra XLR cable!

Thumbs up from me, and you will hear the FetHead working hard in future videos!

Liked the article? Follow me! 🙂
 

Subscribe for the latest updates

Small Podcast Studio & Sonarworks Reference 4


23 FEBRUARY 2021

written by Mike

Recently I moved to a new house, and the big difference is that this time it's ours!

(A bank technically owns the house, but we got a mortgage and moved in November 2020.)

I migrated to the UK in 2006 and always rented. Renting is excellent because it offers flexibility and freedom, but it is a pain if a person wants to set up a sound production room.

There are always issues with the landlord, moving the stuff around amongst other hassles.

This time is different!

We got a lovely, three-bedroom house, and I could get one (albeit the smallest) room for my studio and set it up the way I wanted.

As I said, the room is small. It is a one bed/office space, with more weight on the office function as it would be a tiny bedroom. From the get-go, I understood that with that size comes a lot of problems.

Small rooms are notorious for bad acoustics. Sound waves (especially low frequencies) don't have space to develop; they bounce around the walls, and we get phase cancellations and that sort of thing.

Before ordering acoustic treatment, I wanted to know what sound issues I was facing, and for that I needed a special measurement microphone. I used to have a Behringer ECM8000 in the past and was inclined to get the same mic, but I also looked at other options online.

And that's how I stumbled upon the Sonarworks XREF 20.

The microphone costs £50, and it came with a trial version of the Reference 4 software. It looks like a standard measurement microphone, and it came without a mic clip. I was a bit disappointed – at first!

The way the measurement works is this: you hold the microphone and move around the room with it.

I've done studio measurements before, and back then the way to do it was to set the microphone on a stand in the listening position and run sine waves. Sonarworks is a bit different and very smart, which took me by surprise. That's why there is no clip in the box – you don't need it for the measurements!

I measured the room, and as predicted, the space had many issues, mainly in the low-end area. Not only that – the low-end frequencies were also masking higher frequencies making the room impossible to work in.


I knew I had to treat the space, and after going back and forth with GIK Acoustics – I made the order.

After setting up side panels, ceiling panes and corner bass traps, I measured the room again with Sonarworks XREF 20.

You could see (and hear) the difference. Acoustic panels tamed some of the omnipresent low frequencies but not enough to have a good neutral mixing room.

That’s the problem with small spaces – I’ve run out of room to put up more panels!


And here is where we start talking about Sonarworks Reference 4. As I mentioned before, the microphone came with a trial version of the software.

It helped measure the room; however, that’s only a small fraction of what the plugin can do.

After the measurement is complete, Sonarworks software takes the frequency curve and flattens it with EQ so that your system’s output is entirely neutral.

At first, I couldn’t get used to it – especially on headphones. I was so accustomed to my cans and how they sound that Sonarworks processing sounded weird.

I gave it a couple of days, and now I can’t imagine working without it!

But first things first.

Reference 4 comes in different flavours – Headphone and Studio.

The headphone version is cheaper and is aimed at headphone users only, and there are a few options Sonarworks offers here.

For starters, you can find your pair of headphones in their long list of presets called 'averages'.

What they did is take a few pairs of the same headphones, measure their frequency curves and come up with an average from all of them. It's not a perfect measurement; however, I find it useful enough for my work.

Second, you can buy calibrated headphones from Sonarworks that will come with a custom profile.

And the third option is to send them your headphones for calibration.

The latter two options will cost you extra.


The Studio edition will also work on your studio monitors – and that's where you ideally need the microphone from Sonarworks (which comes with an individual calibration profile) to measure your room.

Within the software itself, you do have some options.

Of course, the main one is bypassing the processing; we also have a frequency response curve and various additional displays available. You can use a bass boost, predefined curve and some latency settings. You can also check how the sound is in mono.

The more important functions are the dry/wet control, which allows you to get used to the new flat sound, and the safe headroom option.

Safe headroom locks the gain fader. Because the software will boost some frequencies, it will have to adjust the gain accordingly. Safe headroom makes it impossible to go above the gain reduction – unless you switch it off.

It is also worth noting that bypassing the processing won’t change the gain, and it will always match.

This is important as you won’t get hit by a few extra decibels of sound each time you bypass the Sonarworks plugins – only if you completely switch it off.
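The safe headroom idea can be illustrated with a toy sketch (my own simplification, not Sonarworks' actual algorithm): the correction curve is the inverse of the measured room response, and the output trim equals the largest boost in that curve, so corrected frequencies can never clip.

```python
# Toy model of room-correction gain matching (illustration only, not
# Sonarworks' code). The measured room response is in dB per frequency band.
measured = {63: +6.0, 125: +4.5, 250: -2.0, 1000: 0.0, 4000: -3.5}

# The correction curve is simply the inverse of what was measured.
correction = {freq: -db for freq, db in measured.items()}

# Safe headroom: trim the whole output by the biggest boost in the
# correction curve, so boosted frequencies can never clip.
max_boost = max(correction.values())
safe_headroom_trim = -max_boost

print(correction[63])       # -6.0: the room's +6dB bump at 63Hz gets cut
print(safe_headroom_trim)   # -3.5: offsets the biggest boost (at 4kHz)
```

Because that trim is applied whether or not the processing is bypassed, the level always matches when you A/B the plugin.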

Sonarworks Reference 4 also comes as two plugins – systemwide and standard DAW plugin. Systemwide works outside of your audio sequencer, meaning that any audio going out of your system will be processed.

The DAW plugin version is only for use inside your audio software, but the processing is exactly the same. When you run your audio software, the systemwide version automatically becomes inactive.

However, the most helpful thing is that Sonarworks prompts you when you are bouncing your session with the plugin still on!

The plugin needs to sit on your master at the very end; however, when you are bouncing the audio down – you don’t want Reference 4 to flatten your mix!

You only use it for listening, and thankfully people at Sonarworks thought of warning you when you forget to bypass, which I do all the time.

As far as the pricing goes – there are plenty of deals around. You can also look up the educational discount for students and teachers. It is not the cheapest plugin, but it is definitely worth its price – I'm kicking myself for not discovering it earlier!

Would I recommend it for podcast producers?

If you are starting, more than likely, you will be working on headphones. In that case, I recommend getting a nice pair and trying out the Sonarworks headphones edition.

Even if you want to get studio monitors, it will be tough to mix on them if your room is not acoustically treated.

I also wouldn't use Sonarworks Studio in an untreated room and hope for the best. It's not a solution for lousy acoustics. It's another tool to help you improve your acoustics, not fix them.

However, if you are producing on excellent studio monitors, you have a treated room, but you reckon it could be improved, then absolutely try Sonarworks Studio; it’s a no brainer.

I'm glad I can keep improving this room – it's far from perfect, but it's comfortable and mine!


Icon M+ Platform Control Surface for Podcast Production


19 DECEMBER 2020

written by Mike

Are you looking for a control surface for your DAW but you are not sure which one to choose?

There are plenty to choose from; however, in this post, I wanted to talk about the Icon M+ platform and Pro Tools. A control surface lets you control what is happening in your audio sequencer, but with physical knobs, faders and buttons.

Is it necessary to produce podcasts? 

No. 

You can work with just a keyboard and mouse, and that’s what I’ve done for years.

However, as you progress and develop your skills, the longing for physical faders may start to grow – especially if you worked on proper consoles before. A simple control surface will be more than enough for podcasting, and there are plenty on the market!

Before I start, I want to mention that I have since upgraded my control hardware to the Avid S1, but I wanted to write a review post about the Platform M+ as I used it for a long time.


As I mentioned, I started on mouse and keyboard and worked purely in the box when I began working on the Casefile podcast. After a while, though, I missed having something physical to control automation during a mix – I had worked on consoles in the past, but I didn't want to spend money (as I didn't have any) just yet.

So for a little while, I used the control wheel on my iD4 interface, which let me control a single automation parameter.

After some time, I decided to move up a bit and get a proper control surface with eight faders.

As I work in Pro Tools, my choice was limited. Pro Tools works on EuCon, which is a closed system, meaning third parties can't manufacture hardware for it – only Avid can.

Of course, their hardware is costly – at that time, their eight-fader surface was the Avid Artist Mix, which cost around 900 pounds.

I didn't want to spend that much money on my first control surface; plus, the Artist Mix was quite old already, and there were rumours that Avid would stop supporting the system. It's now official that support will continue until 2024.

So, looking for something more budget-friendly but still with motorised faders, I found the Icon M+ platform, a modular control surface that works with many audio sequencers.

It works in Pro Tools too, but it uses the HUI system, which is quite limited.

For my work, I only wanted faders, pan knobs and transport controls. The M+ was less than 300 pounds, so I got it a couple of years back.

I haven’t tested it on other software – only Pro Tools, so my view and my experience are limited.

The first issue was that the control surface didn’t work straight out of the box. I downloaded the software, updated the firmware and I remember that I had to spend quite some time online looking for a solution.

I did find it eventually; also, there is a document on their website that helps to set it all up. It’s not plug-and-play, definitely not with Pro Tools.


But, let’s start with what I liked about the control surface first!

The Price

The Icon M+ surface is cheaper than some rivals and cheaper than anything from Avid. The big selling point is the motorised faders that move with the automation.

The Build

It's a sturdy build, and it fits perfectly on the shelf under my desk. I also like the colours of the buttons and how it lights up when you switch it on.

Motorised Faders

The faders are why I went for this system. They make all the difference and give you the feel of working on a proper console. They are touch-sensitive and decent for tracking automation.

Let’s now get to the shortcomings of Icon M+.

HUI and set up

It wasn't easy to set up – definitely not plug-and-play. Also, HUI connectivity with Pro Tools is minimal. The master fader doesn't work at all, the pan knobs don't work correctly, and it's just not very intuitive with the Pro Tools system. It feels restrictive.

Motorised Faders

The faders are touch-sensitive and motorised; however, sometimes they jump up or down when recording automation, get stuck, or block you from writing automation.

It doesn’t happen all the time, but it happens, and I reckon that’s also because of HUI connectivity.

Apart from all that, the transport buttons work fine. It's easy to move through channels, and at its core, the control surface does what it needs to do. I used it for a couple of years and got past the limitations, but it also put me off digging deeper into the system. Setting up my own shortcuts felt like too much hassle.

So I just used the essential functions of the surface.

The Select, Mute and Solo buttons worked fine, but the knobs not working correctly for panning is a big drawback. They let you pan to one side on a stereo track, but I couldn't work out how to switch the knob to the other channel's pan. I guess it would be OK for mono or linked pans.

Again – all of it is because of Pro Tools and HUI connectivity. M+ may work much better with other software, but I haven’t tested it.

I said that I was upgrading. Last year, Avid released the successor to the Artist Mix – the S1 control surface. It's a modular, eight-fader system that works flawlessly with Pro Tools, as it's a EuCon system. It also works with a tablet on top running their Control app – a proper next-generation control surface.

I knew that I wanted to upgrade, and I felt that I should still go for the Artist Mix, but after a bit of research, it became clear that the S1 is the superior system and the Artist Mix is now outdated.

Of course, there is a price tag on the S1 – I got it for 1,060 pounds, and that's without a tablet! I did get a 10-inch Amazon Fire tablet, and it works well with the S1.

M+ served me well, but it was time to say the final goodbyes.

And of course, I will record a video and post a review of the S1 console once I've spent a bit more time with it.


New iMac 2020 for Podcast Production


19 OCTOBER 2020

written by Mike

For the last five years, I worked on a 27-inch late-2015 iMac with a 3TB Fusion Drive and 32GB of RAM.

The system has served me very well, but it was time to update. 

At the beginning of August 2020, Apple released a new line of iMacs and when I checked out the specs I decided to get one. In this post, I will be looking at the performance and how much better the system is for podcast production.

I've upgraded to the 27-inch iMac with the 3.8GHz 8-core Intel Core i7.

When ordering, I selected a few extra options as well.

But why 27-inch?

I find that one big screen is enough for my work. Anything less and I would probably need two screens; 27 inches sits nicely on the desk.

Also, the 27-inch iMac is one of the few Apple computers where you can still fit 3rd-party RAM modules, which is very important!

But more on that later.

When selecting the computer, I stayed with the standard glass display and CPU. I left the memory at 8GB and kept the graphics card as is, too. The storage is all SSD now, and it is costly. As I mentioned earlier, my 2015 iMac had a 3TB Fusion Drive, and I was using most of it. The basic 512GB would be too small, but going into 2TB or 4TB territory gets very expensive.

I’ve upgraded to 1TB for an additional £200.

As for the Ethernet connection, I paid an extra £100 to upgrade to 10 Gigabit. My internet connection is not that fast, but I did it to future-proof the computer. If I work on it for another five years and then want to sell, 10 Gigabit will be a good selling point.

I kept the Magic Mouse 2 and changed the keyboard to the numeric version.


Now, let's talk about RAM and storage. The 27-inch iMac is still one of the few computers with user-upgradable RAM. That means you can access it from the back and upgrade it yourself.

The 2020 iMac was attractive because it supports up to 128GB RAM, which is a lot!

However, when you look at the shopping list, the upgrade from 8GB to 128GB will cost you an additional 2600 pounds! Which is more than the actual system! That’s insane.

The good news is that, just like with my old system, I could use 3rd party RAM, which works out much cheaper than 2600 pounds.

I usually go for Crucial RAM, but they were all out of 32GB modules.

After searching online, I found a website called Mr Memory, and they had 32GB Samsung modules in stock. I ordered four of them, which makes 128GB.

The total cost was about 500 pounds, only 2000 pounds cheaper than from Apple!

The other thing was storage. I’m used to my 3TB Fusion Drive, and 1TB SSD wouldn’t be enough. The obvious option was to buy an external hard drive.

I use regular HDDs for archives, but after reading about the best option for working drives, it became apparent that I needed to go for an external SSD, which is faster but more expensive.

Looking through the options, I decided on SanDisk SSD, which is a small, portable drive – I went for 2TB, which was 250 pounds at the time.

Back on the Apple shopping site, 2TB was an additional 600 pounds and 4TB (there's no 3TB option) an extra 1,200. So I saved massively there as well!

That was about it when it came to purchasing.

Before the new system was delivered, I copied my work from the old one. I’ve decided to install software on the main drive; however, I would keep my instrument libraries (around 200GB) as well as working projects (over 1TB) on the external SSD. 

I copied everything over.

When the new system arrived, the first thing was to take the original 8GB RAM modules out and insert 128GB Samsung modules.

Once I did that, I switched the system on. It asked if I wanted to use Migration Assistant to move files over from the old computer, but I declined. I then installed my main software – Pro Tools, Omnisphere, the iZotope plugins, Adobe CC and the rest – and linked the libraries to the external SSD.

The install was super fast – thanks to RAM and the SSD. I’ve copied my plugin settings and decided to test the system as I would when working.

The first thing is that everything opens much quicker; there is no lag at all. Pro Tools runs smoothly, opens quickly and commands work very fast. I run the sessions from an external SSD.


iZotope RX – I mentioned in the past that I couldn't work in full-screen mode as the old system wasn't having it. Well, the lag is gone, and I can now edit in full screen without any issues. The modules are much faster as well; it still takes time to run a whole module chain, but it is quicker than before.

The other thing is bounces within Pro Tools.

My podcasting session is set up in a way that all my mixing, mastering plugins and reverbs run at all times – I would get system overload quite often, especially if I had a browser on in the background.

No such issues so far and bounces are much quicker too.

The first bounce is Commit from MIDI to audio – it used to take at least a couple of minutes to commit a 5-minute track; now it takes 30 seconds at most.

The big one is, of course, the primary bounce. That is the full episode bounce with all plugins running.

On the old system, the speed for an offline bounce was x1.9. So an hour of podcast would take just over 30 minutes to bounce.

Now, when running from an external SSD, the bounce speed is x2.9 so more than 50% faster.
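The arithmetic behind those numbers, for anyone curious (a 60-minute episode, using the bounce multipliers above):

```python
episode_minutes = 60

old_speed = 1.9  # offline bounce multiplier on the 2015 iMac
new_speed = 2.9  # same session on the 2020 iMac, running off the external SSD

old_bounce = episode_minutes / old_speed  # just over 31 minutes
new_bounce = episode_minutes / new_speed  # about 21 minutes

# Relative speed-up: (2.9 / 1.9) - 1 ≈ 0.53, i.e. a bit more than 50% faster.
speedup_percent = (new_speed / old_speed - 1) * 100
print(round(old_bounce), round(new_bounce), round(speedup_percent))  # → 32 21 53
```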

If I copy the session on the internal SSD, the speed is the same, so no change there. I can work from the external just as I would from internal.

I’ve also done some music writing running the libraries from the external SSD – the load time is much quicker than it was when working from an old internal Fusion drive on the old iMac, which is excellent news.

What’s the verdict?

As expected with a brand new computer and 4x the amount of RAM, everything runs much smoother, the bounces and renders are faster and internal operations quicker.

There are barely any load times – be it with keyboard shortcuts, library loading or plugins.

Having the external SSD helps as I will use it as the main working drive and run sessions from there. It is much faster than my previous internal Fusion drive anyway.

Overall, I'm happy with the upgrade. One thing I'm a bit disappointed about is that the design is the same as the old one – I would have thought that after so many years they would freshen it up. It still looks good, but it's not obvious that I changed anything!

If you are looking for an upgrade, remember that the 27-inch gives you the option of buying cheaper RAM, and it's easy to expand storage with external drives too.

Apple will soon be moving to its own ARM chips, leaving Intel behind. If you are hesitant to upgrade because of that, remember that the transition will take a while, and we aren't sure that 3rd-party software such as Pro Tools, or various plugins, will work with the new chips straight away.

If you feel that your old system is starting to lag, I do recommend the new iMac (if you are looking at iMacs, that is). It's a substantial upgrade and will make your work faster and more enjoyable.


How to Record a Podcast Remotely And Get It Right The First Time


05 OCTOBER 2020

This article was originally published on descript.com


Remote interviews are a fact of life for every podcaster, and in today’s era of social distancing, more so than ever. Since you rarely get the chance at an interview do-over, nailing down your remote recording workflow is essential. We’ll show you how to prepare for and record a remote interview, so you get it right the first time — with some additional tips along the way to make sure all your bases are covered. 

Choose the right remote recording setup for your podcast

The first step is to determine the remote recording setup that best suits the format and content of your podcast and your production and editing workflow.

In most cases, your best solution will involve recording remote interviews on Zoom, Skype, Google Hangouts, or a similar online conferencing service. This low-friction setup makes it easy for guests or co-hosts to contribute, but you’ll need to make sure you have the right software to record these interviews.

It’s also wise to make sure you can record phone calls. Phone interviews don’t offer great audio fidelity, and they probably won’t be your first choice, but they make a great backup option in case of technical problems or schedule changes.

If you’re recording with the same remote co-host on each episode of your podcast, consider a double-ender setup, in which you and your co-host record your own audio tracks locally and combine them in post-production. For most podcasters, this isn’t the most convenient solution, but it does translate into the highest audio fidelity for you and your co-host.

monitors_article_1

The best way to record an interview is to prepare for it

When it comes to interviewing — especially remote interviewing — a little preparation goes a long way.

Do some research into your guest’s background, expertise, and projects. Who are they? Why is their work notable? What do you (and in turn, your audience) hope to learn from them?

Putting together a rough outline of the questions you’d like to ask will come in very handy. Write down a handful of specific questions and key points, but keep your outline broad and high-level. That’ll allow you to more easily adapt to the flow of conversation.

Maintaining that conversational flow remotely can be substantially trickier than doing so person-to-person. Prime yourself to listen more than you speak — in particular, try not to interrupt your guest. Editing out awkward silences between speakers is much easier than dealing with too much crosstalk!

When it’s time to record the interview, take a couple final preparatory steps to ensure a clean recording. Close all unnecessary software and set your computer to “Do Not Disturb” mode to make sure unwanted distractions don’t pop up (or worse: end up in the recording).

How to record a Skype call, Zoom interview, or Google Hangout

For most remote recording situations, Zoom, Skype, or Google Hangouts are your platforms of choice. All three are easy to set up, simple for guests to use, and feature audio fidelity good enough for most podcasts. 

Both Zoom and Skype offer built-in call recording functionality, but Google Hangouts currently limits this offering to enterprise users. There’s an additional caveat: the file format (.MP4 or .M4A) that each platform outputs may not be what you want, depending on your podcast production and editing workflow.

For maximum control over your final product, you’re better off using third-party apps to record computer system audio directly into the recording software of your choice rather than relying on their recording functionality.

If you’re on a Mac, BlackHole is a great open-source tool that allows you to route audio between apps, which means you can record the audio output from Zoom (or Skype, or Google Hangouts) directly into your preferred recording software. On Windows, Virtual Audio Cable offers similar functionality. 

If you’re already using Descript to record, you won’t need to use additional audio routing software. When recording audio into Descript, open the Record panel, choose Add a Track, select your input, and choose “Computer audio.” Click the Record button whenever you’re ready, and audio from Zoom, Skype, or Google Hangouts will be piped into Descript. 

No matter which remote recording setup you use, make sure you test it — and test it again — with a friend or colleague before you’re actually recording your podcast. Troubleshooting when you should be interviewing ranks near the top of everyone’s Least Favorite Things To Deal With, so make sure everything is in order before your guest is on the line.

monitors_article_2

How to record a phone interview with Google Voice

Social distancing means nearly everyone has gotten used to handling calls and meetings on Zoom, Skype, or Google Hangouts. But maybe your podcast guest is really old-school, or their computer is on the fritz, or maybe they’re simply only able to access a phone during your scheduled call time. It’s likely phone interviews will never be your first choice, but being able to record an old-fashioned phone call will come in handy.

Recording phone calls can be tricky, but using Google Voice to make an outgoing phone call from your computer means you can use the same remote recording setup detailed above to record the call.

Follow Google’s instructions to set up Google Voice and then learn how to make an outgoing call. Once everything’s set up, you’ll be able to record phone calls with Google Voice just like you’d record an interview on Zoom or Skype. 

Again, make sure to test with a friend and then test again before your interview. 

If lossless audio quality is a must, record a “double-ender”

For most remote recording situations, Zoom, Skype, or Google Hangouts are your platforms of choice. All three are easy to set up, simple for guests to use, and feature audio fidelity good enough for most podcasts. 

But if you have a remote co-host that regularly appears on your podcast, and you want to maximize the quality of your audio, a “double-ender” is the way to go: Each host or guest records themselves locally, and audio tracks are combined in post-production. For an additional cost, you can use third-party recording platforms that simulate double-enders without each speaker managing their own recording software. 

A traditional double-ender sees each speaker recording their own audio track using their recording software of choice (Descript, Audacity, Quicktime, etc.), and then the host or editor combines each speaker’s recording into a finished product. Each speaker should have a decent microphone — if they’re using a laptop microphone to record, you probably won’t hear a substantial advantage with a double-ender over a Zoom, Skype, or Google Hangouts recording.
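Conceptually, the editor’s job at the end of a double-ender is to take each speaker’s local recording and sum them into one mix. Here’s a purely illustrative Python sketch of that final step – it assumes two already time-aligned, 16-bit mono WAV files at the same sample rate, and the file names and function are mine, not part of any product mentioned here:

```python
import struct
import wave

def mix_tracks(path_a: str, path_b: str, out_path: str) -> None:
    """Combine two speakers' local recordings into one mono mix.

    Assumes both files are 16-bit mono WAVs at the same sample rate
    and already time-aligned; real double-enders need syncing first.
    """
    with wave.open(path_a, "rb") as wa, wave.open(path_b, "rb") as wb:
        assert wa.getframerate() == wb.getframerate(), "sample rates differ"
        assert wa.getsampwidth() == wb.getsampwidth() == 2, "expect 16-bit audio"
        assert wa.getnchannels() == wb.getnchannels() == 1, "expect mono tracks"
        n = min(wa.getnframes(), wb.getnframes())
        a = struct.unpack(f"<{n}h", wa.readframes(n))
        b = struct.unpack(f"<{n}h", wb.readframes(n))
        rate = wa.getframerate()
    # Average the two tracks so the sum can never clip past 16-bit range.
    mixed = [(x + y) // 2 for x, y in zip(a, b)]
    with wave.open(out_path, "wb") as out:
        out.setnchannels(1)
        out.setsampwidth(2)
        out.setframerate(rate)
        out.writeframes(struct.pack(f"<{len(mixed)}h", *mixed))
```

Real sessions also need the tracks synced first – a clap at the start of the recording is the classic trick – which this sketch doesn’t attempt.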

Alternatively, you can simulate a double-ender by using a platform like SquadCast, Zencastr, or Cleanfeed. These services record lossless audio from each speaker, upload each track to the cloud, and combine them automatically. These platforms cost money, but they’re a great alternative to a double-ender when guests or co-hosts don’t have the time or wherewithal to fiddle with recording themselves locally. Again, make sure each speaker has a decent microphone — otherwise you won’t reap the full benefits of lossless audio.

Make remote recording hassles a thing of the past

Recording your podcast remotely isn’t painless, but once you get the hang of it — and nail down your workflow — it’ll become second nature.

This article was originally published on descript.com.

Liked the article? Follow me! 🙂
 

Subscribe for the latest updates

iZotope RX8 & Podcast Production (First Impressions)



10 SEPTEMBER 2020

written by Mike

IZOTOPE RX8 & PODCAST PRODUCTION

For people who aren’t familiar with iZotope: they make highly respected, top-shelf tools for producers and audio engineers – everything from mixing and mastering to, of course, audio restoration.

On the 1st of September, they released RX8, the latest version of their audio restoration tool, and I got the update as soon as I woke up! Just kidding – I did check it out on their website first.

I use iZotope software for all my work. Some time ago, when I was still collecting plugins, I decided to focus on just a handful of them and try to master them. I wanted to limit myself consciously – I find freedom in certain restrictions.

I used iZotope stuff the most, so I stayed with them.

I use Neutron for mixing, Ozone for mastering and of course RX for audio restoration. I started on RX4 back at my previous job at the movie studio, but for podcasting, I bought version 5 and upgraded from there. I also tend to get the upgrades with the Post Production suite rather than buying single plugins.

I use RX every day: I have my module chains, my shortcuts, my workflow. I was looking forward to RX8, wondering what they would come up with.

When they released the announcement video, I watched it first. Even though the post-production modules didn’t seem to get the most significant upgrades – the focus looked to be on music production – I decided to get it.

I bought it as part of the Post Production Suite 5 upgrade. For 299 dollars, I could upgrade either from RX7 Advanced to RX8 or from Post Production Suite 4 to 5. Since I already had most of the Suite’s tools except Nectar 3, that was an easy choice.

The install was easy with their Product Portal, and I authorised it to my iLok.

After the initial launch, I imported my presets and wanted to set up my keyboard shortcuts. 

When I upgraded from RX6 to RX7, the preset import didn’t work that well. It got neither the modules nor the settings right, and I ended up redoing everything manually.

What pleasantly surprised me was that the import from RX7 to 8 was easy and worked well. I double-checked the modules and settings, and they were on point.

All the keyboard shortcuts were automatically implemented as well, and I didn’t have to do anything at all.


I mentioned RX7’s full-screen performance in a post about my system upgrade. Long story short, my 2015 iMac wasn’t happy with full screen, so I keep the window a bit smaller. I wondered if RX8 would change anything in that regard, but no – it was still laggy when I made it full screen.

Fortunately, I am updating my iMac to the latest model, and it should be delivered in a few weeks. Hopefully, on a new and faster system, I will be able to operate in full-screen mode.

I work with module chains built from various De-click, De-noise and other modules. I don’t think any of these were changed – whether they updated the underlying algorithms, I don’t know – but the ones I use every day in my module chains look the same.

The first significant and helpful improvement is horizontal scrolling. I use a Magic Mouse, so this will be a big help. In the past, I had to rely on keyboard shortcuts or drag the screen from the window’s edge to move around.

This update will make editing much faster and easier.

The interface looks the same, so no big changes there. 

What about new modules?

The first one is Guitar De-noise. It looks cool and will help guitar players make their recordings cleaner, but I won’t be needing it for podcast production in the foreseeable future.

One option that may be beneficial is amp/buzz removal. Some dialogue clips that I work with suffer from that, so maybe this tool could help with cleaning them up.

The next one is Spectral Recovery. Now, this is interesting: it takes frequency-limited audio, like a Skype or phone call, and tries to rebuild the higher frequencies.

This is the tool that sealed my decision to upgrade, to be honest. At the moment, I’m working on two Casefile Presents podcasts that have a tonne of audio clips – phone calls, remote recordings, location recordings – most of them of mediocre quality.

I decided to use Spectral Recovery on some of the clips.

 


The verdict?

The render is relatively slow, similar to the old Dialogue Isolate. Of course, the performance may be better on a new iMac.

Second, it doesn’t work perfectly yet.

It seems to act most strongly on sibilance – words like she, search, case – and especially at higher settings for the amount of added frequencies, it creates a weird effect where the sibilance becomes prominent and sounds out of place. So what I’ve been doing is setting the amount quite low, around 20%, and the sibilance balance to -50. Some sibilants still stand out, and those I then had to erase manually with Insert Silence.
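For what it’s worth, “inserting silence” over a harsh sibilant is conceptually just zeroing out a span of samples. A toy Python sketch – the function name and parameters are mine, not an RX feature:

```python
def insert_silence(samples, sample_rate, start_s, end_s):
    """Return a copy of `samples` with the span [start_s, end_s) zeroed.

    `samples` is a plain list of PCM sample values; the name mirrors
    the manual Insert Silence workflow, but the function itself is
    only an illustration.
    """
    i = int(start_s * sample_rate)
    j = min(int(end_s * sample_rate), len(samples))
    return samples[:i] + [0] * (j - i) + samples[j:]
```

In practice you’d fade in and out of the silenced span rather than cutting to zero abruptly, to avoid audible clicks.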

When it works, it adds a little bit of presence to the clips, but it’s not a game-changer. However, I think that the idea behind this technology could be an absolute game-changer in the future versions. If they could rebuild the fundamental frequency so that it sounds natural, that could be amazing.

They’ve also improved Music Rebalance, which I don’t use – although it would have been useful back when I was remixing the Silent Waves podcast from just stereo MP3 files.

Loudness module update – again, I don’t use it within RX; I use iZotope’s standalone Loudness Control plugin.

Wow & Flutter – this looks cool, but I don’t know if it will be helpful for dialogue and podcasting.

A few more that were improved as well were De-Hum and Dialogue Isolate.

I used De-hum in the past, but I wasn’t impressed with it, even in RX7. While working on the second draft of one of the episodes, I had some clips recorded in a room with a very audible hum.

I had cleaned it before, but it was still there. So I applied the new RX8 De-hum module and was pleasantly surprised. It is definitely an improvement – I ran it once in adaptive mode and once more with the Learn option, and both were quite helpful.
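At its core, a de-hum tool is a set of narrow notch filters centred on the mains frequency (50/60 Hz) and its harmonics – iZotope’s actual algorithm is of course far more sophisticated than that. A toy sketch of a single biquad notch, using the standard RBJ Audio EQ Cookbook coefficients:

```python
import math

def notch_coeffs(f0: float, fs: float, q: float = 30.0):
    """Biquad notch coefficients (RBJ Audio EQ Cookbook), normalised so a0 = 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2.0 * math.cos(w0), 1.0]
    a = [1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def biquad(samples, b, a):
    """Direct Form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, float(x)
        y2, y1 = y1, y
        out.append(y)
    return out
```

A real de-hum also targets the harmonics (120 Hz, 180 Hz, and so on) and adapts the notch depth to the material; this single fixed notch is only meant to show the underlying idea.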

Now, Dialogue Isolate.

I love this module – it was very slow in RX7 and takes ages to render, but it’s incredibly helpful in podcasting.

They updated it in RX8, and again, I tried it on some clips for the upcoming Casefile Presents show – clips I had already cleaned up with the previous Dialogue Isolate. I reran it to check if it helps, and it does!

Not only that, but the render is also much faster than before – this one is actually what got me excited.


With Casefile, I’m spoiled: the show features one narrator who records in his vocal booth on an excellent mic, so I’m used to good quality.

However, Casefile Presents shows are a challenge because they feature interviews, location clips, phone calls, archives and just different types of audio with varied audio quality.

Therefore I need more in my arsenal than just de-click and simple de-noise, and RX8 offers that.

Ok, so what’s the verdict?

Overall the performance is better, and even running my regular module chain, I felt it cleaned up the dialogue better than RX7 did – so they may have improved things under the hood as well.

It’s a solid update, even though they seem to have focused on music production this time rather than dialogue post-production. Maybe because their dialogue modules are already good enough and just need improvements here and there.

Should you get the upgrade?

I’m excited about iZotope’s machine learning technology, and I love their stuff. I do this work every day and use their tools every day, so for me it was a no-brainer if I want to stay on top of it. However, I also know their software can be pricey, especially the Advanced versions.

I would suggest first downloading the trial versions and learning them. See if they help your workflow.

Then look at the modules – which ones you use and which are helpful – and choose the version you need. There are three: Elements, Standard and Advanced.

RX8 is not a revolution if you are upgrading from RX7, but it’s a solid progression and makes work more enjoyable for sure. At least for me!

