Five Considerations for Sound-Designing a Podcast


It’s a cliché to even say it, but everybody has a podcast. I myself have, like, I don’t know—six? I kid. It’s more like two. With a glut of podcasts hitting the media landscape, how do you make sure yours stands out? Marketing, of course—one hopes for the viral kind. But still, it helps immeasurably if your podcast is a pleasing listen, transporting audiences from their dull lives commuting to and from work, a fate from which there is no escape.

This calls for sound design: the creation of a sonic landscape, combined with good old-fashioned mix engineering. Sound design doesn’t just mitigate the issues that detract from the experience; it also helps deliver an experience in the first place.

With that in mind, here are some things to consider in this, the Golden Age of Podcasting.

Give Yourself a Solid Foundation in a Few Audio Processes

Look, I shouldn’t be telling you this, because it’ll get me a lot of stink-eye, but if you’ve recorded your voice even halfway decently, you can get a great result—provided you have, as Liam Neeson would say, “a very particular set of skills.” I’m not saying audiophiles will shell out extra money for DSD versions of your podcast. But today’s tools are so good (and the general ear of the public is so forgiving) that you can hang with the best, even with suboptimal recordings. For de-noising, removing plosives, fixing clicks and pops, cleaning up mouth noises, de-essing, and basic leveling, I use iZotope RX, and I love it. I’ve recommended this suite often enough to sound like a shill at this point, so I’ll note that you can also get great results from the Waves Restoration Bundle, Waves’ WNS, and Sonnox’s Broadcast Production Plug-In Collection. Zynaptiq’s UNCHIRP has helped me, on occasion, to mitigate issues in Skype-recorded vocals.

Zynaptiq UNCHIRP Codec Artifact Removal & Transient Retrieval Plug-In

I’ll go out on another limb here and say that, for the most part, stock plug-ins will only get you so far in podcast post-production. The stock EQs, delays, and compressors of major DAWs have come a long way, but the same isn’t true on the noise-restoration front. If you want to up your sonic-restoration skills, I’d strongly recommend shelling out the cash for some of the abovementioned software. Next, devise some exercises to familiarize yourself with its processes (ask in the Comments section and I can list a few). Keep in mind that it’s better to learn one piece of software thoroughly than to haphazardly poke at the parameters of many.

Stand in and Out

Let the format of what you’re creating dictate the sonic landscape. For instance, in a “two-way” podcast (an interview between you and a guest), you’d think it’s best to noise-reduce everything to its clearest point—but interviews tend to benefit from some natural ambiance left intact. Take WTF with Marc Maron: It’s one of the most popular podcasts, but it doesn’t sound noise-reduced. You can hear his garage, sometimes to the point of Marc asking his neighbors to stop mowing the lawn. The garage is often acknowledged, and since this is an auditory medium, his producer has made a wise decision in letting you feel its space.

Investigative pieces like Reveal, on the other hand, tend to blend audio from disparate sources, sometimes recorded in the field. You might want to smooth transitions with interstitial pieces of music, or a spoken cue (“for more on that, here’s Jim”). Still, don’t make the mistake of sterilizing the audio with de-noising just to make disparate recordings sound similar. Context is your friend, and popular podcasts know this: In an episode of Reply All, Sruthi Pinnamaneni mentions she always records a minute of room tone. She mentions this as part of the story, but it’s a good rule of thumb to give yourself room tone for establishing space and maintaining sonic integrity within locales (i.e., laying down a noise-bed to make your edits less obvious).
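
If you ever assemble an edit outside your DAW, here’s a minimal sketch of that noise-bed idea in Python with pydub. The file names are placeholders, and the crossfade and bed level are just starting points to tune by ear:

```python
from pydub import AudioSegment

# Hypothetical files: a minute of recorded room tone, plus two dialogue clips
# from the same space that were edited apart.
room_tone = AudioSegment.from_file("roomtone.wav")
clip_a = AudioSegment.from_file("answer_part1.wav")
clip_b = AudioSegment.from_file("answer_part2.wav")

# Join the clips with a short crossfade to soften the cut.
dialogue = clip_a.append(clip_b, crossfade=30)  # 30 ms

# Loop the room tone under the whole edit, well below the dialogue, so the cut
# sits on a continuous noise bed instead of jumping to digital silence.
bed = (room_tone * (len(dialogue) // len(room_tone) + 1))[:len(dialogue)]
dialogue_with_bed = dialogue.overlay(bed - 12)  # keep the bed roughly 12 dB down

dialogue_with_bed.export("answer_edited.wav", format="wav")
```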

Knowing the format helps you not only blend in, but stand out just enough to get noticed. Take Love + Radio: It’s an interview-based show with some notable twists. A nearly constant, bespoke soundtrack underpins episodes like “The Pandrogyne” beautifully; across episodes, you almost never hear the interviewer, and when you do, the voice sits at a distance, obviously not close to the mic. This effect has served the show well, and has possibly contributed to its accolades and awards.

Vocals Come First

Vocals are your most important element, no matter your sound-design scheme. Treat them with utmost care.

Unfortunately, in many podcasts you’ll hear noise reduction pushed to the point of distracting artifacts (unpleasant ringing tones; swishy noises surrounding breaths). Another big offender? Sentences dropping off in volume near the period, making their intended points hard to discern. Bad vocal-to-music balances abound, as do Skype artifacts and, of course, the biggest offender—whistling, tear-your-head-off sibilance.

You can avoid these issues with a modicum of attention. For sibilance, try recording your voice slightly off-axis, and then employ the tips found in this article. Play back every sentence with your eyes closed (the visual waveforms can be distracting), listen for any noticeable drop in level at the ends of your phrases, and correct these drop-offs with clip-gain or automation.
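
If you’d rather scan for those sagging phrase endings programmatically, here’s a rough sketch of the clip-gain idea in Python with pydub. The file name, the half-second window, and the 3 dB threshold are all assumptions to adjust by ear:

```python
from pydub import AudioSegment

phrase = AudioSegment.from_file("phrase.wav")  # hypothetical single-phrase clip

# Compare the loudness of the last half-second with the rest of the phrase.
tail = phrase[-500:]
body = phrase[:-500]
drop_db = body.dBFS - tail.dBFS

# If the ending sags noticeably, nudge it back up (a crude clip-gain move;
# in a DAW you would draw a short automation ramp instead).
if drop_db > 3:
    boost = min(drop_db, 6)  # never push more than 6 dB, or breaths start to jump out
    phrase = body + tail.apply_gain(boost)

phrase.export("phrase_leveled.wav", format="wav")
```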

Run your denoiser to the point where all the ambiance is gone, and then back off the parameters until some ambiance returns but the vocal still sits clearly on top of it. Yes, there will be ambiance, but you won’t be drowning in artifacts either. For Skype interviews, use an EQ with a spectrum analyzer to find offending frequencies, and then attenuate those horrid noises with either a dynamic EQ or a static equalizer. Never push any corrective process to the point of messing up the vocal.
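
As a rough offline stand-in for that spectrum-analyzer step, a sketch like this (Python with numpy and scipy; the file name and the Q value are assumptions) finds the loudest narrow peak above 1 kHz and notches it:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch, iirnotch, filtfilt

# Hypothetical mono Skype recording; a dynamic EQ in your DAW is the more surgical tool.
rate, audio = wavfile.read("skype_guest.wav")
audio = audio.astype(np.float64)

# Estimate the average spectrum and look for a narrow peak above ~1 kHz,
# where whistly codec tones tend to live.
freqs, power = welch(audio, fs=rate, nperseg=8192)
search = freqs > 1000
peak_freq = freqs[search][np.argmax(power[search])]

# Notch the offender; Q controls how narrow the cut is (30 is a guess to tune by ear).
b, a = iirnotch(peak_freq, Q=30, fs=rate)
cleaned = filtfilt(b, a, audio)

wavfile.write("skype_guest_notched.wav", rate, cleaned.astype(np.int16))
```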

In dealing with music, people often use a compressor to duck the score when the vocals kick in, but this is not always the best practice. Simple automation gets the job done. Or, try this sneaky tip: Use a multiband compressor sidechained to the vocal, working on individual bands (FabFilter’s Pro-MB works perfectly here). Figure out which frequencies of the music rub against the vocals, and then set those bands to duck whenever the vocal hits. When the vocals come in, they’ll sink into the space carved out of the music. The vocals stay clear while the changeover from music to vocal feels invisible. I don’t have the space here to give a point-by-point how-to, but ask and I can provide one in the Comments section.

FabFilter Pro-MB Multiband Dynamics Plug-In
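
To make the idea concrete without the plugin walkthrough, here is a very rough offline sketch of frequency-selective ducking in Python with numpy and scipy: follow the vocal’s level and pull down only the midrange of the music while the vocal is active. The band edges, release time, and ducking depth are all assumptions you’d tune by ear, and a multiband compressor will do this far more gracefully:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

def load_mono(path):
    # Assumes 16-bit WAV stems; folds stereo to mono to keep the sketch short.
    rate, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim > 1:
        x = x.mean(axis=1)
    return rate, x / 32768.0

rate, vocal = load_mono("vocal.wav")    # hypothetical stems
_, music = load_mono("music.wav")
n = min(len(vocal), len(music))
vocal, music = vocal[:n], music[:n]

# Split the music into the band that fights the vocal (roughly 250 Hz to 4 kHz)
# and everything else; only the contested band gets ducked.
band = butter(4, [250, 4000], btype="bandpass", fs=rate, output="sos")
rest = butter(4, [250, 4000], btype="bandstop", fs=rate, output="sos")
music_band = sosfilt(band, music)
music_rest = sosfilt(rest, music)

# Crude envelope follower on the vocal: rectify, then let it decay over ~100 ms.
# (A sample-by-sample loop is slow but keeps the idea readable.)
env = np.abs(vocal)
decay = np.exp(-1.0 / (0.1 * rate))
for i in range(1, n):
    env[i] = max(env[i], decay * env[i - 1])

# Pull the contested band down by up to ~8 dB while the vocal is active.
floor = 10 ** (-8 / 20)
gain = 1.0 - (1.0 - floor) * np.clip(env / (env.max() + 1e-9), 0.0, 1.0)
mix = vocal + music_rest + music_band * gain

wavfile.write("ducked_mix.wav", rate, (np.clip(mix, -1, 1) * 32767).astype(np.int16))
```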

These may seem more like mixing tips than sound-design tips, but good mixing underpins great sound design. Your goal is a pleasant experience, and thus, you must destroy anything that detracts from it.

Figure Out Your Music/Sound Scheme

Are you investigating a topic, or segmenting your episodes into different sections? If so, interstitial background music is something you might want to create or license. A theme song might be a good idea too.

On the other hand, music is one of those elements that allows you to stand out. Finding ways to experiment with musical/vocal interplay can turn heads. Again, Love + Radio is a good example, as is legendary radio personality Joe Frank, who creates sonic experiences like no one else.

If audio dramas are on the agenda, consider that you need to create a sense of place. Two actors talking to each other with no atmospheric context gets tedious and artificial quite quickly, as there’s no feeling for what they’re doing during the scene, or where they are. Drop in their footsteps, and time them to the action. If they’re in a bar, use an ambiance track and drop in the sound of ice clinking in a glass. Close your eyes and picture yourself in the scene: Anything that makes a sound can be represented in audio drama, from shifting around in a seat to tossing a paper towel into the trash can. If you don’t have access to a sound-effects library, surely you have paper towels, a trash can, and a microphone: It’s time to make some Foley!
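
If you want to rough out a scene bed before opening a full session, a small sketch like this (Python with pydub; every file name and timestamp here is made up) shows the layering idea:

```python
from pydub import AudioSegment

# Hypothetical recordings; swap in your own dialogue, ambience, and Foley.
dialogue = AudioSegment.from_file("bar_scene_dialogue.wav")
bar_amb = AudioSegment.from_file("bar_ambience.wav")
ice_clink = AudioSegment.from_file("ice_clink.wav")
footsteps = AudioSegment.from_file("footsteps_wood.wav")

# Loop the ambience under the whole scene, well below the dialogue.
bed = (bar_amb * (len(dialogue) // len(bar_amb) + 1))[:len(dialogue)] - 18
scene = dialogue.overlay(bed)

# Spot the Foley where the action happens (positions are in milliseconds).
scene = scene.overlay(footsteps - 10, position=1500)   # a character walks in
scene = scene.overlay(ice_clink - 8, position=12000)   # ice hits the glass

scene.export("bar_scene_mix.wav", format="wav")
```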

Consider the Medium of Delivery

Most likely, people are going to be listening to your podcasts through earbuds (during mass-transit commutes, working out, sitting at work, and cleaning the house) or through car stereos. Conventional wisdom dictates that the mix should translate everywhere, but I’m going to go out on another potentially hateful limb by saying that, for podcasts, you might get away with tailoring sound design to these specific delivery systems.

What does this mean? Both the outside world and car interiors produce significant background noise. So, it may be wise to accentuate frequencies that cut through that noise, allowing the words and other elements to be heard at reasonable volumes on the subway or in your car. Likewise, in creating audio dramas for the earbud crowd, you have more leeway to engage in binaural panning and stereophonic effects, both in the music and in the sound effects. You can create the effect of characters moving across a space when their footsteps, mixed softly so as not to distract, move from left to right.
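
As a tiny illustration of that moving-footsteps trick, here is one way to sweep a footstep sample across the stereo field with pydub; the sample name, step spacing, and pan range are assumptions:

```python
from pydub import AudioSegment

# Hypothetical mono footstep sample, kept soft so it doesn't distract.
step = AudioSegment.from_file("single_footstep.wav") - 14
canvas = AudioSegment.silent(duration=4000, frame_rate=44100)  # 4-second blank canvas

# Place six steps, panning each one a little further right than the last,
# so the character seems to walk across the stereo field.
num_steps = 6
for i in range(num_steps):
    pan = -0.8 + 1.6 * i / (num_steps - 1)   # most of the way left to most of the way right
    canvas = canvas.overlay(step.pan(pan), position=i * 600)

canvas.export("walk_across.wav", format="wav")
```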

However, be careful not to go too crazy—you don’t want to alienate listeners who use traditional monitoring setups, nor do you want to distract people with excessive stereo information. An interview with voices on the right and left can make the listener play tennis in their head; it’s akin to old records with the drums on one side and the rest of the band on the other. You do not want to distract from the distraction that is podcasting.

We’ll be sure to go deeper into this topic with more concrete tips, but that’s all the space we have for now. As always, if there’s a specific situation you’d like help with, drop us a line—we’d be more than happy to help!
