In my previous blog post, we went over some basic tools and terminology for audio engineering. Now that we have an appreciation of the basic sculpting tools available to us, let’s talk about a holistic approach to the soundscape. To understand how to approach sound effect selection, application, and balance, it helps to visualize the balance of a finished (or interactive!) piece of audio as a pyramid, with width corresponding to presence in the mix.
The wider parts of the pyramid have more room for sound, especially since the higher-frequency stuff up top fatigues the ear super fast.
The human ear tires of higher frequencies much more quickly than lower ones. Think of the sound of a heartbeat - a muffled, low-frequency sound - as opposed to a baby crying or a piccolo playing in a marching band. A heartbeat is primarily low frequency and doesn’t “pierce” the way a baby’s squeal or a piccolo does. Use this understanding to help shape your game’s sound mix. If sound effects with frequencies in the “baby crying” zone are constantly playing, then no sound effect will seem more important than any other, you will irritate the player, and you will forfeit an entire avenue of gameplay-state communication.
Let’s start with a straightforward example: a platformer game with a driving soundtrack and loud, bombastic sound effects like explosions, gunshots, electricity zaps, etc. The balance of the audio is approximately 60% music, 40% SFX.
You’ve got the base music track looping in the background. Since the music is intended to fill space and provide continuity in the experience, it’s mixed loudly and evenly. The low end is saturated with a kick drum and a bass instrument. The first thing you can do is ensure that low-frequency content in the SFX doesn’t interfere with the music’s low end.
Think back to our tools now. How do we do this? With EQ! You can generally remove a lot of low-frequency content from a sound without noticeably changing its character. It’s pretty wild, actually - the human ear is SO GOOD at deriving pertinent information from an audio signal that it automatically filters a lot of it out anyway. And, in general, the more inaudible content you cut, the more headroom you’ve got overall.
Here’s an example: I wrote a quick synth ditty in Sylenth1 and exported it directly with no processing (good speakers or headphones recommended!).
Now, here’s the same exact sample, but with a low cut filter applied.
You should be able to hear that the boomy, bassy, low end is drastically reduced, but the character of the sound is retained. This is super useful! In music, if you want a kick drum to be the fundamental low end of your track, this helps make room for it, and if you mix properly, you won’t even miss the low end of the synth. It can actually sound bigger! Remember, if EVERYTHING sounds big, then NOTHING sounds big.
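If you like to experiment in code, a low cut is just a high-pass filter. Here’s a minimal sketch using Python and scipy - the file names and the 120 Hz corner frequency are placeholder choices, and a real DAW EQ gives you far more control:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

# "synth.wav" is a hypothetical 16-bit input file.
rate, audio = wavfile.read("synth.wav")
audio = audio.astype(np.float64)

# 4th-order Butterworth high-pass: rolls off content below ~120 Hz
# while leaving the character of the sound above it mostly intact.
sos = butter(4, 120, btype="highpass", fs=rate, output="sos")
filtered = sosfilt(sos, audio, axis=0)

wavfile.write("synth_lowcut.wav", rate,
              np.clip(filtered, -32768, 32767).astype(np.int16))
```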
You might approach this entirely differently in a different game, but for this example, the music is a driving force to the experience and should be considered the “base” layer. That’s not to say certain SFX shouldn’t overcome the mix at some point and take over! We’ve got room in the upper levels of the pyramid, check it out:
The pyramid generally applies to a longer timescale - it can be upset and flipped-turned-upside-down at will for effect. Horror movies, for example, often crank up the high frequencies in the score to make the listener irritated or anxious. Listen for that next time you watch a horror movie.
You can also keep the music and SFX from getting too “spiky” by compressing them where necessary. Remember that compression smooths out the volume of a sound, making it much easier to layer on other audio without causing problematic imbalances.
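If seeing the mechanics helps, here’s a toy feed-forward compressor in Python - hard knee, with simple attack/release smoothing. The threshold, ratio, and time constants are arbitrary illustration values, not a recipe:

```python
import numpy as np

def compress(x, fs, threshold_db=-18.0, ratio=4.0,
             attack_ms=5.0, release_ms=100.0):
    """Toy hard-knee feed-forward compressor for a mono float signal."""
    # One-pole smoothing coefficients for the envelope follower.
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))

    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        # Fast attack when the signal jumps, slow release as it falls.
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level

        level_db = 20.0 * np.log10(max(env, 1e-9))
        # Gain computer: anything over the threshold gets pulled back
        # toward it (with 4:1, 4 dB over becomes 1 dB over).
        over_db = max(level_db - threshold_db, 0.0)
        gain_db = -over_db * (1.0 - 1.0 / ratio)
        out[i] = s * 10.0 ** (gain_db / 20.0)
    return out
```

Real compressors add soft knees, makeup gain, and lookahead, but the gain computer above is the core idea: the loud moments get pulled down so the quiet ones can sit closer to them.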
Here’s the problematic stuff I hear:
The synth from the first example is kind of taking over - it’s stepping all over the bass guitar, the kick, everything.
The kick drum is having a lot of trouble coming through - there are so many bass frequencies all over the place that it can’t find a home!
The organ has very little definition or character - it’s just… sitting there.
The bass guitar is also having trouble getting through.
So here’s what it sounds like after some basic EQ and compression.
Here’s what I did:
Low-cut the synth, so that the low end that isn’t really the focus of the sound doesn’t interfere with the other low-end instruments. I also put some moderate compression on it to even out the dynamics. It’s a very “jumpy” sound, and I wanted to retain some of that, but tame it enough to let other elements of the mix have a turn.
Increased the volume of the kick and added a bit of boost around 80 Hz to give it some low end, making up for what we cut from the synth.
Compressed the drum kit (with a type of compression called parallel compression, where I blend part of the original signal in with the compressed signal to even out the volume but retain some of that pop! bang! bap! feeling - see the sketch after this list).
Compressed the bass guitar just enough to allow the tone of it to come out more, without sacrificing the sound of the strings being plucked.
Put a compressor on the master. A very mild and slight compression on the whole mix can serve to even out levels and “glue” the elements together. This is a very hazardous art, however! Be careful with any effects on the master - especially compression. A little goes a long way, but too much can absolutely demolish the mix you’ve spent hours/days/years/lifetimes perfecting.
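For the curious, parallel compression is conceptually just a wet/dry blend. Here’s a minimal sketch reusing the toy compress() from the earlier example - the drums array, fs, and the 50/50 blend are all placeholders:

```python
# Heavily compress a copy of the dry drum signal.
# `drums` (a float numpy array) and `fs` are assumed to exist.
squashed = compress(drums, fs, threshold_db=-30.0, ratio=8.0,
                    attack_ms=1.0, release_ms=80.0)

wet = 0.5
# The dry path keeps the pop! bang! bap! transients; the compressed
# path brings up the quieter detail underneath them.
drums_parallel = (1.0 - wet) * drums + wet * squashed
```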
Once you understand the fundamentals of a music mix, game sound effects will be much easier. Fitting the ever-so-slightly random hits of a drum kit into a mix is not that different from mixing punches, shots, and explosions on top of a music track in-game. The most important takeaway is that digital audio tools are immensely powerful - even a basic EQ and compression pass can make something sound far more polished and professional. It also evens out volumes and makes stacking elements much easier and less fatiguing for the listener.
Audio processing goes much deeper, but hopefully with these basics you can get a better grip on your game’s mix and get it sounding great.
Danny Baranowsky is a composer, musician and larger-than-life personality living in Seattle, Washington by way of Mesa, Arizona. Over the past decade, Danny has risen to the top of his field, composing the music for best-selling games Canabalt, Super Meat Boy, The Binding of Isaac, Desktop Dungeons, Crypt of the Necrodancer, and more. This year, Danny looks to expand his musical misadventures - working on solo material, game prototypes, chicken dinners, and even a live set! No task is too tall for Danny (he is 6’4”). Keep on the lookout for more music and tweets regarding the refresh rates and input latency of OLED monitors in the future.