This post is for those of you who are passionate about sound, and are wondering how to become a sound designer for videogames, where to start, how to enter the industry, what software and tools you need to know, who to talk to, etc.
I get these kinds of questions a lot, and although there is no magic recipe, no step-by-step instructions that will guarantee you a successful career in videogames, there are some things that are useful to know, and that can help you build an attractive portfolio.
What equipment to use and/or start with?
No two sound designers use the same equipment, but I can tell you a bit about the type of equipment you would need and the workflow. The info I give here is pretty much the minimal requirement. You can most certainly take this much further, but here is what I consider essential in terms of hardware and software.
The hardware concerns equipment you need in order to record your own sounds:
And the software, which you also need to record, but also to edit and mix:
Then, when working on an actual game project, you’ll need to implement your sounds into the game. The type of software needed is called audio middleware; it communicates with the game engine and acts as a bridge between the audio integration and the game events. Some large companies use their own in-house audio middleware (and game engine), but I’m not going to get into this. On the market, whether your game is made with Unity, Unreal or any other game engine, there are a few options in terms of audio middleware, which are usually (although not always) compatible with any of the game engines. Three of them are worth mentioning: Wwise, FMOD and Fabric.
In my opinion, the best one out there is by far Wwise (read the Audio Middleware Comparison post to understand why). If you are working on a commercial title you need to consider licenses, but they usually all have some sort of deal (if not free) for Indie titles, students, or simply to use on non-commercial projects. This middleware is what gives a lot of creative freedom in the interactive design and integration.
To know more about audio integration, a good way to get introduced to the logic behind it is to watch tutorials, such as the Wwise tutorials. The advanced ones can be overwhelming, but the overview ones will be very useful in getting a better understanding of audio integration and how to design sounds with that kind of logic in mind.
You also need Sound Libraries. They are part of the workflow. Especially when working with low budgets and tight schedules, it can be challenging to record all the sounds you need yourself. Using sound banks is a good way to begin: you start with good quality sound files and familiarise yourself with the editing process, which is one of the most creative parts of the design.
Be careful though: I strongly suggest never using a sound taken directly from a sound library, but rather transforming it, processing it, and layering it with other sounds in order to create your own assets. The reason is that those sounds are recognisable, and it reflects badly on the quality and originality of the game if the audio content is not unique.
In terms of Digital Audio Workstation, my favorite is by far Reaper. It is very powerful and the license costs barely anything, as opposed to its competitors. Some would recommend Nuendo, Cubase, ProTools, Logic, etc. These are all professional DAWs and will work nicely for sound design. Which one to opt for is mostly a matter of habits and the type of workflow that suits you best (and the budget you have..).
An audio interface will help your computer deal with DAW sessions heavy in effects and plugins, but you could do without one for a while if you are just starting and not recording yet. There are some very decently priced entry-level audio interfaces from Steinberg (UR22), Focusrite (Scarlett 2i2 or 2i4), and many more. Once you get more serious and do a lot of recording, it might be worth investing in a good audio interface with quality preamps.
If low on finances, you can start recording with a portable recorder instead of getting expensive microphones and audio interface. I own a SONY PCM m10 and it is a very reliable and useful piece of equipment. Other equivalents such as the Zoom are also worth looking into. You can visit the Gear section of this blog to know more about what kind of equipment I use.
Game audio designing tricks
In game audio, you always want to avoid repetition: hearing the same sounds over and over again, regardless of their quality, will most certainly result in the player muting the audio. One way to create variety in game music is to compose a series of music segments that play in sequence and that can also be layered together in a generative way.
For instance, you could have one loop of music serving as a ‘basic layer’, on top of which you could have music stingers or cues (with a few variations for each of them). The possibilities for music integration are endless. One of the key tricks to game music is to integrate the segments in such a way that the music is generative both horizontally and vertically. What this means is that, for instance, instead of having a single basic music layer which loops, imagine this loop actually being made of a few segments which can succeed each other in any order, or according to set conditions. This is your horizontal generative music. Then, at any moment (or rather depending on your meter and bars and set conditions), music segments and stingers (of which you would have a few variations) are layered additively onto the ongoing basic layer. This is your vertical generative music.
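To make this more concrete, here is a minimal sketch of how horizontal and vertical layers could be picked. This is plain Python, not any middleware’s actual API, and the segment and stinger names are made up for the example:

```python
import random

# Hypothetical segment pools; names are placeholders, not from any real project.
HORIZONTAL_SEGMENTS = ["base_A", "base_B", "base_C"]          # interchangeable base loops
VERTICAL_STINGERS = ["stinger_1", "stinger_2", "stinger_3"]   # overlay variations

def next_base_segment(previous=None):
    """Horizontal layer: pick any base segment except the one just played."""
    choices = [s for s in HORIZONTAL_SEGMENTS if s != previous]
    return random.choice(choices)

def maybe_layer_stinger(on_bar_boundary, chance=0.5):
    """Vertical layer: on a bar boundary, sometimes add a stinger on top."""
    if on_bar_boundary and random.random() < chance:
        return random.choice(VERTICAL_STINGERS)
    return None
```

The point is only the structure: the horizontal choice keeps the base layer moving forward in a non-fixed order, while the vertical choice adds material on top without interrupting it.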
In terms of sound effects, the key is to have more than one sound for a single game event. For instance, if a weapon is fired, you would have at least 3 variations of that specific weapon sound (to put a number on it, but ideally 5 and over), to be triggered randomly every time it is fired. This keeps the player from being annoyed by hearing the same sound over and over again. That’s variation in its simplest form, but you could also divide your weapon fire sound into 3 or even 4 parts (trigger, fire layer 1, fire layer 2, shell falling) and integrate these sounds (each of them with variations) in such a way that they can combine randomly, so that you almost never hear the exact same combination in game. Audio middleware (such as Wwise) lets you do that. It also provides ‘randomisers’ on pitch, volume and other DSP effects so that you can create even more variations out of the sounds you already have.
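As a rough illustration of the idea (generic Python, not actual middleware code; the file names and randomiser ranges are invented for the example):

```python
import random

# Hypothetical asset lists; file names are illustrative only.
TRIGGER = ["trigger_01.wav", "trigger_02.wav", "trigger_03.wav"]
FIRE_LAYER_1 = ["fire1_01.wav", "fire1_02.wav", "fire1_03.wav"]
FIRE_LAYER_2 = ["fire2_01.wav", "fire2_02.wav", "fire2_03.wav"]
SHELL = ["shell_01.wav", "shell_02.wav", "shell_03.wav"]

def fire_weapon():
    """Assemble one shot from random parts, each with a randomised pitch offset
    (in semitones) and volume offset (in dB), mimicking a middleware randomiser."""
    parts = []
    for pool in (TRIGGER, FIRE_LAYER_1, FIRE_LAYER_2, SHELL):
        parts.append({
            "file": random.choice(pool),
            "pitch_st": random.uniform(-1.0, 1.0),
            "volume_db": random.uniform(-2.0, 0.0),
        })
    return parts
```

With 3 variations per part, that’s already 3 × 3 × 3 × 3 = 81 possible file combinations, before the pitch and volume randomisation even comes into play.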
When you design sounds for a game, you need to consider a certain idea of ‘sonic identity’. I suppose you could say the same for other media, but I find this to be especially relevant in games, since they are made of various sections, which the player can visit at anytime, from anywhere. A coherence and sonic identity is what will make your audio stand out. This can be achieved through designing, editing, processing and mixing techniques.
A good example of a game featuring an amazing sonic identity is LIMBO. The sound integration is seamless; the whole atmosphere of the game is glued together by sound that is coherent with itself and with the environment. A style was decided on and has been successfully explored and maintained throughout the game.
How to get better at creating music/soundscapes for games?
Play a lot of games and listen. Try to notice what sort of game parameters affect the music (danger, discoveries, success/failure, etc etc). If you are not currently working on a game, imagine scenarios:
From the start music, you can either go to level 2 or die (your segments and transitions will need to play seamlessly no matter the direction); the music on level 2 will be different, then you can go to level 3 or die, same principle. On top of this you could have music stingers for when the player picks something up, or when an enemy is approaching. You could have a ‘stress’ or ‘combat’ layer that would blend with or replace the original music. There are plenty of possibilities, which can get more and more complex. It is a good exercise to go through the entire process, even with a hypothetical game.
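If it helps, the scenario above can be thought of as a little state graph. Here is a sketch of that way of thinking (the state and event names are hypothetical, and any real project would have its own):

```python
# Hypothetical music state graph for the scenario above.
TRANSITIONS = {
    "start":   {"advance": "level_2", "die": "game_over"},
    "level_2": {"advance": "level_3", "die": "game_over"},
    "level_3": {"advance": "victory", "die": "game_over"},
}

def next_music_state(current, event):
    """Return the music state to transition to; stay put on unknown events."""
    return TRANSITIONS.get(current, {}).get(event, current)
```

Writing the graph down like this forces you to notice every transition your segments must be able to play seamlessly, which is exactly the exercise described above.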
You could also start from an existing game, analyse it, find the patterns and game parameters and re-do some music for it. Test it out in Wwise. Then it’s all about thinking outside the box, being creative and imagining ways to implement audio in a unique and original way.
Essential reads to learn sound design techniques
The Sound Effects Bible – Ric Viers
The Foley Grail – Vanessa Theme Ament
Getting into the industry and networking
Getting into the industry is the hard part. There are many talented people, for very few positions. This means that on top of your own skills, you’ll need to be very proactive in your hunt for projects. Work with cinema and game students in order to create a portfolio. Re-design sound over gameplay videos and cinematics. Look for Kickstarter projects and offer your services.
It involves a lot of hard work at first, but getting a decent portfolio is the first step towards a serious career plan.
Online networking is a good way to stay aware of the latest industry events, which you should attend as much as possible; make yourself known and make sure you have something to show when asked. An online portfolio is one efficient way to do this.
In short, networking, practicing your sound design skills by re-designing sound on existing videos, collaborations with students and Kickstarters, being nice and social, and finally being proactive and organised are some of the helpful actions you can take if you want to be a game audio designer.
I hope this is helpful to some of you. Start by reading a lot about it and watch tutorials. Google is your friend. And play games!
Some more links to share!
First, this real-time rendered Unity cinematic looks and sounds absolutely unreal. It’s been pretty popular on social media this week, but if you haven’t seen it yet, here it is!
The Land’s End game website, from Ustwogames (the creators of Monument Valley!) – can’t wait to get my hands on VR gear to play this one.
If you are working on a VR game and looking for some awesome spatialised audio, the 3Dception engine is what you’re looking for!
This Kickstarter for TRANSMISSION, a game by Paper Unicorn with lots of potential! Looks and sounds great!
If you haven’t felt the hype for Playdead’s next game INSIDE, you are about to! This revisited, deeper version of Limbo seems more than promising – can.not.wait.
Recently found out about this cool looking, fun to play game with an awesome procedurally generated soundtrack! Go on, try it!
Audiokinetic is launching a new blog for their 10th anniversary, be sure to check it out to stay updated on game audio news!
This nature sound library has just become available. Haven’t really had time to check it out but you should!
And finally, a couple of interesting game audio online folios/blogs, enjoy!
This post is now a few years old, so keep in mind that it is meant as a general guide; the information you read here is worth double-checking against the latest, up-to-date documentation of each of the software packages concerned.
Thanks for your understanding.
The purpose of this post is to provide some insight about the 3 most popular audio middleware for game audio integration, and a bit about Unity 5’s audio engine too.
I am taking for granted that you know what audio middleware is. If you don’t and are interested in learning what it means and how it works, I found this concise but detailed article very accurate and informative; it’s a great introduction to game audio and its tools.
I have used Wwise, FMOD, and Fabric to a similar extent on various projects, and thought it would be helpful to some if I wrote down a few of my conclusions. I will do my best to keep this info updated as I continue to learn these tools and as they progress themselves.
I will establish my preference right now, as you will certainly feel my partiality throughout this article: Wwise, by a thousand miles. I will support this with facts and observations of course.
First, let’s talk budget. I made this chart a short while ago to compare license pricing between Wwise and FMOD. It doesn’t include Fabric, but I’ve added Fabric’s licensing right after, taken directly from the website, along with the details of Wwise and FMOD’s licensing.
Basically, this means that the choice of middleware can differ greatly depending on your budget. Fabric is generally cheaper, and its main advantage is that it supports WebGL and all Unity platforms, but if your game is of a certain scale, middleware such as Wwise and FMOD will allow you to push the technical limits further.
I wrote a few comparative documents which I will share here, feel free to download them.
This should hopefully be helpful in determining what software has the best capabilities.
Wwise‘s specs summary (click to get pdf)
Studios using Wwise (non exhaustive)
Key features in Wwise
Why use Wwise (over other audio middleware)
SoundSeed Air plugins – generative sound sources using time-varying parameter sets to drive a synthesis algorithm. No source audio files are necessary (hence no space required).
Effect Editor – a series of audio processing effects that can be tied directly to Real Time Parameter Controls or other in-game variability.
Unity 5 VS Wwise Summary (click to get pdf)
Unity – Extensive scripting involved in audio integration: any behavior other than Play and Loop has to be scripted (see 1st degree manipulations below). This requires a considerable amount of a programmer’s time.
Wwise – Minimal scripting required: all audio behaviors are set within Wwise, the only scripting required is to call game parameters. This prevents unnecessary back and forth between the designer and a programmer.
1st degree manipulations
Unity – The only audio behaviors available to the designer are play, loop, high/low priority, volume, pitch, pan, and basic effects.
Wwise – The same functions and more are available in Wwise (including randomisers, initial delay, conversion settings, loudness normalisation, Real Time Parameter Controls, Game States, Motion Effects, sound instances limit, and more).
On top of basic manipulations, the designer can create multi-action events as well as stop events (among others, see Wwise events below), avoiding the need for scripting these behaviors. This reduces complex audio events to simple manipulations.
In addition, basic editing is available within Wwise, which allows you to reuse the same samples more than once, saving space.
Unity – No 1st degree access to randomisers or containers dictating behaviors (such as random containers). Everything has to be scripted. Fewer possibilities for variability means that a higher number of sound files is needed in order to create variations (takes more space).
Wwise – Excellent possibilities for variability due to easy access to randomisers on volume, pitch, lowpass, highpass, a priority system, and other audio behaviors (see 1st degree manipulations above). These variations reduce the number of sound files needed in the game, saving space.
Unity – The only way to control game audio states and parameters is with Snapshots. Snapshots have limited flexibility, and any transition between them has to be scripted.
Wwise – Wwise allows for a much more flexible game parameter control, all manageable by the designer, and highly customizable: the Real Time Parameter Controls (RTPC).
Complex audio behaviors can be implemented without requiring more space or any more scripting than simple behavior. This creates greater possibility for creativity and elaborate sound design.
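For the curious, an RTPC curve is conceptually just a mapping from a game parameter to an audio property. Here is a generic piecewise-linear sketch of that idea (illustrative Python, not Wwise’s actual evaluation code):

```python
def rtpc_volume_db(game_value, points):
    """Piecewise-linear mapping from a game parameter to a volume in dB,
    similar in spirit to an RTPC curve.
    points: list of (game_value, volume_db) pairs, sorted by game_value."""
    if game_value <= points[0][0]:
        return points[0][1]      # clamp below the curve's first point
    if game_value >= points[-1][0]:
        return points[-1][1]     # clamp above the curve's last point
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= game_value <= x1:
            t = (game_value - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
```

For example, a curve of `[(0, -24), (50, -6), (100, 0)]` could map a hypothetical “player_health” parameter to the volume of a heartbeat layer. The designer draws the curve; only the parameter value has to come from code.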
Unity – Non-existent. There is no differentiation between music and sound integration, making it difficult to create time-sensitive transitions and multilayered music implementation, essential for good dynamic qualities and to give feedback about the gameplay to the user.
Wwise – Wwise’s dedicated music integration engine is one of its greatest strengths: it allows for highly dynamic implementation, greater variability and flexibility in the integration.
Its features include bars and beat recognition, entry and exit cues which allow the layering of multiple tracks or sound cues in sync, a transition system allowing for seamless shifts and variations, and a stinger system making it possible to link game events with musical cues, in sync with the music.
This saves space thanks to the possibilities for dynamic integration, reducing the number of sound files needed.
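As a small aside, the bar and beat sync behind stingers and transitions boils down to simple arithmetic. A generic sketch (not a real middleware API):

```python
def next_bar_time(now_s, bpm, beats_per_bar=4):
    """Time (in seconds) of the next bar boundary, for scheduling a stinger
    so it lands in sync with the music."""
    bar_len = beats_per_bar * 60.0 / bpm   # one bar's duration in seconds
    bars_elapsed = int(now_s // bar_len)
    return (bars_elapsed + 1) * bar_len
```

At 120 BPM in 4/4, a stinger requested at 3.1 s would be scheduled for the bar boundary at 4.0 s; the middleware does this kind of quantisation for you.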
Mixing and testing
Unity – In-game only, meaning that all the sounds need to be implemented in a functional way before the designer can be able to assess the result in relation to other sounds and to the gameplay. Modifications take more time due to back and forth.
Wwise – Allows the sound designer to mix as the work progresses and to test all the sounds, as they would sound in-game, with the Soundcaster session system. Wwise’s Soundcaster simulates a gameplay environment and lets you listen to the sounds in real time. Modifications can be done instantly.
Hierarchy and buses
Unity – Good system that allows micro and macro groups of sounds. Good for mixing levels and effects, but does not include behavior systems.
Wwise – Wwise has a similar “bus” and hierarchy system, but it includes parents of various kinds, determining the behaviors of the children (containers). This system of groups and containers includes features such as Random, Sequence, Blend, Switch, Dialogue, and Motion.
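To give a feel for what “parents dictating the behaviour of their children” means, here is a loose sketch of three of those container types (illustrative Python, not Wwise code):

```python
import random

class RandomContainer:
    """Parent that plays a random child each time it is triggered."""
    def __init__(self, children):
        self.children = children
    def pick(self):
        return random.choice(self.children)

class SequenceContainer:
    """Parent that plays its children in order, wrapping around."""
    def __init__(self, children):
        self.children = children
        self.index = 0
    def pick(self):
        child = self.children[self.index]
        self.index = (self.index + 1) % len(self.children)
        return child

class SwitchContainer:
    """Parent that picks a child based on a game state (e.g. surface type)."""
    def __init__(self, mapping):
        self.mapping = mapping   # e.g. {"grass": ..., "concrete": ...}
    def pick(self, state):
        return self.mapping[state]
```

The children could be sound files or other containers, which is what makes the hierarchy so expressive: a Switch of Randoms, for example, gives you randomised footsteps per surface without a line of game code.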
Unity – Asset localisation can only be done with a licensed plugin and requires scripting.
Wwise – Wwise features localisation options: if the game has dialogue, Wwise can very simply generate multiple soundbanks for different languages, without having to replace assets or repeat manipulations, saving time.
Debugging and performance monitoring
Unity – More research needed.
Wwise – Wwise can connect to the game to monitor performance, to adjust the mixing and for debugging.
The Profiler and Performance Monitor built into the authoring tool make debugging smooth and easy, and greatly help with optimization and memory usage. (You can watch CPU performance, streaming buffers, playing voices and other details in real time.)
Unity – Limited. The manipulations are mainly accomplished through scripting (see Scripting above).
Wwise – The user interface allows the designer to implement audio behaviors quickly and test them immediately.
It makes it easy for the designer to tweak audio behaviors and parameters (with interfaces such as the RTPC graph editor, the sound property editor, and the music editor), avoiding unnecessary back and forth between the designer and a programmer. It also allows for a more detailed integration.
Audio compression and format conversion
Unity – Must be done manually.
Wwise – Multiple options for audio compression and format conversions within Wwise, saving space and time.
Wwise can create non-destructive converted files needed for different platforms, saving run-time memory. Conversion settings for each platform can be customised: number of channels, sample rate, compression codec, and more. The interface also allows you to compare data from the original audio files to the converted ones in order to assess how much memory is saved.
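To see why those conversion settings matter, the uncompressed baseline is simple arithmetic (a generic sketch, before any codec is even applied):

```python
def pcm_size_bytes(seconds, sample_rate, channels, bit_depth=16):
    """Uncompressed PCM footprint: duration x sample rate x channels x bytes per sample."""
    return seconds * sample_rate * channels * (bit_depth // 8)
```

Converting a 10-second 48 kHz stereo ambience down to 24 kHz mono already cuts the uncompressed footprint from about 1.9 MB to about 0.5 MB, and the compression codec then reduces it further.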
Unity – Free (audio engine comes with Unity license).
Wwise – Requires a license (see pricing above).
Wwise VS FMOD
I don’t have a fancy document about Wwise VS FMOD, but can talk a little bit about it, hopefully this can help you reach a decision.
First, both are good, and allow you to do many advanced things.
Quick advice before getting into details: I’d still choose Wwise over FMOD (mostly for all the reasons enumerated above about Wwise), but to be honest your main argument here might be budget. Depending on which ‘budget slice’ your company falls into, one or the other may be more expensive. If the license cost is the same, go for Wwise.
One argument in favor of FMOD that keeps coming back is the fact that it is designed like a DAW (a digital audio workstation; sound designers will know what I’m talking about).
So it is, kinda. But you have to remember that your grid is not always time, and that your objects are not always sound files but rather containers. Games are not a linear medium, and sound integration is not sound designing or editing.
Wwise is nothing like a DAW. So yes, it has a certain learning curve, but once you understand its layout and principles, you realise that it allows for a much more in-depth integration and opens up possibilities beyond what you can even imagine in terms of creative integration. To this day, I have never encountered any technical limitation using Wwise. The same isn’t true for FMOD or Fabric.
Wwise VS Fabric
Fabric was a fantastic tool when its only other option was Unity’s audio engine. It provided more control over audio implementations, allowing for better quality audio with more variations and possibility for interaction.
But now that tools such as FMOD and Wwise exist, the fact that Fabric is a set of tools within Unity rather than standalone software leaves it with a lot of catching up to do. FMOD and Wwise are way ahead of Fabric in terms of:
I’d like to talk to you about Iannis Xenakis, this Greek composer who created most of his work between 1950 and 1980.
This isn’t going to be a biography, it’s going to be about the nature of his work and why I consider him to be a brilliant sound designer way ahead of his time.
We all have those artists who inspire us greatly in our quest for an individual creative voice. Xenakis is most certainly on the top of my list, and I hope this post will help you understand why.
This also isn’t going to be about his music itself. Confession: I am familiar with only a very few pieces by Xenakis (something I need to address shortly…). What fascinates me about him are his ideas, his vision, his audacity, his overall multidisciplinary achievements, his daring, relentless creativity.
I read a book recently, entitled Iannis Xenakis – Composer, Architect, Visionary.
This compilation of essays enlightens the reader about Xenakis’s life, philosophy, and artistic and personal achievements. You could have never heard his famous Metastaseis and still enjoy the book greatly.
*The text in italic in this post is taken directly or paraphrased from the book
Xenakis was a creator before anything else. It would be a great misjudgement to label him as a composer only. His work involved experimental composition, experimental visual art and installations, experimental architecture, and much more. He was an experimenter. He dared to question our ideas about art, society and even science. His own passions were many, including archaeology, literature and astronomy.
When composing, he had a way of ‘imaging’ music which was far from our conventional music notation system with lines and dots. Nor was he creating graphic scores the way some of his contemporaries, such as John Cage, had started doing a little earlier. He was rather working through strategies to deploy physics and mathematics as means to organise sound. This is a direct reflection of his compositional philosophy, which was founded on mathematical and scientific ideas.
This way of drafting his work on paper and thinking through the hand very much foreshadowed the advent of Process Art (of which I’m a big fan as well).
Xenakis once remarked that he did not compose at the piano, that instead his tools were mathematics and computer science.
One of his most brilliant insights was that it is by going to the very physical foundations of artistic phenomena – and their basis in physics – that one can find viable ways to move forward.
As much as Xenakis’s music was said to sweep over you like a force of nature, his artistic approach was very much inspired by nature too.
Some tragedies in his life led him to find meaning in things beyond the reach of accident and time, a realm of immutable laws he would find mirrored in Nature.
I think it is a pretty strong statement to rely on Nature itself, as a last resort, for some sort of bearing in life, some stronghold, because everything else just isn’t strong or meaningful enough.
He desired to represent through music how everything is in flux, how everything around us moves, shifts, is in constant turmoil, and that we navigate in the provisional, we must reconsider each thought at every instant.
His piece Metastaseis (1953) was the first to express this vision, describing certainty and uncertainty, timelessness and motion. He relied partly on probability theory to achieve this.
He then developed an approach that would later become one of his trademark sounds, first heard in Pithoprakta (1955-56): the cloud of points. Xenakis suggested that these are like things heard in nature, such as swarms of cicadas, or rain pattering on a roof. This active listening for natural sound events is one of the many things I find inspiring in Xenakis’s sonic creativity.
His approach, which sometimes seemed hyper-cerebral, was in fact deeply grounded in nature as well as in human experience.
He enjoyed creating immersive environments (which you’ll find out is a passion of mine if you browse this blog), of which his site-specific multimedia works are some examples, such as the Philips Pavilion and its 425 coordinated loudspeakers.
One of his later pieces, Terretektorh (1965-66), was said to be partly inspired by one of Xenakis’s many intense experiences with nature and its sounds:
Xenakis spent summers in Corsica in the company of his wife and daughter, surrounded by the sea, gazing at stars, immersed in forest sounds, or rattled by the intensity of a tempest. It was an almost violent primordial feeling he was after, sound shifting from instrument to instrument, as if between loudspeakers – a line traced in space.
His passion for astronomy was also palpable in his Diatope (1978) in Paris, where lights, lasers, pivoting mirrors and prisms created galactic movement rendered accessible.
Other Polytopes include the ones of Mycènes (1978) and Persepolis (1971).
In an era where the worldwide avant-garde music scene consisted of a handful of influential composers tightly bound by common beliefs, Xenakis had the audacity to challenge and question the very foundations of their aesthetic choices.
While Boulez and Xenakis somewhat shared the view that new music had to reflect a modern conception of a universe in perpetual expansion, they had conflicting ways of expressing it. The music of Boulez and the other avant-garde composers was known as ‘pointilliste’, featuring complex textures and meticulous organisation which in the end sounded more like restless randomness, whereas Xenakis opted for a more global approach, more natural-sounding to the ear, giving up the note-to-note approach of serialism for one better suited to manipulating these global entities of density, texture, and tendency towards greater or lesser complexity.
This consideration for the medium and its reception, sound in this case, is why I consider him a brilliant sound designer. His creative approach wasn’t only about finding a way to send a message through art, but also about how it would be received. This message will be heard, it is thus important to factor in the human ear and how our brains will interpret this audible message.
This passionate artist, who became a famous, timeless composer without any previous musical training, was always one to stand apart, never altogether fitting into any of his contemporaries’ company. An aura of loneliness surrounds his work; even his name “Xenakis” can be translated as “little stranger”.
I will leave you with an excerpt from the memories of Xenakis’s daughter, Mâkhi, featured at the end of the book.
He sometimes stared at the sky, searching for that particular moment when he could at last, in extreme hand-to-hand combat, draw close to the untamed elements of nature, so as to nourish and renew himself in them.
The thunder rumbles; we’ve taken refuge in our tent. And again his face is radiant, peaceful. He uses his watch to calculate meticulously the number of seconds between the brutal bursts of lightning that tear apart the night and the explosions of thunder as they grow closer and closer to us. When the storm is at last directly above our heads he leaves the tent, half-naked; he runs and disappears little by little into this grandiose spectacle of sound and apocalyptic light.
In the early morning, when dew covers every particle of the arid countryside, he crouches for hours, scrutinizing each very particular spiderweb. A multitude of parallel stretched lines sketch out complex architectures comprising cut-off cones, convex and concave surfaces conjoined – they are the natural ancestors of the Philips Pavilion and the polytopes…
A second post about cool projects and links I’d like to share!
First, put your headphones on and visit this website for some acoustic ecology! Amazing recordings of some of the wildest places on Earth, in your ears. What more can you ask for.
Then, you may have heard of these; they have circulated a lot on social media already, and there is quite a bit of hype surrounding them. They seem genuinely promising and I am looking forward to trying them myself! The Here Earbuds for active listening:
The Soundsnap website, because I wouldn’t want to take for granted that all of you already know it, although it’s getting quite famous. You’ll find loads of quality sound effects there, and decent deals on memberships to download them.
Radio Papesse, a non-profit organisation offering ”an open space to experiment in the field of art; with a particular attention to the sonic dimension of the artistic production”
This enlightening article by Bernie Krause about how the sounds of nature get quieter every day…
The book Environmental Sound Artist meant to be released this August, available for pre-order (already pre-ordered mine!) – looks very promising.
The experimental tech company Magic Leap:
And finally a few cool visual designs and digital sculptures websites:
Yesterday (8 June 2016) I went to the State of Play event held in Dublin Institute of Technology.
It was overall a great event, many speakers with relatively short talks (10-20 minutes each) kept the evening dynamic and filled with a variety of sage advice and colorful demonstrations.
Among the speakers were (not in order)
Unfortunately I didn’t take note of all the names and can’t find a complete list of speakers, so I might be forgetting one or more… sorry!
(Also William Pugh was meant to be there but unfortunately could not make it due to his recent leg injury. We wish you a quick recovery William!)
I strongly suggest you check out those websites, all of them had interesting things to say.
Among my favorites was definitely Robin Baumgarten and his ‘hardware experimental game projects’. He showed us a bit of his process while working on projects such as the Line Wobbler and A Dozen Sliders.
It is always inspiring to see someone creating something entirely new from scratch. Makes you want to lock yourself in a studio and do the same, because why not!
I was also somewhat surprised (or maybe not) that many of the talks related to coping with stress and creative blocks, motivation and self-care. The games industry attracts passionate, talented people hoping to fulfill themselves working on a project they believe in. Most of the time I like to think that this is true, but it would be foolish to ignore the harsh reality of crunch times, crazy deadlines and the immense amount of pressure that comes with the job.
I can imagine that all of the speakers went through this realisation more than once in their career, and provided us with their tips and techniques to try to stay sane in these periods of high stress.
There was also some talk about the value of networking (Kevin Murphy), as well as advice on how to create game narratives starting from personal experience (Sherida Halatoe)
Llaura’s talk, which was more storytelling than speech, was also very powerful as she played an excerpt of her latest game If Found Please Return, which seems really promising.
The generally informal tone to the evening made it refreshing and quite friendly. The event continued in an even more informal manner at the Odessa pub for some social drinks.
Looking forward to State of Play 2017!