How to make the most of Reaper as a Sound Design tool – Part 1: Getting Started

I have recently contributed a two-part article series to the A Sound Effect blog on how to make the most of Reaper as a sound design tool.

The first article looks at getting started with Reaper and the initial setup. You can find it here.

The second will be up in about a week’s time and will cover more of the workflow, along with some good habits to adopt from the start. Keep an eye out!

🙂

Loudness and metering in game audio

This post is not a tutorial on loudness and metering in game audio. Rather, it is about sharing my findings on something I am currently researching, in the hope that it can help those of you who are in a similar position. I will definitely revisit this post at a later stage of my current project to share my experiences and conclusions.

Since this is a work in progress, or rather learning in progress, feel free to comment and let me know about better or other ways to see or do these things.

I’ve been working on my current project for a few months now and, although I had been wondering about loudness and metering earlier in the process, the time has only recently come for me to make decisions on the matter, and hence to look deeper into it.

First, I found an amazing resource which helped me understand much of this very quickly. This article from Stephen Schappler is a real gem and I strongly recommend you have a read. I will mention some of the things he shared in his article here, as well as expand on them based on my own experience.

This interview with Gary Taylor from Sony is equally very instructive, going into further detail about the specs recommended by Sony’s Audio Standards Working Group (ASWG).

 

Industry standards (or lack thereof) and game audio solutions

There are currently no set standards for loudness measurement in game audio, resulting in wide variations and discrepancies in loudness from one game to another. The differences in gaming setups and devices also present a challenge when it comes to developing those standards.

One way to start looking into this is to refer to the ITU-R BS.1770 recommendation, which describes how to measure loudness and true-peak audio level.

To put it simply, these algorithms measure Loudness Level at three different time scales:

  • Integrated (I) – Full program length
  • Short Term (S) – 3 second window
  • Momentary (M) – 0.4 second window

What these mean for game audio will probably differ from what they mean in TV: there is no fixed program length in interactive media, and 3-second and 0.4-second windows may prove too short to take any meaningful measurement, again owing to the dynamic and interactive nature of the medium.
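To make these time scales concrete, here is a minimal Python sketch of how you might compute them offline on a recorded capture of your game’s output. It assumes the pyloudnorm and soundfile packages and a placeholder file name; pyloudnorm’s integrated measurement follows the BS.1770 gating, while the short-term and momentary figures below are simple non-overlapping window approximations rather than a certified meter:

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("game_capture.wav")  # placeholder capture of the game output
meter = pyln.Meter(rate)                  # BS.1770 meter

# Integrated (I): measured over the full capture, with gating.
print(f"Integrated: {meter.integrated_loudness(data):.1f} LUFS")

# Short Term (S) and Momentary (M): 3 s and 0.4 s windows.
for label, seconds in (("Short Term", 3.0), ("Momentary", 0.4)):
    window = int(seconds * rate)
    values = [
        meter.integrated_loudness(data[i:i + window])
        for i in range(0, len(data) - window + 1, window)
    ]
    print(f"{label} max: {max(values):.1f} LUFS")
```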

This is what Gary Taylor recommended about adapting the BS.1770 measurement terms to game audio (in this interview):

We recommend that teams measure their titles for a minimum of 30 minutes, with no maximum, and that the parts of any titles measured should be a representative cross-section of all different parts of the title, in terms of gameplay.

It would also be wise to consider the Loudness Range (LRA, from the EBU R128 family of recommendations) and the true peak level, which BS.1770 also covers. In order to do so, you need good tools (an accurate loudness meter) and a good environment (calibrated and controlled).

In terms of numbers, let’s look at the EBU R128 and ATSC A/85 broadcast recommendations, which we can assume pursue a similar objective when working on console and PC games, where your environment and setup would be the same as, or similar to, your TV setup.

Those recommendations are:

R128 (Europe)

  • Program level average: -23 LUFS (+/-1)
  • True peak maximum: -1 dBTP

A/85 (US)

  • Program level average: -24 LKFS (+/-2)
  • True peak maximum: -2 dBTP

 

However, these numbers may not apply to the mobile games industry, and different terms would need to be discussed in order to set standard portable device levels. Some work has already been done on that matter by Sony’s ASWG, who are among the first (if not the first) to consider standardising the game audio loudness metering process and providing recommendations. Here are their internal loudness recommendations for their 1st party titles:

Sony ASWG-R001

  • Average loudness for console titles: -23 LUFS (+/-2)
  • Average loudness for portable titles: -18 LUFS
  • True peak maximum: -1 dBTP

Gary Taylor mentioned in his interview that studios such as Media Molecule and Rockstar are already conforming to Sony’s specs, both in terms of average loudness and dynamic range. This seems to indicate that progress is slowly but surely being made towards game audio loudness standardisation.

How to proceed?

The recommended process is to send the audio output from your game directly into your DAW and measure loudness with a specialised plugin. Take care to ensure your outputs and inputs are calibrated and that the signal remains 1:1 across the chain.
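If you record the game’s output to a file as part of this process, you can also double-check the numbers offline. Continuing the sketch from earlier, here is a minimal Python example; note that the 4x-oversampled peak is only an approximation of true peak, and the targets in the printout are the Sony ASWG console figures quoted above:

```python
import numpy as np
import soundfile as sf
import pyloudnorm as pyln
from scipy.signal import resample_poly

data, rate = sf.read("game_capture.wav")  # placeholder capture of the game output

# Integrated loudness per BS.1770:
loudness = pyln.Meter(rate).integrated_loudness(data)

# Approximate true peak: oversample 4x, then take the sample peak.
oversampled = resample_poly(data, up=4, down=1, axis=0)
true_peak_db = 20 * np.log10(np.max(np.abs(oversampled)))

print(f"Integrated loudness: {loudness:.1f} LUFS (ASWG console target: -23 +/- 2)")
print(f"Approx. true peak:   {true_peak_db:.1f} dBTP (ceiling: -1 dBTP)")
```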

Gary Taylor’s plugin recommendations to measure loudness:

As far as analysis tools, I personally have yet to find anything close to the Flux Pure Analyzer application for measuring loudness, spectral analysis, true peak, dynamic range and other visualisation tools. As far as loudness metering generally, Dolby Media Meter 2, Nugen VizLM, Waves WLM, and Steinberg SLM-128 (free to Nuendo and Cubase users) are all very good.

I have yet to experiment with those plugins and decide on my favorite tools. I happen to have the Waves WLM, so I will give that a try first, then compare it with the demo version of Nugen VizLM and see whether I want to buy it. I will update this article with feedback from my experience when ready.

Wwise and FMOD now also support BS.1770 metering, which is extremely convenient for metering directly within the audio engine.

In Fabric, there are Volume Meter and Loudness Meter components which allow you to meter one specific Group Component. You could, for instance, apply them to a Master Group Component to monitor the overall signal of the game.

[image: loudnessmeter]

 

However, even when using these tools within the audio engines, I think it is worth measuring the direct output of your game from your DAW with the help of a mastering plugin. I see this as a way to ‘double-check’: I’m a big fan of making sure everything works as it is meant to, and listening to the absolute final result of the product seems like a valid way to do this.

Finally, I unfortunately don’t have the luxury of working in a fully calibrated and controlled studio environment. If you are in a similar position, I’d strongly recommend considering renting a studio space towards the final stages of game production to perform some more in-depth mixing and metering.

I hope this was useful even though this info is based mostly on research rather than pure experience. I will most definitely revisit this topic once my remaining questions are answered 🙂

 


 

Audio processing using MaxMSP

If you follow me on Twitter, you will have seen a few recent tweets about my latest experiments with Sci-Fi bleeps and bloops.

I created a MaxMSP patch that allows me to process sound files in such a way that the original file is nearly unidentifiable, and the results sound nicely techy and Sci-Fi.

Over time, I had created a few simple individual patches, each performing one of the following types of processing (a minimal sketch of the ring modulator follows the list):

  • Phaser+Delay
  • Time Stretcher
  • Granulator
  • Phaser+Phaseshift
  • Ring Modulator
  • Phasor+Pitch Shift
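The originals are MaxMSP patches, but to give a feel for how simple each ‘instrument’ is, here is a minimal Python sketch of just one of them, the ring modulator. It assumes the numpy and soundfile packages, and the file names are placeholders:

```python
import numpy as np
import soundfile as sf

data, rate = sf.read("input.wav")   # placeholder source file
if data.ndim > 1:
    data = data.mean(axis=1)        # mix down to mono for simplicity

carrier_hz = 220.0                  # carrier frequency: try sweeping this
t = np.arange(len(data)) / rate
carrier = np.sin(2 * np.pi * carrier_hz * t)

# Ring modulation is a sample-by-sample multiplication with the carrier:
# the output contains sum and difference frequencies, which is a big part
# of what makes the source nearly unidentifiable.
out = data * carrier

sf.write("output_ringmod.wav", out, rate)
```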

I decided to assemble those patches in such a way that I could play with multiple parameters and multiple sounds at the same time.

In order to do so, I mapped the various values and parameters of my patch to a MIDI controller (KORG nanoKONTROL2), and selected a few sounds I know work well with the different parts of the patch, to be chosen from a dropdown menu.

This is what the patch looks like:

[image: scifipatch02.JPG]

All the different ‘instruments’ are contained in subpatches. They are all quite simple but create interestingly complex results when put together.

The subpatches:

[image: scifipatch03]

With everything organised nicely in Presentation Mode, I can interact with the different values using my MIDI controller:

[image: scifipatch01.JPG]

The mapping system:

[image: scifipatch04.JPG]

I can then record the result to a wav file on disk, which I am free to edit in Reaper afterwards, selecting the nice bits and making cool sound effects out of these original sources.

Record to file:

[image: scifipatch05]

This process can go on almost indefinitely, as I can then feed the processed sound back into the patch and see what comes out of it.

Here is a little demo of the patch and its ‘instruments’:

 

And some bleeps and bloops I made using this patch:

 

You can visit the Experiments page to hear more tracks 🙂

 

 

Game Audio Asset Naming and Organisation

 

Whether you are working on audio for an Indie or a AAA title, chances are you will have to deal with a large number of assets, which will need to be carefully named and organised.

A clear terminology, classification and organisation will prove crucial not only for you, to find your way around your own work, but also for your team members, whether they are part of the audio department or of the front-end team helping you implement your sounds into the game.

I would like to share my way of keeping things neat and organised, in the hope that it will help the less experienced among you start off on the right foot. I don’t think there is only one way to do this though, and those of you who have a bit of experience might have a system that already works, and that’s perfectly fine.

I will go over creating a Game Audio Design Document and dividing it into coherent categories and subcategories, using a terminology that makes sense, event naming practices, and folder organisation (for sound files and DAW sessions) on your computer/shared drive.

Game Audio Design Document

First, what is an Audio Design Document? In a few words, it is a massive list of all the sounds in your game. Organised according to the various sections of the game, it is where you list all the events by their name, assign priorities, update their design and implementation status, and note descriptions and comments.

The exact layout of the columns and category divisions may very well vary according to the project you are currently working on, but here is what I suggest.

COLUMN 1: Scene or Sequence in your game engine (very generic)

COLUMN 2: Category (for example the object type/space)

COLUMN 3: Subcategory (for example the action type/more specific space)

COLUMN 4: Event name (exactly as it will be used in the game engine)

COLUMN 5: Description (add more details about what this event refers to)

COLUMN 6: Notes (for instance does this sound loop?)

COLUMN 7: 3D positioning (which will affect the way the event is implemented in game)

COLUMNS 8-9-10: Priority, Design, Integration (to color code the status)

COLUMN 11: Comments (so that comments and reviews can be noted and shared)

It would look something like this:

[image: AudioDesignDoc-Ex.jpg]

 

This document is shared among the members of the audio team so that everyone can refer to it to know the details of any asset. You could even have a ‘name’ or ‘who?’ column to indicate who is responsible for a specific asset if working in a large audio team.

It is also shared with the art team if the art director is your line manager, and with any member of the front-end team involved in audio implementation.

This list may also not be the only ‘sheet’ of the Audio Design Document (if you are working in Google Sheets, or the equivalent in another medium). Other sheets could include one created especially for the music assets, another for bugs or requests to keep track of, another for an Audio Roadmap, and so on. Basically, it is a single document to which all team members can refer in order to keep up to date with the audio development process. You can equally add anything that has to do with design decisions, references, vision, etc.

While big companies may very well have their own systems in place, I find this type of document especially useful when working in smaller companies where such a pipeline has not yet been established.

I’d like to point out as well that, when creating such a document, you will need to remain flexible throughout the development process, especially if you join the project at an early stage, when sections, names and terminology in the game are bound to change. Throughout those changes, it is important to update the doc regularly and stay organised; otherwise it can rapidly become quite chaotic.

Terminology

In terms of terminology, this is again something that can be done in many ways, but I’d say one of the most important things is that, once you’ve decided on a certain terminology, you stay consistent with it. Be careful to name the events in your audio engine exactly the way you named them in your design document; otherwise you will very rapidly get confused between all those similarly named versions of the same event, and won’t know which one is the correct one to use.

What I like to do first is use no capital letters, all lowercase, so that it doesn’t get confusing when events need to be referred to in the code. Programmers don’t need to ask themselves where the capital letters were; it may seem like a small thing, but when there are 200+ events, it is appreciated.

Then there is the matter of the underscore ‘_’ versus the slash ‘/’. That may depend on the audio engine and/or game engine you are using. For instance, using Fabric in Unity, all my events are named with slashes, for the simple reason that Unity then automatically divides them into categories and subcategories in all its dropdown menus. This becomes very handy when dealing with multiple tools and hundreds of events.

Then the organisation of your audio design document pretty much tells you how to name your events. For instance:

category_subcategory_description_number  (a number may not always be required)

base_innerbase_overall_ambience

character1_footsteps_grass_01

etc

If you dislike long names, you can use abbreviations, such as:

ch1_fs_gr_01

I personally find they can become quite confusing when sharing files, but if you do want to use those, simply remember to be clear on what they mean, for instance by writing their abbreviated and full name in the doc, and make sure that there is no confusion when multiple team members are working with those assets.
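Because consistency is the whole point, it can be worth automating the check. Here is a minimal Python sketch that flags event names breaking the all-lowercase, underscore-separated convention described above; the regular expression and the sample event list are my own assumptions, so adapt them to your own terminology:

```python
import re

# Convention above: lowercase/numeric segments separated by underscores.
EVENT_PATTERN = re.compile(r"^[a-z0-9]+(_[a-z0-9]+)*$")

def check_event_names(names):
    """Return the event names that break the convention."""
    return [n for n in names if not EVENT_PATTERN.match(n)]

# Hypothetical event list, as it might appear in the design document:
events = [
    "base_innerbase_overall_ambience",
    "character1_footsteps_grass_01",
    "Character1_Footsteps_Grass_02",  # capital letters: flagged
    "ch1 fs gr 03",                   # spaces: flagged
]

for bad in check_event_names(events):
    print(f"Non-conforming event name: {bad!r}")
```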

Folder organisation

Whether you are working as a one-person department on your own machine or sharing a repository for all the audio assets, a clear way of organising these will be crucial. When working on a project of a certain scale (which doesn’t need to be huge), you will rapidly accumulate dozens of GB of DAW sessions, video exports, and files of previous or current versions.

I suggest you keep separate directories for your sound files, DAW sessions and other resources. Your sound files directory should be organised in the same way as your Audio Design Document. This way, it is easy to know exactly where to find the sound(s) constituting specific events.

I also suggest that you keep a different (yet similar) directory for previous versions. You may call it ‘PreviousVersions’ or something equivalent, and give it a hierarchy identical to the ‘current version’ one. This way, if you need to go back to an older version, you know exactly where to find it and can access it quickly. You can name those versions by number (keep the same terminology, and add a V01, V02 at the end).
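As an illustration, here is a minimal Python sketch that builds matching ‘current’ and ‘PreviousVersions’ hierarchies in one go; the scene and category names are hypothetical, echoing the design document layout above:

```python
from pathlib import Path

# Hypothetical hierarchy mirroring the Audio Design Document:
# scene / category / subcategory
HIERARCHY = {
    "level1": {"props": ["door", "machinery"], "ambience": ["inner", "outer"]},
    "level2": {"characters": ["footsteps", "foley"]},
}

def build_tree(root: Path) -> None:
    """Create the scene/category/subcategory folder tree under a root."""
    for scene, categories in HIERARCHY.items():
        for category, subcategories in categories.items():
            for sub in subcategories:
                (root / scene / category / sub).mkdir(parents=True, exist_ok=True)

# Same layout for current sound files and for older versions:
build_tree(Path("SoundFiles"))
build_tree(Path("PreviousVersions"))
```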

Finally, for your DAW sessions, you may decide to go for something a little different in terms of hierarchy, but I find that maintaining a similar order is very useful for staying organised and being able to quickly go back to sessions you may not have touched in a while.

I also highly recommend that you use ‘save as’ on your sessions, both to back them up and any time you make changes to a version of a sound. First, corrupted sessions do happen, so you’ll be extremely glad not to have lost weeks of work when one does; but also, if your manager prefers an earlier version of your sound with some modification, you can easily go back to exactly that version and start again from there, while keeping the latest version intact.

So, if my asset hierarchy and division in my Audio Design Document looks like the one in the image above, my folder hierarchy would look something like this:

 

And finally, you can create a folder for Video Exports, for instance, and keep your video screen captures there, again organised in coherent folders. The principle remains the same for any other resources you may have.

I hope this was helpful, happy organising 🙂

 

 

 

 

Getting started in Game Audio

This post is for those of you who are passionate about sound and are wondering how to become a sound designer for videogames: where to start, how to enter the industry, what software and tools you need to know, who to talk to, etc.

I get these kinds of questions a lot, and although there is no magic recipe, no step-by-step instructions that will guarantee you a successful career in videogames, there are some things that are useful to know and that can help you build an attractive CV.

What equipment to use and/or start with?

No two sound designers will use the same equipment, but I can tell you a bit about the type of equipment you would need and the workflow. The info I give here is pretty much the minimum requirement. You can most certainly take this much further, but here is what I consider essential in terms of hardware and software.

The hardware is the equipment you need in order to record your own sounds:

  • microphone (and XLR cable) and/or portable recorder
  • audio interface
  • decent computer
  • good headphones

And the software, which you need not only to record, but also to edit and mix:

  • a DAW (Digital Audio Workstation)

Then, when working on an actual game project, you’ll need to implement your sounds into the game. The type of software needed is called audio middleware, and it is what communicates with the game engine, acting as a bridge between the audio integration and the game events. Some large companies use their own in-house audio middleware (and game engine), but I’m not going to get into this. On the market, whether your game is made with Unity, Unreal or any other game engine, there are a few options in terms of audio middleware, and they are usually (although not always) compatible with any of the game engines. Three of them are worth mentioning: Wwise, FMOD and Fabric.

In my opinion, the best one out there is by far Wwise (read the Audio Middleware Comparison post to understand why). If you are working on a commercial title you need to consider licenses, but they usually all have some sort of deal (if not free use) for Indie titles, students, or non-commercial projects. This type of middleware is what gives you a lot of creative freedom in the interactive design and integration.

To learn more about audio integration, a good way to get introduced to the logic behind it is to watch tutorials, such as the Wwise tutorials. The advanced ones can be overwhelming, but the overview ones will be very useful in gaining a better understanding of audio integration and how to design sounds with that kind of logic in mind.

You also need Sound Libraries. They are part of the workflow. Especially when working with low budgets and tight schedules, it can be challenging to record all the sounds you need yourself. Using sound libraries is a good way to start: you begin with good-quality sound files and familiarise yourself with the editing process, which is one of the most creative parts of the design.

Be careful though: I strongly suggest never using a sound taken directly from a sound library as-is, but rather transforming it, processing it, and layering it with other sounds in order to create your own assets. The reason is that those sounds are recognisable, and it reflects badly on the quality and originality of the game if the audio content is not unique.

Some of the good and affordable sound libraries out there include: Boom, Blastwave, Soundsnap, and many, many more, which are easy to find with a bit of research.

In terms of Digital Audio Workstations, my favorite is by far Reaper. It is very powerful and the license costs barely anything, as opposed to its competitors’. Some would recommend Nuendo, Cubase, Pro Tools, Logic, etc. These are all professional DAWs and will work nicely for sound design. Which one to opt for is mostly a matter of habit and the type of workflow that suits you best (and the budget you have…).

An audio interface will help your computer deal with DAW sessions heavy in effects and plugins, but you could do without one for a while if you are just starting out and not recording yet. There are some very decently priced entry-level audio interfaces from Steinberg (UR22), Focusrite (Scarlett 2i2 or 2i4), and many more. Once you get more serious and do a lot of recording, it might be worth investing in a good audio interface with quality preamps.

If low on funds, you can start recording with a portable recorder instead of getting an expensive microphone and audio interface. I own a Sony PCM-M10 and it is a very reliable and useful piece of equipment. Other equivalents, such as the Zoom recorders, are also worth looking into. You can visit the Gear section of this blog to learn more about the kind of equipment I use.

Game audio design tricks

  • Variety

In game audio, you always want to avoid repetition: hearing the same sounds over and over again, regardless of their quality, will most certainly result in the player muting the audio. One way to create variety in game music is to compose a series of music segments that play in sequence and can also be layered together in a generative way.

For instance, you could have one loop of music that serves as a ‘basic layer’, on top of which you could have music stingers or cues (with a few variations for each of them). The possibilities for music integration are endless. One of the key tricks of game music is to integrate the segments in such a way that the music is generative both horizontally and vertically. What this means is that, for instance, instead of having a single basic music layer which loops, imagine this loop actually being made of a few segments which can succeed each other in any order, or according to set conditions. This is your horizontal generative music. Then, at any moment (or rather, depending on your meter and bars and set conditions), music segments and stingers (of which you would have a few variations) are layered additively onto the ongoing basic layer. This is your vertical generative music.
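To make the horizontal/vertical idea concrete, here is a minimal Python sketch of the scheduling logic only; the segment names, stinger probability and bar loop are made up for illustration, and in a real project this logic would live in your middleware:

```python
import random

# Horizontal: base-layer segments that can succeed each other in any order.
BASE_SEGMENTS = ["base_a", "base_b", "base_c"]      # hypothetical names
# Vertical: stinger variations layered on top of the ongoing base layer.
STINGERS = ["stinger_01", "stinger_02", "stinger_03"]

def next_bar(previous_segment=None):
    """Pick the next base segment (horizontal) and maybe a stinger (vertical)."""
    # Horizontal: any segment except an immediate repeat.
    choices = [s for s in BASE_SEGMENTS if s != previous_segment]
    segment = random.choice(choices)
    # Vertical: on some bars, layer a randomly chosen stinger on top.
    stinger = random.choice(STINGERS) if random.random() < 0.3 else None
    return segment, stinger

segment = None
for bar in range(8):
    segment, stinger = next_bar(segment)
    print(f"bar {bar}: play {segment}" + (f" + layer {stinger}" if stinger else ""))
```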

In terms of sound effects, the key is to have more than one sound for a single game event. For instance, if a weapon is fired, you would have at least 3 (to put a number on it, but ideally 5 or more) variations of this specific weapon, to be triggered randomly every time it is fired. This avoids the annoyance of hearing the same sound over and over again. That’s variation in its simplest form, but you could also divide your weapon fire sound into 3 or even 4 parts (trigger, fire layer 1, fire layer 2, shell falling), and integrate these sounds (each of them with variations) in such a way that they combine randomly, resulting in almost never hearing the exact same combination in game. The audio middleware (such as Wwise) lets you do that. It also provides ‘randomisers’ on pitch, volume and other DSP effects so that you can create even more variations out of the sounds you already have (a rough sketch of this follows).
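As an illustration of what a random container with pitch and volume randomisers does under the hood (middleware like Wwise handles all of this for you), here is a minimal Python sketch; the layer names, variation counts and ranges are made up:

```python
import random

# Hypothetical layers of a single weapon-fire event, each with variations:
LAYERS = {
    "trigger": ["trigger_01", "trigger_02", "trigger_03"],
    "fire_1":  ["fire1_01", "fire1_02", "fire1_03"],
    "fire_2":  ["fire2_01", "fire2_02", "fire2_03"],
    "shell":   ["shell_01", "shell_02", "shell_03"],
}

def fire_weapon():
    """Combine one random variation per layer, with randomised pitch/volume."""
    for layer, variations in LAYERS.items():
        clip = random.choice(variations)
        pitch_cents = random.uniform(-100, 100)   # +/- one semitone
        volume_db = random.uniform(-2.0, 0.0)
        print(f"{layer}: {clip} (pitch {pitch_cents:+.0f} cents, {volume_db:+.1f} dB)")

fire_weapon()  # a different combination nearly every shot
```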

  • Coherence, unity, identity

When you design sounds for a game, you need to consider a certain idea of ‘sonic identity’. I suppose you could say the same for other media, but I find this to be especially relevant in games, since they are made of various sections which the player can visit at any time, from anywhere. Coherence and a sonic identity are what will make your audio stand out. This can be achieved through designing, editing, processing and mixing techniques.

A good example of a game featuring an amazing sonic identity is LIMBO. The sound integration is seamless, and the whole atmosphere of the game is glued together by sound that is coherent with itself and with the environment. A style was decided on and has been successfully explored and maintained throughout the game.

How to get better at creating music/soundscapes for games?

Play a lot of games and listen. Try to notice what sort of game parameters affect the music (danger, discoveries, success/failure, etc.). If you are not currently working on a game, imagine scenarios:

From the start music, you can either go to level 2 or die (your segments and transitions will need to play seamlessly no matter the direction); the music on level 2 will be different; then you can go to level 3 or die, same principle. On top of this you could have music stingers for when the player picks up something, or when an enemy is approaching. You could have a ‘stress’ or ‘combat’ layer that would blend with or replace the original music. There are plenty of possibilities, which can get more and more complex. It is a good exercise to go through the entire process, even with a hypothetical game.

You could also start from an existing game, analyse it, find the patterns and game parameters and re-do some music for it. Test it out in Wwise. Then it’s all about thinking outside the box, being creative and imagining ways to implement audio in a unique and original way.

Essential reads to learn sound design techniques

The Sound Effects Bible – Ric Viers

The Foley Grail – Vanessa Theme Ament

Game Sound: An Introduction to the History, Theory and Practice of Video Game Music and Sound Design – Karen Collins

Getting into the industry and networking

Getting into the industry is the hard part. There are many talented people for very few positions. This means that on top of your own skills, you’ll need to be very proactive in your hunt for projects. Work with film and game students in order to create a portfolio. Re-design sound over gameplay videos and cinematics. Look for Kickstarter projects and offer your services.

It involves a lot of hard work at first, but getting a decent portfolio is the first step towards a serious career plan.

Online networking is a good way to stay aware of the latest industry events, which you should attend as much as possible; make yourself known, and make sure you have something to show when asked. An online portfolio is one efficient way to do this.

 

In short: networking, practicing your sound design skills by re-designing sound on existing videos, collaborating with students and Kickstarter projects, being nice and social, and finally being proactive and organised are some of the helpful actions you can take if you want to be a game audio designer.

I hope this is helpful to some of you. Start by reading a lot about it and watch tutorials. Google is your friend. And play games!

 

Audio Middleware Comparison – Wwise/FMOD/Fabric/Unity 5

The purpose of this post is to provide some insight into the three most popular audio middleware tools for game audio integration, and a bit about Unity 5’s audio engine too.

I am taking for granted that you know what audio middleware is. If you don’t, and are interested in learning what it means and how it works, I found this concise but detailed article very accurate and informative, and a great introduction to game audio and its tools.

These descriptive charts and tables also help make sense of the data surrounding audio middleware.

I have used Wwise, FMOD, and Fabric to a similar extent on various projects, and thought it would be helpful to write down a few of my conclusions. I will do my best to keep this info updated as I continue to learn these tools and as they progress themselves.

I will establish my preference right now, as you will certainly feel my partiality throughout this article: Wwise, by a thousand miles. I will support this with facts and observations of course.

First, let’s talk budget. I made this chart a short while ago to compare license pricing between Wwise and FMOD. It doesn’t include Fabric, but I’ve added Fabric’s licensing right after, taken directly from the website, along with the details of Wwise and FMOD’s licensing.

[image: fmod-graph-price-03]

[image: Fabriclicense]

[image: WwiseLicense]

[image: FMODlicense]

 

Basically, this means that the choice of middleware can differ greatly according to your budget. Fabric is generally cheaper, and its main advantage is that it supports WebGL and all Unity platforms, but if your game is of a certain scale, middleware such as Wwise and FMOD will allow you to push the technical limits further.

I wrote a few comparative documents which I will share here, feel free to download them.

This should hopefully be helpful in determining what software has the best capabilities.

Wwise’s specs summary (click to get pdf)

[image: wwiselogo]

Studios using Wwise (non-exhaustive)

[image: studios]

Key features in Wwise

  • Can handle complex audio behaviors such as fades and containers like random, sequence, blend, and switch.
  • Game “syncs” allow the designer to update state settings, switch values, and adjust real-time game parameters.

[image: GameSyncs]

  • The graph editor makes changing and tweaking curves easy for things like speed or pitch ramps.
  • A built-in Soundcaster allows the sound designer to work in a game-like environment, simulating 100% of the gameplay, including Real Time Parameters, providing a powerful test engine.

[image: Soundcaster]

  • You can easily build hierarchies of containers for complex behaviors and conditions.

[image: WwiseHirerarchy]

  • An audio “bus” can be used to group related sounds such as music, voices, and effects for volume control, side chaining, ducking and elaborate mixing.
  • 1st degree randomisers on pitch, filters, and amplitude allow quick variability.
  • Multiplatform simultaneous rendering, with the possibility of customizing the settings for each platform.
  • The Profiler and Performance Monitor built into the authoring tool make debugging smooth and easy and greatly help with optimization and memory usage. (You can watch CPU performance, streaming buffers, voices playing and other details in real time.)

[image: WwiseProfiler]

  • Translates multiple complex lines of code scattered across scripts into a few easy steps managed by the audio designer.
  • All of these features are done in the authoring tool, and can be changed and tested by the audio designer, without help from the programmer.
  • The programmer, instead of implementing the audio behaviors, just triggers the Wwise event by name (see the sketch after this list).
  • Excellent and rapid customer support if needed.
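In a shipped game, the programmer makes that call from the game’s code via the Wwise SDK, but to give a feel for how thin this layer is, here is a hedged Python sketch using the waapi-client package and the ak.soundengine endpoints of the Wwise Authoring API to post an event to a running Wwise session; the event name and game object ID are placeholders:

```python
from waapi import WaapiClient

# Connects to a running Wwise authoring session (default ws://127.0.0.1:8080/waapi).
with WaapiClient() as client:
    game_object = 100  # arbitrary test ID
    # Register a dummy game object to post the event on:
    client.call("ak.soundengine.registerGameObj",
                {"gameObject": game_object, "name": "TestObject"})
    # The programmer-facing surface is essentially this one call:
    client.call("ak.soundengine.postEvent",
                {"event": "Play_Weapon_Fire", "gameObject": game_object})
```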

Why use Wwise (over other audio middleware)

  • Dedicated interactive music engine and layout, allowing greater variability and flexibility in the integration. This feature provides extensive adaptability to the gameplay, as opposed to the fairly limited integration options in FMOD. One hour of composed music, if well integrated in Wwise, can sustain 30 hours of in-game music.

[image: music]

  • A large number of plug-ins and effects embedded in Wwise, such as:

SoundSeed Air plugins – generative sound sources using time-varying parameter sets to drive a synthesis algorithm. No source audio files are necessary (hence no space required).

[image: soundseed]

Effect Editor – a series of audio processing effects that can be tied directly to Real Time Parameter Controls or other in-game variability.

Etc.

  • MIDI support for interactive music and virtual instruments (Sample and Synth). This allows any MIDI input data (for example pitch bend or CC) to be attached to RTPC-able properties on the MIDI target of the music segment.
  • Put simply, Wwise can implement more complex audio behaviors in fewer manipulations and greater autonomy from the audio designer.

 


 

Unity 5 VS Wwise Summary (click to get pdf)

Scripting

Unity – Extensive scripting involved in audio integration: any behavior other than Play and Loop has to be scripted (see 1st degree manipulations below). This requires a considerable amount of a programmer’s time.

Wwise – Minimal scripting required: all audio behaviors are set within Wwise, and the only scripting required is to call game parameters. This prevents unnecessary back and forth between the designer and a programmer.

 

1st degree manipulations

Unity – The only audio behaviors available to the designer are play, loop, high/low priority, volume, pitch, pan, and basic effects.

[image: UnityMixer]

Wwise – The same functions and more are available in Wwise (including randomisers, initial delay, conversion settings, loudness normalisation, Real Time Parameter Controls, Game States, Motion Effects, sound instance limits, and more).

On top of basic manipulations, the designer can create multi-action events as well as stop events (among others, see Wwise events below), avoiding the need to script these behaviors. This reduces complex audio events to simple manipulations.

In addition, basic editing is available within Wwise, which allows you to reuse the same samples more than once, saving space.

Variability

Unity – No 1st degree access to randomisers or containers dictating behaviors (such as random containers). Everything has to be scripted. Fewer possibilities for variability mean that a higher number of sound files is needed in order to create variations (taking more space).

Wwise – Excellent possibilities for variability due to easy access to randomisers on volume, pitch, lowpass, highpass, a priority system, and other audio behaviors (see 1st degree manipulations above). These variations reduce the number of sound files needed in the game, saving space.

Game parameters

Unity – The only way to control game audio states and parameters is with Snapshots. Snapshots have limited flexibility, and any transition between them has to be scripted.

Wwise – Wwise allows for much more flexible game parameter control, all manageable by the designer and highly customizable: the Real Time Parameter Controls (RTPC).

Complex audio behaviors can be implemented without requiring more space or any more scripting than simple behaviors. This creates greater possibilities for creativity and elaborate sound design.

Music Integration

Unity – Non-existent. There is no differentiation between music and sound integration, making it difficult to create the time-sensitive transitions and multilayered music implementation essential for good dynamic qualities and for giving the user feedback about the gameplay.

Wwise – Wwise’s dedicated music integration engine is one of its greatest strengths: it allows for highly dynamic implementation and greater variability and flexibility in the integration.

Its features include bars and beat recognition, entry and exit cues which allow the layering of multiple tracks or sound cues in sync, a transition system allowing for seamless shifts and variations, and a stinger system making it possible to link game events with musical cues, in sync with the music.

This saves space: the possibilities for dynamic integration reduce the number of sound files needed.

[image: musicEditor]

Mixing and testing

Unity – In-game only, meaning that all the sounds need to be implemented in a functional way before the designer is able to assess the result in relation to other sounds and to the gameplay. Modifications take more time due to back and forth.

Wwise – Allows the sound designer to mix as the work progresses and to test all the sounds as they would sound in-game, using the Soundcaster session system. Soundcaster simulates a gameplay environment and lets you listen to the sounds in real time. Modifications can be made instantly.

[image: Soundcaster]

Hierarchy and buses

Unity – Good system that allows micro and macro groups of sounds. Good for mixing levels and effects, but does not include behavior systems.

Wwise – Wwise has a similar “bus” and hierarchy system, but it includes parents of various kinds, determining the behaviors of their children (containers). This system of groups and containers includes features such as Random, Sequence, Blend, Switch, Dialogue, and Motion.

Asset localisation

Unity – Asset localisation can only be done with a licensed plugin and requires scripting.

Wwise – Wwise features localisation options: if there is any dialogue, Wwise can very simply generate multiple soundbanks for different languages, without having to replace assets or repeat manipulations, saving time.

Debugging and performance monitoring

Unity – More research needed.

Wwise – Wwise can connect to the game to monitor performance, adjust the mix, and debug.

The Profiler and Performance Monitor built into the authoring tool make debugging smooth and easy and greatly help with optimization and memory usage. (You can watch CPU performance, streaming buffers, voices playing and other details in real time.)

[image: WwiseProfiler]

User interface

Unity – Limited. The manipulations are mainly accomplished through scripting (see Scripting above).

Wwise – The user interface allows the designer to implement audio behaviors quickly and test them immediately.

It makes it easy for the designer to tweak audio behaviors and parameters (with interfaces such as the RTPC graph editor, the sound property editor, and the music editor), avoiding unnecessary back and forth between the designer and a programmer. It also allows for a more detailed integration.

[image: GameSyncs]

Audio compression and format conversion

Unity – Must be done manually.

Wwise – Multiple options for audio compression and format conversion within Wwise, saving space and time.

Wwise can create the non-destructive converted files needed for different platforms, saving run-time memory. Conversion settings for each platform can be customised: number of channels, sample rate, compression codec, and more. The interface also lets you compare data from the original audio files with the converted ones in order to assess how much memory is saved.

$$$

Unity – Free (the audio engine comes with the Unity license).

Wwise – Requires a license (see pricing above).

Summary

[image: BadGoodpoints]


Wwise VS FMOD

I don’t have a fancy document about Wwise VS FMOD, but I can talk a little about it; hopefully this can help you reach a decision.

First, both are good, and allow you to do many advanced things.

Quick advice before getting into details: I’d still choose Wwise over FMOD (mostly for all the reasons enumerated above), but to be honest your main argument here might be budget. Depending on which ‘budget slice’ your company falls into, one or the other may be more expensive. If the license cost is the same, go for Wwise.

One argument in favor of FMOD that keeps coming back is the fact that it is designed like a DAW (digital audio workstation; sound designers will know what I’m talking about).

So it is, kinda. But you have to remember that your grid is not always time, and that your objects are not always sound files but rather containers. That is because games are not a linear medium, and sound integration is not sound design or editing.

Wwise is nothing like a DAW. So yes, it has a certain learning curve, but once you understand its layout and principles, you realise that it allows for a much more in-depth integration and opens up possibilities beyond what you can even imagine in terms of creative integration. To this day, I have never encountered any technical limitation using Wwise. The same cannot be said of FMOD or Fabric.

You can learn more about how to use those software by browsing this blog, especially its Tutorials and Tips and Tricks sections.


Wwise VS Fabric

Fabric was a fantastic tool when the only alternative was Unity’s audio engine. It provided more control over audio implementation, allowing for better quality audio with more variations and more possibilities for interaction.

But now that tools such as FMOD and Wwise exist, the fact that Fabric is a set of tools within Unity instead of a standalone application leaves it with a lot of catching up to do. FMOD and Wwise are way ahead of Fabric in terms of:

  • Amount of scripting needed
  • 1st degree manipulations and workflow
  • Variability
  • Real time parameters
  • Music integration
  • Mixing and testing
  • Localisation
  • Debugging and profiling
  • User interface
  • Compression and ‘per platform’ settings
  • DSP and plugin usage
  • and more.

You can learn more about how to use Fabric by browsing the Tutorials and Tips and Tricks sections.