Loudness and metering in game audio

This post is not a tutorial on loudness and metering in game audio. Rather, it is about sharing my findings on something I am currently researching, in the hope that it can help those of you in a similar position. I will definitely revisit this post at a later stage of my current project to share my experiences and conclusions.

Since this is a work in progress, or rather a learning in progress, feel free to comment and let me know about any better/other ways to see or do these things.

I’ve been working on my current project for a few months now and, although I’ve been wondering about loudness and metering earlier in the process, the time has only recently come for me to make decisions on the matter, and hence look deeper into it.

First, I found an amazing resource which helped me understand much of this very quickly. This article from Stephen Schappler is a real gem and I strongly recommend you have a read. I will mention some of the things he shared in his article here, as well as expand on them based on my own experience.

This interview with Gary Taylor from Sony is equally instructive, going into further detail about Sony’s Audio Standards Working Group (ASWG) recommended specs.


Industry standards (or lack thereof) and game audio solutions

There are currently no set standards for loudness measurement in game audio, resulting in wide variations and discrepancies in loudness from one game to another. The variety of gaming setups and devices also presents a challenge for developing such standards.

One way to start looking into this is to refer to the BS.1770 recommendations to measure loudness and true peak audio level.

To put it simply, these algorithms measure Loudness Level at three different time scales:

  • Integrated (I) – Full program length
  • Short Term (S) – 3 second window
  • Momentary (M) – 0.4 second window

What these mean for game audio will probably differ from what they mean in TV: there is no full programme length in interactive media, and 3 and 0.4 seconds may prove too short to take any accurate measurement, again owing to the dynamic and interactive nature of the medium.
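To make the windowing concrete, here is a minimal pure-Python sketch of the BS.1770 loudness formula applied over a 400 ms momentary window. Note that it deliberately omits the K-weighting pre-filter and multichannel summing that the full recommendation specifies, so treat its numbers as indicative only:

```python
import math

def momentary_loudness(samples, sample_rate):
    """Mean-square loudness over the last 400 ms, per the BS.1770 formula.
    Simplified sketch: no K-weighting pre-filter, single channel only."""
    window = samples[-int(0.4 * sample_rate):]  # last 400 ms of audio
    mean_square = sum(s * s for s in window) / len(window)
    return -0.691 + 10 * math.log10(mean_square)

# A full-scale sine has a mean square of 0.5, so this sketch reports
# about -3.7 (the real measure would differ once K-weighting is applied).
sr = 48000
sine = [math.sin(2 * math.pi * 1000 * n / sr) for n in range(sr)]
print(round(momentary_loudness(sine, sr), 1))  # prints -3.7
```

Swapping the 0.4 for 3.0 gives the Short Term window; Integrated loudness additionally requires the gating scheme described in the recommendation.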

This is what Gary Taylor recommended about adapting the BS.1770 measuring terms to game audio (in this interview):

“We recommend that teams measure their titles for a minimum of 30 minutes, with no maximum, and that the parts of any titles measured should be a representative cross-section of all different parts of the title, in terms of gameplay.”

As BS.1770 also indicates, it would be wise to consider the Loudness Range (LRA) and the True Peak Level. In order to do so, you would need good tools (accurate Loudness Meter) and a good environment (calibrated and controlled).

In terms of numbers, let’s look at the R128 and A/85 broadcast recommendations. We can assume these apply similarly when working on console and PC games, where your listening environment and setup are the same as, or similar to, a TV setup.

Those recommendations are:

R128 (Europe)

  • Program level average: -23 LUFS (+/-1)
  • True peak maximum: -1 dBTP

A/85 (US)

  • Program level average: -24 LKFS (+/-2)
  • True peak maximum: -2 dBTP


However, these numbers may not apply to the mobile games industry, and different terms would need to be discussed in order to set standard levels for portable devices. Some work has already been done on that matter by Sony’s ASWG, who are among the first (if not the first) to consider standardising the game audio loudness metering process and providing recommendations. Here are their internal loudness recommendations for their first-party titles:

Sony ASWG-R001

  • Average loudness for console titles: -23 LUFS (+/-2)
  • Average loudness for portable titles: -18 LUFS
  • True peak maximum: -1 dBTP
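As a quick sanity check, the targets above can be encoded in a small hypothetical helper. The spec names and numbers are taken straight from the recommendations quoted in this post; the function itself is just an illustration, not part of any official tool:

```python
# Published targets quoted above: programme loudness target, its tolerance,
# and the true-peak ceiling for each recommendation.
SPECS = {
    "R128":      {"target": -23.0, "tolerance": 1.0, "true_peak_max": -1.0},
    "A/85":      {"target": -24.0, "tolerance": 2.0, "true_peak_max": -2.0},
    "ASWG-R001": {"target": -23.0, "tolerance": 2.0, "true_peak_max": -1.0},
}

def check_compliance(spec, loudness_lufs, true_peak_dbtp):
    """Return True if a measured loudness/true-peak pair sits within spec."""
    s = SPECS[spec]
    loudness_ok = abs(loudness_lufs - s["target"]) <= s["tolerance"]
    peak_ok = true_peak_dbtp <= s["true_peak_max"]
    return loudness_ok and peak_ok

print(check_compliance("R128", -23.4, -1.2))  # prints True (within spec)
print(check_compliance("A/85", -20.0, -1.2))  # prints False (too loud)
```

The loudness and true-peak values you feed it would of course come from a proper meter, as discussed below.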

Gary Taylor mentioned in his interview that studios such as Media Molecule and Rockstar are already conforming to Sony’s specs, both in terms of average loudness and dynamic range. This seems to indicate that progress is being slowly but surely made in terms of game audio loudness standardisation.

How to proceed?

The recommended process is to send the audio output from your game directly into your DAW and measure loudness with a specialised plugin. Make sure your outputs and inputs are calibrated and that the signal remains 1:1 across the chain.

Gary Taylor’s plugin recommendations to measure loudness:

“As far as analysis tools, I personally have yet to find anything close to the Flux Pure Analyzer application for measuring loudness, spectral analysis, true peak, dynamic range and other visualisation tools. As far as loudness metering generally, Dolby Media Meter 2, Nugen VizLM, Waves WLM, and Steinberg SLM-128 (free to Nuendo and Cubase users) are all very good.”

I have yet to experiment with these plugins and decide on my favourite tools. I happen to have the Waves WLM, so I will give that a try first, then compare with the demo version of Nugen VizLM and see if I want to buy it. I will update this article with feedback from my experience when ready.

Wwise and FMOD now also support BS.1770 metering, which is extremely convenient for metering directly within the audio engine.

In Fabric, there are Volume Meter and Loudness Meter Components which allow you to meter one specific Group Component. You could for instance apply those to a Master Group Component to monitor signals of the overall game.



However, even when using these tools within the audio engines, I think it is worth measuring the direct output of your game from your DAW with the help of a mastering plugin. I see this as a way to double-check: I’m a big fan of making sure everything works as intended, and listening to the absolute final result of the product seems like a valid way to do this.

Finally, I unfortunately don’t have the luxury of working in a fully calibrated and controlled studio environment. If you are in a similar position to mine, I’d strongly recommend considering renting a studio space towards the final stages of production to perform some more in-depth mixing and metering.

I hope this was useful even though this info is based mostly on research rather than pure experience. I will most definitely revisit this topic once my remaining questions are answered 🙂




Audio processing using MaxMSP

If you follow me on Twitter, you will have seen a few recent tweets about my latest experiments with sci-fi bleeps and bloops.

I created a MaxMSP patch that lets me process sound files in such a way that the original file is nearly unrecognisable, and the results sound nicely techy and sci-fi.

Over time, my process was to create a few simple individual patches, each performing one sort of processing:

  • Phaser+Delay
  • Time Stretcher
  • Granulator
  • Phaser+Phaseshift
  • Ring Modulator
  • Phasor+Pitch Shift

I decided to assemble those patches together in such a way that I could play with multiple parameters and multiple sounds at the same time.

In order to do so, I mapped the various values and parameters of my patch to a MIDI controller [KORG nanoKONTROL2], and selected a few sounds I know work well with the different items of the patch, to be chosen from a dropdown menu.
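Max patches are graphical, but to give an idea of how simple some of these ‘instruments’ are under the hood, here is roughly what the ring modulator boils down to, sketched in plain Python (unoptimised, for illustration only):

```python
import math

def ring_modulate(samples, carrier_hz, sample_rate):
    """Classic ring modulation: multiply the input by a sine carrier,
    producing sum and difference frequencies (the metallic sci-fi tone)."""
    return [s * math.sin(2 * math.pi * carrier_hz * n / sample_rate)
            for n, s in enumerate(samples)]

# One second of a 440 Hz tone through a 300 Hz carrier yields energy
# at 140 Hz and 740 Hz instead of the original pitch.
sr = 48000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
bleep = ring_modulate(tone, 300, sr)
```

The other ‘instruments’ (granulator, phaser, pitch shifter) are more involved, but the principle of chaining small processors and mapping their parameters to hardware is the same.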

This is what the patch looks like:


All the different ‘instruments’ are contained in subpatches. They are all quite simple but create interestingly complex results when put together.

The subpatches:


Organised nicely in Presentation Mode, I can interact with the different values using my MIDI controller:


The mapping system:


I can then record the result to a wav file on disk, which I am free to edit in Reaper afterwards, selecting the nice bits and making cool sound effects from these original sources.

Record to file:


This process can go on almost infinitely, as I can feed the processed sound back into the patch and see what comes out of it.

Here is a little demo of the patch and its ‘instruments’:


And some bleeps and bloops I made using this patch:


You can visit the Experiments page to hear more tracks 🙂



Links and cool projects n° 4

Another wave of cool links worth exploring 🙂

A few audio/game audio blogs full of neat tips & tricks and more:





Two games I find promising in terms of fun/originality/audacity:

Bound: http://bound.playstation.com/

The Floor is Jelly: http://thefloorisjelly.com/

Some awesome game music by Disasterpeace:


A very useful link about audio compression settings when working in Unity:

Wrong Import Settings are Killing Your Unity Game [Part 2]

Develop: Brighton Game Dev Conference

I just came back from Brighton for the Develop: Brighton game dev conference. I was there only on Thursday 14 July for the Audio Day, and here are my thoughts and brief summary.



The Audio Track was incredible, lining up wonderful speakers with so much to say!

The day started at 10 am with a short welcome and intro from John Broomhall (MC for the day), and the showing of an excerpt from the Beep movie, to be released this summer. Jory Prum was meant to give the introduction but very sadly passed away recently following a motorcycle accident.

The excerpt shown thus featured him in his studio talking about his sound design toys:


10.15 am – Until Dawn – Linear Learnings For Improved Interactive Nuance

The first presentation was given by Barney Pratt, Audio Director at Supermassive Games, telling us about the audio design and integration in their game Until Dawn.

We learned about branching narrative and adapting film editing techniques for cinematic interactive media, dealing with determinate vs. variable pieces of scenario.

Barney gave us some insight into how they created immersive character Foley using procedural, velocity-sensitive techniques for footsteps and surfaces, knees, elbows, wrists and more. The procedural system was overlaid with long wav files per character for the determinate parts, providing a highly realistic feel to the characters’ movements.

He then shared a bit about their dialogue mixing challenges and solutions: where a centre-speaker dialogue mix and surround panning didn’t exactly offer what they were looking for, they came up with a 50% centre-biased panning system which seems to have been successful (we heard a convincing excerpt from the game comparing these strategies). Put simply, this ‘soft panning’ technique provided the realism, voyeurism and immersion required by the genre.

Finally, Barney told us about their collaboration with composer Jason Graves to achieve incredible emotional nuance, using techniques once again inspired by film editing.

For instance, they wanted to avoid stems, states and randomisation in order to respect the cinematic quality of the game, as opposed to the techniques used for an open-world type of game.

The goal was to generate a visceral response with the music and sound effects. After watching a few excerpts, even in this analytic and totally non-immersive context, I can tell you they succeeded. I jumped a few times myself and, although (or maybe because) the audio for this game is truly amazing, I will never play it, as doing so would prevent me from sleeping for weeks to come…

11.20 am – VR Audio Round Table

Then followed a round table about VR audio, featuring Barney Pratt (Supermassive Games), Matt Simmonds (nDreams) and Todd Baker (Freelance, known for Land’s End).

They discussed 3D positioning techniques, the role and place of the music, as well as HRTF & binaural audio issues. An overall interesting and instructive talk providing a well appreciated perspective on VR audio from some of the few people among us who have released a VR title.

12.20 pm – Creating New Sonics for Quantum Break

The stage then belonged to Richard Lapington, Audio Lead at Remedy Entertainment. He revealed the complex audio system behind Quantum Break‘s Stutters – those moments during gameplay when time is broken.

The team faced several design challenges, for instance the need for a strong sonic signature that is instantly recognisable and convincing. To reach those goals, they opted to rely on the visual inspiration that the concept and VFX artists were using as a driving force for the audio design.

Then, once they had come up with a suitable sound prototype, they reverse engineered it and extrapolated an aesthetic which could be put into a system.

This system turned out to be an impressive collaboration between the audio and VFX team, where VFX was driven by real time FFT analysis operated by a proprietary plugin. This, paired with real time granular synthesis, resulted in a truly holistic experience. Amazing work.

// lunch //

I went to take a look at the expo during lunchtime and tried the PlayStation VR headset with the game Battlezone from Rebellion.

I only tried it for a few minutes so I can’t give a full review, but I enjoyed the experience; the game was visually impressive. Unfortunately I couldn’t get a clear listen as the expo was noisy, but I had enough of a taste to understand all that could be done with audio in VR and the challenges it can pose. Would love to give this a try…

2 pm – The Freelance Dance

The afternoon session started off with a panel featuring Kenny Young (AudBod), Todd Baker (Land’s End), Rebecca Parnell (MagicBrew), and Chris Sweetman (Sweet Justice).

They shared their respective experiences as freelancers and compared the freelance VS in-house position and lifestyle.

The moral of the story was that both have their pros and cons, but they all agreed that if you want to be a freelancer, it’s a great plus to have some in-house experience first, rather than starting on your own right out of uni.

3 pm – Assassin’s Creed Syndicate: Sonic Navigation & Identity In Victorian London

Next on was Lydia Andrew, Audio Director at Ubisoft Quebec.

She explained how they focused on the player experience through audio in Assassin’s Creed Syndicate, and collaborated with composer Austin Wintory to deliver an immersive, seamless soundtrack that gives identity to the universe.

They were careful to give a sonic identity to each borough of Victorian London, both through sound (ambiences, SFX, crowds, vehicles) and music. They researched Victorian music to suit the different boroughs and sought the advice of Professor Derek Scott to reach the highest possible historical accuracy.

It was a very detailed presentation of the techniques used to blend diegetic and non-diegetic music, given by a wonderfully spirited and inspiring Audio Director.

4.15 pm – Dialogue Masterclass – Getting The Best From Voice Actors For Games

Mark Estdale followed with a presentation on how to direct a voice acting session, and how to give the actor the best possible context to improve performance.

Neat tricks were shared, such as ‘show, don’t tell’: use game assets to describe, give location, and respond to the actor’s lines. For instance, use already recorded dialogue to reply to the actor’s lines, play background ambience, play accompanying music, and show the visual context. You can even use spot effects if the intention is to create a surprise.

5.15 pm – Stay On Target – The Sound Of Star Wars: Battlefront

This talk was outstanding. Impressive. Inspiring. A brilliant way to end the day of presentations. The cherry on the cake. Cookies on the ice cream.


You could practically see members of the audience salivating with envy as David Jegutidse described the time he spent with Ben Burtt, hearing the master talk about his tools and watching him play with them, including the ancient analogue synthesizer that was used to create the sounds of R2-D2.

Together with Martin Wöhrer, he described how they adapted the Star Wars sounds to fit this modern game.

They collaborated with Skywalker Sound and got audio stems directly from the movies, as well as a library of sound effects and additional content on request.

In terms of designing new material, they were completely devoted to maintaining the original style and tone, and opted for organic sound design.

What this means (among other things) is retro processing through playback speed manipulation, worldising, and ring modulation, like they did back in the day.

It was a truly inspiring talk, giving anyone working with an IP and adapting sound design from existing material and/or an established style and tone a lot to think about.


The day ended with an open mic calling back to the table Todd Baker, Lydia Andrew, Rebecca Parnell, Chris Sweetman and Mark Estdale to discuss the future of game audio.



Overall an incredible day where I got to meet super interesting and wonderful people, definitely looking forward to next year!! 🙂



Game Audio Asset Naming and Organisation


Whether you are working on audio for an indie or a AAA title, chances are you will have to deal with a large number of assets, which will need to be carefully named and organised.

A clear terminology, classification and organisation will prove crucial, not only for finding your way around your own work, but also for your team members, whether they are part of the audio department or the front-end team helping you implement your sounds into the game.

I would like to share my way of keeping things neat and organised, in the hope that it will help the less experienced among you start off on the right foot. I don’t think there is only one way to do this, though; those of you with a bit of experience might already have a system that works, and that’s perfectly fine.

I will go over creating a Game Audio Design Document and dividing it into coherent categories and subcategories, using a terminology that makes sense, event naming practices, and folder organisation (for sound files and DAW sessions) on your computer/shared drive.

Game Audio Design Document

First, what is an Audio Design Document? In a few words, it is a comprehensive list of all the sounds in your game. Organised according to the various sections of the game, it is where you list all events by name, assign priorities, update their design and implementation status, and note descriptions and comments.

The exact layout of the columns and category divisions may very well vary according to the project you are currently working on, but here is what I suggest.

COLUMN 1: Scene or Sequence in your game engine (very generic)

COLUMN 2: Category (for example the object type/space)

COLUMN 3: Subcategory (for example the action type/more specific space)

COLUMN 4: Event name (exactly as it will be used in the game engine)

COLUMN 5: Description (add more details about what this event refers to)

COLUMN 6: Notes (for instance does this sound loop?)

COLUMN 7: 3D positioning (which will affect the way the event is implemented in game)

COLUMNS 8-9-10: Priority, Design, Integration (to color code the status)

COLUMN 11: Comments (so that comments and reviews can be noted and shared)

It would look something like this:



This document is shared among the members of the audio team so that everyone can refer to it to know about any details of any asset. You could even have a ‘name’ or ‘who?’ column to indicate who is responsible for this specific asset if working in a large audio team.
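If you work in Google Sheets, one hypothetical way to bootstrap such a sheet is to generate the header row (plus a sample entry) as CSV and import it. The headers below mirror the columns described above; the example row is entirely invented:

```python
import csv
import io

# Column headers mirroring the design-doc layout described in this post.
HEADERS = ["Scene", "Category", "Subcategory", "Event name", "Description",
           "Notes", "3D", "Priority", "Design", "Integration", "Comments"]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(HEADERS)
# A made-up sample row, just to show the shape of an entry.
writer.writerow(["forest", "ambience", "wind", "ambience/wind/forest_loop",
                 "Looping wind bed", "loops", "no", "high", "done", "todo", ""])
print(buf.getvalue())
```

From there the priority/design/integration cells can be colour coded by hand or with conditional formatting.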

It is also shared across the art team if the art director is your line manager, and across any member of the front end team involved in audio implementation.

This list may also not be the only ‘sheet’ of the Audio Design Document (if you are working in Google Sheets, or the equivalent in another medium). Other sheets could include one created especially for the music assets, another for bugs or requests to keep track of, another for an Audio Roadmap, and so on. Basically, it is a single document all team members can refer to in order to keep up to date with the audio development process. You can equally add anything that has to do with design decisions, references, vision, etc.

While big companies may very well have their own system in place, I find this type of document to be especially useful when working in smaller companies where such a pipeline has not yet been established.

I’d also like to point out that, in creating such a document, you will need to remain flexible throughout the development process, especially if you join the project at an early stage, when sections, names and terminology in the game are bound to change. Throughout those changes, it is important to update the doc regularly and remain organised, otherwise it can rapidly become quite chaotic.


In terms of terminology, this is again something that can be done in many ways, but I’d say one of the most important things is that, once you’ve decided on a certain terminology, you stay consistent with it. And be careful to name the events in your audio engine exactly the way you named them in your design document; otherwise you will very rapidly get confused between all those similarly named versions of the same event, and won’t know which one is correct.

What I like to do first is use no capital letters, all lowercase, so that it doesn’t get confusing if events need to be referred to in the code. Programmers then don’t need to ask themselves where the capital letters were, which may seem like a small thing, but when there are 200+ events, it is appreciated.

Then there is the matter of underscores ‘ _ ’ versus slashes ‘ / ’. That may depend on the audio engine and/or game engine you are using. For instance, using Fabric in Unity, all my events are named with slashes for the simple reason that this automatically divides them into categories and subcategories in all dropdown menus in Unity. This becomes very handy when dealing with multiple tools and hundreds of events.

The organisation of your audio design document will then pretty much tell you how to name your events. For instance:

category_subcategory_description_number  (a number may not always be required)
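As an illustration, the convention can be enforced with a small hypothetical helper; the separator parameter covers the underscore-vs-slash choice, and the function name is mine, not from any tool:

```python
def make_event_name(category, subcategory, description, number=None, sep="_"):
    """Build an event name per the convention in this post: all lowercase,
    no spaces, parts joined by '_' (or '/' for engines like Fabric that
    build menus from slashes), with an optional zero-padded number."""
    parts = [category, subcategory, description]
    if number is not None:
        parts.append(f"{number:02d}")
    return sep.join(p.lower().replace(" ", "") for p in parts)

print(make_event_name("Player", "Footsteps", "Grass", 1))
# prints player_footsteps_grass_01
print(make_event_name("player", "footsteps", "grass", sep="/"))
# prints player/footsteps/grass
```

Running every new event name through one such function (or a spreadsheet formula doing the same) is a cheap way to guarantee the doc and the engine never drift apart.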




If you dislike long names, you can use abbreviations, such as:


I personally find they can become quite confusing when sharing files, but if you do want to use them, simply remember to be clear about what they mean, for instance by writing both the abbreviated and full name in the doc, and make sure there is no confusion when multiple team members are working with those assets.

Folder organisation

Whether you are working as a one-person department on your own machine or sharing a repository for all the audio assets, a clear way of organising these will be crucial. When working on a project of a certain scale (which doesn’t need to be huge), you will rapidly accumulate dozens of GB of DAW sessions, video exports, and files from previous or current versions.

I suggest you keep separate directories for your sound files, DAW sessions and other resources. Your sound files directory should be organised the same way as your Audio Design Document; this way, it is easy to know exactly where to find the sound(s) constituting specific events.

I also suggest that you have a different (yet similar) directory for previous versions. You may call it ‘PreviousVersions’ or something equivalent, with a hierarchy identical to the ‘current version’ one. This is so that, if you need to go back to an older version, you know exactly where to find it and can access it quickly. You can name those versions by number (keep the same terminology, and add a V01, V02 at the end).

Finally, for your DAW sessions, you may decide to go for something a little different in terms of hierarchy, but I find that maintaining a similar order is very useful for self-organisation and for being able to quickly go back to sessions you may not have touched in a while.

I also highly recommend using ‘Save As’ to back up your sessions, and any time you make changes to a version of a sound. First, corrupted sessions do happen, and you’ll be extremely happy not to have lost weeks of work when one does; but also, if your manager prefers an earlier version of your sound with some modification, you can easily go back to exactly that version and start again from there, while still keeping the latest version intact.

So, if my asset hierarchy and division in my Audio Design Document looks like the one in the image above, my folder hierarchy would look something like this:


Finally, you can create a folder for Video Exports, for instance, and keep your video screen captures there, again organised in coherent folders. The principle remains the same for any other resources you may have.

I hope this was helpful, happy organisation 🙂