PLAY Expo Manchester

On 8-9 October, the PLAY Expo event took place in Manchester, where The Sound Architect put together two full days of presentations and interviews. I had the opportunity to attend and listen to the valuable insights shared by the guest speakers and interviewees, as well as to discuss and socialise with fellow game audio professionals. It was overall a successful event and a lovely weekend, allowing passionate people to get together and exchange knowledge. Here is my brief summary of the event.

Saturday 8 October

11:00 Presentation: Ash Read – Eve: Valkyrie


The weekend started with Ash Read, sound designer at CCP working on Eve: Valkyrie, telling us about his experience with VR audio.

We were first enlightened on some aspects in which VR audio differs from ‘2D’ or ‘TV’ audio, and briefly what the ‘sonic mission’ consists of in this context. Specifically in Eve: Valkyrie, a chaotic space battle environment where a lot is happening, constantly, everywhere, the role of audio includes:

  • Keep the pilot (player) informed
  • Keep the pilot (player) immersed

In a visually saturated environment, audio is a great way to maintain focus on the important gameplay elements and help the player remain alert and immersed.

What is also different in VR audio is the greater degree of listener movement: techniques need to be developed to implement audio in a context where the listener's head doesn't stay still. One of these techniques involves HRTFs (Head Related Transfer Functions).

Put briefly, HRTFs help the listener locate where a sound is coming from and convey detailed 3D positioning, but they also more accurately portray the subtle modifications a sound undergoes while travelling.

For instance, the distance and positioning of an object is not only expressed sonically through attenuation, but also by introducing the sound reflections of a specific environment, and by creating a sense of elevation.
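To make the localisation cues a little more concrete, here is a minimal Python sketch of binaural positioning built from only the two strongest cues a real HRTF encodes, interaural time and level differences. This is my own simplified illustration of the principle, not CCP's implementation, and the head model constants are textbook approximations:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, roughly an average human head

def spatialise(mono, azimuth_deg, sample_rate=48000):
    """Crude binaural positioning from two HRTF cues: interaural
    time difference (ITD) and interaural level difference (ILD).
    azimuth_deg: 0 = straight ahead, positive = source to the right."""
    az = np.radians(azimuth_deg)
    # Woodworth's ITD approximation for a spherical head.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(az) + np.sin(abs(az)))
    delay = int(round(itd * sample_rate))
    lead = np.concatenate([mono, np.zeros(delay)])  # ear facing the source
    lag = np.concatenate([np.zeros(delay), mono])   # shadowed ear, arrives late
    lag = lag * (0.5 + 0.5 * np.cos(az))            # head-shadow attenuation
    left, right = (lag, lead) if azimuth_deg >= 0 else (lead, lag)
    return np.stack([left, right], axis=1)
```

A real HRTF additionally encodes frequency-dependent filtering from the pinna and torso, which is what gives the sense of elevation mentioned above; that part cannot be captured by a simple delay and gain.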

We then learned about how audio in VR may contribute to reducing the motion sickness often associated with VR, by helping the visuals compensate for the feeling of disconnect that is partly responsible for it.

Since VR usually means playing with headphones on, the Valkyrie audio team decided to include some customisable audio options for the player, such as an audio enhancement slider, which helps bring focus onto important sounds.

The sound design of Valkyrie is meant to be rugged, to convey the raw energy of the game, and to be strong in detail. With that in mind, the team is constantly aiming to improve audio along with the game updates. For instance, they plan to breathe more life into the cockpit by focusing on its resonance and enhancing the deterioration effects.

Ash’s presentation was concluded with a playback of their recently released launch trailer for PS VR, the audio for which was beautifully done by Sweet Justice Sound.

You can watch the trailer here:

12:00 Presentation: Simon Gumbleton – PlayStation VR Worlds


Technical sound designer Simon Gumbleton then followed to tell us about the audio design and implementation in Sony’s PlayStation VR Worlds.

The VR Worlds game is rather like a collection of bespoke VR experiences, each presenting a different approach to player experience. Over the course of the development of those various experiences, the dev and audio teams have experimented, learned, and shaped their approaches, while exploring uncharted territories and encountering new challenges.

1st experience: Ocean Descent

Being the first experience they worked on, it laid the foundation of their work and allowed for experimentation and learning. The audio team developed techniques such as the Focus System, where the listener starts to hear accentuated details of whatever is in focus after it has been in focus for a short amount of time. You could see it as a game audio implementation of the cocktail party effect.
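Conceptually, such a focus system could be as simple as a gain that ramps up after a dwell time. The Python sketch below is my own illustration of the idea; the dwell time, boost amount and ramp speed are invented values, not numbers from the talk:

```python
class FocusBoost:
    """Boost a source's gain once it has stayed in focus for a dwell time,
    and ramp back down when focus is lost."""

    def __init__(self, dwell=1.5, boost_db=6.0, ramp=0.5):
        self.dwell = dwell          # seconds in focus before the boost starts
        self.boost_db = boost_db    # how much to accentuate the focused source
        self.ramp = ramp            # seconds to fade the boost in or out
        self.focus_time = 0.0
        self.gain_db = 0.0

    def update(self, dt, in_focus):
        """Call once per frame; returns the current boost in dB."""
        self.focus_time = self.focus_time + dt if in_focus else 0.0
        target = self.boost_db if self.focus_time >= self.dwell else 0.0
        step = (self.boost_db / self.ramp) * dt  # dB moved this frame
        if self.gain_db < target:
            self.gain_db = min(target, self.gain_db + step)
        else:
            self.gain_db = max(target, self.gain_db - step)
        return self.gain_db
```

In an engine, the returned value would feed the volume of the focused object's bus while everything else stays at its normal level.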

They also developed a technique concerning the player breathing, where they introduce breathing sounds at first, and eventually pull them out once the player has acclimated to the environment, where they become somewhat subconscious.

Similarly, they explored ways to implement avatar sounds, and found that, while they usually reinforce the player in the world, in VR there is a fine line between them being reinforcing or distracting. In short, the sounds heard need to be reflected by movements actually seen in game. This means that you would only hear avatar sounds related to head movements which have a direct impact on visuals, as opposed to body movements which you cannot see.

2nd experience: The London Heist

In this experience, there was more opportunity to experiment with interactive objects: designing believable audio feedback and improving the tactile, one-to-one interactions.

In order to do so, they implemented the sound of every interactable object in multiple layers. For instance, a drawer opening isn't recorded as one sound and then played back whenever that drawer is opened in game. The drawer can be interacted with in many ways, so its sounds are integrated with a combination of parameters and layers in order to play back an accurate sonic response for the type of movement generated by the player's actions.
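A toy Python sketch of this kind of parameter-driven layering might look like the following. The layer names, breakpoints and thresholds are my own illustration of the technique, not the actual VR Worlds setup:

```python
def drawer_layers(velocity, hit_end):
    """Map one drawer movement to per-layer gains.
    velocity: normalised pull speed in 0..1;
    hit_end: whether the drawer reached the end of its runners."""
    gains = {
        "wood_slide_soft": max(0.0, 1.0 - velocity),  # dominates slow pulls
        "wood_slide_hard": min(1.0, velocity),        # takes over as speed rises
    }
    if hit_end and velocity > 0.3:
        gains["end_clunk"] = velocity                 # impact layer scaled by speed
    return gains
```

The engine would evaluate this every time the physics reports drawer movement, so a gentle nudge and a violent yank produce audibly different results from the same asset set.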

Another example is the cigar smoking being driven by the player's breathing. The microphone input communicates with the game and drives the interaction with the cigar for a more immersive experience.

Detailed character foley also proves to help bring characters to life. Every detail is captured and realised, down to counting the number of rings on a character's hand and implementing its movement sounds accordingly.

Dynamic reverb gives the player information about the space and the sounds generated in it. A detailed and informative environment is created with the help of physically based reflection tails, as well as material-dependent filters, all processed at run time. It's all about making the environment feel more believable.

3rd experience: Scavengers Odyssey

This experience was developed later, so they were able to take their learnings from the previous experiences and apply them, and even push the limits further.

For instance, since this experience takes place in space and there is no real ‘room’ to generate a detailed reflection-based reverb, they focused on implementing the sound as if it were heard through the cockpit.

Simon also emphasised how important detail is: in VR, the player will subconsciously have very high expectations of detail. This is achieved through lots of layering, and many discrete audio sources within the world.

Such detail inevitably brings tech challenges in relation to the performance of the audio engine, requiring a lot of optimisation work.

The ambiences have been implemented fully dynamically, where textures are created without any loops and are constantly evolving in game.
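One simple way to build a loop-free, constantly evolving texture is to scatter one-shot sounds over time at randomised intervals. The sketch below is a hypothetical illustration in Python; the sound names and timing values are mine, not from the talk:

```python
import random

def schedule_ambience(duration, min_gap=2.0, max_gap=8.0,
                      one_shots=("wind_gust", "metal_creak", "hull_tick"),
                      seed=None):
    """Scatter one-shot textures over a timeline at random intervals,
    so the ambience never repeats the way a fixed loop would.
    Returns a list of (time_in_seconds, sound_name) events."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.uniform(min_gap, max_gap)
        if t >= duration:
            break
        events.append((t, rng.choice(one_shots)))
    return events
```

In practice an engine would do this continuously at run time rather than precomputing a timeline, but the principle is the same: no two minutes of the ambience are ever identical.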

In terms of spatialisation, they tied all the SFX to the corresponding VFX within the world for optimal sync and highly accurate positioning.

They also emphasized important transitions in the environment by adding special transition emitters in critical places.


As for the music, they experimented with its positioning, whether it should be placed inside the world or not, and mostly proceeded with a quad-array implementation in passive environments.

They did have some opportunity to experiment with the unique VR ability to look up and down, for instance in Ocean Descent, where adaptive music accentuated the feeling of darkness and depths VS brightness and light when looking down and up in the water.

The Hub

This interactive menu is an experience in itself. It is the first space you are launched into when starting the game, and sets up the expectations for the rest. They needed to build a sense of immersion already, and put the same level of detail into the hub as anywhere else in order to maintain immersion when transitioning from one experience to another.

Finally, this collection of experiences needed to remain coherent overall and maintain a smoothness through every transition. This was accomplished through rigorous mixing, and by establishing a clear code regarding loudness and dynamics which would be applied throughout the entire game.

PlayStation VR Worlds is due to be released on 13 October 2016. You can watch the trailer here:

13:00 Interview: Voice Actor, Alix Wilton Regan – Dragon Age, Forza, Mass Effect, LBP3


Alix Wilton Regan told us about voice acting in video games in the form of an interview, led by Sam Hughes.

Thoughts were shared about career paths and working in games VS in television, along with some tips for aspiring actors.

Alix Wilton Regan has started a fundraising campaign, a charitable initiative to help refugees in Calais, check it out!

14:00 Interview: Composer, David Housden – Thomas Was Alone


Another interview followed with David Housden, composer on Thomas Was Alone and Volume. The interview was held in a similar way, starting with some thoughts on career progression, following with some details about his work on past and current titles, and concluding with advice on freelancing.

15:00 Presentation: Composer & Sound Designer, Matt Griffin – Unbox


Composer Matt Griffin then presented how the sound design and music for the game Unbox was implemented using FMOD.

One of the main audio goals for this entertaining game was to make it interactive and fun. In order to do so, Matt found ways to make the menus generative and sometimes reactive to timing, such as the menu music.

We were shown the FMOD project and its structure to illustrate this dynamic implementation. For the menu music, the use of transitions, quantizations and multi sound objects was key.

For the main world music, each NPC has its own layer of music, linked to a distance parameter. Some other techniques were used in order to make the music dynamic, such as having a ‘challenge’ music giving the player feedback on progression and timing, and multiplayer music that doubles in tempo during the 30-second countdown.

In terms of sound design, the ‘unbox’ sound presented a challenge because it is played very frequently throughout the game. To keep it from becoming too repetitive, it was implemented using multiple layers of multi sound objects, along with pitch randomisation on its various components and a parameter tracking how many ‘unboxes’ have been heard so far.
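The anti-repetition logic behind a frequently played sound can be sketched in a few lines. This Python toy models two of the ingredients described above, variation selection that avoids immediate repeats and per-play pitch randomisation, plus a running play count like the parameter mentioned; the specific numbers are my assumptions, not Unbox's:

```python
import random

class UnboxSound:
    """Anti-repetition playback: pick a variation (never the same twice in
    a row), randomise its pitch, and track how many times it has played."""

    def __init__(self, n_variations=4, pitch_spread=2.0, seed=None):
        self.n = n_variations
        self.spread = pitch_spread   # +/- semitones of random detune
        self.last = None
        self.play_count = 0          # akin to the 'unboxes heard so far' parameter
        self.rng = random.Random(seed)

    def next_event(self):
        choices = [i for i in range(self.n) if i != self.last]
        variation = self.rng.choice(choices)
        self.last = variation
        self.play_count += 1
        return {
            "variation": variation,
            "pitch_semitones": self.rng.uniform(-self.spread, self.spread),
            "count": self.play_count,
        }
```

FMOD's multi sound objects and randomised pitch fields do this kind of work for you inside the authoring tool; the class above just makes the behaviour explicit.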

An extensive amount of work was also realised for the box impact sounds on various surfaces, taking velocity into account.

For the character sounds, a sort of indecipherable blabber, individual syllables were recorded and then assembled together in game using FMOD’s Scatterer sound object.

16:00 Interactive Interview: Martin Stig Andersen – Limbo


Martin Stig Andersen, composer and sound designer on Limbo and Inside was then interviewed by Sam Hughes.

Similarly to the previous interviews, some questions relating to career paths were answered first, recounting how Martin started in instrumental composition, shifted towards electroacoustic composition (musique concrète), and later moved into experimental short films.

His work often speaks of realism and abstraction, where sound design and music combine to form one holistic soundscape.

Martin explained how he was able to improve his work on audio for Inside compared to Limbo as he was brought onto the project at a much earlier stage, and was able to tackle larger tech issues, such as the ‘death-respawn’ sequence.

More info on the death-respawn sequence in this video:

Some more details were provided about the audio implementation for Inside, for instance the way the sound of the shock wave is filtered depending on the player’s current cover status, or how audio is used to communicate to the player how well they are doing in progressing through the puzzle.

We also learned more about the mysterious recording techniques used for Inside involving a human skull and audio transducers.

More details here:

17:00 Audio Panel: Adam Hay, David Housden, Matt Griffin

The first day ended with a panel with the above participants, sharing some thoughts on game audio in general, freelancing, and what will come next.

Sunday 9 October

11:00 Interview & Gameplay: Martin Stig Andersen – Limbo

The day started by inviting Martin Stig Andersen again to the stage, where the interview was roughly the same as the previous day.

12:00 Interview: Nathan McCree, Composer & Audio Designer


At midday, the audience crowded up as the composer for the first three Tomb Raider games was being interviewed by Sam Hughes.

Some questions about career progression were followed by some words about the score and how Nathan came to compose a melody that he felt truly represented the character.

The composer also announced The Tomb Raider Suite, a way to celebrate Tomb Raider’s 20th anniversary through music, where his work will be played by a live orchestra before the end of the year.

More details here:

13:00 Presentation: Voice Actor, Jay Britton – Fragments of Him, Strife


Next, voice actor Jay Britton gave us a lively presentation on the work of a voice actor in video games, involving a demo of a recording session. He gave us some advice on how to get started as a voice actor in games, including:

  • There is no one single path
  • Start small, work your way up
  • Continually improve your skills
  • Network
  • Get trained in videogame performance
  • Get trained in motion capture and facial capture
  • Consider on-screen acting
  • Speak to indie devs
  • Get an agent

He followed by giving advice on how to come up with new voice characters using your own voice, while giving us some convincing demonstrations.

14:00 Interview: Audio Designer, Adam Hay – Everybody’s Gone To The Rapture


Sound designer Adam Hay was then interviewed about his work on both Dear Esther and Everybody’s Gone To The Rapture.

He mentioned how the narrative journey is of crucial importance in both these games, and how the sound helps the player progress through them.

16:00 Audio Panel: Simon Gumbleton, Ash Read, David Housden


Finally, the weekend ended (before giving the stage to live musicians) with a VR audio panel, giving us some additional insight on the challenges surrounding VR audio, such as the processing power involved in sound spatialisation, and how everything has to be thought through in a slightly different way than usual.

Voilà, a very busy weekend full of interesting insights and advice. A massive thanks to The Sound Architect crew for putting this together; hopefully this can take place again next year! 🙂

Develop: Brighton Game Dev Conference

I just came back from Brighton for the Develop: Brighton game dev conference. I was there only on Thursday 14 July for the Audio Day, and here are my thoughts and brief summary.



The Audio Track was incredible, lining up wonderful speakers with so much to say!

The day started at 10 am with a short welcome and intro from John Broomhall (MC for the day), and a showing of an excerpt from the Beep Movie to be released this summer. Jory Prum was meant to give the introduction but very sadly passed away recently following a motorcycle accident.

The excerpt shown thus featured him in his studio talking about his sound design toys:


10.15 am – Until Dawn – Linear Learnings For Improved Interactive Nuance

The first presentation was given by Barney Pratt, Audio Director at Supermassive Games, telling us about the audio design and integration in their game Until Dawn.

We learned about branching narrative and adapting film edit techniques for cinematic interactive media, dealing with Determinate VS Variable pieces of scenario.

Barney gave us some insight on how they created immersive Character Foley using procedural, velocity-sensitive techniques for footsteps and surfaces, knees, elbows, wrists and more. The procedural system was overlaid with long wav files per character for the determinate parts, providing a greatly realistic feel to the characters’ movements.

He then shared a bit about their dialog mixing challenges and solutions: where a center-speaker dialog mix and surround panning didn’t exactly offer what they were looking for, they came up with a 50% center-biased panning system which proved successful (we heard a convincing excerpt from the game comparing these strategies). Put simply, this ‘soft panning’ technique provided the realism, voyeurism and immersion required by the genre.
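One plausible reading of "50% center biased" is a constant-power blend between a positional stereo pan and the center speaker. The Python sketch below illustrates that interpretation; the pan law and the exact blend are my assumptions, not Supermassive's published implementation:

```python
import math

def center_biased_pan(pos, center_share=0.5):
    """Split a front-stage dialog source between L/C/R speakers.
    pos: -1 (hard left) .. 0 (front) .. +1 (hard right).
    center_share: fraction of signal power always routed to the center
    speaker; 0.5 mirrors the '50% center biased' idea."""
    theta = (pos + 1.0) * math.pi / 4.0    # equal-power pan law, 0..pi/2
    l, r = math.cos(theta), math.sin(theta)
    g_pos = math.sqrt(1.0 - center_share)  # power left for the L/R pair
    return {"L": g_pos * l, "C": math.sqrt(center_share), "R": g_pos * r}
```

The effect is that dialog always stays anchored to the screen via the center channel, while still drifting left or right with the character, which matches the "soft panning" description.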

Finally, Barney told us about their collaboration with composer Jason Graves to achieve incredible emotional nuance, using techniques once again inspired by film editing.

For instance, they wanted to avoid stems, states and randomisation in order to respect the cinematic quality of the game, as opposed to the techniques used for an open-world type of game.

The goal was to generate a visceral response with the music and sound effects. After watching a few excerpts, even in this analytic and totally non-immersive context, I can tell you, they succeeded. I jumped a few times myself and, although (or maybe because) the audio for this game is truly amazing, I will never play it, as doing so would prevent me from sleeping for weeks to come….

11.20 am – VR Audio Round Table

Then followed a round table about VR audio, featuring Barney Pratt (Supermassive Games), Matt Simmonds (nDreams) and Todd Baker (Freelance, known for Land’s End).

They discussed 3D positioning techniques, the role and place of the music, as well as HRTF & binaural audio issues. An overall interesting and instructive talk providing a well appreciated perspective on VR audio from some of the few people among us who have released a VR title.

12.20 – Creating New Sonics for Quantum Break

The stage then belonged to Richard Lapington, Audio Lead at Remedy Entertainment. He revealed the complex audio system behind Quantum Break‘s Stutters – those moments during gameplay when time is broken.

The team was dealing with some design challenges, for instance the need for a strong sonic signature that is instantly recognisable and convincing. In order to reach those goals, they opted to rely on the visual inspiration the concept and VFX artists were using as a driving force for the audio design.

Then, when they came up with a suitable sound prototype, they reverse engineered it and extrapolated an aesthetic which could be put into a system.

This system turned out to be an impressive collaboration between the audio and VFX team, where VFX was driven by real time FFT analysis operated by a proprietary plugin. This, paired with real time granular synthesis, resulted in a truly holistic experience. Amazing work.
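The proprietary plugin itself wasn't detailed, but the general idea of FFT analysis driving visuals is straightforward: split each audio frame's spectrum into bands and hand the per-band energies to the VFX system. Here is a hedged Python sketch of that idea; the band edges and frame size are arbitrary choices of mine:

```python
import numpy as np

def band_energies(frame, sample_rate=48000,
                  bands=((20, 250), (250, 2000), (2000, 8000))):
    """Per-band spectral energy of one audio frame, normalised so the
    bands sum to 1.0: the kind of control values a VFX system could
    read every frame to drive particle or distortion effects."""
    window = np.hanning(len(frame))               # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sample_rate)
    energies = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        energies.append(float(np.sum(spectrum[mask] ** 2)))
    total = sum(energies) or 1.0
    return [e / total for e in energies]
```

In a real engine this analysis would run on the audio thread at interactive rates, with the results smoothed before reaching the VFX parameters, but the core computation is this small.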

// lunch //

I went to take a look at the expo during lunch time and tried the Playstation VR set with the game Battlezone from Rebellion.

I only tried it for a few minutes so I can’t give a full review, but I enjoyed the experience; the game was visually impressive. Unfortunately I couldn’t get a clear listen as the expo was noisy, but I had enough of a taste to understand all that could be done with audio in VR and the challenges it can pose. Would love to give this another try…

2 pm – The Freelance Dance

The afternoon session started off with a panel featuring Kenny Young (AudBod), Todd Baker (Land’s End), Rebecca Parnell (MagicBrew), and Chris Sweetman (Sweet Justice).

They shared their respective experiences as freelancers and compared the freelance VS in-house position and lifestyle.

The moral of the story was that both have their pros and cons, but mostly they all agreed that if you want to be a freelancer, it’s a great plus to have some in-house experience first, and not start on your own right out of uni.

3 pm – Assassin’s Creed Syndicate: Sonic Navigation & Identity In Victorian London

Next on was Lydia Andrew, Audio Director at Ubisoft Quebec.

She explained how they focused on the player experience through audio in Assassin’s Creed Syndicate, and collaborated with composer Austin Wintory to create an immersive, seamless soundtrack that gives the universe its identity.

They were careful to give a sonic identity to each borough of Victorian London, both through sound (ambiences, SFX, crowds, vehicles) and music. They researched Victorian music to suit the different boroughs and sought the advice of Professor Derek Scott to reach the highest possible historical accuracy.

A very detailed presentation of the techniques used to blend diegetic and non-diegetic music, given by a wonderfully spirited and inspiring Audio Director.

4.15 pm – Dialogue Masterclass – Getting The Best From Voice Actors For Games

Mark Estdale followed with a presentation on how to direct a voice acting session, and how to give the actor the best possible context to improve performance.

Neat tricks were given, such as the ‘Show don’t tell’: use game assets to describe, give location, and respond to the actor’s lines. For instance, use the already recorded dialogue to reply to the actor’s lines, play background ambiance, play accompanying music, and show the visual context. Even use spot effects if the intention is to create a surprise.

5.15 pm – Stay On Target – The Sound Of Star Wars: Battlefront

This talk was outstanding. Impressive. Inspiring. Brilliant way to end the day of presentations. A cherry on the cake. Cookies on the ice cream.


You could practically see the members of the audience salivating with envy when David Jegutidse was describing the time he spent with Ben Burtt, hearing the master talk about his tools and watching him play with them, including the ancient analog synthesizer that was used to create the sounds of R2D2.

Together with Martin Wöhrer, he described how they adapted the Star Wars sounds to fit this modern game.

They collaborated with Skywalker Sound and got audio stems directly from the movies, as well as a library of sound effects and additional content on request.

In terms of designing new material, they were completely devoted to maintaining the original style and tone, and opted for organic sound design.

What this means (among other things) is retro processing through playback-speed manipulation, worldising, and ring modulation, like they did back in the day.

It was a truly inspiring talk, giving a lot to think about to anyone working with an IP and adapting sound design from existing material and/or style and tone.


The day ended with an open mic calling back to the table Todd Baker, Lydia Andrew, Rebecca Parnell, Chris Sweetman and Mark Estdale to discuss the future of game audio.



Overall an incredible day where I got to meet super interesting and wonderful people, definitely looking forward to next year!! 🙂



State of Play 2016 – Dublin

Yesterday (8 June 2016) I went to the State of Play event held in Dublin Institute of Technology.

It was overall a great event; many speakers with relatively short talks (10-20 minutes each) kept the evening dynamic and filled with a variety of sage advice and colorful demonstrations.

Among the speakers were (not in order):

  • Owen LL Harris (also MC for the night) –
  • Llaura NicAodh –
  • Kieran Nola –
  • Robin Baumgarten –
  • Evan Balster –
  • Charlene Putney –
  • Jen Carey –
  • Sherida Halatoe –
  • Kevin Murphy –

Unfortunately I didn’t take note of all the names and can’t find a complete list of speakers, so I might be forgetting one or more… sorry!

(Also William Pugh was meant to be there but unfortunately could not make it due to his recent leg injury. We wish you a quick recovery William!)

I strongly suggest you check out those websites, all of them had interesting things to say.

Among my favorites was definitely Robin Baumgarten and his ‘hardware experimental game projects’. He showed us a bit of his process while working on projects such as the Line Wobbler and A Dozen Sliders.

It is always inspiring to see someone creating something entirely new from scratch. Makes you want to lock yourself in a studio and do the same, because why not!

I was also somewhat surprised (or maybe not) that many of the talks related to coping with stress and creative blocks, motivation and self-care. The games industry is one that attracts passionate, talented people hoping to fulfill themselves working on a project they believe in. Most of the time I like to think that this is true, but it would be foolish to ignore the harsh reality of crunch times, crazy deadlines and the immense pressure that comes with the job.

I can imagine that all of the speakers went through this realisation more than once in their career, and provided us with their tips and techniques to try to stay sane in these periods of high stress.

There was also some talk about the value of networking (Kevin Murphy), as well as advice on how to create game narratives starting from personal experience (Sherida Halatoe).

Llaura’s talk, which was more of a storytelling session than a speech, was also very strong, as she played an excerpt of her latest game If Found Please Return, which seems really promising.

The generally informal tone to the evening made it refreshing and quite friendly. The event continued in an even more informal manner at the Odessa pub for some social drinks.

Looking forward to State of Play 2017!