MIGS 2018: Designing the Sound of Reality in Shadow of the Tomb Raider

In 2018 I gave a presentation at the Montreal International Game Summit (MIGS) about the sound of Shadow of the Tomb Raider. The session was about sharing some sound design strategies to make reality sound more believable or interesting than it actually is, or in other words, to aim for ‘believability’ rather than ‘authenticity’, which was our approach in SOTR.

I decided to put this presentation into text (better late than never..!) simply for the sake of accessibility, and in an effort to share this information as widely as possible.

You can watch the full presentation here.

The following text is basically a transcript of the MIGS presentation and does not include any new information.

***

SECTION 1 – INTRO

My name is Anne-Sophie Mongeau, I am a sound designer at Eidos Montreal [that was true at the time, I am currently Senior Sound Designer at Hazelight in Stockholm], and in this presentation I’m going to tell you about designing the sound of reality in Shadow of the Tomb Raider.

What I mean by designing the sound of reality is that Tomb Raider is a game environment in which the story may be fictional, but it sources its stylistic references from the real world, as opposed to other genres such as sci-fi or fantasy, where everything is made up from scratch and every sound is original and belongs to that made-up world. In the realistic type of game, what we sound design are sounds from the real world to start with, according to our expectations of them and how we think they should sound based on our experience of them. In a game such as SOTR, a waterfall is a waterfall, a bird is an actual bird species that exists, a door opening is made from materials and mechanics that are known to us. When we see those visual references, we have expectations of how they are going to sound.

These expectations are based on our real world experience of these things, but also on our experience of them through other media and other representations, and the cinematic, ‘hollywoodesque’ experience that often surrounds them. Not all of us here have seen actual jaguars, or have been through floods and earthquakes, but we all have some sort of idea of what it might, or even ‘should’ sound like.

So how do we, sound designers, meet these expectations and walk that fine line between what something actually sounds like in reality (accuracy) and what we want it to sound like, based on our own, and the listeners’, biased expectations, while also offering some originality and character – not only staying within those expectations but surpassing them? In other words, how do we contribute to the immersive, cinematic experience, as well as the storytelling, within a realistic context?

So the fact that the game genre is ‘realistic’ doesn’t make the sound design any less important or any easier to do. Because really, when we talk about the quality of being ‘realistic’ in games and entertainment in general, it means being believable (or convincing), rather than being strictly authentic or accurate.

So how do we make reality more interesting than it really is? 

This can be achieved by putting into practice a few realistic sound design techniques which I will go through in a minute, as well as by identifying and taking advantage of the sound design opportunities that the game offers. A combination of those things will help reinforce a sense of place, immersion, character, uniqueness, and make the whole thing more memorable.

You will notice that a lot of my examples feature ambience sound design, because in Tomb Raider ambience has been a very important feature which provided us with many opportunities to reinforce immersion. Also I left the music playing in my examples because music also plays a role in the soundscape, but I made it deliberately lower in volume because we’re going to focus on the sound design.

SECTION 2 – GENERAL STRATEGIES

Let’s start with general techniques. These are things to keep in mind throughout the project.

These broad strategies (this is not an exhaustive list by the way, these are suggestions) can be used at different moments and in different contexts within the game. They have the specific purpose of reinforcing a sense of reality, without necessarily relying on reality (or accuracy) itself. 

These include:

  • Exaggeration
  • Evocation
  • Worldizing
  • Controlling focus

General Strategy 1 – Exaggeration

The title says it all: this strategy is about exaggerating beyond what accuracy would dictate, in order to compensate for the lack of sensory experience due to not actually being there. Considering that the field of vision is narrowed to the screen space, that hearing is reduced to whatever sound system the audio is played back through, that you can’t touch or smell your environment, and that your body doesn’t feel out of place, this technique’s goal is to rely on the available stimuli to deliver, by compensation, something that is closer to the sensory experience that should be had.

For example, here are some screenshots of some of the stunning looking environments in Shadow of the Tomb Raider.

As you can see, even the visuals kind of overcompensate by making everything look absolutely stunning all the time. Our role as sound designers is to support and even enhance those beautiful environments, and sometimes that means cheating a little bit.

I was in the Alps last summer, and I was faced with similarly stunning landscapes.

The Alps, summer 2018 (photo I took myself)

So I took a moment when I was there to listen to the soundscape around me.

And this is what I heard:

That actually sounds pretty good, doesn’t it? And it feels nice as well. There is one good reason for it: because this is not actually what I heard. This is designed reality. Now this is what I actually recorded:

This, in the context of a game, would be quite disappointing, and underwhelming. This is why our Peruvian jungle sounds like this:

I’ve actually never been in the Peruvian jungle, but I like to imagine that this is what it sounds like. This is my expectation of it; at least, it’s how I want it to sound.

This ambience has been ‘cheated’ in different ways. In this environment, every single sound source has been placed by hand: every bird, insect, tree creak, foliage rustle and rain drop. There are no stereo or quad ambiences; everything has its determined 3D place. I will talk a bit more about this later in the sound design opportunities. For now you get the idea about exaggeration: make it sound better than what a scientific recording would sound like.
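For the technically inclined, here is roughly what a hand-placed source boils down to conceptually. This is a generic C++ sketch, not our actual tooling; the asset name and the linear attenuation model are illustrative stand-ins for whatever your engine or middleware provides:

```cpp
#include <cmath>
#include <string>

// One hand-placed ambience element: a looping or retriggering sound
// with its own 3D position and attenuation range.
struct AmbienceEmitter {
    std::string asset;      // e.g. "bird_macaw_loop" (illustrative name)
    float x, y, z;          // hand-authored world position
    float minDist, maxDist; // full volume inside minDist, silent past maxDist
};

// Simple linear distance attenuation. A real engine would layer
// authored curves, spread and occlusion on top of this.
float EmitterGain(const AmbienceEmitter& e, float lx, float ly, float lz) {
    const float dx = e.x - lx, dy = e.y - ly, dz = e.z - lz;
    const float d = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (d <= e.minDist) return 1.0f;
    if (d >= e.maxDist) return 0.0f;
    return 1.0f - (d - e.minDist) / (e.maxDist - e.minDist);
}
```

The point is that every bird or creak is its own little object with its own position and range, rather than a channel baked into a stereo or quad bed.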

General Strategy 2 – Evocation

Next strategy, evocation.

Without digging too deep into cognitive functions, using the power of evocation is a way to build unique character by relying on associations made within one scene and the connections that exist between the different elements in it. For me it seems to have something to do with pattern recognition, somewhat like the psychology theory called recognition-by-components, but extended to the auditory senses rather than limited only to visual stimuli.

This theory says we are able to recognize objects by separating them into geons, which are the object’s main component parts as you can see here.

But if we start replacing the individual parts with geons that are at the outer limits of recognizable patterns, to the point that if I disconnect them they barely make any sense on their own – for instance if I do this:

This weird thing:

Plus this other weird thing:

Which are 2 barely recognizable objects, when put together, become something we can finally identify:

(It’s a weird mug).

As long as I keep them together, I can still identify the whole as something that makes sense.

And it’s not only a mug, but a very unique and original mug.

The whole helps us make sense of the individual parts. Sometimes that means we can take some freedom in the sounds that we use, as long as they make sense within the whole. So using certain sounds that are evocative rather than purely descriptive or scientifically accurate, allows us to offer something that has more of a unique character, and even gives us an opportunity to represent not only the visual and descriptive characteristics of a scene, but also its psychological quality and tone, its mood.

For example in the following area:

There is a large metallic structure, part of a dig site, that descends into this cavernous hole, through which Lara will go and find that it’s a bit of a horror scene. So the atmosphere we want to create here is not only one of a cave, but one of mystery, unease, apprehension. So basically it’s not only the physical characteristics that we are trying to depict through the sound design, but also the emotional ones.

I felt like this situation called for this type of evocative sound design strategy, so I used a recording I made with contact microphones of rain falling on a metallic fence. On its own, it’s not something you would hear in this context, as it could not normally be heard with the naked ear at all. But when you put it in there with the rest – the fact that there is a metallic structure in the scene, that metal is the main recognisable material in the recording, and that there are other metallic creaks in the soundscape – our brains just make it work by association, and it helps create a more unique atmosphere rather than simply relying on accurately representing the space as we would hear it in reality.

Listen to the contact mic recording first so you can hear what to listen for in game:

And now watch this video example, which includes this recording in the scene:

So in summary, some abstraction can still feel realistic enough, provided that it is given the right context.

General Strategy 3 – Worldizing

Most of you will be familiar with the technique of worldizing. But note that here I am talking about worldizing in the simplest possible way: from the start, record the sounds you need in an environment that is as close as possible to what it is supposed to be in the game, instead of recording only ‘dry’ sound effects and trying to replicate or emulate certain conditions through effects and filters.

For example, I used some recordings of trees creaking in which you can hear a lot of wind, birds and other things, as well as a nice natural reverb, and I placed those within the jungle as positional sources, instead of placing only dry, isolated sounds and applying reverb on them. There are some of those as well, but using a combination can make the whole soundscape more believable.

Listen to the recording first:

And now watch how it sounds in game:

This could be applied not only to ambience but also to interactive elements: for example, you could have multiple recordings for various types of spaces for things like guns, or anything else really, as sketched below.
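As a trivial sketch of what that could look like at runtime, here is a hypothetical asset picker in C++ (the space types and asset names are illustrative, not from SOTR): instead of one dry gunshot plus a reverb send, you select a tail that was actually recorded in a matching environment.

```cpp
#include <string>

enum class SpaceType { Interior, Canyon, Forest, Open };

// Worldized one-shots: pick a gunshot tail recorded in an environment
// matching the player's current space (asset names are illustrative).
std::string GunTailAsset(SpaceType space) {
    switch (space) {
        case SpaceType::Interior: return "gun_tail_room";
        case SpaceType::Canyon:   return "gun_tail_canyon";
        case SpaceType::Forest:   return "gun_tail_forest";
        default:                  return "gun_tail_open";
    }
}
```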

General Strategy 4 – Controlling Focus

In our experiences of the real world, our brains have this ability to focus our auditory attention on a particular stimulus while filtering out a range of other stimuli. This is called the Cocktail Party Effect, and it happens naturally, without us consciously making that effort. 

In a game, so much can be happening at the same time, between ambiences, Foley, music, sound effects, surrounding VO both important and unimportant, and so on, and it’s all concentrated in that screen in front of us, on which we focus our attention. So if we don’t fake that cocktail party effect, it actually ends up sounding less realistic, too chaotic, which doesn’t reflect the way that we would process our surrounding soundscape in reality. Here the goal is to get closer to that sense of realism by cheating and intervening with technology to replicate what we normally do organically.

This can be achieved through:

Azimuth: Make volume curves based on the field of view, which will attenuate the sounds of the sources behind you and bring into focus the ones you are looking at (see the sketch after the ducking example below).

Loudness: Making some sounds deliberately louder than they should be so that we can hear them no matter the context, such as making sure you can hear objects that are important to the gameplay even through noisy ambiences like thunderstorms.

Ducking: Ducking some sounds when important ones occur so that they seem louder and come through the mix more easily, as we would perceive them to do in real life.

For instance our weapons and explosions duck some of the ambience, SFX and even music so that they have more impact:
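Coming back to the azimuth idea, here is the sketch promised above. The actual volume curves in the game were authored by hand in our tools, but the underlying computation amounts to deriving a gain from the angle between the camera’s forward vector and the direction to the source. The function below is a hypothetical C++ illustration, not our actual code; the cone angle and rear gain would be tuning parameters.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Attenuate sources outside the field of view: full volume inside the
// FOV cone, fading down to 'rearGain' for sources directly behind.
// Assumes camForward is normalized.
float FocusGain(Vec3 camForward, Vec3 camPos, Vec3 srcPos,
                float fovHalfAngleRad, float rearGain) {
    const Vec3 toSrc = { srcPos.x - camPos.x, srcPos.y - camPos.y, srcPos.z - camPos.z };
    const float len = std::sqrt(Dot(toSrc, toSrc));
    if (len < 1e-4f) return 1.0f; // source right on top of the listener
    const float cosAngle = std::clamp(Dot(camForward, toSrc) / len, -1.0f, 1.0f);
    const float angle = std::acos(cosAngle);
    if (angle <= fovHalfAngleRad) return 1.0f;
    // Linear fade from the cone edge (gain 1) to directly behind (rearGain).
    const float t = (angle - fovHalfAngleRad) / (3.14159265f - fovHalfAngleRad);
    return 1.0f + t * (rearGain - 1.0f);
}
```

Ducking itself is typically a built-in feature of audio middleware, so in practice it is a matter of authoring which buses duck which, by how much, and with what attack and release times.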

SECTION 3 – SOUND DESIGN OPPORTUNITIES

Moving on to sound design opportunities, in which all of the strategies mentioned above can be applied as well.

Sound design opportunities are moments favorable to featuring creative, original, ‘ear-catching’ sound design which will emphasize the emotional qualities that are meant to be transmitted through a particular moment of gameplay.

Those opportunities can arguably be the same whether the game is said to be ‘realistic’ or not, but the sound design strategies that we use in the context of a ‘realistic’ game might differ. In a linear, narrative-driven game such as Shadow of the Tomb Raider, these opportunities are many; I will go through a few examples.

Sound Design Opportunity 1: Location Reveal

Sound can help emphasize the time and space where the next events are about to take place. It can also be an excellent opportunity to set an emotional tone.

While there is a lot of value in having the soundscape completely interactive and positional, on a location reveal or a vista, ambiences can be faked (scripted) to fit the screen and support the mood; this faked ambience can then disappear gradually and transition to the actual in-game interactive material. This allows more control over what kind of material and sentiment to present to the player during their first contact with the space, rather than leaving it to chance.
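Under the hood, this handover can be as simple as a timed, equal-power crossfade between the scripted bed and the interactive ambience. The sketch below is a hypothetical C++ illustration of that idea, not our actual SOTR scripting:

```cpp
#include <algorithm>
#include <cmath>

// Gains for the handover from a scripted reveal bed to the interactive
// 3D ambience. 't' is seconds since the transition started. Equal-power
// so the overall ambience level stays steady during the crossfade.
struct HandoverGains { float revealBed; float interactive; };

HandoverGains RevealHandover(float t, float fadeSeconds) {
    const float a = std::clamp(t / fadeSeconds, 0.0f, 1.0f);
    const float halfPi = 1.57079633f;
    return { std::cos(a * halfPi), std::sin(a * halfPi) };
}
```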

For example, in this vista of Paititi, you can hear birds, walla, horns blowing – a lot of liveliness in general – meant to communicate the sense of wonder Lara is feeling when discovering this hidden city for the first time and taking in the scale of the place and its energy.

Sound Design Opportunity 2: Ambiences

I have a lot to say about ambiences.

In a similar way to location reveals, ambience design can be very useful to reinforce the emotional tone as well as the sense of place, but it differs in that, instead of being punctual and scripted, ambiences play during long stretches of gameplay during which the narrative development stays relatively the same (the player usually remains within the same environment, exploring or traversing). So the strength of good ambiences is their contribution to a sense of immersion, reinforcing the feeling of actually being there and making the player believe in the environment.

How to achieve that can take many forms:

  1. Contrast

Emphasize the contrast between one space and another by using different types of assets. For instance, even if the geographic locations are not far apart, if the visual and emotional tones are different, the sound must also underline that change. And even if the visual tone is the same or similar (and probably especially then), sound can really help create the feeling that you are actually in a different place than you were in the previous chapter, and help create new bearings through which the players can locate themselves more easily. In Tomb Raider, our ambience system involved placing by hand every single sound source that was part of the ambiences; there were no quads or 2D ambiences, only positional 3D emitters for every single element constituting the ambiences. This gave us a lot of control over the sense of progression through one space as well as from one space to another.

For instance, one forest can contain a few bird species while the other has different ones (even if all of them are said to live in that area),

Or

one forest can be heavier on birds while the other is heavier on insects (maybe that says something about humidity and the amount of sunlight that penetrates through the trees),

Or

one forest can feel more dense and claustrophobic by placing the sound sources much closer to the player’s path and the other more spacious and wide by placing the sound sources further away with more reverb.

To summarize this point let’s listen to some forest examples taken from the game, they are all in Peru, fairly close to each other, but they all sound quite different depending on the tone.

The first one is meant to be more hostile, dark and claustrophobic, a space in which Lara is vulnerable:

In this second one, Lara goes from a small lush oasis to this forest laced with ruins. She believes she has just found the hidden city, but (spoiler alert!) she hasn’t; in fact she’s about to find it just a bit after that. This space is abandoned and feels very empty – kind of the underwhelming, more realistic version of Paititi that Lara was expecting to find, which itself will contrast with the lively actual Paititi which we saw in the reveal example earlier.

And finally I have two videos showing the same jungle area. In the first one, Lara is about to fight the creature that lives in this part of the forest and terrorizes the wildlife – it’s quiet and creepy. In the second one, she has killed the creature and the forest has come back to life; all the birds are back and there is even some additional reverb to make it feel even more lush and serene.

Before the fight:

After the fight:

The same strategies can be applied to any type of environment (hubs, puzzles, combat spaces, etc) – simply pinpoint a few elements that are meant to be specific to the place and emphasize them.

2. Progression

Also, ambiences can greatly contribute to the storytelling and narrative development. Tomb Raider is not really an open world game, even if some hub or exploration areas let you move freely within them. The story takes you from point A to point B, never really looking back. There is rarely any possibility for backtracking, and if there is, you won’t get very far. The story and the game compel you to move forward, so it is important that the sound moves forward and evolves as well, along with the story and the character. One way to do that is to emphasize a sense of progression, from the start to the end of the game, but also simply within one space, creating different moods and contrasts.

In this example, Lara goes from a village area to a jungle area, as part of the same general space, and this is where our 3D sources ambience system really helped us: you can hear the various elements of the ambience transform as she walks from one space to the next.

3. Sound design the invisible

Another way to make the most of ambiences as sound design opportunities is to give a story to the various locations, beyond what is visible on screen, and sound design the invisible. In a way you have just heard some of that in the jungle, as none of the life we hear is actually seen; we place sound sources without there necessarily being a visual counterpart. But it can go a bit further than that.

For instance, if Lara finds herself in a stone structure like a ruined temple which looks like it’s standing pretty still, it is still possible to use sound to reinforce the idea that this structure has been there for thousands of years and is effectively in ruins: by adding some rock rumble and stress, debris falling and crumbling around the place, and water drips to indicate that the elements have penetrated the outer walls. Some bats chirping can also indicate that nature has somewhat overgrown and overtaken the space and claimed it, so that it’s not quite welcoming for human visitors anymore.
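A common way to implement this kind of invisible life, and a reasonable guess at what such a system boils down to in most engines, is a scatterer that fires randomized one-shots at random intervals and positions around the listener. A minimal C++ sketch (all names and parameters are illustrative):

```cpp
#include <cmath>
#include <random>
#include <string>
#include <vector>

// Fires randomized one-shots (debris, drips, bat chirps...) at random
// intervals and random positions around the listener, to keep a "still"
// space alive without any visible source.
struct OneShotScatterer {
    std::vector<std::string> assets;  // pool of variations to pick from
    float minInterval, maxInterval;   // seconds between triggers
    float minRadius, maxRadius;       // spawn distance from the listener
    float nextTriggerIn = 0.0f;
    std::mt19937 rng{std::random_device{}()};

    // Call once per frame. Returns the asset to spawn (empty if none)
    // and writes a random horizontal offset relative to the listener.
    std::string Update(float dt, float& outX, float& outZ) {
        nextTriggerIn -= dt;
        if (nextTriggerIn > 0.0f || assets.empty()) return "";
        std::uniform_real_distribution<float> interval(minInterval, maxInterval);
        std::uniform_real_distribution<float> radius(minRadius, maxRadius);
        std::uniform_real_distribution<float> angle(0.0f, 6.2831853f);
        nextTriggerIn = interval(rng);
        const float r = radius(rng), a = angle(rng);
        outX = r * std::cos(a);
        outZ = r * std::sin(a);
        std::uniform_int_distribution<size_t> pick(0, assets.size() - 1);
        return assets[pick(rng)];
    }
};
```

Each triggered one-shot then gets spawned as a positional 3D source at the computed offset, so the debris, drips and bats never come from the same place twice.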

It is also possible to use more abstract sounds which, according to my evocation strategy, will make sense within that greater environment. For instance, we have used eerie tones extracted from wooden whistles, flutes and other instruments from that region – sounds our ears are not quite used to and can’t quite identify. They blur the line between sound design and music, but help bring more body to the ambience, which might actually be quite dead if we only relied on what should be there to populate it.

In this example, Lara moves through a puzzle area inside a temple in ruins, where you can hear fire as it is part of the scene; beyond that there are some debris sounds, wood rattling in the wind, distant birds that you can hear through the hole in the roof, etc. The second part is after the puzzle space: it’s a lot quieter and some abstract sounds have been placed, all positional and 3D in the scene (there is no actual music).

Sound Design Opportunity 3: Scripted Events

Moving on to the next opportunity: punctual or interactive scripted events or sequences.

I am talking about an event or a sequence of events that is scripted to happen at some point in the narrative development. These moments are usually quite cinematic and rely on sensation/sensationalism to communicate a sense of drama and excitement to the player. 

In the same way that camera movements and animations are often custom-created to fit certain scripted events, sound should be designed with the purpose of making that sequence stand out from the rest of the ‘systemic’ gameplay in mind. The ways to achieve that highly depend on what the sequence actually is, so I will jump directly to an example:

Throughout this piece of traversal, Lara has to hang on to wooden cages and ledges that break under her, so it’s full of scripted events:

And another straightforward example of a scripted destruction on Lara’s path:

Sound Design Opportunity 4: Features and Mechanics

The specific features and mechanics present in a game kind of define the game. They are present throughout and constitute most of the gameplay experience. Examples include combat, weapons, traversal, underwater traversal, etc.

Each and every one of these features needs strong sound design support, reinforcing its respective nature (aggressive, friendly, fast-paced, adventurous, emotionally driven, etc).

In Tomb Raider, traversal is a really important part of the gameplay. This is a good opportunity for sound design to reinforce a sense of adrenaline, danger, vertigo, breathlessness, or even fear when needed.

In this first example, Lara is traversing underwater. Since there is less room for ambience design and moment-specific sound design when she is just swimming around, the systemic swim sounds include movements for arms and legs, so that as players with just a controller in our hands we can still feel the motion and the impact of Lara’s movements in her environment.

This last example of my presentation is actually a good example of traversal, scripted events and ambiences all in one:

SECTION 4 – CONCLUSION

In summary, in games, and in the broader context of entertainment and immersive media, scientific recordings and ‘accurate’ representations don’t always sound as good as we expect or want them to, and this is where creativity comes in. So in order to deliver a unique, good sounding experience when working on this type of realistic game, the question to ask yourself is: how will you make reality sound better, more exciting, more immersive and more interesting than it really is?

Thank you!

PLAY Expo Manchester

On 8-9 October, the PLAY Expo event took place in Manchester, where The Sound Architect organised and put together two full days of presentations and interviews. I had the opportunity to attend and listen to the valuable insights shared by the guest speakers and interviewees, as well as discuss and socialise with fellow game audio professionals. It was overall a successful event and a lovely weekend, allowing passionate people to get together and exchange knowledge. Here is my brief summary of the event.

Saturday 8 October

11:00 Presentation: Ash Read – Eve: Valkyrie


The weekend started with Ash Read, sound designer at CCP working on Eve: Valkyrie, telling us about his experience with VR audio.

We were first enlightened about some of the ways in which VR audio differs from ‘2D’ or ‘TV’ audio, and briefly about what the ‘sonic mission’ consists of in this context. Specifically in Eve: Valkyrie, a chaotic space battle environment where a lot is happening, constantly, everywhere, the role of audio includes:

  • Keep the pilot (player) informed
  • Keep the pilot (player) immersed

In a visually saturated environment, audio is a great way to maintain focus on the important gameplay elements and help the player remain alert and immersed.

What is also different in VR audio is the greater level of listener movement, so techniques need to be developed to implement audio in a context where the listener’s head doesn’t stay still. One of these techniques involves HRTFs (Head Related Transfer Functions).

Put shortly, HRTFs help the listener locate where a sound is coming from and detail its 3D positioning, but also more accurately portray the subtle modifications a sound undergoes while travelling.

For instance, the distance and positioning of an object is not only expressed sonically through attenuation, but also by introducing the sound reflections of a specific environment, and by creating a sense of elevation.

We then learned about how audio in VR may contribute to reducing the motion sickness often associated with VR, by helping the visuals compensate for the feeling of disconnect that is partly responsible for it.

Since VR usually means playing with headphones on, the Valkyrie audio team decided to include some customisable audio options for the player, such as an audio enhancement slider, which helps bring focus onto important sounds.

The sound design of Valkyrie is meant to be rugged, to convey the raw energy of the game, and to be strong in details. With that in mind, the team is constantly aiming to improve audio along with the game updates. For instance, they plan to breathe more life into the cockpit by focusing on its resonance and enhancing the deterioration effects.

Ash’s presentation was concluded with a playback of their recently released launch trailer for PS VR, the audio for which was beautifully done by Sweet Justice Sound.

You can watch the trailer here: https://www.youtube.com/watch?v=AZNff-of63U

12:00 Presentation: Simon Gumbleton – PlayStation VR Worlds


Technical sound designer Simon Gumbleton then followed to tell us about the audio design and implementation in Sony’s PlayStation VR Worlds.

The VR Worlds game is rather like a collection of bespoke VR experiences, each presenting a different approach to player experience. Over the course of the development of those various experiences, the dev and audio teams have experimented, learned, and shaped their approaches, while exploring uncharted territories and encountering new challenges.

1st experience: Ocean Descent

Being the first experience they worked on, it laid the foundation of their work and allowed for experimentation and learning. The audio team developed techniques such as the Focus System, where, after something has been in the player’s focus for a short amount of time, the listener starts to hear accentuated details of it. You could see it as a game audio implementation of the cocktail party effect.

They also developed a technique concerning the player breathing, where they introduce breathing sounds at first, and eventually pull them out once the player has acclimated to the environment, where they become somewhat subconscious.

Similarly, they explored ways to implement avatar sounds, and found that, while these usually reinforce the player’s presence in the world, in VR there is a fine line between them being reinforcing and being distracting. In short, the sounds heard need to be reflected by movements actually seen in game. This means that you would only hear avatar sounds related to head movements, which have a direct impact on visuals, as opposed to body movements, which you cannot see.

2nd experience: The London Heist

In this experience, there was more opportunity to experiment with interactive objects: to design believable audio feedback and to improve the tactile one-to-one interactions.

In order to do so, they implemented the sound of every interactable object in multiple layers. For instance, a drawer opening won’t be recorded as one sound and then played back on the event of opening this drawer in game. This drawer can be interacted with in many ways, so its sounds are integrated with a combination of parameters and layers in order to play back an accurate sonic response to the type of movement generated by the player’s actions.
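Simon didn’t show the exact parameter setup, so the following is only a guessed sketch of the general principle in C++: map the speed of the player’s hand onto the gains of a few authored layers. The layer names and thresholds are invented for illustration.

```cpp
#include <algorithm>

// Layer gains for a physically manipulated object such as the drawer:
// a slide layer that tracks any movement, a friction layer that grows
// with speed, and a rattle layer only for fast, rough handling.
struct DrawerLayerGains { float slide; float friction; float rattle; };

DrawerLayerGains GainsFromVelocity(float speed /* m/s, from hand tracking */) {
    auto ramp = [](float v, float lo, float hi) {
        return std::clamp((v - lo) / (hi - lo), 0.0f, 1.0f);
    };
    DrawerLayerGains g;
    g.slide    = ramp(speed, 0.01f, 0.10f); // any movement at all
    g.friction = ramp(speed, 0.10f, 0.60f); // grows with speed
    g.rattle   = ramp(speed, 0.60f, 1.50f); // only when yanked hard
    return g;
}
```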

Another example is the cigar smoking being driven by the player’s breathing: the microphone input communicates with the game and drives the interaction with the cigar for an optimally immersive experience.

Detailed foley of the characters also proves to help bring them to life. Every detail is captured and realised, down to counting the number of rings on a character’s hand and implementing their movement sounds accordingly.

Dynamic reverb gives the player information about the space and the sounds generated in it. A detailed and informative environment is created with the help of physically based reflection tails, as well as material dependent filters, all processed at run time. It’s all about making the environment feel more believable.

3rd experience: Scavengers Odyssey

This experience was developed later, so they were able to take their learnings from the previous experiences and apply them, and even push the limits further.

For instance, since this experience takes place in space and there is no real ‘room’ to generate a detailed reflection-based reverb, they focused on implementing the sound as if it was heard through the cockpit.

Simon also emphasized how important detail is: in VR, the player will subconsciously have very high expectations of it. This is achieved through lots of layering, and many discrete audio sources within the world.

Such detail inevitably brings tech challenges in relation to the performance of the audio engine, which will require a lot of optimisation work.

The ambiences have been implemented fully dynamically, where textures are created without any loops and are constantly evolving in game.

In terms of spatialisation, they tied all the SFX to the corresponding VFX within the world for optimal sync and highly accurate positioning.

They also emphasized important transitions in the environment by adding special transition emitters in critical places.

Music

As for the music, they experimented with its positioning, namely whether it should be placed inside the world or not, and mostly proceeded with a quad array implementation in passive environments.

They did have some opportunity to experiment with the unique VR ability to look up and down, for instance in Ocean Descent, where adaptive music accentuated the feeling of darkness and depth versus brightness and light when looking down and up in the water.

The Hub

This interactive menu is an experience in itself. It is the first space you are launched into when starting the game, and sets up the expectations for the rest. They needed to build a sense of immersion already, and put the same level of detail into the hub as anywhere else in order to maintain immersion when transitioning from one experience to another.

Finally, this collection of experiences needed to remain coherent overall and maintain a smoothness through every transition. This was accomplished through rigorous mixing, and by establishing a clear code regarding loudness and dynamics which would be applied throughout the entire game.

PlayStation VR Worlds is due to be released on 13 October 2016, you can watch the trailer here: https://www.youtube.com/watch?v=yFnciHpEOMI

13:00 Interview: Voice Actor, Alix Wilton Regan – Dragon Age, Forza, Mass Effect, LBP3


Alix Wilton Regan told us about voice acting in video games in the form of an interview, led by Sam Hughes.

Some thoughts were shared about career paths and working in games versus television, along with some tips for starting actors.

Alix Wilton Regan has started a fundraising campaign, a charitable initiative to help refugees in Calais, check it out!

https://gogetfunding.com/play-4-calais/

14:00 Interview: Composer, David Housden – Thomas Was Alone


Another interview followed with David Housden, composer on Thomas Was Alone and Volume. The interview was held in a similar way, starting with some thoughts on career progression, following with some details about his work on past and current titles, and concluding with advice on freelancing.

15:00 Presentation: Composer & Sound Designer, Matt Griffin – Unbox


Composer Matt Griffin then presented how the sound design and music for the game Unbox was implemented using FMOD.

One of the main audio goals for this entertaining game was to make it interactive and fun. In order to do so, Matt found ways to make the menus generative and sometimes reactive to timing, such as the menu music.

We were shown the FMOD project and its structure to illustrate this dynamic implementation. For the menu music, the use of transitions, quantizations and multi sound objects was key.

For the main world music, each NPC has its own layer of music, linked to a distance parameter. Some other techniques were used to make the music dynamic, such as a ‘challenge’ music giving the player feedback on progression and timing, and multiplayer music with a double-tempo 30-second countdown.

In terms of sound design, the ‘unbox’ sound presented a challenge, as it is very frequently played throughout the game. In order to not make it too repetitive, it was implemented using multiple layers of multi sound objects, along with pitch randomisation on its various components and a parameter tracking how many ‘unboxes’ have been heard so far.
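Since the talk walked through the FMOD project itself, here is roughly what the game-side trigger could look like with the FMOD Studio C++ API. The event path and parameter name are my own illustrative inventions, not Unbox’s actual project, and error checking is omitted; the pitch randomisation itself lives on the sounds authored in FMOD Studio, so the game only supplies the repetition parameter.

```cpp
#include <fmod_studio.hpp> // FMOD Studio API

// Trigger one 'unbox' and feed the running count into the event, so
// the authored layers can vary with repetition.
void PlayUnbox(FMOD::Studio::System* system, int unboxCount) {
    FMOD::Studio::EventDescription* desc = nullptr;
    system->getEvent("event:/SFX/Unbox", &desc); // illustrative path

    FMOD::Studio::EventInstance* instance = nullptr;
    desc->createInstance(&instance);

    instance->setParameterByName("UnboxCount", (float)unboxCount);
    instance->start();
    instance->release(); // frees the instance once the one-shot ends
}
```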

An extensive amount of work was also realised for the box impact sounds on various surfaces, taking velocity into account.

For the character sounds, a sort of indecipherable blabber, individual syllables were recorded and then assembled together in game using FMOD’s Scatterer sound object.

16:00 Interactive Interview: Martin Stig Andersen – Limbo


Martin Stig Andersen, composer and sound designer on Limbo and Inside was then interviewed by Sam Hughes.

Similarly to the previous interviews, some questions relating to career paths were answered first, relating how Martin started in instrumental composition, shifted towards electroacoustic composition (musique concrète), and later moved into experimental short films.

His work often speaks of realism and abstraction, where sound design and music combine to form one holistic soundscape.

Martin explained how he was able to improve his work on audio for Inside compared to Limbo as he was brought onto the project at a much earlier stage, and was able to tackle larger tech issues, such as the ‘death-respawn’ sequence.

More info on the death-respawn sequence in this video : http://www.gdcvault.com/play/1023731/A-Game-That-Listens-The

Some more details were provided about the audio implementation for Inside, for instance the way the sound of the shock wave is filtered depending on the player’s current cover status, or how audio is used to communicate to the player how well he/she is doing in the progression of the puzzle.

We also learned more about the mysterious recording techniques used for Inside involving a human skull and audio transducers.

More details here: http://www.gamasutra.com/view/news/282595/Audio_Design_Deep_Dive_Using_a_human_skull_to_create_the_sounds_of_Inside.php

17:00 Audio Panel: Adam Hay, David Housden, Matt Griffin

The first day ended with a panel featuring the above participants, sharing some thoughts on game audio in general, freelancing, and what will come next.

Sunday 9 October

11:00 Interview & Gameplay: Martin Stig Andersen – Limbo

The day started by inviting Martin Stig Andersen again to the stage, where the interview was roughly the same as the previous day.

12:00 Interview: Nathan McCree, Composer & Audio Designer


At midday, the audience crowded up as the composer for the first three Tomb Raider games was being interviewed by Sam Hughes.

Some questions about career progressions were followed by some words about the score and how Nathan came to compose a melody that he felt really represented the character.

The composer also announced The Tomb Raider Suite, a way to celebrate Tomb Raider’s 20th anniversary through music, where his work will be played by a live orchestra before the end of the year.

More details here:

http://tombraider.tumblr.com/post/143228470745/pax-east-tombraider20-announcement-the-tomb

13:00 Presentation: Voice Actor, Jay Britton – Fragments of Him, Strife


Next, voice actor Jay Britton gave us a lively presentation on the work of a voice actor in video games, involving a demo of a recording session. He gave us some advice on how to get started as a voice actor in games, including:

  • There is no one single path
  • Start small, work your way up
  • Continually improve your skills
  • Network
  • Get trained in videogame performance
  • Get trained in motion capture and facial capture
  • Consider on-screen acting
  • Speak to indie devs
  • Get an agent

He followed by giving advice on how to come up with new voice characters using your own voice, while giving us some convincing demonstrations.

14:00 Interview: Audio Designer, Adam Hay – Everybody’s Gone To The Rapture


Sound designer Adam Hay was then interviewed about his work on both Dear Esther and Everybody’s Gone To The Rapture.

He mentioned how the narrative journey is of crucial importance in both these games, and how the sound helps the player progress through them.

16:00 Audio Panel: Simon Gumbleton, Ash Read, David Housden


Finally, the weekend ended (before giving the stage to live musicians) with a VR audio panel, giving us some additional insight on the challenges surrounding VR audio, such as the processing power involved in sound spatialisation, and how everything has to be thought through in a slightly different way than usual.

Voilà, a very busy weekend full of interesting insights and advice. A massive thanks to The Sound Architect crew for putting this together, hopefully this can take place again next year! 🙂

Develop: Brighton Game Dev Conference

I just came back from Brighton for the Develop: Brighton game dev conference. I was there only on Thursday 14 July for the Audio Day, and here are my thoughts and brief summary.

It.Was.Great.

Amazing.Instructive.Inspiring.

The Audio Track was incredible, lining up wonderful speakers with so much to say!

The day started at 10 am with a short welcome and intro from John Broomhall (MC for the day), and a showing of an excerpt from the Beep Movie, to be released this summer. Jory Prum was meant to give the introduction but very sadly passed away recently following a motorcycle accident.

The excerpt presented instead showed him in his studio talking about his sound design toys:

 

10.15 am – Until Dawn – Linear Learnings For Improved Interactive Nuance

The first presentation was given by Barney Pratt, Audio Director at Supermassive Games, telling us about the audio design and integration in their game Until Dawn.

We learned about branching narratives and adapting film editing techniques for cinematic interactive media, dealing with determinate versus variable pieces of scenario.

Barney gave us some insight on how they created immersive character Foley using procedural, velocity-sensitive techniques for footsteps and surfaces, knees, elbows, wrists and more. The procedural system was overlaid with long wav files per character for the determinate parts, providing a highly realistic feel to the characters’ movements.

He then shared a bit about their dialog mixing challenges and solutions: where a center speaker dialog mix and surround panning didn’t exactly offer what they were looking for, they came up with a 50% center-biased panning system which seems to have been successful (we heard a convincing excerpt from the game comparing these strategies). Put simply, this ‘soft panning’ technique provided the realism, voyeurism and immersion required by the genre.
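Barney didn’t show code, but a ‘50% center bias’ plausibly means splitting each dialogue source’s power between the center speaker and its normal phantom pan position. A rough C++ sketch of that interpretation (pan in [-1, 1], constant power throughout):

```cpp
#include <cmath>

// Front L/C/R gains for a dialogue source. 'pan' runs from -1 (hard
// left) through 0 (center) to 1 (hard right). 'centerBias' = 0.5 sends
// half the power to the center speaker and the rest to a constant-power
// L/R pan, softening the panning without losing it entirely.
struct FrontGains { float left; float center; float right; };

FrontGains SoftPan(float pan, float centerBias) {
    const float a = (pan + 1.0f) * 0.25f * 3.14159265f; // [-1,1] -> [0, pi/2]
    const float l = std::cos(a), r = std::sin(a);       // constant-power pan
    const float phantom = std::sqrt(1.0f - centerBias);
    const float center  = std::sqrt(centerBias);
    return { l * phantom, center, r * phantom };
}
```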

Finally, Barney told us about their collaboration with composer Jason Graves to achieve incredible emotional nuances, with techniques once again inspired by film editing.

For instance, they wanted to avoid stems, states and randomisation in order to respect the cinematic quality of the game, as opposed to the techniques used for an open-world type of game.

The goal was to generate a visceral response with the music and sound effects. After watching a few excerpts, even in this analytic and totally non-immersive context, I can tell you they succeeded. I jumped a few times myself and, although (or maybe because) the audio for this game is truly amazing, I will never play it, as doing so would prevent me from sleeping for weeks to come…

11.20 am – VR Audio Round Table

Then followed a round table about VR audio, featuring Barney Pratt (Supermassive Games), Matt Simmonds (nDreams) and Todd Baker (Freelance, known for Land’s End).

They discussed 3D positioning techniques, the role and place of the music, as well as HRTF & binaural audio issues. An overall interesting and instructive talk providing a well appreciated perspective on VR audio from some of the few people among us who have released a VR title.

12.20 – Creating New Sonics for Quantum Break

The stage then belonged to Richard Lapington, Audio Lead at Remedy Games. He revealed the complex audio system behind Quantum Break‘s Stutters – those moments during gameplay when time is broken.

The team was dealing with some design challenges, for instance the need for a strong sonic signature and the necessity of being instantly recognisable and convincing. In order to reach those goals, they opted to rely on the visual inspiration the concept and VFX artists were using as a driving force for the audio design.

Then, when they came up with a suitable sound prototype, they reverse engineered it and extrapolated an aesthetic which could be put into a system.

This system turned out to be an impressive collaboration between the audio and VFX teams, where VFX was driven by real-time FFT analysis operated by a proprietary plugin. This, paired with real-time granular synthesis, resulted in a truly holistic experience. Amazing work.
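The proprietary plugin wasn’t detailed, but the core idea of handing per-band signal energy from the audio to the VFX system can be approximated with something as cheap as one Goertzel filter per band (a single-bin DFT, swapped in here for a full FFT just to keep the sketch short and self-contained):

```cpp
#include <cmath>
#include <cstddef>

// Approximate energy of one frequency band in an audio block, via the
// Goertzel algorithm. Run a few of these per frame (low / mid / high)
// and drive VFX parameters from the results. For frequencies that do
// not fall exactly on a bin this is an approximation, which is fine
// for driving visuals.
float BandPower(const float* samples, size_t n, float freqHz, float sampleRate) {
    const float w = 2.0f * 3.14159265f * freqHz / sampleRate;
    const float coeff = 2.0f * std::cos(w);
    float s1 = 0.0f, s2 = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        const float s0 = samples[i] + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    // Squared magnitude of the bin, normalised by block length.
    return (s1 * s1 + s2 * s2 - coeff * s1 * s2) / (float)(n * n);
}
```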

// lunch //

I went to take a look at the expo during lunch time and tried the PlayStation VR headset with the game Battlezone from Rebellion.

I only tried it for a few minutes so I can’t give a full review, but I enjoyed the experience; the game was impressive visually. Unfortunately I couldn’t get a clear listen as the expo was noisy, but I had enough of a taste to understand all that could be done with audio in VR and the challenges that it can pose. Would love to give this a try…

2 pm – The Freelance Dance

The afternoon session started off with a panel featuring Kenny Young (AudBod), Todd Baker (Land’s End), Rebecca Parnell (MagicBrew), and Chris Sweetman (Sweet Justice).

They shared their respective experiences as freelancers and compared the freelance versus in-house position and lifestyle.

The moral of the story was that both have their pros and cons, but mostly they all agreed that if you want to be a freelancer, it’s a great plus to have some in-house experience first, and not start on your own right out of uni.

3 pm – Assassins Creed Syndicate: Sonic Navigation & Identity In Victorian London

Next on was Lydia Andrew, Audio Director at Ubisoft Quebec.

She explained how they focused on the player experience through audio in Assassins Creed Syndicate, and collaborated with composer Austin Wintory to create an immersive, seamless soundtrack that gives identity to the universe.

They were careful to give a sonic identity to each borough of Victorian London, both through sound (ambiences, SFX, crowds, vehicles) and music. They researched Victorian music to suit the different boroughs and sought the advice of Professor Derek Scott to reach the highest possible historical accuracy.

It was a very detailed presentation of the techniques used to blend diegetic and non-diegetic music, given by a wonderfully spirited and inspiring Audio Director.

4.15 pm – Dialogue Masterclass – Getting The Best From Voice Actors For Games

Mark Estdale followed with a presentation on how to direct a voice acting session, and how to give the actor the best possible context to improve performance.

Neat tricks were given, such as ‘show, don’t tell’: use game assets to describe, give location, and respond to the actor’s lines. For instance, use the already recorded dialogue to reply to the actor’s lines, play background ambience, play accompanying music, and show the visual context. Even use spot effects if the intention is to create a surprise.

5.15 pm – Stay On Target – The Sound Of Star Wars: Battlefront

This talk was outstanding. Impressive. Inspiring. Brilliant way to end the day of presentations. A cherry on the cake. Cookies on the ice cream.


You could practically see the members of the audience salivating with envy when David Jegutidse was describing the time he spent with Ben Burtt, hearing the master talk about his tools and watching him play with them, including the ancient analog synthesizer that was used to create the sounds of R2D2.

Together with Martin Wöhrer, he described how they adapted the Star Wars sounds to fit this modern game.

They collaborated with Skywalker Sound and got audio stems directly from the movies, as well as a library of sound effects and additional content on request.

In terms of designing new material, they were completely devoted to maintaining the original style and tone, and opted for organic sound design.

What this means (among other things) is retro processing through playback speed manipulation, worldising, and ring modulation, like they did back in the day.

It was a truly inspiring talk, giving a lot to think about to anyone working with an IP and adapting sound design from existing material and/or style and tone.

///

The day ended with an open mic, calling Todd Baker, Lydia Andrew, Rebecca Parnell, Chris Sweetman and Mark Estdale back to the table to discuss the future of game audio.


 

Overall an incredible day where I got to meet super interesting and wonderful people, definitely looking forward to next year!! 🙂

 

 

State of Play 2016 – Dublin

Yesterday (8 June 2016) I went to the State of Play event held at the Dublin Institute of Technology.

It was overall a great event; many speakers with relatively short talks (10-20 minutes each) kept the evening dynamic and filled with a variety of sage advice and colorful demonstrations.

Among the speakers were (not in order):

  • Owen LL Harris (also MC for the night) – http://owenllharris.com/
  • Llaura NicAodh – http://dreamfeel.org/
  • Kieran Nolan – http://kierannolan.com/
  • Robin Baumgarten – http://aipanic.com/
  • Evan Balster – http://imitone.com/
  • Charlene Putney – http://alphachar.com/
  • Jen Carey – http://www.ficklegames.com/
  • Sherida Halatoe – http://www.beyondeyes-game.com/home/4578546094
  • Kevin Murphy – http://www.retroneogames.com/

Unfortunately I didn’t take note of all the names and can’t find a complete list of speakers, so I might be forgetting one or more… sorry!

(Also William Pugh was meant to be there but unfortunately could not make it due to his recent leg injury. We wish you a quick recovery William!)

I strongly suggest you check out those websites, all of them had interesting things to say.

Among my favorites was definitely Robin Baumgarten and his ‘hardware experimental game projects’. He showed us a bit of his process while working on projects such as the Line Wobbler and A Dozen Sliders.

It is always inspiring to see someone creating something entirely new from scratch. Makes you want to lock yourself in a studio and do the same, because why not!

I was also surprised (or maybe not) that many of the talks related to the topics of coping with stress and creative blocks, motivation and self-care. The games industry is one that attracts passionate, talented people hoping to fulfill themselves working on a project they believe in. Most of the time I like to think that this is true, but it would be foolish to ignore the harsh reality of crunch times, crazy deadlines and the immense amount of pressure that come with the job.

I can imagine that all of the speakers went through this realisation more than once in their career, and provided us with their tips and techniques to try to stay sane in these periods of high stress.

There was also some talk about the value of networking (Kevin Murphy), as well as advice on how to create game narratives starting from personal experience (Sherida Halatoe).

Llaura’s talk, which was more storytelling than speech, was also very strong, as she played an excerpt of her latest game If Found Please Return, which seems really promising.

The generally informal tone to the evening made it refreshing and quite friendly. The event continued in an even more informal manner at the Odessa pub for some social drinks.

Looking forward to State of Play 2017!