I was recently in Amsterdam and my loyal Sony PCM-M10 came with me!
Here are a few recordings I’ve made during my trip 🙂
As someone passionate about environmental sound art, I wrote this post to provide some insight into my own creative processes, but also (and maybe mostly) to spread knowledge on these sometimes forgotten but fascinating topics.
Wabi-Sabi, Entropy and Acoustic Ecology as both conceptual and practical methods greatly influence my creative thinking: they shine through my sound design and recording approaches when working on personal projects, and help give me a sense of artistic direction and intention.
Hopefully the ideas expressed below will help the reader gain a better understanding of my perspective on environmental sound art.
First, let’s talk about Wabi-sabi. If you’ve never heard of it, here is the short version: a Japanese world view celebrating the beauty of things imperfect, impermanent and incomplete. It disregards the grandiose and flawless as aesthetic criteria, and rather looks for unique and unconventional characteristics in humble objects of everyday life. It is the acceptance of things as they are and of the constant motion occurring in nature.
Now for the long version.
Wabi-sabi is the opposite of materialism, of modernism, of stillness, of the spectacular, and of the orderly and the symmetrical.
It is nature-based, and refers to the rustic, the simple, the unsophisticated, and the unpretentious. It concerns the spatial and temporal events and occurrences surrounding us.
In Japanese, the meaning of ‘Wabi’ originally bore rather negative connotations of poverty and the barbarism that comes with living in remote regions, with few material goods and means. Over time, the term evolved into something much more positive, communicating how this type of life away from society and in isolation ‘’fosters an appreciation of the minor details of everyday life and insights into the beauty of the inconspicuous and overlooked aspects of nature’’.
Wabi-sabi and nature are intimately related, even though it is possible to observe the philosophy in any environment. It believes in the fundamental uncontrollability of nature, which resonates well with the complementary ideas of chaos, randomness and complex patterns.
In a way, it romanticises nature, calling for a deeper sense of perception than the superficial act of looking. It requires thinking, observation, patience, attention, and care.
It calls particular attention to natural degradation processes, such as corrosion and contamination. All forms of natural transformation make the expression of wabi-sabi richer.
One of the most important wabi-sabi spiritual values is that truth comes from the observation of nature. The Japanese have suffered extreme natural conditions over time including earthquakes, volcanic eruptions, typhoons, floods, fires, tidal waves, and more, and the wabi-sabi philosophy expresses some of their lessons learned:
All things, both tangible and intangible, wear down. Permanence can only ever be an illusion.
Nothing is flawless. So embrace the flaws as unique features, instead of masking them.
All things, including the universe itself, are in a constant, never-ending state of becoming or dissolving. The notion of completion has no basis in wabi-sabi.
The dimensions of time, transformation, degradation and metamorphosis are all present in the wabi-sabi philosophy.
It insists on the non everlasting quality of things – To every thing there is a season.
Its metaphysical basis includes principles such as the idea that ‘’things are either devolving toward, or evolving from, nothingness’’.
Wabi-sabi is about the delicate traces, this faint evidence, at the borders of nothingness. The universe destructs and constructs, evolves and devolves, and nothingness is, unintuitively, alive with possibility, or potential: ‘’In metaphysical terms, wabi-sabi suggests that the universe is in constant motion towards or away from potential’’.
According to the above statement, I find that memories and perception are equally part of the wabi-sabi recurring themes. It is about the subtleties, the non obvious, the once was, the potential to be, the insignificance of us as individuals in the grand scheme of things. Exploring wabi-sabi is exploring the line between construction and deconstruction, evolution and devolution.
Wabi-sabi states that ‘’greatness exists in the inconspicuous and overlooked details’’.
It is the opposite of the ideal of beauty as something monumental, spectacular and enduring.
Wabi-sabi is found in nature at moments of inception or subsiding. It is not about the gorgeous flowers, majestic trees or bold landscapes, it’s about the minor and the hidden, the tentative and the ephemeral: things so subtle and evanescent they are invisible to vulgar eyes.
It is quite easy to draw a parallel with recording approaches here: to experience wabi-sabi, one has to slow down, be patient, look (or listen) very closely, and pay attention to details. Patience is key.
Wabi-sabi also expresses that ‘’beauty can be coaxed out of ugliness’’.
Beauty is a dynamic event that occurs between you and something else. It is an altered state of consciousness, not an absolute. Thus the separation of beauty and non-beauty or ugliness is not in accordance with the wabi-sabi way of thinking.
Again, the parallel is easy to draw – who is to say which sounds are pleasing and which aren’t? Who is to determine what the universals of beauty are? To me, this is a personal, holistic experience, where individual perception plays a significant role. Whatever you capture out there is part of a greater, more intimate moment between you and your subject, and what you find beautiful may not appeal to someone else – but that’s part of what makes your subject unique.
Part of the wabi-sabi state of mind is the acceptance of the inevitable, and the appreciation of the evanescence of life. Wabi-sabi serenely contemplates our own mortality and finality, as part of a greater ensemble. Ecosystems are a good demonstration of this mindset: they are continually evolving (not everlasting), transforming, and complex, while their components are ephemeral and incomplete.
Wabi-sabi also celebrates natural degradation and entropy (in an artistic sense). For instance, signs of corrosion are a manifestation of nature following its course. In materials, this translates as the observation of cracks in clay as it dries, the color and textural metamorphosis of metal when it tarnishes and rusts. Those occurrences we are able to witness are a ‘’representation of the physical forces and deep structures that underlie our everyday world’’.
In terms of aesthetics, wabi-sabi always consists of a suggestion of natural processes.
Things wabi-sabi are expressions of time frozen. They are made of materials that are visibly vulnerable to the effects of weathering and human treatment. They record the sun, wind, rain, heat, and cold in a language of discoloration, rust, tarnish, stain, warping, shrinking, shriveling, and cracking. Their nicks, chips, bruises, scars, dents, peeling, and other forms of attrition are a testament to histories of use and misuse.
They are irregular (non repetitive), intimate (observed in proximity), unpretentious, and simple.
One way wabi-sabi translates into my recording approaches is through welcoming accidents and the unplanned. I accept the unexpected and imperfect as part of a greater order, and find the beauty in overlooked details.
For instance, I remember one day when I did some field recording in a remote forest. It wasn’t a hiking forest, it was an actual wild forest where it is absolutely impossible to set foot because the ground is either too dense or unsafe. Only one small dirt road ran across it, which allowed me to get closer. I went as far into the forest as I physically could, which was realistically right at the edge of a path. There I set up with my portable recorder and hit ‘record’. Headphones on, I could hear how alive the forest was. There was a mild breeze, making the tips of the trees dance and creak. Most trees were dry and in pretty rough shape, and bits were falling apart here and there. The entire forest was lamenting; it was really spooky. So I was recording, never wanting to hit ‘stop’ as I was entranced by what I was listening to, when suddenly this ‘accident’ happened. Somewhere not too far away, a big heavy branch fell down, totally ruining my set levels. Also I think I shouted a bit, I was totally taken by surprise. But that’s all fine. Because that was part of the moment, the experience, that unique soundscape. I kept all of those recordings.
I have an equally strong attraction to romanticism, which in many ways contradicts the wabi-sabi experience: bold landscapes, mountains, forces of nature and large-scale events also fascinate me. But how better to record a mountain and capture the grandiose than to seize all of its living components, its motion, its overlooked details, its changes, its gradation and degradation?
Now, about entropy.
In a scientific sense (and put simply), entropy is a measure of disorder. In thermodynamics, it is a property of a closed system’s state quantifying the amount of its energy unavailable for useful work – in other words, its degree of order or disorder.
In a poetic or artistic sense, it’s the quality of chaos and randomness, it’s the natural decay and transformation of things surrounding us, it’s the appreciation of uncertainty and finality, and the recognition of a tendency for all things to degrade towards nothingness.
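For the curious, the ‘measurement of disorder’ idea can even be made concrete with a little code. Here is a minimal Python sketch (my own illustration, not a thermodynamics computation) estimating the Shannon entropy of a signal’s amplitude distribution – noise, the most ‘disordered’ signal, scores higher than a pure tone:

```python
import math
import random

def shannon_entropy(samples, bins=16):
    """Estimate the Shannon entropy (in bits) of a signal's amplitude histogram."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for s in samples:
        counts[min(int((s - lo) / width), bins - 1)] += 1
    probs = [c / len(samples) for c in counts if c]
    return -sum(p * math.log2(p) for p in probs)

random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(10_000)]                    # disorder
tone = [math.sin(2 * math.pi * 440 * n / 48_000) for n in range(10_000)]  # order

print(shannon_entropy(noise) > shannon_entropy(tone))  # True
```

The same measure applied over short windows could track how a recording’s ‘disorder’ evolves over time.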
I like to marry the concepts of entropy and wabi-sabi through the ideas of the uncontrollability of nature, the construction and deconstruction, and the evolution and devolution of the universe, as well as the idea that all things are in continuous flow (the opposite of stillness).
While wabi-sabi appreciates the patterns in nature, the instability, the overlooked details of natural decay and transformation, entropy reinforces those ideas by affirming the constant movement of things and greater natural forces, and supports those conceptual and metaphysical views by describing perceptible physical phenomena.
This can translate into recording or design approaches in many ways. It concerns just as much what you choose to record as how you record it and what you are going to do with it.
Some recurring themes include the perception of patterns, the passage of time, the transformation of matter, and the exploitation of unplanned behaviours, the unexpected and randomness. There is no exhaustive list of course, and this exploration is not only about the subject but also very much about the process itself.
During my Master’s degree at the University of Edinburgh, I participated in creating an audiovisual installation exploring the concept of entropy, along with 4 other extremely talented audio and visual artists (Juanjo Ripalda, Gaby Yanez, Euan McKenzie and Adam Howard).
In this experimental piece, which roughly consisted of a self-regulated feedback system played back through rusted metal plates (with audio transducers) and reacting to the space, we communicated the idea of entropy through 6 different levels, as follows:
Level 1 – The rust
We ‘prepared’ 4 large metal plates, making them rust over a period of time, and documented the process through video recording. This is in line with natural decaying processes.
Level 2 – The curves
We modeled formal oxidation measurement curves and implemented them in our installation in such a way that the audio played back would follow these curves in a cyclical nature, thus highlighting the perception of decay over time.
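We never published the actual curves, but the mapping idea can be sketched in a few lines of Python. The parabolic oxidation law and the cycle length below are illustrative assumptions, not the installation’s measured data:

```python
import math

def oxidation_curve(t, k=1.0):
    """Parabolic oxidation law: oxide growth proportional to sqrt(time).
    (An illustrative model - the installation used measured curves.)"""
    return k * math.sqrt(t)

def cyclic_gain(t_seconds, cycle=60.0):
    """Wrap the oxidation curve onto a repeating cycle, normalised to 0..1:
    the gain rises steeply, levels off, then the cycle restarts, making
    the perception of decay over time audible."""
    phase = t_seconds % cycle
    return oxidation_curve(phase) / oxidation_curve(cycle)

print(round(cyclic_gain(15.0), 3))  # 0.5 - steep early growth, already halfway
print(round(cyclic_gain(60.0), 3))  # 0.0 - the cycle has just restarted
```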
Level 3 – The feedback system
Inspired by composer Agostino Di Scipio, the feedback system is set up according to the notion of ‘audible ecosystems’. This concept illustrates the complexity of the relationship between sound and its surrounding environment, and how the two interact. It intends to show how any organised system, while being altered by its context and place, will ultimately function on its own and potentially lead to unexpected behaviours.
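To give a feel for the self-regulating behaviour, here is a toy numerical sketch (my own simplification, not Di Scipio’s actual design): a loop whose gain is ducked once its own level exceeds a threshold, so that where the system settles depends entirely on the ‘room’ it is fed through:

```python
def simulate_feedback(room_gain, steps=100, threshold=0.5):
    """Toy audible-ecosystem loop: the signal circulates through a 'room'
    (room_gain) and a limiter that ducks the system gain once the level
    exceeds a threshold. A faint noise floor keeps the loop seeded.
    All constants are illustrative."""
    level = 0.0
    for _ in range(steps):
        gain = threshold / level if level > threshold else 1.0
        level = level * room_gain * gain + 0.001  # 0.001 = noise floor
    return round(level, 3)

# A live room (gain > 1) settles where the limiter holds it;
# a dead room (gain < 1) decays to just above the noise floor.
print(simulate_feedback(room_gain=2.0))  # 1.001
print(simulate_feedback(room_gain=0.8))  # 0.005
```

The same system fed through different spaces settles into different behaviours – which is the point of the ‘ecosystem’ idea.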
Level 4 – Audio and visual processes
We illustrated entropy’s ‘coloration’ both visually and sonically by playing back the corresponding videos of the metal plates on 4 different screens. Both sound and video were processed in various ways in order to portray digital decay (loss of information, quality degradation, glitches), following the ‘rust curves’ evolution mentioned above.
Level 5 – The human interaction
The human agent contributes to the unpredictable nature of the installation. From the moment interaction occurs, it is impossible to predict how the system will react, change and adjust. Also, as entropy is a process rather than a state or a finality, the concept of interaction emphasizes the procedure itself rather than the result.
Level 6 – Material
Entropy as a ‘transformation’ was also communicated through the materials chosen as means of display. For instance, audio playback was amplified through mismatched and deteriorated speakers, each of them offering a differently ‘tinted’ sound, spread across the space.
As you can see, entropy, like any other idea at a conceptual level, can be explored and communicated through various means. The idea is to revisit those concepts and find different ways to present them. I find that making parallels and marrying artistic intentions (such as combining entropy with wabi-sabi and acoustic ecology) is a great way to foster creative ideas and keep subjects alive – the possibilities are just infinite.
The terms (and ideas of) acoustic ecology and soundscape are relatively new. It’s only in the 1970s that these concepts were first introduced, as part of a greater consideration concerning climate change and environmental deterioration.
R. Murray Schafer, in his book The Tuning of the World, wrote that «The soundscape of the world is changing. Modern man is beginning to inhabit a world with an acoustic environment radically different from any he has hitherto known».
His concerns about our sonic environment eventually led to Acoustic Ecology, which is today known as a discipline exploring ‘’the relationship, mediated through sound, between human beings and their environment’’.
Acoustic Ecology is part of a greater environmental sound art movement, consisting of a large body of work that connects us to the world in various ways. To give a better understanding of the field in general, I’ll weave in some of the defining characteristics of environmental sound art, of which acoustic ecology is a branching creative practice.
Sound art is ‘’ultimately an art form that is inherently diverse, constantly expanding, and conceptually elusive’’. While the purpose and meaning of environmental sound art may very well vary from one artist to another, some characteristics certainly do bind the work of these various artists together. For instance, the strategies of appropriation of structure, processes, materials and impulses derived from the environment around us.
In acoustic ecology, some of the most important points are the following:
Sound art from acoustic ecologists can come to life in various ways. From data sonification of natural processes (such as seismic activity – John Bullitt), to playing natural or human-made landmarks (Tower Music – Joseph Bertolozzi), or taking advantage of natural forces to create musical pieces (Sea Organ – Nikola Bašić), the means are many, but they have one thing in common: the environment surrounding us. The goals vary too, from raising environmental awareness, to inviting a connection with nature, to exploring the mathematical patterns that govern our existence, and so on. Needless to say, these goals are not mutually exclusive – the intentions can be many.
There is also a significant appreciation of the process itself just as much as the sonic output that results from it.
In some cases the mere description of the work’s process or structure can be pleasing even without experiencing the work sonically. The ideas themselves can be elegant and intellectually fulfilling.
This makes me think of the fascination I have for the work of experimental composer Iannis Xenakis; while I greatly appreciate his motives, his ideas, his processes and intentions, I am moderately touched by the results of his musical practice. But that doesn’t really matter, because he has inspired me with his creativity either way. To learn more about Xenakis, take a look at this previous post.
Location and context are of primary relevance in the field of environmental sound art. A strong connection to specific spaces seems to be a unifying thematic thread. ‘’It is the space that brings context to the work’’.
I like this example from Cheryl Leonard, who recorded melting ice from glaciers in Antarctica for her work Meltwater. In many ways, this work is in accordance with both the wabi-sabi philosophy and entropy, in its attention to details and the overlooked, the small as opposed to (or as part of) the grandiose, natural degradation processes and changes of state.
«One of the allures of making music out of natural materials and environmental field recordings is delving into the minutia of the very quiet». (Cheryl Leonard)
There are plenty of examples of site-specific installations. Some invite us to reflect on spaces and our relationship with them, such as the piece Bivvy Broadcasts by Dawn Scarfe, where a real-time audio signal was streamed between a remote forest location and people located in urban areas. This was intended to ‘’reflect on the differences between urban and rural ambience, and to explore the imagined space of the forest as much as the physical reality’’.
Sometimes it’s about finding the musical elements within a natural environment and using them as a basis for creativity, inviting people to find connections with their surroundings and reflect on common interactions with them (Sounding Underground – Ximena Alarcón, David Rothenberg, Matthew Burtner).
Acoustic ecology is known to explore themes centered on nature and the various socio-political topics and questions surrounding it. Environmental awareness is certainly a common thematic thread, but it would be a misjudgment to reduce the discipline to this particular angle alone.
Acoustic ecology is drawn to the principles of design and structure inherent in nature, which present orderliness, stability and balance as well as chaos and randomness. Some elements that can be explored and observed in this complex tapestry are the mathematical beauty of reiterated forms, the power of repetition, and the forces of physical energy.
For me, all three concepts of acoustic ecology, wabi-sabi and entropy come together to provide a sense of direction and intention. My artistic statement is in accordance with the wabi-sabi philosophy, inclusive of the idea of entropy, and accomplished through the means of acoustic ecology.
I’d like to conclude this article by describing a beautiful sound installation by field recordist and sound artist Chris Watson, one I had the privilege to experience: Hrafn: Conversation with Odin (October 2014).
The piece was a multi-channel, spatialised sound installation playing back recordings of thousands of ravens returning to roost. The speakers were hidden among the trees, and the audience was taken to the location at twilight to experience the event of the birds arriving and commencing their conversations, culminating in a full raven roost overhead.
Through an intensely immersive perspective, I found that Chris Watson’s work portrayed the metamorphosis of a space, the transformation of an ecosystem, and offered a memory of it. Recorded and natural sounds blended together to form one beautiful audible painting. Many aspects of this installation fell in line with the wabi-sabi philosophy.
This installation beautifully incorporates ideas from all 3 concepts I described in this post, and has inspired me to develop similar projects and search for symbiosis between sound art and environment.
I hope this was insightful and inspiring!
I recently published an article on The Sound Architect website about what it means to be a ‘one-person’ audio department in a videogame studio.
This is based on my experience working at DIGIT Game Studios and is meant to give some insight into the game audio workflow, and provide an overview of the responsibilities, tasks, challenges and rewards surrounding such a role.
You can find the article here.
On 8-9 October, the PLAY Expo event took place in Manchester, where The Sound Architect organised and put together a full 2 days of presentations and interviews. I had the opportunity to attend and listen to the valuable insights shared by the guest speakers and interviewees, as well as discuss and socialise with fellow game audio professionals. It was overall a successful event and a lovely weekend, allowing passionate people to get together and exchange knowledge. Here is my brief summary of the event.
Saturday 8 October
11:00 Presentation: Ash Read – Eve: Valkyrie
We were first enlightened on some of the ways VR audio differs from ‘2D’ or ‘TV’ audio, and briefly on what the ‘sonic mission’ consists of in this context. In Eve: Valkyrie specifically – a chaotic space battle environment where a lot is happening, constantly, everywhere – audio plays a key role: in a visually saturated environment, it is a great way to maintain focus on the important gameplay elements and help the player remain alert and immersed.
What is also different in VR audio is the greater degree of listener movement: techniques need to be developed to implement audio in a context where the listener’s head doesn’t stay still. One of these techniques involves HRTFs (Head-Related Transfer Functions).
Put shortly, HRTFs help the listener locate where a sound is coming from and convey its 3D position, but they also more accurately portray the subtle modifications to a sound as it travels.
For instance, the distance and positioning of an object is not only expressed sonically through attenuation, but also by introducing the sound reflections of a specific environment, and by creating a sense of elevation.
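Real HRTFs are measured impulse responses convolved with the signal, but two of the cues they encode – interaural time and level differences – can be approximated very roughly in code. The head radius and the spherical-head (Woodworth) formula below are standard textbook approximations, not what Valkyrie actually uses:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air
HEAD_RADIUS = 0.0875    # m, average human head

def interaural_cues(azimuth_deg):
    """Approximate the interaural time difference (ITD, in seconds) and a
    crude level difference (as a left/right gain pair) for a source azimuth:
    0 deg = straight ahead, 90 deg = hard right. A real HRTF instead
    convolves the signal with measured per-ear impulse responses."""
    az = math.radians(azimuth_deg)
    # Woodworth's spherical-head approximation for the time difference
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))
    # Constant-power pan standing in for the level difference
    left = math.cos((az + math.pi / 2) / 2)
    right = math.sin((az + math.pi / 2) / 2)
    return itd, (left, right)

itd, (l, r) = interaural_cues(90)  # source hard right
print(f"ITD = {itd * 1e6:.0f} us, gains L={l:.2f} R={r:.2f}")
```

Even this crude model lands in the right ballpark: a source at 90° arrives at the far ear roughly 0.65 ms late, which is about the real-world maximum.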
We then learned how audio in VR may contribute to reducing the motion sickness often associated with VR, by helping to compensate for the feeling of disconnect that is partly responsible for it.
Since VR usually means playing with headphones on, the Valkyrie audio team decided to include some customisable audio options for the player, such as an audio enhancement slider, which helps bring focus onto important sounds.
The sound design of Valkyrie is meant to be rugged, to convey the raw energy of the game, and to be strong in detail. With that in mind, the team is constantly aiming to improve audio along with the game updates. For instance, they plan to breathe more life into the cockpit by focusing on its resonance and enhancing the deterioration effects.
Ash’s presentation was concluded with a playback of their recently released launch trailer for PS VR, the audio for which was beautifully done by Sweet Justice Sound.
You can watch the trailer here: https://www.youtube.com/watch?v=AZNff-of63U
12:00 Presentation: Simon Gumbleton – PlayStation VR Worlds
Technical sound designer Simon Gumbleton then followed to tell us about the audio design and implementation in Sony’s PlayStation VR Worlds.
The VR Worlds game is rather like a collection of bespoke VR experiences, each presenting a different approach to player experience. Over the course of the development of those various experiences, the dev and audio teams have experimented, learned, and shaped their approaches, while exploring uncharted territories and encountering new challenges.
1st experience: Ocean Descent
Being the first experience they worked on, it laid the foundation of their work and allowed for experimentation and learning. The audio team developed techniques such as the Focus System, where the listener starts to hear accentuated details of what’s in focus after it has been in focus for a short amount of time. You could see it as a game audio implementation of the cocktail party effect.
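The talk didn’t go into implementation detail, but the described behaviour can be sketched as a dwell timer driving a gain ramp. The timing and gain values here are made up for illustration:

```python
class FocusSystem:
    """Fade in a 'detail' layer once an object has stayed in focus long
    enough - a rough sketch of the behaviour described in the talk.
    DWELL and RAMP are illustrative values, not from the game."""
    DWELL = 1.5  # seconds in focus before detail starts fading in
    RAMP = 2.0   # seconds to reach full detail gain

    def __init__(self):
        self.focus_time = 0.0

    def update(self, dt, in_focus):
        """Advance by dt seconds; return the detail-layer gain (0..1)."""
        self.focus_time = self.focus_time + dt if in_focus else 0.0
        ramp = (self.focus_time - self.DWELL) / self.RAMP
        return min(max(ramp, 0.0), 1.0)

fs = FocusSystem()
print(fs.update(1.0, True))   # 0.0 - still within dwell time
print(fs.update(1.5, True))   # 0.5 - 2.5 s in focus, halfway up the ramp
print(fs.update(0.1, False))  # 0.0 - focus lost, gain resets
```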
They also developed a technique concerning the player breathing, where they introduce breathing sounds at first, and eventually pull them out once the player has acclimated to the environment, where they become somewhat subconscious.
Similarly, they explored ways to implement avatar sounds, and found that while these usually reinforce the player’s presence in the world, in VR there is a fine line between reinforcing and distracting. In short, the sounds heard need to be reflected by movements actually seen in game. This means you would only hear avatar sounds related to head movements, which have a direct impact on visuals, as opposed to body movements, which you cannot see.
2nd experience: The London Heist
In this experience, there was more opportunity to experiment with interactive objects – to design believable audio feedback and to improve the tactile one-to-one interactions.
In order to do so, they implemented the sound of every interactable object in multiple layers. For instance, a drawer opening isn’t recorded as one sound and then played back whenever the drawer is opened in game. The drawer can be interacted with in many ways, so its sounds are integrated with a combination of parameters and layers in order to play back an accurate sonic response for the type of movement generated by the player’s actions.
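As a sketch of that layered idea (the layer split and numbers are hypothetical, not VR Worlds’ actual implementation), a normalised drag velocity could select and scale layers rather than trigger one baked sound:

```python
def drawer_layer_gains(velocity):
    """Map a normalised drag velocity (0..1) to gains for three layers:
    a slow wooden slide, a mid-speed rattle, and a hard slam.
    Hypothetical split - the talk described the idea, not the numbers."""
    v = min(max(velocity, 0.0), 1.0)
    slide = 1.0 - v                            # dominant at gentle pulls
    rattle = max(1.0 - abs(v - 0.5) * 2.0, 0)  # peaks at medium speed
    slam = max(v - 0.7, 0.0) / 0.3             # only kicks in when yanked
    return {"slide": round(slide, 2),
            "rattle": round(rattle, 2),
            "slam": round(slam, 2)}

print(drawer_layer_gains(0.1))  # gentle pull: mostly the slide layer
print(drawer_layer_gains(0.9))  # yank: slam layer engaged
```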
Another example is the cigar smoking being driven by the player’s breathing. The microphone input communicates with the game and drives the interaction with the cigar for optimal immersive experience.
Detailed character foley also proves to help bring characters to life. Every detail is captured and realised, down to counting the number of rings on a character’s hand and implementing their movement sounds accordingly.
Dynamic reverb gives the player information about the space and the sounds generated in it. A detailed and informative environment is created with the help of physically based reflection tails, as well as material-dependent filters, all processed at run time. It’s all about making the environment feel more believable.
3rd experience: Scavengers Odyssey
This experience was developed later, so they were able to take their learnings from the previous experiences and apply them, and even push the limits further.
For instance, since this experience takes place in space and there is no real ‘room’ to generate a detailed reflection-based reverb, they focused on implementing the sound as if it were heard through the cockpit.
Simon also emphasized how important detail is: in VR, the player will subconsciously have very high expectations of detail. This is achieved through lots of layering and many discrete audio sources within the world.
Such detail inevitably brings tech challenges in relation to the performance of the audio engine, which will require a lot of optimisation work.
The ambiences are implemented fully dynamically: textures are created without any loops and are constantly evolving in game.
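One way to build such loop-free textures – again a sketch of the general technique, not Scavengers Odyssey’s actual system – is to scatter one-shot ‘grains’ at irregular times, levels and pitches:

```python
import random

def schedule_ambience(duration, grains, seed=42):
    """Sketch of a loop-free ambience: rather than looping one file,
    scatter short one-shot 'grains' at irregular times, levels and
    pitches so the texture never repeats exactly. Grain names and
    value ranges are placeholders."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < duration:
        events.append({
            "time": round(t, 2),
            "grain": rng.choice(grains),
            "gain": round(rng.uniform(0.4, 1.0), 2),    # varied level
            "pitch": round(rng.uniform(-2.0, 2.0), 2),  # semitone offset
        })
        t += rng.uniform(0.5, 3.0)  # irregular spacing defeats loop fatigue
    return events

events = schedule_ambience(20.0, ["wind_gust", "metal_groan", "debris"])
print(len(events), "grains scheduled over 20 s")
```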
In terms of spatialisation, they tied all the SFX to the corresponding VFX within the world for optimal sync and highly accurate positioning.
They also emphasized important transitions in the environment by adding special transition emitters in critical places.
As for the music, they experimented with its positioning – whether it should be placed inside the world or not – and mostly proceeded with a quad-array implementation in passive environments.
They did have some opportunity to experiment with the unique VR ability to look up and down; for instance, in Ocean Descent they used adaptive music to accentuate the feeling of darkness and depth versus brightness and light when looking down and up in the water.
The interactive menu hub is an experience in itself. It is the first space you are launched into when starting the game, and it sets up expectations for the rest. The team needed to build a sense of immersion there already, putting the same level of detail into the hub as anywhere else in order to maintain immersion when transitioning from one experience to another.
Finally, this collection of experiences needed to remain coherent overall and maintain smoothness through every transition. This was accomplished through rigorous mixing, and by establishing clear loudness and dynamics standards applied throughout the entire game.
PlayStation VR Worlds is due to be released on 13 October 2016; you can watch the trailer here: https://www.youtube.com/watch?v=yFnciHpEOMI
13:00 Interview: Voice Actor, Alix Wilton Regan – Dragon Age, Forza, Mass Effect, LBP3
Alix Wilton Regan told us about voice acting in video games in the form of an interview, led by Sam Hughes.
Some thoughts were shared about career paths and about working in games versus television, along with some tips for starting actors.
Alix Wilton Regan has started a fundraising campaign, a charitable initiative to help refugees in Calais, check it out!
14:00 Interview: Composer, David Housden – Thomas Was Alone
Another interview followed with David Housden, composer on Thomas Was Alone and Volume. The interview was held in a similar way, starting with some thoughts on career progression, following with some details about his work on past and current titles, and concluding with advice on freelancing.
15:00 Presentation: Composer & Sound Designer, Matt Griffin – Unbox
Composer Matt Griffin then presented how the sound design and music for the game Unbox were implemented using FMOD.
One of the main audio goals for this entertaining game was to make it interactive and fun. In order to do so, Matt found ways to make elements such as the menu music generative and sometimes reactive to timing.
We were shown the FMOD project and its structure to illustrate this dynamic implementation. For the menu music, the use of transitions, quantizations and multi sound objects was key.
For the main world music, each NPC has its own layer of music, linked to a distance parameter. Other techniques were used to make the music dynamic, such as a ‘challenge’ music giving the player feedback on progression and timing, and multiplayer music with a 30-second countdown at double tempo.
In terms of sound design, the ‘unbox’ sound presented a challenge, as it is played very frequently throughout the game. To keep it from becoming too repetitive, it was implemented using multiple layers of multi sound objects, along with pitch randomisation on its various components and a parameter tracking how many ‘unboxes’ have been heard so far.
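FMOD’s multi sound objects handle this natively, but the anti-repetition logic is easy to sketch in plain code. The variant count and pitch range below are illustrative:

```python
import random

class UnboxSound:
    """Avoid repetition for a frequently-triggered sound: cycle through
    recorded variants (round-robin, reshuffled each pass) and apply a
    small random pitch offset - similar in spirit to what FMOD's multi
    sound objects and randomisation do. Numbers are illustrative."""

    def __init__(self, variants=4, pitch_spread_semitones=1.0, seed=None):
        self.rng = random.Random(seed)
        self.variants = list(range(variants))
        self.spread = pitch_spread_semitones
        self.queue = []
        self.play_count = 0  # like the talk's 'how many unboxes so far'

    def trigger(self):
        if not self.queue:  # refill and reshuffle the variant pool
            self.queue = self.variants[:]
            self.rng.shuffle(self.queue)
        variant = self.queue.pop()
        pitch = self.rng.uniform(-self.spread, self.spread)
        self.play_count += 1
        return variant, round(pitch, 2)

s = UnboxSound(seed=1)
first_pass = [s.trigger()[0] for _ in range(4)]
print(sorted(first_pass))  # [0, 1, 2, 3] - every variant heard before any repeat
```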
An extensive amount of work also went into the box impact sounds on various surfaces, taking velocity into account.
For the character sounds, a sort of indecipherable blabber, individual syllables were recorded and then assembled together in game using FMOD’s Scatterer sound object.
16:00 Interactive Interview: Martin Stig Andersen – Limbo
Similarly to the previous interviews, some questions relating to career paths were answered first: Martin started in instrumental composition, shifted towards electroacoustic composition (musique concrète), and later moved into experimental short films.
His work often speaks of realism and abstraction, where sound design and music combine to form one holistic soundscape.
Martin explained how he was able to improve his work on audio for Inside compared to Limbo as he was brought onto the project at a much earlier stage, and was able to tackle larger tech issues, such as the ‘death-respawn’ sequence.
More info on the death-respawn sequence in this video: http://www.gdcvault.com/play/1023731/A-Game-That-Listens-The
Some more details were provided about the audio implementation for Inside, for instance the way the sound of the shock wave is filtered depending on the player’s current cover status, or how audio is used to communicate to the player how well they are progressing through a puzzle.
We also learned more about the mysterious recording techniques used for Inside involving a human skull and audio transducers.
17:00 Audio Panel: Adam Hay, David Housden, Matt Griffin
The first day ended with a panel featuring the above participants, sharing some thoughts on game audio in general, freelancing, and what comes next.
Sunday 9 October
11:00 Interview & Gameplay: Martin Stig Andersen – Limbo
The day started by inviting Martin Stig Andersen back to the stage, where the interview covered roughly the same ground as the previous day.
12:00 Interview: Nathan McCree, Composer & Audio Designer
At midday, the audience swelled as the composer of the first three Tomb Raider games was interviewed by Sam Hughes.
Some questions about career progression were followed by some words about the score and how Nathan came to compose a melody that he felt really represented the character.
The composer also announced The Tomb Raider Suite, a celebration of Tomb Raider’s 20th anniversary through music, in which his work will be played by a live orchestra before the end of the year.
More details here:
13:00 Presentation: Voice Actor, Jay Britton – Fragments of Him, Strife
Next, voice actor Jay Britton gave us a lively presentation on the work of a voice actor in video games, involving a demo of a recording session. He gave us some advice on how to get started as a voice actor in games, including:
He followed by giving advice on how to come up with new voice characters using your own voice, along with some convincing demonstrations.
14:00 Interview: Audio Designer, Adam Hay – Everybody’s Gone To The Rapture
He mentioned how the narrative journey is of crucial importance in both these games, and how the sound helps the player progress through them.
16:00 Audio Panel: Simon Gumbleton, Ash Read, David Housden
Finally, the weekend ended (before giving the stage to live musicians) with a VR audio panel, giving us some additional insight on the challenges surrounding VR audio, such as the processing power involved in sound spatialisation, and how everything has to be thought through in a slightly different way than usual.
Voilà, a very busy weekend full of interesting insights and advice. A massive thanks to The Sound Architect crew for putting this together, hopefully this can take place again next year! 🙂
The second article of my two part series on making the most of Reaper for Sound Design is now up on the A Sound Effect Blog!
You can access it from here.
While the first part is about getting set up and started using Reaper, this second part reveals some useful workflow tips and tricks and reviews some of Reaper’s unique features.
If you missed the first part, here it is!
I was recently in Iceland and, since I brought my Sony PCM m10 with me, I took the opportunity to capture some of its sonic atmospheres. Here are the results.
As with my recordings from Scotland, only a small amount of low-frequency content has been removed to get rid of some wind noise. No other processing has been applied.
I have recently contributed to the A Sound Effect Blog with a two-part series on how to make the most of Reaper as a sound design tool.
The first article looks into getting started using Reaper and the initial set up. You can find it here.
The second will be up in about a week’s time and will cover more of the workflow and some good habits to be taken from the start. Keep an eye out!
I just spent a long weekend in Scotland, on the Isle of Arran, for some camping and hiking and enjoying the beautiful July weather.
I took the opportunity to do some field recordings – here are some of the results 🙂
A very small amount of filtering has been applied to those recordings to remove some low frequencies (Scotland can get pretty windy), but other than that no processing has been done.
About these last two: both were recorded at roughly the same location, the first while facing the waves crashing on the beach, the second while facing the opposite direction. I think the second is interesting if you want a nice beach-waves background ambience without really focusing on them.
This post is not a tutorial on loudness and metering in game audio. It is rather about sharing my findings on something I am currently researching, in the hope that it can help those of you in a similar position to mine. I will definitely revisit this post at a later stage of my current project to share my experiences and conclusions.
Since this is a work in progress, or rather a learning in progress, feel free to comment and let me know about any better/other ways to see or do these things.
I’ve been working on my current project for a few months now and, although I had wondered about loudness and metering earlier in the process, the time has only recently come for me to make decisions on the matter, and hence to look deeper into it.
First, I found this amazing resource which helped me understand more about all of it very quickly. This article from Stephen Schappler is a real gem and I strongly recommend you have a read. I will mention some of the things he shared in his article here, as well as develop according to my own experience.
There are currently no standards set for loudness measurements in game audio, resulting in wide variations and discrepancies in loudness from one game to another. The differences in gaming set ups and devices also present a challenge in terms of developing those standards.
One way to start looking into this is to refer to the BS.1770 recommendations to measure loudness and true peak audio level.
To put it simply, these algorithms measure Loudness Level at three different time scales: integrated (over the whole programme), short-term (a 3-second sliding window) and momentary (a 400-millisecond sliding window).
What these mean for game audio will probably differ from what they mean in TV: there is no fixed programme length in interactive media, and 3 seconds or 0.4 seconds may prove too short to take an accurate measurement, again owing to the dynamic and interactive nature of the medium.
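To make the measurement itself concrete, here is a deliberately simplified sketch of a BS.1770-style level computation over one block of samples. A compliant meter first applies a K-weighting pre-filter and, for integrated loudness, a gating scheme; both are omitted here, so this is only an approximation of the idea, not a real meter.

```python
import math

def block_loudness(samples):
    """Approximate loudness of one measurement block, LUFS-style.

    BS.1770 takes the mean square of (K-weighted) samples and maps it
    to loudness as -0.691 + 10*log10(mean_square). K-weighting and
    gating are omitted here for brevity.
    """
    mean_square = sum(s * s for s in samples) / len(samples)
    return -0.691 + 10.0 * math.log10(mean_square)

# A momentary measurement would use a 400 ms block, a short-term
# measurement a 3 s block, and integrated loudness would combine
# gated blocks over the whole programme.
```

For a full-scale sine this yields roughly -3.7, close to the -3.01 dB mean-square value of a sine plus the -0.691 offset defined in the recommendation.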
This is what Gary Taylor recommended about adapting the BS.1770 measurement terms to game audio (in this interview):
We recommend that teams measure their titles for a minimum of 30 minutes, with no maximum, and that the parts of any titles measured should be a representative cross-section of all different parts of the title, in terms of gameplay.
As BS.1770 also indicates, it would be wise to consider the Loudness Range (LRA) and the True Peak Level. In order to do so, you would need good tools (accurate Loudness Meter) and a good environment (calibrated and controlled).
In terms of numbers, let’s look at the R128 and A/85 broadcast recommendations, which we can assume serve a similar purpose for console and PC games, where your listening environment and set-up are the same as or similar to a TV set-up.
Those recommendations are:
However, these numbers may not apply to the mobile games industry, and different terms would need to be discussed in order to set standard levels for portable devices. Some work has already been done on that matter by Sony’s ASWG, who are among the first (if not the first) to consider standardising the game audio loudness metering process and providing recommendations. Here are their internal loudness recommendations for their 1st party titles:
Gary Taylor mentioned in his interview that studios such as Media Molecule and Rockstar are already conforming to Sony’s specs, both in terms of average loudness and dynamic range. This seems to indicate that progress is being slowly but surely made in terms of game audio loudness standardisation.
The recommended process is to send the audio out from your game directly into your DAW and measure loudness with a specialised plugin. Make sure your outputs and inputs are calibrated and that the signal remains 1:1 across the chain.
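One simple way to sanity-check that the chain really is 1:1 (an illustrative approach of my own, not something prescribed by the sources above): play a test tone of known peak level through the game-to-DAW loop, record it, and compare peaks.

```python
import math

def peak_dbfs(samples):
    """Sample peak of a block, in dBFS (0 dBFS = full scale)."""
    peak = max(abs(s) for s in samples)
    return 20.0 * math.log10(peak)

def chain_is_unity(reference, recorded, tolerance_db=0.1):
    """True if the recorded loopback's peak matches the reference
    within `tolerance_db`, i.e. the chain applies no hidden gain."""
    return abs(peak_dbfs(reference) - peak_dbfs(recorded)) <= tolerance_db
```

The same check can of course be done entirely inside the DAW by eye, using a peak meter on the recorded loopback.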
Gary Taylor’s plugin recommendations to measure loudness:
As far as analysis tools go, I have personally yet to find anything close to the Flux Pure Analyzer application for measuring loudness, spectral analysis, true peak, dynamic range and other visualisations. For loudness metering more generally, Dolby Media Meter 2, Nugen VisLM, Waves WLM, and Steinberg SLM-128 (free to Nuendo and Cubase users) are all very good.
I have yet to experiment with those plugins and decide on my favourite tools. I happen to have Waves WLM, so I will give that a try first, and plan to compare it with the demo version of Nugen VisLM to see if I want to buy it. I will update this article with feedback from my experience when ready.
In Fabric, there are Volume Meter and Loudness Meter Components which allow you to meter one specific Group Component. You could for instance apply those to a Master Group Component to monitor signals of the overall game.
However, even when using these tools within the audio engine, I think it is worth measuring the direct output of your game from your DAW with the help of a metering plugin. I see this as a way to double-check: I’m a big fan of making sure everything works as it is meant to, and listening to the absolute final result of the product seems like a valid way to do this.
Finally, I unfortunately don’t have the luxury of working in a fully calibrated and controlled studio environment. If you are in a similar position, I’d strongly recommend considering renting a studio space towards the final stages of the game’s production to perform some more in-depth mixing and metering.
I hope this was useful even though this info is based mostly on research rather than pure experience. I will most definitely revisit this topic once my remaining questions are answered 🙂
If you follow me on twitter, you will have seen a few recent tweets about my latest experiments with Sci Fi bleeps and bloops.
I created a MaxMSP patch that allows me to process sound files in such a way that the original file is nearly unidentifiable, and the results sound nicely tech and Sci Fi.
Over time, I created a few simple individual patches performing this sort of processing:
I decided to assemble those patches together in such a way that I could play with multiple parameters and multiple sounds at the same time.
In order to do so, I mapped the various values and parameters of my patch to a MIDI controller [KORG nanoKONTROL2], and selected a few sounds I know work well with the different parts of the patch, to be chosen from a dropdown menu.
This is what the patch looks like:
All the different ‘instruments’ are contained in subpatches. They are all quite simple but create interestingly complex results when put together.
Organised nicely in Presentation Mode, I can interact with the different values with my midi controller:
The mapping system:
I can then record the result to a wav file on disk, which I am free to edit in Reaper afterwards, selecting the nice bits and making cool sound effects from these original sources.
Record to file:
This process is potentially endless, as I can then feed the processed sound back into the patch and see what comes out of it.
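The feedback idea can be sketched outside Max too (illustrative only; the actual patch works on live audio signals, and this particular ring-mod-plus-downsampling chain is just one hypothetical example of its ‘instruments’): run a buffer through a simple mangling process, then feed the output back in as the next input.

```python
import math

def mangle(samples, mod_freq=440.0, sr=48000, hold=8):
    """One pass of simple sci-fi mangling: ring modulation by a sine,
    then crude sample-rate reduction (sample-and-hold every `hold` samples)."""
    out = []
    held = 0.0
    for n, s in enumerate(samples):
        ringed = s * math.sin(2 * math.pi * mod_freq * n / sr)
        if n % hold == 0:
            held = ringed
        out.append(held)
    return out

def iterate(samples, passes=3):
    """Feed the output back into the process, as with the patch."""
    for _ in range(passes):
        samples = mangle(samples)
    return samples
```

Each extra pass pushes the result further from the identifiable source, which is exactly what makes the feedback loop fun to explore.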
Here is a little demo of the patch and its ‘instruments’:
And some bleeps and bloops I made using this patch:
You can visit the Experiments page to hear more tracks 🙂