What it means to be a ‘one-person’ audio department

 


I recently published an article on The Sound Architect website about what it means to be a ‘one-person’ audio department in a videogame studio.

This is based on my experience working at DIGIT Game Studios and is meant to give some insight into the game audio workflow, as well as an overview of the responsibilities, tasks, challenges and rewards surrounding such a role.

You can find the article here.

Enjoy! 🙂


PLAY Expo Manchester

The PLAY Expo event took place in Manchester on 8-9 October, where The Sound Architect organised and put together two full days of presentations and interviews. I had the opportunity to attend and listen to the valuable insights shared by the guest speakers and interviewees, as well as to discuss and socialise with fellow game audio professionals. It was overall a successful event and a lovely weekend, allowing passionate people to get together and exchange knowledge. Here is my brief summary of the event.

Saturday 8 October

11:00 Presentation: Ash Read – Eve: Valkyrie


The weekend started with Ash Read, sound designer at CCP working on Eve: Valkyrie, telling us about his experience with VR audio.

We were first shown some of the ways in which VR audio differs from ‘2D’ or ‘TV’ audio, and briefly what the ‘sonic mission’ consists of in this context. In Eve: Valkyrie specifically, a chaotic space battle environment where a lot is happening, constantly and everywhere, the role of audio includes:

  • Keeping the pilot (player) informed
  • Keeping the pilot (player) immersed

In a visually saturated environment, audio is a great way to maintain focus on the important gameplay elements and help the player remain alert and immersed.

Another difference in VR audio is the greater degree of listener movement: techniques need to be developed to implement audio in a context where the listener’s head doesn’t stay still. One of these techniques involves HRTFs (Head Related Transfer Functions).

Put shortly, HRTFs help the listener locate where a sound is coming from and convey detailed 3D positioning, but they also portray more accurately the subtle modifications a sound undergoes as it travels.

For instance, the distance and positioning of an object are not only expressed sonically through attenuation, but also by introducing the sound reflections of a specific environment and by creating a sense of elevation.
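To make that concrete, here is a minimal offline sketch of the underlying idea (my own illustration, not how the Eve: Valkyrie engine does it): the dry sound is convolved with a left/right pair of head related impulse responses (HRIRs) measured for a given direction. All the file names are placeholders.

```python
# Minimal offline HRTF rendering sketch (illustrative only).
# Assumes a mono source and an HRIR pair of equal length measured for one
# direction (e.g. 30 degrees to the right); all file names are placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("laser_blast_mono.wav")
hrir_left, _ = sf.read("hrir_az30_el0_left.wav")
hrir_right, _ = sf.read("hrir_az30_el0_right.wav")

# Convolving with each ear's impulse response bakes in the interaural time
# and level differences plus the spectral cues the brain uses to localise.
left = fftconvolve(dry, hrir_left)
right = fftconvolve(dry, hrir_right)

binaural = np.column_stack([left, right])
binaural /= np.max(np.abs(binaural))  # simple peak normalisation
sf.write("laser_blast_binaural.wav", binaural, sr)
```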

We then learned how audio in VR may contribute to reducing the motion sickness often associated with it, by helping the visuals compensate for the feeling of disconnect that is partly responsible for that sickness.

Since VR usually means playing with headphones on, the Valkyrie audio team decided to include some customisable audio options for the player, such as an audio enhancement slider, which helps bring focus onto important sounds.

The sound design of Valkyrie is conceived to be rugged, to convey the raw energy of the game, and to be rich in detail. With that in mind, the team is constantly aiming to improve the audio alongside the game updates. For instance, they plan to breathe more life into the cockpit by focusing on its resonance and enhancing the deterioration effects.

Ash’s presentation concluded with a playback of the recently released PS VR launch trailer, the audio for which was beautifully done by Sweet Justice Sound.

You can watch the trailer here: https://www.youtube.com/watch?v=AZNff-of63U

12:00 Presentation: Simon Gumbleton – PlayStation VR Worlds


Technical sound designer Simon Gumbleton then followed to tell us about the audio design and implementation in Sony’s PlayStation VR Worlds.

VR Worlds is rather like a collection of bespoke VR experiences, each presenting a different approach to the player experience. Over the course of developing those experiences, the dev and audio teams experimented, learned, and shaped their approaches, exploring uncharted territory and encountering new challenges.

1st experience: Ocean Descent

Being the first experience they worked on, it laid the foundations of their work and allowed for experimentation and learning. The audio team developed techniques such as the Focus System, where the listener starts to hear accentuated details of whatever has been in focus for a short amount of time. You could see it as a game audio implementation of the cocktail party effect.

They also developed a technique concerning the player’s breathing, where breathing sounds are introduced at first and gradually pulled out once the player has acclimatised to the environment and they have become somewhat subconscious.

Similarly, they explored ways to implement avatar sounds, and found that, while these usually reinforce the player’s presence in the world, in VR there is a fine line between them being reinforcing and being distracting. In short, the sounds heard need to be reflected by movements actually seen in game. This means you would only hear avatar sounds related to head movements, which have a direct impact on the visuals, as opposed to body movements, which you cannot see.

2nd experience: The London Heist

In this experience there was more opportunity to experiment with interactive objects: to design believable audio feedback and to improve the tactile one-to-one interactions.

In order to do so, they implemented the sound of every interactable object in multiple layers. For instance, a drawer opening won’t be recorded as one sound and then simply played back when the drawer is opened in game. The drawer can be interacted with in many ways, so its sounds are integrated with a combination of parameters and layers in order to play back an accurate sonic response for the type of movement generated by the player’s actions.
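As a rough illustration of that idea (my own sketch, not the VR Worlds implementation; the layer names and thresholds are invented), the playback logic might map a movement parameter such as pull velocity onto a set of layers:

```python
# Rough sketch of parameter-driven layering for an interactable object.
# My own illustration of the idea, not the VR Worlds implementation;
# layer names and thresholds are made up.
import random

DRAWER_LAYERS = {
    "slide_slow": {"min_velocity": 0.0, "gain": 0.8},
    "slide_fast": {"min_velocity": 0.5, "gain": 1.0},
    "rattle":     {"min_velocity": 0.7, "gain": 0.6},
    "end_thud":   {"min_velocity": 0.9, "gain": 1.0},
}

def drawer_layers_for(velocity: float) -> list[tuple[str, float]]:
    """Return the (layer, gain) pairs to trigger for a given pull velocity (0..1)."""
    chosen = []
    for name, layer in DRAWER_LAYERS.items():
        if velocity >= layer["min_velocity"]:
            # A small random gain variation keeps repeated interactions from
            # sounding identical.
            gain = layer["gain"] * random.uniform(0.9, 1.0)
            chosen.append((name, gain))
    return chosen

print(drawer_layers_for(0.3))   # gentle pull: slide layer only
print(drawer_layers_for(0.95))  # hard yank: slide + rattle + thud
```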

Another example is the cigar smoking being driven by the player’s breathing: the microphone input communicates with the game and drives the interaction with the cigar for an optimally immersive experience.

Detailed foley of the characters also proves to help bring them to life. Every detail is captured and realised, down to counting the number of rings on a character’s hand and implementing their movement sounds accordingly.

Dynamic reverb gives the player information about the space and the sounds generated in it. A detailed and informative environment is created with the help of physically based reflection tails, as well as material dependent filters, all processed at run time. It’s all about making the environment feel more believable.

3rd experience: Scavengers Odyssey

This experience was developed later, so they were able to take their learnings from the previous experiences and apply them, and even push the limits further.

For instance, since this experience takes place in space and there is no real ‘room’ to generate a detailed reflection-based reverb, they focused on implementing the sound as if it were heard through the cockpit.

Simon also emphasised how important detail is: in VR, the player subconsciously has very high expectations of it. This is achieved through lots of layering and many discrete audio sources within the world.

Such detail inevitably brings tech challenges in relation to the performance of the audio engine, which will require a lot of optimisation work.

The ambiences were implemented fully dynamically: textures are created without any loops and are constantly evolving in game.

In terms of spatialisation, they tied all the SFX to the corresponding VFX within the world for optimal sync and highly accurate positioning.

They also emphasized important transitions in the environment by adding special transition emitters in critical places.

Music

As for the music, they experimented with its positioning, deciding whether it should be placed inside the world or not, and mostly settled on a quad array implementation in passive environments.

They did have some opportunity to experiment with the unique VR ability to look up and down, for instance in Ocean Descent, where adaptive music accentuates the feeling of darkness and depth when looking down in the water versus brightness and light when looking up.

The Hub

This interactive menu is an experience in itself. It is the first space you are launched into when starting the game, and it sets up the expectations for the rest. They needed to build a sense of immersion right away, and put the same level of detail into the hub as anywhere else in order to maintain immersion when transitioning from one experience to another.

Finally, this collection of experiences needed to remain coherent overall and maintain a smoothness through every transition. This was accomplished through rigorous mixing, and by establishing a clear set of rules regarding loudness and dynamics which would be applied throughout the entire game.

PlayStation VR Worlds is due to be released on 13 October 2016; you can watch the trailer here: https://www.youtube.com/watch?v=yFnciHpEOMI

13:00 Interview: Voice Actor, Alix Wilton Regan – Dragon Age, Forza, Mass Effect, LBP3


Alix Wilton Regan told us about voice acting in video games in the form of an interview, led by Sam Hughes.

Thoughts were shared about career paths and about working in games versus television, along with some tips for actors who are starting out.

Alix Wilton Regan has started a fundraising campaign, a charitable initiative to help refugees in Calais. Check it out!

https://gogetfunding.com/play-4-calais/

14:00 Interview: Composer, David Housden – Thomas Was Alone


Another interview followed with David Housden, composer on Thomas Was Alone and Volume. The interview was held in a similar way, starting with some thoughts on career progression, moving on to some details about his work on past and current titles, and concluding with advice on freelancing.

15:00 Presentation: Composer & Sound Designer, Matt Griffin – Unbox


Composer Matt Griffin then presented how the sound design and music for the game Unbox was implemented using FMOD.

One of the main audio goals for this entertaining game was to make it interactive and fun. In order to do so, Matt found ways to make the audio generative and sometimes reactive to timing, the menu music being one example.

We were shown the FMOD project and its structure to illustrate this dynamic implementation. For the menu music, the use of transitions, quantizations and multi sound objects was key.

For the main world music, each NPC has its own layer of music, linked to a distance parameter. Other techniques were used to make the music dynamic, such as a ‘challenge’ music giving the player feedback on progression and timing, and a multiplayer music with a double-tempo 30 second countdown.

In terms of sound design, the ‘unbox’ sound presented a challenge, as it is played very frequently throughout the game. To keep it from becoming too repetitive, it was implemented using multiple layers of multi sound objects, along with pitch randomisation on its various components and a parameter tracking how many ‘unboxes’ have been heard so far.
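To illustrate the general anti-repetition technique (a generic sketch of my own, not Matt’s actual FMOD setup), you can combine random variation selection, pitch randomisation, and a play counter that other parameters can track:

```python
# Generic anti-repetition sketch: random variation pick, random pitch,
# and a counter tracking how many times the sound has played.
# Illustrative only; the actual Unbox project does this inside FMOD.
import random

class UnboxSound:
    def __init__(self, variations):
        self.variations = variations
        self.play_count = 0

    def next_playback(self):
        self.play_count += 1
        clip = random.choice(self.variations)   # multi-sound style pick
        pitch = random.uniform(-2.0, 2.0)       # semitones of random detune
        # The play counter could drive a parameter, e.g. thinning out layers
        # or lowering the level slightly as the sound becomes familiar.
        gain_db = -min(self.play_count * 0.05, 3.0)
        return {"clip": clip, "pitch_st": pitch, "gain_db": gain_db}

unbox = UnboxSound(["unbox_a.wav", "unbox_b.wav", "unbox_c.wav"])
for _ in range(3):
    print(unbox.next_playback())
```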

An extensive amount of work also went into the box impact sounds on various surfaces, taking velocity into account.

For the character sounds, a sort of indecipherable blabber, individual syllables were recorded and then assembled together in game using FMOD’s Scatterer sound object.

16:00 Interactive Interview: Martin Stig Andersen – Limbo


Martin Stig Andersen, composer and sound designer on Limbo and Inside, was then interviewed by Sam Hughes.

As in the previous interviews, some questions about career paths were answered first, covering how Martin started in instrumental composition, shifted towards electroacoustic composition (musique concrète), and later moved into experimental short films.

His work often speaks of realism and abstraction, where sound design and music combine to form one holistic soundscape.

Martin explained how he was able to improve his work on audio for Inside compared to Limbo as he was brought onto the project at a much earlier stage, and was able to tackle larger tech issues, such as the ‘death-respawn’ sequence.

More info on the death-respawn sequence can be found in this video: http://www.gdcvault.com/play/1023731/A-Game-That-Listens-The

Some more details were provided about the audio implementation for Inside, for instance the way the sound of the shock wave is filtered depending on the player’s current cover status, or how audio is used to communicate to the player how well he/she is doing in the progression of the puzzle.

We also learned more about the mysterious recording techniques used for Inside involving a human skull and audio transducers.

More details here: http://www.gamasutra.com/view/news/282595/Audio_Design_Deep_Dive_Using_a_human_skull_to_create_the_sounds_of_Inside.php

17:00 Audio Panel: Adam Hay, David Housden, Matt Griffin

The first day ended with a panel featuring the above participants, sharing some thoughts on game audio in general, freelancing, and what will come next.

Sunday 9 October

11:00 Interview & Gameplay: Martin Stig Andersen – Limbo

The day started with Martin Stig Andersen being invited to the stage again; the interview covered roughly the same ground as the previous day.

12:00 Interview: Nathan McCree, Composer & Audio Designer


At midday the room filled up as the composer of the first three Tomb Raider games was interviewed by Sam Hughes.

Some questions about career progressions were followed by some words about the score and how Nathan came to compose a melody that he felt really represented the character.

The composer also announced The Tomb Raider Suite, a way to celebrate Tomb Raider’s 20th anniversary through music, in which his work will be played by a live orchestra before the end of the year.

More details here:

http://tombraider.tumblr.com/post/143228470745/pax-east-tombraider20-announcement-the-tomb

13:00 Presentation: Voice Actor, Jay Britton – Fragments of Him, Strife


Next, voice actor Jay Britton gave us a lively presentation on the work of a voice actor in video games, involving a demo of a recording session. He gave us some advice on how to get started as a voice actor in games, including:

  • There is no one single path
  • Start small, work your way up
  • Continually improve your skills
  • Network
  • Get trained in videogame performance
  • Get trained in motion capture and facial capture
  • Consider on-screen acting
  • Speak to indie devs
  • Get an agent

He followed with advice on how to come up with new voice characters using your own voice, along with some convincing demonstrations.

14:00 Interview: Audio Designer, Adam Hay – Everybody’s Gone To The Rapture


Sound designer Adam Hay was then interviewed about his work on both Dear Esther and Everybody’s Gone To The Rapture.

He mentioned how the narrative journey is of crucial importance in both these games, and how the sound helps the player progress through them.

16:00 Audio Panel: Simon Gumbleton, Ash Read, David Housden


Finally, the weekend ended (before giving the stage to live musicians) with a VR audio panel, giving us some additional insight into the challenges surrounding VR audio, such as the processing power involved in sound spatialisation, and how everything has to be thought through in a slightly different way than usual.

Voilà, a very busy weekend full of interesting insights and advice. A massive thanks to The Sound Architect crew for putting this together; hopefully it can take place again next year! 🙂

How to make the most of Reaper as a Sound Design tool – Part 2: The Workflow

The second article of my two-part series on making the most of Reaper for sound design is now up on the A Sound Effect Blog!

You can access it from here.

While the first part is about getting set up and started using Reaper, this second part reveals some useful workflow tips and tricks and reviews some of Reaper’s unique features.

If you missed the first part, here it is!

How to make the most of Reaper as a Sound Design tool – Part 1: Getting Started

I have recently contributed to the A Sound Effect Blog with a two-part article series on how to make the most of Reaper as a sound design tool.

The first article looks into getting started using Reaper and the initial set up. You can find it here.

The second will be up in about a week’s time and will cover more of the workflow and some good habits to be taken from the start. Keep an eye out!

🙂

Loudness and metering in game audio

This post is not a tutorial on loudness and metering in game audio. It is rather about sharing my findings on something I am currently researching, in the hope that it can help those of you who are in a similar position to mine. I will definitely revisit this post at a later stage of my current project to share my experiences and conclusions.

Since this is a work in progress, or rather a learning in progress, feel free to comment and let me know about any better/other ways to see or do these things.

I’ve been working on my current project for a few months now and, although I had been wondering about loudness and metering earlier in the process, the time has only recently come for me to make decisions on the matter and hence look deeper into it.

First, I found this amazing resource, which helped me understand more about all of it very quickly. This article from Stephen Schappler is a real gem and I strongly recommend you have a read. I will mention some of the things he shared in his article here, as well as expand on them based on my own experience.

This interview with Gary Taylor from Sony is equally instructive, going into further detail about the recommended specs from Sony’s Audio Standards Working Group (ASWG).

 

Industry standards (or lack thereof) and game audio solutions

There are currently no set standards for loudness measurement in game audio, resulting in wide variations and discrepancies in loudness from one game to another. The differences in gaming set-ups and devices also present a challenge in terms of developing those standards.

One way to start looking into this is to refer to the ITU-R BS.1770 recommendation for measuring loudness and true peak audio level.

To put it simply, these algorithms measure Loudness Level at three different time scales:

  • Integrated (I) – Full program length
  • Short Term (S) – 3 second window
  • Momentary (M) – 0.4 second window

What these mean for game audio will probably differ from what they mean in TV, as there is no full program length in interactive media, and windows of 3 and 0.4 seconds may prove too short to take any meaningful measurement, again owing to the dynamic and interactive nature of the medium.

This is what Gary Taylor recommended about adapting the BS.1770 measuring terms to game audio (in this interview):

We recommend that teams measure their titles for a minimum of 30 minutes, with no maximum, and that the parts of any titles measured should be a representative cross-section of all different parts of the title, in terms of gameplay.

It would also be wise to consider the Loudness Range (LRA) and the True Peak Level. In order to do so, you will need good tools (an accurate loudness meter) and a good environment (calibrated and controlled).

In terms of numbers, let’s look at the EBU R128 and ATSC A/85 broadcast recommendations, which we can assume represent a similar objective when working on console and PC games, where your playback environment and set-up are the same as (or similar to) your TV set-up.

Those recommendations are:

R128 (Europe)

  • Program level average: -23 LUFS (+/-1)
  • True peak maximum: -1 dBTP

A/85 (US)

  • Program level average: -24 LKFS (+/-2)
  • True peak maximum: -2 dBTP

 

However, these numbers may not apply to the mobile games industry, and different terms would need to be discussed in order to set standard levels for portable devices. Some work has already been done on that matter by Sony’s ASWG, who are among the first (if not the first) to consider standardising the game audio loudness metering process and providing recommendations. Here are their internal loudness recommendations for their 1st party titles:

Sony ASWG-R001

  • Average loudness for console titles: -23 LUFS (+/-2)
  • Average loudness for portable titles: -18 LUFS
  • True peak maximum: -1 dBTP

Gary Taylor mentioned in his interview that studios such as Media Molecule and Rockstar are already conforming to Sony’s specs, both in terms of average loudness and dynamic range. This seems to indicate that progress is being slowly but surely made in terms of game audio loudness standardisation.

How to proceed?

The recommended process is to send the audio out from your game directly into your DAW and measure loudness with a specialised plugin. Be careful to make sure your outputs and inputs are calibrated and that the signal remains 1:1 across the chain.

Gary Taylor’s plugin recommendations to measure loudness:

As far as analysis tools, I personally have yet to find anything close to the Flux Pure Analyzer application for measuring loudness, spectral analysis, true peak, dynamic range and other visualisation tools. As far as loudness metering generally, Dolby Media Meter 2, Nugen VizLM, Waves WLM, and Steinberg SLM-128 (free to Nuendo and Cubase users) are all very good.

I have yet to experiment with those plugins and decide on my favourite tools. I happen to have the Waves WLM so will give that a try first, and plan to compare it with the demo version of Nugen VizLM to see if I want to buy. I will update this article with feedback from my experience when ready.

Wwise and FMOD now also support BS.1770 metering, which is extremely convenient for metering directly within the audio engine.

In Fabric, there are Volume Meter and Loudness Meter Components which allow you to meter one specific Group Component. You could for instance apply those to a Master Group Component to monitor signals of the overall game.

loudnessmeter

 

However, I think that even when using these tools within the audio engine, it is worth measuring the direct output of your game from your DAW with the help of a mastering plugin. I see this as a way to ‘double-check’: I’m a big fan of making sure everything works as it is meant to, and listening to the absolute final result of the product seems like a valid way to do this.
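If you want another quick way to run that double-check on a capture of the game’s output, a small script can do a BS.1770-style integrated measurement too. A minimal sketch using the pyloudnorm library (the capture file name is a placeholder):

```python
# Minimal BS.1770-style check of a captured game output file.
# Assumes the gameplay capture was recorded at unity gain (1:1 signal chain).
import numpy as np
import soundfile as sf
import pyloudnorm as pyln   # pip install pyloudnorm

data, rate = sf.read("gameplay_capture_30min.wav")  # placeholder file name

meter = pyln.Meter(rate)                       # BS.1770 K-weighted meter
integrated = meter.integrated_loudness(data)   # LUFS over the whole capture
sample_peak = 20 * np.log10(np.max(np.abs(data)))  # sample peak, not true peak

print(f"Integrated loudness: {integrated:.1f} LUFS (R128 target: -23 +/-1)")
print(f"Sample peak: {sample_peak:.1f} dBFS (true peak needs oversampling)")
```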

Finally, I unfortunately don’t have the luxury of working in a fully calibrated and controlled studio environment. If you are in a similar position to mine, I’d strongly recommend considering renting a studio space towards the final stages of the game’s production to perform some more in-depth mixing and metering.

I hope this was useful even though this info is based mostly on research rather than pure experience. I will most definitely revisit this topic once my remaining questions are answered 🙂

 

Additional documentation:

 

Audio processing using MaxMSP

If you follow me on Twitter, you will have seen a few recent tweets about my latest experiments with sci-fi bleeps and bloops.

I created a MaxMSP patch that allows me to process sound files in such a way that the original file is nearly unidentifiable, and the results sound nicely techy and sci-fi.

My process was that, over time, I created a few simple individual patches, each performing one sort of processing:

  • Phaser+Delay
  • Time Stretcher
  • Granulator
  • Phaser+Phaseshift
  • Ring Modulator
  • Phasor+Pitch Shift

I decided to assemble those patches together in such a way that I could play with multiple parameters and multiple sounds at the same time.

In order to do so, I mapped the various values and parameters of my patch to a MIDI controller [KORG nanoKONTROL2], and selected a few sounds I know work well with the different items of the patch, to be chosen from a dropdown menu.

This is what the patch looks like:

scifipatch02.JPG

All the different ‘instruments’ are contained in subpatches. They are all quite simple but create interestingly complex results when put together.
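To give an idea of how simple these building blocks are, here is an offline Python sketch of what the ring modulator subpatch does; in Max this typically comes down to multiplying the input signal by a cycle~ oscillator. The file names and carrier frequency are just examples.

```python
# Offline sketch of the ring modulator subpatch: multiply the input signal
# by a sine oscillator. Illustrative analogue of the Max version, not the patch itself.
import numpy as np
import soundfile as sf

data, sr = sf.read("source.wav")    # placeholder input file
if data.ndim > 1:
    data = data.mean(axis=1)        # fold to mono for simplicity

carrier_hz = 437.0                  # much of the 'sci-fi' flavour lives in this choice
t = np.arange(len(data)) / sr
ringmod = data * np.sin(2 * np.pi * carrier_hz * t)

sf.write("source_ringmod.wav", ringmod.astype(np.float32), sr)
```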

The subpatches:

scifipatch03

With everything organised nicely in Presentation Mode, I can interact with the different values using my MIDI controller:

scifipatch01.JPG

The mapping system:

scifipatch04.JPG

I can then record the result to a wav file on disk, which I am free to edit in Reaper afterwards, selecting the nice bits and making cool sound effects out of these original sources.

Record to file:

scifipatch05

This process can go on almost indefinitely, as I can feed the processed sound back into the patch and see what comes out of it.

Here is a little demo of the patch and its ‘instruments’:

 

And some bleeps and bloops I made using this patch:

 

You can visit the Experiments page to hear more tracks 🙂

 

 

Game Audio Asset Naming and Organisation

 

Whether you are working on audio for an indie or a AAA title, chances are you will have to deal with a large number of assets, which will need to be carefully named and organised.

A clear terminology, classification and organisation will prove crucial not only for yourself, to find your way around your own work, but also for your team members, whether they are part of the audio department or of the front end team helping you implement your sounds into the game.

I would like to share my way of keeping things neat and organised, in the hope that it will help the less experienced among you start off on the right foot. I don’t think there is only one way to do this though, and those of you who have a bit of experience might have a system that already works, and that’s perfectly fine.

I will go over creating a Game Audio Design Document and dividing it into coherent categories and subcategories, using a terminology that makes sense, event naming practices, and folder organisation (for sound files and DAW sessions) on your computer/shared drive.

Game Audio Design Document

First, what is an Audio Design Document? In a few words, it is a massive list of all the sounds in your game. Organised according to the various sections of the game, it is where you list all the events by their name, assign priorities, update their design and implementation status, and note descriptions and comments.

The exact layout of the columns and category divisions may very well vary according to the project you are currently working on, but here is what I suggest.

COLUMN 1: Scene or Sequence in your game engine (very generic)

COLUMN 2: Category (for example the object type/space)

COLUMN 3: Subcategory (for example the action type/more specific space)

COLUMN 4: Event name (exactly as it will be used in the game engine)

COLUMN 5: Description (add more details about what this event refers to)

COLUMN 6: Notes (for instance does this sound loop?)

COLUMN 7: 3D positioning (which will affect the way the event is implemented in game)

COLUMNS 8-9-10: Priority, Design, Integration (to colour code the status)

COLUMN 11: Comments (so that comments and reviews can be noted and shared)

It would look something like this:

AudioDesignDoc-Ex.jpg

 

This document is shared among the members of the audio team so that everyone can refer to it to know about any details of any asset. You could even have a ‘name’ or ‘who?’ column to indicate who is responsible for this specific asset if working in a large audio team.

It is also shared with the art team if the art director is your line manager, and with any member of the front end team involved in audio implementation.

This list may also not be the only ‘sheet’ of the Audio Design Document (if you are working in Google Sheets, or the equivalent in another medium). A few other sheets could include one created especially for the music assets, another for bugs or requests to keep track of, another for an audio roadmap, and so on. Basically, it is a single document to which all team members can refer in order to keep up to date with the audio development process. You can equally add anything that has to do with design decisions, references, vision, etc.

While big companies may very well have their own system in place, I find this type of document especially useful when working in smaller companies where such a pipeline has not yet been established.

I’d like to point out as well that, in creating such a document, you will need to remain flexible throughout the development process, especially if you join the project at an early stage, when sections, names and terminology in the game are bound to change. Throughout those changes, it is important to update the doc regularly and stay organised, otherwise it can rapidly become quite chaotic.

Terminology

In terms of terminology, this is again something that can be done in many ways, but I’d say one of the most important things is that, once you’ve decided on a certain terminology, you remain consistent with it. Also be careful to name the events in your audio engine exactly the way you named them in your design document; otherwise you will very rapidly get confused between all the similarly named versions of the same event, and won’t know which one is the correct one to use.

What I like to do first is use no capital letters, all lowercase, so that it doesn’t get confusing if events need to be referred to in the code. Programmers don’t need to wonder where the capital letters were, which may seem like a small thing, but when there are 200+ events it is appreciated.

Then there is the matter of the underscore ‘_’ versus the slash ‘/’. That may depend on the audio engine and/or game engine you are using. For instance, using Fabric in Unity, all my events are named with slashes for the simple reason that Unity then automatically divides them into categories and subcategories in all dropdown menus. This becomes very handy when dealing with multiple tools and hundreds of events.

The organisation of your audio design document then pretty much tells you how to name your events. For instance:

category_subcategory_description_number  (a number may not always be required)

base_innerbase_overall_ambience

character1_footsteps_grass_01

etc

If you dislike long names you can use abbreviations, such as:

ch1_fs_gr_01

I personally find they can become quite confusing when sharing files, but if you do want to use them, simply remember to be clear on what they mean, for instance by writing both the abbreviated and full names in the doc, and make sure there is no confusion when multiple team members are working with those assets.
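If you want to enforce the convention automatically, a tiny script of your own can flag event names that break it before they reach the audio engine; the pattern below is just an example to adapt to your project:

```python
# Small helper to flag event names that break the convention:
# lowercase only, parts separated by underscores (or slashes for Fabric),
# optional two-digit variation number at the end. Adjust to taste.
import re

EVENT_PATTERN = re.compile(r"^[a-z0-9]+(?:[_/][a-z0-9]+)*(?:_\d{2})?$")

def check_event_names(names):
    return [name for name in names if not EVENT_PATTERN.match(name)]

events = [
    "character1_footsteps_grass_01",
    "base_innerbase_overall_ambience",
    "Character1_Footsteps_Grass_01",   # capital letters: flagged
    "ui/menu/confirm",                 # slash style: accepted
]
print(check_event_names(events))       # -> ['Character1_Footsteps_Grass_01']
```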

Folder organisation

Whether you are working as a one-person department on your own machine or sharing a repository for all the audio assets, a clear way of organising them will be crucial. When working on a project of a certain scale (which doesn’t need to be huge), you will rapidly accumulate dozens of GB of DAW sessions, video exports, and current and previous versions of files.

I suggest you keep separate directories for your sound files, DAW sessions and other resources. Your sound files directory should be organised in the same way you organised your Audio Design Document. This way, it is easy to know exactly where to find the sound(s) constituting a specific event.

I also suggest that you have a separate (yet similar) directory for previous versions. You may call it ‘PreviousVersions’ or something equivalent, and give it an identical hierarchy to the ‘current version’ one. This is so that, if you need to go back to an older version, you know exactly where to find it and can access it quickly. You can name those versions by number (keep the same terminology, and add a V01, V02 at the end).
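A few lines of scripting can also set up (or re-sync) that mirrored structure, so the current and previous version trees never drift apart. Here is a rough sketch; the category names are placeholders to replace with the ones from your own design document:

```python
# Rough sketch: mirror the Audio Design Document categories into a
# CurrentVersions and a PreviousVersions tree. Category names are
# placeholders; use the ones from your own design document.
from pathlib import Path

CATEGORIES = {
    "base": ["innerbase", "outerbase"],
    "character1": ["footsteps", "foley"],
    "ui": ["menu", "hud"],
}

def build_tree(root: Path):
    for category, subcategories in CATEGORIES.items():
        for sub in subcategories:
            (root / category / sub).mkdir(parents=True, exist_ok=True)

for version_root in ("SoundFiles/CurrentVersions", "SoundFiles/PreviousVersions"):
    build_tree(Path(version_root))
```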

Finally, for your DAW sessions, you may decide to go for something a little bit different in terms of hierarchy, but I find that maintaining a similar order is very useful for staying organised and being able to quickly go back to sessions you may not have touched in a while.

I also highly recommend using ‘save as’ to back your sessions up, and also any time you make changes to a version of a sound. First, corrupted sessions do happen, and you’ll be extremely glad not to have lost weeks of work when one does; but also, if your manager prefers an earlier version of your sound, with some modification, you can easily go back to exactly that version and start again from there, while still keeping the latest version intact.

So, if my asset hierarchy and division in my Audio Design Document looks like the one in the image above, my folder hierarchy would look something like this:

 

And finally you can create a folder for Video Exports for instance, and have your video screencaptures there, again organised in coherent folders. The principle will remain the same for any other resources you may have.

I hope this was helpful, happy organisation 🙂