Student Materials

Lessons

Chapter 1 - Introduction


Game Development Roles Defined

Design

Game designers are the heart and soul of game development. Game designs vary markedly based on the style of game being created. In platform games much of the game design goes into planning out levels and the visual aesthetics. For first-person shooters (FPS), game designers spend a lot of time planning the weapons and battle arenas, while games with a focus on puzzle mechanics may require a thoughtful approach to challenges. In broad terms, game designers come up with the game concept, story, and the world of the game. Additionally, there is a lot of hands-on work once the planning is done, as game designers are often also heavily involved in aspects of the art, the programming, or both.

Game Design Roles

Game Designer / Level Designer - This role is similar to that of a film director. A game designer develops the initial story and ensures that the gameplay follows her vision. She is responsible for the characters, plot, and conflict of the game. Depending on the team, a game designer may also be the programmer or level designer.

UI / UX (User Interface/User Experience) Designer - UI / UX design provides a functional system for delivering feedback to the player. The size of the team will define the UI / UX designer’s role. Smaller teams might not have a specific person for UI design, or it may be up to the game’s artist to develop the assets.

Writers - A game writer (or Narrative Designer) will work within the context of the design to provide the story and dialogue for the game. He works with the game designer to merge the story and game mechanics into an immersive experience for the player. The writer will design in-game missions, puzzles, or quests to create a compelling story and strong characters. For some games, he will also develop a backstory that will help guide the decisions of the level designer and ensure a cohesive story.

Programming

If game design is the heart of game development, programming is the backbone. Programmers are the ones who code mechanics and functionality into the game. Every type of interaction requires code, and a game almost always has more interactions than you expect. Natural physics doesn’t just happen in a video game. Programmers have to code in collisions with objects so that players don’t drop through the floor or fly up into the sky. This takes a ton of work, so programmers are usually the first to be brought into the game development pipeline after designers.

Programming Roles

UI Programmer - In charge of the logic and integration of the user interface and heads-up display (HUD) elements in the game.

Tools Programmer - Depending on the size of the development team, this person will be tasked with building tools that interface with 3rd party applications and even proprietary software.

General Programmer - The person in this role can wear “many hats,” and may work alone or with a few different people depending on the size of the team. Programming generalists can be tasked with things like AI scripting, gameplay, physics, and network communications.

Audio Programmer - This person works under the supervision of the sound team or audio lead to create the tools necessary for the implementation of audio elements and triggers into the game. The audio programmer may or may not be a composer or sound designer but should have a solid understanding of audio specs and digital signal processing.

Art

Art and audio are similar in that they both influence mood and emotion. Together they create much of the immersion that is so important to a successful game. Although players are actually interacting with game mechanics (which is the programmer’s domain), players usually feel the most connected to the visuals. The visuals are the first thing that stands out about a game even before players press start, so art is an incredibly important part of game development. Artists are usually brought on pretty early to create concept work, but usually not before a game prototype has been established.

Artist Roles

Concept Artist / 2D Artist / Illustrator - Concept artists define the visual style of the game, from environments to characters. Artists can use a combination of digital and traditional art skills to produce concepts. The category of “game artist” can be broken down into more specific roles when required. 2D artists and illustrators might be the main art team on 2D games.

3D Artists / Modellers - These artists use concept art as a reference to model 3D objects and characters in the game. Software like ZBrush and Maya can be used to create models. Texturing artists can then enhance 3D textures so they don’t look flat or unrealistic.

Animators - Animators combine art and technology to create movement for images and environments in games.

Level Editor / Environment Artists - These artists are in charge of creating environments and scenes in the game world using level editor tools. A level designer may also be called an “Environment Artist” and may be responsible for tagging materials in the game, which an audio designer can then use to trigger sounds.

QA (Quality Assurance)

While this role is often thought of as simply playing games for a living, it’s actually more about being able to break games. This means testing gameplay mechanics, logic, and audio to their limits. A person on the QA team should be organized and detail oriented. Reporting bugs goes far deeper than telling someone an issue was found. There are systems in place for bug and task tracking, and oftentimes tracking down bugs means playing the same part of a game over and over again.

Audio

At last we have arrived at audio. Audio is incredibly important for immersion because our ears are very sensitive and attuned to changes. Even if we can’t articulate why, we can very easily tell when audio sounds unnatural or distracting. A good audio team creates an immersive soundscape and ties the player emotionally to the story of a game.

Audio Roles

Sound Designer - Sound designers usually have a broad list of responsibilities. They create, edit, and implement audio assets into the game, often including dialogue and music. Some source sounds from a sound effects library, while others record their own using field recording skills (or record any number of essential game-specific elements in the studio). Most are then tasked with implementing the sounds into the game world. Most AAA game developers with large audio teams will expect their sound designers to manage at least some of the sound implementation (and, to be honest, with audio engines like FMOD and Wwise, implementation and playback logic are more accessible to non-programmers than they have ever been). “Asset Cannons” will one day be a thing of the past.

Composer – The composer creates the soundtrack for a game in various styles and may work with live musicians to record it. Creating a soundtrack for a game is a complex process, and there are various techniques for composing and implementing music: horizontal and vertical scores with stingers and transitions, multi-track sets of stems which fade up or down on top of each other, and more elegant systems involving intros, loops, outros, transition cues, stingers, and so on. We’ll get to each of these in turn as we move forward in the text.

Technical Sound Designer - These audio professionals may do all the tasks a sound designer does, with the addition of advanced implementation and possibly some coding and systems creation. The role can be defined differently from studio to studio. A solid understanding of source control and some DSP can also be useful.

Audio Programmer - These programmers handle game-related audio coding and DSP, and create tools for the audio designers. Some audio programmers may be tasked with implementing sounds into the game and dealing with playback logic. While the Technical Sound Designer role only sometimes requires coding, audio programmers will always be expected to write code.

Orchestrator – Orchestrators transpose music and write / edit scores based on the composer’s initial draft or sketch (for more information see Chapter 5 in the textbook).

Musician – Musicians are often hired to help scores sound more natural and exciting.

Audio Director - Audio directors oversee all aspects of audio on the project. Expertise in audio design and production is required, often along with proficiency in music composition, voice production, and implementation. Often ADs are skilled at managing, developing, and mentoring audio staff.

Voice Director - This role comes with many responsibilities. They include, but are not limited to: breaking down the script by character, by scene, and by the beats within each scene, and determining the flow of the voice actors’ performances as well as the correct edits and cuts. Organizational and communication skills are important to the success of the director because they are essentially a liaison between the actors and the engineers, writers, and producers. Prior to the voice-over recording session, the director will have met with the producers and writers to fine-tune the tone of the project as well as the personalities and attitudes of each of the characters, to help pull a perfect performance from the actors.

Dialogue Recordist, Editor & Localization – This person runs dialogue sessions, oversees proper microphone setup, and handles the clean-up of all the dialogue to prepare it for implementation.

Voice Artist – These people are the talent behind any narration and dialogue. Similar to musicians, they can be brought on to make a game more exciting by giving the player dialogue to listen to rather than read.

Foley Artist / Recordist – Foley artists are responsible for recording Foley footsteps, ambiences, and sfx in the studio and in the field.

Mixer – All the sound design elements, all of the music, and all of the dialogue must fit together in a coherent and emotive way. It is the task of the mixer to combine all sound elements and balance the levels. The final mixer could be someone on the sound team or the audio director. Whether it is just one person doing this or a whole team of people, the task must be scheduled, carried out, and finessed if a game is to sound good and its players are to feel totally immersed in the overall gameplay experience.

Essential Soft Skills and Tools for Game Audio

Looping

Check out this video on looping audio assets by Akash Thakkar: Video Game Sound Design Tutorial - How to Make Looping Sound Effects (https://youtu.be/NbahluQfbpc)

Let’s discuss a few tips for creating assets that are ready to be looped:

  • Patterns and Pitch - If your sound is tonal or has a repetitive pattern, it’s fine if it moves into other pitches or varies the pattern, but you will want to ensure the end of the asset returns to the starting pitch (or resolves the pattern) so it sounds smooth as it loops.
  • Volume - It’s perfectly fine to have fluctuations throughout the asset, but be sure to end the asset with volume that matches the starting level.
  • Effects - Many effects modify the sound or add a tail (such as reverb, chorus, delay, etc.). These effects have a natural decay time, so if your sound ends while the effect is still ringing out, there will be an audible bump at the loop point. Without careful consideration of how the effect will sound at the loop point, you may find the start of the file has no effect, or that the applied effect is inconsistent with the tail. This applies specifically to effects “baked” into your asset and not to effects that are applied in the audio engine.
  • Fades - Looped assets generally don’t sound great when there are fades longer than a few samples at the start and end point. In general, avoid large fades.
  • If any of the above items are keeping a sound from looping seamlessly, cut the file and splice it back together (with fades) until it loops adequately. A reliable trick is to cut the file in half, swap the two halves so the original start and end meet in the middle, and crossfade that seam; the loop point then falls on audio that was originally continuous (see the sketch after this list).
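
To make the cut-and-splice trick concrete, here is a minimal Python sketch of the swap-and-crossfade approach. It assumes the NumPy and soundfile packages, and the file names and the 50 ms crossfade length are placeholder choices; treat it as an illustration of the idea rather than a finished tool.

    import numpy as np
    import soundfile as sf

    # Load the asset (loop_source.wav is a placeholder for your own file).
    data, sr = sf.read("loop_source.wav")
    if data.ndim > 1:
        data = data.mean(axis=1)          # fold to mono for simplicity

    half = len(data) // 2
    a, b = data[:half], data[half:]       # a = first half, b = second half

    # Playing b then a moves the troublesome loop point (end of file ->
    # start of file) into the middle, where we can crossfade it.
    fade_len = int(0.05 * sr)             # 50 ms equal-gain crossfade
    fade_out = np.linspace(1.0, 0.0, fade_len)
    fade_in = 1.0 - fade_out

    cross = b[-fade_len:] * fade_out + a[:fade_len] * fade_in
    looped = np.concatenate([b[:-fade_len], cross, a[fade_len:]])

    # The new file starts and ends on audio that was continuous in the
    # original, so it should loop without an obvious seam.
    sf.write("loop_source_looped.wav", looped, sr)

You would still audition the result on loop (and check the edit against zero-crossings) before shipping it.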

Another thing to consider when looping is the reverb tail. Reverb reflections only occur after a sound excites them, so when you bounce an asset from your DAW, the reverb will be audible on the file only after the initial attack at the start of the file. This means the start of the file won’t have the same reverb as the tail of the file. Here we will lay out the steps for a looping method that takes the reverb tail into account.

  • In your multi-track session repeat your sound three times.
  • Highlight and select all 3 sections and bounce them out as a wav file.
  • Load the file into your 2-Track audio editor or back into your DAW on a track without processing.
  • Crop out the center section of the 3 copies. (This can be done by checking the original length of your sound and doing the math to figure out the middle section’s start and end points; see the sketch after these steps.)
  • Edit at the zero-crossing and add a slight fade (between 20 - 100 ms) to ensure there are no pops or clicks.
  • While monitoring with headphones, enable the loop function and press play. You may find that closing your eyes helps you focus and better catch any issues at the loop point.
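
If you prefer to do the cropping step numerically rather than by eye, the arithmetic is simple: if one copy of the sound is L seconds long, the middle copy of the triple-length bounce runs from L to 2L. Below is a rough Python sketch of that step; it assumes the soundfile package, and the file names and example length are placeholders for your own asset.

    import soundfile as sf

    # Bounce of the sound repeated three times back to back (placeholder name).
    data, sr = sf.read("asset_x3_with_reverb.wav")

    one_copy_sec = 4.0                    # length of ONE copy, in seconds
    start = int(one_copy_sec * sr)        # middle copy starts at 1x the length
    end = int(2 * one_copy_sec * sr)      # ...and ends at 2x the length

    middle = data[start:end]              # carries the reverb tail of copy one
                                          # folded over its own beginning

    sf.write("asset_looped.wav", middle, sr)

You would still check the edit points at zero-crossings and add the slight fades described above before calling the loop finished.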

The steps above describe how this can be done manually in any DAW, but in Logic Pro specifically there is an option called “2nd Cycle Pass” which will add the effects tail to the start of the file as it is rendered. Essentially, it plays the regions twice and records the second pass, so effects like reverb or delay are folded over to the start of the loop.

When looping any asset you should focus on starting and ending the audio at a zero-crossing (see Figure 1.1). This is the point in your editor where the waveform crosses the zero axis; technically, it is the point of zero amplitude where the signal transitions from positive to negative or vice versa. A horizontal line spanning the editing space usually marks it. Most editors offer a “Snap to Zero-Crossing” setting for convenience. Whether you activate this setting or not, you should always ensure your edit points are at a zero-crossing. Jumps in amplitude at the edit point can cause pops and clicks in the loop (and sometimes in one-shot sounds as they start and stop in game).
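
For readers who like to see the mechanics, here is a small sketch of what a “snap to zero-crossing” feature is effectively doing: given a desired edit position in samples, it finds the nearest point where consecutive samples change sign. The function name and test signal are our own illustration (not any editor’s actual API), and it assumes NumPy.

    import numpy as np

    def nearest_zero_crossing(signal: np.ndarray, index: int) -> int:
        """Return the sample index closest to `index` where the waveform
        changes sign between consecutive samples."""
        signs = np.sign(signal)
        crossings = np.where(signs[:-1] * signs[1:] < 0)[0]
        if len(crossings) == 0:
            return index                  # silence or DC offset: nothing to snap to
        return int(crossings[np.argmin(np.abs(crossings - index))])

    # Example: a 440 Hz sine at 48 kHz; snap an arbitrary edit point to zero.
    sr = 48_000
    t = np.arange(sr) / sr
    sine = np.sin(2 * np.pi * 440 * t)
    print(nearest_zero_crossing(sine, 12_345))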


Figure 1.1

Exercise:

This exercise is designed to help you get more familiar with the skills required by game studios. Even if you don’t feel ready for jobs with certain titles, you can add those skills to your “skills to master” list.

Review Sound Design job posts from various game developers and create a list of the skills you need to acquire or strengthen.

Passion and Enthusiasm for Sound and Games

Having passion and enthusiasm for sound and for games is a great place to start; however, they are two completely different passions. People who have a passion for sound often come from musical backgrounds. This can be a straightforward path because you will probably have mastered your DAW already and have had some experience with recording equipment and editing. If you have been mixing and mastering your own music, you can apply that skill on your path to a career in sound design. Often sound designers for film, television, or theater will also have an interest in transitioning into the world of game audio. This has similar benefits to the musical background, with the added experience of actually dealing with (and often designing from scratch) sound effects themselves, which is even closer to the experience needed to land a role in game audio. In this case, most of the creative tools and techniques are there, but an understanding of nonlinearity and the toolsets of game audio will be needed.

Passion for games is a completely separate trait, and it is equally indispensable. Playing games is the best way to research industry trends and garner inspiration for your own work. Without playing games yourself it is impossible to understand how games work. There are a number of development platforms and an ever-growing list of games to play. It’s a good idea to try a variety of genres. If you are looking for a role at a particular studio, make sure to familiarize yourself with their style as well. If you don’t have access to all the games you would like to play, you can supplement your research by watching gameplay walkthroughs on YouTube or by following “Let’s Play” style streamers.

In any industry it helps to have passion for the work you are doing. Passionate people usually find themselves immersed in their interests, and therefore opportunities are more likely to present themselves. A passion for games will give you an understanding of the product you are trying to make, and a passion for sound will inspire you to try various techniques and equipment, and to continue sharpening your skills. New ideas will flow and you will grow more confident and comfortable in your work. Without a general enthusiasm for sound design or games it will be difficult to fully understand the work required of you.

Technical Knowledge

Technical knowledge of sound recording, field recording, synthesis, and editing techniques forms the core skill set needed to establish yourself as a sound designer. An employer or client will expect you to have this expert knowledge along with a mastery of tools (e.g. DAWs and audio engines). Earlier in this chapter we discussed these topics, so here we will talk about acquiring these skills.

Game audio courses and majors, via online or in-person study, are popping up all over the world. They are a great way to gain solid basic knowledge and experience in the field. Berklee College of Music and Berklee Online, School of Video Game Audio, Full Sail, and Game Audio Institute are a few options to explore. While these traditional methods are great for someone just starting their education or interested in continuing education, there are also additional ways to supplement your game audio studies. For example, the Internet is filled with amazing tutorials and videos of professionals showing off their trade. There is a great video on YouTube of Marty O’Donnell demonstrating footstep Foley (https://youtu.be/-FU_gMFW7Uk). He has little chains attached to his jeans, which “jangle” as he walks; the jangles were probably used as armor movement sounds. Bjorn Jacobson (www.youtube.com/user/BjarneBiceps) and Akash Thakkar (www.youtube.com/user/SexyTownBrown) also have great video series on YouTube focused on game audio. These are just two examples, but a simple Google search will yield many, many more.

Game engine and audio middleware developers do a great job of providing documentation, tutorials, and example projects to study and work with. Simply check out the FMOD and Wwise websites for the documentation and tutorials; they each have their own video series as well to help you get a jump-start into middleware. Facebook groups focusing on game audio and sound design are a great way to jump into conversation with professionals in the industry. There are Reddit and Slack channels as well that focus on game audio. To get into Slack channels you typically need an invite from the channel organizer or someone already in the channel who will vouch for you. This may take a bit of reaching out to the right people, but once you are in these channels they can be immensely helpful for feedback and basic information on best practices. Podcasts are another great way to listen in on discussions that cover everything from demo reels and interviews to techniques and equipment. Google search is your friend, but some of our personal favorite podcasts are Beards, Cats and Indie Game Audio, Game Audio Podcast, and Reel Talk. Game audio blogs such as designingsound.org, blog.lostchocolatelab.com, asoundeffect.com, gamasutra.com/category/audio/ and thesoundarchitect.co.uk will keep you up-to-date on the latest tips and tricks.

In addition to public channels, there are also plenty of organizations, meetups, and conferences to quench your game audio knowledge thirst. Game Audio Network Guild (G.A.N.G) is a network of game audio professionals that provides networking, education and advocacy. International Game Developers Association (IGDA) is a network for everyone in the game industry as a whole. They provide meetups, scholarships, education and advocacy. A conference like Gamesoundcon is a great place to energize your education and meet others in the game audio industry. The Game Developers Conference (GDC) is a major conference for the industry as a whole. It’s a great place to meet people from all game development disciplines but the Audio Track will provide you with access to various audio talks and demonstrations. There are a number of conferences worldwide that can be found via Google search.

Game Jams are “hackathons” that bring together people from various game industry disciplines to plan, design and develop a game with a specific theme over a short time period. They are typically held over a weekend and people can join the event with or without a team in place. It’s a great place to meet indie developers and gain some experience in game development.

Regardless of how you decide to go about it, there are plenty of resources to work with to study game audio in addition to this book. Make no mistake, though: studying game audio is no replacement for doing game audio. No matter how much research you’ve done, potential employers always want to see your work, and they are looking for top quality. “Expert knowledge” also requires practical application of these game audio concepts.

Communication and Time Management

Game development almost always requires teamwork. Working with a team requires solid communication skills, as you will be relaying and receiving information from teammates that is critical to the project. Communication isn’t just about knowing the right words to say; rather, it is about practicing being responsible and keeping the team up-to-date. Communication can happen via phone, chat, Google Docs, email, or other team-based collaboration tools like Slack or Trello. If you are working away on your tasks but ignoring your responsibility to update the status of your work on a spreadsheet or task manager (e.g. Trello), other members of the team won’t know where you are in the development process. This can delay or even block your teammates from completing their tasks. This is why it is important to always communicate your work and status with your team. Keep in mind that as a game audio professional you are part of the game development team - not an island. Prioritize the game development process as a whole, maintain a team-oriented mentality, and you will make the experience easier for everyone.

Time management also helps keep the ship moving forward. Without time management skills the project may never reach its “Going Gold” deadline. The game development process requires everyone to meet their deadlines and keep on course. If an animator delays an animation, the sound designer is delayed in finalizing the sound. Likewise, if the sound designer delays the delivery of a sound asset, the programmer will be delayed in integrating it into the final project. It’s worth noting here that as the audio professional you are essentially the last in the pipeline. This means that if you miss a deadline, those sounds might fail to be integrated into the game.

Managing time also comes in handy with the ever-evolving deadlines and quick turnarounds that often surface during development. If you are organized and manage your time well, you can take on these unexpected changes with little effort. If you are disorganized and scrambling to manage your current workload, you may sink further and further into disarray. Teams often implement Scrum, Kanban, or design sprints as methodologies to organize work in the development pipeline. Having some understanding of sprints, user stories, and requirements can be valuable preparation for working well with teams. In the case of freelance sound designers, the sustainability of your career often lives or dies based on your ability to manage your time between various projects.

Self-Sufficient Problem Solving

When working on a team everyone has their own focus and set of tasks for the day. While it’s great to have knowledgeable members on the team, it really pays off to be self-sufficient. We mentioned previously that the Internet is your friend. If you are faced with an issue, it helps to do a little research and try to solve the problem on your own before bringing in several members of the team. People tend to enjoy working with others who are self-sufficient. Additionally, the ability to troubleshoot your own issues will save you time and teach you an incredible amount about the field of game audio. With that said, it’s important not to spend too much time trying to solve an issue without connecting with the team to ensure you are on the right path. Doing a bit of research before reaching out to the team for help will arm you with facts and data to help you ask the right questions.

There is a wide range of problems that can come up during development. Software development involves many cutting-edge tools, from which any number of issues can arise. Let’s imagine that you are starting a new project and the developer has provided you with the files to open the game build in the engine’s editor. When you open the project you notice a pop-up warning asking you to upgrade the project. You click okay and the project updates and opens. When you try to play the project you are met with numerous errors. What could be wrong? What do you do?

There are two ways to handle this situation. The first and seemingly easiest way is to ask the programmer about the errors. You may end up sending screenshots of the error console and going back and forth a bit, wasting time. The other option is to put on your problem-solving hat and do some Internet searches for answers. You can run searches based on the specific errors you are seeing. You can also backtrack the steps you took when opening the project. This is the same as getting lost in the woods and retracing your steps to find your way out. By backtracking you might remember that the programmer clearly instructed you to use a specific version of the software, thus identifying the problem and a possible solution. These situations come up constantly during game development, so being able to solve them on your own will save you and your team time and stress. It’s important to remember that troubleshooting is part of game development. We will repeat that because it is crucially important: troubleshooting is unavoidable. You will run into errors because game development is complex. It’s important to be able to work your way through a problem by thinking critically.

Curiosity for all Areas of Game Development

This is more of a general mindset than a skill, but it can be very rewarding for a composer or sound designer to embrace game development as a whole. When you look at what it takes to make a game from all sides of development, you will find technical limitations, deadlines, workflow, processes, various software and systems, and more. These are all important areas of the development process, and it really helps put things in perspective to cultivate a genuine curiosity for each of them. By cultivating an attitude of genuine curiosity and respect for the game development process and by following your interests you are likely to 1) learn skills quickly and 2) find yourself in a situation where these new skills and interests are now paying off with a job opportunity.

Exercise to get you more familiar with the development process as a whole.

Exercise:

Start thinking about the game development process as a whole. Do some research and be honest with yourself: what do you like about other roles? What roles have shared skills that you may even already have? Think about all the roles in the audio industry as a whole. Find time to pursue these interests and learn more about them. You may have an opportunity for paid work in the near future, and learning a tangential skill could earn you the gig.

Exercise to get you more familiar with the tools used by teams and how to get some quick experience working with smaller teams.

Exercise:

  1. Make yourself familiar with some of the collaboration tools discussed in this section.
  2. Join a Game Jam in your area or even join in an online Jam. It’s a great way to get experience working with a team.
  3. Post your demo reel on indie gamer website forums or online community groups where appropriate to connect with teams.
  4. Attend local and non-local industry events and meet-ups to connect with game developers.

Working With Feedback

The work we do in game audio or any creative field is subjective. As artists, we take a lot of pride in our work and it can be hard to accept criticism and feedback. When working on a team developing a game we need not only to be open to receiving feedback objectively, we need to seek it out and welcome it with a smile. Refer to Chapter 12: Digesting Feedback for more details.

Critical/Analytical Listening

An indispensable skill for a game audio designer is the ability to listen to sound critically and analytically. This goes hand in hand with aural skills, which will help you identify frequencies, harmonics, and acoustics. Aural skills, however, are not enough. You must also be able to play a game and understand how the implementation has been executed. To do this it is important to play games often, and really listen not just to the sonic content but to exactly how the sound and music support the narrative and adapt to gameplay. How does the music set the tone or mood of the scene? How does the ambience change from scene to scene? Do the sound effects adequately direct the player’s attention? These are important concepts to be familiar with.

An additional way to get inspired is listening to the game music tracks without visuals or context. This allows for direct focus on the musicality and the composition of the piece. Of course, it’s a good idea both to play the game, experiencing how the sound and music are implemented, and to listen to the soundtrack or gameplay without visuals. Practicing these techniques will help you listen more critically to a game’s soundscape, and in turn will help you be more analytical about your own work.

Networking

Networking skills are not typically listed in job ads, but it’s an important skill to have if you plan to work as a contractor or freelancer. Relationship building and networking skills are great for working in-house as well. Making lasting impressions on the people you meet throughout your journey can help keep you working in the future. Part IV: Business and Networking of this book provides in-depth info on networking and the business side of game audio.

Tools

Every game composer has a unique set of tools that help to create soundtracks. In today’s world, technology has become ubiquitous, especially in the game industry. Because of this it is easy to confuse the contents of your studio with your ability to compose quality game music. Despite the allure of modern music software, your ability to write effective game music comes from practice and experience - not from a particular plugin or sequencer. With that said, modern technology offers us some amazing tools to broaden creativity and increase productivity. As long as you keep in mind that these are all tools to support your own unique workflow, the following list will likely help you organize and build a studio for game audio:

  • A computer with adequate RAM, CPU and GPU
  • A DAW (Digital Audio Workstation) and optional 2-Track editor
  • Audio Interface and MIDI Keyboard
  • Virtual Instruments and Plugin Effects
  • SFX library
  • Middleware and Game Engines
  • Field Recorder
  • Foley and sound design props
  • Monitors and Headphones (Closed Back / Open Back)
  • Microphones (Dynamic, Condenser with various pick up patterns)
  • Microphone stands, suspension mounts, boom pole, pistol grip, windsock for microphones, reflection and pop filters

This list is a great starting point, but as you get to know your tools and develop your skills, your workflow will evolve and you may outgrow some of your existing setup. Some of the items in the list above are pretty self-explanatory, but let’s break down the ones that need further discussion.

Keeping in mind the phrase “The tools you use won’t make you great” is a good way to avoid buyer’s remorse. People interested in entering the industry often ask me, “What plugins or DAW are you using?” It’s a great question if you are looking for something new because you have outgrown your existing software or hardware, or are looking to solve a specific problem. If you are asking the question in hopes of finding a magic solution to improve your sound, a piece of software isn’t a likely solution. Learning to work with the tools you have is the best way to improve your work. - Gina

Computer

A computer is a necessary device for a digital audio artist. Your main computer should have sufficient RAM, CPU, and GPU to run your DAW, various virtual instruments and plugin effects, collaboration software, game engines, middleware, and of course - games! Oftentimes a good video card and fast processor are overlooked, but being able to smoothly run the game from within the engine’s editor will help strengthen your workflow.

CPU speed affects the execution speed and the number of processes the computer can run at once. A slower CPU can affect audio performance if too many tasks are executing at the same time. The amount of RAM you require is dependent on the number of plugins you plan on running. 16GB of RAM is a good starting place, but you may need up to 64GB or even 128GB for optimal performance if you have an extensive virtual instrument template. Of course you can also reduce the quality of your project in the engine’s editor or freeze tracks in your DAW to reduce the strain on the CPU and RAM, but it is always advisable to work at the highest possible quality. When choosing a CPU, GPU, and RAM configuration, consider not only the requirements of your audio software but also those of the game engine, middleware, and VR head-mounted displays.

The big question is which operating system (OS) to choose. Most game developers will be using Windows machines, so if you end up working in-house it would be a good idea to be familiar with that OS. Working as a contractor or freelancer, you may be limited if your only computer is macOS-based and the developer is building for Windows. A big part of game audio is being able to test assets in game to ensure they are properly hooked up and triggering correctly. Many musicians and audio artists prefer working on macOS, but it really depends on what your budget is and what you feel comfortable working with. Some have a dual-computer setup where they focus on asset creation on macOS, and on implementation, scripting, and testing within the engine on Windows. Likewise, if you are working on game projects for mobile, it’s helpful to have a mobile phone or tablet for testing and playing builds of the game. Regardless of which device you choose, you should aim to become technically proficient with whatever computer setup you have.

Digital Audio Workstation

As far as tools go, the most foundational is the Digital Audio Workstation or DAW. A DAW is a program such as Cubase, Nuendo, Logic, Pro Tools or Reaper that allows you to record and edit audio and MIDI (software instruments). There are many DAWs available to use, and there is also much debate about which DAW is “best” for game audio. Despite this, the best DAW is really whichever one is most efficient and inspiring for your particular workflow.

Your DAW is going to be your main tool for sourcing and designing game audio. All your audio recording, editing, processing, MIDI, and mixing will be done in your DAW. It’s important to choose one that best fits your workflow and budget. There are a variety of great options to choose from, and it’s not a bad idea to learn more than one DAW, as each one has its own strengths and weaknesses. When working in-house you might be tied to the particular DAW the team is already using, so it will be helpful to be flexible about jumping between different software. As a contractor or freelancer you may find yourself working with another audio artist who uses a specific DAW, or you may find one DAW offers more flexibility in batch exporting and naming than another. Some DAWs are geared more for music production while others focus on audio post-production. Popular DAWs such as Reaper, Nuendo, Cubase, Pro Tools, Logic, and Ableton Live each have their own features and price points, so it’s advisable to do some research before settling on one or more options.

In addition to your main DAW, some workflows include the use of a 2-Track Editor such as Adobe Audition, Sound Forge, Ocenaudio, or TwistedWave. Most DAWs have a sufficient built-in editor, but a dedicated editor can offer some time-saving steps while putting your assets under the microscope for editing. For smaller budgets, or to test the workflow, Audacity is a free 2-Track Editor option.

Let’s discuss some of the benefits of using a 2-Track Editor in your workflow. A stand-alone editor can be useful for checking edits at the zero-crossing to avoid any pops or clicks in the audio. A dedicated 2-Track Editor will allow you to easily trim the heads and tails of assets bounced from your DAW via destructive editing, which allows for a quick save of each file without having to re-export. Re-creating this same process in your DAW would mean editing the heads and tails non-destructively, thus adding the extra step of a second round of exports. Destructive editing shouldn’t be off-putting, as most editors have multiple levels of undo prior to saving and removing the file from the editor. Keep in mind that some DAWs do offer destructive editing options as well. A dedicated 2-Track Editor can also be useful for batch processing multiple files at one time.
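
To give a flavor of the kind of batch step a dedicated editor automates, here is a minimal Python sketch (assuming the NumPy and soundfile packages; the folder names and the -60 dBFS threshold are placeholder choices) that trims near-silent heads and tails from every WAV in a folder and writes the results to a separate directory so the originals stay untouched.

    from pathlib import Path
    import numpy as np
    import soundfile as sf

    SRC = Path("bounces")            # folder of exported assets (placeholder)
    DST = Path("bounces_trimmed")    # results go here; originals are preserved
    DST.mkdir(exist_ok=True)

    THRESHOLD = 10 ** (-60 / 20)     # treat anything below -60 dBFS as silence

    for wav in SRC.glob("*.wav"):
        data, sr = sf.read(wav)
        mono = np.abs(data if data.ndim == 1 else data.mean(axis=1))
        loud = np.where(mono > THRESHOLD)[0]
        if len(loud) == 0:
            continue                              # skip files that are all silence
        trimmed = data[loud[0]:loud[-1] + 1]      # keep first-to-last audible sample
        sf.write(DST / wav.name, trimmed, sr)

A commercial editor will add fades, format conversion, and naming rules on top of this, but the underlying idea is the same.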

Audio Interface

An audio interface will be necessary for running your DAW. Think of it as the connection between your computer and your outboard gear, such as microphones, line-level instruments, and MIDI controllers. It also processes the digital data from your computer and converts it into an analog signal that plays back over your headphones or monitors.

Choosing an audio interface can be overwhelming since there are so many options available. Since the interface connects to your computer, you will want to choose one that fits the connection types your computer has available. Typically interfaces connect via USB, FireWire, or Thunderbolt. It’s important to ensure the interface drivers work with your OS of choice, as some are built specifically for one particular OS.

Once you have decided on the connection type for your interface, you will want to decide on the number of inputs and outputs. It’s always a good idea to be equipped with a few more than you initially think you need so you can grow into it. Be sure to consider both digital and analog I/O. If you plan on using condenser microphones with the interface, you will want one that has phantom power.

Another important consideration is bit depth and sample rate. If your interface limits you to 16-bit depth (or word length), you will end up with limited dynamics and a higher noise floor (the level of background noise relative to the desired audio signal) in your recordings and mixes. The professional standard is 24-bit depth, so ensure that your interface can handle this bit depth. Sample rate is also important to consider, though there are many arguments over how it affects the sound in general. While a sample rate of 44.1kHz is acceptable quality to work with and can reproduce any sound within the human hearing range, 48kHz and higher will capture additional information which can be useful when pitch shifting or time stretching assets; generally speaking, it will produce a smoother final product. Recording and editing at higher sample rates and bit depths will ensure quality assets that can always be downsampled and compressed to fit the delivery requirements, so it’s usually better to err on the side of higher quality.
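
As a quick back-of-the-envelope check, each bit of word length buys roughly 6 dB of theoretical dynamic range, and the highest frequency a given sample rate can represent is half that rate (the Nyquist frequency). A tiny sketch of the arithmetic:

    import math

    def dynamic_range_db(bits: int) -> float:
        # Theoretical dynamic range of linear PCM: 20 * log10(2 ** bits),
        # which works out to about 6.02 dB per bit.
        return 20 * math.log10(2 ** bits)

    for bits in (16, 24):
        print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB of dynamic range")

    for rate in (44_100, 48_000, 96_000):
        print(f"{rate} Hz sample rate: content up to {rate // 2} Hz (Nyquist)")

So 16-bit gives roughly 96 dB and 24-bit roughly 144 dB of theoretical dynamic range, which is why 24-bit recording leaves so much more headroom above the noise floor.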

It’s important to note that sample rate has been a topic with various opinions throughout the audio community. Some will say anything over 48kHz is wasted memory and processing, while others believe there are harmonics above 20kHz that affect how we perceive the frequencies below it.

MIDI and Virtual Instruments

With your DAW and audio interface up and running, you will want to choose your MIDI controller for virtual instruments (i.e. software instruments) and DAW automation. MIDI controllers are typically connected either via MIDI port to the audio interface or directly to the computer via USB. The type of controller you choose should complement your workflow. For example, a sound design specialist may only need a 25-key controller for inputting data, but might be more interested in slider and pad controllers, whereas a composer may require a full-size 88-key controller with weighted action and an additional 25-key controller for key switching (see Chapter 6) virtual instruments. Sound designers use virtual instruments and synthesizers to generate source material and tonal content but may not need the extensive virtual instrument template that composers require. Sample libraries are important tools for composers, and they often set up large templates of sample libraries in such a way as to recreate an entire orchestra. Orchestral samples will be discussed in detail in Chapter 6. There is more information about DAW configuration for music templates in Chapter 7.

By contrast, synthesizers build up sounds from scratch using oscillators, wavetables, and samples. Synthesizers are specialized tools, and it is a good idea to have some working knowledge of them in case a developer provides a reference that uses a synth. Chapter 3 discusses synthesizers as a source for sound designers.

Synthesis

Both composers and sound designers can benefit from synthesis. Synthesis can be an important part of the sound design process. Having a variety of synthesizers, whether sample-based, subtractive, additive, FM, or granular, will offer greater flexibility for creating movement in sound, and when blended with mechanical or real-world sounds it can lend credibility to the listener’s experience. Virtual synths like Spectrasonics Omnisphere combine multiple synthesis types along with a multi-effects engine, which offers interesting modulation possibilities. Since variation is key in game audio, having subtle elements randomized in your ambience can make for an immersive backdrop.
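
As a tiny taste of building a sound from scratch, the sketch below generates a simple two-operator FM tone and randomizes the modulation depth on each run, the kind of subtle per-instance variation described above. It assumes NumPy and the soundfile package, and the carrier, ratio, and envelope values are arbitrary placeholders rather than a recipe.

    import numpy as np
    import soundfile as sf

    sr = 48_000
    dur = 2.0
    t = np.arange(int(sr * dur)) / sr

    carrier_hz = 110.0                       # placeholder values
    ratio = 2.0                              # modulator frequency = carrier * ratio
    depth_hz = np.random.uniform(50, 400)    # randomized peak frequency deviation

    # Two-operator FM: the modulator wobbles the carrier's phase/frequency.
    mod = np.sin(2 * np.pi * carrier_hz * ratio * t)
    index = depth_hz / (carrier_hz * ratio)  # modulation index = deviation / mod freq
    tone = np.sin(2 * np.pi * carrier_hz * t + index * mod)

    tone *= np.exp(-3 * t)                   # simple decay so the tone doesn't end abruptly
    sf.write("fm_tone.wav", (0.5 * tone).astype(np.float32), sr)

Run it a few times and layer the results under a recorded source to hear how small randomized differences keep an element from sounding obviously repeated.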

Plugin Effects

There are many plugin effects that are perfect for sound design and music composition. While most DAWs and 2-Track Editors come standard with a suite of plugins to get you started, 3rd-party plugins can be easily hosted in your software of choice. Having the latest and greatest tool doesn’t automatically ensure great audio unless you know how to use it inside and out. Sound designers must familiarize themselves with their plugins and learn their strengths and weaknesses to put them to good use. There are a variety of effect types to choose from, but a core set of tools will be necessary. It would be wise to choose your tools based on your own personal goals and preferred workflow, as everybody’s preferences are unique.

With a core set of fundamental plugins you will have the tools necessary for sound manipulation. When it comes to shaping and designing original source sounds into something unique, knowing what tools are available to shape sound is a key part of the process. Experimenting with plugins that weren’t specifically developed with sound design in mind can also yield different and interesting effects.

Sound Effects Libraries

You would be hard pressed to find a sound designer working in the industry without some kind of sound effects library. Composers may have a smaller effect library collection for more atmospheric and sound design oriented music. These libraries usually consist of both commercially licensed assets and personal source recordings compiled over the years. A well-rounded library at your disposal will arm you for any project that comes your way.

In the past sound libraries were expensive to acquire as they were usually bundled into a large collection. The library had to be purchased on DVD or a hard drive. To get started artists would purchase a general collection that covered the majority of the sources necessary and set up recording sessions for whatever the library didn’t cover. These collections cost upwards of a thousand dollars but provided five thousand sounds or more. Today’s libraries offer more flexibility in format, cost, and size. While the larger libraries from companies like Sound Ideas are still available, there are a good number of independent professional collections readily available. Companies like Boom Library, The Recordist, Tonsturm, and Chuck Russom FX provide smaller, affordable, and more focused independent collections through online marketplaces such as ASoundEffect and Sonniss. Now artists can purchase smaller libraries based on their needs. For instance, if you were designing sounds for a game with a robot theme you could license a library specifically focused on mech source sounds. These independent effects can be purchased per asset or as a small collection of sounds. Prices range from a few dollars to a few hundred dollars. The assets are available for direct download for instant access. This offers you the freedom to use the hard drive of your choice to store your full collection. FMOD (audio middleware software) has a built in sound library database. There is a search function built into the interface, which allows designers to search for and purchase a license in-app for quick use in any project. We discuss sound libraries and asset managers later in this chapter.

It’s important to read and fully understand the licensing terms of each library you add to your collection. Most of the libraries are royalty free to use in both commercial and non-commercial projects but often come with a few restrictions. Rarely a license agreement will require crediting the library’s creator in the game credits, but it can happen. More commonly, a license agreement requires one user per license so you can’t share it with your friends or colleagues. Designing Sound has a blog post that links to independent library resources.

A quick Internet search for a sound effect can link you to a variety of sources. You will want to ensure the source is reputable to avoid any copyright issues. Typical contracts between the developer and the sound contractor place all of the responsibility on the contractor to ensure there will be no copyright issues with material used.

It’s important to remember you are only granted a license for use of the sound, which restricts you from selling the sound directly. You are charging for your services rendered, and not for the sounds themselves. Since you own only a license to the sounds acquired from libraries, you are technically charging your client for the time spent designing and manipulating sounds. It is good practice to use the source material as an element within a larger work, so be sure to apply some manipulation to the source before delivering it. Basically, you can’t just wrap a sound you licensed into an asset and directly sell that asset.

Sound libraries can be considerable in size, so your workflow should include an asset management or metadata app to keep your library organized and easily searchable in granular detail. Some DAWs have a built-in library manager, such as Nuendo’s MediaBay or Reaper’s Media Explorer, but you will want to explore their features to ensure they fit your workflow. There are various tiers of sound library management software, so consider your needs and how you plan to use it.

Some users only require a simple way to search and audition sounds, while others may need the ability to search on a more granular level and edit metadata. To start, free apps like iTunes and Soundly can be used for organization and simple searches. Paid versions of Soundly offer more options for team collaboration on your library as well as cloud-based backups. Other apps like AudioFinder allow for a few more options regarding metadata. On the top tiers you will find apps like BaseHead and Soundminer, which provide better organization options, sophisticated search engines, and easy transfer of sounds. Whether you are using library sounds or importing your personally recorded source material, it’s important to keep all of your source material in its original state. Anytime you want to manipulate an asset in your DAW or editor, make a copy and save it in the local session folder. This will ensure your source material is preserved for other projects.
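
For a sense of what even the simplest library manager is doing under the hood, here is a minimal sketch that indexes every WAV under a library folder by the keywords in its filename and lets you search on them. The folder name and naming convention are placeholder assumptions; real managers add embedded metadata, waveform previews, and auditioning on top of this idea.

    from pathlib import Path
    import re

    LIBRARY = Path("SFX_Library")    # placeholder path to your sound library

    def keywords(path: Path) -> set:
        # Split "Metal_Impact_Large_01.wav" into {"metal", "impact", "large", "01"}.
        return set(re.split(r"[\s_\-]+", path.stem.lower()))

    index = {wav: keywords(wav) for wav in LIBRARY.rglob("*.wav")}

    def search(*terms: str) -> list:
        wanted = {t.lower() for t in terms}
        return [wav for wav, words in index.items() if wanted <= words]

    for hit in search("metal", "impact"):
        print(hit)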

Another helpful tip is to always keep a backup of your sound library. It’s easy to forget the value of virtual assets, but you will have put a lot of time and money into your sound library and want to ensure you have a plan in place. Hard drives fail and can be difficult or costly to recover. If you choose to directly download most of your library source, or have a large collection of your own recordings, they will be lost forever without a proper backup plan. Cloud-based storage is an option for backups, but if you choose to ghost (clone) your library drive to another external hard drive, be sure to put it in a safe location like a firebox or another space away from your studio. A combination of cloud and local drive storage can add extra security. There are many ways backups can be done, so find a solution that fits your workflow and needs.

Middleware and Game Engines

Game engines such as Unity and Unreal and middleware programs like FMOD and Wwise are both tools and skills to learn. The majority of sound designers will encounter these in one way or another. Composers, on the other hand, sometimes go their entire careers without diving into implementation. Despite this, we are seeing that composers are more often becoming responsible for implementation as well as the creation of the soundtrack. This is a good thing. It allows composers more agency over their music in its final format. It also offers the opportunity to incorporate creativity into how the music functions as well as how it sounds. Middleware programs will allow you to smoothly transition your music and add a depth of adaptivity to your score without learning to code. Even if your clients never need you to use a middleware program, it is still useful to be familiar with them. (See Chapters 8 and 9 for more on audio implementation.)

Locations, Microphones, and Field Recorders

Although “in the box” (or entirely software-oriented) methods are employed more and more by composers, it is really handy to have at the very least a simple microphone setup available. Especially if you are an instrumentalist, adding in a hint of live-recorded material can really make a track stand out. It’s even helpful in certain genres to record and process your own sound effects and employ them in an abstract way. The results can be striking compared to using only MIDI instruments. On the other hand, a sound designer will need to have more options available for capturing source.

Locations are not a tool per se, but part of having a useful set of microphones and field recorders is selecting and preparing locations for recording. Sound recording for games can be done pretty much anywhere, but the environment needs to be controlled. Common locations are acoustically treated studios, home studios, and pre-selected outside locations. When recording indoors it is important to avoid picking up computer fans, heating, pipes, dripping faucets, and outside traffic. This can be done by acoustically treating your studio or, if you have a lower budget, using mattresses, blankets, or gobos (moveable acoustic isolation panels). It will also help to turn off any running electronics and appliances. If you choose to record in your kitchen, for example, you will want to wait for the refrigerator’s motor to stop running or unplug it altogether - but be sure to remember to plug it back in! You may also have to turn off the heat or AC while you are recording to avoid the noise of forced air or steam. However you decide to record, choosing an environment that you can control will make things easier to manage when processing sounds later on.

Field recording becomes necessary when the source material cannot be accessed in a controlled space. This includes things like vehicles or animal sounds, and especially outdoor ambiences. Let’s say you need the sound of crickets for a game project. It’s a lot easier to take your recording rig outside than it would be to fill a room in your home with crickets. Think about the time it would take to gather enough crickets, and the difficulties you will have in getting them to leave!

Capturing clean sound outdoors presents some challenges that don’t exist in more controlled indoor environments. You won’t have control over traffic, human voices, animal vocalizations, airplanes flying overhead, or wind noise. There are many other little noises you might pick up as well when recording outside. A windsock or blimp can help reduce wind noise.

Check out www.wildmountainechoes.com/equipment/protecting-microphones-wind/ for further information.

One final note is that boom poles, suspension mounts, and pistol grips can all help keep handling noise from getting into the recording. Use of these tools can make or break viable source recordings, so don’t leave home without them!

Microphone Types

We can break microphones down into two general categories: dynamic and condenser. These categories refer to the type of transducer in the mic and not its directional characteristics.

(Note: Ribbon mics are an additional category but we won’t go into detail here. You can read further information here: www.musicianonamission.com/types-of-microphones/)

Dynamic mics are generally a bit more durable and resistant to moisture than the more delicate condensers. Most dynamic mics pick up less noise when recording but don’t capture the finer details of a sound the way a condenser easily will. Condenser mics, used along with phantom power, allow for higher input gain. It’s important to note that ribbon microphones capture sound in a process similar to dynamic microphones; instead of a moving coil connected to a diaphragm, a ribbon microphone uses a thin strip of foil.

Diaphragms

Microphones pick up sounds through their diaphragm, which vibrates as sound travels through it. The vibration converts sonic energy into electrical energy. Condenser mics come in small and large diaphragm or a hybrid of the two.

Mics with small diaphragms (or pencil mics) are lighter and easier to position. They are designed to have a wider dynamic range and handle higher sound pressure levels.
Large diaphragm mics pick up small changes in sound pressure levels. This feature creates the characteristic natural sound for which this type of mic is known. These mics are most common in recording studios and can be used to record a wide variety of sounds.

Polar patterns determine the directional characteristics of the mic. There are six polar patterns to choose from. Understanding the characteristics of each will play an important role in choosing the right mic.

Polar Patterns

Under the umbrella of dynamic and condenser we can break microphones down by polar pattern and diaphragm size. Selecting an appropriate polar pattern on your microphone can also reduce extraneous noise if the placement is right. Polar patterns (or pick-up patterns) describe the directions from which a microphone picks up or rejects sound. In other words, a narrow polar pattern like cardioid or hypercardioid will only pick up sound directly in front of the microphone, rejecting sound from the sides and rear. These patterns are fantastic for capturing specific sounds in detail. Other patterns, such as figure-8 and omnidirectional, can be useful as well, especially for ambiences. We will explore polar patterns and microphone placement techniques in further detail below.

Omnidirectional microphones pick up sound from all sides, which means no rejection or isolation. They handle wind noise and plosives (such as harsh “P’s”) better than shotgun and cardioid microphones. To isolate a sound when using an omni microphone, you will have to consider the proximity to the source you are trying to capture. Since the pattern allows sound pickup from all sides, each sound within range will compete in the recording. A closer proximity can help single out the sound you are targeting.
Cardioid mics capture sound in front of the mic, pick up less on the sides, and reject sound from the rear.

Super and Hyper Cardioid microphones have the same front side direction but a narrower area of sensitivity for more direct isolation.

Shotgun microphones have a very tight directional characteristic - much stronger than super and hyper cardioid mics. Their narrow focus and strong side and rear rejection make them great for both Foley work and for capturing sounds at a greater distance.
Figure 8 microphones have a pattern that looks like the number 8. The front and back pick up areas make the mic great for stereo recordings.

There are also various microphones that are used for specific recording needs. Binaural mics simulate a 3D sound sensation which mimics more accurately how we hear sound. Hydrophones are designed for liquid based recordings such as underwater explosions. Contact mics are placed directly on an object's surface and pick up vibrations from contact with the object.

As Neil Tevault, longtime NPR broadcast recording technician, explains it:

“Pickup patterns are like the inverse of lights (since light shines out but sound radiates towards a mic). An omnidirectional mic is like a bare light bulb, shining on all things equally. A cardioid mic is like a flashlight, shining forward in a wide but focused pattern and blocking light behind. The shotgun mic is like a laser, narrowly focused on one spot.”

(www.training.npr.org/2016/06/28/which-mic-should-i-use/)

We once used a treadmill, with a failing motor, as source material for a spaceship engine start up and shut down. Two mics were used for capturing the treadmill as it was turned on and off. A shotgun mic picked up the vibrations or sound waves in the air while a contact mic, placed directly on the frame, picked up vibrations as the unit powered up and down. The recordings from the contact and shotgun were blended to create a more massive sounding ship with a solid sub layer. - Gina

Field Recorders

While these mics can be used indoors or out in the field, you will need a way to record sources outside of the studio. Field recorders offer a convenient way to stay mobile. Portable recorders like the Zoom H4n start out around $200 while higher end models by Sound Devices can cost several thousand dollars. There are also the in-between models, such as the Sound Devices MixPre series, to consider if your budget allows. Budget is always an important part of the decision process but there are several other considerations to keep in mind.

Preamp quality is one of the more important considerations as it affects the noise floor. A mic preamp amplifies the microphone’s low-level signal; this is not the same as phantom power. All microphones require preamplification to raise their signal to a usable level, whereas condenser microphones also require phantom power just to power the mic.

Some of the lower end recorders have a higher noise floor due to lesser quality preamps. Noise floor is the background noise generated by the device while recording. A recorder with a high noise floor will produce lesser quality samples than a device with a lower noise floor. The quieter the better when it comes to producing quality audio but at the same time capturing a lower quality sound on the fly is sometimes better than not having access to the sound at all. Most of the recorders available today have better preamps than their predecessors but with some of the lower end units you may need to do some clean up with a denoiser before the files can be considered suitable source material for your sound design. You will want to research the pre-amp for both onboard and external mics as some recorders have lesser quality preamps for the latter.
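
If you want to quantify the difference yourself, here is a minimal sketch (in Python, assuming the numpy and soundfile libraries) that estimates a recorder’s noise floor from a “silent” test recording; the file name is just a placeholder for your own room-tone capture.

    # Estimate a recorder's noise floor by measuring the RMS level (in dBFS)
    # of a recording made with no intentional source, at your usual gain.
    # Assumes numpy and soundfile; "room_tone.wav" is a hypothetical file.
    import numpy as np
    import soundfile as sf

    audio, sample_rate = sf.read("room_tone.wav")
    if audio.ndim > 1:                       # fold stereo to mono
        audio = audio.mean(axis=1)

    rms = np.sqrt(np.mean(audio ** 2))
    noise_floor_dbfs = 20 * np.log10(rms + 1e-12)   # guard against log(0)
    print(f"Estimated noise floor: {noise_floor_dbfs:.1f} dBFS")
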
Recording format is another important consideration. Good source material starts with quality recordings, so recording at the highest possible sample rate and bit depth is important. Look for a device that can record high-resolution WAV files at 96-192 kHz, 24-bit. While CD quality (44.1 kHz, 16-bit) might seem good enough, it won’t preserve the quality when you manipulate the files with pitch shifting or time stretching.

You will also want to consider the number of tracks the device can record at once. Storage capacity and battery life will matter if you plan on taking your session outside the studio or anywhere without easy access to a power source and a computer for backup. The more portable recorders typically have onboard mics, but you may want to use an external mic, so be sure the recorder you choose is equipped with the proper connectors and phantom power. There are a variety of blogs and YouTube videos that review these field recorders and provide sample recordings to compare. It’s a good idea to do some research before you commit to one.

Foley and Sound Design Props

Now that you have your mics and recording equipment sorted, what will you record? There are plenty of sources in the environment around you, but having some props can come in handy. Foley is an art form that originated in the 1930s with Jack Foley. It’s the process of performing live sound effects in sync with picture. Large studios such as Skywalker Sound have hundreds of props and fully built Foley pits for recording sound.

You can build a small Foley pit in your home studio with various terrain types to record footstep Foley. Sound designer and composer Aaron Brown provides a step-by-step guide on his blog for creating a Foley pit for under a hundred dollars. Outside of the Foley pit you can use a range of props from frozen vegetables to power tools and obscure musical instruments for generating sound design source. For example, vegetables are great props for recording “Gore” source. Frozen celery or carrot sticks make for an excellent bone crunching snap. A rope mop soaked in water and slapped down onto the floor works well for blood splatter. Switches, mechanical keyboards, and binder clips are good props for button UI sound effects. There are a lot of great ways to use your creative energy to turn everyday objects into unique sounds. In Chapter 3 we will present some interesting Foley ideas and exercises for you to try out.

Monitors and Headphones

Studio monitors need to provide an accurate and uncolored representation of your audio. This is important in ensuring your mix translates well to all playback units, whether they are headphones, mobile devices, TVs, or other sound systems. While accuracy is important there are a few other considerations to keep in mind. When choosing monitors you will want to avoid going for what looks best in your room and instead go for what best fits the dimensions of your studio. In smaller spaces you will get better results with smaller monitors, but larger monitors will usually provide a wider frequency spectrum. Ideally you want to try out different monitors, as they will sound slightly different in various spaces. Listen for an accurate balance across all frequencies and playback that reveals both the good and bad details of your mix.

Monitors should be calibrated for better mixes, headroom, and translation to other monitors or speakers. It is a simple process, and you will find that all pro systems are properly calibrated, so your monitors should be too.

Here is a link to Aaron Brown’s blog article “Keep it Calibrated! Learn How and Why You Should Calibrate Your Studio Monitors for Video Game Audio” for specific details on calibrating your system.

www.playdotsound.com/portfolio-item/keep-it-calibrated-learn-how-and-why-you-should-calibrate-your-studio-monitors/

Since you are investing in your monitors set aside some budget to acoustically treat your room so you can reduce room reflections for more accurate monitoring. An acoustically treated room will offer you control over clarity, reverb and bass response. For more details on acoustic treatment check out the “Acoustic treatment buying guide” by Sweetwater Sound. (www.sweetwater.com/insync/acoustic-treatment-buying-guide/)

Checking your mix on a few listening devices always helps to accurately adjust the sound for a variety of playback systems. However, if you find yourself unable to preview your mix on monitors due to budget, space, or noise limitations you can work with a solid pair of headphones. Some audio designers even choose headphones over studio monitors regardless of the constraints previously mentioned.

When you break down the difference between auditioning on monitors vs. headphones, the relevant factor is how the sound travels to our ears. Studio monitors send the sound waves from both channels to both ears. Your left ear will receive the left channel as well as the right channel, but the information from the right will be slightly delayed and at a reduced volume. This crossfeed doesn’t happen when listening on headphones, which is very different from the natural way we listen. There are plugins that offer simulated crossfeed, which add a bit of the left channel mix into the right headphone and a bit of the right channel mix into the left. It takes some getting used to before you are comfortable mixing on headphones.
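
For those who like to experiment, here is a rough sketch (in Python, assuming numpy and soundfile) of the crossfeed idea: a quieter, slightly delayed copy of each channel is blended into the opposite ear. Commercial crossfeed plugins also filter the bled signal; this sketch skips that, and the file names are placeholders.

    # Simplified headphone crossfeed: bleed a delayed, attenuated copy of each
    # channel into the other ear, loosely imitating listening on monitors.
    import numpy as np
    import soundfile as sf

    stereo, sr = sf.read("mix.wav")            # expects a stereo file (samples, 2)
    left, right = stereo[:, 0], stereo[:, 1]

    delay = int(0.0003 * sr)                   # ~0.3 ms delay into the far ear
    gain = 0.3                                 # level of the bled signal

    def delayed(x, n):
        return np.concatenate([np.zeros(n), x[:-n]]) if n > 0 else x

    out_left = left + gain * delayed(right, delay)
    out_right = right + gain * delayed(left, delay)

    out = np.stack([out_left, out_right], axis=1)
    out /= max(1.0, np.max(np.abs(out)))       # simple peak safety
    sf.write("mix_crossfeed.wav", out, sr)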

Should you choose to mix with headphones, we recommend investing in a pair that is over-ear and open-backed. Some audio artists prefer closed-back headphones for the bass bump, since the smaller headphone diaphragms don't replicate the low end as well. Closed-back headphones also offer better isolation and can be useful in a noisy environment, where the benefits of improved accuracy in frequency response would be offset by an increased noise floor. It’s not ideal to mix with earbuds, as you are hearing all the sound very up close, which can cause ear fatigue.

Keep in mind that there is a difference between mixing and checking your mix. As we stated previously, it is important to check your mix on multiple formats but the process of mixing requires dedicated listening over long periods of time. - Gina

A grot box such as an Avantone MixCube is a great addition to your studio monitors or headphones, and a great way to work out issues within your mix. Most commonly, a single mono speaker is used as a grot box to simulate consumer playback systems. A grot box will help you balance the low, mid, and high frequencies in your mix for playback on mobile devices’ mono speakers. Games are often played on phones or tablets without headphones, just as console games may be played on a TV without a surround system. Ensuring your mix will work for all listening formats will make for happier listeners.

If a pro-grade grot box doesn’t work for your budget, you can try a lower end pair of computer speakers or build your own with a cardboard or lacquered box and a small speaker. Another option is to transfer your mix to a mobile device or stream through your TV speakers via a file share site like Dropbox or Google Drive. The downside of doing it this way is having to go back and forth between your DAW and device as you make changes. There are a few plugin manufacturers, such as Audreio, that support streaming from your computer to mobile device in real time. There are sometimes connection issues or lag but it bypasses having to create a new mix each time you want to check it.

Regardless of how you choose to monitor, you will want to be sure your mix sounds great across all consumer platforms, which means checking your mix in a variety of ways.

Tools and Skills Summary

The gear used in your workflow will evolve over time as you move on to new projects, reach the limits of your existing gear, and are ready to upgrade. It’s not necessary to start out buying the latest and greatest equipment.

Here is a final exercise in this chapter to help you get more familiar with your gear.

Step outside of your comfort zone and get to know your gear better. Start by reviewing your microphone and plug-in collections and try the suggestions below to further experiment with them.

1: Recording Source

  • Start a list of interesting Foley props based on various objects around you.
  • Choose a source to record.
  • Try all the polar patterns available on your mic. Compare the recordings with different pick up patterns and determine which would work best for your selected source. Why is one pattern better than the other for this specific sound?
  • Using the same source, experiment with various mic placements and positions. Compare those recordings and determine which works best for the source you chose.

2: Prepping source

  • If there is any noise in your source recording, it should be cleaned up before you start editing. Hiss from noisy preamps, a noisy environment, polar pattern choice, or mic placement can all leave unwanted sounds in the source.
  • This is a good time to get familiar with a denoiser in your plugin collection. Be sure you're not overdoing it on the clean up. Check for unwanted artifacts in the sound or extreme loss of the important frequencies in the sound.

3: Exploring your gear

  • Choose any plug-in and use it in a non-traditional way.
  • Take some time to get familiar with new gear you feel might fit your tool box. There are a number of trials for plugins you can download.  Ask friends or colleagues if you can borrow some equipment and get outside of your comfort zone.

Production Cycle and Planning

When working with a game development team, here are some tools you may find necessary for communication, file sharing, and planning.

Table 1.1

Team Voice and Video Chat: Zoom, Google Hangouts, Skype, Viber, WebEx
Team Project Management and Collaboration: Slack, Discord, Trello, Teamwork, Confluence
Team File and Document Sharing: Google Docs, Dropbox Paper, Evernote/OneNote, SVN / GitHub, Hightail

Making use of Downtime

Here we will lay out some tips for making the most out of downtime.

  • Read! We recommend reading as much as you can. It doesn’t need to be game audio related. Finance, management, and other business essentials are all great to read and can help you regardless of whether you are in-house or freelance. Be well rounded in your reading and include materials like self-help books, industry reports, and news to improve your confidence and to better understand trends across industries. Apply what you learn from various materials to your niche and business model. Check out the Further Reading section of this site for specific material we recommend.
  • Practice! Just as you would practice with an instrument you can practice with sound design and field recording, composing music and implementation. Give yourself some tasks and be sure to complete them. Get to know your virtual instrument, sound effect and plugin libraries. Pick one plugin and see how it affects a few different sound sources. Write a few melodies or go all out and practice composing adaptive scores.
  • Network! Get to conferences and meetups and participate in online communities like Facebook, Discord, or Slack game audio groups. Apply to job listings, or even just browse through them to make a note of the skills you will need to acquire to land those jobs.
  • Work on your demo reel! It’s a good time to add new work or improve your portfolio.
  • Research! And by research we mean play games. Do some critical listening to soundscapes and soundtracks but also check out game mechanics and make a note of the function of the audio in games. Be sure to also enjoy playing the games.
  • Have a healthy work-life balance. Do something relaxing: meditate, exercise, spend time with family and friends.

Chapter 2


The Basics of Nonlinear Sound Design

Here is an exercise to get your ears tuned in to the sound all around you.

Exercise

The next time you find yourself sitting in a room or maybe a park with a few minutes to spare, close your eyes and take in all the sound you hear around you. Make a list of those sounds. You will be surprised by all the little details you will hear.

Essential Skills and Tools for Sound Designers

A Well-Developed Ear

Ear training will boost your confidence and allow you to really trust your ears. This will in turn strengthen your ability to shape sound. Here we present more information and resources for strengthening your ear.

Humans can hear sounds at frequencies from about 20 Hz to 20,000 Hz. While this seems like a wide range, our hearing is most sensitive from 1,000 Hz to 5,000 Hz. While this still sounds impressive, dolphins can hear frequencies up to 100,000 Hz and elephants can hear sounds down around 15 Hz. None of this makes for a good sound designer’s ear, though.

Here are some ear training apps and websites for you to explore:

Critical listening suggestions

Let’s wrap up Chapter 2 with some assignments and critical listening suggestions.

Assignment

Choose a video game and play a few minutes of it without sound. It could be your favorite game or any game you have been thinking about playing. While certain genres with higher intensities can heighten this test, it isn’t necessary to choose something like horror to experience how the lack of sound affects the experience.

Now play a few minutes with sound and make note of how the audio helps define your emotions through the experience. Listen to the changes in the music and sound design as the game state changes. There will be some extra details you hear when you aren’t focused just on the gameplay. Listen for the footsteps and any associated Foley movement. How does the ambience change as the scene changes? If the player dives underwater, does the ambience change? Perhaps there is a low pass filter applied to the sound to emulate the submerged state – how does that affect your experience? How does sound attenuate as you move the player character closer to and away from an object that is emitting sound?

Make an assessment of how UI, Foley, Ambiences and Hard SFX affect your immersion into the game.

Assignment:

Choose some scenes from your favorite games, films, or TV shows and set out to design one sound a day for a month. Set a timer for 60 minutes per day and work to design a sound from those scenes from scratch. Try to vary the exercise by choosing sounds ranging from realistic to hyper-realistic. Fifteen days into the month, try to decrease your time from 60 minutes to 45 and then 30. At the end of the month, compare your recent designs to the sounds you crafted at the beginning of the month. You should find that both the quality and the task time have greatly improved.

Critical Listening

(Check YouTube or play the game if you have access. Listen to the various sounds to understand their function in game.)

Chapter 3


Revisions and Reiteration

Digesting Feedback

As a sound designer you may already be familiar with the term “fresh ears” to describe the process of walking away from a project for a set amount of time to come back and have a new perspective. When mixing and tweaking sound for hours at a time, your judgment becomes skewed and it’s difficult to tell if you are making it worse or better. This process demonstrates just how subjective analysis of audio can be.

Listening to content similar to what you are creating can help balance your subjective view of your own work; reference material points you toward a sound that has already proven it can hold listeners’ attention.

Interpreting feedback can be much easier if it comes from someone who is very familiar with audio and how to implement changes. A seasoned composer or sound designer might be able to provide direct instructions on how to implement a change, and possibly even offer an explanation of why it should be changed. When feedback comes from a non-audio team member, it can be articulated in a way that is harder to understand, and it can be more difficult to decipher how to implement the changes.

The non-audio team members often have different levels of audio prowess that may improve or further confuse feedback. I’m sure you know of a lot of people who are audiophiles or musicians but chose to work in art, programming or on the business side of things. On the flip side, you will encounter non-audio team members who say they have no idea about audio but know what they want to hear.

When faced with a large list of tweaks, it’s important to focus on the most important tasks first. These can be prioritized based on the asset’s place in the development schedule or how important the sound is in the game.

If feedback seems unclear you can ask for references or chat directly with the client to avoid assuming and getting it wrong.

Descriptors such as boxy, boomy, brittle, or harsh can often be a bit vague. If your sound is bass heavy, you can link that with boomy, and if your sound has a lot of high frequency content, it could be prompting the brittle or harsh comments. In Chapter 3: Effects Processing as a Sound Design Tool, we discussed using a narrow bell curve on a dynamic EQ with an interactive display to boost frequencies and listen for unwanted content. A spectral analyzer can also be used to look at where your frequencies are grouped and whether you have too much in the lows or highs.
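
If you prefer numbers to meters, here is a small sketch (Python, assuming numpy and soundfile) that sums FFT energy in broad bands so you can tie a descriptor like “boomy” or “boxy” to a frequency region; the file name is a placeholder for the asset in question.

    # Rough spectral check: how much of a sound's energy sits in each broad band.
    import numpy as np
    import soundfile as sf

    audio, sr = sf.read("asset.wav")
    if audio.ndim > 1:
        audio = audio.mean(axis=1)

    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)

    bands = {"sub (20-80 Hz)": (20, 80),
             "low (80-250 Hz)": (80, 250),
             "low-mid (250-500 Hz)": (250, 500),
             "mid (500 Hz-2 kHz)": (500, 2000),
             "high (2-20 kHz)": (2000, 20000)}

    total = spectrum.sum() + 1e-12
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        print(f"{name}: {100 * spectrum[mask].sum() / total:.1f}% of energy")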

Muddy is another term that can be tricky, as it could mean the reverb is too wet and washing out the sound, or that the sound has too much midrange content or not enough highs. Describing a sound as boxy or hollow can mean there is too much content in the mids (the middle of the frequency spectrum). In general you can listen and figure out the areas the feedback is targeting, but it is also helpful to speak to the person providing feedback in layman's terms to sort it out.

Not all feedback will be about the quality of the sound; often the sound itself isn’t what the team was envisioning. A sound may work great in the DAW against video, but once it is in game and triggered randomly by the player’s actions, it might not be a good fit.

Some clients might try to mimic the sound they are looking for using their voice. This might sound silly but it really can be helpful. As an example, while working on UI sounds for a game we were asked to revise a sound so it was like a ‘whale call’. Even though we knew what that sounds like, we had no idea what this meant for implementing it into a quick UI sound. So we suggested the client record his voice making the sound he had in his mind. Humans can mimic a lot of different sounds fairly easily. In a matter of minutes the client recorded his voice on his phone and sent it over, and it revealed exactly what he was looking for. - Gina

The back and forth of delivering a sound and waiting for feedback can be tiring. Software like Source-Live allows for streaming audio and video from a DAW so clients can review via mobile or web-enabled devices. It depends on how much control your client wants, or how much you want to offer them.

A difficult fork in the road when dealing with feedback can stem from the audio designer being very satisfied with the sound as delivered but the client wants a totally new direction. Since the audio designer feels the sound they originally created was perfect for the event in game, they might try to massage the existing sound into the new direction of the feedback. This often makes for more back and forth with the client and rarely gets the sound off in the proper direction. Sometimes you have to start fresh when a completely new direction is requested.

The tone in which feedback is delivered might be the most difficult thing to deal with. If the person delivering the critique doesn’t have a lot of experience in leading teams and providing feedback, it may be delivered in a rough, more direct manner. When someone has management experience they might have a proven way to soften the blow a bit before delivering the details. Feedback delivered in text can often be interpreted in the wrong way due to the lack of facial expression, and sometimes you may just catch someone on a bad day.

As with anything, the more feedback you receive over time, the better you will become at digesting and using it to bring your work closer to the team’s expectations.

Sourcing Sounds Through Synthesis

Resources for Synthesis tutorials

Types of Synthesis

The types of synthesis are important to understand before you start designing sounds. Each type has a particular flavor that it is capable of producing. Having a working knowledge of the main types will allow you to make appropriate choices when designing your palette. Keep in mind that many synthesizers utilize multiple forms of synthesis.

Subtractive Synthesis

Subtractive synthesis is a method of synthesis that attenuates harmonics of waveforms using filters. This method of synthesis has been around for quite a long time, gaining popularity in the 1960s. Hardware synths like the Minimoog and software synths like Steinberg’s Retrologue or OBDX take advantage of subtractive synthesis.

The usual application of this method of synthesis is to generate more harmonics than necessary, and then use one or more filters to subtract particular frequencies. The typical oscillator waveforms available for subtractive synthesis are sine, saw, triangle, and square.
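
Here is a minimal sketch of the idea (Python, assuming numpy, scipy, and soundfile): generate a harmonically rich sawtooth, then “subtract” the upper harmonics with a low-pass filter. The values are arbitrary starting points, not settings from any particular synth.

    # Subtractive synthesis in miniature: rich oscillator -> low-pass filter.
    import numpy as np
    import soundfile as sf
    from scipy.signal import butter, sosfilt

    sr = 48000
    freq = 110.0                                         # A2
    t = np.arange(int(sr * 2.0)) / sr

    saw = 2.0 * (t * freq - np.floor(0.5 + t * freq))    # naive sawtooth, -1..1

    sos = butter(4, 800.0, btype="low", fs=sr, output="sos")   # 800 Hz cutoff
    filtered = sosfilt(sos, saw)

    sf.write("saw_subtractive.wav", 0.5 * filtered, sr)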

Additive Synthesis

While subtractive synthesis works to carve away harmonic structure, additive synthesis combines multiple waveforms to achieve the desired sound. Additive synths typically have only a sine wave oscillator. Multiple sine waves are then tuned to different frequencies to produce a new sound.
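
A quick sketch of additive synthesis (Python, assuming numpy and soundfile): sum a handful of sine-wave partials at chosen levels. The odd-harmonic recipe below is only an example.

    # Additive synthesis in miniature: a tone built from summed sine partials.
    import numpy as np
    import soundfile as sf

    sr = 48000
    t = np.arange(int(sr * 2.0)) / sr
    fundamental = 220.0

    # (harmonic number, relative level)
    partials = [(1, 1.0), (3, 1 / 3), (5, 1 / 5), (7, 1 / 7), (9, 1 / 9)]

    tone = np.zeros_like(t)
    for harmonic, level in partials:
        tone += level * np.sin(2 * np.pi * fundamental * harmonic * t)

    tone /= np.max(np.abs(tone))
    sf.write("additive_tone.wav", 0.5 * tone, sr)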

The Kawai K5000 and the Crumar GDS were hardware synths that offered additive synthesis, while software synths like Lemur and SPEAR (Sinusoidal Partial Editing And Resynthesis) offer the method.

FM (Frequency Modulation)

FM synthesis is a specific branch of additive synthesis. While the method isn’t the best for emulating real-world instruments, it is great for synthesizing bell-like instruments and percussion. It does this by taking advantage of a modular architecture: oscillators (operators) can be routed to any other oscillator, or back on themselves, allowing for extremely complex harmonic interaction. Each interaction modifies the frequency of the signal to produce new harmonics. The resulting sounds are usually either beautiful, sonorous bell-like instruments or aggressive, inharmonic Skrillex-style sounds.
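
Here is a two-operator FM sketch (Python, assuming numpy and soundfile): one sine oscillator modulates the phase of another, and a non-integer frequency ratio gives the inharmonic, bell-like character described above. The numbers are just a starting point to tweak.

    # Two-operator FM: modulator -> carrier, with a decaying modulation index.
    import numpy as np
    import soundfile as sf

    sr = 48000
    t = np.arange(int(sr * 3.0)) / sr

    carrier_freq = 440.0
    ratio = 3.5                                  # non-integer ratio = bell-like
    mod_freq = carrier_freq * ratio
    mod_index = 4.0 * np.exp(-2.0 * t)           # modulation depth decays over time

    modulator = np.sin(2 * np.pi * mod_freq * t)
    bell = np.sin(2 * np.pi * carrier_freq * t + mod_index * modulator)
    bell *= np.exp(-1.5 * t)                     # simple amplitude decay

    sf.write("fm_bell.wav", 0.5 * bell, sr)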

The Yamaha DX7 is a well-known hardware FM synth. Native Instruments’ FM 8 is a software synthesizer that also utilizes FM synthesis.

Wavetable & Vector Synthesis

Wavetable synthesis is essentially a means of transitioning between and combining waveshapes. This method employs a table-lookup system that combines and evolves waveshapes over time to generate sound. For example, Native Instruments Massive allows the user to select two waveshapes per oscillator. The user can then rotate a knob (moving through the table) to switch between the two waveshapes or mix between them. Wavetable synthesis comes in handy for creating pads and other digital sounds, as it provides a wider palette of waveforms as a starting point.
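
A rough sketch of the same idea in code (Python, assuming numpy and soundfile): two single-cycle waveshapes sit in a table, and a “position” value morphs between them while a note plays.

    # Wavetable-style morph between a sine and a saw over the length of a note.
    import numpy as np
    import soundfile as sf

    sr = 48000
    table_size = 2048
    cycle = np.arange(table_size) / table_size

    wave_a = np.sin(2 * np.pi * cycle)                        # sine
    wave_b = 2.0 * (cycle - np.floor(0.5 + cycle))            # saw

    n = int(sr * 3.0)
    freq = 110.0
    phase = np.cumsum(np.full(n, freq / sr)) % 1.0            # oscillator phase 0..1
    position = np.linspace(0.0, 1.0, n)                       # the "table position" knob

    idx = phase * table_size
    table_idx = np.arange(table_size)
    sample_a = np.interp(idx, table_idx, wave_a, period=table_size)
    sample_b = np.interp(idx, table_idx, wave_b, period=table_size)

    out = (1.0 - position) * sample_a + position * sample_b
    sf.write("wavetable_morph.wav", 0.5 * out, sr)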

Vector synthesis is based on a similar principle but with a slightly different operation. In this case a joystick is used to mix between four oscillators to generate a dynamically evolving sound. The Korg Wavestation is a great example of a vector synth. Its power came from multi-sampled waveforms, extensive filters, and multi-stage envelopes.

Physical Modeling

Physical modeling synthesis is a mathematically based method of emulating physical sounds. In other words, it uses algorithms to define the harmonic and acoustic characteristics of a sound. This method takes into consideration the makeup of the instrument being modeled to create realistic sounding instruments.
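
If you want to hear the principle without a dedicated plugin, the classic Karplus-Strong plucked-string algorithm is about the simplest physical model there is: a burst of noise circulates through a short delay line and a gentle averaging filter, losing energy the way a real string does. A minimal sketch (Python, assuming numpy and soundfile), offered only as a well-known illustration of the idea:

    # Karplus-Strong plucked string: delay line + averaging filter + damping.
    import numpy as np
    import soundfile as sf

    sr = 48000
    freq = 196.0                                  # G3
    delay_len = int(sr / freq)

    rng = np.random.default_rng(0)
    buffer = rng.uniform(-1.0, 1.0, delay_len)    # the "pluck": a noise burst

    out = np.zeros(int(sr * 2.0))
    for i in range(len(out)):
        out[i] = buffer[i % delay_len]
        nxt = buffer[(i + 1) % delay_len]
        buffer[i % delay_len] = 0.996 * 0.5 * (out[i] + nxt)   # average + damp

    sf.write("plucked_string.wav", 0.5 * out, sr)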

Sample-Based Synthesis

Sample-Based Synthesis does not use oscillators, but rather actual recorded samples to generate a sound. The samples are then pitch-shifted across the keyboard and processed in much the same way that any other synthesizer architecture would process pure waveforms. This method employs instant recall of samples when triggered by a keyboard or MIDI note in a DAW.

Granular Synthesis

This method is based on a similar principle as sample-based synthesis as it draws from a set of samples. Instead of playing back full samples however, granular synthesis plays audio back in tiny grains or snippets of sound. These grains can then be multiplied, delayed, or processed in just about any way. In this way, granular synthesis can create very interesting timbres and highly complex waveforms.
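
Here is a bare-bones granular sketch (Python, assuming numpy and soundfile): tiny windowed grains are pulled from random spots in a sample and scattered, overlapping, across a few seconds of output. The source file name is a placeholder for any recording longer than a grain.

    # Granular texture: scatter short, windowed grains of a sample in time.
    import numpy as np
    import soundfile as sf

    audio, sr = sf.read("source.wav")
    if audio.ndim > 1:
        audio = audio.mean(axis=1)

    grain_len = int(0.05 * sr)                    # 50 ms grains
    window = np.hanning(grain_len)
    out_len = sr * 5                              # 5 seconds of output
    out = np.zeros(out_len + grain_len)

    rng = np.random.default_rng(1)
    for _ in range(400):                          # number of grains to scatter
        src = rng.integers(0, len(audio) - grain_len)
        dst = rng.integers(0, out_len)
        out[dst:dst + grain_len] += window * audio[src:src + grain_len]

    out /= np.max(np.abs(out)) + 1e-12
    sf.write("granular_texture.wav", 0.5 * out, sr)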

Hybrids

Many synths (software synthesizers in particular) offer a hybrid of synthesis types. For example, Native Instruments Massive is considered a wavetable synth but it employs FM phase modulation and subtractive filters. This makes the synthesizer a powerful tool in its design.

Making a Sound

The best way to start working with a synthesizer is to understand its capabilities. Once you have an idea of what you need for your sonic palette, look at how the synth handles modulation, filters, and effects processing. Try turning off the effects routing and modulation to start with a single oscillator (or in the case of sample-based synths, sound generator module). Save this as a default patch for use in the future.

Selecting and editing a preset is all well and good when you are in a rush and need to make a sound quickly, but you aren’t really learning about synthesis if you start with a polished sound and just change a few parameters.

Some synths offer the option of choosing a default sound, which is usually a simple sine or saw waveform. If your synth doesn’t have this feature you can strip away everything until you are left with a simple waveform. From there you can play with combinations of oscillators, filter types and modulating parameters to shape the sound. Critical listening, analyzing and experimenting are important parts of the process. Learn what the raw waveforms sound like and have an understanding of which to use to generate a particular sound.

Functions in Synthesis

Envelopes

Depending on the synth you chose to work with, the envelopes can be used to modulate many parameters. You are probably familiar with envelopes by the data points they employ, ADSR (Attack, Decay, Sustain, Release). Envelope control can vary from synth to synth. Some synths offer multi-stage envelopes for complex modulation capabilities. A synth that contains more than one envelope will typically route one of the envelopes to modulate the amplitude by default. Interesting sounds can be generated when you start to route envelopes to shape filters, pitch, and noise. The envelope modulation is triggered by user input and generates data based on the duration of the event. For example, a key press will start the envelope process and a key release will finish it. Some synths allow for more complex data points, which offer the user the ability to loop the sustain point. This is where the envelope modulation starts to blur with LFO modulation.
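
To make the stages concrete, here is a small sketch (Python, assuming numpy and soundfile) that builds an ADSR curve and routes it to amplitude; the segment times are arbitrary, and the same curve could just as easily modulate a filter cutoff or pitch.

    # Build an ADSR envelope and apply it to a sine oscillator's amplitude.
    import numpy as np
    import soundfile as sf

    sr = 48000

    def adsr(attack, decay, sustain_level, sustain_time, release, sr):
        a = np.linspace(0.0, 1.0, int(attack * sr), endpoint=False)
        d = np.linspace(1.0, sustain_level, int(decay * sr), endpoint=False)
        s = np.full(int(sustain_time * sr), sustain_level)
        r = np.linspace(sustain_level, 0.0, int(release * sr))
        return np.concatenate([a, d, s, r])

    env = adsr(attack=0.01, decay=0.2, sustain_level=0.6,
               sustain_time=0.5, release=0.8, sr=sr)
    t = np.arange(len(env)) / sr
    tone = np.sin(2 * np.pi * 220.0 * t)

    sf.write("adsr_tone.wav", 0.5 * env * tone, sr)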

LFO

LFO (Low Frequency Oscillator) is another modulation source, but unlike the traditional envelope, it is meant to modulate at a specific rate or frequency. This is great for adding movement to sounds and can generate great sci-fi scanning or pulsing source layers. The LFO modulation can be automated to change over time for interesting effects.
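
A quick sketch of the idea (Python, assuming numpy and soundfile): a 2 Hz sine LFO modulating amplitude gives a tremolo; routing the same slow oscillator to pitch or a filter cutoff works the same way.

    # LFO as a modulation source: slow sine modulating the amplitude of a tone.
    import numpy as np
    import soundfile as sf

    sr = 48000
    t = np.arange(int(sr * 4.0)) / sr

    tone = np.sin(2 * np.pi * 330.0 * t)
    lfo_rate = 2.0                                # cycles per second
    depth = 0.5
    lfo = 1.0 - depth * (0.5 + 0.5 * np.sin(2 * np.pi * lfo_rate * t))

    sf.write("lfo_tremolo.wav", 0.5 * lfo * tone, sr)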

Filters

Filters are an important part of the process when shaping your sound. This is the function that can smooth your sound, make it squelchy or gritty. Various synths offer different filter types but in general a filter will allow certain frequencies to pass through. Common filter types are low pass, high pass, band pass, notch and comb. They function mainly as their names suggest. A low pass filter will allow the low frequencies to pass through while a high pass will allow the high frequencies to pass. A band pass allows a specific and narrow band of frequencies to pass while a notch filters out a narrow band. The comb filter blends the original signal with a slightly delayed version of itself to create a series of notches through phase cancellation. By varying the amount of the delay time you can create interesting effects.

Filters have a few parameters which help shape the sound, so it’s important to understand them. The cutoff is the point where the filtering takes effect. The resonance, or “Q,” boosts the volume of the frequencies at the cutoff point. Adjusting the cutoff level along with a high resonance produces an interesting phaser-like effect, which could be a nice source layer for a sci-fi weapon. Modulating the cutoff and resonance in real time can provide a filter sweep effect.
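
Here is a rough sketch of a resonant filter sweep (Python, assuming numpy, scipy, and soundfile): a low-pass biquad with an adjustable Q, built from the widely used Audio EQ Cookbook formulas, is applied block by block while the cutoff rises over a noisy source. This is an offline approximation of turning the cutoff knob, not code from any particular synth.

    # Resonant low-pass sweep over white noise, processed in short blocks.
    import numpy as np
    import soundfile as sf
    from scipy.signal import lfilter

    sr = 48000
    rng = np.random.default_rng(2)
    noise = rng.uniform(-1.0, 1.0, int(sr * 4.0))        # rich source to filter

    def lowpass_coeffs(cutoff, q, sr):
        w0 = 2 * np.pi * cutoff / sr
        alpha = np.sin(w0) / (2 * q)
        b = np.array([(1 - np.cos(w0)) / 2, 1 - np.cos(w0), (1 - np.cos(w0)) / 2])
        a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
        return b / a[0], a / a[0]

    block = 512
    out = np.zeros_like(noise)
    zi = np.zeros(2)                                     # filter state carried between blocks
    for start in range(0, len(noise), block):
        pos = start / len(noise)
        cutoff = 200.0 * (20.0 ** pos)                   # sweep ~200 Hz up to ~4 kHz
        b, a = lowpass_coeffs(cutoff, q=6.0, sr=sr)      # high Q = audible resonance
        out[start:start + block], zi = lfilter(b, a, noise[start:start + block], zi=zi)

    sf.write("filter_sweep.wav", 0.4 * out / np.max(np.abs(out)), sr)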

Effects Processing

The effects processing options vary from synth to synth but there are always some basic staples available. Reverb is great for including spatial information in the sound and giving it more presence or pushing it back in the mix. Delay can thicken up the sound and provide a chorus, phase or flange effect.

When creating a sonic soundscape for your game it might be best to skip the synth’s onboard effects processing and use a dedicated plugin across your sound design layers to give the sonic palette a unique and consistent identity.

Exercise:

1: At this point you should have a solid basic knowledge to get started working with synths. This is a good time to start with a default patch on a subtractive synth and get familiar with the raw waveforms. Create a list of sounds you might be able to create using each of the basic waveform types.

The video reference linked below demonstrates creating UI sound effects using synthesizers. This may give you some additional ideas to play with.

www.youtube.com/watch?v=bG1wAAbpGL0

2: Next, experiment with modulation and make note of how each element shapes the sound.

3: Finally, render the synth sound you created and import it onto a new track. Blend mechanical sound sources with it to combine the realistic elements with your synthetic sound so it might fit a specific visual more appropriately.

Foley and Sound Design Props

Fruits, Vegetables, and Foods

Head over to the Sound Lab (companion site) for an extensive list of sound design and Foley prop ideas.

Apples and potato chips are great props for creating a chomp sound. A close microphone will be necessary to capture the little details of the bite. If the visual chomp reference is a bit more on the comical side, the person doing the biting can add in a “haa-umph” (onomatopoeia) sound to exaggerate it. Watermelon has a high water content, just like humans. This extra water adds fantastic detail beyond simple impacts. For this reason, hitting a watermelon with a hammer is a great way to simulate a zombie attack or gore sounds in game. It will provide a solid impact when the hammer hits the melon, and the water splashing out will add depth and a tangible “ew” factor afterwards.

Carrots and celery sticks make great snapping or bone breaking sounds. Freezing the vegetables ahead of time can make the veggies pack even more crunch. Chop them up with a knife or break them with your hands. To create bone crunching or snapping, try twisting a celery stalk with your hands. Ripping apart a head of iceberg lettuce with your hands can also give you a nice crunch. If you need to up the gore factor you can wrap it in a water soaked towel or cloth. Freeze the head of iceberg lettuce and peel it apart. If you position the microphone close to the lettuce you can generate some interesting ice crackling source.

Often good props need to be crafted. With a little searching you can find dog treats that are literally chicken feet! Attach them to a pair of gardening gloves with some Gorilla Glue or duct tape and put the glove on one hand. This will allow you to mimic walking sounds for creatures’ movement to be layered over footsteps, or for rat scurrying or scratching sounds.

Uncooked chicken cutlets can make fantastic splat sounds when thrown against a wall. Obviously this can also be very messy or even dangerous. Who wants to deal with salmonella on the walls? An alternative to this is to put the cutlet into a thin baggie with some water to keep the chicken contained while you work with it. The sound of the baggie might get picked up by the microphone, however, so consider molding the baggie tightly around the shape of the cutlet and taping it. Alternatively, you can keep the chicken out of the bag and instead cover the walls or ground. This might offer a better splat sound in the end.

You don’t need to transport yourself into a fairytale to recreate the thick, bubbling sounds of a witch’s cauldron. You don’t need to burn yourself boiling hot water either. Instead, make a batch of thick oatmeal, put a straw in it and blow through it. Something you always loved doing as a kid (but were told not to) can now be done for the sake of the project! Be careful not to get the goopy oatmeal on the microphone. This happens to be a lesson that we both learned from experience... Jell-O is another squishy prop for fantasy or sci-fi genres where objects or characters “morph.” Simply make the Jell-O, insert your hands, and squish away with a microphone placed close to the Jell-O!

Pulling apart a rotisserie chicken (that isn’t overcooked) can offer some gooey and squishy flesh sounds. Shaking pâté (cat or dog food) out of a can will give you gooey splat sounds with a little suction. Dog food cans tend to be taller and can offer more pre-transient to the suction due to the size. Over-boiled pasta is another effective source for gooey sounds. If a single item isn’t giving you the goo you need, try combining some watermelon, tomato, and wet kitchen towels in a bowl and squish them with your hands.

Bacon frying is a classic way to recreate rain source. For example, recording heavy rainfall can result in too much white noise to be usable. Instead, layer in recordings of bacon sizzling with some light rain source to better approximate the sound. Frying bacon is so good at imitating the sound of rain that this method has in fact been used to great effect in Film audio for many years.

An action scene with an exploding volcano might have visuals of ash falling to the ground. This requires sound to bring the ash to the viewer’s attention, hopefully striking an emotional chord to match the imagery. By sprinkling shaved coconut onto a bed of lettuce you can quickly and effectively capture a sound that does exactly that. Keep in mind that you will need to position the microphone very close to the lettuce to record the detail at a reasonable volume.

Coconut halves can also be used for Foley in addition to the shavings. Clap the halves in rhythm on the appropriate terrain to create the sound of a horse gallop. This is an age-old Foley prop. In Monty Python and the Holy Grail’s “coconut sketch,” they poke a bit of fun at this technique.

When you need snow footsteps and you don’t have quick access to snow, try stepping on a bag of cornstarch. If the snow graphics present more of a frozen look try stepping on a bag of sugar to layer in more crunch.

Household Items

Older homes with wood floors often emit unique creaks and groans as you walk across them. Wooden creaks can be great source layers for a host of sound design needs. A pirate ship in a game could use some wooden creaks as it rocks back and forth on the water. These little details add realism to the scene and amplify the immersion. To capture the wood creaks without footsteps, find a spot that squeaks and straddle the area. Shift your weight back and forth to produce the creaks. Be sure your clothes aren’t adding noise into the recording!

Cutlery can generate lots of metal clink sounds that can be used for a variety of cool sound effects. A cooking game might benefit from cutlery clinks in its UI sound design, for example. This will work perfectly on a conceptual level, and provide some depth and interest to an otherwise bland UI experience. Alternatively, grating a fork over a cheese grater can produce some interesting metal scrape source that can be used in anything from cartoony animations to a horror soundscape.

Forced air devices like a hair dryer or a can of whipped cream can generate great source for vehicle or ship engines. Treadmills also make great engine source layers. Try recording the treadmill with a shotgun microphone and a contact microphone and blend the two layers together.

Try experimenting with various fabrics to produce source for winged animals and fantasy creatures. Heavier fabrics will generate larger sounding wing flaps, and they work great for dragon-like creatures. Lighter fabrics will work well for smaller, less threatening creatures.

Special abilities and fantasy spells with an ice theme might need impacts, cracking, or explosive debris sounds. In these cases, ice hits and scrapes can be recorded as source. By wearing a boxing glove or quick-wraps and punching a bag of ice you can create useful impact layers. Of course you might need to add a kick drum or some other low frequency sound to the impact to give it some necessary power.

Netting and plastic plants are both great source props for movement through foliage. Recording actual rustling through bushes sometimes sounds a bit too harsh and crispy, but netting and plastic plants offer a more believable sound that is easier on the ears and less distracting in a game scene. This is yet another example of how a literal recreation of a sound is not always the best method for recording source. Be critical about the sounds you are hearing, and always consider their application in a game. This will vastly impact the choices you make on your source material, and greatly benefit the overall sound design experience for the player.

Heavy paper clips, door locks, light switches, latches on briefcases, or even the click of an ink pen make great source layers for UI sounds. A carpeted cat tree can make a great prop for hockey puck impact sounds when hit with a wooden or metal baseball bat. And cellophane, which is used to wrap gift baskets, can be balled up and released to create a light crackling-fire layer.

Footsteps and Armor Source

Head over to the Sound Lab (companion site) for a reference video demonstrating the proper way to achieve that heel-to-toe roll when recording footsteps.

In this video Susan Fitz Simon demonstrates the proper way to achieve the heel-to-toe roll when recording footsteps. www.pbs.org/video/whats-buzz-foley-footsteps-susan-fitz-simon/

Vocal Mimicry

Sounds generated from your mouth can be very useful in the sound design process. The YouTube links below demonstrate recorded mouth sounds, which were used to create the sounds of space for a short animation that was part of an asoundeffect.com contest. The first link demonstrates the audio without any effects processing to show the original source material. The second link includes effects processing on the source to create a soundscape that fits the visuals.

No Effects: www.youtube.com/watch?v=ZJBOyL8taME

Effects: www.youtube.com/watch?v=Pff5CWNvXnE

Exercise:

Take the time to experiment with mouth and voice source and challenge yourself to create the sounds listed below. Don’t worry so much about processing the sounds as you are focusing on creating source right now. Before you get started, choose a character or game to use as a reference so you have a visual guide.

  • Unique fantasy creature growl and idle sound.
  • UI sounds for a mobile time management restaurant themed game. Create sounds for these button taps: general button, back, select, level select, menu popup.

Field Recording

Microphone Placement

In this video, EA/DICE sound designer Ben Minto offers a look into a gun recording session, which compares various microphone types, positions, and preamps.

https://vimeo.com/20869893

Designing Sound Effects

Layering

Exercise: Layering

In the three exercises below, we invite you to experiment with layering. Keep in mind that sound design, like any creative medium, has many variables and ways to get things done. However, keeping these basic ideas in mind will offer you a start in the right direction.

Exercise 1: Sci-Fi Pistol

Let’s explore layering using an imagined Sci-Fi pistol as our visual reference. Opening a synth like Massive, selecting or designing a laser-y blip, and pressing a key on your MIDI controller to record the fire will give you one of the elements you need, but it won’t satisfy the player when they shoot the weapon in game. The feedback that sound provides would be minimal and won’t portray the effectiveness or power behind the weapon. A weapon that sounds small or toy-like won’t offer a player on the battleground the confidence they need to win the match.

What the above example is missing is a low-end punch layer to add a kick and power to the pistol. This plus the laser blip layer will better sculpt the sound to provide the feedback necessary to make the player feel the power behind the weapon. Adding more power to the attack of a sound can be achieved by adding a micro lead-in sound followed by a few milliseconds of silence before the transient of the punch layer. The bit of silence provides a dynamic moment of quiet that makes the weapon fire feel more powerful.

If we further examine the weapon we could find a use for some mechanical noises to add a bit of detail to those layers. Since it’s a Sci-Fi genre there could be a quick lead-in charge up or pulse sound followed by chamber movement and a reload after the fire.

Once you have your layers you will want to stage them so they aren’t all triggering at the same exact time. When you stack all your layers to trigger together you can end up with a wall of sound and risk layers masking one another or sounding out of phase.

Lining up your laser blip so it is slightly offset to the left of (before) the punch layer provides an impulse before the body of the sound. Play around with the layers until the sound feels satisfying to you. Now you have a pistol sound that provides a feeling of power to the player with each shot. Later in this chapter we will discuss the necessary editing and DSP to polish your sound.
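
Staging like this is normally done by nudging regions in the DAW, but here is a sketch of the same move in code (Python, assuming numpy and soundfile): a hypothetical laser blip layer leads, and the low-end punch layer starts a few milliseconds later so the transients don’t stack. The file names are placeholders for your own source, and both files are assumed to share a sample rate.

    # Layer and stage two elements of a sci-fi pistol shot.
    import numpy as np
    import soundfile as sf

    def load_mono(path):
        audio, sr = sf.read(path)
        return (audio.mean(axis=1) if audio.ndim > 1 else audio), sr

    blip, sr = load_mono("laser_blip.wav")
    punch, _ = load_mono("low_punch.wav")      # assumed to be at the same sample rate

    offset = int(0.008 * sr)                   # punch begins 8 ms after the blip
    length = max(len(blip), offset + len(punch))

    mix = np.zeros(length)
    mix[:len(blip)] += 0.8 * blip              # blip leads as the impulse
    mix[offset:offset + len(punch)] += punch   # body and weight follow

    mix /= np.max(np.abs(mix)) + 1e-12
    sf.write("scifi_pistol_layered.wav", 0.9 * mix, sr)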

When working with visuals you want to break them down to determine the type of material the object or character is made of and the actions it performs to help guide you in your source selection. Think of the process as being like putting together IKEA furniture. You need various parts to create the fully realized bookshelf or futon.

Exercise 2: It’s a Trap!

Let’s play off the Sci-Fi pistol example and imagine you are tasked with creating the sound for a bear trap. For a visual source you can think of it as being similar to the bear traps in the game Limbo by Playdead. First you would determine the main material of the trap to be metal, but there are also other smaller parts like springs, clasps and triggers that control the state of the trap. Next you determine the actions of the trap as snapping shut and being set open. You would also want to determine what game objects can interact with the trap, but for now we will focus on the trap itself.

You can find all the source material you will need in sound libraries, record your own or use a mix of both methods.

Starting with the sound of the trap snapping shut you need to use the animation as a guide to where you place your sounds. If your sound library search offered you the sound of a metal trap you can use that as a base layer. Notice how we aren’t just choosing one sound source and calling it complete. You want to start staggering and layering tracks of source to fully realize the action of the trap snapping shut. This includes the sound of the trigger, which sets the trap into the closing state, the springs bending as the metal pieces start to close and the impact of the metal on metal snap.

As you lay out the source material on tracks in your DAW you will want to time stretch and pitch shift the assets so they fit the animation. Scrub the video to precisely match your audio to the moving parts of the trap. Other processing might be necessary to ensure the source sells the idea of the visual. You should consider the weight of the object and how your sound will affect how believable the object is to the player.

Exercise 3: Character Weight

When designing footsteps one might consider recording a shoe stepping on a particular surface and calling it a day. With that workflow you aren’t considering the character or the visuals that support the character. Just as we broke down the trap and Sci-Fi pistol, we can break down our character. What is the perceived size? What is the demeanor of the character? How is the character outfitted? What terrains will the character interact with?

Size will help determine how much weight needs to be added to the footsteps. Demeanor will decide if the character walks with more of a stomp or a lighter step. The armor of the character will offer options for all the little details that will go into the movement. Terrain will also offer the opportunity for more detail.

As an example, let’s imagine a 6 foot 5 inch fantasy character made out of ice. He has a Hulk-like physique and his armor consists of leather and chains. He can walk on a hard, stone-like surface and a softer dirt terrain.

Let’s break down the character, starting with the ice frame. As he steps on hard ground we might hear some ice chipping off. Now this doesn’t mean the character will eventually lose an icy foot as he continues to traverse the game, but it accents the movement and gives the player something to relate to. If you drop a block of ice on stone it is likely to crack, break or chip. For the dirt terrain we can maybe do without the chipping sounds as a little accent. Next, consider the armor as this bulky character steps down with each footfall. To add a bit more detail to the image, let’s say the leather is around his wrists and waist. He has a leather flap that moves against the ice with each step. The chains will also make a bit of movement as he strides side to side. Lastly, we will focus on the actual step. Recording a human footstep won’t sell the player on this character. Audio sweeteners and layered ice impacts will be needed. Kick drum sounds that are thoroughly cleaned up with a denoiser and pitch shifted in parallel can make a nice base for the step. An audio “sweetener” refers to the process of subtly mixing a sound with a pre-existing sound to “sweeten” it. When you record a sound with a microphone you may have noticed the sound doesn’t always convey the power or feeling you want to achieve. This is where layering plays a key role. In the film Backdraft, sound designer Gary Rydstrom used animal sounds to sweeten the sound of fireballs, adding complexity to the effects.

Exercise:

Gather source material and give yourself a specific object to design sound for. It could be from your favorite game, film or virtual experience. This will help you think creatively about source and layers.

Transient Staging

Here is a quick video tutorial on transient staging to wrap up what we learned in the textbook.

Transient Staging

Effects as a sound design tool

Here we will discuss thinking outside the box when applying effects processing techniques for more creative sound design.

Reverb

Wikipedia defines ‘reverberation,’ in psychoacoustics and acoustics, as a persistence of sound after the sound is produced. Reverb as an effect is used to simulate spatial information within the sound. We hear reverb daily when sound waves interact with the surfaces around us. Our ears are used to translating this information into detail about the size, shape, and surface materials around us.

As an effects processing tool we will explore some uses that make reverb effects useful in sound design. There are a wide variety of hardware reverb effects processors and software plugins to choose from. Here we will focus on software-based plugins for their ease of use and accessible pricing.

Reverb can be broken down by method into algorithmic and convolution reverbs. Algorithmic reverbs use mathematical algorithms (as the name implies) to simulate spaces. They use significantly less processing power than their convolution counterparts, but they also may not sound as natural. Convolution reverbs use impulse responses (IRs), recorded to capture a particular space. This is a complex process, and it takes up far more processing power than algorithmic reverb.

Convolution reverbs like Altiverb, Logic’s native Space Designer, and East West QL Spaces employ impulse responses for some very natural sounding spaces. Some convolution reverbs allow the user to import user recorded impulse responses, which will allow you to emulate your own spaces. This is where we begin to see opportunity for creative use of reverb. Instead of creating an IR of a concert hall or stage, you can record the IR of a glass vase or a plastic bucket. This will yield plenty of distinctive spatial information to use with your source.
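
Under the hood, these plugins are convolving your signal with the IR. If you want to try it outside a plugin, here is a minimal sketch (Python, assuming numpy, scipy, and soundfile); the file names are placeholders, and the IR is assumed to share the dry file’s sample rate.

    # Apply an impulse response to a dry sound and blend wet against dry.
    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    dry, sr = sf.read("dry_sound.wav")
    ir, _ = sf.read("vase_ir.wav")
    if dry.ndim > 1:
        dry = dry.mean(axis=1)
    if ir.ndim > 1:
        ir = ir.mean(axis=1)

    wet = fftconvolve(dry, ir)
    wet /= np.max(np.abs(wet)) + 1e-12

    wet_mix = 0.5                                  # wet/dry balance
    out = np.zeros(len(wet))
    out[:len(dry)] += (1.0 - wet_mix) * dry
    out += wet_mix * wet

    sf.write("convolved.wav", 0.8 * out / np.max(np.abs(out)), sr)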

Impulse responses are just recorded WAV files that can be imported into convolution reverbs. To record an IR you need an interesting space, a sound source to stimulate the space, and a field recorder or microphone to capture it. The preferred method of stimulating the space is playing back a sine wave sweep. A sine sweep is a test tone, similar to white or pink noise, but the sine sweep produces frequencies with much higher energy. This method provides the best signal-to-noise ratio. You can generate your own or you can download them from source sites. Once the sine sweeps are recorded, you can convert them to an IR. Altiverb and Logic’s Space Designer both offer a deconvolution utility that makes the conversion quick and easy. You will also need to trim the file and create a fade out. Another method, which is a bit easier and quicker to produce, is creating a loud, broadband burst of noise, like clapping your hands hard or popping a balloon, to stimulate the space. You can then use this recording to convert to an IR.
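
If you would rather generate a sweep than download one, here is a small sketch (Python, assuming numpy, scipy, and soundfile) that writes a 20 Hz-20 kHz logarithmic sweep as a 24-bit WAV, with short fades to avoid clicks on playback.

    # Generate a logarithmic sine sweep for exciting a space.
    import numpy as np
    import soundfile as sf
    from scipy.signal import chirp

    sr = 96000
    seconds = 10.0
    t = np.arange(int(sr * seconds)) / sr

    sweep = chirp(t, f0=20.0, t1=seconds, f1=20000.0, method="logarithmic")

    fade = int(0.05 * sr)                          # 50 ms fade in/out
    sweep[:fade] *= np.linspace(0.0, 1.0, fade)
    sweep[-fade:] *= np.linspace(1.0, 0.0, fade)

    sf.write("sine_sweep_20_20k.wav", 0.8 * sweep, sr, subtype="PCM_24")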

Resource

www.audiocheck.net/testtones_sinesweep20-20k.php

I have found some interesting spaces while out traveling without any gear other than my mobile phone with a recording app on it, and I have used the clapping method to record impulse responses. Sometimes it is better to get the sound than to not have it at all. - Gina

When you have the ability, it’s best to record sine sweeps at 24 bit and at the highest sample rate possible. Avoid having too many pieces of gear in the signal chain as the quality of the recording will only be as good as the process it followed. Higher sample rates will allow for pitching or time stretching of the impulse response in post for unique effects. Adding effects like delay or distortion to the sine sweep prior to importing into the plugin can also yield some interesting results, so have some fun and experiment with your impulse responses!

Once you’ve created or chosen an IR, you may want to use more reverb on a sound to push it back into the mix or to give it a ghostly feeling. However, as with most effects, be careful how much you use. Too much reverb can ruin the sound and drown out the mix. Avoid selecting a preset and sticking with it. Try to play with the parameters a bit and explore before you settle on a setting.

Reverb is great for depth in a mix as it can be used to set sounds back in the mix. When sound travels through the air, it loses energy. High frequencies are attenuated more quickly than lower frequencies so generally the lower frequencies travel farther than the highs. When we hear a sound we are familiar with, but with reduced high frequencies, (and less detail) we perceive it as farther away.

To achieve this effect you can start with a high shelf filter attenuating by -6 dB, and slowly sweep it down from 20 kHz until you hear a subtle drop in high frequencies.

Next, add in some reverb by choosing a preset and then adjusting the wet/dry mix until you find the right balance of direct sound vs. reverb sound. A mix that is drier will sound closer, while a wetter mix sounds farther away. To further polish the illusion of distance, play with the pre-delay setting in the reverb plugin. Keep in mind that a longer setting sounds closer, and a shorter pre-delay sounds farther away. Think of pre-delay as how far the sound source is from the farthest solid object directly across from it. A short pre-delay setting will make the sound be perceived as if the source is very close to the far wall.
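
For those who want to hear the effect outside the DAW, here is a rough sketch (Python, assuming numpy, scipy, and soundfile) of pushing a sound back: the highs are dulled and the wet level of a convolution reverb is raised. The text above uses a high shelf; this sketch substitutes a gentle low-pass for simplicity, and the file names are placeholders.

    # "Push it back": reduce highs, then favor the wet (reverberant) signal.
    import numpy as np
    import soundfile as sf
    from scipy.signal import butter, sosfilt, fftconvolve

    dry, sr = sf.read("footstep.wav")
    ir, _ = sf.read("room_ir.wav")
    if dry.ndim > 1:
        dry = dry.mean(axis=1)
    if ir.ndim > 1:
        ir = ir.mean(axis=1)

    sos = butter(2, 4000.0, btype="low", fs=sr, output="sos")   # dull the highs
    distant_dry = sosfilt(sos, dry)

    wet = fftconvolve(distant_dry, ir)
    wet /= np.max(np.abs(wet)) + 1e-12

    wet_mix = 0.6                                  # wetter reads as farther away
    out = np.zeros(len(wet))
    out[:len(distant_dry)] += (1.0 - wet_mix) * distant_dry
    out += wet_mix * wet

    sf.write("footstep_distant.wav", 0.8 * out / np.max(np.abs(out)), sr)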

Below we have listed a few bullet points on reverb use that may come in handy when designing sounds:

  • Decreasing the room size parameter on a reverb plugin can offer a metallic like effect.
  • Render the reverb tail of a sound and re-import it into your session. Reverse it or add additional processing for interesting effects.
  • Try using no reverb at all to get a “dead” sound (Note: In PART III: Implementation, we will discuss baking effects like reverb into the sound vs. using the effects built into the audio or game engine. For now, try it both ways and listen critically to the result).
  • Try re-importing the rendered or bounced down reverb tail into the session. It will offer the ability to automate the volume so the reverb doesn’t wash out the attack of the sound. This is especially good for weapon and impact sounds.
  • Get creative with routing and try adding other effects to the reverb return. Change the order of the processing chain to place effects before and after the reverb and listen to the resulting sound.

Finally, we recommend checking out Akash Thakkar’s “The Magic of Reverb” quick tutorial www.youtu.be/H6SxeobcfHg

EQ (Equalizer)

Like reverb, EQ is probably already a staple in your workflow for adjusting the frequency content of your sounds. EQ is designed to shape sounds by allowing users to boost or cut frequencies. It is commonly used to clean up problematic frequencies and to emphasize important ones so that sounds sit well within a mix.

There are many software EQ plugins available and each of them offers a distinctive take on workflow. It’s best to think about what you need from an EQ before selecting one. Software trials are a great way to try out a plugin prior to making a commitment. Some EQ’s are great for surgical carving and cleaning while others add more color, saturation and a silky smooth high end. Generally speaking, graphical EQ’s tend to be clearer and more neutral, while hardware emulations result in more coloration of the sound. Fabfilter Pro Q-2 and Neutron 2 in digital mode are transparent EQ’s that won’t alter the sound and offer surgical clean up. Other EQ’s offer a “dynamic mode,” which operates similarly to compression (see the section on “Compression” below). Remember that transparency is ideal for cleaning up sound, but coloration is often required when designing creatively. Your choice hinges on the needs of the particular sound effect and on your own preferences.

With any effects processing there is always a chance of overdoing it or making a poor choice for a particular sound. We think the team at Pro Audio Files does a great job of discussing common EQ mistakes in this article. www.theproaudiofiles.com/eq-mistakes/

Below we will discuss some of the slightly more extreme uses of EQ to redefine sounds entirely. Before we get started, it helps to be familiar with basic frequency ranges to better understand how EQ manipulates sound. Let’s quickly explore the spectrum we can shape.

Human hearing as we know it stretches from 20 Hz to 20 kHz. The low end of the spectrum sits between 20 Hz - 80 Hz, and is considered to be the sub-frequency range. It is often felt more than it is heard. Pitch is difficult to distinguish in this range.

80 Hz to 250 Hz is considered the low-end of the frequency spectrum. A buildup of frequencies set around 200 Hz can make your sound either warm or muddy.

250 Hz to 500 Hz is considered the low-mids. Too much of a build up between 250 and 500 Hz can make your sound boxy. This is where a parametric EQ with a narrow bell curve can come in handy. If your sound is boxy, first identify the problem area by adding a 6-12 dB boosted bell curve and slowly moving it from 250 Hz to 500 Hz. This is called a sweep of the spectrum. Once you’ve identified the problem, drop the bell curve until the “boxiness” is gone from the sound. Most parametric EQ’s offer a “Q” control to adjust the width of the boost or cut to achieve the narrow curve. Fabfilter’s Pro Q2 has a nice feature that allows you to solo a specific data point so you can focus on just the intended frequencies. Avoid spending too much time in solo mode however. Find the troubling band quickly and cut it out.
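Here is a small, hypothetical Python sketch of the sweep-and-cut move using an RBJ cookbook peaking filter. The helper name and the 320 Hz example value are ours, and in practice you would audition your real source rather than a noise placeholder:

```python
# "Sweep and cut" with a narrow peaking EQ (RBJ Audio EQ Cookbook biquad).
import numpy as np
from scipy.signal import lfilter

FS = 48_000
audio = np.random.randn(FS)  # stand-in for your "boxy" source layer

def peaking_eq(x, fs, f0, gain_db, q=8.0):
    """Boost or cut a narrow band centered on f0 (high Q = narrow bell)."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return lfilter(b / den[0], den / den[0], x)

# Sweep a +9 dB narrow boost through the low mids to hunt for the boxy band...
for f0 in range(250, 501, 25):
    boosted = peaking_eq(audio, FS, f0, gain_db=9.0)
    # audition `boosted` here and note where the boxiness jumps out

# ...then cut the offending band once you have found it by ear
# (320 Hz is just an example value).
cleaned = peaking_eq(audio, FS, 320.0, gain_db=-9.0)
```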

Midrange is defined as 500 Hz to 2 kHz and can define how prominent your sound will be in the mix. Be mindful that too much build up in this range can cause ear fatigue.

The high mids are typically considered to range from 2 kHz to 4 kHz. This range is important, particularly because we hear important details of speech in this range. Aggressive cuts between 2 kHz and 4 or 5 kHz may reduce the intelligibility of narration or dialogue. Likewise this range plays an important role in music mixes. This is often the range that we will hear the beater of a kick drum, or the plucking of a guitar string. Too much manipulation here can make these instruments sound unnatural, especially when cutting.

The high-end is defined as frequencies 4 kHz and above. Too much high-end around 10 kHz+ can really be wearing on the ear in a game. Avoid overdoing the high-end layers in your design as it will overload the player with harshness. Equally important to keep in mind is that 10 kHz and above is usually where we hear “presence” in a sound. This is a very subtle “airy” effect. When used in proportion it can add some “shine” to a sound, but used too aggressively it can saturate a mix with top-end.

Frequency slotting as we discussed in Chapter 3 ensures the source layers all fit together to sound like one cohesive sound. Layers add detail and transient staging allows sound effects to develop over time. Frequency slotting will then ensure that your layers appropriately map across the frequency spectrum so that all elements are heard. It will also keep one range of frequencies from overpowering others.

Reference: www.vimeo.com/12381399 (A tutorial on Frequency Slotting)

Now we will get back to the creative uses. Often the most creative EQ processing comes from plugins that are not even marketed as EQ’s. There are a variety of plugin types that commonly have EQ parameters ripe for inventive use. Distortion plugins like Trash or Decapitator have EQ controls that can add a dynamic and gritty tonal shape to your sound. This is an extreme version of the coloration mentioned above because it comes packed with saturation and distortion as well.

Pitching down a layer or two can add weight to a sound effect, but EQ is more effective for adding power. Choose a layer that already has some low-end to it and boost frequencies from 150 Hz - 200 Hz. This should make the resulting asset stronger and more robust. Some engineers may advise against using additive EQ (boosting frequencies), but when used properly it can be really helpful to make creative use of a particular frequency range.

Removing frequencies from 20 Hz - 600 Hz will give your sound effect a lo-fi radio sound. Often this elicits an eerie mood and works great for horror games. This can also be used on multiple sound assets to hone in on a particular aesthetic. In general, the visuals will inform this kind of decision. If the art is extremely high quality graphics, then it may not work unless the audio director (or creative director) is aiming for a creepy mood for the game. However, if the visuals are more stylized, and there is flexibility in the direction then try it out! Landing on a singular audio “vision” is something that should be strived for on all projects, and EQ is one way to realize that vision.

Sometimes a precision, “surgical” EQ boost or cut is necessary to clean up a sound, but a wider bandwidth boost or cut can dramatically alter the impact of a sound.

Just because a source layer is lacking in higher frequencies doesn’t mean you are stuck with a dull, uninteresting sound. Try using a high shelf to boost frequencies above 2 kHz.

EQ’s like the Fabfilter Pro Q2 offer an EQ matching function. This will allow you to choose a sound with the high end you are looking for and “teach it” to the Pro Q2. It will then process the two assets and offer an EQ solution. The solution can then be tweaked to your preference afterward, but it is a great starting point.

Spectral analysis is another tool that will improve your creative EQ options and overall workflow. To illustrate this, let’s do a quick experiment with a parametric EQ. Import a machine hum sound that has a good amount of low-mids into your DAW and insert an EQ plugin on the track. Now create a few narrow-Q bands and boost random frequencies from 1 kHz to 10 kHz. If done correctly, the sound should take on a metallic character.

A comb filter is yet another tool that can alter the character of a sound. Comb filters are technically a form of delay (see below), but like EQ’s they are capable of adding a metallic or mechanical resonance. Creating tonal sounds from a noise source is a fun way to experiment with comb filters. To do this, set up a noise generator in a synth like Massive. You can then add a comb filter to create an ambient pad that responds to MIDI keyboard input.

Take some time to experiment with equalization on various sources. You will find it is an extremely useful tool that should be one of your “go to tools” when designing sound.

Compression

Hopefully compressors are also a part of your workflow. They are plugins (or hardware units) that adjust the dynamic range of an audio signal. For this reason a compressor is one of those tools that can quickly ruin a sound if applied incorrectly. Dynamics are necessary for immersive audio, and too much compression will squash them, making for a flat listening experience.

If you haven’t had much experience with compression just yet, check out these tutorials to get started:

When layering sounds to create a larger sound effect, we typically use compression to make source material louder, bring out more details or generally sound meatier. Compressors reduce the loudest part of the sound to make the average level more consistent. This can make the sound fuller and richer. It’s perfect for recorded source that sounds too thin to use as a layer in your design.

A compressor can be useful in a variety of ways. Setting a fast attack will soften the transients and produce a rounder sound. This can be helpful when working with recorded paper movement, tearing and crumpling. As with most things, there are a few ways you can handle the harshness of a sound source beyond a compressor. Analog tape saturation can help reduce some of the overly harsh sounds, and harsh frequencies can also be rolled off with an EQ.

Punch can be added to weapon and attack sounds by applying a fast attack and release. Experiment with lowering the threshold until you arrive at the punch power that sounds good to your ears.

If you set a very slow or long attack and reduce the threshold enough, the attack can become more prominent than the sustain, making it sound exaggerated. When you think of these tools as sound shapers rather than by their textbook definitions, you will find a whole sonic world open up.

As a sound design tool, compression can be applied to smooth a very dynamic ambience source layer. Setting a slow attack and release with a low ratio can reduce those rises and falls and smooth out the layer. Now you can make use of it in your sound design.
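To make the attack, release, threshold, and ratio behaviour concrete, here is a bare-bones compressor sketch in Python. It is a teaching illustration, not a substitute for a real plugin, and the parameter names are generic rather than tied to any product:

```python
# Bare-bones compressor sketch (expects a float array scaled to -1..1).
import numpy as np

def compress(x, fs, threshold_db=-20.0, ratio=4.0, attack_ms=5.0, release_ms=50.0):
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.zeros_like(x)
    for i, sample in enumerate(x):
        level = abs(sample)
        # Envelope follower: fast coefficient while the level rises, slow while it falls.
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = level_db - threshold_db
        # Gain reduction is applied only above the threshold, scaled by the ratio.
        gain_db = -over_db * (1.0 - 1.0 / ratio) if over_db > 0.0 else 0.0
        out[i] = sample * 10.0 ** (gain_db / 20.0)
    return out

# Try different settings: a slow attack lets the initial transient pass through
# before the gain comes down, while a fast attack clamps it almost immediately.
```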

Multi-band compressors are equally important tools as they allow the user to define the frequency range of compression across multiple bands. Gain reduction can be set on one or more specified bands as well, making this an extremely powerful tool for shaping dynamics as well as the frequency spectrum. It can be used to bring a dull source recording to life by setting a slight gain boost. You can then set the attack and release times to your preference as you work to bring out the high end.

Serial compression is the process of applying smaller bits of compression multiple times to one source. This trick works best when you apply different compressor types. For example, a slow acting classic compressor can be applied in the first step, followed by a more modern compressor afterward. This allows for a bit of the vintage outboard gear sound with a mix of the clean, more modern sound that today’s compressors are known for. If the goal is to add a 4 dB gain boost, apply 2 dB in the first pass and the additional 2 dB in the second. This will avoid an overly processed sound that can occur when using a single compressor.

In the days of hardware analog compression (which cost more than some automobiles), engineers would use a hard “knee,” and push the compressor to extremes. When using a hard knee, compression begins immediately as the signal reaches the defined threshold. Using a soft knee triggers compression more gradually, which sounds more natural. When done correctly, the soft knee technique sounds similar to an analog tape effect.

You may be wondering how you can achieve this classic analog sound with software that doesn’t cost an arm and a leg. While not all software compressors offer an adjustable knee, there are plenty that do, and even some of the compressors that come standard with a DAW like Logic offer this flexibility. As always, use your ear to determine how good it sounds before committing to a plugin.

Compressors are an important part of every sound designer’s toolkit. They are useful for cleaning up audio and polishing sounds. Taking the time to experiment with compression in the ways we have discussed, however, will spark new and engaging ideas for your designs.

Synths as Effects Processors

Earlier in this chapter we covered synthesis as a tool to generate source material for sound design. Some hardware and software synths offer the ability to route an input signal through its effects processor. Native Instruments Absynth is a versatile instrument that allows this kind of real-time effects processing. When loaded into your DAW, each oscillator can be set to an audio input. With the effect loaded on an insert track or as a return, just set the Oscillator to “Audio In” on the Patch window. You can then choose between stereo or mono channels and you can split the channels into dry and wet.

The intricate routing in Absynth offers a variety of tools advantageous to creative sound design. Absynth’s Cloud Filter is a combination of granular synthesis, pitch shifting and filtering.

Using the amplitude envelope in LFO mode can create some distinctive tremolo and stutter effects. The Mutate button will quickly randomize parameters, offering far more sonic possibilities than you might think of on your own. Finally, the envelope following function will allow you to control multiple parameters at once. Absynth’s frequency shifter and Aetherizer granular delay effect can create some amazing textures as well. By turning off the effects module on the lower right of the Patch window you can add various modules with filters and modulators to process the audio input through them directly.

Other synths also make use of a modulation controller called a routing matrix. Absynth’s routing matrix allows users to send control signals into one or more parameters, thus providing an intuitive control source for modulation that goes way beyond the standard routing of an LFO to the pitch of an oscillator.

Resource:

www.adsrsounds.com/absynth-tutorials/beginner-tutorial-series-using-absynth-as-an-effect-processor/

Xfer Records Serum is a virtual synth with tremendous processing power. Serum allows users to drag and drop custom samples into the built-in wavetable oscillators. The noise oscillator can then be assigned to the pitch of another oscillator for an even more interesting effect.

Camel Audio’s Alchemy is another great tool thanks to its transform pad. Automating the XY axes will create intricate, evolving soundscapes that take only minutes to produce. The company has unfortunately gone out of business, but for those of you using Logic Pro this is now a native tool available within the DAW.

We don’t often take the time to read the product specs or manuals these days because we usually want to jump right in and get started. But when we do this we miss out on some of the most interesting features. Take some time to dig into the manual of every plugin in your collection. You will likely find new and exciting ways to create sounds.

Transient Designers

All sounds are made up of transients and/or sustains. A transient is the initial part of the sound that disappears very quickly. A snare drum hit, or a clap are both considered transient sounds. Transient designers function by enhancing the attack of the audio signal and then dropping the level of sustain. When used appropriately the impact of the attack can be maintained or emphasized while suppressing the rest.

There are a variety of plugins out there that manipulate the transient of a sound. They go by different names such as transient designer, transient shaper, transient modulators (Transmods), envelopers, and signal modelers. You may have one in your plugin collection without knowing it.

Each of these affects an audio signal in slightly different ways, but they all can be used to add “punch.” The term “punchy” refers to the contrast between silence and a loud transient. The loudest sound imaginable is only punchy when compared to a softer sound before it. This comes into play in many situations. In MMO battle arenas, for example, weapon and impact sounds are more than utilitarian as they provide the player with a sense of power. Designers need to create the impression of power and loudness at lower listening levels to compensate for numerous other sound sources in the arena. The way to accomplish this is to increase the punch without increasing the overall volume of the mix by using the transient designer. By increasing the attack of a weapon fire (say a pistol), the sound will peak through the mix briefly and then the sustain will dissipate, leaving room for other important sounds.

Compressors are very similar in nature to transient designers in that they can be used to manipulate envelopes, and they offer users the ability to emphasize or de-emphasize the impact of the transient and release of the sound. Transient designers are a bit more of a dedicated dynamics processor with a focus on manipulating attack transients. When a pistol is fired, compression can be added only to the tail of the shot. This will fool the player’s brain into thinking the attack is significantly louder, even though the peak level has not changed. In this way, compressors allow every ounce of available headroom to be squeezed out of the mix.

When adding punch to a sound the transient length matters. A very short transient will sound like a crack or snap while a longer transient will contain far more body in the sound. For this reason transient designers usually contain an attack parameter to adjust the timing. There isn’t really a hard rule that will tell you how long to set the transient length for maximum punch. Instead, try setting the attack to a lower value and then work your way up. It should be apparent when you hit the sweet spot. Keep in mind that sustain (or release) also affects punch. Setting the release to the same value as the sustain will defeat the purpose and reduce the punch. Be mindful to keep sustain lower to preserve the transient nature of the sound.

Range, sensitivity, duration, and release are roughly equivalent to a standard compressor’s ratio, threshold, attack, and release controls, though they perform slightly more specific functions. Range changes the transient energy of the sound or notes. A positive value increases the level of transient impacts for a punchier sound; a negative value sounds mellower. Range and sensitivity work together as a kind of threshold. Lower sensitivity settings limit the processing to the loudest transients, while higher settings allow a wider range of transients to be processed. A lower sensitivity with a higher range will result in a punchier sound.

Duration allows you to set the length of the transient attack. Be mindful that using too long of a duration can overdo the effect. In the end your ear will determine the appropriate duration for the attack. Pushing a plugin to its limits can make for an interesting effect, but it can also sound heavy and forced.

The release parameter will set the time it takes for the signal to return to its original level, similar to a decay control. A high range and a short release time can really add impact to the initial attack on a sound.
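The following toy Python sketch shows one common way controls like these can be built under the hood: two envelope followers with different time constants detect the attack, and a range-style gain is applied to it. The algorithm and names are illustrative assumptions, not how any particular plugin is implemented:

```python
# Toy transient shaper: the gap between a fast and a slow envelope follower
# marks the attack, and range_db scales how much extra gain that attack gets.
# Expects a float array in -1..1; a negative range_db softens transients instead.
import numpy as np

def transient_shaper(x, fs, range_db=6.0, fast_ms=1.0, slow_ms=30.0):
    a_fast = np.exp(-1.0 / (fs * fast_ms / 1000.0))
    a_slow = np.exp(-1.0 / (fs * slow_ms / 1000.0))
    env_fast = env_slow = 1e-9
    out = np.zeros_like(x)
    for i, sample in enumerate(x):
        level = abs(sample)
        env_fast = a_fast * env_fast + (1.0 - a_fast) * level
        env_slow = a_slow * env_slow + (1.0 - a_slow) * level
        # The fast follower leads the slow one during attacks; normalize the
        # gap so the "transient" amount stays roughly between 0 and 1.
        transient = max(env_fast - env_slow, 0.0) / (env_fast + 1e-9)
        out[i] = sample * 10.0 ** (range_db * transient / 20.0)
    return out
```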

Time-Based Effects

Pitch and time shifting are important for creating a performance in the sound. You can manipulate sound using pitch to show emotion and time shifting to match the sound to the animation.

Time-based effects alter the timing of a signal and are often used to shape the depth and dimension of sounds. There are quite a few effects that can be considered time-based effects but here we will focus on pitch shifting and delay.

Pitch shifting effects are a staple for most sound designers. It is possible to make drastic changes to the time and pitch of recorded source material as long as the sample rate is relatively high. The frequency range of the microphone you choose to record with is also something to consider. A super wide range microphone like the Sanken CO-100K has a frequency response of 20 Hz - 100 kHz. Source recorded with this kind of microphone can be dramatically pitched because of the detail with which it captures the entire frequency spectrum. This makes it fantastic for creature sounds due to the numerous effects this kind of design requires. In the 2018 God of War game the World Serpent voice sounds like it was performed by a human and pitched down drastically, suggesting it may have been recorded with a microphone like the Sanken.

We aren’t saying you need to run out and spend $3,000 USD on a microphone though! With some research you can find the frequency response charts for various microphones. There has been a recent discovery in the audio community that the LOM mikroUši Pro microphone can capture sounds up to 80 kHz for a price of about 125€. It’s something that will have to be tested to verify, but it illustrates our point well. Having a huge budget is helpful, but doing your research and knowing your equipment is a cheaper and more reliable alternative. - Gina

Duplicating the source and slightly pitching it up or down and layering it under the original sound can achieve subtle pitch effects. Try putting a pitch plugin at the end of a plugin chain and experimenting with the frequency. For more interesting results also adjust the mix knob to layer the original pitch with the new one. This can often leave you with a thicker, or even dissonant sound.

For more extreme pitch shifting have a look at the plugin Paulstretch. It breaks the audio down into component frequencies, modulates their phases, and reconstructs it.

Resource:

www.hypermammut.sourceforge.net/paulstretch/

Celemony’s Melodyne works really well for vocal-like sounds, but can also be effective for other types of sound design. In the Melodyne sound editor you can change the pitch of each note, essentially turning recorded audio into MIDI.

Resource:

www.musicradar.com/reviews/guitars/celemony-software-melodyne-editor-230550

Pitch plugins usually offer a variety of control options. Many pitch shifters give the user the option of changing pitch without affecting the speed or tempo of the sound. This is usually the default setting, but actually allowing the speed to change with the pitch can leave you with exciting and unpredictable sounds. Pitching down wind source and allowing the speed to be altered by the time shift function can craft extremely detailed ambiences. To play around with this approach, try recording a sound and importing it into a sampler like Native Instruments Kontakt. Map it across the entire keyboard and allow the sample to be pitched and time-stretched. It is difficult to overstate just how many usable layers of sound can come from this process.
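If you want to hear the "speed changes with the pitch" behaviour outside a sampler, a simple varispeed-style resample does the same thing. This is a hedged sketch; the file names are hypothetical placeholders for your own source:

```python
# Varispeed-style pitch shift: resampling changes pitch and length together.
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample

rate, audio = wavfile.read("wind_source.wav")          # placeholder file name
audio = audio.astype(np.float32) / (np.max(np.abs(audio)) or 1.0)  # scale to -1..1

semitones = -12.0                            # one octave down
ratio = 2.0 ** (semitones / 12.0)            # 0.5 = half playback speed
pitched = resample(audio, int(len(audio) / ratio), axis=0)  # slower = longer file

wavfile.write("wind_pitched_down.wav", rate, pitched.astype(np.float32))
```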

Resource:

www.youtu.be/UZummKCrDcA (Importing sounds into Kontakt with Akash Thakkar)

Delay is yet another staple time-based effect for sound designers. It can be used to add width and depth to a sound. Duplicating a mono sound, panning the copies to opposite positions, and adding a delay of around 10 - 30 milliseconds on one track will add depth to the sound. Delay can also add texture and interest to ambiences. With plugins like Fxpansion’s Bloom Delay, the user has control over three switchable delay models with the ability to combine frequency shifting, overdrive, envelope shaping, and saturation. It can produce some trippy sounds that will take you down the rabbit hole!
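The duplicate-pan-delay widening trick can be sketched in a few lines of Python (assuming a mono float array; the 15 ms delay is just a starting point):

```python
# Duplicate a mono sound, hard-pan the copies, and delay one side by 10-30 ms.
import numpy as np

def widen(mono, fs, delay_ms=15.0):
    delay = int(fs * delay_ms / 1000.0)
    pad = np.zeros(delay, dtype=mono.dtype)
    left = np.concatenate([mono, pad])          # original on the left
    right = np.concatenate([pad, mono])         # delayed copy on the right
    return np.stack([left, right], axis=1)      # (n_samples, 2) stereo array
```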

When you are aiming to push a sound further back in the mix, and reverb is washing out the mix, or muddying up the sound, you should try delay instead. Adding saturation to your delay will introduce warmth, and playing with reverse delay can be fun.

Another way to experiment with delay is to render a sound with delay baked in and then import it back into the session. Now you can chop it up and randomize the sequence to add some glitch to the sound. Try adding reverb to your new track to glue it all together. In a similar vein, Soundtoys’ Little Alterboy works well for robotic and vocoder-like effects.

Pan and Doppler

Many of you will already be familiar with the basic use of pan in a mix as it is fundamental in mixing music. In a music track panning is used to create space for instruments and a more immersive listening experience. Panning can be used to push some instruments off to the side or back to allow others to take center stage. In post-production panning allows the mix engineer to spread the sound across a stereo image to add depth and space. This is very similar to how panning is used in game sound, but there is one large difference. In games the overall panning and positioning of a sound is handled at runtime by the 3D settings prepared by the sound designer or programmer. However, panning within a sound asset can be a critical element of its effectiveness.

Magic spell casting sounds (especially in the first person perspective) can benefit from panning in this way. The VFX of spells like this almost always emerge from the player character’s hands and widen as the spell travels out towards its target. When designing these sounds, start with a narrow stereo width and use panning to broaden the sound over time. This will allow a 2D sound triggered at a first person perspective to sound like it has movement in the 3D space.

Panning can also help convey the size of an object. A sound with a smaller stereo width will be perceived as smaller, while more stereo width will be perceived as larger.

Doppler is used to create a sense of movement in a sound. This effect is closely related to pan but it specifically deals with sound sources that move. You may or may not be familiar with the term, but you most certainly have heard its effect in the real world. The Doppler effect is the audible change in frequency of a sound wave caused by the relative motion between the source and the listener. It was named after the Austrian physicist Christian Doppler, who discovered the phenomenon in 1842. A common real life example of this effect is the sound an ambulance makes when passing by a pedestrian.

The Doppler effect adds a huge degree of realism to a game when applied appropriately. Using a pitch shifter to increase the frequency of a sound will create the illusion of an object getting closer to the listener. By contrast, pitching a sound down as it gets farther from the listener will create the illusion of that object traveling away. This can be great for the sounds of VFX projectiles. Using Doppler plugins can result in really interesting whooshes and swooshes as well.
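For reference, the observed frequency for a stationary listener is f_observed = f_source * c / (c - v) for an approaching source and f_source * c / (c + v) for a receding one, where c is the speed of sound. A quick sanity check in Python:

```python
# Back-of-the-envelope Doppler shift for a moving source and a static listener.
def doppler_shift(freq_hz, source_speed_ms, approaching=True, c=343.0):
    """Frequency heard by the listener; c is the speed of sound in m/s."""
    return freq_hz * c / (c - source_speed_ms if approaching else c + source_speed_ms)

# A 440 Hz siren on a vehicle doing 30 m/s (about 108 km/h):
print(doppler_shift(440.0, 30.0, approaching=True))    # ~482 Hz on approach
print(doppler_shift(440.0, 30.0, approaching=False))   # ~405 Hz moving away
```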

There are a few plugin options to choose from that offer a wide range of control, from quick and easy to detailed tweaking. Plugins like Waves Doppler and GRM Tools Doppler offer quick and easy ways to create the effect. They offer an observer point and an assignable start and end point. Of course you can also go out and record moving sounds, or play back sound from a portable speaker while moving it around in front of a microphone. But the plugins allow for use of almost any sound source, even if it doesn’t emit from a moving object, which offers more creative flexibility. By applying a small amount of flange you can also produce the illusion of movement. As always, listen and experiment to decide how fitting this effect is for your game.

Modulation Effects

Adding modulation to sustained or looped sound can increase its effectiveness in game. A static sound with no movement will be tuned out by the player, or leave them feeling annoyed. Modulation can be used to add fluctuation and variation, which will stand out a bit more in the mix. Modulation can also add some flavor to the sound or make it more fitting to the visuals. As we mentioned earlier, games like MMOs can have many sounds triggering all at the same time. The sonic space can become cluttered, so our ears need to be told what to pay attention to. If a sound is dull or flat and has no variety, it will be inaudible in a dense mix. To make sounds stand out and be recognized, designers need to modulate the sound in some way.

There are many options for modulating sounds. Tremolo is essentially an oscillator that affects volume, and its use goes back to 16th century pipe organs! Tremolo is often used to add a vibrato-like feel. Although vibrato is technically a function of fluctuating pitch (as opposed to the volume fluctuations of tremolo), the character of tremolo is similar. When working with longer assets, automating the rate of tremolo can vary the movement over time. Tremolo can also be used in stereo to mimic an autopan effect at slower rates. This is useful for filling out wider 2D sounds.
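A tremolo is simple enough to sketch directly: a low-frequency oscillator scaling the amplitude. This minimal Python version assumes a float audio array, and the rate and depth parameters are generic starting points:

```python
# Minimal tremolo: a sine LFO scales the amplitude of the signal.
import numpy as np

def tremolo(x, fs, rate_hz=5.0, depth=0.5):
    """depth=0 leaves the sound untouched; depth=1 swings fully down to silence."""
    t = np.arange(len(x)) / fs
    lfo = (1.0 - depth) + depth * 0.5 * (1.0 + np.sin(2.0 * np.pi * rate_hz * t))
    return x * lfo
```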

Soundtoys Tremolator is definitely one of my go-to tools when I am designing sounds. The tweak button allows for a lot of control so you can go from subtle movement to a powerful wobble. The rhythm editor lets you add variation to the repeating waveform so it isn’t just a constant rate. You can store sequences of tremolo ‘events’, so you could have a tremolo that becomes heavier as the sound progresses. You can also define the wave shape used to modulate the audio. - Gina

Other modulation effects include flanger, chorus, ring modulator, phaser, and frequency shifter.

Pitch shifting and frequency shifting are often confused, as they are very similar in nature. Like pitch shifting, frequency shifting changes the frequency content of a signal; however, it does so in a very different way. Each frequency in the signal is moved by a set amount, so the harmonic relationships within the signal are changed, resulting in more of a metallic character than you would get from a pitch shifter. This timbral difference is what will influence you to choose one over the other for a given sound effect.
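For the curious, a basic single-sideband frequency shifter can be sketched with a Hilbert transform: every component moves up by the same number of Hz, which is exactly what breaks the harmonic relationships described above. This is an illustrative sketch, not how any specific plugin is implemented:

```python
# Single-sideband frequency shift via the analytic (Hilbert) signal.
import numpy as np
from scipy.signal import hilbert

def frequency_shift(x, fs, shift_hz=200.0):
    analytic = hilbert(x)                      # complex analytic signal
    t = np.arange(len(x)) / fs
    # Multiplying by a complex exponential slides the whole spectrum up by shift_hz.
    return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))
```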

The Uhbik frequency shifter can sound like a ring modulator or a phaser, but it operates in the way described above: it applies a fixed frequency shift to a signal, so the relationships between the frequencies change. A metallic sound is produced with a larger shift, and a phaser-like sound is produced with a smaller shift. Smaller shifts can also produce phase sweeps up or down. These can sometimes sound like a synthesizer LFO, making this method very useful for intense warping of sound layers.

Distortion & Saturation

You used to have to use tape to get good saturation, but now there are a variety of plugins that can take the edge off a sound or make it sound bigger. Foley with too sharp a transient, run through tape saturation, can be smoothed out.

Earlier in the chapter we discussed how to add life to a dull recording with compression, but we can also do this with distortion. We often use Sound Toys Decapitator to add color and bite to sound effects. There are a variety of distortion plugins available; distortion units like Decapitator offer a huge number of options for changing the sound and controlling the timbre. It sounds great for analog saturation, and the “auto option” (which automatically adjusts the output level to compensate for the changes you make to the Drive module) makes it very intuitive to use. While the “punish” button is very tempting to press, try using Decapitator at low drive settings to add warmth and thickness to your sounds. This is a prime candidate for weapon sound design due to the power and warmth that it adds to source layers.

iZotope Trash 2 is another useful distortion unit because of its multiband option, which allows multiple distortion/saturation settings across definable frequency bands. The convolver also allows IRs to be imported and morphed via a wet/dry control. The stereo width can then be adjusted, along with different microphone options, to further manipulate the sound. This is a staple for dark ambiences and drone sounds. It can also be pushed to the extreme to create wildly insane sound deformation.

There are different types of distortion to get familiar with, and each offers different uses in your workflow. Tape saturation, for example, will smooth transients to glue the layers of a sound together and even fatten it up. Fuzz distortion can be a bit harsh on the ears, but with a filter applied to roll off the high-end it can be useful for making sci-fi sounds grittier. Bit crushing is a go-to effect for quickly creating sci-fi user interface or glitchy high-tech sounds. Another use would be something along the lines of a radio transmission or signal that needs a bit of signal degradation, static or crunch. Distortion can also be great when trying to add back high-end frequencies that were lost when heavily pitching down a sound.

Bass Enhancers

When recording sound effects or Foley, the captured sound often doesn't sound as “big” as it did on location. The human brain processes the sound that hits the eardrum along with the vibrations received by the body to determine the weight of a sound. The microphone used in the recording may not translate this information the same way you are hearing it, so editing and processing are necessary to add more weight to the recorded source.

There are many psychoacoustic tricks that can make bass sound bigger than it actually is. Bass enhancing can help add a bit more low end to thin recorded source layers. Plugins like Waves MaxxBass generate additional harmonics in the low-mids, which gives the impression of louder bass. This can be great for game audio developed for mobile platforms. Other plugins like reFuse Lowender generate new subharmonics an octave down from the original, which produces deeper bass frequencies.

Boosting low-mids can also help bring out the bass. You will want to be careful not to make things too muddy in the mix so be sure to use your ears. Distortion is another way you can boost your low end to enhance bass.

With a bit of knowledge of psychoacoustics, EQ can be used to produce perceived low end. We recommend researching the topic, as the subject is a whole other book in itself. In short, psychoacoustics deals with how the brain perceives sound. As a sound designer, understanding how to manipulate the listener with sound is a handy tool. Using EQ to produce perceived low end requires an understanding of harmonics and the fundamental frequency. Simply boost the 2nd, 3rd and 4th harmonics of the low-end frequency you wish to simulate by 12 dB. You may be wondering how you arrive at those harmonics. To demonstrate the formula, let’s assume our fundamental frequency is 50 Hz. This would make our harmonics (50 * 2 = 100 Hz), (50 * 3 = 150 Hz) and (50 * 4 = 200 Hz).
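You can hear the effect for yourself with this short Python sketch, which synthesizes only the 100, 150, and 200 Hz harmonics and lets your ear supply the missing 50 Hz fundamental (the file name and levels are arbitrary):

```python
# Synthesize only the 2nd, 3rd, and 4th harmonics of 50 Hz (100, 150, 200 Hz);
# on playback the ear tends to infer the missing 50 Hz fundamental.
import numpy as np
from scipy.io import wavfile

fs, fundamental = 48_000, 50.0
t = np.arange(int(fs * 3.0)) / fs                     # three seconds of audio
tone = sum(np.sin(2.0 * np.pi * fundamental * n * t) for n in (2, 3, 4))
tone /= np.max(np.abs(tone))                          # normalize to -1..1

wavfile.write("missing_fundamental_50hz.wav", fs, (0.5 * tone).astype(np.float32))
```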

Give it a try and experience the power of psychoacoustics: it will make you think you are hearing 50 Hz without ever generating that frequency.

Granular Effects

Sound Toys Crystallizer - This granular echo synthesizer works similarly to a delay, but it chops the delays into grains and re-synthesizes them for interesting textures.

An audio director we worked with introduced us to GRM Tools. It’s a plugin set that has been around a long while and was developed by Groupe de Recherches Musicales, a French experimental electronic music research group.

GRM’s Shuffling takes clips of the incoming audio and shuffles them around in time, moving them back and forth, so it’s not quite a delay, but it can sound similar. The user can choose the size of the clips to shuffle, how often fragments are played, how far away from the original sound in time they appear, and the pitch and duration of the fragments, which can generate interesting resonances or turn a single sound into a multitude of sounds filling the space. This is another tool that is great for adding movement to sound. Depending on the size and density chosen, Shuffling enables a wide range of sound transformations: from conventional chorus, flanger and harmonizer effects to unexpected textures and sounds.

Uhbik’s Grain and Pitch plugin is another one of our go-to tools. With control over the pitch of each grain and the speed of playback, it makes evolving atmospheres easy to create.

Soundtoys Crystallizer is a pitch shifting granular reverse echo. It turns chords or melodies or tonal sound effects into shimmering, textured soundscapes. Great for magic sound effects.

Audio Restoration

When discussing the topic of restoration, it’s generally understood that recordings made with any type of gear may pick up some noise, whether from the capture environment or the equipment. The unwanted sonic elements should be removed from the file, leaving the final asset in a clean and very usable state. It is important to point out that this isn’t just intended for assets with a lot of background noise. Many sound designers and audio directors are adamant about their “no noise” policy when it comes to the individual layers that make up the final sound. You will find a wide range of interpretations of this rule, but the important part is to understand the why.

We discussed layering in the sound design process throughout this chapter. When you stack layers that each carry anywhere from minimal noise to a boatload of it, the result is a noisy final sound, which in turn is harder to hear in a mix. Even a high-end library sound source could use a click or crackle removal pass before being blended with the rest of the tracks. It’s important to use your ear and judgment here, as overdoing the clean up can add unwelcome artifacts and ruin the sound.

When looking at different noise-reduction plugins you should understand the difference between gating, intelligent reduction and manual noise-print reduction. Most importantly, use your ears to listen and ensure you aren’t applying too much reduction. These tools can introduce artifacts to the sound if used incorrectly or overused. If the rendered sound feels warbly or over-filtered you should pull back on the reduction. It’s helpful to process in smaller amounts over several passes rather than over-processing in one go. When working with restoration tools, sometimes it’s best to avoid real-time processing. A standalone app like the iZotope RX suite offers visual editing and offline processing to help you achieve the best possible results.

If your software offers an option to monitor only the removed noise, be sure to use it to listen to what you are taking out of the sound. When you are first starting out with noise reduction, give yourself some practice files to start with. Try out the presets in your software and listen to the different reduction settings to train your ear.

The processing chain can also affect your work. If you try to eliminate hum using a denoiser, you might find you are destroying the sound and not removing the hum at all. Get familiar with the various noises that can affect a sound and process in stages: start by isolating tonal and broadband noise, then remove clicks, crackle and clipping, and finish with de-noising.

These tools are meant for subtle processing in multiple stages. This way you can do a little clean up on one pass and listen for artifacts before you apply additional passes. Spectral repair with a paintbrush tool can often work best for delicate situations. Remember, the tools aren’t magic; they take some work and know-how to produce quality results.

Products like iZotope RX, the Zynaptiq UN series and Klevgrand Brusfri are just a few of the restoration plugins available. Each year the algorithms that drive the process are greatly improved. You will want to use your ears to find the suitable process for your workflow. Listen for phasing issues that might be introduced by the process and don’t be afraid to mix up the noise-reduction plugins in your chain.

Even if a sound appears to be clean of noise, additional cleaning of the source with a denoiser can add extra focus. The less noise there is, the better the sound will fare in the design and in the mix. Most restoration suites expand beyond straightforward noise or hum reduction and offer other processes like click repair, spectral repair, de-essing, de-plosive, de-reverb and more. Applying click repair on a source sound can help clean it up so you don’t lose focus on the design. Click and pop restoration isn’t just for vocal or dialogue source. Random pops and clicks, although not always very audible, can degrade the sound or distract the listener. Ensuring your source is clean will help the sound design layers sit better in the design.

Resource:

https://youtu.be/Dr-2VizPLec (Cleaning up sounds)

Graphical Modular Software

While a lot of tools offer a plug and play ability, if you really enjoy digging deeper into the tech and creating your own tools there is software that allows you to do just that.

Native Instruments Reaktor is a graphical modular software toolkit that gives sound designers and composers the ability to build their own instruments, samplers and effects. To get started there are built-in presets to explore in the modular blocks. Once comfortable with the setup, begin patching unique modular creations.

Cycling ‘74 Max/MSP is a visual programming environment that allows the user to build complex interactive patches like software instruments, samplers and effects processors. Max handles MIDI operations while MSP handles the audio signal processing, allowing the user to interact with hardware to control the software patches they build.

Spectral Analyzer

A spectral analyzer is an important tool that offers a look at your track’s frequencies across a graph. It’s a visual representation of your sound. Of course you always want to rely on your trained ears but it’s nice to have some help from your eyes too.

You might be wondering where you can find this tool. There is a good amount of software, some of which you might already own, that has a spectral analyzer built in. Fab Filter Pro Q2, iZotope RX Standalone, iZotope Insight and Blue Cat FreqAnalyst are just a few resources. Your audio interface might also come with a standalone analyzer that sits on your master bus.
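If you just want a quick look at a file without opening a plugin, a few lines of Python with NumPy and Matplotlib will draw a basic spectrum (the file name here is a placeholder for your own sound):

```python
# Quick-and-dirty spectrum plot for a WAV file.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile

rate, audio = wavfile.read("my_sound.wav")     # placeholder file name
if audio.ndim > 1:
    audio = audio.mean(axis=1)                 # fold stereo down to mono
audio = audio.astype(np.float64)

spectrum = np.abs(np.fft.rfft(audio * np.hanning(len(audio))))
freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)

plt.semilogx(freqs[1:], 20.0 * np.log10(spectrum[1:] + 1e-12))
plt.xlabel("Frequency (Hz)")
plt.ylabel("Magnitude (dB)")
plt.show()
```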

This tool can help determine if you have monitoring issues as well. If your monitors or your room aren’t ideally set up the spectral analyzer can help point out issues in a sound or mix.

Chain and Parallel Processing

Chain processing means using one or more plugins to create a processing chain with a purpose.

While we discussed each of these effects in a singular scenario, most of the time the best sounds require a chain of effects or parallel processing. Be mindful of the effects processing order (the chain): when using EQ for clean up or surgical work, placing a compressor earlier in the chain will only boost the frequencies you are trying to get rid of.

Experiment with parallel processing by taking a source, duplicating it, processing both copies independently and blending them back together. This works great when the duplicate is heavily processed and blended back with the original (dry layer) at an 80/20 ratio. With the heavily processed layer sitting lower in the mix, it adds an enhancement to the sound without sounding out of control.
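As a sketch of that 80/20 blend, here is a minimal Python helper; `process` stands in for whatever effect chain you put on the duplicate, and the hard clipper in the usage comment is just one example:

```python
# 80/20 parallel blend: keep the dry layer dominant and tuck the processed
# duplicate underneath it.
import numpy as np

def parallel_blend(dry, process, dry_amount=0.8):
    wet = process(dry.copy())                  # heavily processed duplicate
    return dry_amount * dry + (1.0 - dry_amount) * wet

# Example: blend the dry sound with a heavily clipped (distorted) copy.
# mixed = parallel_blend(audio, lambda x: np.clip(x * 8.0, -1.0, 1.0))
```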

Thinking outside the box here, the original layer doesn't need to always be dry. Try independently processing with different types of effects. Bounce the newly affected channels and re-import them to play around with reversing, chopping them up into smaller bits or using volume automation to bring them in and out as desired.

If phasing becomes an issue with parallel processing add a phase alignment plugin like InPhase by Waves at the top of your effects chain.

Exercise:

Start by practicing with each of the plugin types mentioned above. Choose a single sound source and apply each effect individually while making note of how it changes the sound.

Next, practice with multiple effects at the same time on a single sound source, again noting the effect they have on the sound. Try moving the effects around in the chain and listen to how the order of the applied effects can alter the final sound.

Sound Design Practice

Gun Sound Design Practice

In the video below, we will discuss how we tie in layering, transient staging, frequency slotting and effects processing from the textbook lesson.

Gun Sound Design Practice

Fire is the process of launching the projectile out of the muzzle; this is the actual shot. Fire rate and weapon size should be considered when designing the sound. A sci-fi weapon might have a lasery, synth pew-pew sound to it while a realistic pistol will have more of a mechanical sound with metal source material. EQ can be used to sharpen the shot or give it a bit more of a metallic feel by boosting somewhere around 1-3 kHz, depending on the source sound of course.

Library sounds can be used for weapon fire, but remember, layering is an important part of the design. If you want to add some of your own source to the process but don’t have access to a gun range to safely record, you can capture some metal impacts. It could be anything from cookware pots and pans to soda cans and appliances. The metal sounds you record may have a hollow quality to them, so you would want to use EQ to clean that up. It is typically in the mid-range and can be found by using a dynamic EQ and sweeping a boost to listen for the unwanted frequencies.

If the weapon requires more of a fantasy sound, you can add some synth layers to provide more of that fantasy feel. Pitch shifting layers can help add a bit more fullness and power to the fire of the weapon. If you need even more low end on the source layer, try using a bass enhancement plugin like reFuse Lowender or Waves MaxxBass. Recording source very close to the microphone can produce a proximity effect, which can sometimes be a useful technique for introducing more low end in a layer.

Body adds the punch to the weapon so it feels powerful. Sound designers often bump up the weapon caliber to make it sound bigger. A pistol or handgun visual might have a shotgun sound attached to it to make it feel powerful. Current games like Battlefield and Call of Duty strive for a more realistic sound for their weapons so certain processing and layering tricks are utilized. The punch is typically a few hundred milliseconds at the transient of the sound. The rest of the weapon fire is usually the mechanical elements of the weapon. Without that punch at the head the weapon may sound thin and toy like.

You can get creative with your source assets for body. An up-close punch, hitting a hard pillow with a baseball bat or a very cleaned up kick drum track duplicated and pitched down can work well as a sweetener for the weapon. Transient designers can help add some extra punch to the sound as well. A boom source layer could also work to add more body to the fire. If your body source has a tail on it you can use volume automation to curve off the tail shortly after the transient. This will clean up the layer and avoid muddying up the tail of the full sound. Volume automation in general can help give your attack a boost and slightly rolling off layers after the transient can make the overall weapon sound perceived as louder even though you are reducing part of the volume.

Punch often comes from mid and high-mid content as well as the attack of the sound. The low end is used to convey size. Adding a source, like a kick, punch or synthesized drum attack at the head of the sound can help to build its punch. There may be a need for one or two layers which can be split up by frequency content, with one layer providing a low sub tone and another providing the high-mid content with a quick attack. A multiband transient designer like Waves TransX or KiloHearts Transient Shaper inserted on the master bus can be tweaked to increase the attack for the high mids and reduce the sustain of the low sub.

This technique can be done in a few different ways with different tools, so you will want to experiment until you get the sound you are looking for. Another way to do this is by automating the EQ on a sound. Plugins such as iZotope Trash or dynamic EQs like Ozone offer the ability to draw in automation. In this case the idea would be to boost the low punch at the very beginning of the sound and adjust how much high mid there is on the attack. Experiment with a combination of punch source layers and transient designer / EQ automation techniques to find the right sound.

Mechanical elements cover parts of the weapon like the hammer, trigger, magazine, pin, charge up, spin of the barrel as well as reloads, weapon switch and aim. Mechanical sounds can vary based on weapon type. Dry fire sound sources, or recordings of firing the weapon without any ammunition loaded, make great source layers for the mechanical part of the sound. You can also use switches, gears, staplers, mechanical keyboards and other interesting source to create these higher frequency details that will add polish to your sound.

Fantasy weapons may require some core or plasma energy along with arcing zaps to charge up or cool down. There are usually mechanical parts of various sizes that may need to sound big and clunky.

Essentially, you are building up layers with different details until you have a weapon fire sound that feels believable. Fantasy weapons often require a big beefy sound that might feel like a kick in the chest when it fires. A rapid-fire weapon might need some aggressive movement to keep the flow of energy interesting.

Just because an object is based on the hyper-real doesn’t mean you should go overboard with processing. Keeping some realistic elements can offer a familiar anchor to the listener, which can help them identify with the object.

With an automatic weapon and rapid fire you may have some shell casings hitting the ground around the weapon fire. These can be implemented in a way that scatters the sounds around in the 3D space, which helps make the action of firing the weapon more realistic and immersive.

With layers ready for the next step, you will want to start processing. There are a few plugins that can help beef up the weapon sound. We already talked about shaping the sound with EQ when designing the fire and body, so you may want to use saturation to warm up the sound a bit. To glue all the layers together, a multiband compressor will come in handy. You can reduce the dynamic range within the compressor to add more meat to the sound. A transient shaper can help add a bit more punch to the attack of the sound.

Tails are typically dependent on the environment and the reflections from the world. Define where the weapon will be fired and this will help you determine the tail and any projectiles required for the sound. If your game is set in an urban environment, the player may feel like something is missing without reflections after firing a loud weapon. This in turn can make the weapon feel less powerful and lose all the extra value added by incorporating the other 3 parts in your design. The tail is a great place to add a unique element to your weapon fire. The short millisecond burst of the weapon fire may not allow for adding anything too creative or unique to the sound but with the tail you have much more room to work in a unique character.

The complexity of your weapon sound design will be dependent on the game’s needs. A shooter may require more detail and emphasis on weapons than a puzzle adventure where you might only have limited weapons.

The level of detail might require additional layers to the sound. This can be anything from more mechanical sounds as we discussed above or extra details like shell casings dropping on the ground below the player character. Cloth Foley for movement as the player character reloads and fires can help make the weapon fire sound even more engaging.

You see, sound design isn’t taking a sound from a library or recording and placing it in game and publishing. It’s about making sure the sound uniquely fits the visuals and brings them to life. This is all about making the atmosphere immersive for the player. When you keep that in mind, really any sound design project will make you excited to work on it.

Weapons like the AK-47 and pistols can benefit from some punch on the transient. This can be created via transient designers or EQ’s like Neutron and FabFilter. The key is to have the transient pack the punch and then let the rest of the fire and tail curve down in volume a bit so there are some dynamics and not just a wall of sound. If your magnum fire is currently a wall of sound, with not much punch on the transient and a baked-in reverb that feels like a tight space in a tunnel, try looking for drier sounds and add the reverb in the game, or bake it in within your DAW, so that all weapons have a similar sense of space.

Getting back to punch, you can add some power to the sound by layering other sound sources to sweeten the overall sound effect. In reference to weapon sounds a sweetener might be a very clean sound like a punch or explosion.

The clean up can be done using a denoiser such as iZotope RX or Klevgrand Brusfri, and it can help you add more low end to your weapon sound to give it more power. This is not only necessary to make a great sounding weapon, but it also helps it stand out in the mix.

If you find yourself working on realistic weapons that exist in the real world you will need to be mindful of how you match sound to the object. Players may be familiar with the weapons and can tell if you have placed an AK 47 sound on a G3 rifle. So you will want to do some research on the various weapons and start by using some specific reference source to help guide you. Of course you will still want to edit the weapon so it fits the requirements of the game play but always try to be as true to the original sound as you can.

The most important part of a gunshot sound is the decay. That's what makes them sound interesting, massive, and I suppose deadly. In what environment are your guns being shot? Do they ring off of walls, a building, or a hillside? Do you have distant recordings of guns in these environments? The best gun design in my opinion starts with these type of field recordings.

Layer your gun sounds with various sweeteners. In film, it's not uncommon to layer recordings of, say, a cannon, a mortar shell firing, or even thunder under the sound of even a tiny handgun. A little reverb, slap delay, compression, or sub can help thicken them up as well. Experiment with layering them with the reversed sound of an animal growl.

Tips:

  • DSP will help sell the loudness and the experience of firing a weapon in game. Weapons are powerful, and the experience comes not only from how loud the sound is but also from how the weapon kicks back. Use tape saturation to compress the high end a bit and push the lows and mids.
  • Limiters and maximizers create a louder and more full-range sound across all frequencies.
  • EQ high-end shelving for less sizzle and more beef.
  • A transient designer to dial down the tail and push the transient for more punch (e.g., a transient master plugin).
  • Adding pitch shifting to this signal chain will make the weapon sound more robust and beefy. It also steps it up a caliber.

Explosion Sound Design Practice

In this YouTube video, Marshall McGee, sound designer at Avalanche Studios, discusses designing an explosion sound using household objects as source material.

Resource

www.youtu.be/DWIrBcM_Bxo

Kicking the side of a metal dumpster, an old filing cabinet or even a cardboard box with some debris inside can be useful source that will meet expectations once you start processing the layers.

Earlier in this chapter we discussed transient staging to find a cadence that allows the individual layers to stand out rather than stacking them directly on top of each other. The sound becomes more unique as you experiment with the cadence between layers. Test out various placements of the attack, body and tail layers until you are satisfied with the flow.

Processing plays a big role when creating explosion sounds. We typically start out with distortion, saturation, multiband compression, EQ and reverb.

Manipulate stock library sounds and build layers on top of them so the sources fit the game's style.

When you can't record unique explosions, record interesting sources instead. Break the sound down and think about what it takes to create it. You will find that vocalizations, processed with distortion and pitch shifting, can add character to an explosion.

Components: transient, body and tail, just like weapons. The sound becomes unique when you experiment with the cadence between these parts. Refer back to our discussion of transient staging earlier in this chapter, where we staged the sound sources across multiple tracks so that each source has a few milliseconds of space from the other layers. Adjusting the timing of each layer allows each element to shine through and adds a unique sound and feel to the final result.
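Below is a small sketch of that staging idea, assuming NumPy/SciPy and hypothetical layer file names: each layer is nudged by a few milliseconds before the sum, so the transients are not stacked directly on top of each other.

```python
# Transient-staging sketch (NumPy/SciPy): offset attack, body and tail layers
# by a few milliseconds before summing. File names and offsets are hypothetical.
import numpy as np
from scipy.io import wavfile

def load(path):
    sr, x = wavfile.read(path)                 # mono, 16-bit assumed
    return sr, x.astype(np.float64) / 32768.0

sr, attack = load("explo_attack.wav")
_,  body   = load("explo_body.wav")
_,  tail   = load("explo_tail.wav")

# (layer, offset in ms, gain) - experiment with the cadence between layers
stages = [(attack, 0.0, 1.0), (body, 12.0, 0.8), (tail, 35.0, 0.7)]

length = max(len(x) + int(sr * ms / 1000.0) for x, ms, _ in stages)
mix = np.zeros(length)
for x, ms, gain in stages:
    start = int(sr * ms / 1000.0)
    mix[start:start + len(x)] += gain * x

mix /= max(1.0, np.max(np.abs(mix)))           # avoid clipping
wavfile.write("explosion_staged.wav", sr, (mix * 32767).astype(np.int16))
```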

Impacts: kicking objects such as a metal filing cabinet or a metal dumpster will give you resonant sounds from the kick. A cardboard box hit with a sledgehammer, or simply kicked hard, works too. Add various elements inside the box for a bit of debris sound; debris can be wood, rocks, etc. Piano soundboard hits on the low strings are another option.

A higher-pitched sound like ripping cloth or cardboard can also help, as can flapping a heavy carpet or blanket in front of a microphone for the whoosh of the explosion. Remember, as you are designing the layers you will need to use a bit of imagination to decide how it will sound. These source layers on their own won't remind anyone of an explosion, but with a few more source layers to complete the body and tail, plus some processing, you will have an explosion sound that meets expectations.

Processing plays a big part in making explosion sound effects: saturation to glue the sources together and add a bit more to the low-end frequencies, distortion to add grit and bring in some of the higher frequencies that were not present in the source material, and multiband compression to keep the combined layers under control.

For a high-pitched sound, pitch it down, remove all the high frequencies with EQ (to avoid pitch-shifting artifacts), and then run that sound through distortion and reverb to try to add back some of the high frequencies we cut out.
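Here is one possible sketch of that workflow in Python, assuming librosa, soundfile and SciPy are available; the file name, shift amount and cutoff are hypothetical, and reverb is left to a plugin or the DAW.

```python
# Pitch-down / low-pass / distort sketch (librosa + SciPy). The file name,
# shift amount and cutoff are hypothetical; reverb is left to a plugin or DAW.
import librosa
import numpy as np
import soundfile as sf
from scipy.signal import butter, lfilter

y, sr = librosa.load("ripping_cardboard.wav", sr=None, mono=True)

# Pitch the layer down an octave
y_low = librosa.effects.pitch_shift(y, sr=sr, n_steps=-12)

# Remove the remaining highs so shifting artifacts are masked
b, a = butter(4, 2000 / (sr / 2), btype="low")
y_filt = lfilter(b, a, y_low)

# Soft-clip distortion regenerates some of the high-frequency content we cut
drive = 8.0
y_dist = np.tanh(drive * y_filt) / np.tanh(drive)

sf.write("explosion_low_layer.wav", y_dist, sr)
```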

Finally, add some echo and compression to glue it back together.

Tips:

  • Multiband compression: to make the sound really punch through the mix.
  • Decay: this helps with the size of the shot and space it lives in.
  • A high frequency sound / pre-transient: add it a few milliseconds prior to the main sound.
  • A low frequency thump or layer: just to give it more body. Often, recordings need that extra little cinematic push. We often layer this milliseconds after the attack of the main sound.

Spells and Special Ability Sound Design Practice

In the video below, we will discuss how we tie in layering, transient staging, frequency slotting and effects processing from the textbook lesson.

Spell and Special Ability Design Practice.

The game's producer, designer or artists can provide insight into the characteristics of the spell or special ability. First inquire about the purpose of the ability. This means breaking down how the ability is used and what type of damage it does. Next, inquire about the visual aspect of the design. This is usually offered as a concept sketch or a video from in game. Once you have this information you can start to break down the various parts of the ability as we did with guns above. Concept art can be extremely helpful if it marks the material type for the various parts of the object. This will help you determine the sonic elements necessary for the design. Videos of the action can help determine how it moves and at what speed, how it's cast and what it does when it reaches a target.

The character using the special ability can tell a lot about the sonic requirements. Is the character good or evil? Darker elements can change the tone of the ability and create a link to a theme in the listener's mind.

Whooshes and low-end hits can provide a great base layer but you also need to incorporate the elemental aspects of the visual into the sound. Choice of layers and how they are laid out can really make a difference in the final output.

Let's discuss the elemental aspects that spells and special abilities in games may require. In the card game Runewards we were tasked with crafting attack sounds for VFX that varied in texture and tone. Fire, ice, water, earth and ethereal card attacks each required a different approach.

Fire elements were designed with whooshes, flamethrowers, torches, glass, crackling granular synthesis and animal roars. The animal sounds add an extra oomph to the fiery blasts that accompany the VFX. If the visuals do not resemble an animal, you will want to manipulate the source so it sits somewhat camouflaged in the mix.

Electricity VFX can benefit from a combination of water and electric zapping sounds. The water will help add the forward blast the sound might need.

Thinking about what you want to hear in the lows, mids and highs of the sound can help you better stack your layers. The whooshes and thumps will cover the low end, while the mids might consist of the meatier substance that pushes the spell forward, leaving the crack of the spell fire in the high end.

Oftentimes, source that is directly linked to the element you are trying to design may not work on its own. For ice spells we captured recordings of ice impacts, ice scrapes and shard movement, but also used a bit of glass and crumbling rocks. Aerosol sprays were helpful for some of the blast power during the spell cast.

Experimenting with designing special ability and fantasy spell sounds is a great way to flex your sound design muscle and improve your skills. If you were to ask five different designers how they craft these magical sonic elements, they might each describe a different approach, since a lot of it is thinking outside the box and testing the limits of software and hardware.

As we mentioned above, swooshes and whooshes are good starters for the casting part of the ability or spell, but first determine how much punch the casting requires. A snappy, low-end impact sound can help the sound stand out in the mix by emphasizing punch and clarity. To add some punch to your cast sound, choose an impact sound that has a good bit of thud to it and duplicate it several times so you can apply separate processing to the layers. Pitch some of the layers up, using time-linked pitching to shorten them into quick transient pops, and pitch other layers down into a low sub thump. A bit of distortion can help these layers avoid being lost in the mix.

Once your punch is designed, you can continue with the whooshes to add movement to the sound. Torch movement and other pass-by sounds can be layered in with your whooshes for a more unique flavor. Animal growls can also help sell a particular feel for the ability. Let's say the spell is linked to a character with a demonic presence; animal growls, pitched and stretched, can help sell the demonic side of the ability.

Whooshes may lack movement as is, so adding some tremolo can help provide a sense of speed. Duplicating layers and reversing one against the original can also make for an interesting effect. Movement and stereo width are important to keep in mind when creating special ability sounds. If the game is first person, think about the initial impact or attack of the sound and what the stereo width needs to be. The sound's layers might need to become narrower as the sound moves past the attack and into the body or tail, and then wider as it hits the target. Stereo panning and delay on the left and right channels are a great way to add width to a sound. The change in stereo width will offer the perception of a big sound starting in front of the player character and moving forward to the target.
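A minimal widening sketch, assuming a mono layer and hypothetical file names: the mono source is sent to both channels and one side is delayed by a few milliseconds (a Haas-style trick), which reads as width rather than echo.

```python
# Stereo-widening sketch (NumPy/SciPy): pan a mono layer to both channels and
# delay one side by a few milliseconds (Haas effect). Names are hypothetical.
import numpy as np
from scipy.io import wavfile

sr, x = wavfile.read("spell_whoosh.wav")       # mono, 16-bit assumed
x = x.astype(np.float64) / 32768.0

delay_ms = 12.0                                # 5-30 ms reads as width, not echo
d = int(sr * delay_ms / 1000.0)

left  = np.concatenate([x, np.zeros(d)])
right = np.concatenate([np.zeros(d), x * 0.9]) # slightly quieter delayed copy

stereo = np.stack([left, right], axis=1)
stereo /= max(1.0, np.max(np.abs(stereo)))
wavfile.write("spell_whoosh_wide.wav", sr, (stereo * 32767).astype(np.int16))
```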

Projectiles might be necessary if the VFX shows an object flying through space over a distance. These can be composed of pass-bys with Doppler effects on them.

Another important part of the ability is the impact. What does it sound like when it hits or misses the target? How does the spell dissipate once it hits the target? Pay attention to the pitch and intensity of the sound as it travels to the target. Making the sound bigger or more prominent just before it fizzles out can make a difference in the dynamics of the sound and the impact it has on the listener. Using punch and kick impacts, pitching them down and cleaning them up with a bit of EQ, can make for a great low-end impact layer. If you feel it needs a tonal quality, you can take a few milliseconds of metallic hits or synth blips and layer those at the transient of the impact for a more unique sonic signature.

If your layers have an element you want to use but there are clicks or a screeching element you don’t like about it you can use spectral repair to clean the layer instead of taking the time to find another suitable source.

Later in this chapter we discuss synthesis as a sound design tool and explore the various types of synthesis. Crafting unique sounds with synth source is often dependent on the flexibility of the software or hardware. U-he makes some very versatile software synths like ACE and Bazille, which offer a huge amount of modulation potential.

When working with synthesized sounds it's often a good idea to have a model or reference in the form of a real world asset so your design can provide the listener with a bit of familiarity to help them better interpret the sound.

Individual layers may need some pitch shifting and time stretching or reversing. Reverb and delay can be used to set the spell in the scene. Modulation like tremolo will help add movement to the sounds and don’t forget EQ to clean up and balance your layers.

Critical analysis of the real-world sound, broken down into parts, can help you synthesize the sound in steps. The reference sample can also be viewed with a spectral analyzer so you can determine the frequency trend of the sound. This will help you apply processing that more closely bridges the gap between the synthesized sound and the real-world reference. Lastly, when you work with software synth engines that are designed to emulate vintage synths, or if you are working with a vintage hardware synth, it's a good idea to run the output through a compressor to control random transients or stray peaks. It can also be useful to run the sound through a de-click restoration app to avoid any unnecessary noise.
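One simple way to view that frequency trend, assuming SciPy and Matplotlib and hypothetical file names, is to overlay the average spectra of the reference and the synthesized draft:

```python
# Spectral-comparison sketch (SciPy/Matplotlib): overlay the average spectra of
# a real-world reference and a synthesized draft. File names are hypothetical.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import welch

def avg_spectrum(path):
    sr, x = wavfile.read(path)                 # mono, 16-bit assumed
    x = x.astype(np.float64) / 32768.0
    freqs, psd = welch(x, fs=sr, nperseg=4096)
    return freqs, 10 * np.log10(psd + 1e-12)   # dB scale

f_ref, ref_db = avg_spectrum("reference_real.wav")
f_syn, syn_db = avg_spectrum("synth_draft.wav")

plt.semilogx(f_ref, ref_db, label="reference")
plt.semilogx(f_syn, syn_db, label="synth draft")
plt.xlabel("Frequency (Hz)"); plt.ylabel("Level (dB)")
plt.legend(); plt.title("Where does the synth need more or less energy?")
plt.show()
```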

Creature Sound Design Practice

In this video, Akash Thakkar shares his techniques for nightmarish, otherworldly monster sound design: https://youtu.be/PStJp4idz00

In this video we demonstrate how we go about creating creature sound design.

Creature Design Practice

As the sound designer, we ask the game designer or producer what they imagine this voice to sound like. For the purpose of this exercise, let's say you have been given full creative freedom. Even with the green light to craft your own direction, it can be helpful to have an image to work from. So as a starting point, let's imagine this tree creature is something like Treebeard from The Lord of the Rings. For a bit more description, we will say the character is male, with his face embedded in the trunk of the tree. The various branches are mobile, and the roots enable the tree creature to slowly move about the terrain. You might be wondering what the movement has to do with the vocal sound. Well, it's important to inspect the character as a whole to get an idea of how to proceed with the voice design. Getting back to this tree creature, let's say he is 850 years old and an enemy character in the game. His bark is old, dark and crumbling, and he attacks by swinging a branch at the player character, similar to a lasso. With a good grip around the body of the target, the tree creature quickly lifts them up and slams them on the ground while letting out a deep bellowing yell. The yell is what you are tasked with designing. Armed with all this information about the creature, let's walk through the thought process as we approach the voice design.

Since the designers have not come up with a plan for the inner workings of the creature, we are free to assume an imaginary vocalization system. Even though this is a fantasy genre, this tree character might not have the soft-tissue innards that make up a vocal cord system, so we have to apply some creativity and research. A quick Internet search reveals a dinosaur named Parasaurolophus, which has a hollow tube in its crest that leads to the nose and throat. Paleontologists are unsure if the dinosaur had vocal cords, since soft tissue does not survive fossilization. This has led some researchers to hypothesize that the tube was a resonant chamber for vocalization.

Based on this information, a human voice yelling through wood might be a good idea for experimenting. A wooden flute or a wooden storage box with the bottom cut out could do the trick. The experiment will consist of stacking several wood boxes or flutes together and trying out different microphone placements until you get a nice resonant wood sound. Try out various vocalizations and different shapes and sizes of containers, if they are available. This could help sell the creature's size as he pushes the sound up the trunk and out through the mouth.

For the next layer, we can think about some mouth or neck movement. Place your hand lightly around your neck and speak. You can say anything really, but it would be good to mimic the voice of the creature so you can feel how your neck moves as you vocalize. After this little experiment we can think about how the jaw or "neck" of this tree creature might move as he bellows. Some wood creaks can be layered into the voice design for those additional details. We don't want them to get in the way of any Foley movement we might design later, so keep them well blended in the mix.

Pitching the recorded voice down quite a bit, and even using a little tremolo and distortion, can help to age the sound so it fits the 850 years the tree has lived. Starting out with a voice artist who already has a nice booming voice will be beneficial, but if you have to make do with the resources you have, be sure to capture the voice with a high-quality, wide-frequency-response microphone and preamp, and at the highest sample rate and bit depth your equipment can manage. If you don't have access to this type of equipment, don't let it stop you from going after the sound. You can still pitch and stretch your source and clean it up using some of the techniques we discuss later in this chapter.

To add a bit more oomph to the voice, we could duplicate a layer, pitch the copy down even further and nudge it slightly ahead of the original. This can help fatten the sound and give it more power. Try a low-pass filter on the copy to remove the sibilance (Glossary), and you may want to high-pass the original to push out the 'sssss' in the sound a bit more.
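A small sketch of that exact move, under assumed librosa/SciPy availability and hypothetical names: duplicate the yell, pitch the copy down further, low-pass it, nudge it a few milliseconds ahead of the original and blend.

```python
# Layer-thickening sketch (librosa + SciPy): duplicate the creature voice,
# pitch the copy down further, nudge it ahead, low-pass it and blend.
# File name, shift, nudge and cutoff are hypothetical starting points.
import librosa
import numpy as np
import soundfile as sf
from scipy.signal import butter, lfilter

y, sr = librosa.load("tree_creature_yell.wav", sr=None, mono=True)

copy = librosa.effects.pitch_shift(y, sr=sr, n_steps=-7)   # a fifth lower
b, a = butter(4, 1200 / (sr / 2), btype="low")             # keep only the weight
copy = lfilter(b, a, copy)

nudge = int(sr * 0.015)                                     # 15 ms ahead of the original
mix = np.zeros(len(y) + nudge)
mix[:len(copy)] += 0.8 * copy                               # copy starts first
mix[nudge:nudge + len(y)] += y

mix /= max(1.0, np.max(np.abs(mix)))
sf.write("tree_creature_yell_thick.wav", mix, sr)
```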

When it comes to reverb we might want to use a convolution reverb and try a thunderclap sample for the impulse response. All of these ideas are just that, ideas to experiment with to find a unique sound the players will relate to and remember.

Of course, recording the voice source in closer proximity to the microphone can help boost the low end. You can also boost with EQ around 150 Hz to bring in some of those lower frequencies. You will want to use your ears as you adjust to avoid making things too muddy. Compression and saturation can help glue all the layers together, and saturation on the low-end source layers can give them a boost.

Approach the compression in stages so you don't overly squash the sound in one pass. Smaller stages will help keep some dynamics alive in the voice. Parallel compression might be a nice final touch: add a send to a dedicated aux channel, process the signal with a fast attack and release, and blend it back with the original.
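Here is a minimal parallel-compression sketch in NumPy under hypothetical settings: a heavily compressed copy with a fast attack and release is blended back underneath the dry voice.

```python
# Parallel-compression sketch (NumPy/SciPy): heavily compress a copy with a
# fast attack/release and blend it under the dry voice. Settings are hypothetical.
import numpy as np
from scipy.io import wavfile

sr, x = wavfile.read("tree_creature_yell_thick.wav")   # mono, 16-bit assumed
x = x.astype(np.float64) / 32768.0

threshold_db, ratio = -30.0, 6.0
attack_ms, release_ms = 1.0, 50.0
a_att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
a_rel = np.exp(-1.0 / (sr * release_ms / 1000.0))

env = np.zeros_like(x); prev = 0.0
for i, s in enumerate(np.abs(x)):
    coef = a_att if s > prev else a_rel
    prev = coef * prev + (1.0 - coef) * s
    env[i] = prev

level_db = 20 * np.log10(env + 1e-9)
over = np.maximum(level_db - threshold_db, 0.0)
gain_db = -over * (1.0 - 1.0 / ratio)                   # downward compression
wet = x * (10 ** (gain_db / 20.0))

out = x + 0.4 * wet                                     # blend the squashed copy back in
out /= max(1.0, np.max(np.abs(out)))
wavfile.write("yell_parallel_comp.wav", sr, (out * 32767).astype(np.int16))
```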

As always, have fun with it and think outside the box. Experimenting will uncover new workflows for you to deploy the next time you are tasked with creating a similar sound.

Tips: Pitch shifting and time stretching are important processes for creature sounds.

Vehicle Sound Design Practice

In this video, Loïc Couthier, sound designer at PlayStation Europe, shows how he used MASSIVE to create the engine sounds for WipEout Omega Collection.

Resource:

https://youtu.be/LNZ1NH9TAWc

Recording vehicles requires microphones that can handle high SPL, and good-quality windjammers to protect the recording and the microphone from heavy wind when the car is driving along a roadway. Typical questions about the environment come up when planning a session. Will there be other traffic to worry about? Will weather conditions interfere? Do you have a long enough stretch of road to capture higher speeds? How will you monitor the recording to avoid clipping?

The microphones you choose need to be small enough to attach to the car and stay put. DPA miniature microphones are a good choice, as they have a transparent sound and handle the wind well. Low sensitivity is important, as is high SPL handling; the wind and bumps on the road will ruin the recording if the microphone is overly sensitive to them.

Managing to record great vehicle source can be a costly venture, since you will need different microphones to cover the various parts of the vehicle: a stereo pair for capturing the exterior sounds and another pair for inside the vehicle. A microphone on the exhaust and one for the engine are also necessary. Setting up a microphone to record tire on road or gravel, as well as one for the suspension, will help capture all of the smaller details that go along with the vehicle.

Before you attach the microphones, it's best to do some listening by getting your ear in front of the different parts of the car to find the sweet spots. There are a few options for taping down the microphones and cables, but be sure you use something that won't ruin the paint on the vehicle. Once the microphones are attached, some tests need to be done to ensure the limiters are working and the levels are correct.

The field recorder you choose should allow for high-quality capture with limiters. Capture the source at 96 kHz or higher to ensure the best quality, which will translate well during editing and processing.

Prior to setting up the session, a plan must be mapped out to ensure the necessary source is captured. This will mean working with the game designer, producer or audio director to understand the needs of the game and how the assets will be implemented.

The view of gameplay will play an important part in what you will need to capture. Racing games often allow the player to switch between interior, front and rear views. First-person or third-person perspective also matters.

Implementation plays a big part in understanding how much and what type of audio needs to be captured. Racing games these days are highly interactive and require a good understanding of the implementation tools, the gameplay and the vehicles. If the implementation will make use of loops and pitching with blend containers, the source will need to have longer segments at a steady RPM, and a range of RPMs will need to be captured as well. Don't forget about the ramps, which will need to be one-shot sounds that are triggered with gear shifts.
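As a rough sketch of what a blend container does with that source, here is an equal-power crossfade between two steady-RPM loops driven by the engine RPM. The loop names and RPM anchors are hypothetical, and in practice middleware such as Wwise or FMOD handles the actual playback.

```python
# Blend-container sketch: equal-power crossfade between two steady-RPM loops
# plus a pitch ratio for each, driven by the engine RPM reported by the game.
# Loop names and RPM anchors are hypothetical.
import math

LOOPS = [("engine_2000rpm_loop", 2000.0), ("engine_4000rpm_loop", 4000.0)]

def blend(rpm):
    """Return (loop_name, gain, pitch_ratio) tuples for the current RPM."""
    (lo_name, lo_rpm), (hi_name, hi_rpm) = LOOPS
    t = min(max((rpm - lo_rpm) / (hi_rpm - lo_rpm), 0.0), 1.0)
    gain_lo = math.cos(t * math.pi / 2)         # equal-power crossfade
    gain_hi = math.sin(t * math.pi / 2)
    return [
        (lo_name, gain_lo, rpm / lo_rpm),       # each loop is repitched toward
        (hi_name, gain_hi, rpm / hi_rpm),       # the actual engine speed
    ]

for rpm in (2000, 2500, 3000, 3500, 4000):
    print(rpm, blend(rpm))
```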

Some racing games utilize granular synthesis to reproduce engine sounds in game. While this method is improving sonically, the well-trained ear of a car enthusiast might still feel like something just isn't right.

In the end, it seems that a combination of loops and grains works best to create a flexible and realistic sound. Crankcase Audio makes a plugin named Rev that works across game engines and middleware. Their website claims 'It works in the frequency domain to allow bi-directional and variable speed playback of recorded accelerations, rather than simply looping a flat section at a static RPM.' Basically, the plugin can scrub through the audio, both backwards and forwards, to offer the illusion of dynamic RPM changes.

UI and HUD Sound Design Practice

In the video below, we will discuss how we tie in layering, transient staging, frequency slotting and effects processing from the textbook lesson.

UI Design Practice

The player may be met with a variety of non-diegetic buttons, sliders, window popups and scrolling interfaces within the game's menu screens. In game, diegetic interfaces or heads-up displays (HUD) can be found in the form of holographic displays like ammo or health pickups, or maps and meters built into the player character's suit. You could even go so far as to say that weapon reloads and swaps might be categorized as part of the user interface, as they provide feedback to the player about which weapon they are armed with and how much ammo they hold.

We already discussed how Hearthstone's sound designers utilized wooden elements and latches to capture the tavern-like feel. The UI sound design in Irrational Games' BioShock Infinite incorporates mechanical sounds like gears and latches; a 'steampunk' theme comes to mind when going through the menu options, and this sonic choice fits the rest of the game audio and the visuals. Warner Bros.' LEGO Batman UI sound design incorporates the sound of plastic LEGO pieces making contact, while casual games like King's Candy Crush rely on a fun, pleasant, well-polished feel for the design. It's all about finding the right sonic character to unify the sound palette and add to the immersion. A game that is designed around a cooking or restaurant mechanic might fare well with cutlery and knife scrapes or swooshes as source in its design, while hyperreal sci-fi games will work best with glitchy and synthesized sounds.

There are quite a few parameters sound designers can use in combination to craft their UI sounds.

Start by defining the direction of the sound and whether it will have a positive or negative connotation.

Even negative feedback in games needs a sound that warns the player without feeling overly dreadful, unless of course that is the intended purpose. This is where working with the game's designer, audio director or producer can really help unify the team's vision for these sonic elements. Even the best sound designers work with feedback and take direction, so it's a good idea to learn to work this way.

The duration of the sound is important. If the player interacts with a button but the sonic response is too short or way too long it can ruin the transfer of information. Different users will interact with games at varied speeds which means some testing and thought should go into the duration of the sound. Of course if there is an animation in response to an interaction, the sound should sync accordingly.

Once a sketch of the design is defined by duration, movement and mood, the designer can start to think about pitch, timbre and volume. If the player tries to press a button for a function that is not yet available in game, the sound designer can choose to present the simulation of a locked chain rattling or a dissonant electronic sound that falls backwards in pitch. There are many choices, and the game's unique visuals will help determine the path taken.

As the designer creates the various sounds for interactions like accept, back, upgrade and window popup / close they should be mindful of the sonic palette. The sounds need to appear to come from the same or a like group of source material as well as feel like they are from the same production. Implementing random sounds from different SFX libraries without editing can have the reverse effect in that the production values, while high quality, can feel different even to an untrained ear.

A sound designer also has a choice of using mechanical source, synthesized source or a hybrid of the two. Synths offer a variety of parameters to shape the sound. Later in this chapter we review synths as a sound design tool, but let's take a quick look at some parameters that can shape the sound of your UI design; a small synthesis sketch follows the list below.

  • Synthesis types should be considered first, as they can affect the direction of the sound overall. FM synthesis has metallic elements to the sound it produces and could be a good direction for sci-fi genre games.
  • Envelopes shape sound over time and can be assigned to other parameters such as amplitude, pitch/frequency, and timbre. This parameter might be a good place to start experimenting. Try setting the tone for your design by tweaking pitch and amplitude.
  • Filters are great for creating accept/back interface sounds. Filter sweeps can produce some solid interface open or close actions by adding the intended movement to the sound.
  • LFOs can add some movement to sounds like sliders or scrolling effects. They also work well for computing and scanning sounds.
  • Delays or subtle echoes can be used in the effects chain to help give the UI sounds a different sense of space compared to the other game SFX.
  • Reverbs are necessary to help place the sounds in the environment, giving them the correct spatial information.
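Here is the small synthesis sketch referenced above: a short "accept"-style blip built from a sine with a rising pitch envelope, a fast amplitude envelope and an opening filter sweep. All values are hypothetical starting points.

```python
# UI "accept" blip sketch (NumPy/SciPy): a short sine with a rising pitch
# envelope, a fast amplitude envelope and a low-pass sweep opening up.
# All parameter values are hypothetical starting points.
import numpy as np
from scipy.io import wavfile

sr, dur = 48000, 0.18
t = np.linspace(0, dur, int(sr * dur), endpoint=False)

freq = 600 + 500 * (t / dur)                    # pitch rises 600 -> 1100 Hz
phase = 2 * np.pi * np.cumsum(freq) / sr
tone = np.sin(phase)

amp = np.minimum(t / 0.005, 1.0) * np.exp(-t / 0.06)   # 5 ms attack, fast decay

cutoff = 800 + 6000 * (t / dur)                 # opening filter sweep
alpha = np.exp(-2 * np.pi * cutoff / sr)        # crude time-varying one-pole low-pass
y = np.zeros_like(tone); prev = 0.0
for i in range(len(tone)):
    prev = alpha[i] * prev + (1 - alpha[i]) * tone[i]
    y[i] = prev

blip = y * amp
wavfile.write("ui_accept.wav", sr, (blip / np.max(np.abs(blip)) * 32767).astype(np.int16))
```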

Generally we keep delay and reverb use to a minimum in UI sound design, unless it really works for the genre. These effects can push the sound back in the mix a bit, so they should be used carefully to ensure the purpose of the sound isn't redefined.

A more organic approach might be more fitting for the game. In a game like BioShock Infinite, the source material comes from objects found in the real world: gears and metal latches form the body of the interface sounds as the player clicks through the menu, ready to enter the steampunk game world. This source can be found in libraries or recorded from props. Later in the chapter we discuss compression as a sound design tool and explain how to add a bit more punch or fatten up thinner sounds like latches and switches.

Of course, a combination of the two can work with the right game. The point is to avoid using an overly synthesized UI sound set when the game has more of an organic feel to it, and vice versa.

UI sounds can vary from game to game, so take the time to do some critical listening of the UI sounds in a wide range of game genres to understand how each sound artist approached the design.

Footstep Sound Design Practice

In the video below, we will discuss how we tie in layering, transient staging, frequency slotting and effects processing from the textbook lesson.

Footstep Design Practice

To create highly detailed footstep and movement assets we need to think about the terrain or surface material and if there are any additional surface elements on top of that material. Think of these elements as leaves on an asphalt walkway. Next, we need to consider the shoe type of the character doing the stepping. Lastly, we consider the cycle of movement as a walk or run. The size of the character also should come into consideration, as the weight should be perceived by sound.

Depending on the level of detail that goes into the game and the production time or budget, the NPCs might have their footsteps grouped into categories like civilians, enemy, enemy 2 and boss. There are no set rules, and it is all about what works best for the game.

Some games do without footsteps entirely. This might be the case when the implementation lacks a dynamic mix, which we will cover in Chapter 8, and footsteps would be too loud amongst things like weapon fire and explosions. A system with a proper dynamic mix would allow footsteps to stand out when there isn't much more than ambience and music in the background, and attenuate the footsteps in the midst of battle so the more important sounds stand out.
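A toy sketch of that dynamic-mix idea, with hypothetical bus names and levels: the footstep bus is ducked whenever higher-priority sounds are active. In a real project this logic would live in the middleware mix rather than in script.

```python
# Dynamic-mix ducking sketch: attenuate the footstep bus while higher-priority
# sounds (weapons, explosions) are playing. Bus names and levels are
# hypothetical; middleware side-chains or HDR mixing would do this in practice.
HIGH_PRIORITY = {"weapon_fire", "explosion"}
FOOTSTEP_GAIN_NORMAL_DB = 0.0
FOOTSTEP_GAIN_DUCKED_DB = -12.0

def footstep_bus_gain(active_sounds):
    """Return the footstep bus gain in dB for the current mix state."""
    if active_sounds & HIGH_PRIORITY:
        return FOOTSTEP_GAIN_DUCKED_DB
    return FOOTSTEP_GAIN_NORMAL_DB

print(footstep_bus_gain({"ambience", "music", "weapon_fire"}))  # -12.0 in battle
print(footstep_bus_gain({"ambience", "music"}))                 # 0.0 while exploring
```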

Armor movement Foley can be used to relay the type of armor the player’s character has. This could be a nice added detail to show value when the player’s character levels up and gains new armor or weapons. Imagine your character sounding like there is some light leather movement that accompanies the footsteps as level 1 in game. By level 15 the character has chains and more leather movement to relay a sense of new power and protection that reflects the visual upgrade.

When a sound designer sets out to capture unique footsteps, a session might be set up with a "Foley walker," proper shoes and terrain materials. A walker in this sense is someone who is experienced in performing character movements to picture. You may be wondering, "aren't we all experienced in walking?"

It's true, anyone who has the ability to walk and has been doing it for quite some time can be considered experienced, but try setting them up in front of a microphone and you will find it isn't that easy. We all get a bit nervous in front of the microphone, and our gait can be affected by these nerves. If the walker you hire for the session hasn't worked in this area before, the results might be less than stellar. We have heard some first attempts at recording footsteps, and a lot of the time the instinct is to stomp to get a loud enough sound. While stomping would, in theory, make for a louder captured source, think for a moment what stomping would sound like attached to a character simply walking on asphalt. It wouldn't make sense, so practicing and experimenting with capturing a natural heel-to-toe movement will get you on your way to high-quality footsteps.

Another thing to consider is the amount of source you want to capture in your session. Always record more than you feel you need. Having extra source to edit will be essential as a lot of the source might be edited out anyway. Be sure to capture lighter/harder, softer, louder steps in a variety of takes to allow for different choices in the editing phase. Scuffs, jumps and lands are also something you should take the time to record. Your game might require the character to jump or the walk cycle might benefit from a scuff when the character stops.

During the session, as you do the standard checks of environmental noise and input levels, be sure to listen for cloth movement from the walker's outfit. Tie back any loose material to avoid capturing it along with the footsteps on the terrain. Ideally, the cloth or armor movement sounds will be captured as separate layers for more flexibility during editing.

The walker should walk in place to avoid capturing footsteps that move away from the microphone. If you find walking in place too difficult, try putting the shoes on your hands and mimicking walking this way. If you give this technique a try, be careful not to capture cloth movement produced by your shirt, or breathing sounds.

If you have a few microphone options, set up several so you have more choices when editing. Sometimes it can be difficult to know during the session which source will work best. Backup microphones can be useful in the event the walker bumps a microphone or the distance to the microphone turns out to be too close. Different positions and pickup patterns can provide enough flexibility in editing.

Levels are often a bit of a struggle when capturing quieter source like footsteps and cloth. Recording in an acoustically treated room, or better yet an isolated booth, can make the noise floor less of an issue. If you don't have this type of location to record in, you will want to choose a microphone with better sensitivity and a good-quality preamp. Triton Audio and a few other manufacturers make in-line microphone preamps that can add around 20 dB of clean gain. This will bring out more detail in the sound you capture as well as give you enough amplitude to work with. Don't forget, a condenser microphone with phantom power will offer better pickup for these more delicate sounds than a dynamic microphone. The pickup pattern can also define how focused the captured sound will be: a shotgun microphone or a hypercardioid pattern will focus on the source you are recording and avoid picking up background ambiences or noise. The key is to capture the source as close and dry as possible. Effects can be applied during editing or implementation, and having reverb already on the sound could make things more difficult.

Terrain is the last thing we will cover here. How do you manage to get the various terrains? We talked about a Foley pit as an option in Chapter 2 but building a proper one can be difficult. There are a lot of great Foley studios that have all of the terrains ready to go but if your budget doesn’t work for this option you can go outdoors and record on different terrains. This of course can introduce a lot of environmental noise so you would need to plan the best location and time of day to set up the session. Another option is making the best out of the room you have available to record in and bring some terrain indoors. Various tiles, terracotta, soil and carpet can be found at hardware stores. You only need a small amount so it should be budget friendly. Bringing some elements from outdoors into the room you use to record could also work. We have done that quite a few times with our DIY Foley pit. It’s a simple 2x4 area bordered by wood and an insulation material to avoid having the sound resonate through the wood.

However you decide to approach Foley for footsteps, be sure to explore your options and plan your session before jumping into it. Foley can be a lot of fun to experiment with when the time allows. Having a plan in place before a tight deadline approaches will help you feel more prepared as you enter production.

Chapter 4


Voice Production

Editing Dialog

In this video we demonstrate some basic examples of sibilance and plosives.

plosives and sibilance

Here are some additional reference links for dealing with sibilance and plosives.

www.soundonsound.com/sound-advice/q-how-can-i-deal-plosives

www.theproaudiofiles.com/vocal-sibilance/

In this video we demonstrate the basics of dialog editing.

Tutorial dialog editing

Example of a marked up script

Image showing an example of a marked-up script

Mastering Dialog

The final process of editing and mastering audio assets for in game can vary from studio to studio. It’s best to inquire what will specifically be required of you before you get started.

Editing will typically include file naming, cleaning up the tops and tails of assets, volume automation and removing any problems such as mouth noises, bumping the microphone stand, throat noises, plosives, sibilance, breaths etc.

Mastering the assets usually includes putting the final polish on them and normalizing or balancing levels so you have low, normal-speaking-volume and loud categories for sounds to fall into. Compression, EQ and any effects chains will most likely be part of the mastering process. Cutting a bit around 250 Hz with EQ can clean up low end that would just add mud to the mix. An additive EQ boost between 1 and 3 kHz can add clarity to the dialog and help it stand out in the mix, or cutting some of the high-mids may work best if the voice sounds nasal. Compression can provide an extra boost to thinner dialog. For overly dynamic performances, apply a compressor with a very short attack at a 2:1 ratio to tame the peaks.
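As a sketch of those EQ moves, here is an RBJ-style peaking filter in SciPy used to cut around 250 Hz and boost around 2 kHz; the gains, Q values and file name are hypothetical, and compression would follow later in the chain.

```python
# Dialog EQ sketch (NumPy/SciPy): an RBJ peaking filter used to cut around
# 250 Hz and boost around 2 kHz, as described above. Gains, Qs and the file
# name are hypothetical; compression would follow in the chain.
import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

def peaking_eq(x, sr, f0, gain_db, q):
    """RBJ cookbook peaking EQ."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    b = np.array(b) / a[0]
    a = np.array(a) / a[0]
    return lfilter(b, a, x)

sr, x = wavfile.read("dialog_line_012.wav")       # mono, 16-bit assumed
x = x.astype(np.float64) / 32768.0

x = peaking_eq(x, sr, 250.0, -3.0, 1.0)           # clean up mud
x = peaking_eq(x, sr, 2000.0, 2.0, 0.8)           # add clarity around 1-3 kHz

wavfile.write("dialog_line_012_eq.wav", sr, (np.clip(x, -1, 1) * 32767).astype(np.int16))
```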

Chapter 5


Essential Skills for the Game Composer

Basic Loops and Stingers

Let’s dive into the main four elements that must be consistent between the first and last bar in order for a cue to loop:

Instrumentation - This refers to the number and type of instruments that are playing at the beginning and end of the loop - they should be consistent. However this also extends into the timbres that these instruments are playing as well. In the example below you’ll see that looping works best if, for example, the strings start and end with the same articulation (pizzicato, legato, etc.).

Dynamics - How loud or soft each instrument is playing. Linear cues that start soft and end very loud are quite common. But a loop that begins at ppp and ends in fff will sound unnatural and distracting to the player.

Density - Another important factor, density loosely refers to the number of instruments, and their perceived "thickness," in your arrangement. You may want to end a track with a big bang (tons of instruments all playing at once), but the loop will sound unnatural if it moves from a few highly dense and energetic bars and then drops abruptly low again. However, the opposite can sometimes work if you have a strong first bar; in this case the ending will have to feel like a buildup, or a crescendo back into bar 1. If the contrast is too great, it will be easily noticeable as a loop.

Tempo - The speed of the musical cue, as measured in beats per minute (or bpm). The tempos at the beginning and end of the loop have to match. With these four elements in mind, let's pause here to create a simple melody from scratch and work out a method for looping it.

Now, these elements alone should be sufficient to make a cue loop. However, in more complex cases where these are not sufficient and something is still off in your loop, you will need to address a fifth element: voice leading.

Voice leading - The way that each voice in a cue moves from chord to chord. This is probably the most important thing to consider in game music orchestration (especially when writing loops) as we shall see in Chapter 7. Awkward jumps in a voice can be very obvious to the listener. Because game music is dynamic, it should be voiced as smoothly as possible at transition or loop points. If you aren’t sure of your voice leading, zoom into each bar and watch how the top voice transitions. Is it melodic? Does it take the smallest jumps possible from chord tones to chord tones? Adjust as necessary and then move onto the next highest voice and so on.

Now let’s look at some examples:

In the following example (Ex 5.1) you will hear a simple puzzle loop. Open it up in your DAW (Digital Audio Workstation), import the tempo, and set it to loop. You will hear that all of the above criteria are met and there are no abrupt clicks or pops. Looking at the reduced score in Ex 5.2 (note that the harp part is notated as "piano") you should see that the dynamics move around within the piece itself, but in the last bar the dynamic marking is exactly the same as in the first bar. Notice that the instruments and articulations in the first bar and the final bar are exactly the same to give the loop as much continuity as possible. As viewed in the score, the voicings are carefully planned out so that there are no awkward leaps from the final note back to the first note in the harp.

Ex 5.1

Ex 5.2 - Download Now (PDF 89KB)

Ex 5.3 is an audio file of a simple melody for solo violin. This melody could be used as a basis for a theme, or for a gameplay track. This example is much shorter than what you would normally write for a fully developed cue, but let’s use it for now as a basis to understand looping. Download and import Ex 5.3 into your DAW and set it to loop. Is it obvious that the melody is looping every time it starts over?

Ex 5.3a

The answer is yes, and for a few reasons. First, looking at the sheet music (Ex 5.3b) the melody ends on the same note it begins with, which makes the loop obvious. Second, the dynamics start off at mf, but the final bar is at f, so the dynamics are not consistent at the beginning and the end of the loop. The density is also not consistent because the final bar includes a double stop (two notes played simultaneously), which is much denser than the first bar due to the extra note. It sounds great as a linear ending to this melody, but it doesn’t work at all as a loop. Lastly, the articulations are different at the beginning and the end of the loop. The loop starts with a pizzicato plucked string and ends with an arco bowed string. The timbres are so disparate that it may as well be two separate instruments playing. Let’s rewrite this melody so that all of the parameters mentioned above are properly met.

Ex 5.3b - Download Now (PDF 17KB)

Ex 5.4 - Download Now (PDF 17KB)

Ex 5.4 is likely the simplest solution to the problem. We have added one extra bar to the melody so that all four parameters are adequately brought back to mirror bar 1: the dynamics move back to mf, the melody is changed to include a descent back down to the starting note of D which makes the voice leading more consistent, and the articulation is switched back to pizzicato. However, this only works if you have the freedom to change the length of your loop. Ex 5.5 is another solution which keeps the articulation and dynamics consistent throughout to satisfy the looping parameters. Use your DAW to compare and contrast each loop.

Ex 5.5 - Download Now (PDF 16KB)

Let’s take this example one step further and add a chordal instrument. Ex 5.6 shows the same violin melody with a piano accompaniment. The piano part follows the changes in dynamics/volume but at a slightly lower level to keep the violin from being overshadowed. The accompaniment is also of consistent density and articulation to facilitate looping. The voice leading in this example is smooth because there is as little movement as possible between the four voices in the piano. Notice also that the piano drops out on the last beat of the fifth bar. This is to highlight the eighth notes in the violin, which are now acting as a pickup back into the start of the loop.

Ex 5.6 - Download Now (PDF 19KB)

Ex 5.7 and Ex 5.8 are examples of loops that fail to cohere. In Ex 5.7 the right hand of the piano makes an arbitrary leap to a major 6th in the final bar, breaking the flow of the loop with poor voice leading. The second example lacks density consistency because of the added voices in the last measure. As mentioned earlier, if there is to be a change in density between the first and last bar of a loop, it sounds more natural to add voices at the start of the loop. When the last moment of a cue is highly dense and then the density disappears at the loop point, the cohesion breaks.

Ex 5.7 - Download Now (PDF 43KB)

Ex 5.8 - Download Now (PDF 39KB)

Ex 5.9 is a short example of a stinger. This stinger in particular would work as a death stinger due to the “finality” of it. Notice that unlike a loop, this stinger can change dynamics and articulation at the composer’s discretion. Here the dynamics increase steadily throughout, and end in fortissimo. We also end with another double stop, which increases the dynamics and density of the stinger. If you listen carefully you can also hear the note hold on the last bar. This would not be appropriate with a loop because the rhythmic rubato would feel uneven when transitioning back to the start of the cue. Stingers like this are short because they need to match an animation or briefly highlight something in the game. However, stingers can also be longer at which point they function more like a linear cue.

Ex 5.9a

Ex 5.9b - Download Now (PDF 18KB)

Exploring Music as an Immersive Tool

Immersion and Mood

Let’s now take a look at how a music system can add immersion to a game scene by changing the mood:

Ex 5.10 is one approach to immersive scoring for this hypothetical area (as mentioned in Chapter 5). Listen through the track a few times and take stock of the mood and pacing. The tempo here is medium (about 90 bpm), so it does not drive too much activity in the scene. This is important because with this track players will not feel rushed as they explore. There are times when this may be desirable, but for now imagine our forest scene is a pleasant area to explore without fear. This track is very sparse; in fact it consists only of percussion, so there is no harmonic content. What then can we say of the mood? We can say that the mood here is neutral. This is an important concept. Not every game scene requires a heightened emotional impact, which is exactly why gameplay music can be harder to write than thematic music. Here, we are left with a very minimal, but successful, percussion accompaniment to our forest exploration.

Ex 5.10

Ex 5.11

Ex 5.11 is a second approach. We can hear immediately that this cue has far more instrumentation than Ex 5.10. The percussion is still there, but now we have some harmonic and melodic elements from the harp and woodwinds. These harmonic elements are not neutral or ambiguous, as they would be if the only intervals present were pure (octaves, fifths, or fourths). The presence of thirds, sixths, and sevenths here ensures that our original neutral example is now much lighter. We often use these kinds of visual terms to describe music; in this case the rhythmic motifs in the woodwinds are quick and sprightly. In addition to the major tonality, this solidifies the cue as more of a light-hearted approach to forest exploration. We also have some metallic percussion elements, which more often push things towards the lighter side. Here they outline the chords, which solidifies the major tonality.

This approach may work well with a fantasy forest, possibly magical in nature. However, it would probably be too light for a game like Limbo, which depicts some graphically violent scenes. If this example were used with Limbo, the result would be either a disconnect from the visuals (which would break immersion) or an ironic contrast to the visuals of the game.

Ex 5.12

Ex 5.12 offers us a similar track with the same tempo and percussion layering. But this cue is clearly darker and denser than the other examples. We now have a small orchestra moving through a dark and mysterious chord pattern. The tonality here is more of a harmonic minor, which also plays into the mystery of the atmosphere, as does the major VI chord that pops in now and again. The density of this piece could actually have an impact on the speed at which the player moves through the game. Increased density or intensity can sometimes add excitement, so players may find themselves rushing a bit as they explore. It also might add a sense of heightened anticipation. It certainly feels more ominous, as if we can expect a battle rather than exploring without concern. This might work better than our last example when paired with a darker setting. The harmonies unique to this track add a strong sense of mystery as well.

One thing to note about immersion is that implementation plays a large role. We will discuss this more in chapters to come, but these three examples are actually separate elements which layer together in an example of vertical layering (see Chapter 9). Vertical layering is a method of writing music in which layers of sounds or instruments are combined and separated to suit the needs of the game. In this case, the neutral layer loops until a game event triggers a shift either toward the light layer or the dark layer. This makes the gameplay music more adaptive to the scene, and therefore we are more easily immersed in it. These examples actually offer more than just a mood shift in an exploration loop. We will come back to this later on in Chapter 9, but for now check out the video below where we explore these mood shifts using FMOD.

<COMING SOON (EX 5.13)> 
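Alongside the video, here is an engine-agnostic sketch of the vertical-layering idea: a single mood parameter maps to gains for the neutral, light and dark stems. The layer names and curve are hypothetical; in FMOD or Wwise, a game parameter and volume automation would do this work.

```python
# Vertical-layering sketch: one "mood" parameter (-1.0 = dark, 0.0 = neutral,
# +1.0 = light) maps to per-layer gains. Layer names and the curve are
# hypothetical; in practice FMOD/Wwise parameter automation drives this.
def layer_gains(mood):
    """Return linear gains for the neutral, light and dark stems."""
    mood = max(-1.0, min(1.0, mood))
    neutral = 1.0                      # percussion bed always plays
    light = max(0.0, mood)             # fades in as the scene brightens
    dark = max(0.0, -mood)             # fades in as the scene darkens
    return {"neutral": neutral, "light": light, "dark": dark}

# A game event (e.g., entering a corrupted part of the forest) pushes mood down.
for mood in (0.0, 0.5, 1.0, -0.75):
    print(mood, layer_gains(mood))
```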

Game music is always dancing on a fine line between focusing the player's attention on the gameplay and focusing the player's attention on the mood of a scene. If our soundtrack draws too much attention, we lose immersion. However, if the music is too subtle, it may not add any emotional impact to the game whatsoever. A game that expertly navigates this line is Journey. Much of the in-game music is relatively ambient, yet the mood remains strongly tied to the visuals. Strong and memorable themes intermittently emerge organically during key moments in the story. By carefully planning where the player's attention must be drawn, composer Austin Wintory is able to support player action with subtler ambient cues and still maintain emotional resonance through the recurring themes. Identifying when and where to focus the attention of the player is a crucially important skill for composers to learn when creating an immersive atmosphere.

https://www.youtube.com/watch?v=bkL94nKSd2M

Critical Listening

Loops and Stingers

The Legend of Zelda: Ocarina of Time - The Lost Woods, All Temple/Puzzle Areas

This game is a great example of iconic loops and stingers. Most of the music, whether from a puzzle or from a battle, loops seamlessly. Thematic and memorable stingers trigger when a puzzle is solved or a treasure chest is opened.

Super Mario 64 - All Areas

All areas of this game are worth exploring for their looped cues. Each level has a unique track with thematic and textural differences. Another important feature of this game is the “timed item collection” tasks. These are tasks that must be completed within a timeframe, and every item that is collected triggers a short musical fragment. The fragments coalesce as they are triggered consecutively, so the cue is therefore played at a speed determined by the player. This is a wonderful example of adaptivity in music.

Final Fantasy (VI - X) - Battles and City/Town Areas

The Final Fantasy series is another great example of loops and stingers. Nobuo Uematsu composes looping cues with genuine personality for each area of the map, and as overworld themes. The battle tracks are also iconic and memorable, with win and lose stingers that trigger depending on the outcome.

God of War (2005) - All Battles

The original God of War is a fantastic example of basic horizontal scoring. Pay close attention to regular enemy encounters. How does the music start? What happens in the middle? How does the cue end? Does the music feel like it has a definite structure? How is this structure accomplished?

Dead Space (2008) - Atmospheric Music

Dead Space is another great source to study, this time for its exemplary use of vertical scoring. Listen to the atmospheric music. The palette is full of extended orchestral techniques, which only adds to the immersion and the tension in the game. As you explore the USG Ishimura, take note of when the music actually enters the game scene and what instruments and sounds are being played. As the tension rises, try to spot new layers being added on top of the old ones.

Immersion

Bastion - Descent toward Zia

As mentioned, Bastion makes wonderful use of diegetic music. The moment when the player descends towards Zia as she plays the guitar and sings “Build That Wall (Zia’s Theme)” is a particularly poignant moment that aids the immersion of the scene. Bastion is also a great example of looping battle/gameplay cues.

Transistor - Training Areas

Darren Korb’s second soundtrack also makes great use of diegetic music. In Transistor the player can turn on and off songs using a jukebox in special training areas. These songs tie into the story because our character was a club singer.

The Last of Us - All Areas

The Last of Us is massively immersive as "All Gone (No Escape)" plays towards the end of the game, as mentioned in the previous chapter. However, the entire game is worth studying for its immersive qualities. The battle tracks are subtle and almost feel like part of the game world. In addition, many of the tracks written by Gustavo Santaolalla are linear, which adds a sense of momentum to each scene.

Journey - All general gameplay and thematic cues

As mentioned in the chapter, Journey in its entirety is successful in immersing the audience. The soundtrack is extremely thematic and emotive, but many musical cues also function adaptively, almost as if they were sound effects. For example, at any point in the game the player can activate a "call" to which the environment responds. Each call is a musical stinger that varies randomly, as do the responses. These fragments act as an antecedent-consequent phrase, making the player feel as though the environment is interacting with or speaking to them.

Mobile Format

Jetpack Joyride - Gameplay loop

This game is a wonderfully addictive infinite runner, and the gameplay loop features a catchy and jazzy melody that changes timbres and arrangement throughout. The track has enough variety to keep things interesting, but it also has a very constant and driving beat to keep the energy up.

Plants vs. Zombies - Gameplay loops

This is another great example of gameplay music loops. This soundtrack was composed by Laura Shigihara, and she succeeds in conveying a lighthearted and fun approach to a zombie game.

Candy Crush Soda Saga - Gameplay loops, and stingers

This soundtrack is an elegant example of looping cues and stingers. This was recorded with the London Symphony Orchestra, and the tasteful orchestration makes this matching game a delight to play and listen to.

Exercises

Exercise 5.1

Set a timer for 30 minutes per day. Every day for a month, write a full piece of music within that time frame. Start with small ensembles and work your way towards bigger more elaborate arrangements. At the end of the month compare your recent work to music you wrote at the beginning of the month. You should find that the quality has gone up as well as your overall output.

Exercise 5.2

Compose a 16-bar looping melody. Import it into your DAW and make sure the loop is seamless, and all of the parameters for looping cues are met. Then add chordal voicing to it in any accompanying instrument. Check for smooth voice leading, and make sure it loops in your DAW. Repeat this exercise in 3-4 different styles.

Exercise 5.3

Create 3 unique musical loops and 3 unique stingers with different ensembles. In what scenarios besides battles would horizontal resequencing be useful?

Exercise 5.4

The next time you play a video game, pay careful attention to where your focus is naturally drawn. Take note of the game’s use of atmospheric music vs. thematic music, and how it affects your attention. As you begin working on your own projects, make it a priority to try and identify where the player’s focus needs to be, and compose your score with that in mind. Try using ambient/atmospheric music where the player needs to focus in on gameplay, and use thematic or unconventional scoring where the player should be emotionally connected to the story.

Assignments and Practice

Assignment A

  1. Create a 1:00 minute gameplay cue for a retro-style pixel platformer. Imagine the art style is similar to an early game in the Zelda or Final Fantasy franchise with a bit of a cyberpunk/sci-fi twist. Export it as a WAV and import it back into your DAW. Check that it loops seamlessly.
  2. Create a 1:00 minute exploration cue for an American-style RPG. The direction should be similar to Diablo or World of Warcraft. Export it as a WAV and import it back into your DAW. Check that it loops seamlessly.
  3. Create a 1:00 minute battle cue for an open world game in the style of Dark Souls. This track is meant for a boss battle, so make sure it is exciting and active to add tension to the action on screen. Export it as a WAV and import it back into your DAW. Check that it loops seamlessly.

Assignment B

  1. Compose the following hypothetical stingers for a modern FPS in the style of Call of Duty: 1) Respawn, 2) Mission Complete, 3) Mission Failed
  2. Compose the following hypothetical stingers for an open world fantasy in the style of The Witcher: 1) Item Obtained, 2) Objective Complete, 3) Player Death
  3. Compose the following hypothetical stingers for an experiential first-person game in the style of Everybody’s Gone to The Rapture or Firewatch:
    • 1. Game Start
    • 2. New Area Accessed
    • 3. Objective Complete
    • 4. Objective Failed

Chapter 6


Generating Melodic Ideas

Exercise:

Take a scene from a few different games. Without worrying about specific notes or keys, try to draw the shape of a melody that fits the mood of each scene. Use tension and resolution as a starting point. Are you trying to increase, or decrease tension for each scene? Should the melodic line follow player expectations, or should it deviate in a surprising way? How will your melodies reflect this?

Exercise:

Take a gameplay capture from a scene in a game that contains a good deal of activity (a boss battle, or a complex puzzle, etc.). Score that entire scene using a single monophonic instrument. In other words, use an instrument like a flute or a cello rather than a guitar or piano. Try to focus on melodic contour and development rather than on harmony or a groove to achieve the desired emotional effect.

Artwork

Exercise:

Take a look at an illustration or piece of artwork in detail. Then take 15 minutes to brainstorm all of the musical associations you can make with it. Think of instrumentation, melodies, chord progressions, timbres, or anything at all that pops into your head. When you are done, pick your favorite elements and combine them into a single musical track.

Modes

If you are unfamiliar with the concept of modes, they are simply variations on the half-step whole-step pattern of the major scale. The diatonic modes can be found by using the same key signature, but the mode starts on a different note than the tonic and ends on that same note an octave up. In essence you are using the same “bag of notes,” but with a different root. For example, in the key of C major (no sharps or flats) if you start on D and end on D using all the notes in the key signature, you will be playing the D Dorian mode. This amounts to a natural minor scale with a raised 6th scale degree. Normally a D minor scale would have a Bb as scale degree 6, but in the Dorian mode we have a B natural (as you would expect in the key of C, because there are no sharps or flats). Ex 6.1 shows all of the diatonic modes with a tonic of C.

EX 6.1 - Download Now (PDF 17KB)
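If you want to experiment with this quickly, the rotation described above is easy to script. Below is a minimal sketch in Python; the note spellings are simplified (sharps only, no enharmonic logic) and the helper names are ours, purely for illustration.

```python
# A minimal sketch: build any diatonic mode by rotating the major scale's
# whole-step/half-step pattern. Note spellings are simplified for illustration.

MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]          # whole/half-step pattern in semitones
MODE_NAMES = ["Ionian", "Dorian", "Phrygian", "Lydian",
              "Mixolydian", "Aeolian", "Locrian"]
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def mode(root_pc: int, degree: int) -> list[str]:
    """Return the notes of the mode on scale degree `degree` (0 = Ionian/major)
    built on pitch class `root_pc` (0 = C, 2 = D, ...)."""
    steps = MAJOR_STEPS[degree:] + MAJOR_STEPS[:degree]   # rotate the pattern
    pcs = [root_pc]
    for step in steps[:-1]:
        pcs.append((pcs[-1] + step) % 12)
    return [NOTE_NAMES[pc] for pc in pcs]

# D Dorian: the same "bag of notes" as C major, but rooted on D.
print(MODE_NAMES[1], mode(2, 1))   # Dorian ['D', 'E', 'F', 'G', 'A', 'B', 'C']
```

Rotating the step pattern is exactly the “same key signature, different root” idea described above, so you can print any mode and compare it against Ex 6.1.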

Semitone Offset

Let’s take a look at two examples of semitone offset.

Ex 6.2 is in the key of C major. It’s a simple melody that uses sequences to move from C to D. By changing C natural to C sharp at the end of bar 2, we create a smooth momentum toward D. The sharpened tonic is unexpected, adding a surprising lift to the melody.

Conversely, Ex 6.3 in the key of C minor feels dark and somewhat deflated because we have lowered scale degree 5 by a semitone (G natural to G flat/F sharp). This is useful for situations that require an unexpected darkening of emotional content.

EX 6.2 - Download Now (PDF 23KB) EX 6.3 - Download Now (PDF 25KB)

Generating Harmonic Ideas

Exercise:

Think of a scene from a game with a particular mood and pretend you have been hired to re-score that scene. Find some source material to study, and analyze the harmony. What do your references have in common? What don’t they have in common? Take what you have learned and write 3-5 different chord progressions that you feel satisfy the mood of the scene.

Cluster Chords

Cluster Chords are a great way to add some “spice” to an otherwise simple harmonic progression. As mentioned in the text, clusters are just chords stacked in seconds (major or minor). Ex 6.4a shows cluster chords using only the white keys.

EX 6.4a - Download Now (PDF 11KB)

Another simple device is to take a common progression and convert a few of the chords to clusters. Jazz progressions work particularly well due to the prevalence of upper extensions. Ex 6.4b shows a simple I - vi - ii - V jazz progression converted into clusters.

EX 6.4b - Download Now (PDF 18KB)

Note that in many cases, simply displacing the extensions of a jazz chord (the 7th, 9th, or 11th) by an octave will create a cluster. By moving these extensions an octave lower, they end up smack in the middle of the chord, thus eliciting the cluster effect.
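To make that octave-displacement idea concrete, here is a tiny sketch using MIDI note numbers (middle C = 60). The chord spelling and helper function are ours, just for illustration: dropping the 7th and 9th of a Cmaj9 down an octave packs B, C, D, and E into seconds in the middle of the voicing.

```python
# Illustration only: octave-displacing the extensions of a jazz chord pushes
# them into the middle of the voicing, producing the cluster effect.

CMAJ9 = {"C": 60, "E": 64, "G": 67, "B": 71, "D": 74}   # root-position Cmaj9

def displace_extensions(chord: dict[str, int], extensions: list[str]) -> dict[str, int]:
    """Drop the named extensions down an octave (12 semitones)."""
    return {name: (pitch - 12 if name in extensions else pitch)
            for name, pitch in chord.items()}

cluster = displace_extensions(CMAJ9, ["B", "D"])
print(sorted(cluster.values()))   # [59, 60, 62, 64, 67] -> B, C, D, E stacked in seconds
```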

The track “Mirror in the Mirror,” from the game BestLuck, is a helpful example of cluster chords. In this scene the player has arrived at the final puzzle in the game, so there is a sense of finality to it. To accommodate this, cluster chords and added notes are used heavily. The harmony remains intentionally ambiguous in terms of tonality until the main harp melody enters. The overall impression is of stasis and reflection, which ties in well to the visuals and the context of the story. The mood here is achieved by using the major scale as a framework and converting a simple progression into added note and cluster chords.

EX 6.5

The Twelve Tone Method

To review, the twelve tone method takes all twelve notes in a particular order called a row. The composer then performs operations on the row such as transposition, inversion, and retrograde.

As an example in the text, the tone row from measures 8 - 9 is shown. The row consists of the following notes:

[F - E - Eb - C# - D - B - C - Bb - F#(Gb) - G#(Ab) - A - G]

...And is presented in the viola part in Ex 6.6a:

EX 6.6a - Download Now (PDF 119KB)

This is a very loose example of a tone row, because in the following measures rules are broken, and the row comes back in at times only partially. However, it is a great example of how you can use this method to generate melodic and harmonic material and develop it in your own way. Below we’ll briefly break down the three operations that can be performed on a tone row, plus their combination.

Transposition: raising or lowering the pitches (or more accurately pitch classes) in a tone row. This operation is exactly equivalent to strict transposition of any melody or chord progression. For example, if we were to transpose our initial tone row by a whole step it would look like this:

[G - F# - F - D# - E - C# - D - C - G# - A# - B - A]

Inversion: Inversion is slightly different from the traditional inversion of a chord, but the same principle applies. Inversion “flips” the row on an axis of symmetry. Commonly this axis will be the first note in the row (or the bottom note of a chord). So an interval of a half step (like the one we find in our current row from F to E) going down will then become a half step moving up (F to F#). Note that this is how harmonic inversion can occur using the twelve tone method. A chord made of a major second and a minor third from the root would then become a major second and a minor third from the top note (see Ex 6.6b). If we inverted our initial row it would look like this:

[F - F# - G - A - G# - B - A# - C - E - D - C# - D#]

Retrograde: Retrograde is quite an easy concept to explore. It simply means that the tone row is reversed! Our original tone row in retrograde would look like this:

[G - A - G# - F# - A# - C - B - D - C# - D# - E - F]

Retrograde Inversion: Finally, retrograde inversion is a combination of the inversion and retrograde operations. The initial tone row is inverted, and then reversed:

[D# - C# - D - E - C - A# - B - G# - A - G - F# - F]
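Because these operations are purely mechanical, they are easy to verify (or generate) in code. Below is a minimal Python sketch of all four operations applied to the row above; the pitch-class numbering and helper names are ours, and the spellings mix sharps and flats loosely, so treat it as a checking tool rather than anything definitive.

```python
# A minimal checking tool for the row operations, working on pitch classes
# (0 = C, 1 = C#, ... 11 = B).

NAMES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "G#", "A", "Bb", "B"]
ROW = [5, 4, 3, 1, 2, 11, 0, 10, 6, 8, 9, 7]   # F E Eb C# D B C Bb F# G# A G

def transpose(row, interval):
    return [(pc + interval) % 12 for pc in row]

def invert(row, axis=None):
    # Flip every interval around an axis; by default the first note of the row.
    axis = row[0] if axis is None else axis
    return [(2 * axis - pc) % 12 for pc in row]

def retrograde(row):
    return list(reversed(row))

def spell(row):
    return " ".join(NAMES[pc] for pc in row)

print("T2:", spell(transpose(ROW, 2)))           # transposed up a whole step
print("I: ", spell(invert(ROW)))                 # inversion
print("R: ", spell(retrograde(ROW)))             # retrograde
print("RI:", spell(retrograde(invert(ROW))))     # retrograde inversion
```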

As a final challenge, below is the full score to “Rain Puzzle,” the track from BestLuck  that utilizes a tone row. Take a look at the row and analyze how it develops. Can you find any other spots where the row is used?

EX 6.6b - Download Now (PDF 17KB)

If you are interested in exploring the twelve tone method further, check out www.musictheory.net/calculators/matrix. This tool will allow you to input tone rows, and it will output all possible transpositions, inversions, retrogrades, and retrograde inversions of that row. The resulting schema is called a twelve tone matrix.
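If you would rather build the matrix yourself, the same pitch-class arithmetic works. The sketch below (again Python, with our own helper names and loose enharmonic spelling) prints a twelve tone matrix for the row above: each row of the matrix is a transposition of the prime form, ordered so that the first column spells out the inversion.

```python
# A sketch of a twelve tone matrix: read rows left to right for P forms,
# right to left for R, columns top to bottom for I, and bottom to top for RI.

NAMES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "G#", "A", "Bb", "B"]
ROW = [5, 4, 3, 1, 2, 11, 0, 10, 6, 8, 9, 7]   # F E Eb C# D B C Bb F# G# A G

def transpose(row, interval):
    return [(pc + interval) % 12 for pc in row]

def invert(row):
    return [(2 * row[0] - pc) % 12 for pc in row]

def matrix(row):
    # Each matrix row is the prime form transposed to begin on the
    # corresponding note of the inversion.
    return [transpose(row, (start - row[0]) % 12) for start in invert(row)]

for line in matrix(ROW):
    print(" ".join(f"{NAMES[pc]:>2}" for pc in line))
```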

Generating Rhythm

Rhythm can sometimes be an underestimated aspect of game music. Many composers only think of rhythm when writing for percussion instruments. From a pedagogical standpoint, this probably stems from rhythm being treated as an afterthought compared to melody and harmony. It is not generally thought of as a starting point for musical composition because it is assumed to be limited to unpitched material. As a consequence, composers forsake detail in their rhythms, the prevailing notion being that there isn’t much to develop. But this couldn’t be further from the truth. It is blatantly obvious when composers over-utilize “copy and paste rhythms.” When rhythmic motifs fail to include adequate variation, or when their motivic development is neglected, an entire cue can become stale and repetitive. The solution, then, is to focus on developing rhythm both as unpitched material and as an aspect of melody and harmony.

In his score for Dead Space, Jason Graves often uses rhythm as a focal point for more high intensity tracks. In “The Necromorphs Attack” the pitch and harmony are very much secondary to the rhythmic ideas. You won’t find a typical I - IV - V progression anywhere in here. The tonal elements are mostly used to provide a chaotic texture in which the rhythmic motifs can thrive. You’ll hear a handful of rhythmic ideas in this track bouncing from one instrument section to the next, and the effect is striking. Short bursts of rhythmic activity are used to propel the music forward, and syncopation is used to develop ideas and create space and variation. Note that rhythmic activity plays a large role in the development of pitched instruments as well as percussion!

www.youtube.com/watch?v=YNppLMh4ZsA

Aleatory

Below we’ve laid out a couple of aleatoric examples. Study them and create your own!

EX 6.8a - Download Now (PDF 32KB)

Above we have the full score of the aleatoric “Chase” stingers from the game Evil Nun (see Chapter 7 in the text). Note how clearly each effect is notated so that there is as little confusion as possible. There is still ample room for improvisation from the performer.

EX 6.8b - Download Now (MP3 115KB) EX 6.8c - Download Now (MP3 129KB) EX 6.8d - Download Now (MP3 137KB)

The above examples are audio recordings of overdubbed violin, clarinet, and saxophone aleatory. These were recorded on a guerrilla budget in a home studio!

EX 6.8e - Download Now (MP3 512KB)

There is no score to this Evil Nun cue, but you can hear improvised woodwind multiphonics creating a creepy bed of sound throughout this track.

EX 6.8f - Download Now (PDF 345KB)

This score is a contemporary orchestral piece rife with aleatory!

Developing Your Material

Exercise

One game that does a wonderful job balancing themes and novel motifs is Abzu. As an exercise, play through the game and take note of how many times you hear the main theme, and when it occurs. How is it changed each time? Is the orchestration different? Is it in the same key every time? Does the same instrument voice the theme when it occurs? Use this as a foundation for ideas the next time you write a theme.

Critical Listening

Melody

Super Mario Galaxy

Castlevania

Bioshock

Spider-Man (PS4)

Harmony

Abzu

Horizon: Zero Dawn

Rhythm

Cuphead

Detroit: Become Human

Dead Space

Assignments and Practice

Assignment A

  1. Using the twelve tone method, create a 1:00 minute gameplay cue for a horror game.
  2. Using aleatory only, come up with 5 horror stingers written for a single instrument. When you are happy with the stingers, prepare the notation and try to get them recorded. If you can, record them yourself. If not, ask around on various music forums and social media (especially game-related forums) and see if you can find a musician who would be interested in recording them for you.
  3. Take a 1:00 minute gameplay capture from a game of your choice. Using any mode, compose a melody that fits the mood of the gameplay. Now create a harmonic progression for the melody.
  4. Now compose 2 - 3 alternate harmonizations that completely change the mood of the gameplay. This exercise will help you broaden your harmonic language, and result in more options when submitting work for a game project.

Assignment B

  1. Take a 3:00 minute gameplay capture from a game of your choosing. Make sure the 3:00 minutes covers a variety of situations and scene transitions. Using as many of the techniques described in this chapter, score the gameplay capture to the best of your ability. Make sure that regardless of the techniques you use, the music fits the mood of the scene, and transitions when appropriate.

Chapter 7


Virtual Instruments and Sample Libraries for Orchestration

Below we’ve provided a brief list of some of our favorite VSTs (virtual instruments) and sample libraries in a few categories to get you started. Note that this is not an exhaustive list, and it includes only libraries that will be useful for standard orchestral models. Before you break out your credit card, however, be sure to research price point and workflow. Above all, if you have the chance, try the software before you buy!

Orchestral Strings

  1. L.A. Scoring Strings
  2. Berlin Strings
  3. Cinestrings
  4. Spitfire Strings
  5. Spitfire Chamber Strings
  6. Orchestral String Runs (specifically for legato runs)

Orchestral Winds

  1. Vienna Symphonic Library Woodwinds
  2. Berlin Winds
  3. Spitfire Studio Woodwinds
  4. Spitfire Symphonic Woodwinds
  5. Audio Modeling Saxophones

Orchestral Brass

  1. Cinebrass
  2. EWQL Hollywood Brass
  3. Berlin Brass
  4. Sample Modeling Brass

Orchestral Percussion

  1. Cineperc
  2. EWQL Hollywood Percussion
  3. EWQL Stormdrum
  4. Action Strikes
  5. ProjectSAM True Strike 1&2

Orchestral Effects Libraries

  1. ProjectSAM Symphobia 1, 2, & 3
  2. Hollywoodwinds
  3. London Contemporary Orchestra Textures

Hybrid Libraries and Synthesizers

  1. Spectrasonics’ Omnisphere II
  2. Native Instruments’ Absynth
  3. Zebra 2

Template Setup

Here we’ll explore how to set your template up step by step. This is a practical guide to organizing your template, and it applies to any DAW you choose to use. Any mixing and audio routing techniques mentioned are universally applicable, but you may have to check your DAW’s user manual for some specific information if you get stuck.

When organizing a template the most important thing is to group your instruments in a consistent and logical way. Apart from that, the only thing that matters at this stage is that it makes sense to you and fits efficiently in your workflow. For our purposes, the first consideration is to group instrument sections together and lay them out in terms of register from highest to lowest. There are two points of view on this topic. Some composers like to group instrument sections as they would appear on a score. The layout would then be woodwinds followed by brass; percussion would follow and strings would be last. This works very well for composers and orchestrators that routinely work with live orchestras as it allows them to keep clear visual track of their score as a whole. Other composers prefer to lay their template out in terms of the most used instruments. Usually this means that the strings are on top, followed by winds and brass, and then percussion and auxiliary synths and sample libraries. We recommend choosing the layout that allows you to compose quickly and intuitively. In our experience, this is usually the second option for beginners because it allows quick access to the frequently used strings.

Once you have decided on the broad layout for your template, you will have to decide on what instruments to include. This will depend largely on your computer specifications. It is necessary to have the core range of instruments as mentioned above, but beyond that it is actually unhelpful to include too much in your template because you may end up spending obscene amounts of time scrolling around looking for instruments. Mac users may be faced with the spinning wheel of death every time they need to save their work. In short - include all core orchestral instruments and all frequently used instruments, but do your best to keep your template light enough to operate quickly.

It has been brought to our attention by some friends and colleagues that some DAWs (Cubase in particular) allow composers to collapse full sections and decouple them from Vienna Ensemble Pro (Networking/Template Software). This can help decrease the load on your computer because you can use as much or as little as you need. - Spencer

Most composers then choose to group these instruments from high to low. If you are organizing a simple string setup it would look like this:

Violin I
Violin II
Viola
Cello
Bass

A Woodwind setup would look like this:

Piccolo
Flute(s)
Oboe(s)*
English Horn
Clarinet(s)*
Bassoon(s)
Contrabassoon

Clarinets can play higher and lower than the oboe. For our purposes the order of oboes and clarinets is interchangeable; in score order, however, oboes would always come first. Additionally, there are a number of versions of flutes and clarinets with varying ranges (alto flute, bass clarinet, etc.). The busses should be grouped in terms of register, but the layout can alternatively be grouped by family, based on your preference. - Spencer

And brass:

Trumpets
Horns*
Tenor Trombones*
Bass Trombone
Tuba

Pitched percussion is considered a separate grouping here, but these instruments can be grouped together after the brass section in any order:

Xylophone
Glockenspiel
Marimba
Harp*
Piano*
Timpani

Due to mixing considerations the harp and piano are usually on their own separate tracks. See “Mixing Your Template” below for specifics.

Unpitched percussion can still be usefully arranged from high to low approximately:

Cymbals (Crash, Suspended)
Snare
Tom-Toms
Bass Drum
Tam-Tam
Taiko
Auxiliary and/or World Percussion

The arrangement from high to low helps not only to organize the template logically, but it also establishes the foundation of an effective mix. The instruments that will be mixed similarly (i.e. instruments in similar registers and families) can be loaded up on the same channel strip, using the same instance of your sampler (usually it will be Kontakt). Do not load up instruments that will be mixed differently onto the same instance of your sampler. As we will see shortly, all of your instruments will be associated with reverb sends (and potentially some light equalization and compression; see “Mixing Your Template”). If you load instruments using the same sampler instance you are more or less forced to mix them similarly. Try to create at least two instances of each orchestral group separated into “highs” and “lows.” This will allow you to apply appropriate reverb to each timbre and register. It will also allow you to group each section as a whole into mix busses, which is a critical aspect of balancing your template.

When all is said and done, and all of your instruments are loaded into your template it should look something like this:

Ex 7.1

Reverb Setup

According to Webster’s Dictionary, reverberation is the persistence of a sound after its source has stopped. Reverberations build up and then decay as the sound is reflected in a particular environment. Reverberation is essentially an effect of the physical propagation of sound as it interacts with the materials around it. For this reason, reverberation is an important element of any sound, and especially of musical instruments, because it gives the listener clues as to what kind of space that sound is in. Reverb is shorthand for reverberation, and the word is also used to describe a device or plugin that emulates natural reverberation. When producing orchestral music with samples, the single most important factor in maintaining realism is reverb. Because of this, we have added a section solely dedicated to reverb.

When dealing with electronic instruments, the function of a reverb plugin is to add depth and dimension. Reverb adds reflections of the sound (within adjustable or selectable timeframes) as well as a bit of color. It also lengthens the release of the sound. When building a template, reverb takes the place of an actual physical space. A convolution reverb is essentially a plugin that uses recorded impulse responses to emulate a real space, such as a concert hall. This is important because we literally never hear an orchestra that isn’t in a medium to large space. It’s physically impossible to fit an orchestra into an isolation booth, so mixing a cue as if it were recorded up close and dry can sound very unnatural to our ears. Orchestral instruments need some amount of reverb to sound physically tangible. Adding reverb to a template ensures that the space and color sound natural.

The best way to add reverb to an orchestral mix is to use send busses. A common setup is to have two sends per section. One send will be for the higher instruments in a section, and the other send will be for the lower instruments. In the case of a string section we would use bus 1 to send the violins and violas to an auxiliary channel strip. Then bus 2 would be used to send the cellos and basses to another. Next, we would add a high quality (usually convolution) reverb to each channel strip.

Ex 7.2

Some reverb plugins like EWQL Spaces make things easy by offering instrument-specific reverb settings. If you are looking to choose the settings yourself, we recommend investing some serious time into comparing actual orchestral recordings that you like and analyzing reverb characteristics, especially the timing of the release/decay. Due to the physical placement of instruments in a concert hall, the reverb length will change slightly from section to section. Closer instruments will have a slightly shorter reverb length, and farther instruments will have a slightly longer reverb length. A good starting point for high strings is a reverb length of ~3 seconds, and from there you can experiment to see what works best.

The reason for splitting each section into high and low instruments (at least) is so that you can filter out boomy low frequencies and ultra-bright high frequencies when they pop up. Reverb can exacerbate both of these issues, so your filter and reverb settings will likely look slightly different. On top of that, it will ensure that you can add extra processing plugins where needed without affecting your entire orchestral section. To allow for these adjustments your template will have at least two dedicated reverb busses per orchestral section. Busses 1 and 2 will be for string reverb, busses 3 and 4 will be for woodwind reverb, busses 5 and 6 will be for brass reverb, and so on. Some composers even add separate reverbs for the french horns, solo instruments, and any extra libraries included beyond the core orchestral instruments.
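One way to keep this organized before you ever open your DAW is to jot the routing plan down as a simple table. The snippet below is only a planning aid in Python; the string, woodwind, and brass bus numbers follow the scheme described above, while the percussion pair and the optional extras are our own assumptions. It has nothing to do with any particular DAW’s API.

```python
# A simple planning table (not any DAW's API): one "high" and one "low"
# reverb send bus per orchestral section, numbered as described above.
# The percussion pair and the optional extras are assumptions for illustration.

REVERB_BUSSES = {
    "Strings":    {"high": 1, "low": 2},
    "Woodwinds":  {"high": 3, "low": 4},
    "Brass":      {"high": 5, "low": 6},
    "Percussion": {"high": 7, "low": 8},
    # Optional extras: "French Horns", "Solo Instruments", "Extra Libraries", ...
}

def reverb_bus(section: str, register: str) -> int:
    """Look up which send bus an instrument should use for its reverb."""
    return REVERB_BUSSES[section][register]

print(reverb_bus("Brass", "low"))   # 6
```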

When all send effects have been introduced and reverbs have been loaded up, your auxiliary tracks and mix busses (see “Mixing Your Template”) should look something like this:

Ex 7.3

Mixing Your Template

Mixing your template might be the most time-consuming and technically demanding part of the template creation process. This is because sample libraries are all recorded in different spaces with different microphone setups. This makes it a real challenge for a composer to create a template that sounds balanced, natural, and cohesive. Before getting into the nuances of mixing, let’s first cover the basic components of a sampled orchestral mix.

Panning and Stereo Space

The first thing to consider when it comes to mixing an orchestral template is where in the stereo field each of your instruments will be. Orchestras have so many musicians that each instrument needs to have its own sliver of the stereo field or you could run into some serious mix problems.

Although orchestras can be huge, panning orchestral instruments as you would electric guitars would sound unnatural. Rhythm guitar tracks are typically panned hard, especially if they are doubles of the same part. By comparison, every instrument in your orchestra should land approximately between “9 o’clock” and “3 o’clock” on the panner. This is because an orchestral mix usually reflects what we would hear if we were in the audience at a concert. We’ll call this the concert style template. There are other ways to mix an orchestra of course, but this is a great starting point.

So where does each instrument land? The easiest way to plot out the stereo space of an orchestra is to use an orchestral map (check out this website for an image of an orchestral map: www.virtualmuseum.ca/edu/ViewLoitDa.do%3Bjsessionid=4B9B8B974EEE05F8B0A15632A681210C?method=preview&lang=EN&id=15602). This chart plots the layout of each section of the orchestra in a typical concert setup. In this concert setup the strings are panned from left to right, and from the highest to lowest register. So the first violins are panned farthest left followed by the second violins. The violas are in the center, and cellos are panned slightly right with the basses panned opposite the first violins. The flutes and clarinets are a bit farther back and to the left, with the oboes and bassoons opposite on the right. Trumpets, trombones, and the tuba are farther back still on the right, with the french horns slightly to the left of center. The various percussion instruments are then laid out all the way in the back and towards the far right or left.

This setup is foundational for a reason. It is very sonorous and it allows for a balance between sections. Percussion and brass are the loudest sections, so they occupy the space farthest to the back of the stage. The winds are in the middle and the strings are up front so that they aren’t drowned out.

The types of sample libraries you choose to purchase will affect your panning. Some libraries such as LA Scoring Strings and Cinesamples come pre-panned. This can be a huge time saver because these instruments will sound natural right out of the box. Other libraries such as Vienna Symphonic Library don’t come pre-panned. This allows for a bit more flexibility down the road, but it will force you to put in some time organizing your instruments up front.

In many cases, even if the library comes pre-panned, you will need to individually adjust the panning on your solo instruments. For example, when dealing with Cinesamples brass, the trombone section comes pre-panned but the solo trombone patches do not. This means if you were to set up a trombone quartet, they would each need to be properly positioned in the stereo field. We recommend following the chart above for your approximate panning, and then allowing a few degrees separation between each solo instrument so that they occupy slightly different spaces.
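As a rough reference, here is one way to write the concert-style layout down as approximate pan values. The numbers below are our own illustrative reading of the seating chart described above, expressed on a -100 (hard left) to +100 (hard right) scale; they are not presets from any library, so adjust them by ear.

```python
# Illustrative only: approximate pan positions for the concert-style layout,
# where "9 o'clock" ~ -50, "12 o'clock" = 0, and "3 o'clock" ~ +50.

CONCERT_PAN = {
    "Violins I":    -50,
    "Violins II":   -30,
    "Violas":         0,
    "Cellos":        25,
    "Basses":        45,
    "Flutes":       -15,
    "Clarinets":    -20,
    "Oboes":         15,
    "Bassoons":      20,
    "French Horns": -10,
    "Trumpets":      25,
    "Trombones":     35,
    "Tuba":          40,
    "Percussion":    45,   # or spread across the back of the stage
}

def clock_to_pan(hour: float) -> int:
    """Map a 'clock' position (roughly 8 through 4) onto the -100..+100 scale."""
    offset = hour - 12 if hour > 6 else hour   # hours past 12 read as 1, 2, 3...
    return int(offset / 3 * 50)

print(clock_to_pan(9), clock_to_pan(12), clock_to_pan(3))   # -50 0 50
```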

The concert setup outlined above is not the only type of orchestral setup that you can use with your template. A slightly more complex setup is what we call the hollywood style template (also called the European Orchestral Layout). This setup is used commonly for epic film scores. Howard Shore used a setup similar to this to fantastic effect in his score to The Lord of the Rings. In this setup the violins are split. The first violins remain at about “9 o’clock”, but the second violins now take up the “3 o’clock” position opposite the first violins. Typically this is used for larger string sections, so these panning positions may even be exaggerated to about “8 o’clock” and “4 o’clock”. Consequently the basses are then moved to the center of the stage so that booming low frequencies (which are a staple of film scores) are more balanced. The violas and cellos then sit opposite each other on either side of the basses.

Another notable difference from the concert setup is the percussion. Typically percussion would be panned left or right and pushed toward the back of the stage. In the hollywood setup, percussion is usually mixed either front and center, or as a wide ensemble stretching from one side of the stage to the other. This adds drama and excitement to action scenes and trailers.

This setup can be tricky because of the above-mentioned pre-panned libraries. It can take some time to perfect, but the end result is rewarding. It’s important to make sure that your samples, particularly strings, are up to the task. Violin sections should consist of 16 or so players in order for the split violins to maintain their sonic impact. Ideally your library should support divisi as well, because this setup will force you to change the way you orchestrate (see “Writing for String Samples”). The most important factor when using this setup is to really listen as you’re mixing, and make sure that you aren’t losing the cohesion of your ensemble as you organize your sounds. If your entire template consists of samples that are not pre-panned then take care to give each individual instrument its own space.

Once you’ve panned your core sections you can start setting your auxiliary instruments and hybrid libraries along with your synthesizers into the stereo space as well. Here, the sounds you will be using are usually so particular to each project that it can be helpful to load the plugins that you want without being too specific about the panning at this time because you may end up wasting time readjusting later on.

Reverb

Remember when we painstakingly laid out all of our reverb plugins via send effects? Now is the time to even more painstakingly balance the reverb settings of each instrumental section. There are two factors to consider here:

  1. Dryness of your samples
  2. Depth within the virtual “concert hall”

Where panning is the left-right (or x) axis for our concert hall, depth is the forward-back (or z) axis. Depth is really just telling us how far away your instrumental section will be on your virtual stage. Just as with panning, there are standards for where orchestral instruments will be, but ultimately it is up to you to make some of your own decisions as well.

For example, (as mentioned earlier) the norm for a concert percussion ensemble is to be all the way in the back of the orchestra to avoid overpowering the pitched instruments. But in a trailer mix that might actually be a desired effect! Changing the reverb (and volume) settings is the primary way for a composer to place an instrument closer or farther from the listener.

***Note: Reverb and volume are very inter-related. As you will see in the following sections, adjusting the reverb for an instrument can make it sound farther away, but a proportional volume adjustment should also be made to maintain a natural balance.

How do we use reverb to “place” our instruments on the virtual stage? The answer to this is more intuitive than you might think. Imagine you are standing at the back of a giant concert hall with a friend. If your friend speaks to you from a foot away, what do you hear? You will certainly hear some reverberations, but mostly you will hear the sound coming directly from your friend’s vocal cords. This is the dry signal, or sound that you hear directly from its source. As your friend moves farther away from you, this will change. You will still hear your friend’s voice, but the farther she is from you, the more you will hear of the reflections of the sound. This is the sound that is bouncing off the walls and meeting your ears. This is what we call the wet signal. The closer a listener is to the source of a sound, the greater the ratio of dry (direct) signal to wet (indirect or reverberated) signal. The farther a listener is from the source of a sound, the greater the ratio of wet to dry signal. This ratio is an important parameter to be aware of in any reverb plugin.

The ratio of wet/dry signal is not the only thing that changes as your friend moves farther back in our imaginary concert hall. In this scenario, the sound of your friend’s voice has farther to travel to reach your ears, so the length of the reverb you’re hearing will also change. The farther back she goes, the longer the reverb tail. A reverb tail is what you hear after the original sound stops. It is the sound of the blurred reflections that build and dissipate, and the length of this dissipation is an important factor to be aware of.
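For the technically curious, this relationship can be approximated with a standard bit of room acoustics: the direct signal falls roughly 6 dB per doubling of distance while the diffuse reverberant field stays roughly constant, so the direct-to-reverberant ratio depends on how far the listener is relative to the room’s critical distance. The sketch below is only a conceptual model (the 5-meter critical distance is an assumed value), not a recipe for plugin settings.

```python
import math

# Rough acoustic model: direct-to-reverberant ratio in dB is approximately
# 20*log10(critical_distance / distance). At the critical distance the direct
# and reverberant energy are equal; closer is drier, farther is wetter.

def direct_to_reverb_db(distance_m: float, critical_distance_m: float = 5.0) -> float:
    """Approximate direct/reverb ratio in dB for a listener `distance_m` away."""
    return 20 * math.log10(critical_distance_m / distance_m)

for d in (1, 5, 10, 20):
    print(f"{d:>2} m: {direct_to_reverb_db(d):+.1f} dB direct relative to reverb")
# 1 m is mostly dry; at the critical distance they are equal; far away is mostly wet.
```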

To make things more clear, let’s look at a couple of examples. If you pull up a violin section and use a send effect to add some reverb, you now have to decide on the ratio of wet/dry signal, and you have to decide how deep into the virtual stage this section will be placed. Determining how dry you want your violins is going to mostly depend on how dry the sample library was recorded. Some libraries have a good amount of reverb “baked in” to the samples. In this case, you may not need a whole lot of wet signal to add to your violins because they already have enough. On the other hand, if these samples were recorded very closely, and with very little sound of the hall, then you may have to add some heavy wet signal to make them sit back in the mix and sound natural.

Personal preference and the intent of the composition are also key factors here. If the intent is to create an “epic” orchestral track you would most likely use a longer reverb (around 3-4 seconds), and possibly boost the wet/dry ratio a bit. If your intent is to create an intimate chamber piece you would likely do the opposite. You might use a reverb tail closer to 2.5-3 seconds, and keep the ratio on the dryer side. Of course it’s always wise to choose samples that fit the desired effect as well. For the epic track a larger string section would work best, ideally one with a good amount of reverb baked into the samples. For the latter you would try to find a smaller, dryer violin section with a bit more detail and uniqueness.

Both the wet/dry ratio and the length of the reverb tails are important aspects of your mix that need to be balanced within your template. It is highly advantageous to go through all of your samples before you choose a reverb and really listen to the dryness of each sample. Often having flexible microphone positions is helpful at this stage. For instance, if you have a woodwind library that is recorded very dry, and a brass library that is recorded wet you may want to use these microphone positions to balance the two sections out. Bring down the room microphone on the brass, and bring up the hall microphone on the woodwinds. The goal is to make all your libraries sound like they were recorded in a similar space. This can be an arduous task, but it is worth it to make your template sound cohesive.

Now that we’ve done all we can using microphone positions, it’s time to set the length of the reverb tails. Plugins like EWQL Spaces make it easy by allowing composers to select an instrument with a built-in natural reverb length. For example the first violins will come with a default reverb length for each of the concert hall choices. Still others will allow you to adjust the length of the reverb itself. Vienna MIR will allow you to visually place a section of the orchestra wherever you’d like!

Keep in mind that despite the temptation to wildly place instruments on the virtual stage, we are striving for cohesiveness and balance. In most cases the same concert hall should be used for each section of the orchestra. The length of the reverb tail will only be tweaked slightly, and the wet/dry ratios should sound similar between each section, or you risk that section sticking out of the mix. This can also be exploited if the situation calls for it, as in the example with the concert percussion vs. trailer percussion.

This last point brings us to the differences between reverb settings in our concert setup and our hollywood setup. For most composers, using two reverbs (one for the higher instruments in a section and one for the lower instruments in a section) is adequate. However, to maintain a bit more control over the mix some composers elect to separate short notes from long notes and use different reverb settings for each. This would require planning early on, and loading your long and short articulations onto (at least) two different tracks. This works particularly well with strings as it keeps the crispness of the spiccato articulations without losing the buttery legato sound of the longer ones. It is entirely personal preference however, and it should be noted that a setup like this, applied to all pitched sections of the orchestra would require four reverb busses per section. This will inevitably have a massive impact on your computer system, so plan carefully and experiment with each approach before finalizing.

Mix Busses

The final step in creating a template is organizing everything into a mix bus. A mix bus is just a way to group a number of tracks together so that they can be balanced as a unit in addition to being balanced as individual tracks. This is very helpful for mixing in many genres of music, but in orchestral music it is almost a necessity. Often sections can sound cohesive and balanced on their own (especially if using the same library for the entire section), but can stick out slightly or be buried by other sections. Having a dedicated mix bus for every section is massively helpful for balancing the orchestra as a whole.

DAWs accomplish this task in different ways, but the basic idea is to send the output of every instrument in a section to a particular bus. Then you must also send the output of that section’s reverb bus to the mix bus as well. Ex 7.6 shows the output of all of the woodwinds being sent to Bus 63. The two reverbs dedicated to the woodwinds (“Winds Hi” and “Winds Lo”) also have their outputs pointing to Bus 63.*

Ex 7.6

***Note: It is very important to set the outputs only to the mix busses when dealing with the reverb sends. Using another send on the reverb bus itself will result in feedback!

When this process is applied to each section of the orchestra, you will have a template where every single instrument has a corresponding reverb send, and every track has its output pointing toward a mix bus. This will allow you full control to balance the instruments in each section, and to balance each section with the full orchestra.

Template Balance and Mix

An often overlooked element of a sample-based orchestral mix is the balance. Composers will sometimes become frustrated by an oboe that won’t come through, or a rogue trombone that overpowers everything else. These are symptoms of one of two things: a poor overall arrangement (see the rest of Chapter 7 for orchestration techniques), or a poor mix balance. The arrangement is really a composer’s first line of defense. If your arrangement is solid, then the mix will go smoothly and quickly. If your arrangement is sloppy and poorly managed, then the mix will be a nightmare. We will discuss this more in the coming sections, but keep this in mind for now.

Balance is the overall relationship of volume between each individual instrument, and between the orchestral sections themselves. Balance is important for realism in an orchestral mix. Without proper balance your mix can sound like certain parts are sticking out or being buried, as mentioned above in the examples with the oboe and trombone. To avoid these issues, we recommend taking care when setting up your template to try to emulate the balance as you would hear it in an actual orchestra.

Balance in an orchestra is an extremely complex topic as the ranges of each instrument can change its dynamic possibilities, but in general the rule as stated in Rimsky-Korsakov’s “Principles of Orchestration”[4] is an effective starting point. Rimsky-Korsakov asserts that at a piano dynamic, all instruments (and therefore sections) must be about equal in perceived volume. At a forte dynamic the sections become more differentiated. (Their timbral characteristics also become more pronounced, making it more difficult to blend). You can generally assume that at forte the percussion section will be the loudest. Just three percussion instruments are capable of overpowering an entire orchestra at high dynamics. The next loudest section will be the brass. The brass section should still sit nicely within the virtual space, but can usually be heard above the other sections. The strings have incredible dynamic flexibility, so at their top volume they would be the next loudest, along with flutes and piccolos in their upper register. Woodwinds are a very complicated section, and we will discuss some of the nuances of dealing with woodwinds later in the chapter (see “Orchestrating and Arranging for Live Instruments”). For now, we will simplify things and say that as a section, woodwinds should balance well with the strings at equal dynamic levels. Your overall template sound will largely come down to your preference and your musical intentions, but this is a good starting point.

Ex 7.8 (Word 10KB)

The hierarchy above refers to perceived loudness, so you don’t have to go out of your way to crank up the gain on your percussion and brass instruments. You do have to make sure that at forte dynamics and above the sections are audible in the approximate order listed, or you run the risk of a very unnatural sounding mix. It’s also important to keep in mind that these ratios are not exact. There is a large amount of flexibility when using samples as opposed to live instruments. In a live setting you will have to carefully plan your orchestration so that instruments will balance naturally (see the section on Arranging and Orchestrating for Live Instruments). The advantage of using virtual instruments is that you can play in your parts, and tweak the balance yourself using volume, velocity, and MIDI Continuous Controllers (CCs). MIDI CCs like expression and modulation are essential to writing with samples, as will be discussed in the following sections.

Ex 7.9

At this stage of your template setup, your goal is to balance each instrument, and each section, so that the full orchestra sounds natural when playing all at once. Use a few simple chord voicings to assess the overall balance of your mix. For each of these chord samples, at full volume the percussion should be powerful and audible. Brass should shine brightly without completely overpowering the strings and winds. The strings and woodwinds should be balanced, with a bright flute timbre at the top.

Part of this process is to make sure that you can play at maximum volume and the result will not overload the stereo output. This means that no track, auxiliary channel, or mix bus should be showing any red. You need to allow yourself headroom to mix later on. Headroom is the gap between the loudest point of your mix, and peaking or overloading the output.[9] Setting your template up this way will allow you to compose without worrying about overloading the stereo output. In addition you will be able to fully control the dynamics of your instruments using expression and modulation without worrying about upending the overall balance.

Ex 7.10

When your template sounds balanced and you have a decent amount of headroom with full orchestral passages, you are ready to “finalize” your template. The best way to finalize your template is to work with it! Write a ton of music and save each version of your template so that you can go back to earlier iterations if need be. Find reference music in similar styles and really listen to the balance and reverb in each and adjust your template as needed. Don’t expect it to sound as good as a well-recorded professional orchestra, but strive to get as close as you can. In some cases your samples may actually sound as good or better!

Exercise

Follow the steps above to set up a “base template.” Then find some reference music in a few different styles. Try adjusting the template to fit these styles. Helpful examples would be action music, romance, world music, or horror. Save each of these separately so that you can go back to them if you land a project that requires a similar style. We will be using these templates in the orchestration exercises that follow.

Writing for Strings

Before we get too deep into writing parts for the orchestral string section, let’s first take a quick look at the most common articulations on the violin. Most of these articulations are equally relevant to the other orchestral strings (excluding Harp, as it is technically in its own category where playing technique is concerned).

www.facebook.com/watch/?v=287899172053784

As you can see in the video above, there are nearly endless possibilities for orchestral strings. Making use of a range of articulations will enhance the part interest, timbral variety, and part independence of your string writing. This is especially relevant in games, where string instruments are often split into separate layers and triggered independently. Check out the figure below for a common example of how different articulations can be used to make layers more independent in a game scenario.

EX 7.11

Here we have a bassline, played pizzicato (plucked) by the cellos and basses, that serves as the base layer. The violas are split into divisi with a tremolo articulation, playing the 3rds and 5ths of each chord. The second violins are playing something of a countermelody staccato, while the first violins have the melody. Note that each part has its own articulation, which makes the parts more independent and audible. In a game scenario each of these layers would change the mood to some degree as well. Notice also that each of the four main points mentioned in the text is satisfied, despite the excerpt being very simple. The divisi chords in the final bar are also balanced in terms of relative volume and fullness, which satisfies point 5 (see the section in the text on writing for woodwind and brass samples).

Exercise

Open up your new template and solo the string section. Write 2 - 4 melodies for the first violins, in any style, that last at least 16 bars. Pick your favorite melody and write a single bassline for the cellos and basses together. Make the bassline interesting! Don’t just have them playing the roots of every chord. Make this part as much of a countermelody as you can. Finally, use the violas and second violins to fill in the inner voices. Remember the four points mentioned in the text! The inner parts need to fill in missing chord tones, but they also should be interesting to play, independently of the other parts.

Exercise

Take the quartet mockup you have just produced and rearrange it. Take the initial violin I melody and transpose it into the viola or cello parts. How would you then fill out the orchestration? Try transposing it again into a third part, fill in the rest of the voices, and compare the mockups. How does this kind of transposition change the mood of the piece? These kinds of questions are important when you are orchestrating a game cue. Your answers will give you clues as to how best to orchestrate each cue.

Exercise

Find a 3 - 4 minute gameplay video from a game in any genre and score it using only strings. Use your template to produce a demo-worthy product. Use all of the techniques you’ve learned in the string writing sections of the text. Pay close attention to voice leading and try a variety of textures. When you are satisfied, export the audio to the video in your DAW and save it for use in a demo reel.

Examples of String Writing in Games

Below we’ve listed a few examples of effective string writing found in games. Study them and find some other games with string writing that you love. There are plenty of them out there!

Kingdom Hearts II (includes some great harp writing as well)

www.youtube.com/watch?v=zeTS71-khFk

Guild Wars 2 (with full orchestra)

www.youtube.com/watch?v=m5jn9YwRd84

The Last of Us (Chamber Strings)

www.youtube.com/watch?v=mLuPQfIZDSs

Journey

www.youtu.be/M3hFN8UrBPw?t=2786

Writing for Woodwinds and Brass

Now that we have a solid foundation for writing effective, balanced, woodwind and brass parts, let’s look at a few examples of how we can use these versatile instruments in our game scores.

Melodic Lines in Woodwinds and Brass

All woodwinds (including bassoons!) are fantastic choices for melodic lines. Flutes are commonly used for soaring melodies because they can cut right through an orchestra in their brilliant upper register. Flutes and clarinets are also very agile instruments, capable of performing runs as well as a spectrum of short and long notes. It does take a bit more breath for the oboe, but as you can hear in the example below from Abzu, the oboe is a beautiful melodic instrument with a lovely timbre.

www.youtube.com/watch?v=AybM12ipVgw

Of course no discussion of woodwind melodies would be complete without mentioning the bassoon solo in Stravinsky’s Rite of Spring. For those who live exclusively in the world of samples, that bedpost with a stick coming out of it is actually a bassoon!

www.youtu.be/EkwqPJZe8ms?t=40

The most common melodic brass trick is to use a large brass ensemble for heroic themes. Check out the exciting track from Spider-Man, written by John Paesano. You’ll hear some very heroic Marvel-esque themes popping out over the top of an ostinato by the strings and percussion. It’s also very common to hear lower brass instruments (tuba and bass trombone, sometimes paired with low strings and/or low woodwinds) holding down a sustained bassline, as is done at the start of this cue.

www.youtu.be/3uIyh8wqipw?t=537

Chordal Movement in Woodwinds and Brass

The main considerations with chordal movement in winds and brass are 1. Voice leading and 2. Balance. The voice leading should be smooth to ensure there are no noticeable jumps popping out to distract from an otherwise smooth texture. Good voice leading should also ensure that all chord tones are represented in a logical way (i.e. don’t triple up on the 5th of a chord and leave out the root).

Balance refers to the resulting sonority produced by all instruments in tandem. If, for example, you’ve written a chord progression for brass choir, is the sonority balanced dynamically? Or are some instruments louder than others due to MIDI CCs, dynamic markings, or range? Are the timbres you’ve chosen orchestrated so that both volume and fullness are relatively equal between instruments? As mentioned in the text, it takes approximately two french horns to equal one trombone in fullness and volume. Has this been taken into account? Balance refers to all of these elements, and any others that could throw off the smoothness of a sonority.

If the chord progression to be orchestrated is an accompaniment to a melody, then 3. Range, and 4. Timbral Contrast must also be considered. For example, voicing a woodwind quartet in the same range as an oboe solo will completely bury the oboe. Likewise, using a flute choir, even at a lower range than a solo flute melody may result in the flute solo sounding like the top voice in the choir. It would be better to use strings, or some other contrasting timbre as an accompaniment.

Below we have two examples of voicings; one for woodwinds and one for brass. Refer back to the text (Chapter 7: Writing for Woodwinds and Brass) if you need a refresher on how to use the balance equations. We’ve also included them at the end of this chapter for reference.

EX 7.12a

In the above example we have an 8-piece woodwind ensemble playing through a I - V - IV - I progression. It’s a bit unorthodox, but it demonstrates timbral and dynamic balance. Notice that there are two woodwind instruments (from Wind Group I) for every horn to balance the dynamics. In other words, the volume of the two G major chords expressed in this example will be roughly equivalent, as will the fullness.

This is also an example of juxtaposed voicing (with the exception of the first and last “G” in the clarinets and top horn voice) because we are splitting up the horn and woodwind timbres by an octave. Winds from Wind Group II (Oboes and Bassoons) have been kept out to keep the timbral balance simpler.

EX 7.12b

The above example is also highly unconventional, but it serves as a useful model. This is the same progression as Ex 7.12a, but in C major. Here the horns and trumpets are doubled at the unison. It’s very unlikely you would need so much firepower in a chordal accompaniment, but nevertheless the balance in timbre, dynamics, and fullness is all there in the 2:1 horn to trumpet ratio. The trombones occupy the octave below in the same ratio. As a comparison we’ve included a flute part to demonstrate the incredible number of flutes it would take to approximate the same volume and fullness that the trumpets and trombones exhibit. It takes 12(!) flutes to play a C major triad, balanced with 3 trumpets, 3 trombones, OR 6 french horns. If the horns were to play together with the trumpets it would actually take 24(!!!) flutes to approximate that volume in theory. In practice listeners would still hear the flutes, but the flutes would in no way bear the same weight that the hefty brass section has.
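If you like having the arithmetic at hand, the doubling ratios quoted above can be encoded in a few lines. The weights below simply restate this chapter’s examples (1 trumpet ≈ 1 trombone ≈ 2 french horns ≈ 4 flutes at forte); they are not measurements, and real balance still depends on register, voicing, and articulation.

```python
import math

# Back-of-the-envelope weights restating the examples above, at a forte dynamic.
WEIGHT = {"trumpet": 4, "trombone": 4, "french horn": 2, "flute": 1}

def players_needed(target: str, count: int, substitute: str) -> int:
    """Roughly how many `substitute` players match `count` of `target`."""
    return math.ceil(count * WEIGHT[target] / WEIGHT[substitute])

print(players_needed("trumpet", 3, "flute"))         # 12 flutes ~ 3 trumpets
print(players_needed("trumpet", 3, "french horn"))   # 6 horns ~ 3 trumpets
print(players_needed("trombone", 1, "french horn"))  # 2 horns ~ 1 trombone
```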

Finally we have a full D major triad, spread throughout the strings, woodwinds, and brass. This is the full layout of the reduced score found in the text. You can use this as a template for balancing important chords as well as entire homophonic progressions. Pay close attention to the range between chord tones (5ths and octaves in the lower register), the doublings, and the ratios between the doublings, especially if they are in a different timbral group. Also note the lonely piccolo at the top of the chord. Why might the piccolo be clear and audible without any reinforcements?

Ex 7.13 - Download Now (PDF 16.5KB)

Textures

Both woodwinds and brass instruments are great for creating interesting textures. These can be recorded and played back easily using sample libraries, but we would also encourage you to experiment with aleatoric effects as well. These take a bit of thought, but the benefit is that no one else will have the same effects that you do! If you’re writing an aleatoric technique to be played by a friend or professional, make sure to notate the effect as clearly as possible. Use other scores as guidelines (you can also refer back to Ex 6.8a and Ex 6.8b in the Sound Lab for some examples of aleatoric stingers from the game Evil Nun).

Here is a video that overviews the textural woodwind effects found in Cinesamples’ Hollywoodwinds library.

www.youtube.com/watch?v=r_9LTkZQWXU

And finally we have a walkthrough of Symphobia, a very popular orchestral effects library that includes multiple brass effects.

www.youtu.be/EyT4yUv70Wc?t=206

Keep in mind that sample libraries only cover a fraction of the textural effects available to musicians. If you are interested in diving deeper, we recommend looking at contemporary and 20th century concert scores (Penderecki, Corigliano, Tower, Lutoslawski, etc.) as well as film scores. In addition to perusing these scores, make friends and experiment! There is no better way to learn to write textural and aleatoric effects that are idiomatic to each instrument than to collaborate with players.

Examples of Brass and Woodwinds in Games

Banner Saga (wind band)

www.youtube.com/watch?v=Pq8r2wEIgC0

Cuphead (jazz band with sax!)

www.youtube.com/watch?v=XORwfYUH23Y

World of Warcraft: Wrath of the Lich King (brass writing within orchestra)

www.youtube.com/watch?v=EuKGNGZjUJI

Abzu (woodwind writing within orchestra plus choir)

www.youtube.com/watch?v=TCuhQLgDgIg

Exercise

Find another gameplay clip (3 - 5 minutes) and score it entirely using woodwinds and/or brass. Make sure to keep the chordal passages balanced and smooth. Try experimenting with each instrument as a soloist by passing the melody around. Use instruments that aren’t often used for virtuosic passages (bassoon, tuba, etc.).

Examples of Choir Writing in Games

Let’s take a look at some examples of choir writing in games.

Below is a link to Jessica Curry’s score for Everybody’s Gone to the Rapture. This is a beautifully written score that features a choir singing in a sacred choral style. Pay attention to the chords and how the voices move. Think about the text as well. Can you imagine scenarios in a game where text might hinder the experience?

www.youtube.com/watch?v=XvBKsUe4lag

Another great score with choir is Dante’s Inferno, by Garry Schyman. This track is much more active and horrific. Think about some of the effects the choir is creating; how would you notate them?

www.youtube.com/watch?v=-vC3UjQijk4

Christopher Tin’s “Sogno di Volare” from Civilization VI is another wonderful example of choral writing in games.

www.youtube.com/watch?v=dX9T4vuVOUA

Exercise

As with the other exercises, try scoring a 3-5 minute gameplay scene with choir only. If you have a phrase-based library, use text without making it the focus. If not, use a variety of “oohs” and “aahs” to achieve the desired mood.

Exercise

Add choir to any of the previously orchestrated examples.

Exercise

Make friends with a vocalist and add a solo vocal part to any of the previously orchestrated examples.

Examples of Percussion Writing in Games

Here we have some pitched and unpitched percussion examples.

www.youtu.be/0B4e3OzM_CY?t=28

Above we have an action-packed percussion example from the game Ghost Recon Future Soldier, composed by Tom Salta. This is in a hybrid style, where the percussion sounds organic but is most likely sampled. Pay close attention to the variety of unpitched percussion sounds that Salta used. They range in pitch, volume, and timbre, which makes for a fuller, more exciting arrangement. Salta has a fantastic way with percussion - his rhythms are so intricate, but each percussion instrument is only one piece of the puzzle. The overall “rhythm” of the cue comes from the composite of each percussion instrument, rather than just one or two. Think about using this technique in your own compositions. Don’t underestimate the “smaller forces” in your percussion libraries. They can make a huge impact.

www.youtube.com/watch?v=zip3u016uXE

And now we move back to one of the classics. The Legend of Zelda: Ocarina of Time uses a (synthetic) percussion ensemble for the Goron City Theme. You’ll hear marimbas and semi-pitched (talking) drums, yet it never gets tiresome. We get plenty of melody, and the groove is interesting enough to loop for hours without causing listening fatigue. This cue shows that pitched percussion can be used in a variety of ways including grooves and melodic lines.

www.youtu.be/ZXJWO2FQ16c?t=413

Above is a slightly unconventionally orchestrated, yet amazingly beautiful piece by Steve Reich called “Music for 18 Musicians.” Here the marimba, a pitched percussion instrument, is used to add to an evolving rhythmic texture. Minimalism like this is surprisingly rare in games, but it has been used to striking effect in Flower (which focuses heavily on guitar rather than percussion).

www.youtu.be/FHuD5y-PZM0?t=102

Lastly we have pitched percussion supporting melodic instruments in the example above. Typically you’ll see the xylophone paired with woodwinds at the unison or octave to add a crisp attack to the phrase. Listen to just about any John Williams score and you’ll hear this memorable effect. In the example above you’ll hear the effect clearly at 1:42.

Exercise

Score a 3-5 minute gameplay scene with percussion only. Boss battles and action sequences work particularly well with this. Don’t forget that pitched percussion instruments like marimba, xylophone, vibraphone, and glockenspiel are capable of just about any flavor of melody, from horrific to romantic.

Exercise

Add percussion to any of the previously orchestrated examples.

Exercise

Make friends with a percussionist and write a solo for any melodic percussion instrument. This will be more challenging than you might think!

Examples of Full Sampled Orchestra in Games

At last we will discuss some examples of the full orchestra in games. First we will cover some useful timbral pairings for a variety of effects. Then we will look at a few examples of some of our favorite orchestrations in game scores.

Timbral Pairings and Useful Techniques

Low strings + Bassoons + Trombones

This is one of the most powerful combinations in the orchestra. Often you’ll hear rhythmic ostinatos that drive energy into a cue played by low strings and bassoons. Trombones are sometimes added to phrases that include sharp staccato, creating a powerful punch.

Below we have an example of a score with just these three timbres. In the first three bars the bassoons and cellos share a figure, paired at the unison. The cellos are playing with a measured tremolo while the bassoons have the same figure with a staccato articulation. In this case the two bassoons are providing accents on the eighth notes. In the final bar the two sections are joined by three trombones (two tenors and one bass), a contrabassoon, and the contrabasses. These additions provide power and volume to the fortissimo stabs.

EX 7.14a

Violins + Flute (or Piccolo)

This is an extremely useful combination. For melodic lines, it is a staple. The flute (or piccolo) adds clarity and brightness to the lush violins. This pairing is also capable of producing “ethereal” textures in the upper range. Below you can hear this pairing in a famous passage at the beginning of Benjamin Britten’s “Four Sea Interludes.”

www.youtube.com/watch?v=-6esm67yWpA

Exercise

Write 4 - 8 bars of a melody. Mock it up with a violin section. Now do the same with one or more flutes. Listen to the melody with each timbre separately, and then together. What changes about the melody? How does the mood change? What game scenarios can you imagine where either of the timbres alone would be more appropriate than the two combined in terms of mood?

Below we have another example; this time it’s more of an ethereal texture than a melody. The first violins are now playing a drawn out arpeggiation in unison with two flutes. The measured tremolo adds a dreamlike mood to the figure. With the optional addition of the harp an octave below the winds and strings we are left with a lovely wandering texture. These kinds of textures are great for games because they can be as intricate as you like, yet they usually float in the background of an orchestration without getting in the way. With only three parts you can cover a lot of ground, producing a very full sound without distracting the player’s attention.

Ex 7.15a

Exercise

Try using a technique similar to the one described above to compose a textural theme using only strings, woodwinds, and an optional harp.

Piccolo + Xylophone

In the following example we’ll take a close look at the “John Williams” pairing mentioned above. Here we’ve paired a piccolo and xylophone in the same octave to add sharpness to each attack. The piccolo part is marked staccato, but there is no need to add the staccato marking to the xylophone part due to its transient nature. This figure would cut through just about any orchestration, and it works well with most combinations of woodwind + melodic percussion.

Ex 7.15a - Download Now (PDF 27KB)

Full Section Unison Tutti

The full section unison tutti is somewhat underutilized. This effect occurs when all voices in a section perform the same part at the same octave. It usually provides a very strong, lush voicing for melodies. Below is an example of the unison tutti in the string section (starting at ~11 seconds).

EX 7.17

Orchestral Effects and Textures

This isn’t a particular technique per se, but there are many interesting textures that can be played by a full orchestra. Often these effects are found as pre-recorded samples in libraries like Symphobia and Spitfire’s London Contemporary Orchestral Effects. Hollywoodwinds, which we mentioned earlier, contains various runs and rips in the wind section. Orchestrating these effects with samples either requires such pre-recordings, or some careful orchestration: compose a run in one instrument, then copy and adapt it across a few others.

Below is a “run,” a sweeping scalar figure, played by various members of the orchestra. Notice how it starts slow and low, and then other instruments “sneak in,” building the density and the frequency range of the run. It’s important to note that even for full orchestral effects, it is not essential to have every instrument playing all the time. In fact, that could ruin an otherwise effective run. Here the ranges have been carefully considered so that each instrument enters at an opportune moment, building the drama of the run. The brass is virtually nonexistent, only making an entrance to emphasize the final note with a rip in the French horns. The trumpets and trombones aren’t even necessary. Also notice that the timpani is held back until the very end, notated with three slashes for the roll, as is customary in percussion notation.

Ex 7.18a - Download Now (PDF 23.6KB)

For aleatoric textures with strings, there’s no better starting point than Krzysztof Penderecki’s “Threnody to the Victims of Hiroshima.”

www.youtu.be/HilGthRhwP8?t=34

Takashi Yoshimatsu is an absolute genius with orchestral effects. In many cases he will offer up a framework for improvisation, and combine aleatory with standard written parts for support. In his “Cyber Bird” concerto for alto saxophone, Yoshimatsu asks the soloist to improvise quite a bit (along with a jazz rhythm section) as in the example below. The section builds to an absolutely insane climax with an orchestral cluster at 6:05.

www.youtu.be/Xp9zhpuRlUw?t=307

For brass textures and aleatory it is really hard to beat John Corigliano’s “Circus Maximus.” Unfortunately the score is hard to get a hold of, but listening carefully to the antiphonal brass in the first movement will give you a treasure trove of brass writing ideas.

www.youtube.com/watch?v=lYF4ndDX1pg

Working with aleatoric choir textures is extremely effective and surprisingly easy. The choir is capable of improvising beautiful consonance and ambient textures as well as creepy or horrifying textures. Below we’ve provided an example of how you can achieve a “creepy whispered” texture by providing only fragments of sentences and a framework for the speed of the whispered text. The string harmonics supporting the whispers add to the surreal atmosphere.

Ex 7.19a - Download Now (PDF 56.3KB)

Another fantastic example of creative textures is Caroline Shaw’s “Partita for 8 Voices.” The group performing in the video is Roomful of Teeth, an experimental a cappella group that studies fringe vocal techniques from around the world and combines them in practice. This piece exhibits just about every vocal texture imaginable, from smooth motet polyphony to throat singing to beatboxing.

www.youtu.be/NDVMtnaB28E?t=11

Exercise: Take 2 -3 of the score examples above and mock them up with your template. Focus on getting the most organic, natural sound that you can.

Examples of Full Orchestra in Games

Horizon Zero Dawn (opening sequence)

www.youtu.be/5fHgVuVAQFs?t=35

Super Mario Galaxy (full orchestra with electronics)

www.youtube.com/watch?v=s7hMIHpQGGo

Ori and the Blind Forest (full orchestra with vocals)

www.youtube.com/watch?v=MkzeOmkOUHM&t=1142s

Spider-Man (full orchestra, adaptive swinging sequence)

www.youtube.com/watch?v=Z7JkMYrgNtU&vl=en

(Pay close attention to when the music triggers and ducks out!)

Exercise: Pick another gameplay scene and score it with an entire orchestra. Make use of as many of the techniques mentioned previously as you can.

Writing Homophonically

Below we have an example for analysis of a brief homophonic cue. Homophony is a common texture found in games because it moves smoothly from chord to chord. The sound is capable of sitting and looping in the background without calling attention away from gameplay. It is also very easily layered on top of, which is useful for a variety of adaptive scenarios. Melodies can be layered and triggered, or the homophonic texture can be split into its respective timbral groupings with each timbre being triggered under different conditions. The only catch is that for these layers to be as effective as possible (and to loop smoothly) the voice leading must be controlled and the groupings must be dynamically balanced. Approaching homophonic scores like this is relatively simple: just analyze the chord progression and watch how the voice leading functions. In most cases all chord tones will be fully represented and the voice leading will be scalar and simple.

Ex 7.20a - Download Now (PDF 36.2KB)

COMING SOON (EX 20.a)

This is a very short, simple example of homophonic writing. Note the homogeneous rhythms; this is a typical element of homophonic writing. Take note of the way each chord is voiced. What timbral groups are voicing which chord tones? Count the number of players and evaluate whether or not each chord tone is balanced (refer back to Table 7.2 in the text for the equations). Also look at the voice leading, particularly at the loop point. Is it a smooth loop?

Exercise

Try mocking up this short example. Try balancing it using only MIDI CC’s (expression and modulation) rather than using any mixing procedures like compression or automation. Is there anything that can be improved upon?

Writing Thematically

Below we have a thematic example. This builds on a few of the orchestration techniques we discussed. It is more of a combination of simple textures than a texture in and of itself, but nevertheless it is commonly found in games. A great way to analyze thematic cues like this is to first dissect what motifs are happening, and then evaluate which instruments are playing them. Follow each motif through to the end as if you were reading a “choose your own adventure” story. This will tell you how the composer intended to develop each motif, and give you some clues as to how best to develop your own motifs.

Ex 7.21a - Download Now (PDF 58.2KB)

<COMING SOON (EX 21.a)>

In this example we have two main motifs: an ostinato (m. 1) and a theme (mm. 2 - 4). The main theme is presented in a few ways. First, the bass clarinet takes the theme. The only counterpointing orchestration is the ostinato played by the cellos and percussion. The melody here is intentionally subtle. In measure 4 we get a hint of a counterpointing melody in the basses, contrabassoon, and tuba.

Continuing to follow the thematic motif, in measure 5 the theme is now brought into full light. The full brass section along with the high strings and winds now take the theme in part, sequencing it upwards until the climax in bar 7. This shows us two things. First, it shows us that orchestrating a theme can drastically change the mood. The theme was first presented by a solo bass clarinet, which sounded dark and rich, but still quite subtle and reserved. By presenting it in the full brass with winds and strings doubling it in 5(!) octaves we have massively changed the mood. It takes up virtually the entire mix with this orchestration, which presents a powerful conclusion.

Second, it shows us a method of compositional development. The theme isn’t repeated in its entirety in bars 5 - 6. The first four notes are taken and then sequenced upward, ending in entirely new material for the resolution. This is a very microscopic view on how to develop motivic material throughout a game score. Tiny chunks can be used and reused, reorchestrated and dissected again as is needed throughout the game.

Moving on to the development of the ostinato, in measure 5 it is completely dropped in the cellos, left only to the percussion. Instead, the low strings take over a homophonic chord progression through to the end of the cue. This tells us another important point about orchestrating ostinatos: they don’t need to be repeated exactly each time to be effective. In this case the melody is still ringing in our ears, so the triplet rhythms of the toms and timpani are more than enough to keep the pitch material of the ostinato in our brains. Similar to the thematic motif, the ostinato is capable of being dissected and developed throughout a score.

Exercise

Write a theme and orchestrate it in a variety of ways using your template. Don’t forget to develop it motivically!

Writing Polyphonically

Polyphony is characterized by multiple voices performing linear, melodic material that interacts with the other voices. You may be able to tell just by a surface glance, but the rhythms and parts here are not homogenized as in the homophonic example. Instead we have a variety of contrapuntal voices, with disparate rhythms and themes jumping from instrument to instrument. These voices are intentionally not balanced so that relevant motifs can be brought into the foreground while other elements remain in the background. This is important to note because without a foreground and background, complex polyphony can become overwhelming to the point where no part is presented clearly. In other words, this piece exploits the balance equations (tables 7.1 and 7.2) to create imbalance.

It is sometimes possible to analyze chords as in homophonic textures, but in most cases this will be more work than it’s worth. A better approach is to find the motivic elements like we did in the thematic example, and follow them through to the end. Note which instruments are playing which motifs. Who is doubling what at what octave? How is the motif presented with each repetition? What has changed? Then you can go back and analyze how the motivic elements are interacting. Are there counterpointing parts? Are two motifs being played simultaneously, or does the composer fire them off in sequence? This process is made infinitely more complicated when it comes to game scores because each motif is likely to be triggered based on gameplay data. Here, we will focus mostly on the orchestration and part prep, and then in Chapter 9 we will dive more into how the adaptive elements work.

Ex 7.22a - Download Now (PDF 99.8KB)

<COMING SOON (EX 7.22a)>

Adaptive Structure

Before we begin, let’s cover the adaptive structure of this piece. It’s quite complex to the point where it is unlikely you’d ever see this much adaptivity in 8 bars of music, but we wanted to show you a variety of possibilities for adaptive orchestration.

In short, this piece is a linear introductory stinger followed by a loop. The loop (mm. 5 - 12) contains a variety of vertical layers, thus resulting in an adaptive system with both vertical and horizontal approaches.

Color Scheme

You are probably asking yourself, “what in the world are all those colors for?!” This is a valid question, and the answer is that the colors help visually separate the vertical layers. This is an advanced method that orchestrators use to keep the layers organized during a recording session with an orchestra. The conductor can say “Yellow!” and everyone in the orchestra will look at their parts and know exactly what to play. This particularly helps when the layers dovetail between sections rather than splitting the layers into traditional instrument sections. We feel that this allows for much more emotional flexibility with layering.

Below we’ll take a look at each color independently.

Yellow - We’ll start with yellow because this is the primary ostinato that begins in m. 1 and continues throughout the piece. This is simple to analyze as the cellos are the only group that have it. It functions much the same way that the ostinato in our thematic example did.

Purple - This is a very brief pizzicato figure in the violin II’s. It lasts only for bars 5 - 7, but it adds a brand new color with the col legno and pizzicato articulations, and then ducks into a tremolo in bar 3. Currently this tremolo figure adds some darkness and interest to the previous layer, but later on we’ll see that it actually provides support for the foreground layer, which we’ll get to shortly. This is an example of a layer that really has two functions that change depending on the layers present in the mix.

Mint Green - This layer starts with the violas in m. 5 and picks up the violin II’s in m. 8 and clarinets in m. 12. This layer on its own is homophonic, but it is being juxtaposed with other layers, making it polyphonic. This layer adds some mystery and darkness to the otherwise driving and energetic ostinato.

Light Blue - This layer is mostly in the clarinets (until they switch to the mint green layer in the final bar). This is actually just the mint green layer in a higher octave (which does shift the mood, and make it more prominent), with the addition of multiphonics and other aleatory.

Pink - This layer is made up of the horns and high reed instruments. The horns provide chordal swells while the reeds add some counterpointing arpeggiation. Note how the oboe and English horn dovetail in m. 6. This is a useful way to ensure rhythmic consistency.

Light Brown - This layer is made up mostly of percussion, along with a subtle bass part in the tuba. Currently this layer only adds some color and emphasis to the previous layers.

Dark Brown - This layer is made up of low instruments. The lower strings, winds, and brass here are adding power and emphasis with these quick stabbing rhythms. The figures themselves add an ominous energy to the previous layers, and they provide a sharp forward momentum.

Red - This layer is interesting because it is actually a counterpointing bassline performed by the low strings and winds. In this layering schema we actually get the bassline before the foreground melody. Although there are various melodic figures present in the previous layers, this is our first real foreground melody.

Turquoise - This layer begins to really add some punch. This is a foreground “hit” layer. The brass figures prominently here, adding stabs and aiding the transition in bar 8. Note that they still “lock in” like a puzzle around the other layers - they do not overpower the other layers completely! The piccolo is also prominent here, adding some rips and runs to the top of the frequency range. This is both a function of effective polyphony and adaptive scoring technique.

Orange - Finally we have our main foreground melody. This is a lively and chaotic layer split between high strings, winds, piano, and percussion.

Summary

In summary, each of these layers adds a particular flavor or mood of its own. The ordering above actually moves from lightest to heaviest, or least intense to most intense in the smallest increments possible. This is important, because improperly ordering your layers will render subtler layers useless.

In terms of orchestration analysis, think of each of these layers as their own motifs. They move from timbre to timbre and develop as the cue progresses. Look at how they function on their own, and with each of the other layers. How does the voice leading work? Can you extrapolate any particular chord progression? It’s crucial when writing complex polyphonic cues to take into account range and counterpoint. Start with a motif. Then write a counterpointing part in an alternate timbre and an alternate range. This will keep your orchestration from getting cluttered, and it will maximize the contrast between parts so that each is audible. Continue stacking parts until you are satisfied. Notice that the orange layer is virtually unobstructed in its range. And for the brass stabs in the turquoise layer there are no other brass timbres competing; they are functioning as one unit. This contrast makes each layer as clear as possible.

Below we have included a Sibelius file for you to listen to and study. We’ll also take a look at this piece in FMOD in Chapter 9 to hear these layers and how they function together.

Ex 7.22c Polyphonic Ex01 Full Score Highlights - Download Now (PDF 99.8KB)

Exercise

Find an active gameplay capture and score it polyphonically.

Ambient Cues

Not all cues have to be big and complex! In fact a majority of game cues need to be subtle. The ambient cue below is sparse, and colorful. Instead of focusing on chords or voice leading, look at the interplay between the different timbres.

Ex 7.23a - Download Now (PDF 36.9KB)

Exercise

Compose 2 - 3 ambient cues using only 3-4 instruments.

Non-Orchestral Styles/Genres in Games

Below we’ve compiled a list of examples to go along with the styles outlined at the end of Chapter 7 in the text. These examples should help you get a feel for the variety of styles found in games, and hopefully provide inspiration for creating some unique ensemble choices in your own projects!

Chamber

Chamber music is basically just a smaller version of an orchestra, often focused on a particular section rather than all of them. For example, a chamber piece might have a string quartet and a woodwind quartet, omitting the brass entirely. In general, any kind of “chamber ensemble” usually refers to a group of instruments that are small enough to have a single musician on each part as opposed to a “section” of musicians on a part.

Examples in Games: Everybody’s Gone to the Rapture, Bioshock Infinite, BestLuck, The Last of Us, Dead Space 2, Kingdom Hearts, Firewatch

“Slipping Away” - Jessica Curry, Everybody’s Gone to the Rapture (chamber string ensemble, piano, choir)

www.youtube.com/watch?v=k9Izq0bFWoU

“The Battle for Columbia” - Garry Schyman, Bioshock Infinite (chamber string ensemble, percussion)

www.youtu.be/6GiKRkH5iH8?t=1919

“Kairi I” - Yoko Shimomura, Kingdom Hearts (synthetic chamber ensemble)

www.youtube.com/watch?v=rgkLbf1jEcw

“All Gone (No Escape)” - Gustavo Santaolalla, The Last of Us (chamber string ensemble)

www.youtube.com/watch?v=y_xpAFcgK-g

“Lacrimosa” - Jason Graves, Dead Space 2 (string quartet)

“Aurora’s Theme for Orchestra” - Béatrice Martin, Child of Light (chamber ensemble)

www.youtube.com/watch?v=v6KcYN0A5LY&t=160s

“When Cherry Blossoms Fade” - Spencer Bambrick, BestLuck (chamber orchestra)

Ex 7.24a - Download Now (PDF 79.8KB)

Electronic / Dance

This style is self-explanatory. Any track that focuses entirely on electronics falls into this category. Don’t be deceived though, the applications for electronic music go far beyond dance music.

“Investigation” - Nima Fakhrara, Detroit: Become Human

www.youtube.com/watch?v=vjgGSxR5Z-g

“Adventure” - Disasterpeace, Fez

www.youtube.com/watch?v=hXfos-mAMMA

“Story of the Thousand Year Door” - Yoshito Hirano, Yuka Tsujiyoko, and Koji Kondo, Paper Mario: The Thousand-Year Door

www.youtube.com/watch?v=3j3VL70JpBo

“Once Upon a Time” - Toby Fox, Undertale

www.youtube.com/watch?v=SxNcKXjfaQo

“Milky Way: Explore” - Ben Prunty, FTL

www.youtube.com/watch?v=WFkGjEut9U4

“Prologue” - Kinuyo Yamashita, Castlevania

www.youtube.com/watch?v=V0z2yeLlqpY

Rock and Pop

“Intro Theme” - Yoko Kanno, Ragnarok Online 2

www.youtube.com/watch?v=QEqEh9GGyx4

“Theme of Laura” - Akira Yamaoka, Silent Hill 2

www.youtube.com/watch?v=QFvt2cNSOaM

“Rip & Tear” - Mick Gordon, DOOM

www.youtu.be/Jm932Sqwf5E?t=45

Hybrid

Hybrid music refers to a combination of any number of styles. Most often it is a combination of electronic and orchestral. Check out the examples below, because these two styles go together like peanut butter and jelly!

Examples in Games: Deus Ex: Mankind Divided, Super Mario Galaxy, Assassin’s Creed, Little Big Planet 3, Mirror’s Edge, Spyro the Dragon, Detroit: Become Human
- Deus Ex: Mankind Divided
- “Earth” - Jesper Kyd (Assassin’s Creed series)
- Super Mario Galaxy
- Little Big Planet 3 - Winifred Phillips
- Mirror’s Edge
- Spyro the Dragon - Stewart Copeland
- Firewatch

Jazz and Big Band

Examples in Games: L.A. Noire, I Expect You to Die, Metal Gear Solid 3: Snake Eater, Cuphead
- L.A. Noire - Andrew Hale
- Cuphead
- I Expect You to Die - Tim Rosko, Bonnie Bogovich, and Connor Fallon
- Metal Gear Solid 3: Snake Eater
- Super Mario Odyssey

“World” Music

Examples in Games: Civilization VI, Prince of Persia: The Sands of Time, Revelation Online
- Civilization VI
- EBOH
- Revelation Online - Neal Acree (Chinese instruments)

Electroacoustic Music

Examples in Games: Silent Hill: Shattered Memories, Limbo, Inside

Part Preparation

By “part preparation” we mean getting your music out of your DAW and into a score format, with individual parts for actual musicians to read and record. Before getting started on part preparation, we want to make two important points clear. First, this process deserves volumes of its own to be thoroughly covered. Many talented people create lucrative careers for themselves doing nothing but orchestration and part preparation. Second, orchestration itself is intertwined with part preparation. Part of orchestration is the creative aspect (creating textures and balancing timbres, etc.) and the other part is notating those parts idiomatically for actual human beings to play. The former doesn’t work without the latter, so don’t underestimate this aspect of orchestration.

Below we attempt to distill this complex process into a few simple points to get you started. However, there are numerous resources we encourage you to explore as well. For starters, Elaine Gould’s book “Behind Bars: The Definitive Guide to Music Notation” is a foundational text that will answer 90% of your questions, and make your notation immediately more effective. Things you never knew you were doing wrong will be brought to light. Second, write solo pieces for friends who play instruments that you don’t! It’s incredible the depth of knowledge that can be acquired by simply writing for players, and having them offer feedback on what is idiomatic and what is not. Ask for their opinion on both the notation and the composition, and take back their parts after they have marked them up. Third, use professionally published sheet music as reference. Be aware that these are not always correct or up to date, but they can be a useful reference if you have a score in the style you’re currently composing in. Finally, although it is mostly geared for film, Tristan Noon has a useful ebook called DAW to Score you might find helpful in understanding the overall process.

When you begin the process of part preparation, your job has officially changed. You are no longer a game composer. You are an orchestration and notation expert. You now have two main goals: 1) to accurately convey the “composer’s” intentions to a live ensemble, and 2) to prepare parts that are as clear and idiomatic as possible for each player.

To accomplish these goals, we’ve outlined a basic three step process below:

  1. Preparing the DAW session for export
  2. Importing (or inputting) the score into dedicated notation software (Sibelius, Finale, etc.)
  3. Preparing Individual Parts

Before diving into each of these steps, we’d like to make one thing abundantly clear: an exported MIDI file does not equal an adequately prepared part in the slightest! Even if you export the MIDI file for a very simple solo, you will likely not end up with a readable part. Part preparation is a separate phase in the process of producing a score for a game.

Preparing the DAW Session for Export

This step consists of organizing and consolidating parts so that step 2 runs smoothly. If you were to export an orchestral session as a MIDI file and import it directly into Sibelius, you would likely see something like this:

Ex 7.25 - Download Now (PDF 169.3KB)

The full score is very obviously not readable, and the parts are even less so. The problem is that the MIDI scheme for your DAW and all of your sample libraries is unique, and completely different from the scheme that Sibelius or Finale uses. Most composers wind up with extra keyswitches in their score that the notation program interprets as a counterpointing voice. Other times slight rhythmic inconsistencies (which sound great in a mockup) are interpreted as hyper-exact 64th notes in a notation program, cluttering and overcomplicating an otherwise simple part. For these reasons and many others, we use step 1 to adjust our session so that it can be imported as efficiently as possible.

Before cleaning up your session make sure to save the session as a copy so that you don’t save over your mockup. Then you can begin step 1. Here’s a brief list of some of the things you’ll want to clean up before exporting to a notation program:

  1. Delete keyswitches
  2. Zero out MIDI CC data
  3. Make sure all rhythms are exactly quantized to the largest beat division possible (16th notes, not 64ths, etc.)
  4. Get rid of legato overlaps
  5. Combine separated articulations into single tracks (i.e. violin legato and spiccato)
  6. Get rid of any phrase-based tracks
  7. Delete any extra layers or doubles of parts that bolstered the production/mix
  8. Think of any other extraneous parts or data in your session and delete it

When you’ve cleaned up your session you should be left with essentially one MIDI track per part, with no extraneous MIDI data whatsoever. This will ensure a minimal export/import into your notation program, saving you the time of deleting all of the extraneous data later.
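
If you prefer to script part of this cleanup, the sketch below shows one possible approach using the python-mido library to strip CC data and keyswitch notes from an exported MIDI file before it ever reaches your notation program. The file name and the keyswitch range are assumptions for illustration; adjust them to match your own template.

```python
import mido

SOURCE = 'cleanup_export.mid'   # hypothetical export from the session copy
KEYSWITCH_MAX = 23              # assumption: keyswitches live below C1 (MIDI note 24)

mid = mido.MidiFile(SOURCE)

for track in mid.tracks:
    cleaned = []
    carried_time = 0  # preserve timing whenever a message is dropped
    for msg in track:
        is_keyswitch = msg.type in ('note_on', 'note_off') and msg.note <= KEYSWITCH_MAX
        if msg.type == 'control_change' or is_keyswitch:
            carried_time += msg.time          # zero out CC data and keyswitches
        else:
            msg.time += carried_time          # keep later events in their place
            carried_time = 0
            cleaned.append(msg)
    track[:] = cleaned

mid.save('cleanup_export_clean.mid')
```

Quantization, legato overlaps, and articulation merging are usually easier to handle inside the DAW itself, so treat a script like this as a final pass rather than a replacement for the checklist above.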

Importing Your Score

The purpose of this step is to get your music from your DAW into a professional-looking and easily readable conductor’s score. This is the score the conductor will read from, so it needs to be uncluttered. Before even breaking into the music itself, you’ll need to make sure your cues are organized in the conductor’s score. Using logical titles for each cue in your soundtrack is a must - keep them numbered in a consistent way so that players can easily find the cues they need. Also use large time signatures, especially if there are many signature changes in your score.

Although this step is called “importing your score,” there are actually two ways to get your music from your DAW into a notation program. The first is to export the MIDI file from your DAW (see your DAW’s manual for how to do so), and then open up the MIDI file in your chosen notation program. You should just be able to right click the MIDI file and select “open in…” and then choose your notation program. You can also refer to your notation program’s manual for specifics on importing one or more MIDI files. Importing MIDI is a very easy process and it should only take you a second or two.

If you choose to import the MIDI file into Sibelius or Finale you can either import a single MIDI file of the entire orchestration, or you can import each instrument individually. Sometimes the latter option is easier because you have more control over the instruments the MIDI files are assigned to. It ends up being a cleaner import.

Your job does not end here! Your score will be missing dynamic markings, technique markings, tempo markings, and just about every marking that makes music musical. Slur lines will need to be added, fermatas added, rhythms cleaned up, and so on. Add text, instructions, idiomatic markings like pedaling and breath marks, solo or unison (a2, a3, etc.) instructions, and anything else that makes the score clearer. For some specifics on what not to forget in your musical markings, check out the blog post below by Marius Masalar of Composer Focus:

www.composerfocus.com/score-preparation-for-session-recording/

The second way to get your music from your DAW into Finale or Sibelius is to simply input everything yourself! This is actually much simpler in some ways, and doesn’t necessarily add time to the process. It will actually save you time if your import needs a ton of cleanup, which is why this is our preferred method. Another benefit to this approach is that you can use the DAW session as a reference as you enter the notes, which decreases the likelihood of slip-ups because you’re constantly checking back against the original. We find that there is a greater connection to the piece this way, so the translation into score is very faithful.

Regardless of the approach that you choose, don’t forget to do a read-through to make sure that each part is not only organized and clear, but also playable at the level of sight reading. Look for awkward chromaticism that can be made simpler by respelling enharmonics. Make sure that instrument ranges aren’t exceeded, instrument changes are clearly marked, and that there are no insanely virtuosic figures that will not be sight readable. Send parts that you’re unsure of to friends who play the instrument and ask for feedback - harp, trombone, and organ parts are often problematic. So are percussion parts.

Try to fit each instrument on its own line in the conductor’s score, but if parts are similar in terms of rhythm, then condense them (i.e. 3 trumpets on one line). If the parts are very independent from each other rhythmically, or there is complex voice crossing, they’ll need to remain separate. There are various ways to accomplish this using Sibelius and Finale, but make sure the individual parts still exist even if they are hidden. This is because every musician should still have her or his own part, even if the parts are condensed in the conductor’s score. We’ll discuss this further in Step 3.

Additionally, check that extremely important parts are not competing with over-orchestrated background parts. Go through every single part until you are confident that the score is complete. If you aren’t sure, grab a professional contemporary orchestral score and use it as a reference to see if there’s anything you’ve missed.

Preparing Individual Parts

Once you have polished up your conductor’s score, it’s time to move on to the individual parts. If you’ve done everything meticulously so far, this part should be relatively simple. All notation programs allow you to “zoom” into each part (e.g. trumpet 1, violin II, etc.). Your first consideration will be to figure out who gets what parts. In short, every musician should have her or his own part. Wind and brass players should all have individual parts rather than condensed parts. So trumpet 1 should be a separate part from trumpet 2 even if they are notated on the same line in the conductor’s score. All specialized instruments like piano, harp, vibes, etc. should have their own parts as well.

Percussion can be a bit tricky to translate into parts from a mockup. For starters, the notes will be completely different from the pitches in your MIDI export. Most of the time you’ll be notating on a single-line staff. If the instrument has multiple pitches (like tom-toms), you may need a staff of up to five lines, but it will still need to be adjusted after your import. Additionally, percussionists don’t read by instrument the way that other sections do. Percussionists need to switch instruments so often that they rarely stick to one instrument at a time. It’s more likely that you’ll have to notate based on the number of percussionists at your disposal. If there are three percussionists at the recording session, then you’ll have to make sure your percussion parts are playable by only three percussionists. Make sure that there is time between instrument transitions for percussionists to run over to another instrument. The exception here is the timpanist. Timpani players almost never switch from the timpani.

Ex 7.26 - Download Now (PDF 53.8KB)

If you are writing for choir or solo voice with text, make sure the lyrics are clear and readable. Break up words with hyphens (i.e. “cat-e-go-ry”), and start each syllable with a consonant if possible. For melismas (extended notes on a single syllable) you’ll use an underscore: “ahh_______”.

Ex 7.27 - Download Now (PDF 188.8KB)

Finally, for the strings, each section gets its own part, so it’s okay to notate divisi on the same staff. Be aware of how many players are at your disposal. You will eventually have to print parts, and string players usually sit two to a music stand. Each stand needs a part, so make sure to print enough copies!

Another consideration is where to put page turns in the general layout of each of your parts. This is often overlooked, but a page turn can ruin an otherwise beautifully recorded cue, so don’t forget to account for it! Adjust the layout so that page turns do not occur in the middle of complicated parts. Remember that the music will be laid out such that pages 1 and 2 are on the stand simultaneously; the page turns occur between pages 2 and 3, 4 and 5, etc. Ideally each cue can be condensed to two pages. If not, try to make sure each page turn occurs during a rest, preferably while other instruments are playing loudly enough to cover the sound of the turn. By planning the layout of every one of your parts carefully, you can avoid ruining a few takes.

Below is a pretty solid look at the preparation process. This video starts off at step 2, when the music is already input into a conductor’s score format, and it needs to be cleaned up a bit and parts need to be extracted.

www.youtube.com/watch?v=ZzN09sGrw_c

Finally, after the score and parts are all excellently prepared you will need to actually print the sheets. Use Tabloid (11 x 17in) for the conductor’s score. Standard letter (8.5 x 11in) is okay for parts, but 9 x 12in is better if you can swing the price.

Additional Resources on Orchestration

Ranges

Below is an orchestral range chart. Use it!

www.orchestralibrary.com/reftables/rang.html

Trombone Slide Positions

Below is a useful reference for trombone slide positions. Trombone parts are notorious for being overlooked in terms of playability due to the unfamiliar nature of the slide positions. Don’t write unplayable parts!

www.michaelclayville.com/2011/10/24/everything-you-never-cared-to-know-about-the-trombone-glissando/

Synthesized Timbres

Types of Synthesis

There are many types of synthesis, too many to cover here in detail, but below is a list of some of the more useful types found in games. Also see Chapter 3 in the SoundLab for further coverage of synthesis.

Subtractive Synthesis

This is probably the most common form of synthesis found in games. In this method wave shapes, or waveforms, are combined to create a sound with a vast number of harmonics and a lot of complexity. Filters are then used to subtract frequencies and to carve out the desired sound. Most synthesizers include filters, so much of the synthesis you will do as a game composer will incorporate elements of subtractive synthesis.
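
As a quick illustration of the subtractive idea, here is a minimal sketch (assuming numpy and scipy, with arbitrary frequency and cutoff values): generate a harmonically rich sawtooth, then carve it with a low-pass filter.

```python
import numpy as np
from scipy.signal import butter, lfilter

sr = 48000                     # sample rate in Hz
t = np.arange(sr) / sr         # one second of time
freq = 110.0

# Start with a "wall of harmonics": a raw sawtooth wave.
saw = 2.0 * (t * freq - np.floor(0.5 + t * freq))

# Subtract frequencies: a 4th-order low-pass filter with a 1 kHz cutoff.
b, a = butter(4, 1000.0, btype='low', fs=sr)
filtered = lfilter(b, a, saw)
```

Sweeping the cutoff over time is what gives classic subtractive patches their sense of movement.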

Additive Synthesis

Additive synthesis is a bit trickier to execute, but there are a few common forms of it that are used pretty widely. In this method sine waves (a very simple wave shape) are added together at specific frequencies to generate the desired harmonics. Native Instruments’ FM8 is a widely used synthesizer that builds complex timbres from simple sine operators in a related way, via frequency modulation (FM) synthesis.
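
A minimal additive sketch, assuming numpy and an arbitrary 1/n amplitude weighting, might sum the first several harmonics of a fundamental like this:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
fundamental = 110.0

# Add sine waves at whole-number multiples of the fundamental.
# The 1/n weighting is just one choice; changing it reshapes the timbre.
tone = sum((1.0 / n) * np.sin(2 * np.pi * fundamental * n * t) for n in range(1, 9))
tone /= np.max(np.abs(tone))   # normalize to avoid clipping
```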

Wavetable Synthesis

Wavetable synthesis occurs when two or more distinct wave shapes are placed along a spectrum. The composer then modulates between them, which changes the resulting sound to form an amalgam of the shapes. A very popular wavetable synthesizer is Native Instruments’ Massive (which also includes filters, and therefore subtractive synthesis).
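
The sketch below (numpy, with arbitrary frequencies) shows the core idea in miniature: two wave shapes on a continuum, with a modulation position crossfading between them. A real wavetable synth scans a whole table of shapes, but the interpolation principle is the same.

```python
import numpy as np

sr = 48000
t = np.arange(sr * 2) / sr                  # two seconds
phase = 2 * np.pi * 220.0 * t

shape_a = np.sin(phase)                     # first wave shape (sine)
shape_b = np.sign(np.sin(phase))            # second wave shape (square)
position = np.linspace(0.0, 1.0, len(t))    # modulation position, 0 -> 1

audio = (1.0 - position) * shape_a + position * shape_b
```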

Granular Synthesis

Granular synthesis is the only method here that does not use pure waveforms. Pure waveforms are the basic sounds generated from scratch by a synthesizer or computer (we will explore these shortly). Instead, granular synthesizers take recorded samples and chop them up into tiny grains to be processed and reorganized according to the composer’s needs. This isn’t pure synthesis in the sense that we are working with recorded samples, but we have included it in this section because it sounds cool. Therefore we highly encourage you to experiment with granular synthesis in your game projects. Spectrasonics’ Omnisphere II includes a phenomenal granular synthesis engine.
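
For a rough feel of the granular approach, here is a small numpy sketch (the `sample` argument is assumed to be a mono array you have loaded yourself): it chops a recording into short windowed grains, shuffles them, and overlap-adds them back together.

```python
import numpy as np

def granulate(sample, grain_len=2400, hop=1200, seed=0):
    """Chop a mono sample into grains, reorder them, and overlap-add."""
    rng = np.random.default_rng(seed)
    starts = np.arange(0, len(sample) - grain_len, grain_len)
    rng.shuffle(starts)                        # reorganize the grains
    window = np.hanning(grain_len)             # smooth each grain's edges
    out = np.zeros(len(starts) * hop + grain_len)
    for i, s in enumerate(starts):
        out[i * hop : i * hop + grain_len] += sample[s : s + grain_len] * window
    return out
```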

These synthesis methods all have their own unique character. Many synthesizers incorporate one or more of these methods, so in some ways the method of synthesis matters less than the synthesizer itself and your comfort level with it.

Fundamentals of Synthesis

Pure Wave Shapes and the Oscillator

The most fundamental element of a synthesizer is the oscillator. This is the component of a synthesizer that generates sound in the form of a particular waveshape. Knowing the basic waveshapes will help you include synthetic elements in your game music more effectively. Below is a list of the basic wave shapes in order of their harmonic complexity.

Sine Wave - The simplest waveshape possible. This consists of a tone at a single frequency.

Triangle Wave - This is a slightly more complex version of a sine wave. It sounds similar to the sine wave, but has a bit more of an edge to it. It still has a very hollow sonority.

Square Wave - This is again slightly more harmonically complex than the triangle wave, but it is much more pronounced. The hollowness here is a bit more biting and less pure, but equally pronounced.

Sawtooth Wave - The sawtooth, or saw wave, is another step up in complexity from the square wave. It is much more aggressive than the square, however, and should be easily differentiable. It may sound almost distorted due to these characteristics.

Noise - Noise is an often overlooked waveshape, but it can be extremely useful. There are a few different types of noise (white, pink, etc.), but they all sound like some form of static.

The best way to get to know these basic wave shapes is to open up your synthesizer and turn off everything except for a single oscillator module. Most synthesizers will then allow you to listen to these waveforms without any added effects or processing. It is extremely helpful to look at these waveforms with an equalizer on. By turning on the EQ’s visual analyzer you will be able to see exactly what the frequency spectrum looks like for each wave shape (figure). It’s even possible to see specific overtones on most of these sounds. It’s helpful to compare and contrast the spectrum, and to watch as the spectrum changes as you alter the sound via wavetable synthesis.
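
If you’d rather generate the basic shapes yourself for comparison, a small numpy sketch like the one below (arbitrary pitch and length) produces one second of each; write them out with whatever audio I/O library you prefer and pull them up on your EQ analyzer.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
freq = 220.0
phase = 2 * np.pi * freq * t

sine = np.sin(phase)
triangle = (2 / np.pi) * np.arcsin(np.sin(phase))     # bends the sine into a triangle
square = np.sign(np.sin(phase))
saw = 2.0 * (t * freq - np.floor(0.5 + t * freq))
noise = np.random.uniform(-1.0, 1.0, sr)
```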

Each of these waveshapes can be roughly mapped to the orchestral timbral groupings. Note that this approximation only concerns the sustained timbres; each instrument also has a particular attack which factors heavily into our perception of the sound itself. By looking at these sustained waveforms through an EQ and comparing them to an orchestral group, you should be able to confirm for yourself the similarities between the timbres.

These groupings certainly aren’t exact. It can actually be counter-productive to limit your synthesis to the confines of orchestral timbres. But it is quite helpful to use these comparative groupings as a basis for experimentation by combining wave shapes with other synthetic sounds or with traditional timbres. The only limitation is your own imagination!

The Filter Section

The next module you should be aware of when synthesizing sounds is the filter section. Almost every synth should have a filter section. This is the basis of subtractive synthesis, where we can select frequency bands to cut from our “wall of harmonics” that we created with the oscillators. Some synthesizers will also allow specialty filters that color the sound in some way, or add subtle (or extreme) processing effects.

The Modulation Section

The most complex section of a synthesizer, and the one that changes the most drastically from synth to synth, is the modulation section. This is where the fun begins. The modulation section allows you to use LFO’s and envelopes to create movement and dynamics in your synthesizer patch. LFO stands for “low-frequency oscillator.” This is not to be confused with the normal oscillator, which generates sound in a particular waveform. We cannot hear an LFO; we can only hear the effect that an LFO has on a particular parameter of our synthesizer patch. For example, if we used an LFO to modulate the pitch of oscillator 1, we would hear the pitch change. But we could also use it to change amplitude (volume), or panning, or really anything.

It’s best to think of LFO’s as if they were metronomes, oscillating back and forth at a rate and along a path that we choose. (figure) is an example of an LFO that is modulating the pitch of a pure sine wave. In the first example the LFO is not affecting the sine wave at all. In the second example the LFO is modulating the pitch at a rate of a whole note at 120bpm. The third example speeds things up a bit because we are now oscillating at a rate of a quarter note at 120bpm. The last example changes things up a bit. Instead of oscillating uniformly between two pitches, the LFO is now oscillating in one direction, from the higher pitch down to the lower pitch, in what’s called a downward ramp. This is an example of the LFO path being altered. LFO’s can change their path to match many different wave shapes, so the possibilities are virtually unlimited. This concept may seem a bit confusing because essentially we are using wave shapes to tell LFO’s how to modulate wave shapes… For this reason we like to think of LFO’s as taking a particular path which determines how the sound modulates. The concept will become much more intuitive as you gain more experience with LFO’s and modulating simple sounds.
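
Here is a small numpy sketch of that idea, assuming an arbitrary 1 Hz LFO modulating the pitch of a sine oscillator by plus or minus two semitones.

```python
import numpy as np

sr = 48000
t = np.arange(sr * 2) / sr          # two seconds
lfo_rate = 1.0                      # LFO speed in Hz (the "metronome")
depth_semitones = 2.0

lfo = np.sin(2 * np.pi * lfo_rate * t)                # LFO path: a sine shape
freq = 440.0 * 2 ** (depth_semitones * lfo / 12.0)    # pitch wobbles around A4
phase = 2 * np.pi * np.cumsum(freq) / sr              # integrate frequency to get phase
audio = np.sin(phase)                                 # audible oscillator output
```

Swapping the `lfo` line for a downward ramp or another shape lets you hear the different LFO paths described above.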

Envelopes also modulate the sound coming from your oscillator(s), but they do it in a different way. Where LFO’s are metronomes that move back and forth in a particular way, envelopes are basically just curves that tell your parameters how to modulate. Going back to the pitch example, we can hear many different envelope curves modulating the pitch of our synth patch (figure). Most synthesizers actually allow you to draw in your own envelopes, which can be very fun and rewarding.
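
A minimal ADSR envelope sketch (numpy, with arbitrary segment times) looks like this; multiplying it against an oscillator’s output modulates amplitude, but the same curve could just as easily drive pitch or filter cutoff.

```python
import numpy as np

def adsr(attack, decay, sustain_level, release, length, sr=48000):
    """Build a simple attack-decay-sustain-release curve; times are in seconds."""
    a = np.linspace(0.0, 1.0, int(attack * sr), endpoint=False)
    d = np.linspace(1.0, sustain_level, int(decay * sr), endpoint=False)
    s_len = int(length * sr) - len(a) - len(d) - int(release * sr)
    s = np.full(max(s_len, 0), sustain_level)
    r = np.linspace(sustain_level, 0.0, int(release * sr))
    return np.concatenate([a, d, s, r])

env = adsr(attack=0.01, decay=0.2, sustain_level=0.7, release=0.5, length=2.0)
```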

The Synthesizer

Finally, the last section on our tour of synthesizers is going to be the output module. Usually this includes some type of effects chain to polish up your synth sound. This is also where you can pan or adjust the overall volume of your patch.

The routing and details of every synthesizer are different, but the basic framework is usually pretty consistent. (figure) depicts a simple layout of a basic synthesizer. You can use this to get your bearings on pretty much any synthesizer on the market, and begin making interesting sounds. We recommend simplifying things when you first start out by turning off all of the fancy frills and extra components and starting with the basics - the oscillators - and working your way up until you arrive at the output module. When you are more familiar with the process of synthesis, you can then start looking at presets and trying to deconstruct what you’re hearing back into its fundamental components (wave shapes and filters). When you become highly experienced you may even be able to listen to synthesized sounds and determine how they were constructed by ear alone.

Creating Your Own Synthesizer Patches

Once you have the basic elements of synthesis down, you are ready for the most fun aspect of using synthesizers - creating your own sounds! This is a great way to come up with unique sounds to add to your project or template. There are two ways to go about this. One way is to start from scratch (usually a basic sine wave oscillator with no filters or modulation). Another way is to start with a complex preset and tweak it to your liking.

When starting from scratch it makes sense to follow the flow of the synthesizer itself. This usually means starting by choosing your basic waveshapes, then choosing your filter settings, modulation, and so on. Try experimenting with multiple oscillators on different frequencies. One common effect is to include octaves or sub-octaves all into the same patch. Remember that synth patches are more usable and human-sounding if you include some type of parameter change via the mod wheel or velocity. Failure to do this will result in a very lifeless sound.

When working from a preset it can be really helpful to bypass modules and effects to retrace the signal. For example, try loading a heavy preset and then turning off everything but one oscillator. Then turn on each module one by one and listen to the sound. This will tell you exactly what each element of the synthesizer is doing to the sound. It will also make it clear which elements you should tweak and which you should retain.

Common Synthesized Instruments

The possibilities for designing synthetic sounds are infinite, but there are a few very common types of sounds that are useful for games. We have listed a few of them below:

Leads - A lead sound is usually a prominent, sometimes aggressive sound that contains a good amount of upper harmonics. As the name suggests, these sounds are ideal for melodies and attention-grabbing motifs.

Arps - An arp is a synthetic sound that has been arpeggiated. Many synthesizers contain arpeggiators which take a chord as input and trigger the notes in a set pattern, so these can be really fun to play around and experiment with. The character of these sounds is usually “plucky,” meaning the attack and sustain are very short.

Pads - Pads are sounds with a long sustain. Often they resemble string or choir patches and are usually played by blocking out chord progressions homophonically. Pad sounds mostly sit in the background of a mix, but the better ones will have some type of modulation to keep them alive and interesting.

Bass - Bass sounds are actually strikingly similar to lead sounds and can often be interchangeable. The only difference is a heavy focus on lower frequencies.

Subs - Subs, or sub-bass patches are used frequently to add a thick low rumbling to an arrangement. These are heavy, hard-hitting, and powerful sounds often used in trailers. They can be fantastic additions to fill out a template, but be careful not to overuse them.

Perc - Similar to acoustic percussion, synthetic percussion timbres are eclectic. They often model acoustic percussion. For example a snare sound on a synthesizer will usually consist of white noise, shaped with an immediate attack and no sustain. Cymbals would also use white noise, but instead might have a very slow attack and long sustain. Synthetic percussion works great when layered underneath acoustic percussion.

Risers - A sustained sound that is pitched, and literally “rises” upward. Often used to add tension and “gravitas” before an emphasized downbeat.

Falls - The opposite of a “riser.” This is often pitched downward and releases tension rather than increasing it.

Effects - Synthesis is such a robust tool that it is capable of generating just about any kind of sound you can imagine. For this reason, synths are great for creating sonic effects that defy categorization.

Critical Listening

Strings

Detroit: Become Human

Journey

Bioshock

Bioshock Infinite

Winds

The Banner Saga

Brass

Tomb Raider (2013)

World of Warcraft

Percussion

Dead Space

Choir

Everybody’s Gone to the Rapture

Dante’s Inferno

Full Orchestra

Ori and the Blind Forest

Star Wars Battlefront

God of War

Assignments and Practice

Assignment A

  1. Find a gameplay capture of your choice, and score a 1:00 - 2:00 minute gameplay scene using only strings.
  2. Find a different gameplay capture of your choice, and score a 1:00 - 2:00 minute gameplay scene using only woodwinds.
  3. Find a different gameplay capture of your choice, and score a 1:00 - 2:00 minute gameplay scene using only brass.
  4. Find a different gameplay capture of your choice, and score a 1:00 - 2:00 minute gameplay scene using only percussion and choir.

Assignment B

  1. Take a 3:00 minute gameplay capture from a game of your choosing. Make sure the 3:00 minutes covers a variety of situations and scene transitions. Use your full orchestral template to score the scene.
  2. Now, open up a new FMOD session. Restructure your score so that it can be implemented adaptively into FMOD. Make sure your session contains enough adaptive elements so that your music adequately underscores any and all situations in the gameplay capture.

Assignment C

  1. Pick a scene from a game and write an adaptive musical cue for it using only a handful (4 - 5 maximum) of instruments. The scene should have enough complexity so that your music can change and evolve with the gameplay.
  2. Mock up the cue, and implement it into FMOD.
  3. When you are happy with your music and the level of adaptivity search through some forums or ask a few friends to record the parts.
  4. Transcribe the music from your DAW and prepare the parts for the players. Be sure to give them a click track so they stay in sync with each other. If they are recording together, try to be at the session so you can offer feedback and support.
  5. Ask your players for feedback during and after the session. Take notes on what worked and what didn’t, and why.
  6. After the recordings are done, mix the tracks and replace your mockup in FMOD with the recordings.

Chapter 8

Asset Delivery and Implementation

File Naming Conventions

Naming convention rules typically define file name length, letter case and special characters. Best practices when working with software dictate that we limit our file names to alphanumeric characters and the underscore “_” only, with no other special characters. Things such as the exclamation point “!”, question mark “?”, asterisk “*” and other characters can cause unexpected and unwanted behavior in software programs. In many programming languages these special characters are known as reserved characters. They are used in very specific situations within code and might confuse the compiler that needs to run the program if they exist within a file name. If they do create problems, they will oftentimes only rear their ugly heads as unrepeatable bugs or other strange behaviors in the game. As an audio designer, you don’t want the team coming back to you complaining that your files ended up being the cause of a massive fatal bug in the game right before ship date.

When we say “alphanumeric” what we mean specifically are the characters “a..z” and “A..Z” and “0..9” and that is it. No spaces, no commas, no semicolons. Nothing else, and only a single period “.”  just before the file extension.

Let’s look at some typical examples you might find in a real world project.

  • FootstepGravel01
  • footstepgravel01
  • Footstep_gravel_01
  • footstepgravel_walk_01

And here are a few examples of poor naming conventions:

  • Footsteps Gravel 1 walk 01
  • Footsteps gravel 1 walk 01
  • Footsteps Gravel 1 walk 2
  • Footsteps Gravel 1 walk 04
  • footsteps gravel walking outdoors boot 01

Notice in the poor naming convention examples we have spaces and inconsistent capitalization. When a computer program is reading a file name, it may read the lower case and upper case characters as distinct and different from one another. But our human eyes tend to read them as the same. In the above example, not only are there spaces in the file names which could break the code, but the top two variations might be read by a human as the same file. We as the audio designer might not even remember that there are two files with very similar names. As we are trying to get the programmer to track down something in what we are hearing they might keep selecting the first variation when what we are hearing is happening in the second variation of the list. File names like these cause a lot of churn and frustration no matter whether it is a programmer or even you doing the implementation.

In addition to the capitalization issues, the list of poorly named files also lacks consistency in its numbering. In this example, a folder with these files sorted alphabetically would place the “04” variation ahead of the “2” variation, because the zero “0” sorts before the two. We also might not realize there is a “2” variation and go on to create an “02” version, again leading to all sorts of confusion as we try to track down files during the development process.
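
If you are delivering a large batch of files, it can be worth sanity-checking the names before they ever reach the programmer. Below is a minimal sketch, assuming a hypothetical convention of letters, digits, and underscores only, ending in a zero-padded two-digit variation number; the class name and pattern are examples, and the pattern should be adjusted to whatever scheme your team agrees on.

    using System;
    using System.IO;
    using System.Text.RegularExpressions;

    class NamingCheck
    {
        // Hypothetical convention: letters, digits, and underscores only,
        // ending in a zero-padded two-digit variation number,
        // e.g. Footstep_Gravel_Walk_01.wav
        static readonly Regex Convention = new Regex(@"^[A-Za-z0-9_]+\d{2}\.wav$");

        static void Main(string[] args)
        {
            string folder = args.Length > 0 ? args[0] : ".";
            foreach (string path in Directory.GetFiles(folder, "*.wav"))
            {
                string name = Path.GetFileName(path);
                if (!Convention.IsMatch(name))
                {
                    // Flag anything that breaks the agreed convention.
                    Console.WriteLine("Rename needed: " + name);
                }
            }
        }
    }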

If the audio designer is tasked with implementing into the game engine natively, proper file naming should be used for all of the reasons stated above. If the integration is being done in audio middleware such as Wwise or FMOD, an additional focus for naming conventions lies in the event name. The event name is what will be called in code or within components in the game engine. Audio middleware allows the designer to create virtual folders to organize the events. Keep in mind as you organize that the event path is used in code. If an event named ‘rpgfirereload’ is created in a folder named ‘weapons’, the path the programmer will use is ‘event:/weapons/rpgfirereload’. With that in mind you can understand why it’s important to avoid overcomplicating the naming scheme. In the last example in the poorly named list, the overly long file name can be problematic for just this type of reason. Event names should not be changed once defined. If somewhere along in the production the designer decides to move the rpgfirereload event into a folder named ‘RPG’, this will change the path and break the link in the engine. In Wwise, the audio objects in the Audio tab can be renamed and re-hooked to events without issue as long as the event name remains constant. Similarly, in FMOD, the audio assets on tracks and in the audio bin can be renamed and re-hooked to events. However, it is far better to try and plan ahead, which is why we recommend you and your programming team agree on naming conventions from the beginning. Sure, things can change once the game starts to take shape, but if you plan ahead of time, in most cases the scheme you develop can be extended to address new situations and scenarios as the game grows.
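
For context, here is a rough sketch of how a programmer might call that event from Unity, assuming the FMOD Studio Unity integration is installed. The class name is hypothetical and the event path is the one from the example above; you can see how moving or renaming the event would silently break this string.

    using UnityEngine;

    public class RpgReload : MonoBehaviour
    {
        // The event path mirrors the virtual folder structure in FMOD Studio,
        // so relocating or renaming the event breaks this string.
        [SerializeField] private string reloadEventPath = "event:/weapons/rpgfirereload";

        public void PlayReload()
        {
            // One-shot playback via the FMOD Studio Unity integration,
            // positioned at this GameObject's location.
            FMODUnity.RuntimeManager.PlayOneShot(reloadEventPath, transform.position);
        }
    }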

As we discussed previously, file format and compression are an important part of the delivery. When the assets are integrated by the developer, the audio designer should confirm the file type for the target platform with the programmer and deliver in the proper format. Depending on the engine being used, the programmer may need to re-compress the delivered files in game. As with all audio work, if a different compression is needed, it should ideally be done from the original full-resolution master. Work with the programmer to ensure your compressed files will not be further compressed or re-formatted. You as the audio professional have tools that will do a much better job of this, just as the art team is tapped to reduce the poly counts of assets as a game nears the end of the development process.

When the audio engine will handle compression, it’s best to deliver all assets as 48 kHz, 24-bit .wav files. A higher quality file will sound better when compressed in engine since it isn’t at risk of being compressed twice.

The audio designer should decide ahead of time which files will need to be rendered in mono versus stereo. As we will discuss later in this chapter, 3D sound sources work best with mono files unless the spread will be used to widen the arc around the listener. Mono sounds also keep the file size for audio in check. 

When you create a beautiful soundtrack for a game or design high quality sound effects, there is always the possibility of the files being compressed to the target platform’s format. The audio designer should be prepared for their work of art to be subject to reduced frequency range, lower dynamic range, and reduced stereo imaging. Being familiar with compression, file types, and target platform requirements will help the audio designer make informed decisions with regard to delivery. This will ultimately make your games sound better.

The Engine Editor

Here we will provide an overview of the Unity Engine Editor. Feel free to open Unity and follow along.

A major function in a game engine is the GameObject, which is a container that can represent anything/everything in the game. All the environments, characters and props in the game world are referenced as GameObjects. The function of a GameObject is defined by the components/behaviours assigned to it. An Inspector or Component window in the editor displays all the components assigned to a GameObject and allows for the editing or adding of additional components/behaviours. In the Inspector view, the component values can be modified and additional scripting can expose or create functions and values for use with the GameObject. 
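
As a small illustration of the component model, here is a minimal sketch of a custom behaviour that could be added to a GameObject; the class and field names are hypothetical. Any serialized field like this shows up in the Inspector, where its value can be edited per GameObject.

    using UnityEngine;

    // Attach to any GameObject via Add Component; the serialized fields
    // appear in the Inspector and can be tweaked per object.
    public class AmbienceMarker : MonoBehaviour
    {
        [SerializeField] private string zoneName = "Forest";
        [SerializeField] private float fadeInSeconds = 2.0f;

        void Start()
        {
            Debug.Log("Ambience zone '" + zoneName + "' ready, fade-in " + fadeInSeconds + "s");
        }
    }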

The Project view represents the Assets folder found at the root of the Unity project folder on the computer's hard drive or server. This makes for easy access, within the game engine environment, to important elements that make up the game, assuming they are all neatly organized. Game assets, including the audio, are displayed and searchable in this view.

In the editor the Hierarchy view holds all the GameObjects, which the world designer drags and drops into the Scene view as she builds the world. As a GameObject is selected in the Hierarchy view its assigned components are revealed in the Inspector window. 

The Scene window offers a non-rendered view of the game world, which can be navigated along the X, Y, and Z axes. When a GameObject is selected in the Hierarchy view, it can be snapped into view in the Scene window. Navigation tools allow for various movement and manipulation of objects within the scene. Just as we audio designers prepare our assets in an outside toolset like our DAWs, the art team uses specialized software such as Blender, Maya, 3ds Max, and others to create the objects we see in the world. Game engines do provide generic or primitive art and visual objects as well. This makes it easy to prototype ideas without dedicating a lot of time and resources to creating beautiful art that may end up not being used in the game at all.

A game can be built on one scene but some developers choose to link multiple scenes that represent a main menu, master level, game over or even individual levels. Keeping in line with the theme of being flexible, there are various ways programmers might set up a project. Not everything needs to be contained in a single GameObject and not all game objects need to be in a scene as they can be addressed and handled by managers and controllers. 

It's important for the audio designer to be familiar with traversing the engine’s editor as they are often tasked with assigning audio components to GameObjects by placing objects known as audio sources into a scene. We’ll cover these more in depth further along in this chapter.

The Audio Mixer view displays a master mix bus and any other audio busses setup by the user. Volume attenuation, pitch shift and insert effects can be applied to a group of Audio Sources when they are assigned to a mix group. 
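
As a rough sketch of how this is useful at runtime, a volume exposed on a mixer group can be driven from script, for example to duck music when the game is paused. The parameter name "MusicVolume" below is a hypothetical exposed parameter, not a default, and would need to be exposed in your own Audio Mixer first.

    using UnityEngine;
    using UnityEngine.Audio;

    public class MusicDuck : MonoBehaviour
    {
        [SerializeField] private AudioMixer mixer;

        public void SetPaused(bool paused)
        {
            // Mixer volumes are in decibels; -10 dB is an arbitrary duck amount.
            mixer.SetFloat("MusicVolume", paused ? -10f : 0f);
        }
    }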

That leaves us with the Console and Game views. The Game window is a rendered view of the Scene from the game camera(s). While the game is running in the editor, the Console view displays a log of errors, warnings, and other messages about the game. The Game view offers a way to play the game in the editor without publishing a build.

When publishing a build, the user can use the Build Settings menu to generate platform-specific assets that allow the game to be played stand-alone outside the editor. The audio designer should be familiar with publishing builds and the requirements that go along with the process, as audio bugs are often only revealed outside the editor. In order to publish to platforms other than the development platform, the build support options must be installed.

Once the audio designer has a solid understanding of working with these features in one engine they can apply that knowledge to other engine workflows. It’s the same concept moving from DAW to DAW or between multiple effects plugins of the same type. 

The programming or tools team on a project usually selects the game engine. While the audio designer doesn’t have much say in this process, a well informed opinion can be offered on the audio engine side of things. 

If the audio designer is given access to the project in the editor, they should familiarize themselves with ways to drop the quality settings for smoother performance when playing the rendered game within the Game view. While an audio designer needs a fast processor and a good amount of RAM to run plugins and virtual instruments, a high quality video card is often overlooked when building a system. Lowering the settings will reduce the demands of things like animations, shaders, and lighting, which provides a smoother experience when playing the game in the editor.

In Unity, the Quality settings can be accessed and adjusted under Edit > Project Settings. Unreal also has a comparable feature within the Settings menu, which can be accessed by clicking the gear icon on the editor toolbar.

There are many quirks one learns after enough practice with the tools and techniques. For example, in Unity any changes made to component values while in play mode are not saved; play mode is meant only as a preview of the running game. There are ways to work around this, such as making a change to a component while play mode is engaged, clicking the gear icon at the top right of the component in the Inspector, and selecting “Copy Component”. This copies the changes and allows the user to apply them once play mode is disengaged by clicking the same gear icon and choosing “Paste Component Values”. Sometimes the user might not think about being in play mode and forget to copy values before exiting. To help avoid this there is a preference in Unity for a play mode tint, which changes the color of the editor window while in play mode and acts as a visual reminder that changes are only temporary. This can also work in your favor, as play mode lets you experiment and try things out that you may not be sure of.

While the audio designer may not spend all of their time in the game engine, one can see how important it is to have an understanding of the tools beyond just the audio hierarchy. The development team will appreciate an audio designer that can get around on their own by thinking creatively and solving problems.

Audio Components

The Audio Listener and Audio Source are important components for the audio designer to be familiar with when implementing audio natively. The Audio Listener is a function of the game engine that acts like the player’s ears, or an omnidirectional microphone, picking up the Audio Sources throughout the world and relaying their spatial information to the player through the Audio Mixer and game engine. An Audio Source can be attached to any GameObject in the world, even an object that the player never sees, and the Audio Source plays back an assigned Audio Clip (the audio files you have created) in the rendered game.

While the Audio Listener component doesn't have much in the way of options exposed in the Inspector window, the important task is defining which GameObject it should be applied to. We mentioned that games can be built on multiple scenes or in just one main scene, but we should note that only one Audio Listener can be active per scene. If multiple listeners are active at the same time, a warning message will be logged in the Console. There are uses for multiple listeners, but they would need to be enabled and disabled via a script so that only one is enabled at any given time.
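
A minimal sketch of that idea is shown below; the class name and listener references are hypothetical and would be assigned in the Inspector. The point is simply that only one listener is ever enabled at a time.

    using UnityEngine;

    public class ListenerSwitcher : MonoBehaviour
    {
        // Assigned in the Inspector; e.g. one listener on the gameplay camera
        // and one on a cutscene camera.
        [SerializeField] private AudioListener gameplayListener;
        [SerializeField] private AudioListener cutsceneListener;

        public void UseCutsceneListener(bool useCutscene)
        {
            // Only one listener is ever enabled at a time.
            gameplayListener.enabled = !useCutscene;
            cutsceneListener.enabled = useCutscene;
        }
    }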

There are a few options for applying the Audio Listener in the Scene, and later, when we discuss 2D versus 3D events, the consequences of this choice will become clearer. The most common placements for the listener are attached to the Main Camera, attached to the Player Character, or part of the way between the Player Character and the Camera. The placement decision depends on considerations like gameplay perspective. Perspectives such as first person, third person, 2D, or top down can each benefit from different listener placement. A listener placed on the Player Character in a top down game might make the audio feel disconnected, as the player's view is above and slightly behind the Player Character. Hearing changes in attenuation or panning relative to the PC (Player Character) may feel too abrupt or unusual given the player's viewpoint. At the same time, attaching the listener to the Main Camera may make sounds feel offset from the PC's position in the world. This is why testing and experimenting is an important part of ensuring the experience is as seamless as it can be.

The Audio Source component has quite a few important audio related functions the audio designer should be familiar with. Think about an AudioSource as if it is an individual speaker within the game world. We can play whatever audio we like through our speakers in the real world, and it is the same in a game.

Image of Unity Audio Source

An Audio Clip is simply the sound that we have decided as audio designers to play through any of the speakers we have setup as Audio Sources in the game world. An Audio Clip is a reference to an  audio asset called in a script or attached directly to the Audio Source. The parameters that follow apply to the audio triggered by the AudioSource component when playing back an Audio Clip.

The Output parameter allows us to route our audio through any available mixer bus found within the Audio Mixer in Unity. This allows the Audio Source and its currently playing Audio Clip to be grouped on a mix bus with similar effects or music. Think of these as sub-mixes. Just as you route groups of tracks in your DAW to auxes, sub-mixes, or stems, the grouped tracks’ volume and insert effects can be applied on the mix bus rather than having to change each individual Audio Source’s values.

Mute simply mutes the audio source in the scene and the bypass functions will ignore the effects applied. Play On Awake when enabled will trigger or start the sound in the scene when game play mode is engaged. If this function is disabled the audio event will need to be triggered via a Play command in a script. This function is useful for ambient and music events that will trigger or start playing as soon as the game / scene starts.
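
As a quick sketch, here is one hypothetical way a sound with Play On Awake disabled might be started from a script, in this case when the player enters a trigger volume; the class name and "Player" tag are examples.

    using UnityEngine;

    [RequireComponent(typeof(AudioSource))]
    public class AmbienceTrigger : MonoBehaviour
    {
        void OnTriggerEnter(Collider other)
        {
            // With Play On Awake disabled, nothing plays until Play() is called.
            if (other.CompareTag("Player"))
            {
                GetComponent<AudioSource>().Play();
            }
        }
    }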

Loop, Volume, and Pitch are fairly self-explanatory, but it is important to note how values from zero to one apply to the Audio Clip. At a volume of zero the audio will be inaudible, but still playing if the game has called it. At a value of one, playback will be at the full volume of the currently playing Audio Clip. Later in this chapter we will review how to prepare assets for implementation and how volume plays into it. For pitch, a value of one is normal playback speed; values above one play the clip back faster and higher, while values below one play it back slower and lower. It's important to note that the pitch change defined on the Audio Source changes the speed of the asset's playback in addition to the pitch. To change the pitch without affecting the speed, the effect can be applied on the mix bus as an insert.

Priority allows the user to define the level of importance of the Audio Source and its currently playing Audio Clip among the other audio events playing in the scene. The value is defined in a range from 0 to 256, with 256 being the least important and 0 the most important. The default setting sits in the middle at 128. Later in the chapter we will discuss priority of sounds and managing playback resources, but for now note that these settings help manage how many sounds can play at the same time in a scene. A game with a lot of action will need resource management to avoid maxing out the number of sounds playing at one time.

2D stereo sounds can be artificially skewed to the left or right using the Stereo Pan function. This is different from the 3D panning function, which we will discuss later. Reverb Zone Mix allows the user to define how much of the signal will be routed to the reverb zones in game. A Reverb Zone is an empty GameObject defined in the scene covering the area where reverb should affect any Audio Source that enters the zone. An Audio Reverb Zone component added to this GameObject allows the reverb values to be defined within the zone. The Reverb Zone Mix value on the Audio Source component defines how wet the reverb effect will be on the audio triggered and played back within the zone. This is really useful for emulating a specific indoor or outdoor acoustic space. When a Player Character walks from a forest into a cave in game, the player will be expecting to hear an aural change in reflections. The reverb effect can be applied to all sounds in the zone, from PC footsteps and cloth or armor movement to weapons, NPC movement, and water drips being emitted from scattered sources in the 3D space. On the other hand, if the game includes voiceover from an announcer, we would not expect that audio to react to the various Reverb Zones the PC walks into.
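
To tie these parameters together, below is a minimal sketch showing how the Inspector fields discussed above map to AudioSource properties in script; the class name, mixer group, clip, and values are placeholder examples rather than recommendations.

    using UnityEngine;
    using UnityEngine.Audio;

    public class ConfigureSource : MonoBehaviour
    {
        [SerializeField] private AudioMixerGroup sfxGroup;
        [SerializeField] private AudioClip dripClip;

        void Start()
        {
            AudioSource src = gameObject.AddComponent<AudioSource>();
            src.clip = dripClip;
            src.outputAudioMixerGroup = sfxGroup; // Output: route to a mix bus
            src.loop = true;
            src.volume = 0.8f;        // 0 = inaudible, 1 = full clip volume
            src.pitch = 1.0f;         // 1 = normal playback speed
            src.priority = 128;       // 0 = most important, 256 = least
            src.panStereo = 0.0f;     // Stereo Pan for 2D sounds (-1 left, 1 right)
            src.reverbZoneMix = 1.0f; // how much signal feeds any Reverb Zone
            src.Play();
        }
    }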

Audio Middleware

Here is a list of resources for learning more about Wwise and FMOD:

Wwise

FMOD

Table 8.2 (Compares middleware workflow between FMOD and Wwise)

FMOD | Wwise
Events | Audio Objects / Events
Parameters | RTPC (Real-Time Parameter Control)
Banks and GUIDs | Generated Banks
Sandbox | Soundcaster
Game Objects | Game Objects
Audio Listeners | Audio Listeners
Snapshots | Mix Snapshots
Mixer | Mixer
Profiler | Profiler
| Game Syncs

Table 8.3 (Breaks down the advantages of using middleware over the game engine native audio.)

Game Engine Audio | Audio Middleware
Scripting – Extensive scripting for audio integration for any behavior other than Play and Loop | Scripting – Minimal scripting for all audio behaviors, as they are set within the middleware; only requires calling events and hooking parameters
Randomization – Needs to be scripted | Randomization – Easy access to number of sounds, volume, pitch, filters, priority systems
Game Parameters – Controlled via Snapshots, which have limited flexibility, and transition times need to be scripted | Game Parameters – Real-time parameter control in the hands of the audio designer
Music – Complex music systems must be scripted | Music – Dedicated dynamic music implementation with transitions and on-beat sync for cues
Debugging – Play testing and console logs | Debugging – Easily done via a Profiler, which details events and parameters being triggered; also offers a live remote connection to the engine for real-time changes

Ambient Zones

Randomization

Loudness

In terms of mixing for mobile platforms, there isn’t yet a true set standard. Sony has adopted -24 LUFS for console, but nothing specific has been adopted for mobile. That standard comes from broadcast TV standards. Apple’s Sound Check uses -16 LUFS and internet streaming standards are around -18 to -16 LUFS, so many mobile games are mixed to that range.

Both Wwise and FMOD have a LUFS meter built in, but with straight-to-engine implementation you can route the game engine’s output to a track in your DAW or a 2-track editor. You can get a LUFS meter (the Youlean Loudness Meter is free) and run your game’s output through it. Play through about 20-30 minutes to get a good sense of the overall gameplay level.

Resource:

On Mac you can route output signals via Loopback. www.rogueamoeba.com/loopback/

Resource:

On Windows you can try Audio Router
www.github.com/audiorouterdev/audio-router/releases/download/v0.10.2/AudioRouter-0.10.2.zip

You can also check the mixer app that comes with your audio interface. Sometimes there is a loop playback button that allows you to route the computer output into an input.

Make sure you check your gain staging along the way. If the master bus in your game engine, which typically shows an RMS or peak meter, is reading very low, you will want to set your master at 0 dB and do some gain staging on your group buses before you look at LUFS metering.

There should be a short-term meter on the plugin that displays the LUFS measurement over the last three or four seconds. The integrated meter should be your focus, as it shows the accumulated LUFS level of your input signal over the whole measurement.

While it’s fine to have dynamics in those real-time readings, your average over time should be targeted at around -16 to -18 LUFS for mobile and -24 LUFS for console.

Overall a lot of the indie games out there on mobile aren’t sticking to any standard but it’s good to mix this way for a more professional and polished sound.

Frequency choice: EQ is the way to go to control perceived loudness. Higher frequencies will generally be perceived as louder than lower frequencies even when your loudness meter says otherwise. The 4 kHz region isn’t a great place to boost; it doesn’t translate well on smaller setups and can sound very clicky and small.

A good learning tool is to take a broad bell EQ and move it across the frequency range, boosting and then cutting, while monitoring through a LUFS meter. Of course you must use your ears, but this is also a way to get a more specific understanding of which frequencies are poking out of, or missing from, your mix.

Here are some additional resources for understanding loudness in games:

Testing, Debugging and QA

In the referenced video below you will find a tutorial on profiling for debugging and resource optimization:

www.youtu.be/T3SmLEiSUPM

Mix Considerations

Here are some additional resources for listening levels and preventing ear fatigue:

Platforms and Delivery

A game developed for mobile or handheld means the player will most likely be listening through a small mono speaker built into the device or a pair of stereo ear buds. Neither will handle extreme ends of the frequency spectrum very well, so use EQ to manage your mix.

Here is a resource for mixing for small speakers:
www.behindthespeakers.com/mixing-for-small-speakers/

Exercise:

If you haven’t experimented with various file formats and compression types, try doing so now. Take a 48 kHz / 24-bit WAV file and compress it to Ogg and MP3. Try various quality settings of each lossy compression format and make note of how the sound changes as you compress further.

Vertical Layering Example

Now that we have covered numerous techniques for vertical layering, let’s see an example in action. Below you will see a video of a session in FMOD. This session is meant to accompany a hypothetical boss battle. Please watch the video before reading the summary.

Ex 9.1

The main point to take away from the above example is that vertical layers are used to increase the intensity of the music during a boss fight. The layers added simulate the growing tension of the battle, as well as progression for the player. Each change to the music signifies a new phase in the battle. In this way this example illustrates progress.

Exercise:

Write a 1:00 - 2:00 minute gameplay cue that could serve as underscore during a puzzle. Export the stems in such a way that you can use vertical layering to signify progress during the puzzle. Now implement the layers into a session in either FMOD or Wwise.

Horizontal Re-Sequencing Example

Now let’s take a brief look at a horizontal system in action.

Ex 9.2

This example is quite simple, but it effectively shows how one module or “chunk” of music can transition into the next using a parameter as a trigger. Notice how there are options to quantize these triggers so that the transitions occur at musically logical moments. Also notice that at times immediate transitions are necessary. When a battle occurs, the transition must be immediate to be synchronized with the visuals. Instead of quantizing the transition, we used a cymbal roll to make the transition smoother. When the music transitions back into the exploration loop, a larger quantization value has been applied.

Exercise:

Write a 2:00 linear cue that has a clear beginning, middle and ending. The structure of the cue should convey significant development in a hypothetical game scene. For example it could be a battle cue that begins, loops, and ends in a finale. Or it could be meant to signify progress throughout a level, or a developing narrative. The important point is that the cue must change significantly. Try using tempo changes, key changes, or significant compositional changes to illustrate this.

Now cut the cue up and export at least 3 or 4 segments of the cue. These will serve as your horizontal “chunks” of music. Take these segments and implement them into FMOD or Wwise.

Composing Complex Adaptive Systems Using Middleware

Finally we’ll take a look at a more complex adaptive system that makes use of both horizontal and vertical techniques.

Ex 9.3

There are a number of points to note here:

  1. Vertical System - the vertical system here is the exploration section. Here the vertical organization allows the mood of the music to shift from neutral to light, or neutral to dark, or vice versa. The system reacts to the “mood” parameter.
  2. Horizontal System - There are two sections of this session that are scored horizontally. The first is the menu music that transitions into the exploration music. The second is the battle music that transitions to and from the exploration music. These transitions are based on the “condition” parameter (see the code sketch after this list).
  3. Quantization - as mentioned earlier, note the differences in quantization parameters. These are important because the function of each music section will inform the quantization value.
  4. ADSR Values - these values allow the horizontal transitions to be smooth. ADSR curves are modulators that are applied to individual modules (multi-instruments and playlists) of music, rather than on entire tracks. This is because every layer of music needs to have its own release value to maximize the smoothness of each transition. For example, percussion tracks usually need very short release values to prevent flanging, while sustained tracks (such as strings or synth pads) usually sound great with longer release values.
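
The “mood” and “condition” parameters above are ultimately driven from game code. Below is a minimal sketch of what that might look like, assuming the FMOD Studio Unity integration; the event path and parameter ranges are hypothetical and would be defined in your own session.

    using UnityEngine;

    public class AdaptiveMusicDriver : MonoBehaviour
    {
        private FMOD.Studio.EventInstance music;

        void Start()
        {
            // Hypothetical event path; this would match your own session.
            music = FMODUnity.RuntimeManager.CreateInstance("event:/music/main_theme");
            music.start();
        }

        public void SetMood(float mood)       // e.g. 0 = dark, 0.5 = neutral, 1 = light
        {
            music.setParameterByName("mood", mood);
        }

        public void SetCondition(float value) // e.g. 0 = menu, 1 = explore, 2 = battle
        {
            music.setParameterByName("condition", value);
        }

        void OnDestroy()
        {
            music.stop(FMOD.Studio.STOP_MODE.ALLOWFADEOUT);
            music.release();
        }
    }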

Exercise:

Find a significantly complex gameplay example; something with a high degree of interactivity involved. Now compose an adaptive musical system that factors in all interactive elements in FMOD or Wwise. The result should be smooth and musical, but the interactive elements should be clearly emphasized in sync with the visuals (i.e. no lag time when immediate aural feedback is needed). Use as many of the tools we have covered as you can.

Note that you can download the FMOD session of examples 9.1 and 9.3 below. Open the session and play around with it! (FMOD ver 02.00.00)

Ex 9.3a - Download Now (ZIP 437.7MB)

Exercise:

Download the FMOD session in example 9.3a and replace the events with your own music. Try to make some adjustments based on the musical style. Take note of the transitions (ADSR timings, quantization durations, etc.) and change them to fit your music more appropriately. How can you expand on these systems? How can you make them even more adaptive?

Below is another example. This time we have implemented the polyphonic example from Chapter 7 (Ex 7.5) in FMOD. Remember the lessons we covered in regards to orchestration: it is possible to use orchestration techniques as an adaptive tool. By pairing timbres together and recording stems based on color rather than instrument type, a wider range of mood and emotional impact is possible during the implementation phase. It also lends itself to more appropriate orchestration as the music adapts to gameplay. Download and open the file in FMOD and have a listen!

Ex 9.4 - Download Now (ZIP 36.8MB)
Ex 7.22c Polyphonic Ex01 - Full Score Highlights (PDF 99.8KB)
Ex 9.4c Polyphonic Ex01 - Full Score Highlights (SIB 64.9KB)

Above you will see the PDF and Sibelius files for the Polyphonic example we covered in Chapter 7.

Screenshot 1

Above is a screenshot of the “bounce second cycle pass” function in Logic Pro X. This function is useful for baking reverb tails back into the beginning of a loop. It does, however, require further editing in some cases. For example, it is common to bring the loop back into your DAW or two-track editor and add a small fade in and fade out of roughly 20 samples to the file. Another option is to take the exact region that is meant to loop, copy it three times, and bounce it. You can then extract the middle iteration, which will have the reverb from the first iteration baked in, and add your fades.

Screenshot 2


Screenshot 3

Notice in the examples above that the colors match the orchestration colors. An important point here is that the layers are much more complex than in typical vertical examples. Where many vertical systems split stems by instrument type, here the instruments all take part in multiple layers based on what they are playing. For example, the string basses participate in both the red layer and the dark brown layer. This allows the music to exclude the countermelody employed in the red layer if it is not appropriate in the game scene. There are numerous other examples of these interlocking layers. We encourage you to study the score and layers carefully in FMOD, and to play around with different orderings. This example is extremely dense, and you may find that just one or two of the layers are sufficient for a cue. How many different gameplay cues can you create using these layers?

Exercise:

In FMOD or Wwise, compose your own adaptive music system that utilizes these interlocking layers rather than splitting up stems based on timbre. Remember that not all of the layers need to sound good together! The point is to find a way to mix and match layers to suit the needs of a game scene, and in many cases all the layers will never be triggered at once.

Experimenting with Implementation

As mentioned in the text, there are many ways to experiment. For example, machine learning is currently being used to generate music in real-time for games:

“Mezzo” AI Generated Music; Daniel Brown

https://www.youtube.com/watch?time_continue=4&v=r13M7OG8ANA&feature=emb_logo

The Dynamic Percussion System - Intelligent Music Systems

https://youtu.be/ILZ9P-B_BkQ

There are plenty of ways for you to get started as well! Pure Data (Pd) is an open-source, free(!) object-oriented programming language that can provide a great starting point for musical experimentation for games. Pd is a visual programming environment, which makes it very intuitive. Above all, it can be used to create your own audio engines from scratch, which makes it an ideal tool for game audio.

https://puredata.info

Below is an example of a generative music system created by Laurie Spiegel called Music Mouse.

https://teropa.info/musicmouse/

You can also find more information on generative music through the interactive website below:

https://teropa.info/loop/#/title

Prepping a Session Template for Integration

When working with middleware such as Audiokinetic’s Wwise and Firelight Technologies’ FMOD, there are ways to make the asset import process a bit simpler.

The resource links below provide the steps for a more streamlined import process from the DAW to the Wwise audio engine.

Wwise templates

Wwise from DAW to Engine

FMOD makes use of an Audio Bin feature, which can easily be set up to mirror your local folder structure. Organizing the Audio Bin will make searching and overriding parent conversion settings a simpler task. Here is a list of other features that can be used to organize your FMOD sessions.

  • Copy and Paste allows you to quickly set up a new event by copying an existing one. As long as you do not assign your “template” event to a bank, it will not be included in your built project.
  • Bulk Edit allows you to edit the properties and content of multiple events simultaneously, removing the need to duplicate labour when making the same change to multiple events. (Wwise also has a multi-edit feature. You may right click on a number of audio objects or events and select multi-edit for quick batch changes.)
  • Event reference sound modules are handy. They allow an existing event to be referenced in other events (think parent / child hierarchy) by having those events trigger an instance of the referenced event, whose output is routed through the referencing event’s audio track.

Adaptive Recording

When all of your music and musical mock-ups have been implemented and are working smoothly, it is time to record any live instruments and mix the score. Recording and mixing are fields in their own right, and certainly are deep enough to fill textbooks of their own. Indeed they already have! Bobby Owsinski’s “The Recording Engineer’s Handbook” and “The Mixing Engineer’s Handbook” are great starting points for recording and mixing live instruments. But games are so multifaceted that recording and mixing live instruments only scratches the surface of what needs to be done at this stage of development. Often live instruments will be mixed alongside sampled instruments, electronic instruments, sound effects, and voiceover. All of this can shape how you record and mix your music in ways both subtle and obvious. And then there is the interactive element to consider. Due to all of these variables, the best way to learn to record and mix adaptive music is through hands-on practical application. Let’s get started!

Remote Recording for Solo Instruments

If you are new to recording and mixing, the best way to start out is by either recording yourself on an instrument that you are experienced with, or recording someone else remotely. This takes the pressure off the session and minimizes cost. But just because this is a simple and inexpensive technique does not mean it won’t be high quality. On the contrary, recording instruments remotely often yields surprising and unique performances due to the high level of individual attention from the performer. Many wonderful AAA and indie projects over the years make use of remotely recorded tracks.

If you can play an instrument (any instrument!) yourself, then you already have almost everything you need to get started. All you’ll need is a decent quality microphone (usually these can be purchased for somewhere between $100 and $300) and an audio interface, which you should already have for your speaker setup (see Chapter 2 for more information on equipment and Chapter 3 for basic recording techniques). As a basic guideline, softer acoustic instruments are usually best captured with a condenser or ribbon microphone. These microphone types have a full frequency response and will capture very detailed performances. Electric instruments and drums can be captured well with dynamic microphones.

There are many considerations when setting up a microphone to record. The first thing to consider is how your instrument produces sound. Some instruments have obvious areas where sound exits, but most resonate throughout the full body of the instrument. This greatly contributes to the overall sound. Many acoustic instruments also interact with the room as part of the sound they produce (orchestral string instruments for example), which usually means that a better sound can be captured by placing the microphone farther away. Room interactions can be detrimental to a performance if the space is too resonant or too loud. For this reason we usually recommend starting with a microphone position that is quite close to your instrument. Listen to some scratch recordings and adjust the microphone until the sound produced is as natural as it can be.

It is easy to get bogged down with details when recording. Keep in mind that 90% of the sound you will end up with will come from the space you’re in and your performance itself. The best thing you can do is record yourself in as many different locations around the room, and with as many different microphone positions as you can. Listen very critically to what you hear and choose the location that best suits your needs. Your room and your sound are very idiosyncratic to you, so experimentation is important for finding the “sweet spot.”

If you are working with another musician to record, then they will likely have their own setup for recording themselves. It can be helpful to ask what kind of microphone and interface they have, just to make sure they are using equipment that is industry standard. There is no specific cutoff point between amateurs and professionals in terms of gear, but a USB microphone, field recorder, or mobile phone is not going to be high enough quality for a game project. Other than that, just be clear about your goals and intentions for the music they will be recording. If you have done your research (listened to their demos, looked over previous projects and recordings) and feel that they are the right person for the project, then trust in their experience and expertise.

The workflow for remote recording sessions can change quite drastically from person to person. Some musicians are happy to record your parts by ear, while others need the music to be written out using a notation program like Sibelius or Finale. It is usually best to ask in advance, but you will almost always want to provide them with audio files of a) their instrumental part soloed, and b) the rest of the track without their part. It’s also helpful to provide the MIDI file so that they can double check their pitches. With regard to the recording itself composers are sometimes asked to Skype in on sessions to provide feedback, but more often than not musicians prefer to record the work on their own and take feedback afterward. The important point here is that communication between you and your musician should be very clear, and so should your expectations of each other.

Especially if your musician is inexperienced with game music, it is essential to adequately stress the importance of a seamless loop. You don’t want to spend hours afterward chopping up a moving performance so your track loops properly. Instead, make sure to explain simply how the music will work in the game. If the cue is meant to loop then be clear how important it is for the dynamics, timbre, and inflection of the final bar to match the first bar. Make sure that the rhythms are precise, or the cue may need to be adjusted later on. Sometimes it’s a good idea to ask them to record loops two or three times in a row to ensure that you have a seamless transition from the last bar into the first bar.

After all of the logistical information is out of the way, be sure to leave room for your musician to experiment and improvise. This will make the session fun for your musician, and will likely leave you with loads of extra takes. In our experience these takes are quite often the most useful! The soundtrack to Evil Nun (Keplerians) is a great example of this. We didn’t have much of a budget working on this game, so many of the cues had to be recorded by solo instrumentalists remotely. We set up a system for each cue where our musicians would record a specific part that was notated, an aleatoric part which outlined a particular gesture but was open for interpretation, and then a completely improvised “bed” of aleatoric creepy sounds. The aleatoric parts were entirely up to the performer to create. We received so many great takes that the hardest part was choosing which ones to leave out.

Most importantly it’s good to remember that you aren’t just hiring a player to record for you - you are building a relationship with a collaborator. By being open to their ideas you are giving them an incentive to add their own creative spark to your project, which in turn will make your music much more unique and powerful. If you nurture these relationships you will eventually have an extremely capable network of musicians to choose from on all of your projects.

Ensemble Recording: Session Prep

Ensemble recordings are very different from remote recording sessions. They are much higher in price and therefore the stakes and stress are usually higher as well. Remotely recording solo instruments is extremely useful for most projects, but it can be very difficult to simulate a full ensemble with one person. And many times samples aren’t capable of nailing the specifics of a desired performance. If this is the case and you have the budget, then it might be time to hire an ensemble of musicians.

The biggest difference between a solo remote session and an ensemble session is the amount of preparation that goes into it. Whether or not you have a team to back you up, you will need to prepare sheet music parts for the session (see Part-Prep, in Chapter 7). You will also likely have to provide DAW sessions with click tracks for the conductor, backing tracks, and perhaps even stems of the samples that will eventually be replaced by the live recording.

At this point you have probably spent quite a bit of time orchestrating your mock-ups. If you had a clear idea of the ensemble in mind, then you will have an easy time taking your MIDI data and preparing your parts in a notation program. If you used samples without regard for number of players or the space you will record them in, then you will need to carefully consider how to re-orchestrate your parts to achieve the desired sound. This is not always a straightforward task. Samples are recorded in specific locations with the best players in the world, and with amazing engineers to polish them afterward. Unless you plan on recreating the exact recording environment for your samples (which would be close to impossible even with the largest budget) then you will have considerable work to do.

The most obvious point to consider when orchestrating (or re-orchestrating) is the number of players. If your mock-up makes use of 16 first violins artificially split into divisi and you are recording with 6, then you will have to scale back your expectations and re-orchestrate. If the effect you’re after is a dramatic marcato attack, then you may consider adding more double stops as accents. If you’re looking for a lush sweeping violin melody, then it might be best to cut the divisi parts and have both first and second violins (~10 - 12 players) play the melody together, and compensate for the lost harmony in the violas and cellos. This would strengthen the melody and come much closer to the original sound of 16 violins. Or you could simply add your samples back into the mix later on and get the best of both worlds (discussed in more detail below). Regardless of what you choose, it’s important to be realistic about the number of players and to adjust your orchestration to reflect that.

Beyond the number differences, it is also important to consider the space you will be recording in. Reverb time and acoustics are a large factor in the resulting sound of an ensemble. If you’re looking for a large “Lord of the Rings” style sound, you’ll need a large “Lord of the Rings” space. If you’re looking for something intimate and dry, then choose a smaller space with a low noise floor. If you want your score to sound like it was recorded in a giant cathedral… you get the point.

Another factor to consider is how idiomatic your parts are. It can be all too easy to write a part that sounds great in MIDI, but happens to be extremely difficult for an instrument to play in reality due to playing mechanics. Take for example the woodwind “breaks” (See Chapter 7). A simple trill in the wrong spot can break a great cue. It may be best to give those trills to a different instrument altogether to compensate. Other similar examples include writing trombone parts with impossible position changes, or writing a note that doesn’t exist in a bass flute’s lower register (both of these are actual mistakes we have made in the past). To avoid these errors it is crucial to go over each part individually and check that they are written idiomatically to the instrument.

Of equal importance are the dynamics of your score. In a live setting they will likely deviate to some degree from your mock-up. If you have taken careful measures to make your template sound natural (see Chapter 8 and the companion site for reference on template mixing), then you should have a pretty good idea of the final sound. Regardless, it is important to look at your score as a whole with individual ranges and dynamic curves in mind. For example, if you have an important flute part in measure 37, you will want to make sure that it is not in the lower register. Due to the physicality of the instrument, the low notes of any flute are softer and less brilliant, which puts them at risk of being buried by thick orchestration. The most effective solution would be to clear out some of the competing elements around it. Similarly, if you have a beautiful and delicate tenor saxophone melody in measure 52, make sure it is in the upper register. Saxophones need more air to push out low notes, so phrases in the low end can sound forced and aggressive.

An often overlooked aspect of a large scale recording session is the communication between your chosen studio and your music team. You will need to make important decisions such as determining the timeframe of your session, choosing a conductor, and planning the logistics of travel to and from the session. Communicating your needs to the studio can help immensely with these issues, especially if the studio is a plane ride away. Be clear on the style of music you’ll be recording, how much music you’ll be recording, and what vibe you’re after for the project. If you can get mock-ups or early orchestrations to the conductor then they may be able to more accurately assess the necessary time needed for recording. The studio may even be able to help you plan travel and lodging accommodations.

The final step before entering an ensemble session is the actual preparation of sheet music parts for your musicians. This topic (like many topics in orchestration) can fill a textbook all on its own, but we will try to keep things simple here. The bottom line is to make each part as clear and readable as humanly possible. If all of your parts are clean, organized, and readable, then good musicians will perform well. If they are messy, unclear, and show a lack of understanding of how the instruments are played, then musicians will make mistakes. They also won’t have much incentive to take your parts very seriously. For a solid foundation on part preparation check out Elaine Gould’s “Behind Bars.” It is dense and extremely thorough. The best way to practice part prep and notation is (as always) to write for as many musicians as you can. After they perform your work, ask for their sheet music back so you can look at their notes. You would be surprised at how enlightening this process can be!

Ensemble Recording: At the Session

If you have adequately prepared, then the session itself should be a cinch! As long as your parts are written well and your ensemble has been carefully chosen to fit the needs of the game then you have done 99% of the work already. The other 1% comes down to staying on task, on time, and on getting the right sound.

When dealing with the ensemble it is absolutely essential that you value their time. Don’t rush through, but don’t waste time either. Try to keep things positive by expressing your gratitude to the players, but above all show them your gratitude by staying focused and keeping the session moving forward. Certainly make sure to allow time for breaks, which can reinvigorate their performance.

There are a few considerations to make here when factoring in the interactive element of game music. For one thing, many cues need to be recorded in stems rather than as a full orchestra. One way to do this is to set the session up with sound isolators to separate instrument groups. This is a great way to record if your studio can effectively isolate sections because you will get a truly unified sound when all layers are together. It also saves quite a lot of time because you will never have to go back and re-record cues for a different stem.

If your studio cannot adequately isolate sections, or if you have orchestrated stems that are not routed in the basic sections (see Vertical Layering by Timbral Grouping) then you will have to record stems separately. This is by no means a lesser way of recording adaptive music. On the contrary, it allows for more complex interactions between instrumental layers. The most effective method we have found to organize these stemmed recordings is to color code your orchestrations. For example, if layer 1 consisted of high strings and high woodwinds then you would highlight (digitally during the part preparation phase) those instruments in yellow for that particular cue. This way the conductor can say “Cue 1, yellow group!” And the high strings and winds will record their parts. It’s exceedingly efficient and easy for players to follow. Be sure to look at the polyphonic example on the companion site for more details on color coding.

In either case you must also choose a method for recording loops. Many of the considerations made for solo remote recordings apply here as well - rhythmic precision, dynamic consistency at the beginning and end of a loop, etc. are all essential elements for a looped ensemble cue. The big difference with an ensemble is that you are usually dealing with a larger space and therefore you must deal with reverb. As you know, reverb can be tricky when applied to a looping cue. With a remote session it’s easy to go back and fix a take that is unable to loop, but it’s imperative to get it right the first time in an ensemble setting.

One method is to simply allow the cue to end and hold in silence as you record an extra bar or two of the reverb tail. Later on you can cut the tail off and add it to the beginning of the cue to ensure seamless looping. This works well, but it can be a bit of a pain to cut and mix. Another option if you have the time and the budget is to record loops 2.5 - 3 times through. This will allow you to quickly cut out the middle take and voila! It loops perfectly. You will also have more takes to draw from during mixing. The downside is that with longer loops it can be overkill, so make sure to choose the most appropriate approach based on all of the relevant factors. You can alternatively do partial takes from the last bar into the first one, and then cut and paste onto the beginning. In either case recording with a click track can be a huge help. Loops can be done without a click track, but it will require quite a bit of chopping and moving things around afterward.

It’s important to remember that regardless of how much preparation you put into your session, you will likely need to answer questions and make changes on the fly. Get a good night’s sleep beforehand and do your best to remain open and flexible during the session. If you have to fly to the session, arrive a day or two early with your score and parts. You’ll have some time to relax and maybe even get to know some of the studio players. It also helps quite a bit if you have someone else conducting your music. This will allow you and your orchestrator (if you have one) to devote your undivided attention to the sound produced by the orchestra. Always keep in the back of your mind that this music is intended to be heard within the context of a video game, so make decisions accordingly.

Mixing Game Music

There are really two different forms of “mixing” as it relates to game audio. There is the traditional mix that we produce when we balance and process and tweak our instruments before sending them out for implementation. Then there is the “final mix.” This is the overall mix of music, dialogue, and sound effects that the player will hear when playing the game. This is analogous to mastering an album because it is the process of balancing each cue in the context of the final product (refer back to Chapter 8, Dynamic Mix Systems for specifics on the in-game mix).

Regardless of what you are mixing, the goal is always the same. Clarity, balance, and emotional impact. In other words the listener should be able to hear each instrument clearly, the volumes should be well balanced in our 3D mix-space, and above all the emotion of the track (and by extension the emotion of the game itself) must be conveyed through the mix. As with many of our topics this is only scratching the surface of mixing, but these guidelines will help give you a great starting point when mixing your cues.

Mixing Workflow

Unfortunately there is no single method for producing a “good” mix. It depends heavily on instrumentation and intended impact. Instead we encourage more of a loose strategy for mixing workflow. We like to tell students to aim for a well-balanced outline of a mix within the first 20 minutes of mixing. After a basic setup is achieved we encourage them to dig into details like cleaning up audio, adding plugins, and automating tracks. Start by balancing panning and volume (in that order) and you will immediately start to hear your mix take shape. Try to place each instrument in their 3D mix-space. Then use basic audio editing functions, along with EQ and compression to carve out a frequency niche for each instrument. Now you have a clear location spatially and in terms of the frequency spectrum for every sound. Remember that at this point you are only accentuating the natural sound of each instrument, so don’t go overboard with the plugins...yet.

Once we have a clear and balanced mix with adequate spatialization, it’s time to bring in the creative element. Find a way to make your mix “speak.” Use effects and automation to bring out the story that the track is trying to tell. If the melody is a key aspect of the track, then make sure to duck other instruments out of the way. If the rhythm instruments are lively and dynamic, then use a bit of automation to bring out those dynamics and ditch the compression. It’s up to you to determine what story the track has to tell, and how best to tell it.

Preparing a Track for Mixing

A good arrangement makes for an easy mix. So the first step is to make sure the arrangement itself is as clean as possible. Afterward it will be easy to balance or automate instruments and sections. Also try and keep the processes separate as much as possible because they require different mindsets. Composition requires a creative mindset, while mixing requires an analytical one. Arranging requires a bit of both.

Mixing Plugins

The most important tools for mixing are panning and volume. It’s possible to achieve a solid mix by creating a sense of 3D space with panning, and a proper volume balance. However to take it to the next level and deliver a polished and compelling mix, usually EQ, compression, and reverb are required to some degree. Refer back to Chapter 3: Effects Processing as a Sound Design Tool for an exhaustive list of mix plugins.

EQ

In slight contrast to the way sound designers use equalization, composers usually use EQ more sparingly. In a music mix (unless some kind of extreme effect is required) EQ often takes the role of reducing unwanted or unnecessary frequencies. This makes room for other instruments to be audible. The idea here is to allow every timbre in the mix to have its own slot in the frequency spectrum. This is sometimes called frequency slotting, or complementary EQing. In sound design this can be used in extreme ways, but where acoustic instruments are concerned we recommend being subtle about it to maintain a natural sound.

Compression

Compression is another important tool, but it has a tendency to be very much overused. In short, compressors reduce the dynamic range of a track. This can be great for vocals because we can reduce the dynamic range, and then bring up the overall volume, thereby smoothing out dynamics and making the voice sound “big.” By contrast, compression can completely ruin some orchestral mixes because orchestras rely on their large dynamic range for excitement and drama. For this reason, we recommend adhering to the mixing workflow mentioned above (panning, volume, EQ, then compression), and only using compression on instruments that really need it. Acoustic instruments that rely on high dynamic variability will often need much less compression than electronic instruments.

Reverb

We’ve covered reverb as a means of putting the orchestra in 3D space both in Chapter 7 and on the companion site. But reverb can be used purely for aesthetic purposes as well. Reverb is a great tool to add polish and character to instruments while mixing. As always, overuse can reduce the effectiveness of the entire track. However using reverb with taste will add some “life” to any instrument. Electronic instruments work especially well with reverb because they usually sound very dry and static without it.

Mix Types

Let’s go over a few basic types of mixes and outline some guidelines to make them effective:

Fully Electronic Mix

“Fully electronic” does not refer to the EDM style. It just means that there are no live recorded elements. So this could consist of electronic drums and synths, or some synths and a sampled orchestra, or really anything that is purely “in the box.” Dealing with a fully electronic mix means that you have full control over every aspect of it. The tendency here is usually to add too many competing elements to a mix. For this reason, your goal is to find the focal points and make sure they stand out. We encourage subtractive mixing. This means that to bring a violin part out in the mix it is usually more effective to lower the volume of every track except the violin than it is to raise the volume of the violin alone. In the latter case you end up with a volume war, which inevitably ends in clipping and overload. Remember that there are many ways to highlight a particular instrument, including manipulating volume, panning, frequency range, timbre, or a combination of these.
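
Here is a minimal sketch of the subtractive idea, assuming hypothetical track names and a fixed 3 dB cut: instead of pushing the violin up, everything else comes down by the same amount, so the overall level stays safely below clipping.

    def highlight(track_levels_db, focus, cut_db=3.0):
        """Lower every track except `focus` by cut_db, leaving the focal track alone."""
        return {name: (db if name == focus else db - cut_db)
                for name, db in track_levels_db.items()}

    levels = {"violin": -10.0, "strings": -8.0, "brass": -7.0, "percussion": -9.0}
    print(highlight(levels, focus="violin"))
    # {'violin': -10.0, 'strings': -11.0, 'brass': -10.0, 'percussion': -12.0}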

Similarly we also recommend mixing at a relatively low volume. This allows for more headroom, which is the difference between the actual signal level and clipping. By mixing low we allow for a wider dynamic range, something that comes in quite handy in video games. A good trick for keeping levels low is to highlight every track and bring them all down at once. This will maintain the balance between each track when reducing volume.

A crucial point to remember is that the mixing level and your listening level are two different things. Your mixing level refers to the objective loudness of your mix. Your listening level is how loud you are hearing that mix. You could be mixing your track at about -16 LUFS, but you may be listening to it at any arbitrary level depending on how loud you set your speakers. The listening level will not affect the mixing level. When we recommend mixing low, we’re actually recommending that you give yourself enough headroom so that your mix doesn’t clip and has appropriate dynamic range. Refer back to Chapter 2 for more information about calibrating monitors. - Spencer
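
As a small illustration of that distinction, here is a sketch (the peak value is hypothetical) showing that headroom depends only on the levels stored in the mix itself, not on where the monitor knob happens to sit:

    def headroom_db(peak_dbfs, ceiling_dbfs=0.0):
        """Headroom is the distance between the loudest peak in the mix and clipping."""
        return ceiling_dbfs - peak_dbfs

    mix_peak_dbfs = -8.0                   # loudest peak written into the bounced mix
    print(headroom_db(mix_peak_dbfs))      # 8.0 dB of headroom

    # Turning the monitors up or down changes only the listening level;
    # the peaks stored in the file, and therefore the headroom, stay the same.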

Hybrid Mix

In this case we use hybrid to mean a combination of in-the-box and recorded elements. Mixing a remotely recorded solo violin into a sampled orchestra is an example of a hybrid mix. This is a great approach for lower budget projects because it combines the emotion of a live player with the convenience of using a sampled orchestra. In this scenario you would have two options for mixing: mixing the recorded element in a way that stands out, or mixing it in a way that blends into the track.

Let’s say we wanted the violin to stand out in the mix. This would be effective if the part were more like a concerto solo, or something that demanded attention. With an approach like this you are emphasizing the part rather than hiding it. This usually means keeping the part at a higher volume, and panning it front and center. Panning it hard left or right would certainly emphasize it, but it also might put the mix off balance. This approach also might call for using a touch less reverb than other approaches. This is so that the detail captured by the close microphone position isn’t blurred by the reverb.

Now let’s say we wanted the violin to blend into the mix. This approach would be appropriate for adding realism to a sampled orchestra track. It could also work for a track that called for a balanced combination of recorded and sampled instruments. In this case the goal is for listeners to be unable to tell which instruments are sampled and which are live recorded. This approach is drastically different from the last. In the last mix we basically tried to polish and preserve the violin. Here we are trying to blend it with our electronic elements (specifically a sampled orchestra). To do this we need to limit the frequency range and dynamic range captured by the close microphone placement. You can use an EQ and a compressor (in that order) to carve extraneous frequencies and squash the dynamics.
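
Here is a rough sketch of that EQ-then-compressor idea using very simple one-pole filters in Python (numpy only; the cutoff frequencies are hypothetical, and a real mix would of course use proper EQ and compressor plugins rather than code):

    import numpy as np

    def one_pole_lowpass(x, cutoff_hz, fs=48000):
        """Very simple one-pole low-pass filter (about 6 dB/octave)."""
        a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)
        y = np.zeros_like(x)
        state = 0.0
        for i, sample in enumerate(x):
            state += a * (sample - state)
            y[i] = state
        return y

    def band_limit(x, low_hz=200.0, high_hz=6000.0, fs=48000):
        """Carve away extreme lows and highs so the close-mic violin
        sits inside the frequency range of the sampled orchestra."""
        lows_removed = x - one_pole_lowpass(x, low_hz, fs)   # crude high-pass
        return one_pole_lowpass(lows_removed, high_hz, fs)   # then low-pass

    violin = np.random.randn(48000) * 0.1   # stand-in for the close-mic recording
    blended = band_limit(violin)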

Next, contrary to the first example we will need to “place” the violin into our orchestra by panning it with our samples. Finally, we need to bring the volume down (probably further than you’d think) so that it blends in nicely and supports the samples with some added realism. This technique is usually referred to as layering live sounds with samples. You can do this with one or more string instruments to great effect.

Our work on Evil Nun (mentioned above) is a good example of both types of mixing. For the stingers to be as scary as possible we mixed a number of solo instruments together and brought them to the front of the mix for impact. The jarring part about it wasn’t the volume, it was the detail heard in the solo recordings. A chaotic sampled orchestra loop then triggered immediately following the introduction stingers, providing a nice contrast to the remote recordings. The solo instruments were mixed into the loop as well, this time blending into the sampled orchestra so that you might not even notice they were there!

A final example for a hybrid mix is to use samples to thicken up a live recording. This works phenomenally well on soaring string melodies and thick horn ensemble parts. It works less well for more nuanced or exposed passages due to the sampled instruments being recorded in a different space. The biggest factor when using a sample in conjunction with live players is to match the reverb tail as closely as you can. If your recording was done in a hall with a 2 second reverb tail and you add a sampled bassoon with a 3.5 second tail it will sound blatantly off. Match the reverb tail, and then try and match the microphone distance (if your samples have adjustable microphone positions) and you will usually end up with a solid mix of the two.

The Final Mix and Polish

As mentioned earlier, this is essentially the “mastering” process. At this stage you will play through the game over and over again, adjusting the relative volume of cues and making tweaks to the music to ensure that it isn’t overshadowing dialogue or sound effects. At this point you will also need to assess whether or not the emotional impact is as effective as intended. If not, you may have to adjust the volume or placement of cues.

Often the best way to ensure emotional impact is to remove cues. Although it is always tempting, wall-to-wall music isn’t always the best choice. By including large areas of musical silence you will make the entrance of the next musical cue far more impactful. Take a look at the game Inside from Playdead. Martin Stig Andersen’s soundtrack (and audio as a whole) for this game is so sparse that when the rare pad or musical texture triggers it has a profound effect on the player.

The final mix is about the audio experience as a whole. By playing through and revising the music, levels, and trigger cues you can fine tune the way that the audio shapes the player’s experience.

Refer to Chapter 8 for more details on:

  • Resource and Performance Optimization
  • Source control
  • Bank management
  • Event management
  • Per Platform Mixing

Audio for VR: An Overview

VR has been around for about 30 years and has recently made a comeback that seems to be stronger than the initial push. There are some buzzwords that float around with these new opportunities for audio, so we should briefly explain them.

3D sound implies the perceived localization of a sound source in the 3D space or 2D plane over speakers or headphones. This is your typical video game audio.

Spatial audio in VR takes the perceived localization of a sound source to another level. Spatial audio offers a certain depth of field, which audio designers have been trying to mimic with stereo and surround, but those formats are missing an important piece of the puzzle. In surround, the listener can localize a sound along the front and rear axis, but the format doesn’t contain information about height, which is part of how we really hear. We listen to sounds in a binaural fashion, with two ears. Spatial audio, or 360º audio, is all about receiving that localization information from all directions and distances over headphones.

Ambisonics is another buzzword when it comes to spatial audio, but it too has been around for quite a long time. By definition, ambisonics is a full-sphere surround sound format that covers the horizontal plane as well as sound sources above and below the listener.1

Sound design and mixing in games are both creative art forms with a good number of technical requirements. VR as a medium has introduced new challenges and also changed some of the common techniques I have come to rely on for creating game assets.

Luckily, there has been an influx of tools designed to make things a bit easier. We use the Oculus Spatializer VST plugin in our DAW to preview sounds. We also switch back and forth between studio monitors and the Oculus headset to ensure the sound will work well in the smaller speakers the players will most likely be using.

A majority of the sounds designed for VR are delivered and implemented into the engine in mono. If you find your audio is hitting the CPU too hard during real-time format conversion, experiment with delivering the sounds from the audio engine in the same output format as the development device.

The game developer usually defines the game engine and any audio engines used on the project but an audio designer can offer a suggestion of using middleware (see Chapter 8) to gain more control over the integration.

Choosing Sounds for Immersion

The type of sounds used in a VR experience or game really depends on the type of environment presented in the visuals. In 2D and 3D games we aim for immersive sound coming from a plane in the virtual world. In VR, the sound should be realistic, and it’s really important that the player can understand the space they are in and where a specific object might be through the sound it emits.

How the user interacts with an object in the virtual world is an important factor in choosing how to design its sounds. If the object will be touched by the virtual hands of the player, then aural information can help fill the cognitive gap left by the player not physically touching the object. This can be done with a subtle yet tactile and precisely timed sound when the user interacts with the object.

In virtual reality the viewer can get very close to objects in the world which means the level of detail in the sound design needs to appropriately follow the real world to really offer a believable experience.

When discussing real-time effects in regard to 2D and 3D games earlier in this chapter, we mentioned that their use depends on the development platform and the planned usage. When designing audio for VR, AR, or mixed realities, a lot of the same techniques and workflows from 2D and 3D games can be applied. It’s all about CPU usage and avoiding overdoing the real-time effects (DSP) to the point where they cause lag in the game. By asking the question, “will this sound’s property change or evolve over time in game?” the audio designer can assess whether real-time processing is necessary. It’s a matter of cost versus value. If the answer to that question is yes, then apply real-time DSP in the audio engine; if the answer is no, the effects can be baked into the sound in the DAW.

“Standing Out” in the Mix

Sounds should be thought of as being in the background, middle ground, and foreground. Certain sound elements will be more prominent while others sit further in the background. These three states of the sound help us process and differentiate the various sonic cues we hear within a space. Of course these sound states can change as the player moves throughout the scene. We mainly focus on this during implementation and use filters to soften sounds that need to appear further away from the listener, since a sound with more detail and at a louder volume will be perceived as being closer to the listener.
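
A minimal sketch of that idea, assuming a hypothetical linear mapping from listener distance to attenuation and low-pass cutoff (in practice you would draw these curves in the middleware’s attenuation settings rather than write code):

    def distance_settings(distance_m, max_distance_m=30.0):
        """Map listener distance to a volume attenuation (dB) and a low-pass cutoff (Hz).
        Close sounds stay loud and bright; distant sounds become quieter and duller,
        which pushes them into the background of the mix."""
        t = min(max(distance_m / max_distance_m, 0.0), 1.0)   # 0 = at listener, 1 = far away
        volume_db = -24.0 * t                  # up to 24 dB quieter at maximum distance
        cutoff_hz = 20000.0 - 18000.0 * t      # roll the highs off down to about 2 kHz
        return volume_db, cutoff_hz

    for d in (1.0, 10.0, 30.0):
        print(d, distance_settings(d))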

HRTFs filter frequency content to encode the spatial information of a sound. Harmonics can help make a sound more identifiable in a space, so pure tones like sine waves should be avoided. Frequency content above 1500 Hz is very helpful to the human ear when processing sound localization, and lacking these frequencies can make the sound more difficult to locate. Low-frequency sounds, on the other hand, are far more difficult for the human ear to locate, so spending CPU on spatializing them usually isn’t worth the cost.

Adding movement to sound is something we do in 3D game mixes as well. An effect like tremolo can help add movement to a sound as it loops in the game. Our ears are better at picking up sounds that have movement. The human ear can pick up various sounds all at one time, but the brain controls which sounds we actually pay attention to. If a sound is dull, flat, or static we may not pay much attention to it, but if there is some movement in its content we are more drawn to it.
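
Here is a minimal numpy sketch of the tremolo idea: a slow sine LFO modulating the amplitude of a looping sound gives it the kind of movement the ear latches onto. The rate and depth values are hypothetical, and in production this would normally be a plugin or a middleware LFO rather than hand-written code:

    import numpy as np

    def tremolo(x, rate_hz=4.0, depth=0.4, fs=48000):
        """Amplitude-modulate a signal with a slow sine LFO.
        depth=0 leaves the sound static; depth=1 swings it fully between silence and full level."""
        t = np.arange(len(x)) / fs
        lfo = 1.0 - depth * 0.5 * (1.0 + np.sin(2.0 * np.pi * rate_hz * t))
        return x * lfo

    loop = np.random.randn(48000) * 0.1      # stand-in for a static ambience loop
    moving = tremolo(loop)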

Hey You... Over Here!

Directing the attention of the listener or player within the virtual experience is an important job for sound. VR is a medium that can cause us to turn our head when we hear sound from various directions around us. An audio designer can use spatial sound cues to direct the player’s attention or guide their movement.

Our ears process full spatial information, including sounds outside our field of view, which makes audio a powerful tool for grabbing attention and guiding player decisions with scripted or pre-defined events. Sound stemming from outside the line of sight matters because this is how we hear in the real world: our ears detect sounds behind us, to the side, above, and below. If we only heard what was directly in front of us, the experience would feel very unrealistic. Hearing the full sphere gives the user a fuller sense of the space around them and of what else occupies that space with them, and a sound off camera can prompt the player to look or move in another direction. This is very similar to the theory behind 2D and 3D game audio, where sound provides information to the player; in VR, sound must be that much better at providing that information.

In real life, if a sound is behind us or to the side of us, we tend to react and look in that direction. When watching a movie or playing a game in surround we don’t tend to turn our heads when we hear sound from a rear speaker. In VR, since we are wearing a head-mounted display (HMD), we tend to feel more open to looking around, yet not everyone turns around in their first encounter with VR. That may be because we are trained to look at monitors and televisions during media playback, even though in real life we are experts at using peripheral vision, picking up sound behind us, and turning to react to it. As peripheral vision in HMDs takes time to evolve, sound can play an important role in getting the user’s attention from below, above, or behind.

Advances in technology and tools will help audio designers continue to create and implement immersive sound experiences in new realities. Plugins like near-field HRTF (head-related transfer function) and volumetric sound sources make things a bit more accessible when creating spatial audio for VR. Near-field HRTF allows developers to control sounds that are very close to the listener. Volumetric sound sources allow more precise control over large sound-emitting objects by letting the implementer assign a radius to a sound based on the size of the object it represents.

A few startups have products that employ head tracking in the headphones. As this tech continues to improve and be more accessible we will find better VR audio experiences that will adapt and follow the listener as they move their head.

Perceived sound is a subjective experience, of course, so that should be kept in mind, and testing on the project should be done to ensure a majority of players are responding in the intended way. It’s also a matter of making sure that sounds which are not intended for directing the attention of the listener do not get in the way.

Testing

It can be very difficult to work in VR or AR without a device for testing. Working in-house usually grants you access to various test platforms, but as a freelancer you will have to provide your own test devices. You can’t depend on the team for whom you are working remotely to properly test audio on the development platform. Having a test device such as an HTC Vive, Oculus Rift, or a mobile phone and viewer will help you land jobs creating sound for new realities.

I prefer being able to listen back via my Oculus Rift headset so I can hear the sound just as a player would. There are quite a few platforms and it can get expensive to purchase them all, but I did some research and decided I would add an Oculus Rift and Google Cardboard to my collection, which I use with my iPhone and a smaller Android test device. - Gina Zdanowicz

Testing consists of playing through the game or experience while wearing the headset in a space that offers the proper room scale. Headsets like the Vive and Rift have users define their room scale in the VR user interface through a setup guide. Some games allow the user to move around in the VR experience, so the room scale ensures there is enough space to do so. If you have ever played Nintendo Wii or Microsoft Kinect games you will understand the need for space.

As VR platform technology, computing power, and machine learning continue to evolve, we should eventually have the processing power necessary to implement all the little details that make up the sound of an object, along with more accurate acoustic modeling to mimic the way sonic energy bounces around and interacts in the real world.

Assignments and Practice

Assignment A

  1. Open a new session in either FMOD or Wwise.
  2. Take the polyphonic score you created in Assignment A in Chapter 7 and create an adaptive music system with it using as many techniques as you can (vertical, horizontal, etc.).

Assignment B

  1. Find (or make) a gameplay capture for a game of your choosing. Make sure it includes a variety of choices that the player can make (point and click games are great for this).
  2. Compose a branching music system that accounts for all possible choices the player can make throughout the duration of the gameplay capture.
  3. Take your branching music system and implement it into FMOD or Wwise in such a way that you can play the capture, and follow along changing the music in real-time.

Assignment C

  1. Brainstorm a simple concept for a generative music system that could be used in a hypothetical game. Investigate some of the tools we’ve mentioned throughout the chapter, and do some research to determine which tool would be the most useful for creating this simple system.
  2. Take an hour or two and create a basic prototype for this system. Take careful note of what worked and what didn’t. Was it successful? Why or why not? What did you learn that could be applied to future projects or experiments?

Chapter 10

State of the Industry

It’s a good idea to keep abreast of the latest industry trends as it is always evolving. Here we provide some resources that will help you stay on top of the state of the industry.

Personal Health/Happiness

The specifics of a balanced diet are well outside the scope of this book, but the basics are important for a sustainable career. The CDC again has a helpful reference for nutrition if you’re looking for a plan to follow. Here we present additional resources to help you build a healthy plan.

Resource:

https://health.gov/dietaryguidelines/2015/guidelines/

The CDC recommends these three nutritional guidelines as the foundation of a healthy diet:

  1. Follow a healthy eating pattern across the lifespan.
    All food and beverage choices matter. Choose a healthy eating pattern at an appropriate calorie level to help achieve and maintain a healthy body weight, support nutrient adequacy, and reduce the risk of chronic disease.
  2. Focus on variety, nutrient density, and amount.
    To meet nutrient needs within calorie limits, choose a variety of nutrient-dense foods across and within all food groups in recommended amounts.
  3. Limit calories from added sugars and saturated fats and reduce sodium intake.
    Consume an eating pattern low in added sugars, saturated fats, and sodium. Cut back on foods and beverages higher in these components to amounts that fit within healthy eating patterns.

The CDC also greatly emphasizes the need for daily exercise. Especially as a game audio creator, your day-to-day life may become dangerously sedentary. Balance this out with consistent physical activity. There’s no need to enter any bodybuilding competitions, but we do recommend finding a sport or some other physical activity that you enjoy and making it a prioritized aspect of your routine.

According to the Centers for Disease Control and Prevention (CDC), 7 or more hours of sleep is recommended for adults between the ages of 18 and 60. Additionally, a lack of sleep is correlated with diseases such as type 2 diabetes, heart disease, and depression. Prioritize sleep and you will prioritize career development.

Finally, mental health is extremely important for your career development. How others view us is often a projection of how we view ourselves. Having an indelible sense of self-worth will yield stronger, more meaningful interactions with others. This is easier said than done, of course. Understanding that your self-worth precedes your work is a vital aspect of your mental health. By this we mean that it is a mistake to evaluate yourself based on the work that you do. Don’t fall into the trap of thinking that you are only as good as your last project. Don’t even fall into the trap of thinking that your career success has anything to do with your value as a human being! They are two distinct aspects of your life. Focus on cultivating an unconditional attitude of self-acceptance, and you will find yourself worthy of every career opportunity that comes your way.

Career Paths in Game Audio

Your goals and your focus in terms of skills and networking will largely inform your choice of career path. This is a tricky topic, however, because career paths are not linear. Like game music, your career will not proceed sequentially and predictably from point A to B to C. Career trajectories can be surprising, and they are much more flexible than you would think. Often one role will offer experience and knowledge that leads to a totally new role, which in turn shifts your trajectory in a new direction, and so on. This is not something to be avoided, it is something to be embraced. Follow your curiosities and try to cultivate a hunger for learning. This will ensure that every position or project that you take on will broaden your skills and teach you more about game development and audio creation.

In aiming for tangible career goals it is essential to evaluate what your day to day life will actually be like rather than what you want it to be like. This is a common issue in the video game industry because we are all so passionate about games! Many novice audio creators have a habit of romanticizing the work, and then they feel disappointed and lost when the reality is quite different. This is not a sustainable state, so it helps to have a concrete idea of what shape your daily life will take before you make any big career moves. Take a look at the list below and think carefully about what feels right for you.

In-House vs. Freelance

The choice between working in-house and freelance will have a large impact on your career trajectory. Many professionals do both at one point or another, so it’s good to have an idea what each entails. In-house positions and freelance work can vary somewhat in terms of the required skill set, but the biggest difference is in the day to day tasks.

In-house audio professionals can be somewhat isolated from the rest of the development team due to the necessity of an acoustically treated listening environment, but for the most part they will have a desk or cubicle mingled in with the rest of the team. Despite this, a team-oriented mentality is still an important skill to have in-house. AAA games have massive audio requirements, so you will likely be one of several audio designers working together. Even in more of an indie environment it will be essential to collaborate with your superiors on direction, as well as with members of the art and programming departments. Doing so allows for more creative decision making, and therefore makes for a more immersive audio experience. If a fast-paced collaborative experience is something that interests you, then an in-house role might be a good goal to set.

In-house positions are usually full time for the length of your contract. This can be good or bad depending on your goals. If you are someone who loves diving as deep as possible into one project at a time, then an in-house role would likely satisfy that urge. If you prefer juggling multiple projects at once, that might be easier to find as a freelancer due to the time commitment. Sometimes studios work on multiple projects at a time, which can offer some variety, but it will certainly depend on the studio and the type of games being developed.

The financial situation in-house obviously depends on the size and finances of the studio, but as a full time employee you will have guaranteed work and benefits. If stability is important to you, then consider an in-house position. But we would caution you against confusing stability with security. Unfortunately the reality of the game industry is that it is always in flux, and it is all too common to hear of studios clearing out entire branches of full-time employees. This does lend itself to the common career movement between in-house and freelance work. A major strength of in-house work is that as a collaborator you will make plenty of friends and personal connections which can help sustain your career during freelance stints. In-house positions often provide a solid network foundation for audio professionals to branch into freelance work later on. By contrast, it can be much more difficult to build a network as a freelancer because you will be approaching the industry from the “outside.”

Freelance work can be similar in terms of the skillset but very different in other ways. The name of the game here is time management. As a freelancer you will need to juggle multiple projects, coordinate with clients and collaborators, and maintain constant pressure on career development in order to sustain yourself. This requires organization, focus, and tenacity.

Another thing that freelance work requires is the ability to make sound financial decisions. At least at first, you will have to keep your overhead expenses low and be very discerning about the equipment and software you choose to buy. Remember that you are starting a business, and it can sometimes take years to build a steady stream of clients. Patience and passion are also very important traits for a freelancer. A solid chunk of your time will be spent marketing yourself and immersing yourself in the industry. This is something that in-house folks can sometimes take for granted due to the stability of their work.

If the ups and downs of your first few years of freelance work aren’t a deterrent, then you could be well suited to freelance work. The positives of this route are many. You will enjoy full control over the clients that you work with. This can be invigorating, and it will likely keep your creativity high. For many people the fast-paced schedules and lack of predictability keep things from being boring. Financially speaking, this also means that there is no cap to how much you can make. If you’re willing to put in the time developing your brand as a freelancer, you can work as much as you want and choose your own rates. However the downside of that is that when business is slow you will need to rely on savings. This lack of financial stability can be a huge negative for many people, especially those with families to support. It is extremely common for freelancers to supplement their income with other jobs outside of games or even outside of audio altogether.

A hybrid role is the in-house contractor, who works side by side with the rest of the team but for a limited amount of time. The in-house contractor is paid like a freelancer and doesn’t typically receive any benefits, but can often command a higher daily rate to compensate.

This has just been a brief overview of what in-house and freelance work could look like. Every role is different and nobody’s career path is exactly the same. But knowing some basic differences between the two can help guide you toward a path that is better suited for your goals. We’ll take a more specific look at building these career trajectories in “The Game Plan: Your Five Year Plan for Success in Game Audio.”

Various Roles on an In-House Audio Team

Even within the realm of in-house positions there is a huge amount of variation between roles. This is something to consider as well. Will you be a sound designer or composer? In-house composers are very rare, so if music is the only thing that interests you it may be more appropriate to pursue a freelance career. Are you interested in implementation? Sound design roles often require knowledge of Wwise, FMOD, and even scripting for the more technical positions. Are you interested in music editing and implementation? In AAA companies music editing can be its own role entirely. Do you have any inkling to try foley? Voice and dialogue editing? These are all valid tasks you might be responsible for.

One important thing to note here is that larger companies may have specialists for each of these roles. Smaller companies are much more likely to require you as an audio professional to do all of the above. This again can be good or bad depending on your goals. If you’re looking to learn a bit of everything to better inform yourself of your interests and passions then a smaller company could be a great place to start. You’ll have the opportunity to try a bit of everything and see what you like. If you’re looking to get in-depth technical experience in a specific area then a larger company with more delineation between roles might be a better fit.

For further details on in-house roles, refer to a list of roles in the Introduction chapter: Game Development Roles Defined.

Specialization vs. Jack-of-All-Trades

The broad range of audio roles on a development team also means that you have the option to specialize in a specific area, or to be more of a utility player. In reality, just by being in the industry for years you will pick up a lot of information about the development process as a whole. Especially as a sound designer, you will likely move around a bit and do some of everything at one point or another even if your job title is specialized. As a freelancer however, your choice is more about branding yourself than it is about skills.

In the freelance world your image is often what clients are buying. So it makes a difference whether you are a composer, a sound designer, or a composer/sound designer. If you have no interest in sound design whatsoever then the choice is already made for you. Developers that are looking for highly specialized music might be more inclined to hire someone that solely works on music, especially AAA companies. The downside is that there are more work opportunities for sound designers than composers, and there are far fewer sound designers than composers in our industry. By adding sound design to your list of skills you are making your brand more marketable. Doing both also allows you to set your prices slightly lower for projects requiring both music and sound, which is a compelling reason for lower budget games to choose you.

AAA vs. Indie

The concept of specialization ties in very nicely to our next career consideration. Are you interested in working on projects from AAA developers or Indie developers? Of course you don’t have to choose to abandon one completely. Most game audio folks end up doing both at some point or another. And in many cases the work that you produce will beget more work in a similar framework. So if you do a ton of Indie games, more Indie games will likely come your way. But it is still important for you to tailor your image and your brand toward the one that you most enjoy working on.

As mentioned above, Indie work usually goes hand in hand with wearing “many hats,” as some in the industry would put it. If you’re hired as a freelancer to be the audio creator for a game, you are then in charge of everything from music to sound design to dialogue. This is much less likely to happen in a AAA environment, freelance or not.

This section is aimed at getting you to think about the kind of work that you’d like to be doing day to day. AAA work differs from Indie work in a variety of ways. AAA work is often higher paying. It can also be very demanding in terms of specificity. As a sound designer you might find yourself in the “asset cannon” situation, where you need to fire off a large number of high quality assets in a short period of time. Similarly, composers might have a very specific reference for their music because AAA titles usually have a particular idea of what they want. This also means that composers will be chosen based on fulfilling a predetermined aesthetic, and not necessarily for their own artistic sensibilities.

Indie work is by definition lower budget. This again means less specialization and more collaboration to cover as much ground as possible. In many cases this can come with a greater sense of ownership over the product. Creative input can feel like it is more highly valued in this environment because there are fewer restrictions and less financial risk. The deadlines can be just as tight however.

Game Audio vs. Audio

Finally we come to a topic that many people avoid. Will you focus on game audio, or audio in general? Again, most of us at one point or another have done a bit of everything. But it is an important question for your career. The answer for composers is often both. Composers are less likely to need hard technical skills than sound designers because music is a specialty in itself. Where sound design is concerned, it is much more likely to be a technical role.

There is certainly nothing wrong with supplementing income with projects in other areas of audio outside of games. But it is becoming increasingly more common to specialize in games due to the incredibly fast technical advancements of the industry. The more technology we have at our disposal, the more technical the requirements for audio positions will be. It’s becoming more and more difficult to make a smooth transition from linear sound to non-linear sound. We would encourage aspiring game audio professionals to be aware of this fact. Don’t underestimate the game industry’s bar for technical experience! It is a field unto itself and it is as challenging as any other.

Summary

Considering the above questions can be helpful when taking steps toward a career path, especially when first starting out. But remember that our career paths are always works-in-progress. Keep an open mind and allow yourself to be curious about other aspects of game development that present themselves to you. Narrowing your scope can work well when building skills for short term goals, but in the long-term it can sometimes be detrimental to forward progress.

Chapter 11

Business and Price Considerations

Let’s take a look at questions to ask the developer that will map out the sonic requirements of the game.

  • What is the game’s genre?
  • What is the story or lore behind the game?
  • What games provided inspiration for conceptualizing this idea?
  • What are the target development platform(s)?
  • What game engine and audio engine are being used?
  • How many hours of gameplay are typically expected?
  • How many levels are in-game?
  • Will the game require original music, sound effects, and/or voiceover?
  • What is the production timeline?
  • If the game requires original music, what style or genre is fitting? References are helpful.
  • How much music is required?
  • What are the important adaptive or interactive aspects of the game’s audio needs?
  • How many SFX does the game require?
  • What is the overall tone of the game and how should the SFX relay it to the player? Serious, cartoony, realistic, fantasy, sci-fi, casual, etc.?

If voiceover is required, be sure to inquire about the type of voice talent the developer is looking for and offer to set up auditions. It is important to have an idea of the costs and budget for hiring voice artists before agreeing to anything. Don’t assume all developers understand the specific categories of sounds; they can often lump voiceovers into sound effects.

“Early on in my freelancing career I accepted a project which called for music and SFX. I later found out the developer had voiceovers in mind when they quoted the number of SFX required. I had already agreed to a budget and had to take the voice artist fee out of the SFX budget. As they say “Learn from your mistakes”, and I did.” -  Gina

The Elevator Pitch

Here are some resources for developing your elevator pitch:

Resource:

www.themuse.com/advice/perfect-pitch-how-to-nail-your-elevator-speech

What makes a great portfolio?

It’s a small victory if a potential employer or client views your demo reel, so you will want to be sure the material hooks them right away and makes them feel inclined to keep listening. Here we will further discuss demo reels.

Audio that accompanies visual action is another great way to show that you know what you’re doing. Of course, this can also backfire if you’re lacking the right experience – getting the perspective wrong or using the wrong sound might raise some eyebrows. Until you have some actual projects to present, you can create visuals or put together a video of your own and then beef it up with your sounds. You can also ‘borrow’ a movie trailer or game cinematic, replace all of the audio with your own, adding an appropriate disclaimer that you did NOT do the original work but provide it as an example of your talent – being extremely careful not to imply you were involved with the borrowed project.

Make sure to add roles like foley artist, recording engineer, VO recording, ADR, mixing engineer, music editor, and music supervisor to your responsibilities list.

Take a look at all the work that you want to include in your reel. How do you want to be viewed? Let’s say you have a lot of music composition experience with animation but don’t necessarily want to be pigeonholed as such. Pay attention to the feel and flow of your work; the way you present it can really change the entire impression.

Make sure to list any affiliations, agencies, directors, or producers that helped make the material on your reel. This is just the courteous thing to do and you’d be surprised at the negative response that can occur if someone was not given credit or the credits were manipulated to mislead.

This one seems obvious, but there are some demo reels out there where the levels are all over the place. Check all your levels, mix everything together, and master accordingly. Listen to every detail.

  • Show the reel to friends or other students to get a fresh perspective.
  • Your best work only! The strongest pieces should be first and one at the end for a dramatic close.
  • Keep it short! 1-2 minute demos throughout your reel.
  • Know yourself! Are you focusing on composing, sound design, implementation, voice production or all of the above?
  • Show your range! To expand your opportunities, you need to master a variety of styles and genres.
  • Customize your reel to suit the needs of your target client! Create different versions of your reel for specific genres/styles so you can give the client what they are looking for.
  • Include several “audio scenes” for sound design reels. Record and/or create sounds and use them to tell a story.
  • Implement your music and/or SFX into working projects to show your understanding of interactive audio.
  • Use clear and descriptive naming conventions and proper metadata on files
  • Video is good but playable is better.
  • Focus on creating a good mix. Be sure to have dynamics in your work but also watch your levels
  • You don’t necessarily want to include music in your sound design videos
  • Create unique sounds that will draw the listener in.
  • Practice critical listening: study soundtracks, scores, and sound design.

A strong reel shows technical proficiency with tools and processes, creative artistic ideas, and the attitude and personality of the person behind the work. A demo reel that is five minutes of you demonstrating the implementation of a particular sound project or passion project in Wwise, FMOD, or Unity, with your webcam in the top corner as you explain what is being presented, would be perfect.

What to do when you think you have nothing to show:

  • Don’t let your lack of content delay your portfolio
  • Get started by using work you completed during your college courses or by going to game jams, doing your own demos from the tutorials on the Audio engine sites (Unity, Unreal, FMOD, Wwise)
  • Replace the audio in gameplay videos. You will want to be sure not to replace the audio in a game developed by the company to which you are applying. Be sure to include a disclaimer such as “Audio replacement demo” so as not to imply you worked on the project.
  • Join Game Jams in your area. Team up with student game developers or those just starting out

When sending a demo reel, it helps to stand out if you send a link to a customized reel via a service like ReelCrafter (https://app.reelcrafter.com). Be sure to include some audio tracks and a short, to-the-point video. An up-to-date headshot and a short blurb about you with a link to your full portfolio site will round out this sort of “elevator pitch.”

Some audio artists choose to present a video capture, which demonstrates their ability to integrate audio into audio middleware or a game engine. Here are our thoughts on this.

The viewer most likely won’t take the time to watch that video but the option is there should they have the time to invest in it. If the opportunity strikes and the viewer decides to take a peek at your implementation style it can go one of two ways. Since there are various styles and workflows out there, the viewer could feel that you offer a fresh view on implementation and be interested in working with you or they could feel like you have a different workflow and find it off putting.

Most audio directors or lead audio designers expect a sound designer to have integration skills, so instead of demonstrating the basics that everyone can do, you could point out interesting implementation ideas. For example, let’s say you implemented a rain ambience. You have your 2D static loop. Maybe you created that loop by doing something granular, like chopping up various rain loops for variety so that they all play smoothly in a playlist. Maybe you have an area where rain is trickling down from a bridge above. You could talk about how you used a stereo sound as a 3D event with stereo spread, attenuating the volume as the player moves away from the area and smoothing the panning as the player gets close to it.

Another option is to send a video of a gameplay capture with text overlay to demonstrate your technical notes on how you went about implementing the sounds. Either of these approaches works well: rather than showing basic integration ideas that most candidates already know, you demonstrate ideas that show how you think about integration and solve problems stemming from non-linear media.

The Design Test

While design tests will test your creative and technical skills, the process often requires written or verbal tasks as well. Here we will review some questions you might be asked and how to answer them.

1) You have been assigned the task of creating sound effects for a player character’s attacks and special abilities. What is your plan of action?

This question is asking how you would go about designing the sounds. That might include setting up recording sessions for source material or gathering source material from libraries. It’s a good idea to gather a sonic palette of sounds that fit the character and game. It would be good to mention that you create an asset list to keep track of the source you might need and any special notes. Being detail oriented helps.

Next, describe your design process, including any software and plugins you might use. You can describe layering and transient stacking to ensure the transients stand out in the mix and give your sound a unique cadence. Discuss how you go about designing the sounds, including how you “master” the assets to prepare them for use in-game.

2) You’ve just finished creating sound effects for a player character’s attacks and special abilities and your next task is to implement the sounds in game. How might you go about doing this?

Here you should be as detailed as possible. Start with how you export files from your DAW. Then open the audio engine, create a new object, and import the SFX file. Hit the play button and ensure the sound is audible. To trigger the SFX in the game, you will need to create events and assign them a “play” action.

Here I would also discuss file naming and how you would conform to standards. I would do some research on Company X to get a better idea of what games they create and what that means for special abilities. Try to tailor the file names to their type of games, as it will show you did your research.

Discuss your plan for bouncing files, meaning at what RMS level you are bouncing. Some game audio directors prefer -3 to -5 dB RMS. Be sure to say you would check with the audio director on the specifics. Then talk about the steps necessary for integrating. Do some research (or ask) as to whether they use Wwise, FMOD, a direct route into the game engine, or something proprietary. Then you can be more confident and specific about how you would implement: creating events, setting up parameter control by checking which game syncs are available, and so on.
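
As a small illustration, here is a sketch of how you might sanity-check a bounced asset against a target RMS level. The -3 to -5 dB RMS range is the figure quoted above, used here only as an assumed target; always confirm the actual spec with the audio director:

    import numpy as np

    def rms_dbfs(samples):
        """RMS level of a float audio buffer (full scale = 1.0), in dBFS."""
        rms = np.sqrt(np.mean(np.square(samples)))
        return 20.0 * np.log10(max(rms, 1e-12))

    def check_bounce(samples, target_low=-5.0, target_high=-3.0):
        """Return the measured RMS level and whether it sits inside the target window."""
        level = rms_dbfs(samples)
        return level, target_low <= level <= target_high

    asset = np.random.uniform(-0.5, 0.5, 48000)   # stand-in for a bounced SFX
    print(check_bounce(asset))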

3) You just checked in sound effects for a player character’s attacks and special abilities, hooked up in game and now an engineer has come to you and told you that the memory for the player character’s audio is over budget. What do you do next?

Here you can discuss your understanding of codec plug-ins in the audio middleware (in Wwise, this is done in the conversion settings window) and converting the SFX file format to something like Vorbis.

It would also be good to talk about running the audio engine’s profiler to check the performance cost of the player character. Talk about setting sound priorities and limiting how many instances of a sound can play at once. Check whether a plugin is eating up CPU.
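
A rough back-of-the-envelope sketch (all numbers are hypothetical, including the compression ratio) of why codec choice matters so much for the memory budget:

    def pcm_bytes(seconds, sample_rate=48000, channels=1, bit_depth=16):
        """Uncompressed PCM memory footprint of one asset."""
        return int(seconds * sample_rate * channels * (bit_depth // 8))

    def estimate_bank_kb(asset_lengths_s, compression_ratio=1.0):
        """Total size in KB for a list of asset lengths, given an assumed compression ratio."""
        total = sum(pcm_bytes(s) for s in asset_lengths_s) / compression_ratio
        return total / 1024.0

    attacks = [0.6, 0.8, 0.5, 1.2, 0.7]   # hypothetical player-attack SFX lengths in seconds
    print(f"PCM:        {estimate_bank_kb(attacks):7.1f} KB")
    print(f"Compressed: {estimate_bank_kb(attacks, compression_ratio=8.0):7.1f} KB")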

4) You’ve been assigned the task of creating sound effects for an important cut scene that many disciplines and stakeholders are involved in.  You’ve just finished designing the sound and you think it’s perfect.  What’s next?

It sounds like they want to hear about your review process: getting other opinions on the sound, testing it in game if possible so you can hear how the in-game audio transitions into the cutscene audio, and stepping away to refresh your ears before coming back to listen again.

Business and Price Considerations

What Determines Your Price?

The price you set for a project can be broken down into a few categories. These categories add up to the price that you will offer on your final bid. We’ll look at each category below:

Studio Fees

Composers and sound designers, as well as musicians and voiceover artists, should all consider recording studio fees when determining price. If you are working with any kind of talent - musicians, voice artists, etc. - you will have to find a place to record them. If a home studio recording is inadequate for the task or if distance is a barrier, then the only option is to pay to rent out a studio. Some studios offer very reasonable pricing, but it is important to estimate the length of the recording session, multiply it by the hourly rate, and then add this sum to your bid. Note that it is common practice to overestimate the session length to avoid running out of money for the studio! For these estimations - and any estimations for added cost of a project in the following categories - the best bet is to reach out to professionals that offer the services you need and ask them for a realistic quote. This gives you some real-world data to back your calculations with.

Equipment/Software Fees

As a game audio professional (even as a relative newbie) you have likely spent hundreds if not thousands of dollars building your studio. Your business will eventually have to make this back or risk going under. It is wise to factor in equipment and software fees where appropriate. For example, if you need to rent remote recording gear for a gig, or if you need to buy a shiny new library because a project does not have the budget for live players, then factor it into your bid! We have more than once successfully convinced a developer to spring for a new sound design or sample library by arguing that it would drive up the quality of the work without costing an arm and a leg. The beauty is that after the project ends you can keep the library! It’s a win-win.

Talent Fees

This goes without saying, but when working with musicians or voice artists you must pay them! This payment should come from the developer and not out of pocket. The only exception is if a developer is skeptical about the benefits of a live recording vs. a MIDI mock-up. In cases like these we have sometimes paid a small sum ($25 - $50) out of pocket for a very short spec recording. We then send the developer a comparison between the mock-up and the live recording. It is rare that a developer will opt for the mock-up over spending a couple hundred dollars on a live recording.

Management/Organizational Fees

Some projects require hours and hours of spreadsheets, Skype calls, and online communications. Most game audio folks are happy to do this - but it should always be factored into the bid. Make sure to clarify that all business interactions including feedback meetings, creative brainstorming sessions, and time spent organizing spreadsheets of data are all billable hours. This has the added benefit of making meetings efficient rather than endless time wasters.

Mixing/Mastering/Outside Collaborator Fees

Towards the end of a project you may end up contracting out miscellaneous tasks, including mixing or mastering of assets. These could be tasks that you’re capable of doing yourself but lack the time for, given tight deadlines. It also could be due to the need for a specialized engineer’s touch for a final polish. Either way, if this is the route you’re likely to go, then make it part of your pricing.

Buffer Budget

Aaron Marks calls this category “The Kicker” in his book “The Complete Guide to Game Audio.” Essentially it is the extra budget that is commonly added to a bid to account for margin of error or unexpected extra costs. It’s good practice to calculate about 5-10% of the total bid and leave it open for any extra recording fees, changes in direction, software damage, or other unpredictable elements of production. This is less necessary for smaller projects that won’t be a massive time suck, but for larger projects it can save you quite a bit of stress and hassle.
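
Here is a quick sketch of how the buffer sits on top of the other fee categories. All of the dollar amounts are invented for illustration, and the 10% buffer is just one point in the 5-10% range mentioned above:

    def total_bid(fees, buffer_pct=0.10):
        """Sum the fee categories and add a percentage buffer for the unexpected."""
        subtotal = sum(fees.values())
        return subtotal, subtotal * buffer_pct, subtotal * (1.0 + buffer_pct)

    fees = {
        "studio": 600.0,
        "equipment/software": 250.0,
        "talent": 800.0,
        "management": 300.0,
        "mix/master": 400.0,
        "creative": 4000.0,
    }
    subtotal, buffer, total = total_bid(fees)
    print(subtotal, buffer, total)   # 6350.0 635.0 6985.0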

Creative Fees

At last we have come to the creative fee. This will make up the lion’s share of your pricing and it is a hotly debated topic. Of course there are ranges you can find online for what you should be charging per minute of music. But in reality these ranges are far wider than they are purported to be. This is because the game industry is changing. Mobile and Indie developers are making games with budgets under $10k. Some games are even developed for less, and this is by no means an indication of the quality of the final product. And you can’t expect a developer to pay you $2,500 per minute of music if the budget of the entire game is $2,500. So how do you set your creative fee? To break this topic down, we must start with some basic numbers.

$1,000 per minute of music and/or ~$150 - $250 per sound effect

These are common numbers for the price for the average game audio professional with experience in the industry. These numbers would go down for newer professionals, and go up for professionals with more experience or notoriety. The only problem with this is that it is more or less arbitrary. These numbers don’t actually reflect real-world prices because they bear no relationship to a particular project. We don’t know what the details are in regards to the pricing categories (recording fees, talent fees, etc.) so we can’t know what fraction of those numbers is calculated for the creative fee, or the studio fees, or anything else. What’s worse is that these numbers have been floating around for years - possibly decades - and they certainly have not been adjusted for inflation. The myriad changes in the game audio market have not been adequately considered when it comes to the traditional pricing wisdom. According to a 2017 GameSoundCon Survey the most common prices per minute of music are $100 and $1,250. This is an astronomical gap! This suggests that developer budgets also have a wide range. As we mentioned earlier, mobile games are a huge portion of the consumer market now and some mobile apps can have a total budget of $1,000. Is it reasonable to charge $1,000 for a minute of music in that scenario? More than that, do you think that it is profitable to price your services in a way that does not reflect changes in the market itself? Here is another common generalization of game audio pricing:

10% - 20% of the overall project budget

These percentages would cover the total audio budget as compared to the budget of the project as a whole. So 10-20% of the total project budget would cover music, sound design, implementation, and any voice or musician fees. This estimation does work a lot better because it is more reflective of changes in the market. If project budgets are on the rise, then so is the overall audio budget, which is very reasonable. It is also a very realistic way to share your bid calculations with a developer. The obvious downside is that it offers nothing in the way of business development. For example, with this model an experienced sound designer might take on a micro game project for maybe $50 or $100. For someone with considerable experience, a killer portfolio, and a rock solid reputation this project will not put food on the table, nor will it be likely to advance her career in any way. The previous model of $1,000 per minute of music or $150 per sound effect actually gave us a stable reference point for career development; this model does not. Considering this model even further, it is unlikely that you will be able to sniff out the exact budget of every developer that you enter negotiations with, leaving your pricing completely up to chance and guesswork. Believe us, this might seem tempting when you are bidding for your first few gigs, but it is far more stress than it is worth! So how do we set our price point for our creative fees to reflect both the market and our career development?

Anchor Value

Anchor Value pricing is a method of setting your price outlined by a 2018 GameSoundCon article. In essence this method allows you to set your price based on the value of your overall services. The article asserts that the first number a client sees will be the value that they attribute to you and your work. This is crucial to your career development. When clients value your work higher, they are willing to pay more, and will respect your technical and creative input more. They will also be more appreciative of your collaboration in general. It affects just about every aspect of your interactions with them in a positive way. You’ll find that once you start putting a higher number as your anchor value, you will also feel more confident in yourself and your work as well.

So how should you set the anchor value? The truth is that there is no real answer to this. Basing this number on your years of experience is common, but also somewhat arbitrary. Years of experience doesn’t necessarily lead to quality. For that matter quality itself is somewhat arbitrary! You would be surprised how variable the term “quality audio” can be when you ask a game developer to evaluate work. For one client quality audio means fantasy driven sound effects and epic music. For another, a plucky cartoonish aesthetic is what they mean by “quality.” Quality really boils down to individual taste, and that is impossible to quantify.

Our favorite method for setting an anchor value is simple. Imagine yourself in a situation where your dream project comes along and the developer asks you for a bid (per hour, or per asset, or for the whole project, it doesn’t really matter). Now pick a number in your head that you are fully comfortable and confident in as your price point. Now double it. This may seem greedy, but as creative, passionate people we tend to undervalue ourselves. This is especially true when faced with projects that excite and inspire us. This method forces us to push outside our comfort zones and prioritize our career and personal development. For the skeptics out there, sit tight because the process isn’t over. The anchor value is not necessarily the price that you will be paid. But it is an important part of your Creative Fee.

Example Bids Using the Anchor Method

Dear (Name of person receiving bid),

It was great meeting you at the networking mixer at GDC. I found your thoughts on art style in 2D games to be really interesting! Your unique approach definitely shows in your game project _________.

Thanks for reaching out about my pricing for audio. I’ve attached my rates sheet to this email. These are the rates that I use for larger studio projects, but I work on game projects of all sizes and budgets and I’ve never had an issue finding a price that works for both parties. Please let me know how these rates fit into your budget. If they aren’t doable, then let me know what you’d like to spend on audio and we can work something out. This project looks like a great fit for my skills, and I believe it will be a very successful game, so I’m happy to work with you to reach a price point that is mutually beneficial.

Sincerely,

The Game Audio Wizard

An important aspect of this email is that most of the work is already done. Clearly the hypothetical audio creator has had a meaningful conversation at GDC. It is not necessary to rehash the details of this conversation - keep things short and to the point - but reminders are helpful. The tone is equally professional and friendly. It shows that this person has worked on a range of project budgets and is capable of creating quality content under a variety of price points. It also shows that this person is willing to negotiate, but asks the developer to make an offer first. This allows the audio creator to either reject the offer if it is unreasonably low, or apply some leverage and negotiate for one or more of the creative payment options.

On the rates sheet the anchor value comes first as a buyout option, and below there are only one or two other options maximum. In the email, this person still leaves room for further negotiation if the exclusive license is still too high. This is important because it leaves room to leverage other types of payment options (see section above on “Payment Options”), which are common on Indie projects.

Navigating Contracts

Non-Disclosure Agreement

A non-disclosure agreement or NDA is likely to be the first thing you will have to sign in your game audio career. These are standard contracts in the industry, particularly for AAA studios. An NDA keeps information about the project under wraps. This keeps proprietary tools and ideas safe from other developers. NDAs are less common in the world of independent developers simply because indies often rely on public support or crowdfunding. Audio teams are sometimes brought on after such a campaign has started, so NDAs can be less of a priority for indies than they are for AAA studios.

Always read contracts thoroughly before signing. NDAs are straightforward, but it’s important to make sure they don’t contain anything that contradicts previous agreements. Also take note of how the NDA stipulates dealing with these proprietary items after the contract is terminated. Failure to observe these stipulations can get you into trouble. Sometimes clients also sneak stipulations about the assets you create into the NDA. Have your client clarify all ambiguous points for you, and don’t be afraid to ask that any items you are uncomfortable with be removed.

Work for Hire Agreement

If you are creating custom assets under contract for a client then you are engaged in a “Work Made for Hire Agreement.” These contracts are important because they clarify expectations for the project timeline and delivery specifics. Work for Hire contracts will also detail the terms of the services you are providing as a contractor. The determination between a buyout or a license agreement will be made as well, along with other important information. Here is an example of a Work for Hire Agreement:

Notable Contract Points

Before you sign, here are some important points to consider when reading through (or drawing up) your contract:

Revisions and Reworks

Revisions and reworks are part of the process of creating audio for games. No one nails a cue on the first try every time. In fact much of this book emphasizes the importance of iteration in the development process. The important thing to remember here is that revisions and reworks are two completely different things.

A revision is a small change to an asset that you have already created. For example, suppose you deliver a gunshot sound effect to a client and she says, “It’s good, but it sounds a bit ‘small’ relative to the size of the image we sent you. Can you make it sound bigger and more aggressive?” This is a clear example of a revision. The client sent adequate direction for the sound effect to be produced, and the requested change is an aesthetic revision to the asset, which is still within its original scope. By contrast, an example of a rework would be a client saying, “We liked your handgun sound effect, but we have decided to change the weapon to a sniper rifle. Can you take this sound and make it fit this new image?” This is a rework because it is a complete change of direction. It is up to you how you price a rework, but your price should reflect the fact that it is a completely new asset, not a change to a previously submitted one. The clear difference between these two examples is that (assuming proper communication) revisions are expected and as such are the responsibility of the contractor, while reworks are complete changes in creative direction and are thus the responsibility of the client or developer.

Our final word of advice is to set clear boundaries on what you are willing to do for revisions and reworks, and make sure your pricing reflects this. If you have a limit on revisions, make sure to clearly state the number of revisions you are willing to perform before it becomes a rework. If you are willing to do unlimited revisions, then try adding in some language that gives you a bit of wiggle room to operate. For example add something like “...unlimited revisions to an asset within reason as agreed upon by client and contractor.” Then make sure to stipulate when a revision becomes a rework. Also be sure to include how much reworks cost relative to your standard pricing.

Buyout vs. License

We’ve already covered the differences between buyouts, exclusive licenses, and non-exclusive licenses but we’ve added it here as a reminder. This is where you will have to stipulate which of these your agreement is based on. You will also stipulate price and related terms.

Resource:

This is a great article put together by composer and sound designer Kole Hicks that details budgeting for game audio.
www.gamasutra.com/blogs/KoleHicks/20160523/273019/Budgeting_for_Audio_in_Your_Games_Crowdfunding_Campaign.php

Milestones

Milestones are a very important part of the delivery process. Milestones are simply points in the development cycle where the client agrees to make a payment. It makes sense for these to coincide with the larger structure of the development cycle as a whole. Typically they will be split relatively evenly throughout the length of the cycle with a larger payout at the end.
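
As a rough illustration only (the fee, the number of milestones, and the heavier final payment below are hypothetical choices, not a standard), a milestone schedule along these lines might look like this:

```python
def milestone_schedule(total_fee, milestones=4, final_share=0.40):
    """Split a total fee across milestones, weighting the last payment heavier.

    Hypothetical split: the final milestone takes `final_share` of the fee,
    and the remainder is divided evenly across the earlier milestones.
    """
    final = total_fee * final_share
    earlier = (total_fee - final) / (milestones - 1)
    return [round(earlier, 2)] * (milestones - 1) + [round(final, 2)]

# e.g. a hypothetical $6,000 audio contract paid over four milestones
print(milestone_schedule(6_000))  # [1200.0, 1200.0, 1200.0, 2400.0]
```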

Final Approval

Before you get started on your work it is vitally important to understand who has final approval over your work and ask to include this in the contract. This can clear up a lot of ambiguity if you are receiving feedback from multiple sources, which can be very common in game development. Knowing who you are trying to please will clarify your direction and save you time and energy in the long run.

Delivery Specifics

It is equally important to clarify the delivery formats, methods, and schedule and add them to the contract. You essentially cannot do your job if you don’t know whether you’re delivering WAV files or MP3s, or if you don’t know exactly how that delivery will be made. Will you be delivering to a Dropbox folder? Sending via email? What file formats and naming conventions should you use? These are all essential questions that are sometimes treated like afterthoughts. Throughout this book we have been advocating for game audio professionals to be truly integrated into the process of game development, and you simply can’t do that if you don’t clarify the delivery specifics.
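
One way to make this concrete is to agree on a naming convention with the client and sanity-check your deliverables against it before sending them. The convention and filenames below are hypothetical examples; use whatever you and the developer agree on:

```python
import re

# Hypothetical convention agreed with the client:
#   <Project>_<Category>_<AssetName>_v<two-digit version>.wav
NAMING_PATTERN = re.compile(r"^[A-Za-z0-9]+_(SFX|MUS|VO|AMB)_[A-Za-z0-9]+_v\d{2}\.wav$")

def check_delivery(filenames):
    """Return any filenames that do not match the agreed convention."""
    return [name for name in filenames if not NAMING_PATTERN.match(name)]

deliverables = ["SpaceGame_SFX_LaserShot_v01.wav", "laser final FINAL2.wav"]
print(check_delivery(deliverables))  # ['laser final FINAL2.wav']
```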

Credit Stipulations

Another item that is often overlooked is how you are credited. This is a relatively small ask, and is therefore rarely denied. But spelling and presentation count! Your career development is reliant on how you are seen by players and peers. Don’t be shy about requesting that your name be adequately represented in your game project. Ideally you would ask for the entire audio team to be credited as well - voice talent, musicians, and any contractors that helped. It can mean a lot to your collaborators, so don’t overlook this simple kindness.

Payment Specifics

Finally, make sure you cover payment specifics. It can be a pain to wait months and months to get paid for a project. When you can, try to minimize the time between doing the work and getting paid. If you can steer the client toward electronic transfers rather than paper checks, do it. This can save you a few days waiting for the mail. Be wary of services like PayPal, because extra service charges are sometimes added.

Digesting Feedback

Oftentimes, after rounds of feedback and revisions, the game’s sound ends up feeling that much better. In the end, teams build games, and being open to working with feedback will ultimately improve the game’s audio. Here we will lay out a few tips on how to do so effectively and how to think about feedback in general.

  • When delivering work for feedback, try to avoid overstating the fact that you are submitting a “rough version” or an “unfinished mix.” It might feel like the logical thing to say to help soften the blow, but you may end up getting scaled-back feedback in response to your forewarning. More feedback is always better.
  • Receiving feedback like “It sounds great!” sure does make us feel good, but it won’t help with improving skills. After all, these sounds are not for our own enjoyment or benefit. The audio must please a variety of consumers, and it must properly serve the needs of the game above all else. Try to keep all of this in mind throughout the process.
  • Feedback will come at you in all different forms and degrees throughout your career. A programmer or artist on your team might have something specific to say about your audio, or the audio director may want to go in a different direction. In our experience, taking feedback and applying it to revisions can be really rewarding. The outcome is usually even better than what was first presented. The saying “It takes a village” can very effectively be applied to game development. Remember that you are contributing to a product that will need to be enjoyable to a wide audience, and it takes feedback and teamwork to make it happen.
  • After receiving feedback you should prioritize it by source. It’s best to set priority on feedback from your team (especially your direct superior or creative director) over friends and family. Next, try to sort out the bigger issues before digging into smaller details as those may change as you work out the former.
  • Considering the source of feedback, you may find yourself wondering how to interpret certain descriptors. Someone not familiar with audio terminology may not have the tools to properly express what they are hearing. This does not mean their feedback is irrelevant. Everyone with the gift of hearing has been listening and interpreting the sounds around them for as long as they can remember. This provides everyone with an intuitive sense of what is pleasing to the ear, as well as a sense of what is annoying.
  • When designing a sound or working on a mix you should step away now and then to give your ears a break. Listener fatigue is a very real phenomenon. When our ears are overloaded with sound they can play tricks on us. You might find yourself believing that everything you hear sounds amazing when it is in reality terrible, or vice versa. As you tweak your sounds, if you are no longer sure whether you are making things better or worse, stop! Take a break and come back with fresh ears. It really improves the quality of your sounds. Asking for a second opinion can also be helpful if you are short on time.
  • Being critical of your own work is very difficult, but luckily the game development process has built in help in the form of producers, audio directors, game designers, and every other team member. Use them! Family and friends can also be helpful when you are in need of a non-technical ear as most consumers aren’t experts in game development or game audio. But they do ingest media on a daily basis, making their feedback worth seeking out.
  • Audio feedback described in lay terms can be difficult to process. You may be met with words like muffled, bright, high or low tone, shrill, harsh, or washed out. To effectively implement feedback it is important to be able to translate its intent. This will prevent you from running off with poorly understood feedback and making edits based solely on your own interpretation. A helpful way to do this is to bring the conversation to some common ground. Usually with audio this means discussing mood or emotions rather than specific audio terminology. For example, try to define in lay terms what you think they might be hearing that makes them uncomfortable. If something is described as too harsh, bright, or shrill, it might mean the high frequencies need to be reduced around 7 kHz and above. Over time you will become more comfortable with this type of translation (a rough translation sketch follows this list).
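
Below is a rough translation sketch. The descriptor-to-frequency mappings are approximate starting points drawn from common audio practice, not rules, and the specific ranges are assumptions on our part; always confirm the intent with the person giving the feedback before reaching for the EQ:

```python
# Rough, commonly cited starting points for translating lay feedback into EQ or mix moves.
# The frequency ranges are approximations, not rules; confirm intent with the client first.
FEEDBACK_TRANSLATIONS = {
    "harsh / bright / shrill": "try reducing the highs, roughly 7 kHz and above",
    "muffled / dull": "try adding some high end (2-8 kHz) or reducing low mids (200-500 Hz)",
    "boomy / muddy": "try reducing the low end and low mids (roughly 80-300 Hz)",
    "thin / small": "try adding body in the low mids (roughly 150-400 Hz) or layering a lower element",
    "washed out": "try reducing reverb or delay sends, or shortening their tails",
}

for descriptor, suggestion in FEEDBACK_TRANSLATIONS.items():
    print(f"{descriptor}: {suggestion}")
```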

Music Rights

Download Music Rights for Game Audio (PDF 255.5KB)

Chapter 12

The Five-Year Plan

Now that we have everything we need to dig into the business aspects of game audio, we will outline a game plan (pun intended) for your career. Remember that no two career paths are alike. These are just two of the infinite possible paths you can take in the game audio industry.

The Five-Year Plan: In-House Sound Designer

Here we’ve laid out an example five-year plan for someone looking to break into the industry as an in-house sound designer. We’ve broken each year up into skill building, projects, and network building. The network building is further broken down into events and some category specifics. We assume essentially no knowledge of the industry, but we do assume some financial independence and perhaps a college degree. Use this as a model to give you some ideas and inspiration for your own career.

First Year

In year one the goal is to get a start on the basic skills needed for a career in video game sound design. Emphasis here is placed on designing a variety of quality sounds. At this point you still need to rely on a steady income from elsewhere, and the time you are putting in is essentially your own free time.

Skill Building

Basic Sound Design - Build a basic studio setup outlined in Part II, and begin creating sound effects. Use methods in Chapter 3: Designing Sound (plus other relevant books in the “Further Reading” section of Chapter 3) to practice creating assets. The goal is to familiarize yourself with the tools of your basic setup, and to learn the process of designing sound as deeply as possible. Stay curious and follow your interests.

Basics of Game Sound - Study Chapter 2 of this textbook and play a slew of video games with adaptive audio. Pay close attention to the sounds; how they sound, how they function, and how they are triggered. Imagine how you would set up adaptive sounds in a similar situation. Research how the actual audio team dealt with the challenges of interactivity. How was your hypothetical approach different?

Projects

Personal Sound Design Library - Create your own video game sound design library. Make a list of as many sound effects as you can with a cohesive aesthetic and create each sound effect to the best of your ability. Then share the library in a game audio forum or social media page and ask for feedback from other sound designers. When you are satisfied with the quality, find another aesthetic style and design a new library.

Network Building

Make a point to study Chapter 10: The Business of Networking to get an overview of what the industry is like. Familiarize yourself with various game audio roles and think about what suits your needs best. Above all, start to cultivate and prioritize a healthy work life balance, as bad habits often start early and are hard to get rid of.

Events - Join your local IGDA and G.A.N.G. chapters and attend as many meetings as possible. Keep an open mind and make friends!

Solicitations - Begin making your presence known on forums and social media. Create a TIGSource and IndieDB account. Put time into learning about the projects that jump out at you, and make helpful comments. Genuine constructive criticism is like gold when 9/10 people are simply offering praise in the hopes of landing a gig.

Mentorship - In your first year it can be hugely beneficial to reach out to some industry vets who are willing and able to mentor you. Apply to be a mentee at the AMP (Audio Mentoring Project) and do all you can to learn from your mentor. Strike up a relationship and meet in person at conferences if you can. Reach out to higher-ups in organizations like G.A.N.G. (The Game Audio Network Guild) and ask some questions. Don’t be a nuisance, but foster these relationships and do what you can to offer something of value to your mentors.

Second Year

Year two builds on the basics that you learned in year one, with the addition of striking out and looking to take on freelance projects.

Skill Building

Intermediate Sound Design - Take what you’ve learned so far in Chapters 2, 3, and 4, along with any of the texts in “Further Reading” and online tutorials, and start to put together a compelling demo reel. Find gameplay captures and cutscenes and complete a few audio replacement demos. Focus on absolutely the highest quality work you are capable of and apply that to a broad range of sound design aesthetics.

Implementation - Study Chapter 9: Music Implementation and begin building your implementation skills. Select a middleware tool and a game engine to begin acclimating to the implementation workflow. The goal is to be as comfortable as possible with a particular set of implementation tools (FMOD and Unity, Wwise and Unreal, etc). If you are able to master one combination, move on to another!

Projects

Demo Reels - Complete a few audio replacement reels that can serve as high quality and polished demo reels (See Chapter 11: The Business of Games Part II). Find short gameplay and cutscenes of video games and replace the audio one sound effect at a time. Research online tutorials detailing methods for similar sounds. Pay close attention to detail and the quality of your work.

Personal Game Projects: At this point it is extremely important to start working on your own actual game projects. This can be paid or unpaid freelance work, student work, or simply personal projects. It doesn’t really matter what the projects are, but you need to have some real interactive projects to show that you are serious about game audio. It will also help you develop the problem-solving skills required of an in-house role.

Network Building

Make a point to study Chapters 11, 12, and 13, as well as all of the “Further Reading” recommendations. The goal for year two is similar to year one - be everywhere and see everything you can. Also make a point to continue to foster relationships with the folks you met in year one.

Events - Solidify your local relationships as much as possible. Try volunteering for a local event or two. If you’re comfortable and have the travel money, consider making the trek to GameSoundCon or GDC; either would provide important perspective on the industry as well as networking opportunities.

Solicitations - After a year of learning the foundations of game audio it’s time to strike out on your own! In year one you’ve made your presence known on forums and social media pages and you’ve earned respect by offering well-placed constructive criticism. Now start reaching out and offering your services. Use the advice in Part IV of this book to help price yourself for smaller projects, but don’t hold yourself back. Try and drum up some work!

Third Year

The third year is all about working toward specific in-house positions. You will use what you’ve learned in Part IV to demonstrate your value and build compelling applications for an entry-level sound designer position.

Skill Building

Role-Oriented Skill Building - In year three you will narrow your skill building by researching entry-level sound design roles. Pay close attention to exactly what the required skills, responsibilities, and levels of experience are for the roles that you see yourself being happy in. Focus your research on exactly these items. Use websites like gamasutra.com and various social media pages to aid you in your search for game audio jobs.

Projects

Freelance Projects - At this point you will likely have some valuable clients and collaborators hiring you for projects. Keep this up! You will probably still need a stable source of income, possibly not even game audio related, but this is ok. As long as you are working on freelance projects, growing your skillset, and gaining experience working with clients you are on the right track.

Application-Oriented Personal Projects - This is possibly the most important aspect of your career at this point. You will need to gear all of your personal projects toward skills and requirements that are mentioned in the applications you’ve looked at. If a role requires Wwise experience and comfort scripting in Unreal, then create your own project with those elements and dig deep! Find ways to demonstrate this skill set that you are building.

Network Building

Public Speaking Engagements - Now that you have some experience under your belt it’s time to start sharing your knowledge with the world. Apply for public speaking engagements to any conference you can find. Encourage your developers to apply to game expos, both local and in larger cities that tend to lean toward tech or the game industry (Los Angeles, San Francisco, Montreal, Austin, London, Tokyo, etc.).

Conferences - You should have a steady group of “conference friends” at this point. Many of them will have their own speaking engagements. It’s important to foster these relationships, but don’t be complacent. 4-5 weeks before the conferences kick off do some research into companies you’d like to work for and reach out to anybody and everybody in the audio departments. Set up casual meetings with them for coffee or cocktails and ask them about their work. Try and cultivate meaningful and fun rapport.

Fourth Year

The fourth year brings you to your first in-house position as a sound designer. This is an entry-level role so your goal is to absorb as much knowledge as you can and enjoy the process. On your down time try to think about what your interests are, what you want your focus to be in the future, and what skills you would need to move into a senior level position.

Skill Building

Technical Skills - At this point you have the skills and experience to land an entry level position as a sound designer. Moving forward you should focus on building hard technical skills like the sound design oriented skills mentioned in Chapter 5 and even the music implementation skills in Chapter 9. Focus on creative implementation and integration with game engines. Make yourself comfortable with industry level tools in every possible combination.

Projects

Entry Level In-House Position - Use the advice in Chapters 10 and 11 to land an in-house entry-level sound designer position. The freelance projects have now been replaced, for the most part, by steady work on a long-term AAA title. You will spend your time focused on tasks like Foley and basic implementation. Learn the foundations of in-house work, and spend your free time at work asking about technical sound design skills. Use the skills you have to excel in your new role!

Personal Projects - Just because you have a full-time role doesn’t mean you need to ditch all personal projects. It can be gratifying and useful to continue working on personal projects that will challenge you and keep you sharp. Focus your projects on the technical curiosities you have. This will help you narrow your focus when applying for more lucrative full-time positions. Use these projects as an opportunity to study things like object oriented audio environments and programming. This will be indispensable for future applications.

Networking

Public Speaking Engagements - Now that you have an in-house role, you will likely have a whole crew of audio professionals to go to the conference with. Your studio will probably pay for your travel expenses. Keep an open mind, talk to everybody while you’re there, and push yourself to give a talk about a topic that interests and inspires you. This is your opportunity to impress audio directors with your technical prowess and to make some new friends capable of furthering your career development.

Expo Floor - You now have a shipped title to share at conferences, so get out on the expo floor and have some fun! People are always willing to play a fun game, and this is a great way to share your work and make some friends.

Fifth Year

In your fifth year you will become increasingly comfortable with in-house life as a designer. You will be experienced and competent with your work, and you will enjoy all of the numerous benefits of in-house positions. The focus will now be on finding your deeper technical interests, and giving back to the game audio community.

Skill Building

Role-Oriented Skill Building - Now that you are comfortable as a sound designer, it’s time to get uncomfortable again! Start looking at technical sound design and senior sound design roles and at their requirements. It’s too early to apply as they often require 3-5 years of AAA experience, but you are well on your way to that. Be patient, but use this time to build role-specific skills. Do you need experience with leadership? Ask to take the lead on some side projects for your studio. Do you need experience with Voiceover or programming? The pattern here is to stay up to date on what the industry needs for the roles that interest you. Then build your skills to fill those needs.

Projects

Personal Project - At this point you’ll be taking on more and more responsibility in-house, so you may not have much time to work on personal projects. This is ok as long as you are taking the necessary steps toward roles that are interesting and rewarding for you. If you have projects that align with the values of your company, don’t be afraid to pitch them as an experiment. If they say no, then pursue them on your own (within the legal restrictions of your contract, of course). The goal here is to stay curious and stay motivated.

Demo Reel and Website - Having four years of development under your belt is a massive accomplishment. It’s easy to be complacent when you have a stable role, but it’s important not to forget about demonstrating your value. Keep your reel and your website up to date, and make sure you put your best work out there for people to see. You’ll need to keep your reel polished if you want to apply to roles with more responsibility and a higher technical bar.

Network Building

Mentorship - You have now made such a dent in the industry that it’s time to give back. You apply to be a mentor with the AMP (Audio Mentoring Project) and it all comes full circle. You now have a mentee who is as passionate as you were, but has a lot to learn. In your position you can offer your mentee tons of great advice!

Conferences - Conferences are now vacations where you get to see some of your closest friends. It’s still important to branch out and meet new people, but you now look forward to catching up on a personal level with people that you don’t get to see often enough. Have fun at these conferences and be sure to strike up friendships with anybody and everybody willing to talk. You are the industry veteran now and it means a lot when you devote some of your time to aspiring game audio professionals.

Five Year Summary

It has now been five years since you started your journey and you have come a long way. With a bit of luck you will have 3-5 years of experience in AAA game development and you will be well on your way to creating a compelling application for a senior level technical sound design role. You may even have had the fortune of earning an award or two at GDC for your effort!

The Five-Year Plan: Freelance Composer

Here we’ve laid out an example five-year plan for someone looking to make a living as a freelance game composer. Again we assume essentially no knowledge of the industry. We’ve broken each year up into skill building, projects, and network building - same as we did with the in-house plan. You’ll see more of a focus on side roles here, as freelancing can often be unpredictable. For this reason we’ve added sections on complementary income (“Financial Stability” and “Teaching”) to this plan. You’ll find that compared to the in-house plan we will focus more on networking and demonstrating value through projects.

First Year

The goal for your first year is to set up a home studio and begin learning the foundational skills for game composition. We are going to hit the ground running on this one since it can take a long time to build a steady client base.

Skill Building

Basic Music Composition - Build a basic studio setup outlined in Part III, and begin writing music. Use methods in Chapter 7: Composing Music (plus other relevant books in the “Further Reading” section of Chapter 7) to practice. At this point it is beneficial to work on writing in various styles. Pick a few genres of music you’ve heard in video games and randomly select one each day to emulate. The goal here is to write at least one minute of music every single day. This is by far the most useful way to spend your time as a new game composer.

Basics of Game Music - Study Chapter 6 of this textbook and play a slew of video games with adaptive audio. Pay close attention to the style of music and how the music functions as you interact with the game. Try to allow the game time to loop through full cues. How does it work? Can you find the loop point? Does the cue loop infinitely or does it end somehow? How would you have done it differently? How adaptive is the music overall?

Projects

Music for Visuals - The best personal project you can take on is to begin developing your intuition as a media composer. What we mean by that is that you need to learn how to take cues from visuals (gameplay, concept art, a cutscene, or anything you can think of) and score them. Focus on nailing the mood of the visuals above all else. Be critical afterwards. Does the music feel right for the mood? Show it to some friends and peers and ask them to be specific about what feelings the visuals evoke in them. Then ask what feelings your music evokes. Do they match? Use your one minute of music per day to create a steady stream of hypothetical game music.

Network Building

Make a point to study Chapter 10: The Business of Networking to get an overview of what the industry is like and to remind yourself about the Pyramid of Sustainability, as it goes double for freelancers. Familiarize yourself with various game audio roles and think about what suits your needs best. Above all, start to cultivate and prioritize a healthy work life balance, as bad habits often start early and are hard to get rid of. Work life balance is sometimes even more crucial as a freelancer because the temptation is to work until you drop. This is not sustainable!

Events - As a freelancer you need to be as aggressive as possible with your networking time. This does not refer to aggressive personal relationships! It refers to the number of events you attend. Start local and join your IGDA and GANG chapters, but also apply to volunteer at GDC and GameSoundCon. Make a trip to your local PAX expo and don’t miss any indie developer expos that are within driving distance. Try to make it to about one industry event per month. Meet up, make friends, and have fun!

Solicitations - Begin making your presence known on forums and social media. Create a TIGSource and IndieDB account. Put time into learning about the projects that jump out at you, and make helpful comments. Genuine constructive criticism is like gold when 9/10 people are simply offering praise in the hopes of landing a gig. Make sure your comments are not always about music; constantly steering the conversation toward your own work can come off to developers as sneaky. Try to immerse yourself in their world and make comments that are helpful to them without obviously seeking attention for your music.

Video Game Remix Collaborations: Find a community like Materia Collective or OCReMix and have some fun making remixes! Collaborate with as many people as humanly possible and enjoy the benefits of exciting friendships and wonderful human beings. You will most likely have a ton of shared interests, so this is a great way to meet conference-goers ahead of time. It is also a fantastic exercise in networking because it is so low pressure. Use this as an opportunity to make friends with people whose skill sets do not match your own, and learn from them! Pick their brains about mixing, mastering, instrument playing and anything you can think of. Make friends. Meet up. Have fun!

Mentorship - In your first year it can be hugely beneficial to reach out to some industry vets who are willing and able to mentor you. Apply to be a mentee at the AMP (Audio Mentoring Project) and do all you can to learn from your mentor. Strike up a relationship and meet in person at conferences if you can. Reach out to higher-ups in organizations like G.A.N.G. (The Game Audio Network Guild) and ask some questions. Don’t be a nuisance, but foster these relationships and do what you can to offer something of value to your mentors. You might even land an internship with a notable game composer!

Financial Stability

It is unrealistic to expect to make a living in game music before you’ve had any experience. If you’ve had a job in the past, keep it! Even if it’s part-time, it can be hugely beneficial. You will largely be working on building skills and hustling in your free time. This is part of why freelancing can be so difficult. All of this work needs to be done on your own time, and it can be hard to also find the time to strike out and make the connections needed to sustain you financially. We will address this as we build our skills, but for now think of this as a passion project done in your own free time.

Second Year

Year two builds on the basics that you learned in year one, with the addition of striking out and looking to take on freelance projects. This is similar to the in-house plan, but we will also focus on building our teaching skills. In all likelihood you will still be reliant on some side income, but we will begin addressing that now.

Skill Building

Intermediate Game Music - Take what you’ve learned so far in Chapters 5, 6, and 7, along with any of the texts in “Further Reading” and online tutorials, and start to put together a compelling demo reel. Use your genre experiments and media projects and pick the ones that stand out to you. Use Chapter 11: Demonstrating Your Value as a reference. Focus on absolutely the highest quality work you are capable of and apply that to a broad range of musical aesthetics. One thing to mention here is that you should have a large volume of sample music by now. This means it’s time to start considering what your voice as a composer will be. We would never recommend narrowing your work or pigeonholing your style, but you should feel a particular sense of expertise in some styles, and you should find ways to inject some of your personality and uniqueness into everything you write.

Live Instrument Work: Now is the time to take advantage of all of the musicians and mix engineers you’ve met through Materia Collective and OCR. An extremely important benefit of knowing tons of talented musicians is that you can learn how to write and record for actual instruments. If you were to go back to school to get a masters or doctorate in music composition, this is what you would be doing: writing music for actual instrumentalists (not just samples) and learning the ins and outs of as many instruments as possible. There is no better way to learn how to write and orchestrate than writing music and getting feedback from the musicians playing your music. These musicians also make for extremely valuable collaborators when you start working on freelance projects that need instrumental recordings.

Implementation - Study Chapter 9: Implementation and Adaptive Recording and begin making advancements in your implementation skills. Select a middleware tool and a game engine and begin acclimating yourself to the implementation workflow. The goal is to be as comfortable as possible with a particular set of implementation tools (FMOD and Unity, Wwise and Unreal, etc). If you are able to master one combination, move on to another!

Projects

Demo Reels - Complete a few music reels that can serve as high quality and polished demos (See Chapter 11: The Business of Games Part II). It’s ok to do music replacement demos, but we would encourage you to come up with your own work and let the music speak for itself. Pay close attention to detail and the quality of your work. Beyond that, make sure that your voice can be heard in these demos. After enough time in the industry it is your voice that clients will be hiring you for.

Personal Game Projects: At this point it is extremely important to start working on your own actual game projects. This can be paid or unpaid freelance work, student work, or simply personal projects. It doesn’t really matter what the projects are but you need to have some real interactive projects to show that you are serious about game music. It is helpful to have the problem solving skills that you will develop with these projects, but above all your goal is to let your musical voice shine through these projects. This means sharing your aesthetic sense and technical proclivity for creating adaptive systems. If it helps, team up with a programmer and/or sound designer to make these projects more well-rounded.

Network Building

Make a point to study Chapters 11, 12, and 13, as well as all of the “Further Reading” recommendations. The goal for year two is similar to year one - be everywhere and see everything you can. Also make a point to continue to foster relationships with the folks you met in year one.

Events - Solidify your local relationships as much as possible. Try volunteering for a local event or two. GDC and GSC should be staples in your yearly calendar, so continue saving and scraping your way into those events! While at these conferences it can be easy to find a clique of audio folk to hang out with, but force yourself to find events where you can branch out. Go to developer panels and parties and strike up as many conversations as you can. Play new games, ask questions, and be positive and curious. The more developers you meet the more friends you have with the ability to help you realize your goals and aspirations.

Solicitations - After a year of learning the foundations of game music it’s time to strike out on your own! In year one you’ve made your presence known on forums and social media pages and you’ve earned respect by offering well-placed constructive criticism. Now start reaching out and offering your services. Use the advice in Part IV of this book to help price yourself for smaller projects, but don’t hold yourself back. Try and drum up some work!

Community Networking - Another avenue for networking is volunteering in your local game development community. This could mean offering your services for GANG, or for IGDA to organize events, or it could mean something else entirely. The point is to immerse yourself locally and try positioning yourself in the role of an advocate. We all need advocates to help build value in our industry, and volunteering is a great way to meet people and to build skills that can help you toward career stability.

Teaching

Adjunct Teaching - A fantastic side job that offers a little bit of financial stability as well as opportunities to learn more about game music is adjunct teaching. You likely already have experience in areas that are needed at a local college or community college. If you have a degree that covered audio technology, music theory, musical instrument lessons, or anything of the like, we highly recommend moving in this direction. Look for open adjunct positions at any and all colleges near you and apply!

Third Year

Now we are beginning to see a larger split between the in-house plan and the freelance plan. In the third year you want to find yourself involved in some real-world game projects. Your skill building will take up slightly less time, and your projects will start to take over. You will see your skills moving in the direction of organizational or managerial skills in order to compensate for the workload. You will also start sharing your projects at conferences and events.

Skill Building

Production Test - It’s time to check back in and evaluate your work. As a freelance composer you might not necessarily devote yourself to skills as a sound designer would. What you will do instead is regularly check that the quality of your production is on point. Take a weekend to listen very critically to music for games that you’d ideally like to work on. Then compare your own tracks in a similar style. Use the methods in Chapter 8 to fine tune your orchestral template.

Composition Test - This is basically the same task as the “Production Test” but for your compositional prowess. Dig into Chapters 7 and 8 again and really try to find your voice as a composer. Move outside of your comfort zone. Get your hands on some concert music scores and draw up some quick piano reductions. What makes the pieces you like tick? How are the harmonies treated? How are melodies treated? How are complex textures handled? Really try to find some music that is generally not heard in video games. When you find a way to bring those styles into a project it will sound fresh and innovative.

Orchestration Test - Again, this is the same as above but for orchestration. You’ve worked with plenty of friendly musicians on projects thus far, so you are already familiar with tons of instruments. Now it’s time to branch out and try collaborating with some ensembles. If you can’t find paid projects with the budget to afford this, you can try to barter favors with friends to piece together a remote ensemble. There are also cheap recording options like the $99 orchestra. Find a minute or two of music that you want to record live and orchestrate all of the parts. Many have complained about the quality of these recordings, but in reality it’s a great test of your orchestration skills. These musicians will have minimal time to record your music, so the orchestration needs to be extremely clear and readable. Go into it thinking of it as an experimental business expense and you will learn a lot about how your orchestrations sound with actual human beings playing them!

Projects

Freelance Projects - At this point you will likely have some valuable clients and collaborators hiring you for projects. Keep this up! Do your best to source a variety of projects. Keep an open mind and really immerse yourself in whatever you can. Use each project as a stepping stone. The goal is to walk away with some powerful demo material after each and every project is finished.

Composition Practice - The following exercise is applicable to personal projects as well as paid projects. When you are tasked with writing music for a particular mood, start out by writing using just a piano. This will force you to maximize your use of harmony, melody, and basic texture to get the job done. You won’t be able to rely on a nice big orchestra or fancy synthesizers or samplers. This is challenging, but it is important for you to build the skills necessary to really nail an emotion using the least available colors.

Network Building

Public Speaking Engagements - Now that you have some experience under your belt it’s time to start sharing your knowledge with the world. Apply for public speaking engagements to any conference you can find. Encourage your developers to apply to game expos, both local and in larger cities that tend to lean toward tech or the game industry (Los Angeles, San Francisco, Montreal, Austin, London, Tokyo, etc.).

Conferences - You should have a steady group of “conference friends” at this point. Many of them will have their own speaking engagements. It’s important to foster these relationships, but don’t be complacent. 4-5 weeks before the conferences kick off do some research into game developers that will be there. Find some common ground and ask them to grab a coffee or a drink during the conference. The goal here is to get some facetime, and plant a seed that will hopefully grow into a friendship.

Solicitation - As a freelance composer you will likely have to keep soliciting for most of your career. At this point you will most likely begin to understand what games are best suited for you, and where you can find games with larger budgets. You can probably afford to be a bit pickier now that you have some projects under your belt. Focus on games that will either give you a great soundtrack, thereby advancing your career, or games with sizeable budgets.

Teaching

Adjunct Teaching - At this point you may or may not need to hustle a little bit on the side. Teaching gigs can often fit in quite nicely with the freelancer’s schedule, so it would be wise to keep it up until you absolutely cannot spare the time anymore. In fact, lots of institutions now have game audio and film scoring online degrees, if not actual on-campus degrees. Now that you have considerable experience you can move away from more generic teaching roles and begin to move toward teaching positions that are closer to your passions and skill set.

Public Speaking Engagements - These conference engagements are important as they put you and your work in the limelight. They are also good experience for higher-level teaching roles.

Fourth Year

The fourth year is where things start to pick up for your hypothetical freelance career. You land a large soundtrack with some notoriety and you nail it. This leads to other similar projects which is great for business. It will also threaten to pigeon-hole your work, so at this point branching out creatively is important.

Skill Building

Technical Skills - Now that you have some notoriety as a composer, many developers are looking to hire you. Increasing your technical knowledge of implementation can be a great way to convince developers to let you try more interesting approaches to adaptive music. Take some time to research some of the topics you’ve heard of at GDC. Procedural music, complex implementation using middleware (See Chapter 9), and more unique musical styles like chamber music and world music are great topics to dive into.

Organizational Skills - Now that work is rolling in steadily, having some organizational tools such as QuickBooks, Excel, Trello, and the like is important. Boring, but important. Keep your client information and financials organized so that you can spend most of your time composing rather than hunting for information and emailing back and forth. This is also important when it comes to juggling projects that all have tight deadlines.

Projects

Freelance Projects - At this point your freelance work is rolling in. You’ll need to be picky about what you take on. You want to choose projects that have funding behind them and will allow your voice to be heard in a creative and fresh new light.

Networking

Public Speaking Engagements - Keep up the public speaking engagements! It saves money and it can be inspiring for others to hear your perspective on the industry.

Conferences - Conferences are now your home away from home. Have fun with your friends, but be diligent about making new developer contacts.

Leadership - It’s time to take a leadership role in your local organization. GANG, IGDA, and any other advocacy organization is a great place to start. Game audio is a community and we’re all in this together! Plus putting together events is a great way to get to know everyone there.

Teaching

Adjunct Teaching - In year four teaching is going better than ever. You have a good handle on syllabi and schedules, and the day to day work is feeling more natural. You have the choice to take on more classes, or continue to focus on just a couple to leave room for your freelance projects.

Fifth Year

Year five is very similar to year four. This in no way means that day to day life is the same, however! Your days will depend on your projects, and usually as a successful freelancer your projects will be varied and unpredictable. Time management now becomes an essential aspect of your life.

Skill Building

Generative/Procedural Music - At this point you are so proficient with orchestration and composing for live instruments that you can’t help but become interested in procedural music. Look back at Chapter 9 for some inspiration and take a deep dive into Pure Data. This is a great contrast to most of your soundtrack work, and when the right project comes along you will be more than ready to tackle it, note by algorithmically generated note!

Time Management - With all these disparate projects coming through, your work life balance is now under threat. Take stock of your days and be intentional about your work life balance. Make sure the time you spend working is focused. Employ the minimum effective dose and do not let email or social media distract you. Strictly limiting distractions from your work means that you can go home to your family and be present and content.

Projects

Freelance Projects - You find that in the fifth year you are quite comfortable juggling multiple projects. You have a great complement of simple but high paying gigs and projects that are smaller, more experimental, and creatively rewarding.

Demo Reel and Website: With all the impressive projects that have come your way it’s time to take another look at your reels and website. Update all of it and keep it fresh! As a freelancer you can’t afford not to be looking your best to a potential client’s eye.

Network Building

Mentorship: You have now made such a dent in the industry that it’s time to give back. You apply to be a mentor with the AMP (Audio Mentoring Project) and it all comes full circle. You now have a mentee who is as passionate as you were, but has a lot to learn. In your position you can offer your mentee tons of great advice!

Conferences: Conferences are now vacations where you get to see some of your closest friends. It’s still important to branch out and meet new people, but you now look forward to catching up on a personal level with people that you don’t get to see often enough. Have fun at these conferences and be sure to strike up friendships with anybody and everybody willing to talk. You are the industry veteran now and it means a lot when you devote some of your time to aspiring game audio professionals.

Repeat Clients - This is somewhat of an odd category, but it is vitally important as a freelancer to prioritize repeat clients. You can literally live off of a few well-selected repeat clients. Always follow up with past clients on holidays and after they release big projects. Check in just to say hey, and don’t always mention work. Keep them updated on your life and encourage their creativity. Even if you haven’t spoken in five years you can still land a lucky gig just by reaching out!

Teaching

Now that you have found such a lucrative niche in the industry it is actually not entirely necessary to teach anymore. It is a fantastic skill that is transferable to so many situations, so if you can fit it into your schedule we highly recommend making it part of your life.

Five Year Summary

It has now been five years since you started your journey and you have come a long way. If you’ve gotten a few breaks here and there you may have landed some larger gigs with enough budget to cover a slew of musicians or even a full orchestral recording. If not you will still likely have some quality soundtracks to your name, and of course a load of useful skills. You will be more than prepared for a sustainable and creatively rewarding freelance career.

Final Notes on Career Development

We have now covered a multitude of strategies for starting and sustaining your career in game audio. However there are still a few other miscellaneous topics to cover before we are finished:

Metrics

Most careers have at least one or two quantifiable metrics by which you can evaluate your professional progress. Usually this is either money, job titles, benefits, or some combination of all three. In-house game audio professionals have these metrics too. A promotion from Foley artist to senior audio lead is a clear step up in pay and responsibilities. But freelancers don’t have these built-in metrics, which can make it difficult to determine whether you’re making progress. We believe that it is important to evaluate your progress as objectively as possible so you can make informed and appropriate decisions for your career development. It also allows you to celebrate your successes! Here we’ve listed a few helpful metrics for you to more accurately assess your career sustainability. These metrics can work well for in-house audio folks too, but may not be necessary. Either way, it can be helpful to track these items in a simple spreadsheet each year to look over (we recommend doing this during tax season!).
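
As a minimal sketch of such a spreadsheet (the field names and figures below are hypothetical, purely for illustration), you could track something like this and review it once a year:

```python
import csv

# Hypothetical yearly metrics a freelancer might track; adjust the fields to taste.
years = [
    {"year": 2022, "income": 18_000, "projects": 6, "minutes_delivered": 25, "assets_delivered": 140},
    {"year": 2023, "income": 27_500, "projects": 8, "minutes_delivered": 40, "assets_delivered": 220},
]

# Write the records to a simple CSV that can be opened in any spreadsheet app.
with open("career_metrics.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=years[0].keys())
    writer.writeheader()
    writer.writerows(years)

# A quick year-over-year check of the financial metric.
growth = (years[-1]["income"] - years[0]["income"]) / years[0]["income"]
print(f"Income growth: {growth:.0%}")  # Income growth: 53%
```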

Financials - Your yearly finances are an easy way to track your progress. If you are steadily and predictably increasing your income each year, then you are objectively making progress! If your income is steadily and predictably decreasing each year, then you may need to take a look at your rates and bump up your prices, along with evaluating the effectiveness of your networking strategies.

Number of Projects Per Year - Usually an increase in the number of projects per year is a sign of increased demand for your work. This could be another signal for you to increase your rates. Over time you want to work less for more money in order for your work life balance to be sustainable.

Scope of Work - The scope of work can likewise be a very enlightening metric for your career progress. If you started out regularly doing 2 minutes of music for mobile projects a few times a year, and now you are doing 30 minutes of music for AAA console games a few times a year, then you have objectively made some serious progress. This is as much a quantitative metric as anything else. This means that you can numerically and objectively compare the scope from one project to the next. So make it a priority to track your delivered assets.

Number of Network Contacts - The quality of your network is almost always more important than the quantity, but as we have outlined earlier, opportunities can come from anywhere. So having more network connections will usually lead to a more sustainable career. That said, it is very hard to quantify the quality of a network connection because it is impossible to predict where that relationship will lead, and it is similarly a waste of time to count the exact number of network connections and detail them on a spreadsheet. We recommend simply observing your experiences at conferences and events. If after a few years you are still only speaking to the same one or two people, you may want to branch out. If you are finding yourself caught up in great conversations with dozens of people after a couple of years, then you’re doing great! Take our word for it, it is almost impossible not to make friends at these events.

Number of Marketable Skills - Checking off skills that are desirable to game studios is a great way to gauge whether you're making progress. It is also a great way to make sure you're keeping up with the technical demands of the industry. Similarly to what we outlined earlier in the chapter, we encourage you to check in on audio applications once in a while and find some skills you can pick up here and there. It will come in handy!

Quality of Work - Despite the fact that quality of work is by definition not measurable, it is still a great metric for career sustainability if approached in the right way. Don’t compare and contrast work separated by weeks or even months. Instead take a listen to some work that you produced two or three years ago and compare it to some recent work. Is there anything that sticks out? Has your production value increased? Is there any spark of creativity that you have maybe lost and could stand to reintroduce into your current work? These questions can be very helpful not only to ensure continuous improvement, but also to show you how far you’ve come!

Creative Rewards - Finally we've reached the least measurable metric. Is your work creatively rewarding? If creativity is a priority to you, then we would recommend using this as a metric once in a while. Think about your portfolio of projects. Which ones were the most rewarding to you while working on them? This is a tough question to answer, but if you give it some honest thought it can truly help push you in a positive direction in terms of the kinds of projects you take on. If years of work have pushed you away from your creative passions, then it's time to take a step (or leap) back toward them. If years of work have only furthered your output of creatively rewarding games, then pat yourself on the back. By any and all metrics, you are building a sustainable and creatively rewarding career.

Imposter Syndrome

The pervasive feeling that your accomplishments are not enough, and that you will one day be exposed as a “fraud” is a phenomenon labeled imposter syndrome. For our purposes we can broaden the definition a bit to the general feeling of “I’m not good enough.” Imposter Syndrome is a truly unfortunate issue, and it is especially prevalent in careers like game audio where art and technology intersect. On the tech side of this meeting place we have a gargantuan level of aptitude required to sustain a career. On the art side we have a very personal attachment to the final product. This attachment leads us to think of our work as an extension of ourselves, which leads to confusion about self-worth and the market value of our services. In truth, they are two distinct things influenced by very different factors. But we tend to forget that and we are left with very normal career ups and downs that can sometimes lead to a loss of self esteem.

Another factor that causes Imposter Syndrome is that we sometimes romanticize the people we look up to and the work they produce. Most of us have had the experience of looking at our favorite composer or designer and thinking to ourselves, "This person puts out amazing work without even trying! Why am I not creating art like that? What am I doing wrong?" In truth, that person most likely worked their butt off learning skills and networking just like everyone else. Then they continued to persevere to produce the work that we all love so much. But we forget this fact and instead tell ourselves that the work we do is not good enough.

The antidote to Imposter Syndrome is gratitude and realism. Gratitude is important because the work we do is good enough. And it must be celebrated. If we do our best to be grateful for our projects then we will do our best to make those projects sound great. So many external factors go into how a game is received by the public, and how much profit a game makes, and we don’t have much control over either of those factors. So don’t use either of those as a metric of your self-worth. Instead try to have some pride in all of the projects you’ve completed and pat yourself on the back. If you compare your work, don’t compare it with the work of someone who has vastly more experience than you. Compare it with the work from the beginning of your career and observe how far you’ve come.

Having a realistic attitude is equally important in defending against imposter syndrome. Simply put, everyone has imposter syndrome. We do, you do, your idols do, everyone. At some point or another everyone in our industry has adopted the “not good enough” mentality and has suffered for it. That is the reality. The irony is that if everyone is a fraud, no one is! So be realistic about how you evaluate yourself and your work. Some of our work could always be better for sure, but when we focus only on the flaws then we are missing out on the other half of the picture. To be realistic we need to accept the strengths in our work as well as the flaws. This will go a long way toward painting a realistic, confident picture of your career without tying it to your self worth.

The final defense against Imposter Syndrome is time. Putting yourself out there, succeeding, failing, making friends, and building skills will all help defend against it. The more you immerse yourself in the industry the more you will feel like a genuine part of it and less like an imposter.

Patience

Lastly, be patient! Just like any career, it takes time to build skills and work your way up. It can take years to feel established in the game audio industry. Don't give up, but don't shut out opportunities either. Often you will work with someone on a project that might not work out, or may fare poorly on the market. Then five or even ten years may pass by and you find yourself back in touch with them and working on a hit game! The lesson here is to be patient and trust in the process. If you take every project seriously and you strive to maintain friendships, your network will come through for you. And above all, keep working! As long as you are working on audio or games in some way you are gaining valuable experience and creating opportunities for yourself.

Tips:

Listening to others' success stories and how they managed the path to land a job in games can be really helpful in building your own path. Hearing words like "You can make your dreams come true!" can be very encouraging to someone looking to make the leap into the industry. I think the key word here is "leap." In an industry that can be difficult to navigate, you want to do your research and find the best point of entry. This might mean you don't give up your day job and "leap" right in. Starting out by getting your feet wet with smaller projects and working your way up while holding onto your day job isn't a bad thing at all. Not everyone's definition of "making it" is the same, and by mapping out a five-year plan you can decide how you define it. There are those who go on to be top names in the industry, but there are also many more whose names you may not always hear who are making a good living working in games. It's important to be flexible, as making it to your goal might not always look exactly as you expected it to. Being open to opportunities of all types can do you a lot of good. You won't be doing yourself a service if you decide you are only going to work in game audio and overlook the many other opportunities in the audio field. Here are some rules we follow to help us continue on our journey.

  • Be patient but persistent - it goes a long way in helping you avoid frustration as you work your way to your goal. There are times that I have to follow up with a potential client for a year or two before landing the gig.
  • Be kind. In a world of social media, everyone is watching. If you are in a game dev group and acting in any way other than kind and encouraging to others, just remember that someone who may be in charge of hiring you could be watching.
  • Practice and study. Critical listening and practice are key in helping you grow your skills in audio. Never stop working on being better at what you do. Study the business side of things and continue to grow your brand and network your way to new projects. Don’t forget to practice your elevator pitch too.
  • Learn from mistakes. Don’t let them set you back or turn you away from your goals.

Score Study

For further study into orchestration, implementation, and general game music composition we have included a few examples of scores used in game projects.

Below you’ll find the score and other materials from Dren McDonald’s “Gathering Sky.”

Artist Lecture “Inspiration” - Dren McDonald

Inspiration is that piece of magic that keeps us going when everyone else you know is asleep and you are solving musical/gameplay puzzles when your body wants to lurch you off to dreamland. So let’s start with that!

Interactive music is quite unique, and I’ve found that for myself, the game play (and narrative, if that is a strong component) is usually responsible for sparking the initial musical inspiration. If you are lucky, that spark can carry you through a project from start to finish, like a pair of wings through the fluffy clouds, which brings me to a story about a game that did that for me, Gathering Sky (hence, the wings metaphor…you’ll see.)

I first experienced Gathering Sky (initially titled Apsis) when it was an entry in IndieCade and I was volunteering as a juror, reviewing games and giving feedback on them. A lot of the games were broken, builds weren’t loading on devices like they were supposed to, many games felt like obvious ‘homages’ to other games, and there were times that this volunteer gig wasn’t exactly what I hoped it would be. Then I came across Apsis in my queue. The game actually opened and I could play it, so they had that going for them. The game begins with a singular bird (even in this early version) and you would guide the bird through the sky…until the bird finds another bird friend, and when the birds interact, the new bird friend will follow your first bird…and then you continue to build up more bird friends as you fly through this mysterious sky of clouds, wind currents and rock formations. Before you know it, you are guiding an entire flock through the sky, and you feel somewhat responsible for these pixel creatures in a way I can’t explain. You’ll just have to play the game to experience it.

I think it was this initial feeling that hooked me with this game, and really sparked my imagination. “Why did I care so much about my flock? How did I become so emotionally engaged with this experience that did not include other humans or human forms or speech or an obvious story line?” I was emotionally invested in the experience and I couldn’t stop thinking about it.

During that first playthrough (in which I played the entire game, for 45 minutes straight, no breaks), there was music in the game, but no sound design to speak of. Somehow the music was appearing to work with the game, however the songs were linear, and would just…end…leaving silence. So something strange was happening*. I gave a detailed review of the game, lauding its virtues and then giving an incredibly detailed list of improvements that they should consider for the audio. I did not forget the game, but I didn’t expect to hear from the developers about any of my ramblings.

Fast forward a few months, and I was at an indie game event in San Francisco. Devs were in a room just showing off their games, as is usually the case with these smaller events. And then I saw it…the BIRD GAME! Holy cow, these developers are here! So I got to talk to them and tell them, “Hi, yes, I was the one to give you all of that detailed audio feedback for your game, but don’t take that personally, I loved the game, this is meant to be constructive feedback!” After chatting with them for a while, we exchanged information and left it at “well, I’m really busy with several projects at the moment, but I know a lot of other game audio folks who would probably love to work on this if you decide that you want help.”

Long story short, all of those projects that I had lined up…just disappeared or got rescheduled and I suddenly had time. Almost as if the universe made me available for this project.

So I began work on the game. It went on far longer than I anticipated it would, but returning to the theme of ‘inspiration’, even as the game development continued and started to feel like a lot of late nights in a row, I continued to raise the bar of possibility for this project. This was really only because I believed so much in the vision of this ‘experience’ (it’s more of an ‘experience’ than a ‘game’). I wanted to push myself to see what I could bring to this project to help bring it to life. That inspiration can sometimes be rare, but when you find it and open yourself to it, it works like magic.

* It turns out that the devs had actually designed the levels to work with the music, so if there was a tempo increase in the music, they would blow the winds faster for the flock etc…I think that might be a first!

Score Study/FMOD

Let’s take a look at the Level 2 music score and FMOD music event to see a few different methods of scoring technique and creating a dynamic music system.

The first philosophy that must be considered in this level is ‘what does the music need to do to support the gameplay?’

The answer for us was:
1) It needs to be playful and encourage the player to explore, discover and feel the wonders of flying.
2) When a player enters an ‘explore’ area, the music needs to reflect that and then give subtle hints that the player can continue to move on after ‘x’ amount of time (a determination of time based on extensive playtesting).

Looking at the FMOD session you will see the first track on the event timeline is a nested event (screenshot 1). In screenshot 2 you will see what is inside of that nested event:
Track 1: pad transition
Track 2: low pad
Track 3: flute
Track 4: a blank track. I really should have removed that one.

The pad tracks were created by recording the string quartet playing long drones on specific notes, often open strings. I would roughly conduct the players to play loud or quiet, with faster or slower bowing, etc. I took those recordings and put them into iZotope’s Iris sample synth (on Iris 2 at the moment) and created these drone/synth pads by using the raw recordings of our string players. Speaking of flutes, in the pad nested event you will also see some multi sounds with a bucket of various flute flourishes to play back randomly. These were all improvised flute parts that we recorded, not written out. When you find yourself in a room with a musician who knows their way around an instrument really well, just hit the record button and record everything that you think you might need (as long as you have time). You will almost always find something that you can use.

In screenshot 3 you can see where the score starts with a linear piece of music, and in this case the flute was leading that particular cue. I used the interplay of the flute and 1st violin (sometimes clarinet as well) to represent the ‘frolicking’ nature of the birds’ interactions with each other. So the flute played a big part in the score and was often the first instrument in a cue. Other instruments would follow the flute, similar to the birds in the game.

We knew that this section would almost always last as long as it did because of the in-game map and wind currents. So you can see in the screenshot that it doesn’t have an interactive moment until it reaches the tempo marker at 175 (ignore the ‘to test’ transition markers and ‘test’ marker. Again, I should have removed those!). But that moment at the 175 tempo flag corresponds with Bar 17 in the score, where you can see there are long legato notes held there that fade out. That was my crude transition, but it worked as a transition and also allowed us a good edit point to increase the tempo without trying to record the tempo increase live. The musicians on this session were not familiar with using a click track or playing to a click…let alone a click track that increased in tempo. So in order to mitigate stress in the session, we prepared the music to be recorded as two separate cues here, one at 165 bpm with a long legato fadeout, and a second one at 175 bpm that could begin while the fade of the first cue played.

You can see in screenshot 4 of FMOD that at the 175 tempo marker, the nested event that it transitions to is another piece of linear music, “Level 2 B Linear Section.” We had a pretty good idea of how long this section would take for the player to get through (hence the linear music), but you can see in screenshot 5 that I hedged my bets and put in a safety transition in case they reached it earlier. The green transition region labeled “To C Section” will transition to a new section if the player reaches a specific marker on the game map. If they reach the marker, the linear music will crossfade into a new pad (again made up of the string drone recordings and Iris). You’ll also notice that there is a new tempo marker (with a new time signature) at that point (there are a lot of tempo markers in this music event).
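For readers curious how a parameter-driven transition region like this might be wired up from the game side, below is a minimal, hypothetical sketch using the FMOD Studio C++ API (2.x). The bank filenames, the event path "event:/Music/Level02", and the parameter name "ReachedSectionC" are illustrative placeholders, not data from the actual Gathering Sky project; it simply assumes the transition region is conditioned on a parameter that game code sets when the player crosses the map marker.

```cpp
// Minimal sketch: start a music event and nudge a transition parameter.
// All names here are placeholders for illustration only.
#include <fmod_studio.hpp>

int main()
{
    FMOD::Studio::System* system = nullptr;
    FMOD::Studio::System::create(&system);
    system->initialize(64, FMOD_STUDIO_INIT_NORMAL, FMOD_INIT_NORMAL, nullptr);

    // Banks exported from the FMOD Studio project.
    FMOD::Studio::Bank *master = nullptr, *strings = nullptr;
    system->loadBankFile("Master.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &master);
    system->loadBankFile("Master.strings.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &strings);

    // Start the level's music event.
    FMOD::Studio::EventDescription* description = nullptr;
    system->getEvent("event:/Music/Level02", &description);
    FMOD::Studio::EventInstance* music = nullptr;
    description->createInstance(&music);
    music->start();

    // ...later, when the player reaches the map marker, set the parameter
    // that the "To C Section" transition region is watching.
    music->setParameterByName("ReachedSectionC", 1.0f);

    // Call every frame so FMOD can process transitions and crossfades.
    system->update();

    music->release();
    system->release();
    return 0;
}
```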

I hope this demonstrates a few different methods of building a dynamic score using both recorded instruments and samples/synths, and offers some ideas for transitions and composition as it directly relates to game play.

Study Ex 1

Study Ex 2

Study Ex 3

Study Ex 4

Study Ex 5

Study Ex 6 - Download Now (PDF 82KB)

Study Ex 7 - Download Now (PDF 216KB)

Study Ex 8 - Download Now (PDF 24KB)

Next we have an example from John Robert Matz’ score to “Fossil Echo.” This game won multiple G.A.N.G. awards upon its release. It is a highly adaptive score that utilizes numerous nonstandard instruments. Study the orchestration carefully!

Study Ex 9 - Download Now (PDF 523KB)

Glossary

Numbers


2-Track editor (wave editor) allows for editing and generating 2 tracks of audio data. Its functions are similar to multi-track editors, which allow for editing the file and applying effects and other processes to manipulate the audio data.

3D sound emitter allows for real-time altering of how the sound is played back in game during runtime based on the audio listener’s distance to the emitting source.

A


AAA (pronounced "triple-A") is a term used to classify games with large development and marketing budgets that are expected to be high quality or to be among the year's best sellers.

Acousmatic Sound is sound whose originating source can’t be seen.

Adaptive Music in video games and other nonlinear experiences refers to music that changes or adapts in response to a change in events in the game.

Aleatory is musical composition that involves elements of randomness or chance.

Ambient Zone is a defined area in a game engine editor, which can be used to trigger ambience or background sounds.

Anchor Value Pricing is a method where contractors present an initial price to a client, which is then used as an “anchor” to set the perceived value of services rendered.

Anechoic Chamber is a room that completely absorbs reflections of sound or electromagnetic waves.

API (application programming interface) refers to a set of functions and procedures that allow one program to access features or data of applications, operating systems, or other services.

Arco is a direction in music for string players, which means “with the bow.”

ASMR (Autonomous Sensory Meridian Response) is an experience characterized by a tingling sensation on the skin when listening to certain sounds or watching certain visuals.

Asset Cannon is the process of delivering assets for a game based on a list of required sounds. This is usually done blindly in that the audio designer doesn’t have control over how the sounds are implemented.

Attenuators reduce the power of a signal without distorting the waveform.

Audio Engine refers to the software based audio functions within a game engine or in middleware / plugins.

Auralization refers to the process of modeling acoustics in a virtual space.

Automation refers to the process of software automatically performing a task over time. For example automating the volume on a mixer fader.

Axis of Symmetry in music refers to a midpoint around which all pitches can balance during transposition. Essentially the axis of symmetry functions as a mirror for any motivic operations.

B


Baffle (sound) is a constructed device to reduce the strength of sound waves to reduce reflections and mitigate noise.

Bake (effects into sound) is the process of rendering or bouncing down audio from a DAW with effects processing.

Bark (voice over / sound) refers to adaptive lines of chatter from NPCs in game.

Batch Processing is the process of applying an effects chain or the same edits to multiple files in an automated process.

Beta generally refers to the game’s status during the final stages of development. An open or public beta refers to the first publicly available version of a game. During this stage in development there may be bugs that are still being worked out, but for the most part production elements are finalized.

Beta Testers are typically people outside the development studio who test the game before final release. This offers a look at real-world exposure and varied play styles on different devices.

Bidding Process is used to select a vendor for services. During this time the audio contractor will prepare a statement of work and estimated costs in hopes of landing the project.

Binary Form (A/B) is a two section musical structure.

Black-Box Technique concentrates on the functionality and playability of the game. This includes the User Interface function, game play, graphics, sound and animations.

Blend Container (Wwise) is a container, which allows the grouping of multiple objects that can be played simultaneously.

Blimp (sound) is a microphone housing attached to a stand or boom pole used to reduce noise and protect from wind.

Branching is a horizontal re-sequencing technique where fully composed musical cues “branch” into one of many possible outcomes based on player choices.

Bug(s) is an error or fault in a computer program or video game that produces an unexpected result or behaves in unintended ways.

Build is a (platform specific) playable version of the game outside of the game engine’s editor.

C


Cadence(s) are musical configurations (usually a series of chords) that conclude a phrase and provide a sense of finality.

Chromaticism is the use of notes outside of a particular key for coloristic effect.

Circuit Bending is an experimental process in which circuits are customized and altered within low-voltage or battery-powered children's toys, digital synthesizers, or guitar pedal effects to create new and unique sound generators.

Classes (scripting) are templates that define objects; all objects of a class have the same components but unique values for those components. Additionally, classes are not limited to just variables.

Clear-Box Testing (aka White-Box Testing) is the process of testing and analysis of the system. During this process the tester will check the system or engine using profiling and other debugging methods to fix a bug or issue found during Black-Box Testing.

Clusters are chords of three or more notes built mostly from major and minor seconds.

Codec is a software application that encodes or decodes digital data. This process is used in audio to generate smaller file footprints across a large amount of assets.

Complementary Orchestration is the process of combining colors/timbres in a way that “fills in the gaps” like a puzzle piece.

Component(s) make up the function of objects and behaviors in a game. In Unity the Inspector window is used to add, remove or modify components within game objects.

Compression is the process of adjusting the dynamic range between the quietest and loudest parts of an audio signal through attenuation.

Conceptual Music draws inspiration from a particular concept, often creating a direct relationship with the concept itself and the procedure by which the music is composed.

Conical Bore is a possible shape of wind instruments such as the oboe and saxophone, which strongly affects the range and timbre of the instrument.

Container(s) (Wwise / FMOD) are a way to group audio objects and apply various settings to define the playback behavior of sounds within the game.

CPU stands for central processing unit, which is responsible for managing instructions, and allocating processes and tasks to manage the load.

Cylindrical Bore is a possible shape of wind instruments such as the clarinet and trumpet, which strongly affects the range and timbre of the instrument.

D


Deceptive Cadence (V - vi) is a progression often used at the end of a phrase to subvert the expected resolution to the tonic.

Delay Slaps (slapback) are single, short delayed repeats of the original signal.

Destructive Editor(s) applies edits and processing directly to the audio data, changing the data immediately as opposed to just editing its playback parameters.

Development Methodologies (Agile, Scrum, Waterfall, or Kanban design sprints) are frameworks for organizing design, product management, and project management, often supported by dedicated software tools. These methodologies can be customized for a development cycle based on project requirements. Game development requires many stages to complete its cycle, and these stages can be organized and managed for a smoother development cycle using one of the many methodologies.

Diatonic Harmony is harmony that uses only notes within a particular key.

Diegetic (sound / music) refers to a sound that has a source visible in game. A simple example is a car horn honk when a car is on camera in game.

Divisi means “divide,” referring to a string section usually.

DIY “do it yourself”

DLC (downloadable content) is additional content created for an already released game. The distribution process is typically a download over the Internet.

Doppler Effect is the change in frequency of a waveform in relation to the listener who is moving relative to the wave source. A common example of the Doppler effect is when a vehicle drives by a listener while blaring the horn. The pitch changes, as the vehicle gets closer to and then further from the listener.

Dynamic Mixing is a system in middleware or the native audio engine that changes the mix based on various factors such as game states and currently playing audio in games.

E


Enclosure is an orchestration technique in which a timbre encloses another timbre from above and below.

Envelope (sound) represents the varying levels of a sound wave over time broken down into attack, decay, sustain and release.

Event(s) (audio) are units of sound content and properties that are triggered and controlled by game code. All sounds produced in game have a corresponding event.

F


Filter typically refers to a device or a software application that can be used to amplify (boost), pass or attenuate (cut) frequency ranges of a sound wave.

Fletcher-Munson Curve is a graph that illustrates human hearing. It demonstrates the human ear’s average sensitivity to different frequencies at various levels.

Flutter Echo occurs when sound bounces quickly between two reflective surfaces, usually resulting in a “chatter” sound.

Foley Walker is a term used to describe a professional sound-effects performer who specializes in performing footsteps to picture.

Form (music) is the large-scale structure of a cue or piece of music.

FPS (First Person Shooter) is a game genre classification that describes a first person perspective focused on a weapon. In this style of game the player experiences the action through the eyes of the protagonist.

Frequency Range or Band is an interval in the frequency domain that is bound by an upper and lower frequency.

Frequency Masking occurs when the perception of one sound is affected by the presence of another in the frequency domain.

G


Game Data is the data that comes directly from the game engine as it is played.

Game Design Document (or GDD) is a highly descriptive video game design document created and edited by the development team and is primarily used to organize efforts within a development team and lay out the scope of the game.

Game Engine is the software platform that aids in the development and performance in real-time of a video game.

Game State refers to the relative configuration of all game components and assets at a given time.

Game Objects are interactable objects or processes within a game.

Game Mechanics (Gameplay Mechanics) are the methods of interaction that a game employs.

Game Sync(s) are objects in Wwise that synchronize with game data and enable real-time dynamic audio.

Gel or Glue (Compression) is a compression technique used to blend various source sounds and elements together in the mix.

Generative Music is music that is composed by a system, process, or framework, usually at a more granular level (i.e. individual notes and rhythms).

Gesture refers to the contour or “shape” of a sound or musical phrase.

Glissando is a musical technique where a player slides from one note to another over a larger interval than portamento.

Gold Master refers to the final release candidate build of a game, which passes all of the publisher and platform requirements.

Grotbox is a monitoring device, which simulates playback on consumer devices such as televisions, mobile devices, and home surround systems.

H


Harmonic Series is the series of overtones created by any and all sounds. Various orchestrational principles can be extracted from this physical phenomenon.

Harmonics are techniques that string players are often asked to perform that sound airy and thin due to the focus on an overtone other than the fundamental. Also see Harmonic Series.

Harmony is the framework of musical composition that guides the presentation of chords and other superimposed sounds as they relate to pitch.

Heterophony (Heterophonic) is a musical texture characterized by the simultaneous presentation of variations on a single melody.

Homophony is a musical texture characterized by a lead voice and an accompaniment.

Hooks are points of connection between game data and objects in Middleware (e.g. RTPCs).

Horizontal Resequencing is a method of musical composition where fully composed modules of music transition sequentially depending on gameplay. Also see Horizontal Scoring.

Horizontal Scoring is a method of musical composition where fully composed modules of music are scored so they can transition sequentially depending on gameplay. Also see Horizontal Resequencing.

HRTF (head-related transfer function) is a response that characterizes how the human ear receives a sound from a specific point in space.

I


Immersion is a subjective perception that is characterized by the level of engagement with the gameworld.

Implementation is the process of taking audio assets and organizing them into an interactive/adaptive framework either in Middleware or a game engine.

Inbound Networking is a networking strategy that brings potential clients in through various tactics (social media, professional contributions, speaking engagements, etc.).

Indie is short for “independent,” referring to an independent game development studio.

Integral Serialism is a method of composition where all aspects of the music are pre-determined by a system.

Integration is the process of connecting and synchronizing Middleware with a game engine like Unity or Unreal.

Interactive Audio is audio that the player can influence directly (e.g. Guitar Hero).

Interlocking is the orchestration process by which timbres in a stacked chord are alternated (i.e. flute1, then violin 1, then flute 2, then violin 2, etc.).

Inversion is the musical process of reordering notes or chords from top to bottom or bottom to top around an axis.

L


Layering is a sound design process where multiple sounds are stacked on top of each other to provide depth and detail.

Legato is a musical technique where note transitions are smooth and fluid.

Leitmotifs are musical ideas (like motifs) that are designed to be associated with a character, object, or idea in a game.

Level can refer either to volume (loudness) or to a discrete area or stage in a game.

Loop Region(s) are objects in FMOD that define a section in which the playhead will loop indefinitely.

Ludomusicology is the scholarly study of game music.

M


Metadata is data that holds information about other data.

Mid-Side Technique refers to a coincident stereo recording method that pairs a “mid” microphone aimed at the source with a bidirectional “side” microphone aimed perpendicular to it; the stereo image is created by summing and differencing the two signals, so width comes from level differences rather than time delays.

Middleware programs like Wwise and FMOD are “go-betweens” that allow composers and sound designers to create interactive systems with little to no programming. These programs are the “middle men” between composition and programming.

Milestones are checkpoints throughout the development process.

MOBA refers to multiplayer online battle arena game genre.

Mode is a type of scale (major, Locrian, Mixolydian, etc.).

Mode Mixture is a compositional technique where the type of scale (major, minor, lydian, etc.) is mixed for coloristic effect.

Mode(s) of Limited Transposition are scales that cannot be transposed to every key without mapping to the original scale (e.g. whole-tone, octatonic scales).

Modularity refers to the property of music that allows it to be highly adaptable or interactive. Small “modules” of music are composed (as opposed to longer structures), which allows the music to take turns and adapt flexibly to player actions.

Modulation is the process of varying one or more properties of a sound.

Monophony is a musical texture consisting of a single melodic line, whether performed by one voice or by several in unison.

Motif is a small musical idea.

Multi-Instruments are objects in FMOD that allow multiple sounds to be stored and triggered.

Musical Palette is the spectrum of instruments and sounds used on a project.

N


Narrative Design is the process of crafting the game’s story and systems and bridging scene to scene.

Native Implementation in computing, software or data formats that are native to a system are those that the system supports with minimal computational overhead and additional components. This word is used in such terms as native mode and native code.

Nested Events are event abstractions in FMOD; in other words events within events.

Networking is the process of making friends and connections with regards to professional work.

Noise Floor is the level of the background noise beneath the audio signal, measured in decibels.

Noise Sweep is generated by an Envelope on the Filter where the speed of the sweep is set by the Attack and Decay and the direction of the sweep (up or down) is controlled by the filter settings. Most often the sound is generated by using noise oscillators such as white noise, pink noise or colored noise.

Non-Diegetic refers to sound whose source is not present in a game scene (e.g. underscore).

Non-Disclosure Agreement (NDA) is a legal contract in which the parties agree not to share confidential information about a project.

Non-Exclusive Agreement is an agreement between licensor and licensee where the work created is legally available for use, but may be licensed out for other projects as well.

Nonlinear Audio is audio that changes in some way with each play through or in response to incoming game data.

Non-Player Character (NPC) is a character in a game generally not controlled by the player.

Non-Tertian Harmony is harmony based on intervals other than thirds. Also see clusters, quartal, and quintal harmony.

O


Obstruction / Occlusion are typical conditions of most game environments where sound or an object become either obstructed by another object (such as a wall) or occluded in a room where the listener can only hear a few muffled sounds leaking through walls.

One-Shot Sound refers to sounds that aren’t looped when triggered.

Open Voicing is a method of orchestration/arrangement where chords are voiced across more than one octave.

ORTF is a near-coincident stereo microphone technique that uses two cardioid microphones spaced roughly 17 cm apart and angled 110° from one another.

Oscillator(s) are essential components of synthesizers that generate sound.

Outbound Networking is a networking tactic where contractors reach out to clients (cold-calling, door to door, etc.).

P


Parameters are objects in FMOD and Wwise that “catch” incoming game data and allow real-time processing (see RTPC).

Player Character (PC) is the player-controlled character in a game.

Pedagogy is a method or framework for teaching a subject or skillset.

Physics in sound design refers to the physical properties that create a sound, for example velocity, mass, etc.

Pitch is the perceptual property of music that determines the note and octave; it correlates to frequency in hertz (Hz).

Pitch Shifter is a plugin that raises or lowers the pitch of incoming audio.

Pizzicato is a technique for strings where a note is plucked instead of bowed.

Play Testing is the process of playing a game and testing for various issues including bugs, inconsistencies, or larger problems like game play flow and coherency.

Playable Build is an executable version of the game that can be played on the targeted development platform.

Plugin Effects are software programs used within a DAW or middleware that process audio or MIDI in some way.

Polyphony is a musical texture where multiple lines have independence and complexity.

Portamento is a musical technique for strings where one note is bent or “slid” into from the previous note.

Positional Sound Emitter is an object placed in the game world with attached audio and logic resources allowing it to trigger sound from a specific point in the scene.

Pre-Delay refers to the amount of time between the original dry signal and the audible onset of early reflections. Adjusting the pre-delay parameter makes a huge difference in the “clarity” of a mix.

Procedural refers to a process that is controlled through an algorithm.

Production is the stage of game development when a vertical slice is typically created.

Profiler (Wwise / FMOD) is a debugging tool within audio middleware that allows for capturing and monitoring the performance of each game element or audio event as it occurs.

Programmer Sounds (FMOD) are modules in middleware that are controlled at runtime by code.

Proximity Effect is the overemphasis on bass frequencies caused by placing a directional microphone too close to a sound’s source.

Punch is the capacity of a sound to impress or startle.

Pyramid of Sustainability is a visual metaphor for career development in which the foundation is personal health and happiness.

Q


Quad Ambience (Quadraphonic Sound) is equivalent to 4.0 surround sound, which uses four channels with speakers positioned at the four corners of the listening space, reproducing signals that are (wholly or in part) independent of one another. In games a quad ambience might refer to a single event with two or more sets of stereo recordings from the same environment. While you may lose realistic positioning with this style of ambience the plus side is a denser soundscape.

QA (Quality Assurance) for games is about finding inconsistencies or bugs in game and documenting, reproducing, and reviewing until the product is in a shippable state. QA teams test games for bugs and technical issues.

Quantitative Metric is an evaluative metric that can be measured with numbers.

Quantize is a method of “snapping” MIDI notes to a rhythmic grid.

Quartal Harmony is a harmonic system made up of stacked fourths.

Quintal Harmony is a harmonic system made up of stacked fifths.

R


Random Containers in Wwise randomize any sounds or musical cues within them.

Random Playlists in Wwise randomly sequence an array of musical segments within the playlist.

Ray Tracing (or Ray Casting) determines the visibility of surfaces by tracing imaginary rays of light from the camera’s view to the object in the scene.

Real-Time Parameter Control (RTPC) is used in both Wwise and FMOD to bind to incoming game data and allow automation, transitions, and other real-time changes to audio.

Repository (SVN) is a collection of files and directories, bundled together in a database that records a complete history of all the changes that are made to these files. Users can collectively roll back changes or update the project with new data.

Retrograde means “backwards,” usually referring to a sequence of notes or chords.

Revisions and Reworks are basically “fixes” to a work-in-progress audio asset. Revisions are small fixes, while reworks usually mean starting from scratch with a new direction in mind.

Rhythmic Density is a measure of the depth and complexity of a rhythmic figuration.

Rips are quick, aggressive musical gestures, usually ascending.

Rubato is a tempo instruction in music that gives players the freedom to speed up and slow down as needed.

Runs are musical gestures where musicians play an ascending or descending phrase with many notes quickly.

Runtime is when a program is running after the player starts the game.

S


Seek Speed is a method of smoothing parameter changes in FMOD.

Self-Noise is noise introduced to the audio path by the microphone’s circuitry. Using a microphone with too high a self-noise to capture very quiet sounds will result in audible hiss.

Semitone Offset is a compositional technique where the “goal” note in a melody is moved up or down by a semitone.

Sequencing is a technique where multiple transpositions are applied to a small melodic fragment in quick succession.

Side-Chaining is a method of triggering an action using the signal from another source. One common example is using input from a dialogue channel to “duck” other sounds.

Snapshots are mix-states in FMOD that can be used to change and blend the game’s mix in real-time.

Sound Containers is a broad term that describes a software object capable of storing multiple sounds to be triggered and processed in various ways.

Sound Propagation refers to the movement and travel of soundwaves.

Spatialization is a technique used to process sound to give the listener the impression of a sound source within a three-dimensional environment.

Spectrum Analyzer is a tool that measures the magnitude of an input signal versus frequency within the full frequency range of the sound.

Spiccato is a bowing technique for strings where the bow bounces lightly off the string.

Spotting is the process of deciding when and how music should be triggered in-game.

Spread Control is a parameter that adjusts the stereo base of a sound.

Stabs are quick, powerful musical hits, often used for emphasis.

Staccato is a musical direction that means “short,” referring to the duration of notes to be played.

Stingers are short linear music cues.

Streaming Audio is stored in a device’s persistent memory (hard drive, flash drive, etc.) and streamed when played, so it does not need to be loaded entirely into RAM for storage and playback.

Sul Ponticello is a string technique where players bow close to the bridge.

Switch Group(s) are objects in Wwise that literally “switch” a sound or musical cue based on incoming game data.

Structural Development is a method of developing music in terms of large-scale form, usually this involves thematic development and reorchestration of major motifs.

T


Tight Scoring is a musical aesthetic akin to early cartoons where music tightly mimicked the actions of characters. Think, “Mickey Mousing.”

Transition Markers are objects in FMOD that allow for horizontal re-sequencing. They are like “goal posts” that can be transitioned to and from.

Transient Shaper is a plugin that controls the attack and sustain of incoming audio.

True Peak refers to the highest level a signal reaches in the reconstructed (analog) waveform, including inter-sample peaks that can exceed the highest recorded sample value.

Timbre Separation is an orchestration technique that preserves and balances timbres when voicing a chord.

Tremolo is a musical technique where notes are rolled or subdivided.

Transposition involves moving a melody or chord up or down in pitch while maintaining the intervallic relationship.

True Legato is a sampling technique where actual legato transitions are recorded and triggered by overlapping MIDI notes.

Twelve Tone Method is a method of musical composition developed by Arnold Schoenberg wherein all twelve tones must be used before any are repeated.

Tone Row is a particular ordering of twelve tones in a “row,” as used in Schoenberg’s twelve-tone method of composition.

Transient is a sound with a fast attack that dissipates quickly.

Tessitura is an instrument’s strongest and most comfortable range.

Ternary Form (A/B/A) is a three-part musical structure.

V


Vocal Mimicry is the use of the human voice to imitate sound effects, creatures, or instruments, often recorded as source material for sound design.

Vertical Layering is a method of stacking layers of music or sound “vertically” to be added or subtracted based on the game state.

Volume Rolloff is used to describe the sound attenuation as the audio listener moves away from the source.

VRAM is the RAM used by graphics cards.

Vertical Slice is a portion of a game that can be used as a proof of concept for investors. Unlike a prototype, the vertical slice is expected to have a polished quality.

W


Wilhelm Scream is a stock sound effect of a man screaming that has been used in countless films and television series. It was first used in 1951 for the film Distant Drums. The scream is often used when someone is shot, falls from a great height, or is thrown from an explosion.

WALLA is a sound effect imitating the murmur of a crowd in the background.

White-Box Technique (aka Clear-Box Testing) is the process of testing and analysis of the system. During this process the tester will check the system or engine using profiling and other debugging methods to fix a bug or issue found during Black-Box Testing.

Z


Zero-Crossing is the point where a waveform crosses the zero level axis. Zero-crossings in audio editors are used when performing editing operations, such as cutting, pasting, or dragging. When these operations are not performed at zero crossings, the result can be discontinuities in the waveform, which will be perceived as clicks or pops in the sound.
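As a brief, editor-agnostic illustration of the idea, the sketch below (assuming a mono buffer of float samples) finds the zero crossing nearest a requested edit point so a cut or fade can be snapped to it:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Find the sample index nearest `editPoint` where the waveform crosses (or
// touches) zero, so an edit can be placed there to avoid clicks and pops.
std::size_t nearestZeroCrossing(const std::vector<float>& samples, std::size_t editPoint)
{
    if (samples.empty())
        return 0;
    if (editPoint >= samples.size())
        editPoint = samples.size() - 1;

    auto crossesZero = [&](std::size_t i) {
        return samples[i] == 0.0f ||
               (i + 1 < samples.size() &&
                std::signbit(samples[i]) != std::signbit(samples[i + 1]));
    };

    for (std::size_t offset = 0; offset < samples.size(); ++offset) {
        if (editPoint >= offset && crossesZero(editPoint - offset))
            return editPoint - offset;   // look backwards from the edit point
        if (editPoint + offset < samples.size() && crossesZero(editPoint + offset))
            return editPoint + offset;   // look forwards from the edit point
    }
    return editPoint; // no crossing found (e.g. DC offset); fall back to the request
}
```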

Additional Resources

Sound Design

https://www.asoundeffect.com/blog/ Both a sound effects library resource and blog.

https://blog.audiokinetic.com/ Audiokinetic’s game audio and Wwise tips.

http://www.gamesounddesign.com/index.html - A helpful how-to blog.

http://blog.lostchocolatelab.com/ - Damian Kastbauer’s reflections on game audio

http://tonebenderspodcast.com/ - A high quality sound design podcast featuring interviews, discussions and thoughts on field recording and sound design.

https://soundworkscollection.com/ - Well-produced videos on a variety of sound production topics.

https://www.creativefieldrecording.com/ Paul Virostek’s blog with articles on field recording and sound effects.

https://www.gearslutz.com/board/ A forum for gear related posts as well as post-production techniques.

https://sound.stackexchange.com/ A Q&A site for sound engineers, producers, editors and enthusiasts.

http://designingsound.org/ - An amazing collection of articles, interviews and more. This site is no longer supported with updated material but thrives as an archive of wonderful content.

https://www.reddit.com/r/GameAudio An audio subreddit, which explores the craft of sound for video games.

https://www.thesoundarchitect.co.uk/category/game-audio/ A blog & podcast with interviews of professionals in the industry.

Music

http://www.videogamemusicacademy.com/blog/ A blog featuring interviews with professionals in the industry.

https://www.gamasutra.com/blogs/author/WinifredPhillips/930735/ A blog by Winifred Phillips featuring her musings on game audio.

https://www.gamasutra.com/category/audio/ A blog featuring various writers on a wide range of audio topics from creation to the business side of things.

http://www.lynda.com/ A paid website with video tutorials for a wide variety of subjects (Business, Audio, Video, Web, Design, Animation, and more).

http://www.greatgamemusic.com/blog/get-into-game-music/ Music for Games blog.

Business

https://www.gdconf.com/ The Game Developers Conference is a yearly professional event championing game developers and the advancement of their craft.

https://www.gamesoundcon.com/blog A conference and blog covering game music, sound design, and VR audio.

https://www.twitch.tv/powerupaudio Reel Talk is a weekly Twitch show featuring the Power Up Audio team’s in-development projects, Q&A, and audience demo reel feedback and review. This is an excellent resource for putting together your demo reels. They will critique presentation, material selection, content quality, and distinction.

https://www.akashthakkar.com/courses Sound Designer Akash Thakkar offers free online courses with content specializing in freelancing in the game industry.

https://www.asoundeffect.com/find-audio-jobs/ Audio jobs newsletter and Facebook group.

https://www.facebook.com/groups/229441400464714/ The Game Audio Denizens Facebook group.

https://www.audiogang.org/ The Game Audio Network Guild.

https://igda.org/ The International Game Developers Association.

https://www.facebook.com/groups/nycga/ NYC based game audio group.

https://www.facebook.com/groups/gameaudio/ Game Audio Facebook group.

Education

Berklee College of Music - https://www.berklee.edu/

Berklee College of Music Online - https://online.berklee.edu/

NYU Scoring for Games Summer Workshop - https://steinhardt.nyu.edu/programs/screen-scoring/screen-scoring-summer-intensives/screen-scoring-summer-workshops/nyu

ThinkSpace Education Master’s program for Game Audio - https://thinkspaceeducation.com/gma/

The School of Game Audio - https://school.videogameaudio.com/

Becker College - https://www.becker.edu/academic/academic-programs/design-technology/game-audio/

Vancouver Film School - https://vfs.edu/programs/sound-design

Wwise Certifications - https://www.audiokinetic.com/learn/certifications/

Learn FMOD - https://www.fmod.com/learn

* We will continue to update this page with new resources so come back regularly for new information.

Further Reading

Part 1: Sound Design

Everest, F. (2006) Critical Listening Skills for Audio Professionals: 2nd edn. Course Technology PTR

Sanger, G. (2003) The Fat Man on Game Audio: Tasty Morsels of Sonic Goodness: 2nd edn. Fat Manor Publishing

Viers, R. (2011) Sound Effects Bible: How to Create and Record Hollywood Style Sound Effects: Illustrated edn. Michael Wiese Productions

Theme Ament, V. (2014) The Foley Grail: The Art of Performing Sound for Film, Games, and Animation: 2nd edn. Focal Press

Bridgett, R. (2009) A Holistic Approach to Game Dialogue Production: Gamasutra: www.gamasutra.com/view/feature/132566/a_holistic_approach_to_game_.php

Purcell, J. (2013) Dialogue Editing for Motion Pictures: A Guide to the Invisible Art: 2nd edn. Routledge

Part II: Music

Thomas, C. (2015) Composing Music for Games: 1st edn. Routledge

Sweet, M. (2014) Writing Interactive Music for Video Games: A Composer's Guide: 1st edn. Addison-Wesley Professional

Marks, A. (2017) The Complete Guide to Game Audio: 3rd edn. A K Peters/CRC Press

Phillips, W. (2017) A Composer's Guide to Game Music: 1st edn. MIT Press

Owsinski, B. (2017) The Recording Engineer’s Handbook: 4th edn. Bobby Owsinski Media Group

Owsinski, B. (2017) The Mix Engineer’s Handbook: 4th edn. Bobby Owsinski Media Group

Melin, S. (2019) Family-First Composer: Proven Path to Escape 9–5 and Support Your Family Composing Music for Film, TV, & Video Games. Independently published

Collins, K. (2008) Game Sound: An Introduction to the History, Theory, and Practice of Video Game Music and Sound Design: 1st edn. MIT Press

Part III: Implementation

Kastbauer, D. (2017) Game Audio: Tales of a Technical Sound Designer Volume(s) 01 & 02. Amazon Digital Services LLC

Goodwin, S. (2019) Beep to Boom: The Development of Advanced Runtime Sound Systems for Games and Extended Reality (Audio Engineering Society Presents): 1st edn. Routledge

Stevens, R., and Raybould, D. (2015) Game Audio Implementation: A Practical Guide Using the Unreal Engine: 1st edn. Routledge

Lanham, M. (2017) Game Audio Development with Unity 5.x. Packt Publishing

Part IV: Business

As audio designers we focus on improving our creative and technical skills. It’s great to have a hunger for knowledge, as most successful people learn as much as they can while questioning and re-evaluating their paths.

The thing is... creative and technical knowledge will only get you so far. Expanding your business knowledge will help you build a lasting career.

In the list below you will find some further reading which will help you flex the business side of your skill set by building relationships, increasing your value and selling your brand.

Barnes, C. (n.d) Getting into Game Audio: CB Sound: www.cb-sound.com/game-audio-business-101

Gaston-Bird, L. (2019) Women in Audio (Audio Engineering Society Presents): 1st edn. Routledge

Covey, S. (2013) The 7 Habits of Highly Effective People: Powerful Lessons in Personal Change: Anniversary edn. Simon & Schuster

Carnegie, D. (2017) How to Win Friends and Influence People: Paperback edn. Amaryllis

Kiyosaki, R. (2017) Rich Dad Poor Dad: What the Rich Teach Their Kids About Money That the Poor and Middle Class Do Not!: 2nd edn. Plata Publishing