On Saturday, May 11, 2019, from 9:30 AM to 4:00 PM, DigiPen held its seventh annual Audio Symposium, a yearly event presented by the DigiPen Music & Sound Design department at the school's Redmond campus. The event was cosponsored by the Game Audio Network Guild and the Audio Engineering Society Pacific Northwest Section. In lieu of a traditional section meeting, AESPNW encouraged members to take advantage of the opportunity; roughly 10 AES members were among the attendees.
The day-long event, free and open to the public, featured presentations on a wide range of music- and audio-related topics. This year's presentations focused on music and sound for video games; audio for virtual, augmented, and mixed reality; and algorithmic music composition. Talks were given by Guy Whitmore (Foxface Rabbitfish LLC), Alistair Hirst and Kevin Salchert (Amazon Game Studios), Sally Kellaway (Microsoft Mixed Reality at Work), and Stan LePard.
Guy Whitmore's talk "Freelance Music Design and Beyond!" focused on the opportunities now available to video game composers not just for music composition but also for implementing that music in the game, a practice Whitmore calls "Music Design." Whitmore noted that it is common in the video game industry for a composer to deliver music assets without working on their implementation in the game. Instead, implementation is often relegated to a game programmer or someone else without musical knowledge or trained ears, which can lead to a less musical result. Whitmore encouraged game composers to work more closely with the game engine and to start scoring musical "moments" in a game rather than "minutes" or large spans of time. He offered ways composers can extend their services beyond simply delivering music assets, including consultation, music design, adaptive music composition techniques, and computer scripting. He encouraged composers who lack certain technical abilities to consider forming a team with technically oriented programmers and musicians, all in service of the ultimate goal: elevating the art form of game music composition.
Alistair Hirst and Kevin Salchert presented a talk entitled "Audio for The Grand Tour Game," discussing the unique challenges of making a racing game designed around an episodic, loosely scripted reality television series, The Grand Tour. Hirst described some of the difficulties of directing the audio for an episodic game in which new content had to be turned around very quickly. Hirst and his team worked closely with the TV crew and had access to Pro Tools sessions directly from the show. One challenge was getting audio content to sound consistent when the actors were often recorded in different locations with less-than-ideal acoustics and background noise. Hirst also spoke about his approach to handling the game's large number of voice lines. He programmatically repurposed voices from the TV show to comment on the action taking place in the game, giving it a unique and humorous spirit. He employed an emotion-based tagging system to classify voice lines so they could be matched to appropriate gameplay actions and outcomes, and he used voiceover to guide the player, for example by calling out straightaways and sharp turns in the game's levels.
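To make the idea concrete, here is a minimal sketch of how an emotion-based tagging scheme for voice lines might work. This is an illustration, not Hirst's actual system; the class, tag names, and function are hypothetical.

```python
# Hypothetical sketch: voice lines carry emotion and context tags, and
# the game queries for a line matching the current gameplay event.
import random
from dataclasses import dataclass, field

@dataclass
class VoiceLine:
    clip_id: str
    emotion: str                                 # e.g. "gloating", "panicked"
    contexts: set = field(default_factory=set)   # e.g. {"crash", "overtake"}

LINES = [
    VoiceLine("vo_012", "gloating", {"overtake"}),
    VoiceLine("vo_047", "panicked", {"crash", "spinout"}),
    VoiceLine("vo_101", "deadpan",  {"crash"}),
]

def pick_line(event: str, mood: str) -> VoiceLine | None:
    """Return a random voice line tagged for the gameplay event,
    preferring lines that also match the desired emotional tone."""
    matching = [line for line in LINES if event in line.contexts]
    preferred = [line for line in matching if line.emotion == mood]
    pool = preferred or matching
    return random.choice(pool) if pool else None

# Example: the player crashes; ask for a panicked reaction.
print(pick_line("crash", "panicked"))
```

Keeping the selection random within the matching pool helps avoid obvious repetition when the same event fires many times in a session.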
Kevin Salchert covered more of the technical audio challenges of The Grand Tour Game, including a customized sound spatialization system for the multiplayer split-screen mode and a specialized granular synthesis tool, Rev by Crankcase Audio, that the team used for car engine sounds. Salchert reviewed the techniques his team used to create exciting whooshes and whizzes for cars and objects flying past each of the game's players.
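For readers unfamiliar with granular engine synthesis, the sketch below illustrates the general idea behind tools of this kind: short windowed grains are read out of an engine recording at positions driven by the target RPM, so perceived engine speed can vary continuously. This is not Crankcase Audio's implementation; all parameters here are illustrative.

```python
# Illustrative granular resynthesis of an engine loop (assumed design).
import numpy as np

def granular_engine(source: np.ndarray, fs: int, rpm_curve: np.ndarray,
                    grain_ms: float = 30.0) -> np.ndarray:
    """Resynthesize an engine sound, scrubbing grain positions by RPM
    (rpm_curve is normalized 0..1, one value per output grain)."""
    grain = int(fs * grain_ms / 1000)
    hop = grain // 2                      # 50% overlap between grains
    window = np.hanning(grain)
    out = np.zeros(len(rpm_curve) * hop + grain)
    for i, rpm in enumerate(rpm_curve):
        pos = int(rpm * (len(source) - grain))   # RPM -> read position
        out[i * hop:i * hop + grain] += source[pos:pos + grain] * window
    return out

# Example: sweep a stand-in "engine" tone from low to high RPM.
fs = 48000
recording = np.sin(2 * np.pi * 80 * np.arange(fs * 2) / fs)
sweep = granular_engine(recording, fs, np.linspace(0.0, 1.0, 200))
```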
Sally Kellaway's talk "Hearing into the Future: Audio in XR" shed light on the unique design challenges of creating audio experiences for virtual, augmented, or mixed reality. She stated that "the audio industry for spatial audio is the largest real-world psychoacoustics experiment that has ever been conducted." Kellaway covered the science and history of spatial audio technology and the design decisions that sound designers and composers can make to render user experiences more aesthetically pleasing and immersive, including insights on psychoacoustics, HRTFs for sound spatialization, sound mixing techniques, and considerations of object size and spacing when producing sound for virtual and augmented realities. She proposed a reflective framework for evaluating user experience in XR: the technology, the content, and the user's perception together constitute the experience. Kellaway played examples of her own virtual reality sound design that showcased these techniques, demonstrating several ways sound can be leveraged to help users understand audio cues and to make these experiences more satisfying.
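As background on the HRTF technique Kellaway referenced, here is a minimal sketch of binaural rendering: a mono source is convolved with a pair of head-related impulse responses (HRIRs) measured for the desired direction, producing left and right ear signals. The placeholder impulse pair below is an assumption for illustration; a real system would load measured HRIRs (e.g., from a SOFA file) for the source's azimuth and elevation.

```python
# Minimal HRTF-style binaural rendering sketch (illustrative only).
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono: np.ndarray, hrir_left: np.ndarray,
               hrir_right: np.ndarray) -> np.ndarray:
    """Render a mono signal to binaural stereo for one source direction."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)   # shape: (samples, 2)

fs = 48000
mono = np.random.randn(fs)                 # one second of test noise
hrir_l = np.zeros(256); hrir_l[0] = 1.0    # trivial identity "HRIR"
hrir_r = np.zeros(256); hrir_r[8] = 0.8    # delayed, attenuated right ear
binaural = spatialize(mono, hrir_l, hrir_r)
```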
Stan LePard's presentation "Forays into Non-Linear Music" discussed three works of his that feature algorithmic and aleatoric (chance) music techniques. The first, Images of Emergence, is built around the concept of emergence. LePard composed the work for symphony orchestra and video, exploring musical emergence alongside video images of emergent phenomena taken from various vantage points: macroscopic views of space and cities, microscopic subjects such as crystalline formations, and abstractions such as fractal geometry. Rather than writing for the orchestra as a whole, LePard asks each member of the orchestra to function as an individual, or "agent." Each player is given instructions covering a range of musical notes, rhythms, and playing techniques. Within global sections defined by the conductor, emergent sound combinations arise as each orchestra member (or "agent") combines their sounds with those of the other instrumentalists. LePard then spoke about his second work, an electronic version of Images of Emergence composed in the software Max/MSP. He was not happy with the outcome of this version, owing to its limited timbres and a lack of computational power, though he is considering continuing to refine it in the future.
Finally, LePard showed a musical composition created for an installation at Seattle's Living Computer Museum. For this work, he used Ableton Live to create an ambient, real-time composition designed to play indefinitely as background music. Having composed a wide set of musical motifs, LePard leveraged Ableton Live's algorithmic playback capabilities to assemble this array of musical moments into a compelling, ever-changing mosaic of sound.
He sustained sonic interest by employing a wide range of electronic timbres, all built with Ableton Live's stock synthesizers, and demonstrated unusual ways of generating complex rhythmic structures for the piece using an arpeggiator combined with tempo automation and modulation.
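The aleatoric playback idea behind such an installation can be sketched in a few lines. This is an illustration of the general technique, not LePard's actual Ableton Live set; the motif names and durations are hypothetical.

```python
# Illustrative aleatoric scheduler: a pool of composed motifs is drawn
# from at random, forever, so the stream never repeats exactly.
import random

MOTIFS = {
    "pad_drone":   16.0,   # motif name -> duration in seconds
    "bell_figure":  4.0,
    "bass_pulse":   8.0,
    "arp_cascade":  6.0,
}

def infinite_mosaic(seed: int | None = None):
    """Yield an endless sequence of (motif, duration) pairs, never
    choosing the same motif twice in a row."""
    rng = random.Random(seed)
    previous = None
    while True:
        choice = rng.choice([m for m in MOTIFS if m != previous])
        previous = choice
        yield choice, MOTIFS[choice]

# Example: print the first few scheduled motifs instead of playing audio.
player = infinite_mosaic(seed=42)
for _ in range(5):
    motif, duration = next(player)
    print(f"play {motif} for {duration:.1f}s")
```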
Reported by Greg Dixon, PNW Section Committee