The Next-Gen Dynamic Sound System
of Killzone Shadow Fall
Take Away
• Promote experimentation
• Own your tools to allow it to happen
• Reduce “Idea to Game” time as much as possible
Take Away
• If a programmer is in the creative loop, creativity suffers
• Don't be afraid to give designers a visual programming tool
• Generating code from assets is easy and powerful
Andreas Varga (Senior Tech Programmer)
Killzone Shadow Fall
Killzone 3
Killzone 2
Manhunt 2
Max Payne 2: The Fall of Max Payne
Anton Woldhek (Senior Sound Designer)
Killzone Shadow Fall
Killzone 3
Killzone 2
Creator of the Game Audio Podcast
Who are Andreas & Anton? Why would you listen to them? Well, we worked on sound for these games…
We work at Guerrilla, which is one of Sony’s 1st party studios. Here’s a view of our sound studios
Killzone 2 Killzone 3
After Killzone 3, we started working on Killzone Shadow Fall, which was a launch title for the PS4. This was planned as a next-gen title from the beginning.
Here's a short video showing how the game looks and sounds.
Andreas: But what exactly does next-gen mean?
We thought next-gen sound (initially) wouldn't be about fancy new DSP, but about faster iteration and less optimization, to allow the creation of more flexible sounds that can truly become part of the game design. We also wanted to make sure that the sound team works with the same basic workflow as the rest of the company.
So our motivation was to build a cutting-edge sound system and tool set, but not to build a new synth.
What is Next-Gen Audio?
• Emancipating sound, the team and the asset
• Instantly hearing work in game, never reload
• Extendable system, reusable components
• High run-time performance
• Don’t worry about the synth
Anton
Why Emancipate?
• Audio is separate from rest
• Physical: sound proof rooms
• Mental: closed off sound engine and tools
• Should actively pursue integration
Anton
Why Integrate?
• Gain a community of end users that:
• Can help with complex setup
• Teach you new tricks
• Find bugs
• Benefit mutually from improvements
Anton
Motivation for Change
Previous generation was very risk averse; failing was not possible.
(Chart: "Ideas we had", "Ideas we implemented" and "Ideas we shipped with", comparing KZ2/KZ3 with KZ SF)
We could experiment much more!
More of the ideas shipped in the game!
No bad ideas!
Old Workflow
Nuendo -> export -> WAV Assets -> import -> 3rd Party Sound Tool -> reload -> Play Game
Wasted time: ~10s (export), 5s…30s (import), ~2mins (reload)
New Workflow
Nuendo -> export -> WAV Assets -> hot-load -> Running Game
Sound Tool -> hot-load -> Running Game
Wasted time: ~10s (export only)
Video of iterating on a single sound by re-exporting its wav files from Nuendo and syncing the changes with the running game.
No More Auditioning Tool
• Game is the inspiration
• Get sound into the game quickly
• No interruptions
• Why audition anywhere else?
How to Emancipate?
• Make sound engine and tools in-house
• Allows deep integration
• Immediate changes, based on designer feedback
• No arbitrary technical limitations
Andreas
Extendable, Reusable and High Performance?
April 11, 2011
Max/MSP Pure Data
Anton: Sound designers are already used to data flow environments such as Max/MSP and Pure Data.
Andreas: While that's one good way of thinking about our system, it's also a bit different from them. Our system doesn't work at the sample level or audio rate, i.e. we're not using our graphs for synthesis, but only for playback of already existing waveform data.
⚠️ Technical Details Ahead! ⚠️
• We will show code
• But you don’t have to be afraid
• Sound designers will never have to look at it
Graph Sound System
• Generate native code from data flow
• Execute resulting program every 5.333ms (187Hz)
• Used to play samples and modulate parameters
• NOT running actual synthesis
• Dynamic behavior
(Graph diagram: a Sound graph with inputs X and Y and output Z; Input X feeds Node A, Input Y feeds Node B, and both feed Node C, which produces Output Z. Each node is defined in its own file: A.cpp, B.cpp, C.cpp.)

Sound.cpp
A(inA, outA) { … }
B(inB, outB) { … }
C(inX, inY, outZ) { … }

Sound(X, Y, Z)
{
    A(X, outA);
    B(Y, outB);
    C(outA, outB, Z);
}

The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important.
The functionality of each node is defined in a C++ file, which typically contains just one function.
When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph.
We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
Sound.cpp -> compile -> Sound.o
Sound2.cpp -> compile -> Sound2.o
Sound3.cpp -> compile -> Sound3.o
Sound4.cpp -> compile -> Sound4.o
Sound.o … Sound4.o -> link -> Dynamic Library (PRX) -> hot-load -> Execute Sound() in Game

So as part of our content pipeline, the C++ code that was generated for the graph is compiled into an object file, and together with other object files gets linked into a dynamic library, which is loaded as a PRX file on the PS4 at runtime. This allows us to execute the code in the game.
Our initial attempt was slightly different though: we first experimented with LLVM, using byte code and the LLVM JIT compiler at runtime, which was nice for initial testing and development. But soon after that we switched to a more traditional offline compilation step, using Clang and dynamic libraries.
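The PRX loading itself isn't detailed in the talk. As a rough sketch of the same idea on a POSIX-style platform, hot-loading a generated Sound() function from a freshly linked dynamic library could look like this; the library path, symbol name and C linkage are assumptions:

// Minimal hot-load sketch using POSIX dlopen/dlsym as a stand-in for the
// PS4 PRX module loading described above. The library path and the exported
// symbol name are assumptions; the generated function is assumed to be
// exported with C linkage so dlsym can find it by name.
#include <cstdio>
#include <dlfcn.h>

using SoundFunc = void (*)(bool on_start, bool on_stop);

int main()
{
    // Load the freshly built library containing the generated graph functions.
    void* lib = dlopen("./libsounds.so", RTLD_NOW);
    if (!lib)
    {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // Look up the generated Sound() function and run one update with
    // on_start active, like the engine does on the first synth frame.
    if (SoundFunc sound = reinterpret_cast<SoundFunc>(dlsym(lib, "Sound")))
        sound(true, false);

    // A real hot-loader would keep the module resident and swap it on change.
    dlclose(lib);
    return 0;
}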
The simplest graph sound example we can come up with
Let's get to know the system by building a very basic sound.
(Graph diagram: an Inputs node holding the game parameters on_start and on_stop feeds the Wave node's inStart and inStop; inWave holds the sample data; the Wave node also has other voice parameters such as Dry Gain, Wet Gain, LFE Gain, etc.)
• Wave node is used to play sample data
• The on_start input is activated when the sound starts
• When inStart is active, a voice is created and played
• So this graph plays one sample when the sound starts
But what about the code?

#include <Wave.cpp>

void Sound(bool _on_start_, bool _on_stop_)
{
    bool outAllVoicesDone = false;
    float outDuration = 0;

    Wave(_on_start_, false, false,
         WaveDataPointer, VoiceParameters, ...,
         &outAllVoicesDone, &outDuration);
}

Andreas: This is the result of the code generator; it's fairly readable.
You can see the Wave node function call and how the on_start input of the graph is passed as a parameter. The sample is a constant resource attached to the graph and can be retrieved with this function, which is implemented in the engine code.
Clang's optimizer creates really tight assembly code from this. It's super fast, so we can run it at a high rate, currently once per synth frame, i.e. every 5.3 ms.
As you can see, every input of the graph becomes a parameter of the graph function, and each node input is also a parameter of the node function.
But the nice thing is that the code is not immediately visible. It's there, and as a programmer you can use it to debug what's going on, but a sound designer would only see the graph representation.
Anton: Now let's inject the sound into the game and play it, for testing purposes.
Video showing how a sound is created from scratch and injected into the running game.
Andreas: Now let's add some more dynamic behaviour to the sound. As you can see in the blue Inputs node, I've added an input called "inDistanceToListener", which is filled in by the engine with the distance in meters between this sound and the listener. I use this distance as the X value of a curve lookup node, and use the Y value as the cutoff frequency of a low pass filter on the Wave node. The rest is the same as in the previous example.
By the way: as you can see, the gain inputs are all linear values, from 0 to 1. This is not very sound designer friendly, but it makes it easier to do certain calculations. In case you prefer dB full-scale, we have a very simple node you can put in front to convert from dB to linear values.
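The dB-to-linear node itself isn't shown in the talk, but the standard dB full-scale conversion it would perform is simple. A minimal sketch in the same style as the other node files (the node name is an assumption):

// DbToLinear.cpp - hypothetical node converting a dB full-scale gain into the
// linear 0..1 gain expected by the Wave node (0 dB -> 1.0, -6 dB -> ~0.5).
void DbToLinear(const float inDb, float* outLinear)
{
    *outLinear = Math::Pow(10.0f, inDb / 20.0f);
}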
#include <EvaluateCurve.cpp>
#include <Wave.cpp>

void Sound(bool _on_start_, bool _on_stop_,
           float inDistanceToListener)
{
    float outYValue = 0;
    EvaluateCurve(CurveDataPointer, inDistanceToListener,
                  &outYValue);

    bool outAllVoicesDone = false;
    float outDuration = 0;
    Wave(_on_start_, false, false,
         WaveDataPointer, VoiceParameters, outYValue,
         &outAllVoicesDone, &outDuration);
}

It's getting a bit more complicated now, but it's still easy enough to see what's going on. The inDistanceToListener input of the graph becomes a new graph function parameter. The EvaluateCurve node is now called, and since the Wave node depends on the output of the EvaluateCurve node, the Wave node has to be called after it. The code generator uses these dependencies to define the order in which it needs to call each node's function.
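The code generator itself isn't shown in the talk; the following is only a minimal sketch of the dependency-ordered emission it describes, using hypothetical data structures (each node stores the call to emit and the indices of the nodes whose outputs it consumes):

// Minimal sketch of dependency-ordered code emission for an acyclic graph.
// Data structures and names are hypothetical; only the ordering idea is taken
// from the talk: a node's call is emitted after all nodes it depends on.
#include <cstdio>
#include <set>
#include <string>
#include <vector>

struct Node
{
    std::string      mCall;        // e.g. "Wave(_on_start_, ..., &outDuration);"
    std::vector<int> mDependsOn;   // indices of nodes whose outputs we consume
};

static void EmitNode(const std::vector<Node>& nodes, int index,
                     std::set<int>& emitted, std::string& out)
{
    if (emitted.count(index))
        return;
    for (int dep : nodes[index].mDependsOn)
        EmitNode(nodes, dep, emitted, out);   // emit dependencies first
    out += "    " + nodes[index].mCall + "\n";
    emitted.insert(index);
}

std::string GenerateGraphFunction(const std::vector<Node>& nodes)
{
    std::string out = "void Sound(...)\n{\n";
    std::set<int> emitted;
    for (int i = 0; i < (int)nodes.size(); ++i)
        EmitNode(nodes, i, emitted, out);
    out += "}\n";
    return out;
}

int main()
{
    // EvaluateCurve (node 0) feeds Wave (node 1), so it is emitted first.
    const std::vector<Node> nodes = {
        { "EvaluateCurve(CurveDataPointer, inDistanceToListener, &outYValue);", {} },
        { "Wave(..., outYValue, &outAllVoicesDone, &outDuration);", { 0 } },
    };
    std::printf("%s", GenerateGraphFunction(nodes).c_str());
    return 0;
}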
Video showing effect of distance-based low pass filtering example
Adding Nodes
• Create C++ file with one function
• Create small description file for node
• List inputs and outputs
• Define the default values
• No need to recompile game, just add assets
I guess by now you can see how this system allowed us to build powerful dynamic sounds, limited only by the set of nodes we have available.
But why is it truly extendable?
That's because it's really easy to create a new node in this system. Basically all you need to do is create an asset describing the inputs and outputs of the node (this can be done in our editor) and a C++ file containing the implementation.
Our code generator will collect the little snippets of C++ code for each node and create a combined function for each graph. No external libraries are used; even for math we call functions in the game engine. The generated graph programs are not linked with anything, so they are very small and lightweight.
This also means that neither the game nor the editor needs to be recompiled to add a simple node. Nodes and graphs are game assets, which get turned into code automatically by the content pipeline.
Math example
Convert semitones to pitch ratio
SemitonesToPitchRatio: inSemitones -> outPitchRatio
Andreas: Let's look at an example node for a mathematical problem. The obvious ones like add or multiply are trivial, but here's a useful one: converting from musical semitones to a pitch ratio value. For instance, inputting a value of 12 semitones results in a pitch ratio of 2, i.e. an octave.

SemitonesToPitchRatio.cpp
void SemitonesToPitchRatio(const float inSemitones,
                           float* outPitchRatio)
{
    *outPitchRatio = Math::Pow(2.0f, inSemitones / 12.0f);
}

This shows how easy it is to create a new node. The only additional information that needs to be created is a metadata file that describes the input and output values and their default values.
Logic example
Compare two values
Compare: inA, inB -> outGreater, outSmaller, outEqual
Here's another simple example, for comparing values. Typically you have different parts that depend on the comparison result, so it's useful to have "smaller than" and "greater than" available at the same time. It also saves us from using additional nodes.

Compare.cpp
void Compare(const float inA, const float inB,
             bool* outSmaller, bool* outGreater,
             bool* outEqual)
{
    *outSmaller = inA < inB;
    *outGreater = inA > inB;
    *outEqual = inA == inB;
}

The implementation is just as simple. All the comparison results are stored in output parameters.
Modulation example
Sine LFO
SineLFO: inFrequency, inAmplitude, inOffset, inPhase -> outValue
This is the sine LFO node; it outputs a sine wave at a given frequency and amplitude. This is a slightly more complicated node.

SineLFO.cpp
void SineLFO(const float inFrequency,
             const float inAmplitude,
             const float inOffset, const float inPhase,
             float* outValue, float* phase)
{
    *outValue = inOffset +
        inAmplitude * Math::Sin((*phase + inPhase) * M_TWO_PI);
    float new_phase = *phase + inFrequency * GraphSound::GetTimeStep();
    if (new_phase > 1.0f)
        new_phase -= 1.0f;
    *phase = new_phase;
}

Let's look at how this is implemented. It uses two functions that are implemented in our engine: one for calculating the sine and one for getting the time step. If the game time is scaled (for example if we're running the game in slow motion), then the sound time step is also adjusted. However, it's also possible to have sounds that are not affected by this.
Normally the graph execution is stateless: every time it runs it does exactly the same thing, because it doesn't store any information, so it can't depend on previous runs. This node, however, is an example of something that requires state information, and it's possible to do that on a per-node basis. Here you can see that it uses a special "phase" parameter, which is stored separately for each instance of the node and is passed in by reference as a pointer. The state system allows us to store any arbitrary set of data, for example a full C++ object, or just a simple value as in this example.
The Wave node also uses this to store information about the voices it has created.
Nodes built from Graphs
Generate a random pitch within a range of semitones
RandomRangedPitchRatio: inTrigger, inMinSemitone, inMaxSemitone -> outPitchRatio
Certain math or logic processing that starts to appear often in sounds can be abstracted into its own node, without having to know any programming. Here's an example of a node built from an actual graph.
As you can see here, it uses a combination of the random range node, to get a random semitone value, which it then converts to a pitch ratio.
It also uses the sample & hold node to make sure the value doesn't change, as normally the random value would be recalculated at every update of the sound, i.e. once every 5.333ms.
Creating a node like this is basically as simple as selecting a group of nodes, then right-clicking and selecting "Collapse nodes". The editor will do the rest.
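The generated code for this collapsed graph isn't shown in the talk; as a rough sketch, it could look like the following, where the RandomRange and SampleAndHold node signatures are hypothetical and the per-node state parameters are omitted for brevity:

// Hypothetical sketch of the code a collapsed RandomRangedPitchRatio graph
// might generate. RandomRange and SampleAndHold are assumed node signatures;
// SemitonesToPitchRatio is the node shown earlier. State parameters omitted.
void RandomRangedPitchRatio(const bool inTrigger,
                            const float inMinSemitone,
                            const float inMaxSemitone,
                            float* outPitchRatio)
{
    // Draw a random semitone offset within the requested range.
    float outRandom = 0;
    RandomRange(inMinSemitone, inMaxSemitone, &outRandom);

    // Latch a new value only when the trigger fires, so the result doesn't
    // change on every 5.333 ms update of the sound.
    float outHeld = 0;
    SampleAndHold(inTrigger, outRandom, &outHeld);

    // Convert the held semitone value into a pitch ratio.
    SemitonesToPitchRatio(outHeld, outPitchRatio);
}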
When you notice repeated work…
…abstract it!
Graph diagram, built up across several slides: an Inputs node (Start, ChanceToPlay), a Delay node (inEvent, inTime = 0.05, outEvent), a Random node (inRange = 100), a Compare node (inA, inB -> outGreater, outSmaller, outEqual) combined through OR and AND logic, a Wave node (inAzimuth, inStart, inWave) inside a "Select Random Wave" group, a second Random node (inRange = 150) and a Ramp node (inTrigger, inStartValue, inTargetValue = 0 -> outValue).
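There are no speaker notes for this graph, so the exact wiring is only partially recoverable from the slide; one plausible reading of the chance-to-play portion is that a random roll between 0 and 100 gates the Start event against ChanceToPlay. A hypothetical sketch of that gate (the Random node signature is assumed, Compare is the node shown earlier):

// Hypothetical chance-to-play gate matching the nodes visible on the slide;
// the actual wiring is not spelled out in the talk. Random is an assumed
// signature, Compare is the node shown earlier.
void ChanceToPlayGate(const bool inStart, const float inChanceToPlay,
                      bool* outStart)
{
    // Random node with inRange = 100: a value between 0 and 100.
    float outRandom = 0;
    Random(100.0f, &outRandom);

    // Only let the event through when the roll falls below ChanceToPlay.
    bool outSmaller = false, outGreater = false, outEqual = false;
    Compare(outRandom, inChanceToPlay, &outSmaller, &outGreater, &outEqual);

    // AND node: gate the incoming Start event.
    *outStart = inStart && outSmaller;
}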
Experimenting Should Be Safe and Fun
• If your idea cannot fail, you're not going to go to uncharted territory
• We tried a lot
• Some of it worked, some didn't
• Little or no code support - all in audio design
Examples
Fire Rate Variation
• Experiment: How to make battle sound more interesting
• Idea: Slight variation in fire rate
What the Mind Hears, …
• Experiment: Birds should react when you fire gun
• Birds exist as sound only
• Use sound-to-sound messaging system
• Gun sounds notify bird sounds of shot
Bird Sound Logic (a gun sound and a bird sound share a GunFireState flag)
Diagram sequence, built up across several slides:
• Not shooting: GunFireState is FALSE, the bird sound plays "Bird Loop".
• The player starts shooting: the gun sound sets GunFireState to TRUE.
• The bird sound gets GunFireState, plays "Birds Flying Off" and waits 30 seconds.
• Once the shooting has stopped and the wait is over, GunFireState returns to FALSE and "Bird Loop" plays again.
Any sound can send a message which can be received by any other sound.
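The node code behind this sound-to-sound messaging isn't shown in the talk; conceptually it behaves like a named flag that one graph sets and another reads on every update. A minimal sketch with hypothetical node and storage names:

// Hypothetical set/get nodes for sound-to-sound messaging. The real node
// names and the engine-side storage are not shown in the talk; this simple
// single-threaded map is only a stand-in for that shared state.
#include <string>
#include <unordered_map>

static std::unordered_map<std::string, bool> sSharedFlags;

// Used by the gun sound graph: set the named flag while firing.
void SetSharedFlag(const bool inSet, const char* inName, const bool inValue)
{
    if (inSet)
        sSharedFlags[inName] = inValue;
}

// Used by the bird sound graph: read the named flag on every update.
void GetSharedFlag(const char* inName, bool* outValue)
{
    const auto it = sSharedFlags.find(inName);
    *outValue = (it != sSharedFlags.end()) ? it->second : false;
}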
Material Dependent Environmental Reactions (MADDER)
• Inspiration: Guns have this explosive force in the world when they're fired
• Things that are near start to rattle
See http://www.youtube.com/watch?v=nMkiDguYtGw
MADDER
• Inputs required:
• Distance to closest surface
• Direction of closest surface
• Material of closest surface
MADDER raycast
• Sweeping raycast around listener
• After one revolution, the closest hit point is used
• Distance, angle and material are passed to the sound
(Diagram: listener with the distance, angle and material of the closest hit)
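The raycast code itself isn't part of the talk; the following is a rough sketch of the sweeping behaviour described on the slide, with the engine raycast and sound-parameter interfaces replaced by hypothetical stand-ins:

// Minimal sketch of the sweeping MADDER raycast around the listener.
// The engine raycast and sound-parameter interfaces are not shown in the
// talk, so the types and functions below are hypothetical stand-ins.
struct RaycastHit { float mDistance; float mAngle; int mMaterial; };

// Hypothetical engine call: cast a horizontal ray from the listener at the
// given angle and report what it hits.
bool CastRayFromListener(float angleDegrees, RaycastHit* outHit);

// Hypothetical: feed the result into the sound graph's inputs
// (inWallDistance, inWallAngle, inWallMaterial).
void PublishClosestHitToSound(const RaycastHit& hit);

struct MadderSweep
{
    float      mAngle = 0.0f;       // current sweep angle in degrees
    bool       mHasHit = false;
    RaycastHit mClosest = {};       // closest hit in the current revolution
};

// Advance the sweep by one step per update; after a full revolution the
// closest hit point is passed to the sound, as described on the slide.
void UpdateMadderSweep(MadderSweep& sweep, float stepDegrees)
{
    RaycastHit hit;
    if (CastRayFromListener(sweep.mAngle, &hit) &&
        (!sweep.mHasHit || hit.mDistance < sweep.mClosest.mDistance))
    {
        sweep.mClosest = hit;
        sweep.mHasHit = true;
    }

    sweep.mAngle += stepDegrees;
    if (sweep.mAngle >= 360.0f)
    {
        if (sweep.mHasHit)
            PublishClosestHitToSound(sweep.mClosest);
        sweep.mAngle -= 360.0f;
        sweep.mHasHit = false;      // start a new revolution
    }
}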
Graph diagram, built up across several slides: an Inputs node (on_start, inIsFirstPerson, inWallDistance, inWallAngle, inWallMaterial) feeds a FirstPerson node (inStart, gated through an AND) and a ThirdPerson node (inStart, gated through a NOT), plus a "MADDER Content Package" node with inputs Start, inWallDistance, inWallAngle, inWallMaterial, PitchModifier, SpeedOfSound, MaterialSamples, IsSilencedGun and IsBigGun (constants shown on the slide: 330, 1.0, False, False).
Graph: inside the MADDER content package, the Inputs node (Start, SpeedOfSound, PitchModifier, FiringRate, inWallDistance, inWallAngle, inWallMaterial, MaterialSamples, IsSilencedGun, IsBigGun) drives a Delay Unit and a Voice Selector, which fan out 4x into Wave nodes (inAzimuth, inPitchRatio, inDryGain, inWetGain, inStart, inWave). A SelectSample node turns inMaterial and inMaterialSamples into the outWave that the Wave nodes play, and a Falloff Unit completes the chain.
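Given that the package exposes SpeedOfSound (330 m/s) alongside the wall distance, one plausible reading is that the reflection's delay scales with distance (roughly 0.1 s for a wall 33 m away, or double that if a round trip is used) and that SelectSample maps the wall material to a wave. The sketch below illustrates that reading only; the formula, the sample names and anything beyond the values shown on the slide are assumptions.
// Hypothetical sketch of what the Delay Unit + SelectSample combination inside
// the MADDER package might compute. The formula and names are assumptions made
// for illustration; the slides only show the node wiring.
#include <cstdio>
#include <map>
#include <string>

// Delay before the reflection sample is triggered, based on wall distance.
// One-way distance is used here; the real graph may use the round trip.
float ComputeMadderDelaySeconds(float wallDistanceMetres, float speedOfSound = 330.0f)
{
    return wallDistanceMetres / speedOfSound;
}

// Pick the reflection sample that matches the wall material (cf. SelectSample).
std::string SelectMaterialSample(int wallMaterial,
                                 const std::map<int, std::string>& materialSamples)
{
    auto it = materialSamples.find(wallMaterial);
    return it != materialSamples.end() ? it->second : "reflection_default";
}

int main()
{
    const std::map<int, std::string> samples = {
        { 0, "reflection_concrete" },
        { 1, "reflection_metal" },
        { 2, "reflection_wood" },
    };

    float delay       = ComputeMadderDelaySeconds(33.0f);        // ~0.1 s at 330 m/s
    std::string wave  = SelectMaterialSample(1, samples);        // metal wall

    std::printf("play %s after %.3f s\n", wave.c_str(), delay);
    return 0;
}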
Video showing MADDER effect for 4 different materials
MADDER Prototype
Video showing initial MADDER prototype with 4 different materials in scene.
MADDER Four-angle raycast
• Divide in four quadrants around listener (boundaries at ±45° and ±135°)
• One sweeping raycast in each quadrant
• Closest hit point in each quadrant is taken
Graph: the Inputs node now provides the wall distance, angle and material per quadrant (inWallDistanceFront, inWallAngleFront, inWallMaterialFront, plus the matching Right, Back and Left inputs), and four instances of the MADDER content package are wired up, one per quadrant.
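A sketch of how hits could be binned into those quadrants follows; only the ±45°/±135° boundaries come from the slide, while the enum, the angle convention and the wrapping logic are assumptions for illustration.
// Sketch: map an angle around the listener (degrees, 0 = front, positive to the
// right) onto the four MADDER quadrants from the slide. The quadrant enum and
// the wrapping convention are assumptions for illustration.
#include <cassert>
#include <cmath>

enum class MadderQuadrant { Front, Right, Back, Left };

MadderQuadrant QuadrantForAngle(float angleDeg)
{
    // Wrap into [-180, 180).
    float a = std::fmod(angleDeg + 180.0f, 360.0f);
    if (a < 0.0f) a += 360.0f;
    a -= 180.0f;

    if (a >= -45.0f  && a < 45.0f)   return MadderQuadrant::Front;
    if (a >= 45.0f   && a < 135.0f)  return MadderQuadrant::Right;
    if (a >= -135.0f && a < -45.0f)  return MadderQuadrant::Left;
    return MadderQuadrant::Back;     // |a| >= 135
}

int main()
{
    assert(QuadrantForAngle(0.0f)    == MadderQuadrant::Front);
    assert(QuadrantForAngle(90.0f)   == MadderQuadrant::Right);
    assert(QuadrantForAngle(-90.0f)  == MadderQuadrant::Left);
    assert(QuadrantForAngle(-170.0f) == MadderQuadrant::Back);
    return 0;
}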
Four-angle MADDER
Video showing final 4-angle MADDER with 4 materials placed in the scene
MADDER video (30 seconds)
MADDER off
Capture from final game with MADDER disabled
MADDER on
Capture from final game with MADDER enabled
Creative Workflow
Diagram: the creative workflow loop, from IDEA via the Sound Designer and Coder into the GAME, until DONE.
Gun Tails
• Experiment with existing data
• Wall raycast
• Inside/Outside
• Rounds Fired
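As a sketch of how a tail variation might be chosen from these inputs, consider the example below; the thresholds, categories and sample names are invented for illustration and are not the game's actual content.
// Hypothetical gun-tail selection based on the data listed on the slide
// (wall raycast distance, inside/outside flag, rounds fired). The thresholds,
// names and categories are invented for illustration only.
#include <cstdio>
#include <string>

static std::string SelectGunTail(bool isInside, int roundsFired, float wallDistance)
{
    const bool longBurst = roundsFired >= 10;          // assumed burst threshold
    if (isInside)
        return longBurst ? "tail_interior_long" : "tail_interior_short";

    // Outside: a nearby wall suggests an urban tail, otherwise an open-field tail.
    const bool nearWall = wallDistance < 15.0f;        // assumed distance threshold
    if (nearWall)
        return longBurst ? "tail_urban_long" : "tail_urban_short";
    return longBurst ? "tail_open_long" : "tail_open_short";
}

int main()
{
    std::printf("%s\n", SelectGunTail(false, 3, 50.0f).c_str());   // tail_open_short
    std::printf("%s\n", SelectGunTail(true, 25, 4.0f).c_str());    // tail_interior_long
    return 0;
}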
How to Debug?
What kind of debugging support did we have? Obviously a sound designer is creating much more complex logic now, and that means bugs will creep in. We need to be able to find problems such as incorrect behaviour quickly.
Manually Placed Debug Probes
• Just connect any output to visualize it
• Shows debug value on game screen
Graph: a SineLFO node (inFrequency 0.5, inAmplitude 0.5, inOffset 1, inPhase 0.0) with its outValue wired into a DebugProbe node (inDebugName "sine", inDebugValue).
Anton: These are placed by designers into their sounds, to query individual outputs of nodes. The values are shown as a curve on screen, useful for debugging a single value and how it changes over time. Not very useful for quickly changing things, such as boolean events. Since the sound graphs execute multiple times per game frame, but the debug visualisation is only drawn at the game frame rate, a quick change can be missed.
In the final version of the game, these nodes are completely ignored.
Screenshot!
Manually Placed Debug Probes
Can miss quick changes because game frame rate is lower than sound update rate!
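For reference, a sketch of what a SineLFO node like the one above might evaluate each update; the exact formula the engine uses is an assumption.
// Sketch of a SineLFO evaluation; the exact formula used by the engine is assumed.
#include <cmath>

float EvaluateSineLFO(float timeSeconds, float inFrequency, float inAmplitude,
                      float inOffset, float inPhase)
{
    const float twoPi = 6.28318530718f;
    return inOffset + inAmplitude * std::sin(twoPi * inFrequency * timeSeconds + inPhase);
}
// With the slide's values (frequency 0.5, amplitude 0.5, offset 1, phase 0)
// outValue oscillates between 0.5 and 1.5 once every two seconds.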
Automatically Embedded Debug Probes
EvaluateCurve(CurveDataPointer, inDistanceToListener, &outYValue);
DEBUG_PROBE(0, (void*)&outYValue);
• All values are recorded, nothing is lost
• Simple in-game debugger to show data
• Scrub through recording
Andreas: Each graph is generated in two versions, one without debugging code (for the final game) and one with automatically generated debug probe code for every node. We emit these DEBUG_PROBE macros into the graph functions, so we can easily disable the debug support at compile time.
When executing a graph with debugging enabled, the code in the macro collects the values of the inputs and outputs of every node (and of the graph itself) and passes them to the engine, which stores the information.
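The talk does not show the macro body, so here is only a minimal sketch of what a DEBUG_PROBE-style macro could look like: record the value when debugging is compiled in, expand to nothing otherwise. The recorder API and sample format are hypothetical; only the macro name and the described behaviour come from the slides.
// Minimal sketch of a DEBUG_PROBE-style macro: record node outputs when
// debugging is compiled in, compile to nothing otherwise. The recorder API is
// hypothetical; only the macro name comes from the slides.
#include <cstddef>
#include <cstring>
#include <vector>

#if defined(SOUND_GRAPH_DEBUG)

struct ProbeSample
{
    int   probeIndex;
    float value;          // for this sketch we only record floats
};

class ProbeRecorder
{
public:
    void Record(int probeIndex, const void* data, size_t size)
    {
        float value = 0.0f;
        std::memcpy(&value, data, size < sizeof(float) ? size : sizeof(float));
        mSamples.push_back({ probeIndex, value });   // the in-game debugger scrubs this later
    }
    std::vector<ProbeSample> mSamples;
};

inline ProbeRecorder& GetProbeRecorder()
{
    static ProbeRecorder sRecorder;
    return sRecorder;
}

#define DEBUG_PROBE(index, valuePtr) \
    GetProbeRecorder().Record((index), (valuePtr), sizeof(float))

#else
#define DEBUG_PROBE(index, valuePtr) ((void)0)       // stripped from the final game
#endif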
Post Mortem
So what went wrong and what went right?
Timeline
February 2011 Pre-production begins, hardly any PS4 hardware details yet
November 2011 PC prototype of sound engine
August 2012 Moved over to PS4 hardware
February 2013 PS4 announcement demo (still using software codecs)
June 2013 E3 demo (now using ACP hardware decoding)
November 2013 Shipped as launch title
New system was up and running within 6 months!
Andreas: We started working on PS4 technology very early; many parts of the PS4 hardware were still undefined when we started. At one point we even expected the hardware to do more than decoding, i.e. we assumed there might be a way of executing our own DSP code on it, and that we might have to write custom DSP code for that chip. At that time no middleware vendors had been disclosed on the PS4 yet, so we just couldn't talk to them about any of this.
We therefore had to play it safe, so we designed the new sound system around an abstract synth. We created an initial prototype implementation for use in our PC build, and later on we switched to PS4 hardware.
Anton: As a designer you become used to being an alpha tester for the hardware, and you become a good audio tester. The other designers hardly noticed the transition from PC to PS4 hardware, as the synth was the same on both.
“You’re making sound designers do
programming work, that’ll be a disaster!
The game will crash all the time!”
This didn't really happen. We had a few buggy nodes, but in general, things worked out fine. What we did was make sure that new nodes are peer-reviewed like any other code, especially if they are written by a non-programmer.
Also since nodes are so simple and limited in their scope, it’s hard to make something that truly breaks the game.
“This is an unproven idea, why don’t we
just use some middleware solution?”
There were doubts that we could create new tech and a toolset with a user-friendly workflow in time. The safer thing would've been to license a middleware solution, but at the time we had to make this decision, no 3rd parties had been disclosed on the PS4 yet, and we couldn't be sure whether they would properly support the PS4 audio hardware.
Also we were keen on having a solution that’s integrated with our normal asset pipeline, which allowed us to share the tech built for audio with other disciplines.
Sound designers using the same workflow as artists and game designers is a benefit that middleware can't give us. New nodes created by e.g. game programmers are immediately useful for sound designers, and vice versa.
All of these reasons led us to push forward with our own tech, to make something that really fits the game and our way of working.
!
Anton: Also we wouldn’t have been able to do the kind of deep tool integration that we have now.
Used by other Disciplines
• Graph system became popular
• Used for procedural rigging
• Used for gameplay behavior
• More nodes were added
• Bugs were fixed quicker
The fact that the graph system was available in the editor that the whole studio is using meant that it was directly available to all other disciplines.
Compression Codecs
Chart: number of samples per codec (PCM, ADPCM, MP3) for KZ3 (PS3) and KZSF (PS4), on a 0 to 10000 scale. KZ3 used ~20 MB for in-memory samples, mostly ADPCM; KZSF uses ~300 MB, split between PCM ("need low latency") and MP3 ("large, don't need precise timing"). We've completely stopped using ADPCM (VAG)!
A little bit more information to show where we came from:
On PS3 most sounds were ADPCM, a few were PCM, and only streaming sounds used MP3 at various bit rates. We had about 20 MB budgeted for in-memory samples.
This generation we ended up using a combination of PCM and MP3 for in-memory sounds, with a total of about 300 MB of memory used at run-time. That's roughly 16x as much as on PS3, but still the same percentage of the whole memory (roughly 4%). We didn't use any ADPCM samples anymore (good riddance). The split between PCM and MP3 is roughly 50:50. We used PCM for anything that needs to loop or seek with sample accuracy, and MP3 for larger sounds that don't require precise timing, such as the tails of guns. Obviously we relied on the audio coprocessor of the PS4 to decode our MP3 data.
We've used ATRAC9 for surround streams, but those are not in-memory.
Sample Rates on PS3
Chart: number of samples per sample rate on PS3, with rates ranging from 4000 Hz all the way up to 48000 Hz (y-axis 0 to 700).
Using sample rate to optimize memory usage! 😩
On PS4 we use 48 kHz exclusively!
We had a wide variety of sample rates on the PS3, because it was the main parameter we used to optimize memory usage of sounds. This led to some very low quality samples in the game.
On the PS4 we only used samples with a 48 kHz rate. The main way to optimize was to switch them to MP3 encoding and adjust the bit rate.
The Future…
• Extend and improve
• Create compute shaders from graphs
• More elaborate debugging support
Andreas: Where do we go from here?
We're planning to extend and improve this system in the future, and to use it for more purposes, e.g. changing the code generator to allow the creation of compute shaders. We'll also add more elaborate debugging support, and lots of new nodes of course. We'll have more detailed profiling support, to measure the execution time of the graph for each node, allowing us to identify bottleneck nodes that need to be optimised.
We're also thinking about different ways to use this system, such as creating samples on the fly for variations. In this scenario we would add nodes that allow waveforms to be output into buffers, which can then be played at a later point in time.
The Next-Gen Dynamic Sound System of Killzone Shadow Fall

  • 18. What is Next-Gen Audio? Anton
  • 19. What is Next-Gen Audio? • Emancipating sound, the team and the asset Anton
  • 20. What is Next-Gen Audio? • Emancipating sound, the team and the asset • Instantly hearing work in game, never reload Anton
  • 21. What is Next-Gen Audio? • Emancipating sound, the team and the asset • Instantly hearing work in game, never reload • Extendable system, reusable components Anton
  • 22. What is Next-Gen Audio? • Emancipating sound, the team and the asset • Instantly hearing work in game, never reload • Extendable system, reusable components • High run-time performance Anton
  • 23. What is Next-Gen Audio? • Emancipating sound, the team and the asset • Instantly hearing work in game, never reload • Extendable system, reusable components • High run-time performance • Don’t worry about the synth Anton
  • 25. Why Emancipate? • Audio is separate from rest Anton
  • 26. Why Emancipate? • Audio is separate from rest • Physical: sound proof rooms Anton
  • 27. Why Emancipate? • Audio is separate from rest • Physical: sound proof rooms • Mental: closed off sound engine and tools Anton
  • 28. Why Emancipate? • Audio is separate from rest • Physical: sound proof rooms • Mental: closed off sound engine and tools • Should actively pursue integration Anton
  • 30. Why Integrate? • Gain a community of end users that: Anton
  • 31. Why Integrate? • Gain a community of end users that: • Can help with complex setup Anton
  • 32. Why Integrate? • Gain a community of end users that: • Can help with complex setup • Teach you new tricks Anton
  • 33. Why Integrate? • Gain a community of end users that: • Can help with complex setup • Teach you new tricks • Find bugs Anton
  • 34. Why Integrate? • Gain a community of end users that: • Can help with complex setup • Teach you new tricks • Find bugs • Benefit mutually from improvements Anton
  • 36. Motivation for Change Previous generation very risk averse, failing was not possible
  • 37. Ideas we had Ideas we implemented Ideas we shipped with KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF Motivation for Change Previous generation very risk averse, failing was not possible
  • 38. Ideas we had Ideas we implemented Ideas we shipped with KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF Motivation for Change Previous generation very risk averse, failing was not possible
  • 39. Ideas we had Ideas we implemented Ideas we shipped with KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF Motivation for Change Previous generation very risk averse, failing was not possible
  • 40. Ideas we had Ideas we implemented Ideas we shipped with KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF Motivation for Change Previous generation very risk averse, failing was not possible We could experiment much more! {
  • 41. Ideas we had Ideas we implemented Ideas we shipped with KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF Motivation for Change Previous generation very risk averse, failing was not possible More of the ideas shipped in the game! { We could experiment much more! {
  • 42. Ideas we had Ideas we implemented Ideas we shipped with KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF KZ2/KZ3 KZ SF Motivation for Change Previous generation very risk averse, failing was not possible More of the ideas shipped in the game! { We could experiment much more! { } No
 bad
 ideas!
  • 46. Old Workflow Nuendo export 3rd Party Sound Tool WAV
 Assets import
  • 47. Old Workflow Nuendo export 3rd Party Sound Tool WAV
 Assets import reload Play Game
  • 48. Old Workflow Nuendo export 3rd Party Sound Tool WAV
 Assets import reload Play Game Wasted Time Wasted Time Wasted Time ~10s 5s…30s ~2mins
  • 50. New Workflow Nuendo export Sound Tool WAV
 Assets hot-load Running Game hot-load
  • 51. New Workflow Nuendo export Sound Tool WAV
 Assets hot-load Running Game hot-load Wasted Time ~10s
  • 52. Video of iterating on a single sound by re-exporting its wav files from Nuendo and syncing the changes with the running game.
  • 53. Video of iterating on a single sound by re-exporting its wav files from Nuendo and syncing the changes with the running game.
  • 55. No More Auditioning Tool • Game is the inspiration
  • 56. No More Auditioning Tool • Game is the inspiration • Get sound into the game quickly
  • 57. No More Auditioning Tool • Game is the inspiration • Get sound into the game quickly • No interruptions
  • 58. No More Auditioning Tool • Game is the inspiration • Get sound into the game quickly • No interruptions • Why audition anywhere else?
  • 60. How to Emancipate? • Make sound engine and tools in-house Andreas
  • 61. How to Emancipate? • Make sound engine and tools in-house • Allows deep integration Andreas
  • 62. How to Emancipate? • Make sound engine and tools in-house • Allows deep integration • Immediate changes, based on designer feedback Andreas
  • 63. How to Emancipate? • Make sound engine and tools in-house • Allows deep integration • Immediate changes, based on designer feedback • No arbitrary technical limitations Andreas
  • 68. ! Anton: Sound designers are already used to data flow environments such as Max/MSP and Pure Data. ! Andreas: But while that’s one good way of thinking about our system, it’s also a bit different from them. Our system doesn’t work at the sample level or audio rate, i.e. we’re not using our graphs for synthesis, but just for playback of already existing waveform data.
  • 69. ! Max/MSP Anton: Sound designers are already used to data flow environments such as Max/MSP and Pure Data. ! Andreas: But while that’s one good way of thinking about our system, it’s also a bit different from them. Our system doesn’t work at the sample level or audio rate, i.e. we’re not using our graphs for synthesis, but just for playback of already existing waveform data.
  • 70. ! Max/MSP Pure Data Anton: Sound designers are already used to data flow environments such as Max/MSP and Pure Data. ! Andreas: But while that’s one good way of thinking about our system, it’s also a bit different from them. Our system doesn’t work at the sample level or audio rate, i.e. we’re not using our graphs for synthesis, but just for playback of already existing waveform data.
  • 71. ⚠️ Technical Details Ahead! ⚠️
  • 72. ⚠️ Technical Details Ahead! ⚠️ • We will show code
  • 73. ⚠️ Technical Details Ahead! ⚠️ • We will show code • But you don’t have to be afraid
  • 74. ⚠️ Technical Details Ahead! ⚠️ • We will show code • But you don’t have to be afraid • Sound designers will never have to look at it
  • 76. Graph Sound System • Generate native code from data flow
  • 77. Graph Sound System • Generate native code from data flow • Execute resulting program every 5.333ms (187Hz)
  • 78. Graph Sound System • Generate native code from data flow • Execute resulting program every 5.333ms (187Hz) • Used to play samples and modulate parameters
  • 79. Graph Sound System • Generate native code from data flow • Execute resulting program every 5.333ms (187Hz) • Used to play samples and modulate parameters • NOT running actual synthesis
  • 80. Graph Sound System • Generate native code from data flow • Execute resulting program every 5.333ms (187Hz) • Used to play samples and modulate parameters • NOT running actual synthesis • Dynamic behavior
  • 81. Sound Input X Input Y Output Z The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  • 82. Sound Node A Node C Node B Input X Input Y Output Z The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  • 83. Sound Node A Node C Node B Input X Input Y Output Z The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  • 84. Sound Node A Node C Node B A.cpp B.cpp C.cpp Input X Input Y Output Z The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  • 85. A.cpp B.cpp C.cpp The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  • 86. A.cpp B.cpp C.cpp Sound.cpp The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  • 87. Sound.cpp A(inA, outA) { … } B(inB, outB) { … } C(inX, inY, outZ) { … } The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  • 88. Sound(X, Y, Z) { A(X,outA); B(Y,outB); C(outA,outB,Z); } Sound.cpp A(inA, outA) { … } B(inB, outB) { … } C(inX, inY, outZ) { … } The graphs we build have certain inputs and outputs and are made up of individual nodes and connections. Note that each graph can also be used as a node, which is very important. The functionality of each node is defined in a C++ file, which typically just contains one function. When we need to translate this graph into C++ code, we take those node definition files and paste them into a C++ file for the whole graph. We then generate the code according to how the nodes are connected, by calling the individual node functions in the right order.
  • 89. Sound.cpp So as part of our content pipeline, the C++ code that was generated for the graph is then compiled into an object file, and together with other object files it gets linked into a dynamic library, which is loaded as a PRX file on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different, though: we first experimented with LLVM, using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
  • 90. Sound.cpp compile So as part of our content pipeline, the C++ code that was generated for the graph is then compiled into an object file, and together with other object files it gets linked into a dynamic library, which is loaded as a PRX file on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different, though: we first experimented with LLVM, using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
  • 91. Sound.cpp compile Sound.o So as part of our content pipeline, the C++ code that was generated for the graph is then compiled into an object file, and together with other object files it gets linked into a dynamic library, which is loaded as a PRX file on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different, though: we first experimented with LLVM, using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
  • 92. Sound.cpp compile Sound.o compile Sound2.o compile Sound3.o compile Sound4.o Sound2.cpp Sound3.cpp Sound4.cpp So as part of our content pipeline, the C++ code that was generated for the graph is then compiled into an object file, and together with other object files it gets linked into a dynamic library, which is loaded as a PRX file on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different, though: we first experimented with LLVM, using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
  • 93. Sound.cpp compile Sound.o compile Sound2.o compile Sound3.o compile Sound4.o link Sound2.cpp Sound3.cpp Sound4.cpp So as part of our content pipeline, the C++ code that was generated for the graph is then compiled into an object file, and together with other object files it gets linked into a dynamic library, which is loaded as a PRX file on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different, though: we first experimented with LLVM, using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
  • 94. Sound.cpp compile Sound.o compile Sound2.o compile Sound3.o compile Sound4.o Dynamic Library (PRX) link Sound2.cpp Sound3.cpp Sound4.cpp So as part of our content pipeline, the C++ code that was generated for the graph is then compiled into an object file, and together with other object files it gets linked into a dynamic library, which is loaded as a PRX file on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different, though: we first experimented with LLVM, using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
  • 95. Sound.cpp compile Sound.o compile Sound2.o compile Sound3.o compile Sound4.o Dynamic Library (PRX) link hot-load Sound2.cpp Sound3.cpp Sound4.cpp So as part of our content pipeline, the C++ code that was generated for the graph is then compiled into an object file, and together with other object files it gets linked into a dynamic library, which is loaded as a PRX file on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different, though: we first experimented with LLVM, using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
  • 96. Sound.cpp compile Sound.o compile Sound2.o compile Sound3.o compile Sound4.o Dynamic Library (PRX) link Execute Sound() in Game hot-load Sound2.cpp Sound3.cpp Sound4.cpp So as part of our content pipeline, the C++ code that was generated for the graph is then compiled into an object file, and together with other object files it gets linked into a dynamic library, which is loaded as a PRX file on the PS4 at runtime. This allows us to execute the code in the game. Our initial attempt was slightly different, though: we first experimented with LLVM, using byte code and the LLVM JIT compiler at runtime, which was nice for the initial testing and development. But soon after that we switched to a more traditional offline compilation step, using clang and dynamic libraries.
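To make the hot-load step concrete, here is a minimal sketch of the runtime side, assuming a generic dlopen-style loader. On the PS4 the equivalent step goes through the PRX module loader; the names LoadGraphSound and Sound are illustrative, not the actual engine API.

// Hypothetical sketch: load a freshly linked library and call a generated graph function.
// dlopen/dlsym are used here only as a stand-in for the console's module loader.
#include <dlfcn.h>

typedef void (*GraphSoundFn)(bool on_start, bool on_stop);

GraphSoundFn LoadGraphSound(const char* library_path, const char* sound_name)
{
    void* handle = dlopen(library_path, RTLD_NOW);  // hot-load the library
    if (handle == nullptr)
        return nullptr;
    // Each generated graph is exported as a plain function, e.g. "Sound".
    return reinterpret_cast<GraphSoundFn>(dlsym(handle, sound_name));
}

// Usage sketch: the sound system would call the returned function once per synth
// frame, i.e. every 5.333 ms:
//   GraphSoundFn sound = LoadGraphSound("Sounds.prx", "Sound");
//   if (sound) sound(/*on_start*/ true, /*on_stop*/ false);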
  • 97. The most simple graph sound example we can come up with Lets get to know the system by building a very basic sound
  • 98. Wave inStart inWave (sample data) Inputs on_start on_stop Other Voice Parameters: Dry Gain, Wet Gain, LFE Gain, etc. inStop } Game Parameters{
  • 99. • Wave node is used to play sample data Wave inStart inWave (sample data) Inputs on_start on_stop Other Voice Parameters: Dry Gain, Wet Gain, LFE Gain, etc. inStop } Game Parameters{
  • 100. • Wave node is used to play sample data • The on_start input is activated when the sound starts Wave inStart inWave (sample data) Inputs on_start on_stop Other Voice Parameters: Dry Gain, Wet Gain, LFE Gain, etc. inStop } Game Parameters{
  • 101. • Wave node is used to play sample data • The on_start input is activated when the sound starts • When inStart is active, a voice is created and played Wave inStart inWave (sample data) Inputs on_start on_stop Other Voice Parameters: Dry Gain, Wet Gain, LFE Gain, etc. inStop } Game Parameters{
  • 102. • Wave node is used to play sample data • The on_start input is activated when the sound starts • When inStart is active, a voice is created and played • So this graph plays one sample when the sound starts Wave inStart inWave (sample data) Inputs on_start on_stop Other Voice Parameters: Dry Gain, Wet Gain, LFE Gain, etc. inStop } Game Parameters{
  • 103.
  • 104.
  • 105.
  • 106.
  • 107. But what about the code? Andreas: This is the result of the code generator, it's fairly readable. You can see the Wave node function call and how the on_start input of the graph is passed as a parameter. The sample is a constant resource attached to the graph and can be retrieved with this function, which is implemented in the engine code. Clang’s optimizer creates really tight assembly code from this. It's super fast, so we can run it at a high rate, currently once per synth frame, i.e. every 5.3 ms. As you can see, every input of the graph, becomes a parameter of the graph function, and each node input is also a parameter of the node function.
  • 108. But what about the code? ! ! void Sound(bool _on_start_, bool _on_stop_) { bool outAllVoicesDone = false; float outDuration = 0; ! Wave(_on_start_, false, false, WaveDataPointer, VoiceParameters, ..., &outAllVoicesDone, &outDuration); } Andreas: This is the result of the code generator, it's fairly readable. You can see the Wave node function call and how the on_start input of the graph is passed as a parameter. The sample is a constant resource attached to the graph and can be retrieved with this function, which is implemented in the engine code. Clang’s optimizer creates really tight assembly code from this. It's super fast, so we can run it at a high rate, currently once per synth frame, i.e. every 5.3 ms. As you can see, every input of the graph, becomes a parameter of the graph function, and each node input is also a parameter of the node function.
  • 109. But what about the code? ! ! void Sound(bool _on_start_, bool _on_stop_) { bool outAllVoicesDone = false; float outDuration = 0; ! Wave(_on_start_, false, false, WaveDataPointer, VoiceParameters, ..., &outAllVoicesDone, &outDuration); } Andreas: This is the result of the code generator, it's fairly readable. You can see the Wave node function call and how the on_start input of the graph is passed as a parameter. The sample is a constant resource attached to the graph and can be retrieved with this function, which is implemented in the engine code. Clang’s optimizer creates really tight assembly code from this. It's super fast, so we can run it at a high rate, currently once per synth frame, i.e. every 5.3 ms. As you can see, every input of the graph, becomes a parameter of the graph function, and each node input is also a parameter of the node function.
  • 110. But what about the code? ! ! void Sound(bool _on_start_, bool _on_stop_) { bool outAllVoicesDone = false; float outDuration = 0; ! Wave(_on_start_, false, false, WaveDataPointer, VoiceParameters, ..., &outAllVoicesDone, &outDuration); } Andreas: This is the result of the code generator, it's fairly readable. You can see the Wave node function call and how the on_start input of the graph is passed as a parameter. The sample is a constant resource attached to the graph and can be retrieved with this function, which is implemented in the engine code. Clang’s optimizer creates really tight assembly code from this. It's super fast, so we can run it at a high rate, currently once per synth frame, i.e. every 5.3 ms. As you can see, every input of the graph, becomes a parameter of the graph function, and each node input is also a parameter of the node function.
  • 111. But what about the code? ! ! void Sound(bool _on_start_, bool _on_stop_) { bool outAllVoicesDone = false; float outDuration = 0; ! Wave(_on_start_, false, false, WaveDataPointer, VoiceParameters, ..., &outAllVoicesDone, &outDuration); } #include <Wave.cpp> Andreas: This is the result of the code generator, it's fairly readable. You can see the Wave node function call and how the on_start input of the graph is passed as a parameter. The sample is a constant resource attached to the graph and can be retrieved with this function, which is implemented in the engine code. Clang’s optimizer creates really tight assembly code from this. It's super fast, so we can run it at a high rate, currently once per synth frame, i.e. every 5.3 ms. As you can see, every input of the graph, becomes a parameter of the graph function, and each node input is also a parameter of the node function.
  • 112. But the nice thing is, that the code is not immediately visible. It’s there, and as a programmer you can use it to debug what’s going on, but a sound designer would only see the graph representation.
  • 113. Anton: Now let's inject the sound into the game and play it, for testing purposes. Video showing how a sound is created from scratch and injected into the running game.
  • 114. Anton: Now let's inject the sound into the game and play it, for testing purposes. Video showing how a sound is created from scratch and injected into the running game.
  • 115. Andreas: Now let's add some more dynamic behaviour to the sound. As you can see in the blue inputs node, I’ve added an input called “inDistanceToListener”, which is filled in by the engine with the distance in meters between this sound and the listener. I use this distance as the X value of a curve lookup node, and use the Y value as the cutoff frequency of a low pass filter of the wave node. The rest is the same as in the previous example. ! Btw: as you can see, the gain inputs are all linear values, from 0 to 1. This is not very sound designer friendly, but it makes it easier to do certain calculations. In case you prefer dB full scale, we have a very simple node you can put in front to convert from dB to linear values.
  • 116. Andreas: Now let's add some more dynamic behaviour to the sound. As you can see in the blue inputs node, I’ve added an input called “inDistanceToListener”, which is filled in by the engine with the distance in meters between this sound and the listener. I use this distance as the X value of a curve lookup node, and use the Y value as the cutoff frequency of a low pass filter of the wave node. The rest is the same as in the previous example. ! Btw: as you can see, the gain inputs are all linear values, from 0 to 1. This is not very sound designer friendly, but it makes it easier to do certain calculations. In case you prefer dB full scale, we have a very simple node you can put in front to convert from dB to linear values.
  • 117. Andreas: Now let's add some more dynamic behaviour to the sound. As you can see in the blue inputs node, I’ve added an input called “inDistanceToListener”, which is filled in by the engine with the distance in meters between this sound and the listener. I use this distance as the X value of a curve lookup node, and use the Y value as the cutoff frequency of a low pass filter of the wave node. The rest is the same as in the previous example. ! Btw: as you can see, the gain inputs are all linear values, from 0 to 1. This is not very sound designer friendly, but it makes it easier to do certain calculations. In case you prefer dB full scale, we have a very simple node you can put in front to convert from dB to linear values.
  • 118. Andreas: Now let's add some more dynamic behaviour to the sound. As you can see in the blue inputs node, I’ve added an input called “inDistanceToListener”, which is filled in by the engine with the distance in meters between this sound and the listener. I use this distance as the X value of a curve lookup node, and use the Y value as the cutoff frequency of a low pass filter of the wave node. The rest is the same as in the previous example. ! Btw: as you can see, the gain inputs are all linear values, from 0 to 1. This is not very sound designer friendly, but it makes it easier to do certain calculations. In case you prefer dB full scale, we have a very simple node you can put in front to convert from dB to linear values.
  • 119. Andreas: Now let's add some more dynamic behaviour to the sound. As you can see in the blue inputs node, I’ve added an input called “inDistanceToListener”, which is filled in by the engine with the distance in meters between this sound and the listener. I use this distance as the X value of a curve lookup node, and use the Y value as the cutoff frequency of a low pass filter of the wave node. The rest is the same as in the previous example. ! Btw: as you can see, the gain inputs are all linear values, from 0 to 1. This is not very sound designer friendly, but it makes it easier to do certain calculations. In case you prefer dB full scale, we have a very simple node you can put in front to convert from dB to linear values.
  • 120. void Sound(bool _on_start_, bool _on_stop_, float inDistanceToListener) { float outYValue = 0; EvaluateCurve(CurveDataPointer, inDistanceToListener, &outYValue); ! bool outAllVoicesDone = false; float outDuration = 0; Wave(_on_start_, false, false, WaveDataPointer, VoiceParameters, outYValue, &outAllVoicesDone, &outDuration); } It’s getting a bit more complicated now, but it’s still easy enough to see what’s going on. The inDistanceToListener input of the graph becomes a new graph function parameter. The Evaluate node is now called, and since the Wave node depends on the output of the Evaluate node, the Wave node has to be called after the Evaluate node. The code generator uses the dependencies to define the order in which it needs to call each node’s function.
  • 121. void Sound(bool _on_start_, bool _on_stop_, float inDistanceToListener) { float outYValue = 0; EvaluateCurve(CurveDataPointer, inDistanceToListener, &outYValue); ! bool outAllVoicesDone = false; float outDuration = 0; Wave(_on_start_, false, false, WaveDataPointer, VoiceParameters, outYValue, &outAllVoicesDone, &outDuration); } It’s getting a bit more complicated now, but it’s still easy enough to see what’s going on. The inDistanceToListener input of the graph becomes a new graph function parameter. The Evaluate node is now called, and since the Wave node depends on the output of the Evaluate node, the Wave node has to be called after the Evaluate node. The code generator uses the dependencies to define the order in which it needs to call each node’s function.
  • 122. void Sound(bool _on_start_, bool _on_stop_, float inDistanceToListener) { float outYValue = 0; EvaluateCurve(CurveDataPointer, inDistanceToListener, &outYValue); ! bool outAllVoicesDone = false; float outDuration = 0; Wave(_on_start_, false, false, WaveDataPointer, VoiceParameters, outYValue, &outAllVoicesDone, &outDuration); } It’s getting a bit more complicated now, but it’s still easy enough to see what’s going on. The inDistanceToListener input of the graph becomes a new graph function parameter. The Evaluate node is now called, and since the Wave node depends on the output of the Evaluate node, the Wave node has to be called after the Evaluate node. The code generator uses the dependencies to define the order in which it needs to call each node’s function.
  • 123. void Sound(bool _on_start_, bool _on_stop_, float inDistanceToListener) { float outYValue = 0; EvaluateCurve(CurveDataPointer, inDistanceToListener, &outYValue); ! bool outAllVoicesDone = false; float outDuration = 0; Wave(_on_start_, false, false, WaveDataPointer, VoiceParameters, outYValue, &outAllVoicesDone, &outDuration); } It’s getting a bit more complicated now, but it’s still easy enough to see what’s going on. The inDistanceToListener input of the graph becomes a new graph function parameter. The Evaluate node is now called, and since the Wave node depends on the output of the Evaluate node, the Wave node has to be called after the Evaluate node. The code generator uses the dependencies to define the order in which it needs to call each node’s function.
  • 124. void Sound(bool _on_start_, bool _on_stop_, float inDistanceToListener) { float outYValue = 0; EvaluateCurve(CurveDataPointer, inDistanceToListener, &outYValue); ! bool outAllVoicesDone = false; float outDuration = 0; Wave(_on_start_, false, false, WaveDataPointer, VoiceParameters, outYValue, &outAllVoicesDone, &outDuration); } It’s getting a bit more complicated now, but it’s still easy enough to see what’s going on. The inDistanceToListener input of the graph becomes a new graph function parameter. The Evaluate node is now called, and since the Wave node depends on the output of the Evaluate node, the Wave node has to be called after the Evaluate node. The code generator uses the dependencies to define the order in which it needs to call each node’s function.
  • 125. void Sound(bool _on_start_, bool _on_stop_, float inDistanceToListener) { float outYValue = 0; EvaluateCurve(CurveDataPointer, inDistanceToListener, &outYValue); ! bool outAllVoicesDone = false; float outDuration = 0; Wave(_on_start_, false, false, WaveDataPointer, VoiceParameters, outYValue, &outAllVoicesDone, &outDuration); } #include <EvaluateCurve.cpp> #include <Wave.cpp> It’s getting a bit more complicated now, but it’s still easy enough to see what’s going on. The inDistanceToListener input of the graph becomes a new graph function parameter. The Evaluate node is now called, and since the Wave node depends on the output of the Evaluate node, the Wave node has to be called after the Evaluate node. The code generator uses the dependencies to define the order in which it needs to call each node’s function.
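To make the ordering rule concrete, here is a small sketch of how a generator could derive the call order from the node dependencies. The Node structure and EmitOrder function are assumptions for illustration, not the actual tool code; only the rule (a node's function call is emitted after the calls it depends on, e.g. EvaluateCurve before Wave) comes from the slides.

#include <functional>
#include <vector>

// Hypothetical graph description used by a code generator (illustrative only).
struct Node
{
    std::vector<int> inputs_from; // indices of nodes whose outputs feed this node
};

// Return node indices so that every node appears after the nodes it depends on.
// This is the order in which the generator emits the node function calls.
std::vector<int> EmitOrder(const std::vector<Node>& nodes)
{
    std::vector<int> order;
    std::vector<bool> emitted(nodes.size(), false);

    std::function<void(int)> visit = [&](int index)
    {
        if (emitted[index])
            return;
        emitted[index] = true;
        for (int dependency : nodes[index].inputs_from)
            visit(dependency);      // dependencies first
        order.push_back(index);     // then the node itself
    };

    for (int i = 0; i < static_cast<int>(nodes.size()); ++i)
        visit(i);
    return order;
}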
  • 126.
  • 127. Video showing effect of distance-based low pass filtering example
  • 128. Video showing effect of distance-based low pass filtering example
  • 129. Adding Nodes I guess by now you can see how this system allowed us to build powerful dynamic sounds, limited only by the set of nodes we have available. But why is it truly extendable? That’s because it’s really easy to create a new node in this system. Basically all you need to do is create an asset describing the inputs and outputs of the node (this can be done in our editor) and a C++ file containing the implementation. Our code generator will collect the little snippets of C++ code for each node and create a combined function for each graph. No external libraries are used; even for math we call functions in the game engine. The generated graph programs are not linked against anything, so they are very small and lightweight. This also means that neither the game nor the editor needs to be recompiled to add a simple node. Nodes and graphs are game assets, which get turned into code automatically by the content pipeline.
  • 130. Adding Nodes • Create C++ file with one function I guess by now you can see how this system allowed us to build powerful dynamic sounds, limited only by the set of nodes we have available. But why is it truly extendable? That’s because it’s really easy to create a new node in this system. Basically all you need to do is create an asset describing the inputs and outputs of the node (this can be done in our editor) and a C++ file containing the implementation. Our code generator will collect the little snippets of C++ code for each node and create a combined function for each graph. No external libraries are used; even for math we call functions in the game engine. The generated graph programs are not linked against anything, so they are very small and lightweight. This also means that neither the game nor the editor needs to be recompiled to add a simple node. Nodes and graphs are game assets, which get turned into code automatically by the content pipeline.
  • 131. Adding Nodes • Create C++ file with one function • Create small description file for node I guess by now you can see how this system allowed us to build powerful dynamic sounds, limited only by the set of nodes we have available. But why is it truly extendable? That’s because it’s really easy to create a new node in this system. Basically all you need to do is create an asset describing the inputs and outputs of the node (this can be done in our editor) and a C++ file containing the implementation. Our code generator will collect the little snippets of C++ code for each node and create a combined function for each graph. No external libraries are used; even for math we call functions in the game engine. The generated graph programs are not linked against anything, so they are very small and lightweight. This also means that neither the game nor the editor needs to be recompiled to add a simple node. Nodes and graphs are game assets, which get turned into code automatically by the content pipeline.
  • 132. Adding Nodes • Create C++ file with one function • Create small description file for node • List inputs and outputs I guess by now you can see how this system allowed us to build powerful dynamic sounds, limited only by the set of nodes we have available. But why is it truly extendable? That’s because it’s really easy to create a new node in this system. Basically all you need to do is create an asset describing the inputs and outputs of the node (this can be done in our editor) and a C++ file containing the implementation. Our code generator will collect the little snippets of C++ code for each node and create a combined function for each graph. No external libraries are used; even for math we call functions in the game engine. The generated graph programs are not linked against anything, so they are very small and lightweight. This also means that neither the game nor the editor needs to be recompiled to add a simple node. Nodes and graphs are game assets, which get turned into code automatically by the content pipeline.
  • 133. Adding Nodes • Create C++ file with one function • Create small description file for node • List inputs and outputs • Define the default values I guess by now you can see how this system allowed us to build powerful dynamic sounds, limited only by the set of nodes we have available. But why is it truly extendable? That’s because it’s really easy to create a new node in this system. Basically all you need to do is create an asset describing the inputs and outputs of the node (this can be done in our editor) and a C++ file containing the implementation. Our code generator will collect the little snippets of C++ code for each node and create a combined function for each graph. No external libraries are used; even for math we call functions in the game engine. The generated graph programs are not linked against anything, so they are very small and lightweight. This also means that neither the game nor the editor needs to be recompiled to add a simple node. Nodes and graphs are game assets, which get turned into code automatically by the content pipeline.
  • 134. Adding Nodes • Create C++ file with one function • Create small description file for node • List inputs and outputs • Define the default values • No need to recompile game, just add assets I guess by now you can see how this system allowed us to build powerful dynamic sounds, limited only by the set of nodes we have available. But why is it truly extendable? That’s because it’s really easy to create a new node in this system. Basically all you need to do is create an asset describing the inputs and outputs of the node (this can be done in our editor) and a C++ file containing the implementation. Our code generator will collect the little snippets of C++ code for each node and create a combined function for each graph. No external libraries are used; even for math we call functions in the game engine. The generated graph programs are not linked against anything, so they are very small and lightweight. This also means that neither the game nor the editor needs to be recompiled to add a simple node. Nodes and graphs are game assets, which get turned into code automatically by the content pipeline.
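As a hedged illustration of how small such a node file can be, here is what the dB-to-linear conversion node mentioned earlier might look like. The name, signature and clamp threshold are assumptions; only the one-function-per-file pattern and the engine math call follow the examples shown in this talk.

// DecibelsToLinear.cpp -- hypothetical example of a complete node implementation.
// The matching description asset would list one input (inDecibels, default 0.0)
// and one output (outLinear).
void DecibelsToLinear(const float inDecibels, float* outLinear)
{
    // Treat anything below an assumed -96 dB floor as silence.
    *outLinear = (inDecibels <= -96.0f) ? 0.0f : Math::Pow(10.0f, inDecibels / 20.0f);
}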
  • 135. Math example Convert semitones to pitch ratio SemitonesToPitchRatio inSemiTones outPitchRatio SemitonesToPitchRatio inSemiTones outPitchRatio Andreas: Let's look at an example node for a mathematical problem, the obvious ones like add or multiply are trivial, but here's a useful one, converting from musical semitones to a pitch ratio value. So for instance inputting a value of 12 semitones, would result in a pitch ratio of 2, i.e. an octave.
  • 136. void SemitonesToPitchRatio(const float inSemitones, float* outPitchRatio) { *outPitchRatio = Math::Pow(2.0f, inSemitones/12.0f); } SemiTonesToPitchRatio.cpp This shows how easy it is to create a new node. The only additional information that needs to be created is a meta data file that describes the input and output values and their default values.
  • 137. Math example Convert semitones to pitch ratio SemitonesToPitchRatio inSemiTones outPitchRatio
  • 138. Logic example Compare two values Compare inA outGreater inB outSmaller outEqual Here's another simple example for comparison of values. Typically you have different parts that depend on the comparison result, so it's useful to have "smaller" and "greater than" available at the same time. Also saves us from using additional nodes.
  • 139. void Compare(const float inA, const float inB, bool* outSmaller, bool* outGreater, bool* outEqual) { *outSmaller = inA < inB; *outGreater = inA > inB; *outEqual = inA == inB; } Compare.cpp The implementation is just as simple. All the comparison results are stored in output parameters.
  • 140. Logic example Compare two values Compare inA outGreater inB outSmaller outEqual
  • 141. Modulation example Sine LFO SineLFO inFrequency outValue inAmplitude inOffset inPhase This is the sine LFO node, it will output a sine wave at a given frequency and amplitude. This is a slightly more complicated node.
  • 142. void SineLFO(const float inFrequency, const float inAmplitude, const float inOffset, const float inPhase, float* outValue, float* phase) { *outValue = inOffset + inAmplitude * Math::Sin((*phase + inPhase) * M_TWO_PI); float new_phase = *phase + inFrequency * GraphSound::GetTimeStep(); if (new_phase > 1.0f) new_phase -= 1.0f; *phase = new_phase; } SineLFO.cpp Let's look at how this is implemented. It's using two functions that are implemented in our engine, one for calculating the sine and one for getting the time step. If the game time is scaled (for example if we’re running the game in slow motion), then the sound time-step will also be adjusted. However, it's also possible to have sounds that are not affected by this. Normally the graph execution is stateless: every time it runs, it does exactly the same thing, because it doesn't store any information, so it can't depend on previous runs. This node, however, is an example of something that requires state information, and it's possible to do that on a per-node basis. So here you can see that it uses a special “phase” parameter, which is stored separately for each instance of the node and is passed in as a pointer. The state system allows us to store any arbitrary set of data, for example a full C++ object, or just a simple value like in this example. The Wave node also uses this to store the information about the voices it has created.
  • 143. void SineLFO(const float inFrequency, const float inAmplitude, const float inOffset, const float inPhase, float* outValue, float* phase) { *outValue = inOffset + inAmplitude * Math::Sin((*phase + inPhase) * M_TWO_PI); float new_phase = *phase + inFrequency * GraphSound::GetTimeStep(); if (new_phase > 1.0f) new_phase -= 1.0f; *phase = new_phase; } SineLFO.cpp Let's look at how this is implemented. It's using two functions that are implemented in our engine, one for calculating the sine and one for getting the time step. If the game time is scaled (for example if we’re running the game in slow motion), then the sound time-step will also be adjusted. However, it's also possible to have sounds that are not affected by this. Normally the graph execution is stateless: every time it runs, it does exactly the same thing, because it doesn't store any information, so it can't depend on previous runs. This node, however, is an example of something that requires state information, and it's possible to do that on a per-node basis. So here you can see that it uses a special “phase” parameter, which is stored separately for each instance of the node and is passed in as a pointer. The state system allows us to store any arbitrary set of data, for example a full C++ object, or just a simple value like in this example. The Wave node also uses this to store the information about the voices it has created.
  • 144. void SineLFO(const float inFrequency, const float inAmplitude, const float inOffset, const float inPhase, float* outValue, float* phase) { *outValue = inOffset + inAmplitude * Math::Sin((*phase + inPhase) * M_TWO_PI); float new_phase = *phase + inFrequency * GraphSound::GetTimeStep(); if (new_phase > 1.0f) new_phase -= 1.0f; *phase = new_phase; } SineLFO.cpp Let's look at how this is implemented. It's using two functions that are implemented in our engine, one for calculating the sine and one for getting the time step. If the game time is scaled (for example if we’re running the game in slow motion), then the sound time-step will also be adjusted. However, it's also possible to have sounds that are not affected by this. Normally the graph execution is stateless: every time it runs, it does exactly the same thing, because it doesn't store any information, so it can't depend on previous runs. This node, however, is an example of something that requires state information, and it's possible to do that on a per-node basis. So here you can see that it uses a special “phase” parameter, which is stored separately for each instance of the node and is passed in as a pointer. The state system allows us to store any arbitrary set of data, for example a full C++ object, or just a simple value like in this example. The Wave node also uses this to store the information about the voices it has created.
  • 145. void SineLFO(const float inFrequency, const float inAmplitude, const float inOffset, const float inPhase, float* outValue, float* phase) { *outValue = inOffset + inAmplitude * Math::Sin((*phase + inPhase) * M_TWO_PI); float new_phase = *phase + inFrequency * GraphSound::GetTimeStep(); if (new_phase > 1.0f) new_phase -= 1.0f; *phase = new_phase; } SineLFO.cpp Let's look at how this is implemented. It's using two functions that are implemented in our engine, one for calculating the sine and one for getting the time step. If the game time is scaled (for example if we’re running the game in slow motion), then the sound time-step will also be adjusted. However, it's also possible to have sounds that are not affected by this. Normally the graph execution is stateless: every time it runs, it does exactly the same thing, because it doesn't store any information, so it can't depend on previous runs. This node, however, is an example of something that requires state information, and it's possible to do that on a per-node basis. So here you can see that it uses a special “phase” parameter, which is stored separately for each instance of the node and is passed in as a pointer. The state system allows us to store any arbitrary set of data, for example a full C++ object, or just a simple value like in this example. The Wave node also uses this to store the information about the voices it has created.
  • 146. Modulation example Sine LFO SineLFO inFrequency outValue inAmplitude inOffset inPhase
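Here is a minimal sketch of how that per-instance state could be threaded through a generated graph function. The SoundState layout and the example parameter values are assumptions; only the idea of passing each node instance its own state slot as a pointer comes from the slides.

// Node function shown above (declaration only).
void SineLFO(const float inFrequency, const float inAmplitude, const float inOffset,
             const float inPhase, float* outValue, float* phase);

// Hypothetical per-sound state block: one slot per stateful node instance.
struct SoundState
{
    float lfo_phase; // state slot for the single SineLFO instance in this graph
};

void Sound(SoundState* state, bool _on_start_, bool _on_stop_)
{
    float outValue = 0.0f;
    // The generated call passes a pointer to this instance's state slot, so the
    // otherwise stateless graph code can carry the LFO phase across updates.
    SineLFO(/*inFrequency*/ 0.5f, /*inAmplitude*/ 0.5f, /*inOffset*/ 1.0f,
            /*inPhase*/ 0.0f, &outValue, &state->lfo_phase);
    // ... outValue would feed another node input here, e.g. a gain or a pitch.
}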
  • 147. Nodes built from Graphs Generate a random pitch within a range of semitones RandomRangedPitchRatio inTrigger outPitchRatio inMinSemitone inMaxSemitone Certain math or logic processing that starts to happen often in sounds can be abstracted into their own nodes, without having to know any programming. Here’s an example of a node, built from an actual graph.
  • 148. As you can see here, it uses a combination of the random range node, to get a random semitone value, which it then converts to a pitch ratio. It also uses the sample&hold node to make sure the value doesn’t change, as normally the random value would be recalculated at every update of the sound, i.e. once every 5.333ms. Creating a node like this is basically as simple as selecting a group of nodes, then right-clicking and selecting “Collapse nodes”. The editor will do the rest.
  • 149. As you can see here, it uses a combination of the random range node, to get a random semitone value, which it then converts to a pitch ratio. It also uses the sample&hold node to make sure the value doesn’t change, as normally the random value would be recalculated at every update of the sound, i.e. once every 5.333ms. Creating a node like this is basically as simple as selecting a group of nodes, then right-clicking and selecting “Collapse nodes”. The editor will do the rest.
  • 150. As you can see here, it uses a combination of the random range node, to get a random semitone value, which it then converts to a pitch ratio. It also uses the sample&hold node to make sure the value doesn’t change, as normally the random value would be recalculated at every update of the sound, i.e. once every 5.333ms. Creating a node like this is basically as simple as selecting a group of nodes, then right-clicking and selecting “Collapse nodes”. The editor will do the rest.
  • 151. As you can see here, it uses a combination of the random range node, to get a random semitone value, which it then converts to a pitch ratio. It also uses the sample&hold node to make sure the value doesn’t change, as normally the random value would be recalculated at every update of the sound, i.e. once every 5.333ms. Creating a node like this is basically as simple as selecting a group of nodes, then right-clicking and selecting “Collapse nodes”. The editor will do the rest.
  • 152. Nodes built from Graphs Generate a random pitch within a range of semitones RandomRangedPitchRatio inTrigger outPitchRatio inMinSemitone inMaxSemitone
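A rough sketch of the code this collapsed node could expand to. The RandomRange signature, the state layout and the inlined sample & hold step are assumptions; SemitonesToPitchRatio is the node shown earlier.

// Declarations: SemitonesToPitchRatio is from the slides; RandomRange is assumed.
void SemitonesToPitchRatio(const float inSemitones, float* outPitchRatio);
void RandomRange(const float inMin, const float inMax, float* outValue);

void RandomRangedPitchRatio(const bool inTrigger,
                            const float inMinSemitone, const float inMaxSemitone,
                            float* outPitchRatio, float* heldSemitones)
{
    // Pick a new random semitone value...
    float randomSemitones = 0.0f;
    RandomRange(inMinSemitone, inMaxSemitone, &randomSemitones);

    // ...but only latch it when the trigger fires, so the value does not change
    // on every 5.333 ms update (this is what the sample & hold node provides).
    if (inTrigger)
        *heldSemitones = randomSemitones;

    SemitonesToPitchRatio(*heldSemitones, outPitchRatio);
}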
  • 153. When you notice repeated work…
  • 154. When you notice repeated work… …abstract it!
  • 155. When you notice repeated work… …abstract it!
  • 156. When you notice repeated work… …abstract it!
  • 163. Experimenting Should Be Safe and Fun
  • 164. Experimenting Should Be Safe and Fun • If your idea cannot fail, you’re not going to go to uncharted territory.
  • 165. Experimenting Should Be Safe and Fun • If your idea cannot fail, you’re not going to go to uncharted territory. • We tried a lot
  • 166. Experimenting Should Be Safe and Fun • If your idea cannot fail, you’re not going to go to uncharted territory. • We tried a lot • Some of it worked, some didn’t
  • 167. Experimenting Should Be Safe and Fun • If your idea cannot fail, you’re not going to go to uncharted territory. • We tried a lot • Some of it worked, some didn’t • Little or no code support - all in audio design
  • 170. Fire Rate Variation • Experiment: How to make battle sound more interesting
  • 171. Fire Rate Variation • Experiment: How to make battle sound more interesting
  • 172. Fire Rate Variation • Experiment: How to make battle sound more interesting
  • 173. Fire Rate Variation • Experiment: How to make battle sound more interesting • Idea: Slight variation in fire rate
  • 174. Fire Rate Variation • Experiment: How to make battle sound more interesting • Idea: Slight variation in fire rate
  • 175. What the Mind Hears, …
  • 176. What the Mind Hears, …
  • 177. What the Mind Hears, … • Experiment: Birds should react when you fire gun
  • 178. What the Mind Hears, … • Experiment: Birds should react when you fire gun • Birds exist as sound only
  • 179. What the Mind Hears, … • Experiment: Birds should react when you fire gun • Birds exist as sound only • Use sound-to-sound messaging system
  • 180. What the Mind Hears, … • Experiment: Birds should react when you fire gun • Birds exist as sound only • Use sound-to-sound messaging system • Gun sounds notify bird sounds of shot
  • 181. Bird Sound Logic Not Shooting Play “Bird Loop”FALSE GunFireState Gun Sound Bird Sound
  • 182. Bird Sound Logic Shooting Play “Bird Loop”FALSE GunFireState Gun Sound Bird Sound
  • 183. Bird Sound Logic Shooting Play “Bird Loop”TRUE GunFireState Gun Sound Bird Sound set
  • 184. Bird Sound Logic Shooting Play “Bird Loop”TRUE GunFireState Gun Sound Bird Sound set get
  • 185. Play
 “Birds Flying Off” ! Wait 30 seconds Bird Sound Logic Shooting TRUE GunFireState Gun Sound Bird Sound set get
  • 186. Play
 “Birds Flying Off” ! Wait 30 seconds Bird Sound Logic Not Shooting TRUE GunFireState Gun Sound Bird Sound set get
  • 187. Play
 “Birds Flying Off” ! Wait 30 seconds Bird Sound Logic Not Shooting GunFireState Gun Sound Bird Sound set getFALSE
  • 188. Play “Bird Loop” Bird Sound Logic Not Shooting GunFireState Gun Sound Bird Sound set getFALSE
  • 189. Play “Bird Loop” Bird Sound Logic Not Shooting GunFireState Gun Sound Bird Sound set getFALSE Any sound can send a message which can be received by any other sound
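A possible shape for such a messaging pair of nodes, sketched under the assumption of a simple global key/value store. The GraphSound::SetGlobalBool and GetGlobalBool helpers are invented for illustration and are not the engine's actual API; the slides only state that any sound can send a message that any other sound can receive.

// Hypothetical set/get state nodes, modeled on the GunFireState example above.
void SetStateBool(const char* inName, const bool inValue)
{
    GraphSound::SetGlobalBool(inName, inValue);     // e.g. gun sound sets "GunFireState"
}

void GetStateBool(const char* inName, bool* outValue)
{
    *outValue = GraphSound::GetGlobalBool(inName);  // e.g. bird sound reads "GunFireState"
}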
  • 191. Material Dependent Environmental Reactions (MADDER) • Inspiration: 
 Guns have this explosive force in the world
 when they’re fired
  • 192. Material Dependent Environmental Reactions (MADDER) • Inspiration: 
 Guns have this explosive force in the world
 when they’re fired • Things that are near start to rattle
  • 196. MADDER
  • 198. MADDER • Inputs required: • Distance to closest surface
  • 199. MADDER • Inputs required: • Distance to closest surface • Direction of closest surface
  • 200. MADDER • Inputs required: • Distance to closest surface • Direction of closest surface • Material of closest surface
  • 202. MADDER raycast • Sweeping raycast around listener Listener
  • 203. MADDER raycast • Sweeping raycast around listener Listener
  • 204. MADDER raycast • Sweeping raycast around listener • After one revolution, closest hit point is used Listener
  • 205. MADDER raycast • Sweeping raycast around listener • After one revolution, closest hit point is used • Distance, angle and material are passed to sound Distance } Angle Material Listener
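A hedged sketch of the single sweeping raycast described above. The Physics::Raycast declaration, the vector and hit types, and the 30 m cutoff are assumptions; the revolution and closest-hit bookkeeping follows the slides.

// Minimal stand-in types and an assumed engine raycast (declaration only).
struct Vec3 { float x, y, z; };
struct RaycastHit { float distance; int material; };
namespace Physics { bool Raycast(const Vec3& from, const Vec3& dir, float max_dist, RaycastHit* hit); }

struct MadderHit { float distance; float angle; int material; };

class MadderSweep
{
public:
    // Called once per update; advances the sweep by step_radians.
    void Update(const Vec3& listener_pos, float step_radians)
    {
        Vec3 dir { Math::Cos(m_angle), 0.0f, Math::Sin(m_angle) };  // horizontal sweep
        RaycastHit hit;
        if (Physics::Raycast(listener_pos, dir, m_max_distance, &hit) &&
            hit.distance < m_best.distance)
        {
            m_best = { hit.distance, m_angle, hit.material };
        }

        m_angle += step_radians;
        if (m_angle >= M_TWO_PI)   // one full revolution completed
        {
            // Hand the closest hit of this revolution to the MADDER sound inputs,
            // e.g. distance, angle and material, then start the next revolution.
            m_angle -= M_TWO_PI;
            m_best = { m_max_distance, 0.0f, -1 };
        }
    }

private:
    float     m_angle = 0.0f;
    float     m_max_distance = 30.0f;   // assumed cutoff; not stated in the slides
    MadderHit m_best { m_max_distance, 0.0f, -1 };
};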
• 221. Video showing MADDER effect for 4 different materials
• 223. MADDER Prototype Video showing initial MADDER prototype with 4 different materials in scene.
• 226. MADDER Four-angle raycast -45° +45° -135° +135° • Divide in four quadrants around listener
• 227. MADDER Four-angle raycast -45° +45° -135° +135° • Divide in four quadrants around listener • One sweeping raycast in each quadrant
• 228. MADDER Four-angle raycast -45° +45° -135° +135° • Divide in four quadrants around listener • One sweeping raycast in each quadrant • Closest hit point in each quadrant is taken
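A sketch of the four-quadrant variant, reusing the hypothetical Vec3/RayHit/CastRay types from the previous sketch. For brevity it casts a whole quadrant's rays in one call; as described on the slides, the actual sweep would be amortised one ray per update.

    struct QuadrantResult { float distance; float angle; int material; };

    QuadrantResult SweepQuadrant(int quadrant,            // 0..3, e.g. 0 = front (-45°..+45°)
                                 const Vec3& listenerPos,
                                 int raysPerQuadrant,
                                 float maxDistance)
    {
        const float kDegToRad = 3.14159265f / 180.0f;
        const float start     = (quadrant * 90.0f - 45.0f) * kDegToRad;

        QuadrantResult result{ maxDistance, 0.0f, -1 };
        for (int i = 0; i < raysPerQuadrant; ++i)
        {
            const float a = start + (i + 0.5f) * (90.0f * kDegToRad) / raysPerQuadrant;
            const Vec3 dir{ std::cos(a), 0.0f, std::sin(a) };
            const RayHit hit = CastRay(listenerPos, dir, maxDistance);
            if (hit.hit && hit.distance < result.distance)
                result = QuadrantResult{ hit.distance, a, hit.material };
        }
        return result;   // closest hit in this quadrant: distance, angle, material
    }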
• 234. Four-angle MADDER Video showing final 4-angle MADDER with 4 materials placed in the scene
• 236. MADDER video (30 seconds) MADDER off Capture from final game with MADDER disabled
• 238. MADDER on Capture from final game with MADDER enabled
  • 241. Creative Workflow IDEA Coder GAME Sound Designer DONE
  • 244. Gun Tails • Experiment with existing data
  • 245. Gun Tails • Experiment with existing data • Wall raycast
  • 246. Gun Tails • Experiment with existing data • Wall raycast • Inside/Outside
• 247. Gun Tails • Experiment with existing data • Wall raycast • Inside/Outside • Rounds Fired
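A hypothetical illustration of how those existing inputs could pick a gun tail; the struct, sample names and thresholds are invented for this example and are not the actual setup from the game.

    struct GunTailInputs
    {
        bool  indoors;       // inside/outside flag
        float wallDistance;  // distance to the closest wall from the raycast
        int   roundsFired;   // rounds fired in the current burst
    };

    const char* SelectGunTail(const GunTailInputs& in)
    {
        if (in.indoors)
            return in.wallDistance < 2.0f ? "tail_indoor_tight" : "tail_indoor_large";
        return in.roundsFired > 10 ? "tail_outdoor_long_burst" : "tail_outdoor_short";
    }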
  • 251. How to Debug? What kind of debugging support did we have? Obviously a sound designer is creating much more complex logic now, and that means bugs will creep in. We need to be able to find problems such as incorrect behaviour quickly.
• 252. Manually Placed Debug Probes SineLFO inFrequency outValue inAmplitude inOffset inPhase 0.5 0.5 1 0.0
• 253. Manually Placed Debug Probes SineLFO inFrequency outValue inAmplitude inOffset inPhase DebugProbe inDebugName inDebugValue sine 0.5 0.5 1 0.0
• 254. Manually Placed Debug Probes • Just connect any output to visualize it SineLFO inFrequency outValue inAmplitude inOffset inPhase DebugProbe inDebugName inDebugValue sine 0.5 0.5 1 0.0
• 255. Manually Placed Debug Probes • Just connect any output to visualize it • Shows debug value on game screen SineLFO inFrequency outValue inAmplitude inOffset inPhase DebugProbe inDebugName inDebugValue sine 0.5 0.5 1 0.0 Anton: Designers place these nodes in their sounds to query individual node outputs. The values are drawn as a curve on screen, which is useful for seeing how a single value changes over time, but not for catching quick changes such as boolean events: the sound graphs execute several times per game frame, while the debug visualisation is only drawn at the game frame rate, so a brief change can be missed. In the final version of the game these nodes are completely ignored.
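A sketch of what a manually placed DebugProbe node boils down to; DrawDebugCurvePoint is a placeholder for the engine's on-screen plotting call, invented for this example.

    #include <cstdio>

    // Stand-in for the HUD plot that draws one point of the named debug curve.
    static void DrawDebugCurvePoint(const char* name, float value)
    {
        std::printf("[probe] %s = %f\n", name, value);
    }

    void DebugProbe(const char* inDebugName, float inDebugValue)
    {
        // Only sampled when a game frame is drawn, so values that change faster
        // than the frame rate (e.g. one-update booleans) can be missed.
        DrawDebugCurvePoint(inDebugName, inDebugValue);
    }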
• 257. Manually Placed Debug Probes Can miss quick changes because game frame rate is lower than sound update rate!
• 258. Automatically Embedded Debug Probes EvaluateCurve(CurveDataPointer, inDistanceToListener, &outYValue); DEBUG_PROBE(0, (void*)&outYValue)
• 260. Automatically Embedded Debug Probes EvaluateCurve(CurveDataPointer, inDistanceToListener, &outYValue); DEBUG_PROBE(0, (void*)&outYValue) • All values are recorded, nothing is lost
• 261. Automatically Embedded Debug Probes EvaluateCurve(CurveDataPointer, inDistanceToListener, &outYValue); DEBUG_PROBE(0, (void*)&outYValue) • All values are recorded, nothing is lost • Simple in-game debugger to show data
• 262. Automatically Embedded Debug Probes EvaluateCurve(CurveDataPointer, inDistanceToListener, &outYValue); DEBUG_PROBE(0, (void*)&outYValue) • All values are recorded, nothing is lost • Simple in-game debugger to show data • Scrub through recording Andreas: Each graph is generated in two versions: one without debugging code for the final game, and one with automatically generated debug probe code for every node. We emit these DEBUG_PROBE macros into the graph functions, so debug support can be disabled entirely at compile time. When a graph runs with debugging enabled, the code behind the macro collects the values of the inputs and outputs of every node (and of the graph itself) and passes them to the engine, which stores them.
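A sketch of the compile-time switch described in the note. The DEBUG_PROBE name comes from the slide, while SOUND_GRAPH_DEBUG and RecordProbeValue are invented for this example; the real generated code is engine-internal.

    #if defined(SOUND_GRAPH_DEBUG)
        // Hand the node output to the engine, which records every value so the
        // in-game debugger can scrub through the whole history later.
        #define DEBUG_PROBE(nodeIndex, valuePtr) \
            RecordProbeValue((nodeIndex), (valuePtr), sizeof(float))
    #else
        // Final game build: probes compile away entirely.
        #define DEBUG_PROBE(nodeIndex, valuePtr) ((void)0)
    #endif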
• 263. Post Mortem So what went wrong and what went right?
• 264. Timeline
• 265. Timeline February 2011 Pre-production begins, hardly any PS4 hardware details yet
• 266. Timeline February 2011 Pre-production begins, hardly any PS4 hardware details yet November 2011 PC prototype of sound engine
• 267. Timeline February 2011 Pre-production begins, hardly any PS4 hardware details yet November 2011 PC prototype of sound engine August 2012 Moved over to PS4 hardware
• 268. Timeline February 2011 Pre-production begins, hardly any PS4 hardware details yet November 2011 PC prototype of sound engine August 2012 Moved over to PS4 hardware February 2013 PS4 announcement demo (still using software codecs)
• 269. Timeline February 2011 Pre-production begins, hardly any PS4 hardware details yet November 2011 PC prototype of sound engine August 2012 Moved over to PS4 hardware February 2013 PS4 announcement demo (still using software codecs) June 2013 E3 demo (now using ACP hardware decoding)
• 270. Timeline February 2011 Pre-production begins, hardly any PS4 hardware details yet November 2011 PC prototype of sound engine August 2012 Moved over to PS4 hardware February 2013 PS4 announcement demo (still using software codecs) June 2013 E3 demo (now using ACP hardware decoding) November 2013 Shipped as launch title
• 271. Timeline February 2011 Pre-production begins, hardly any PS4 hardware details yet November 2011 PC prototype of sound engine August 2012 Moved over to PS4 hardware February 2013 PS4 announcement demo (still using software codecs) June 2013 E3 demo (now using ACP hardware decoding) November 2013 Shipped as launch title New system was up and running within 6 months! Andreas: We started working on PS4 technology very early, when many parts of the PS4 hardware were still undefined. At one point we even expected the hardware to do more than decoding, i.e. we assumed there might be a way to execute our own DSP code on it, so we thought we might have to write custom DSP code for that chip. At the time no middleware vendors had been disclosed yet, so we simply couldn't talk to them about any of this. We therefore had to play it safe and designed the new sound system around an abstract synth. We created an initial prototype implementation for use in our PC build and later switched to PS4 hardware. Anton: As a designer you become used to being an alpha tester for the hardware, and we became good audio testers. The other designers hardly noticed the transition from PC to PS4 hardware, as the synth was the same on both.
• 272. “You’re making sound designers do programming work, that’ll be a disaster! The game will crash all the time!” This didn't really happen. We had a few buggy nodes, but in general things worked out fine. What we did was make sure that new nodes are peer-reviewed like any other code, especially when they're written by a non-programmer. Also, since nodes are so simple and limited in scope, it's hard to make something that truly breaks the game.
• 273. “This is an unproven idea, why don’t we just use some middleware solution?” There were doubts that we could create new tech and a toolset with a user-friendly workflow in time. The safer option would have been to license a middleware solution, but at the time we had to make this decision no third parties had been disclosed about the PS4 yet, and we couldn't be sure they would properly support the PS4 audio hardware. We were also keen on having a solution that's integrated with our normal asset pipeline, which allowed us to share the tech built for audio with other disciplines. Sound designers using the same workflow as artists and game designers is a benefit that middleware can't give us: new nodes created by e.g. game programmers are immediately useful for sound designers, and vice versa. All of these reasons led us to push forward with our own tech, to make something that really fits the game and our way of working. Anton: We also wouldn't have been able to do the kind of deep tool integration that we have now.
• 274. Used by other Disciplines
• 275. Used by other Disciplines • Graph system became popular
• 276. Used by other Disciplines • Graph system became popular • Used for procedural rigging
• 277. Used by other Disciplines • Graph system became popular • Used for procedural rigging • Used for gameplay behavior
• 278. Used by other Disciplines • Graph system became popular • Used for procedural rigging • Used for gameplay behavior • More nodes were added
• 279. Used by other Disciplines • Graph system became popular • Used for procedural rigging • Used for gameplay behavior • More nodes were added • Bugs were fixed quicker Because the graph system lived in the editor the whole studio uses, it was directly available to every other discipline.
• 280. Compression Codecs
• 281. Compression Codecs KZ3 (PS3) samples PCM ADPCM MP3 0 2500 5000 7500 10000 ~20 MB for in-memory samples
• 282. Compression Codecs KZ3 (PS3) samples PCM ADPCM MP3 0 2500 5000 7500 10000 KZSF (PS4) samples PCM ADPCM MP3 0 2500 5000 7500 10000 ~20 MB for in-memory samples ~300 MB for in-memory samples
• 283. Compression Codecs KZ3 (PS3) samples PCM ADPCM MP3 0 2500 5000 7500 10000 KZSF (PS4) samples PCM ADPCM MP3 0 2500 5000 7500 10000 Need low latency ~20 MB for in-memory samples ~300 MB for in-memory samples
• 284. Compression Codecs KZ3 (PS3) samples PCM ADPCM MP3 0 2500 5000 7500 10000 KZSF (PS4) samples PCM ADPCM MP3 0 2500 5000 7500 10000 Large, don’t need precise timing Need low latency ~20 MB for in-memory samples ~300 MB for in-memory samples
• 285. Compression Codecs KZ3 (PS3) samples PCM ADPCM MP3 0 2500 5000 7500 10000 KZSF (PS4) samples PCM ADPCM MP3 0 2500 5000 7500 10000 We’ve completely stopped using ADPCM (VAG) Large, don’t need precise timing Need low latency ~20 MB for in-memory samples ~300 MB for in-memory samples A bit more information to show where we came from: on PS3 most sounds were ADPCM, a few were PCM, and only streaming sounds used MP3 at various bit rates. We had about 20 MB budgeted for in-memory samples. This generation we ended up using a combination of PCM and MP3 for in-memory sounds, with about 300 MB of memory used at run-time: roughly 16x as much as on PS3, but still the same percentage of total memory (roughly 4%). We didn't use any ADPCM samples anymore (good riddance). The split between PCM and MP3 is roughly 50:50: PCM for anything that needs to loop or seek with sample accuracy, MP3 for larger sounds that don't require precise timing, such as gun tails. We relied on the PS4's audio coprocessor to decode the MP3 data, and we used ATRAC9 for surround streams, which are not held in memory.
• 286. Sample Rates on PS3
• 287. Sample Rates on PS3 0 175 350 525 700 4000 7000 11000 12500 15500 17000 19500 22050 25000 28000 31000 33075 36000 39000 48000
• 288. Sample Rates on PS3 0 175 350 525 700 4000 7000 11000 12500 15500 17000 19500 22050 25000 28000 31000 33075 36000 39000 48000 Using sample rate to optimize memory usage! 😩
• 289. Sample Rates on PS3 0 175 350 525 700 4000 7000 11000 12500 15500 17000 19500 22050 25000 28000 31000 33075 36000 39000 48000 On PS4 we use 48 kHz exclusively! Using sample rate to optimize memory usage! 😩 We had a wide variety of sample rates on PS3 because sample rate was the main parameter we used to optimize the memory usage of sounds, and that led to some very low-quality samples in the game. On PS4 we only use 48 kHz samples; the main way to optimize now is to switch a sound to MP3 encoding and adjust the bit rate.
• 290. The Future…
• 291. The Future… • Extend and improve
• 292. The Future… • Extend and improve • Create compute shaders from graphs
• 293. The Future… • Extend and improve • Create compute shaders from graphs • More elaborate debugging support Andreas: Where do we go from here? We're planning to extend and improve this system and use it for more purposes, e.g. changing the code generator so it can also emit compute shaders. We'll add more elaborate debugging support and lots of new nodes, of course. We'll also add more detailed profiling that measures the execution time of each node in a graph, so we can identify bottleneck nodes that need to be optimised. And we're thinking about different ways to use the system, such as creating sample variations on the fly: we would add nodes that let waveforms be written into buffers, which can then be played back at a later point in time.