Slides for the presentation I gave remotely at Open Source World, on audio-only WebRTC applications and what we've done in Janus to meet their requirements so far.
Janus + Audio @ Open Source World
1. Audio ergo sum: Playing with Audio-only Streams in WebRTC and Janus
Lorenzo Miniero
Open Source World – Miami, FL, USA (kinda!)
June 23rd 2021
2. Who am I?
Lorenzo Miniero
• Ph.D @ UniNA
• Chairman @ Meetecho
• Main author of Janus®
Contacts and info
• lorenzo@meetecho.com
• https://twitter.com/elminiero
• https://www.slideshare.net/LorenzoMiniero
• https://soundcloud.com/lminiero
3. Just a few words on Meetecho
• Co-founded in 2009 as an academic spin-off
• University research efforts brought to the market
• Completely independent from the University
• Focus on real-time multimedia applications
• Strong perspective on standardization and open source
• Several activities
• Consulting services
• Commercial support and Janus licenses
• Streaming of live events (IETF, ACM, etc.)
• Proudly brewed in sunny Napoli, Italy
14. What’s Janus?
Janus
General purpose, open source WebRTC server
• https://github.com/meetecho/janus-gateway
• Demos and documentation: https://janus.conf.meetecho.com
• Community: https://groups.google.com/forum/#!forum/meetecho-janus
18. Modular architecture
• The core only implements the WebRTC stack
• JSEP/SDP, ICE, DTLS-SRTP, Data Channels, Simulcast, VP9-SVC, ...
• Plugins expose Janus API over different “transports”
• Currently HTTP / WebSockets / RabbitMQ / Unix Sockets / MQTT / Nanomsg
• “Application” logic implemented in plugins too
• Users attach to plugins via the Janus core
• The core handles the WebRTC stuff
• Plugins route/manipulate the media/data
• Plugins can be combined on client side as “bricks”
• Video SFU, Audio MCU, SIP gatewaying, broadcasting, etc.
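The attach-to-plugin flow above can be sketched as the JSON messages a client sends over, e.g., the HTTP transport: create a session, attach a plugin handle, then talk to the plugin. The shapes follow the public Janus API; the transaction IDs are arbitrary and the EchoTest body is just one example.

```python
import json
import uuid

def tx():
    # Each Janus request carries a random "transaction" ID,
    # echoed back in the matching response/event.
    return uuid.uuid4().hex

# 1. Create a session (POST /janus)
create = {"janus": "create", "transaction": tx()}

# 2. Attach a plugin handle to the session (POST /janus/<session_id>)
attach = {
    "janus": "attach",
    "plugin": "janus.plugin.echotest",  # plugin package name
    "transaction": tx(),
}

# 3. Send a message to the plugin (POST /janus/<session_id>/<handle_id>);
#    the "body" is plugin-specific (here: EchoTest audio/video toggles)
message = {
    "janus": "message",
    "body": {"audio": True, "video": True},
    "transaction": tx(),
}

for req in (create, attach, message):
    print(json.dumps(req))
```

The core handles the WebRTC legs for every handle; only the `body` changes from plugin to plugin, which is what makes them composable "bricks".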
24. A ton of scenarios done today with Janus!
• SIP and RTSP gatewaying
• WebRTC-based call/contact centers
• Conferencing & collaboration
• E-learning & webinars
• Cloud platforms
• Media production
• Broadcasting & Gaming
• Identity verification
• Internet of Things
• Augmented/Virtual Reality
• ...and more!
25. A quick look at plugins: EchoTest
https://janus.conf.meetecho.com/docs/echotest
26. A quick look at plugins: Record & Play
https://janus.conf.meetecho.com/docs/recordplay
28. A quick look at plugins: SIP gateway
https://janus.conf.meetecho.com/docs/sipsofia
29. A quick look at plugins: NoSIP plugin
https://janus.conf.meetecho.com/docs/nosip
30. A quick look at plugins: Audio MCU
https://janus.conf.meetecho.com/docs/audiobridge
31. A quick look at plugins: Video SFU
https://janus.conf.meetecho.com/docs/videoroom
32. A quick look at plugins: Streaming
https://janus.conf.meetecho.com/docs/streaming
35. It’s not just about video!
• Video obviously takes the lion's share
• Pretty much ubiquitous
• Most use cases assume video, one way or another
• It’s not the only thing that matters, though
• We still need to communicate, somehow
• Audio (and data) can be just as important, if not more
• Some applications even focus JUST on audio!
• ... and not only call/contact centers, PBX, or legacy infrastructures
41. “Can WebRTC help musicians?”
https://fosdem.org/2021/schedule/event/webrtc_musicians/
42. WebRTC and audio
• A couple of mandatory-to-implement codecs
• Opus + G.711
• G.711 just there as a fallback (and for legacy interoperability)
• Opus FTW!
• High quality audio codec designed for the Internet
• Very flexible in sampling rates, bitrates, etc.
• Support for stereo, and different “profiles” for voice/music
• A few interesting “tools”
• Audio levels RTP extension (VAD)
• Opus inband Forward Error Correction (FEC)
• Opus Discontinuous transmission (DTX)
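The Opus "tools" above are switched on in SDP. A minimal sketch, assuming Opus sits on payload type 111 (the value Chrome typically assigns; real code should look it up from the `a=rtpmap` line):

```python
def enable_opus_features(sdp: str, pt: int = 111) -> str:
    """Toggle Opus inband FEC and DTX on the a=fmtp line of an SDP blob.
    Parameter names (useinbandfec, usedtx) are the standard Opus fmtp ones."""
    out = []
    for line in sdp.splitlines():
        if line.startswith(f"a=fmtp:{pt} "):
            params = line.split(" ", 1)[1]
            for p in ("useinbandfec=1", "usedtx=1"):
                if p not in params:
                    params += ";" + p
            line = f"a=fmtp:{pt} {params}"
        out.append(line)
    return "\r\n".join(out)

sdp = "a=rtpmap:111 opus/48000/2\r\na=fmtp:111 minptime=10"
print(enable_opus_features(sdp))
# The audio level extension (for VAD) is negotiated separately, e.g.:
# a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
```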
45. Audio-only: SFU or MCU?
• SFUs ideal to just relay media
• No mixing/transcoding to worry about → less CPU on server, less delay
• More streams to distribute → more bandwidth needed
• Different streams → more control on UI
• MCUs ideal to just mix media
• Mixing/transcoding taking place → more CPU on server, more delay
• Just one stream to distribute → bandwidth constant
• Single output stream → UI rendering constrained
• Sometimes it makes sense to use them both!
• Use SFU where applicable (e.g., video, plenty of bandwidth)
• Use MCU to complement (e.g., audio, lower power devices)
• Besides, an MCU can mix SFU streams to broadcast to a CDN!
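The bandwidth trade-off above is easy to quantify: with an SFU each client downloads every other participant's stream, with an MCU just the one mix. A toy calculation (48 kbps is a plausible Opus voice bitrate, used here only for illustration):

```python
def downstream_kbps(participants: int, bitrate_kbps: int, mode: str) -> int:
    """Per-client download for an N-party audio call.
    SFU: each client receives every other participant's stream.
    MCU: each client receives a single mixed stream."""
    if mode == "sfu":
        return (participants - 1) * bitrate_kbps
    if mode == "mcu":
        return bitrate_kbps
    raise ValueError(mode)

for n in (3, 10, 50):
    print(n, downstream_kbps(n, 48, "sfu"), downstream_kbps(n, 48, "mcu"))
```

At 50 participants the SFU figure grows past 2 Mbps per listener while the MCU stays at one stream's worth, which is why mixing wins for audio on constrained devices.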
48. A simple use case to start from: podcasts
• Good example that combines interaction and scalability requirements
• One or more people talking, and a (potentially) wide audience
• Ability to invite people in can be a plus
• WebRTC a good fit for the conversation part
• Easy to have a chat just using your browser
• Broadcasting could be done with WebRTC too!
• May make sense to have the conversation mixed, though
• If broadcasting with WebRTC, the more the speakers, the more the bandwidth
• If NOT broadcasting with WebRTC, you need a mix to transcode anyway
• More control on additional media (e.g., themes, clips, ads, etc.)
• How do we optimize mixing while still being able to bring people in, in a scalable way?
58. Foundation for our Virtual Event Platform
https://commcon.xyz/session/turning-live-events-to-virtual-with-janus
59. New audio-related Janus efforts
• Modular nature of Janus encourages new functionality
• Not necessarily in new plugins
• VideoRoom, AudioBridge, Streaming plugins can all benefit
• Several activities done, started or planned to enhance audio experience
• Mostly in AudioBridge... (due to the nature of the plugin)
• ... but some features actually available to all plugins!
• Many coming from requirements for our Virtual Event Platform
• But we like to experiment as well!
63. Multiopus: 5.1 and 7.1 surround audio
• This is little known, but Chrome does support surround audio in WebRTC
• Not really documented or standardized, though
• Mostly just there because it’s used by Stadia, today
• Multiopus (5.1 and 7.1)
• Each packet basically carries multiple stereo Opus streams (multistream Opus, as used in Ogg)
• Number of streams determines number of channels (SDP munging for mapping)
• We have a cool demo, which currently doesn’t work due to a bug in Chrome...
• https://janus.conf.meetecho.com/multiopus.html
Pull request (now merged)
https://github.com/meetecho/janus-gateway/pull/2059
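Since multiopus isn't standardized, clients have to munge the SDP themselves to advertise it. A sketch of the 5.1 case, using the rtpmap/fmtp values observed in Chrome's unofficial behavior; the payload type (120) and the helper itself are hypothetical, not part of any API:

```python
def add_multiopus(sdp: str, pt: int = 120) -> str:
    """Munge an SDP offer to advertise 5.1 surround via multiopus.
    channel_mapping / num_streams / coupled_streams mirror Chrome's
    undocumented 5.1 configuration (4 streams, 2 of them coupled)."""
    out = []
    for line in sdp.splitlines():
        if line.startswith("m=audio"):
            line += f" {pt}"  # append the new payload type to the m-line
        out.append(line)
    out.append(f"a=rtpmap:{pt} multiopus/48000/6")
    out.append(f"a=fmtp:{pt} channel_mapping=0,4,1,2,3,5;"
               "num_streams=4;coupled_streams=2")
    return "\r\n".join(out)

offer = "m=audio 9 UDP/TLS/RTP/SAVPF 111\r\na=rtpmap:111 opus/48000/2"
print(add_multiopus(offer))
```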
68. Playback of pre-recorded streams in AudioBridge
• We kinda had it already, but in a different plugin
• Streaming plugin always supported streaming static audio files via WebRTC
• Initially G.711 files only, now Opus as well (check the demos online!)
• We needed it in AudioBridge as well
• e.g., to play announcements or background music
• Basically a way to play an Opus file in an Opus room
• Now used in several contexts in production environments
• e.g., WebRTC-based auctions services
Pull request (now merged)
https://github.com/meetecho/janus-gateway/pull/2088
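The playback feature above is driven by a plugin request. A sketch of what an application might send, per the AudioBridge API that PR introduced; the room ID, file path and `file_id` are illustrative values, not taken from the slides:

```python
import json

# Play an Opus file into an AudioBridge room (e.g., an announcement).
# "play_file" is the request added for this; "loop" keeps it repeating,
# which suits the background-music use case mentioned above.
play = {
    "request": "play_file",
    "room": 1234,
    "filename": "/opt/janus/share/announcement.opus",  # Opus file, as the room
    "file_id": "ann1",                                 # handle to stop it later
    "loop": False,
}
print(json.dumps(play))
```

The file must already be Opus, since the plugin decodes it straight into the mix with no further transcoding.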
73. Spatial audio support in AudioBridge
• Besides experimental surround, WebRTC supports “regular” stereo too
• Easy to enable via negotiation in SDP
• Supported in most Janus plugins that simply relay media
• AudioBridge so far limited to mono only, though
• Decoding and mixing needs to be aware of number of channels
• Stereo mixing more complex as well
• Effort started to add stereo mode, and use it for spatial audio
• Participants joining can send/receive stereo audio
• Spatial positioning for participants in stereo space (0=L, 50=C, 100=R)
Pull request (now merged)
https://github.com/meetecho/janus-gateway/pull/2446
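A participant's `spatial_position` in the 0(L)..100(R) space above maps to per-channel gains when mixing into stereo. A minimal sketch using a simple linear pan law; this is illustrative arithmetic, not necessarily the exact law the plugin applies:

```python
def pan_gains(position: int) -> tuple[float, float]:
    """(left, right) mixing gains for a participant placed at `position`
    in the 0=L, 50=C, 100=R space used by the AudioBridge spatial API."""
    if not 0 <= position <= 100:
        raise ValueError("position must be 0..100")
    right = position / 100.0
    return (1.0 - right, right)

print(pan_gains(0))    # hard left
print(pan_gains(50))   # center: equal energy in both channels
print(pan_gains(100))  # hard right
```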
78. Support for plain-RTP participants in AudioBridge
• AudioBridge plugin conceived as a simple, and WebRTC-only, audio mixer
• Only WebRTC users allowed to join, via the Janus API
• As such, so far no way for, e.g., SIP users to participate
• Backend plain-RTP channel added to address that shortcoming
• Janus API still needed to add and manage RTP participants
• SDP crafting up to application (AudioBridge won’t do SIP/SDP for you)
• Opus still a requirement for participation (no further transcoding)
• In the future, plan is to use this for cascaded mixing as well
Pull request (now merged)
https://github.com/meetecho/janus-gateway/pull/2464
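Concretely, a plain-RTP participant is added with a regular `join` request carrying an `rtp` object instead of doing WebRTC negotiation. The field names below follow that PR; the addresses and payload type are illustrative, and the SIP/SDP exchange that produced them is up to the application:

```python
import json

# Join an AudioBridge room as a plain-RTP (e.g., SIP-originated) participant.
join = {
    "request": "join",
    "room": 1234,
    "display": "SIP trunk",
    "rtp": {
        "ip": "192.0.2.10",   # where the peer sends its Opus RTP from
        "port": 5004,         # ...and receives the room mix on
        "payload_type": 100,  # Opus payload type agreed out of band
    },
}
print(json.dumps(join))
```

The stream must already be Opus: the plugin won't transcode, so the application has to negotiate that in its own SDP.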
83. Grouping participants in AudioBridge
• We introduced AudioBridge RTP forwarders before
• Easy way to forward a room mix, e.g., for broadcasting purposes
• Sometimes helpful to only get a mix of some participants
• e.g., for selective processing of a class of participants
• Added participants tagging functionality to create “groups”
• Nothing changes for participants (they can still all hear each other)
• RTP forwarders, though, can now forward everything or just a group
Pull request (in testing phase)
https://github.com/meetecho/janus-gateway/pull/2653
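With groups in place, an `rtp_forward` request can target just one tag instead of the full room mix. A sketch of such a request, with `group` being the parameter that PR adds; host, port and the group name are illustrative:

```python
import json

# Forward only the mix of participants tagged "speakers" to an external
# address, e.g., for selective processing or broadcasting of that subset.
forward = {
    "request": "rtp_forward",
    "room": 1234,
    "group": "speakers",     # omit this to forward the whole room mix
    "host": "203.0.113.5",
    "port": 6000,
    "codec": "opus",
}
print(json.dumps(forward))
```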
88. Audio redundancy via RED
• Old RTP payload format for Redundant Audio Data (RED)
• https://datatracker.ietf.org/doc/html/rfc2198
• Recently added to Chrome on an experimental basis
• https://webrtchacks.com/red-improving-audio-quality-with-redundancy/
• https://webrtchacks.com/implementing-redundant-audio-on-an-sfu/
• Basically a simple way to group multiple audio frames in a single RTP packet
• Current audio frame + one or more previously sent frames
• Allows recipient to easily recover lost packets at the cost of more bandwidth
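The grouping described above follows the RFC 2198 wire format: each redundant block gets a 4-byte header (F flag, block payload type, 14-bit timestamp offset, 10-bit length), and the primary frame closes the packet with a 1-byte header. A sketch of the packing side, with the payload type (111) and timestamp offset (960 = one 20 ms Opus frame at 48 kHz) as example values:

```python
def pack_red(primary: bytes, pt: int, redundant: list[tuple[int, bytes]]) -> bytes:
    """Pack one primary frame plus older frames into an RFC 2198 RED
    payload. `redundant` is a list of (timestamp_offset, frame) pairs,
    oldest first; `pt` is the encapsulated codec's payload type."""
    out = bytearray()
    for ts_off, frame in redundant:
        assert ts_off < (1 << 14) and len(frame) < (1 << 10)
        out.append(0x80 | pt)                      # F=1: another block follows
        out.append(ts_off >> 6)                    # top 8 bits of 14-bit offset
        out.append(((ts_off & 0x3F) << 2) | (len(frame) >> 8))
        out.append(len(frame) & 0xFF)              # low 8 bits of block length
    out.append(pt)                                 # F=0: primary block header
    for _, frame in redundant:                     # block data, headers first
        out += frame
    out += primary
    return bytes(out)

red = pack_red(b"\x01\x02", 111, [(960, b"\xaa\xbb\xcc")])
print(red.hex())  # → ef0f00036faabbcc0102
```

The cost is visible directly: every packet now carries the previous frame(s) too, so losing one packet no longer loses the audio it contained.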
96. Support for audio redundancy via RED in Janus
• Support in Janus needed work in both core and plugins
• Core needed to negotiate RED, and be able to unpack/pack RED
• Plugins needed to be able to do something with the data
• Important to support both endpoints that can do RED and those that can’t
• RED-to-RED and nonRED-to-nonRED are easy
• In other cases, Janus may have to pack/unpack RED accordingly
• First integration basically done in most plugins
• EchoTest, VideoCall, SIP, NoSIP, Record&Play, Streaming, recordings post-processor
• “Big guns” like AudioBridge and VideoRoom to come next!
If you want to learn more... (PR in testing phase)
https://www.meetecho.com/blog/opus-red/
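Bridging RED and non-RED endpoints means the server must be able to take a RED payload apart again. A sketch of the unpacking side, inverting the RFC 2198 block headers (the hex literal is an example RED payload carrying one redundant frame plus the primary):

```python
def unpack_red(payload: bytes):
    """Split an RFC 2198 RED payload into (timestamp_offset, frame) blocks;
    the final block (offset 0) is the primary frame."""
    headers, i = [], 0
    while payload[i] & 0x80:                       # F=1: more headers follow
        pt = payload[i] & 0x7F
        ts_off = (payload[i + 1] << 6) | (payload[i + 2] >> 2)
        length = ((payload[i + 2] & 0x03) << 8) | payload[i + 3]
        headers.append((pt, ts_off, length))
        i += 4
    primary_pt = payload[i] & 0x7F                 # F=0: last, 1-byte header
    i += 1
    blocks = []
    for pt, ts_off, length in headers:
        blocks.append((ts_off, payload[i:i + length]))
        i += length
    blocks.append((0, payload[i:]))                # primary frame is the rest
    return primary_pt, blocks

pt, blocks = unpack_red(bytes.fromhex("ef0f00036faabbcc0102"))
print(pt, blocks)
```

A gateway can then forward only the primary frames to endpoints that didn't negotiate RED, or re-pack frames it has buffered for those that did.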