MPEG ARAF tutorial @ ISMAR 2014

PhD, MBA | Chairman, MPEG 3D Graphics | Lead of the Augmented Reality, Cloud Computing and Interactive Media research team at INT
9 Nov 2014
Slide 1 of 53

Speaker notes

  1. Passing On, Treasure Hunt, Castle Quest, Arduinnae, Castle Crisis
  2. Head tracking is needed to render the audio: 3D Audio can modulate the perceived sound with respect to the user's position and orientation. A similar approach is currently used on the production side, but it can also be applied on the user side, in real time. The 3D position and orientation of each graphical object enriched with audio is known and should be forwarded to the 3D audio engine; positions relative to the user are preferred. [Draw a diagram showing the scene sending the 3D audio engine the relative positions of all sources and getting back the sound for the headphones.] A reference software implementation exists, but it works on files. Its chain is: (1) a 3D Audio decoder (multi-channel), some of whose outputs are objects and higher-order ambisonics; (2) an object renderer. The 3D coordinates are carried as metadata in the bitstream, but an entry point can be added to the object renderer so that it takes its input from the scene.
  3. Same as note 2 (repeated verbatim for slides 3–5).
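The relative-position handoff described in note 2 can be sketched as follows. This is an illustrative computation only; the function name and axis conventions are assumptions, not part of ARAF or of the MPEG reference software. It shows how a world-space audio source position could be expressed in the listener's head-tracked frame before being passed to a 3D audio engine:

```python
import math

def source_in_listener_frame(source_pos, listener_pos, listener_yaw_deg):
    # Hypothetical helper: express a world-space source position in the
    # listener's local frame (x = right, y = up, z = forward), using only
    # the yaw of the head for simplicity. Right-handed, y-up world;
    # at yaw 0 the listener faces +z.
    ox = source_pos[0] - listener_pos[0]
    oy = source_pos[1] - listener_pos[1]
    oz = source_pos[2] - listener_pos[2]
    yaw = math.radians(listener_yaw_deg)
    fwd = (math.sin(yaw), math.cos(yaw))     # forward axis in the x-z plane
    right = (math.cos(yaw), -math.sin(yaw))  # right axis in the x-z plane
    local_x = ox * right[0] + oz * right[1]
    local_z = ox * fwd[0] + oz * fwd[1]
    return (local_x, oy, local_z)

# After a 90-degree head turn toward +x, a source at world +z ends up
# on the listener's left (negative local x).
rel = source_in_listener_frame((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), 90.0)
```

A real renderer (such as the object renderer mentioned in the note) would use the full head orientation (yaw, pitch and roll, or a quaternion) rather than yaw alone, but the idea is the same: the scene resolves source poses relative to the tracked user and hands those to the audio engine each frame.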