Spatial Workstation: Workflow

The Knowledgebase has moved! Please visit the link below for updates and new articles.

https://facebookincubator.github.io/facebook-360-spatial-workstation/KB.html

The Spatial Workstation is an end-to-end pipeline consisting of authoring plugins for popular DAWs, an Encoder, and a real-time Rendering Engine SDK. These components work together to help sound designers create immersive 3D audio soundscapes and have them rendered in real time, with head-tracking, on a multitude of platforms and apps. We have taken the best of game audio and linear sound design and created a compact, end-to-end, turnkey solution for a brand-new medium.

1. Authoring: The DAW plugins let you sculpt a soundscape for 360 videos in sync with a slaved 360 video player. The video player plays back 360 videos in sync with your DAW and lets you "look around" the video or connect an HMD for direct feedback while you mix. Head-tracking/orientation information and other metadata is shared between your DAW and the video player; a hypothetical sketch of this shared state follows the list. Once you are done with the mix, the video player can also be used as a standalone application to test and preview your final mixes.

2. FB360 Encoder: The Encoder is an offline application that takes the output of your DAW session, an 8- or 10-channel mix (depending on whether you mix head-locked stereo audio alongside the spatialised binaural audio), and encodes it into a single .tbe file, a proprietary format that uses lossless compression. The .tbe file contains all the audio and metadata you created during the mixing process; you simply pass it to the application developers, who integrate it into the video player or VR application. The Encoder also supports exporting for Facebook 360, as well as other formats: B-Format, for clients who also wish to deploy on platforms such as Oculus Video and YouTube, and a quad binaural mix for the Samsung VR platform. The 8- versus 10-channel input layout is sketched after this list.

3. Rendering Engine SDK: This is integrated into the final video player application. Integration is a one-time process, and typical integration times range from under a minute (for players developed in Unity) to about half a day for a custom player written in native code. The rendering engine renders the .tbe file in real time with head-tracking data and synchronises the audio to a video stream within the application; a hedged sketch of what such an integration looks like appears below.
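
To make the authoring sync in step 1 concrete: the plugins and the video player continuously exchange the DAW transport position and the listener's head orientation. The sketch below is entirely hypothetical; the actual wire format between the plugin and the player is proprietary, and the struct and field names here are illustrative assumptions, not the real protocol.

    // Hypothetical illustration only: the real plugin/player protocol is
    // proprietary. This struct merely names the two pieces of state the
    // article says are shared: transport position and head orientation.
    #include <cstdio>

    struct SyncState {
        double transportSeconds;  // DAW playhead, so the video follows the session
        float  yaw, pitch, roll;  // head orientation reported by the player/HMD
    };

    int main() {
        SyncState s{12.5, 45.0f, 0.0f, 0.0f};
        std::printf("t=%.2fs yaw=%.0f pitch=%.0f roll=%.0f\n",
                    s.transportSeconds, s.yaw, s.pitch, s.roll);
    }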
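
For step 2, the channel maths comes straight from the article: the spatialised mix is 8 channels, and an optional head-locked stereo pair brings it to 10. The helper below is a minimal sketch of that layout check; the function and constant names are ours, not part of the FB360 Encoder.

    // Minimal sketch of the 8- vs 10-channel input layout described above.
    // The channel counts come from this article; the names are hypothetical.
    #include <cstdio>

    constexpr int kSpatialChannels  = 8; // spatialised binaural mix bed
    constexpr int kHeadLockedStereo = 2; // optional non-spatialised stereo pair

    // Expected channel count of the DAW bounce handed to the Encoder.
    int expectedInputChannels(bool hasHeadLockedStereo) {
        return kSpatialChannels + (hasHeadLockedStereo ? kHeadLockedStereo : 0);
    }

    int main() {
        std::printf("spatial only:          %d channels\n", expectedInputChannels(false)); // 8
        std::printf("spatial + head-locked: %d channels\n", expectedInputChannels(true));  // 10
    }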
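
For step 3, the sketch below shows the shape of a native integration as described above: open the .tbe file, feed the engine a head orientation every frame, and keep the audio clock locked to the video. Every type and method name here is a hypothetical stand-in, not the actual Rendering Engine SDK API; consult the SDK documentation for the real calls.

    // Hypothetical stand-ins for the Rendering Engine SDK: the class and
    // methods below only mirror the flow in step 3 and are not real API.
    #include <cstdio>
    #include <string>

    struct Quaternion { float w = 1, x = 0, y = 0, z = 0; };

    class SpatialAudioEngine {
    public:
        bool openTbe(const std::string& path) {
            std::printf("loading %s\n", path.c_str());
            return true;                               // stub: the SDK would parse the .tbe here
        }
        void setListenerRotation(const Quaternion&) {} // stub: re-steers the binaural render
        void setExternalClock(double)               {} // stub: audio/video drift correction
        void play()                                 {}
    };

    // Called by the player application once per rendered video frame.
    void onVideoFrame(SpatialAudioEngine& engine, const Quaternion& headPose,
                      double videoTimeSeconds) {
        engine.setListenerRotation(headPose);      // render for the current head orientation
        engine.setExternalClock(videoTimeSeconds); // keep audio synchronised to the video
    }

    int main() {
        SpatialAudioEngine engine;
        if (engine.openTbe("mix.tbe")) {
            engine.play();
            onVideoFrame(engine, Quaternion{}, 0.0);
        }
    }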

To preview the final mix on your desktop, load the .tbe file in the video player (supplied with the plugins) in standalone mode and hear the end result combined with the head-tracking data sent to it in real time.
