We can actually sort the player properly in complex environments, rather than rendering them as a flat cutout! You can see the hand reaching forward over a filing cabinet while the rest of the body sorts behind it. Knowing the depth, we can avoid so many sorting and occlusion problems!
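At its core, per-pixel depth sorting boils down to something like the sketch below. This is a minimal numpy illustration, not the actual implementation: the array names are assumptions, the two depth buffers are assumed already aligned into the same space (which is the hard part in practice), and a real engine would do this on the GPU.

```python
import numpy as np

def composite_by_depth(scene_rgb, scene_depth, player_rgb, player_depth):
    """Per-pixel depth sort: show whichever source is nearer the camera.

    Inputs are hypothetical H x W x 3 color arrays and H x W depth arrays;
    both depths are assumed to be in the same space already.
    """
    player_in_front = player_depth < scene_depth   # H x W boolean mask
    return np.where(player_in_front[..., None], player_rgb, scene_rgb)
```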
No extra software to stream!
No need for OBS for real-time compositing and no need to spend days in After Effects doing compositing in post! That means streaming setups can be simpler, with fewer apps in the content pipeline, making the entire process of producing mixed reality content much easier!
No compositing complications! (seams between layers and shadow issues)
Using the sandwiching method, you lose the ability to cast a shadow between the foreground and background cameras. With the rendering happening in-engine, shadows act as they should and there are no seams.
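For contrast, here's roughly what the sandwich method does (a hedged sketch, not any particular tool's code; all names and array shapes are assumptions). Because each layer is a flat image by the time it's stacked, a shadow the player should cast on the background simply has nowhere to land:

```python
import numpy as np

def sandwich_composite(background, keyed_player, foreground, fg_alpha):
    """Classic 'sandwich': background render, greenscreen-keyed player,
    foreground render, stacked back to front. Illustrative inputs:
    H x W x 3 floats for colors, H x W x 4 for the keyed player
    (RGB + key alpha), H x W in [0, 1] for the foreground alpha.
    """
    player_rgb = keyed_player[..., :3]
    player_a = keyed_player[..., 3:]              # H x W x 1
    out = player_rgb * player_a + background * (1 - player_a)
    fg_a = fg_alpha[..., None]
    # By this point every layer is just a flat image, so a shadow the
    # player should cast onto the background is lost entirely.
    return foreground * fg_a + out * (1 - fg_a)
```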
Static or tracked dolly camera mode!
We can dolly the camera with a tracked controller to allow for shots with motion!
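Conceptually, a tracked dolly is just the spectator camera chasing a tracked pose each frame. Here's a minimal sketch of the idea (the class, the smoothing value, and how the controller position arrives are all assumptions, not the actual implementation):

```python
import numpy as np

def lerp(a, b, t):
    """Linear interpolation between two points."""
    return a + (b - a) * t

class SpectatorCamera:
    def __init__(self):
        self.position = np.zeros(3)

    def follow(self, controller_position, smoothing=0.15):
        """Dolly mode: chase the tracked controller's position each frame.
        A touch of exponential smoothing hides hand jitter; rotation would
        get the same treatment with a quaternion slerp (omitted here)."""
        self.position = lerp(self.position,
                             np.asarray(controller_position),
                             1 - smoothing)

# Each frame: camera.follow(pose_from_tracked_controller)
```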
All on one machine!
No need to use multiple machines to composite mixed reality footage!
Doesn’t actually require wearing the HMD!
Since the depth data is what places you in the scene, you don't need cues from tracked controllers or a tracked HMD to stand in a VR scene. That also means you can have multiple people standing in mixed reality at once, with no wearable hardware at all.
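One way to picture this: anyone (or anything) meaningfully nearer to the depth camera than a captured empty-room background gets pulled into the scene. A minimal sketch, assuming a depth frame in meters and illustrative names throughout:

```python
import numpy as np

def segment_people(depth_frame, background_depth, tolerance=0.05):
    """Mask out anyone standing in front of a captured empty-room
    background. depth_frame and background_depth are H x W arrays in
    meters; 0 means "no reading". No tracked hardware per person is
    needed: anything meaningfully nearer than the background counts.
    """
    valid = depth_frame > 0
    return valid & (depth_frame < background_depth - tolerance)
```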
Lighting!
Now that the user is in-engine, we can utilize dynamic lighting to achieve more believable results!
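As a rough illustration of what in-engine relighting buys you, here's a Lambert-style shading pass driven by normals estimated from the depth image. This is a sketch only: a real engine would do this in a shader with its actual light setup, and every name here is an assumption.

```python
import numpy as np

def relight_player(player_rgb, player_depth, light_dir, ambient=0.2):
    """Shade the camera-fed player with a scene light direction.

    Normals are estimated from the depth image's gradients, then used
    for simple Lambert (N dot L) diffuse shading.
    """
    dzdx = np.gradient(player_depth, axis=1)
    dzdy = np.gradient(player_depth, axis=0)
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(player_depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    diffuse = np.clip(normals @ l, 0.0, 1.0)      # N dot L per pixel
    shade = ambient + (1 - ambient) * diffuse
    return player_rgb * shade[..., None]
```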