Owlchemy VR (creators of the VR game Job Simulator) has introduced its in-house mixed reality video technology on the company's official blog. Using a camera that captures per-pixel scene depth, together with a custom plugin they built for their game engine, the system composites VR gameplay and a live performer into a single video in real time.
The advantages of the technique are as follows:
- Per-pixel depth!
  - We can actually sort the player properly in complex environments, rather than having a flat representation! You can see the hand reaching forward over a filing cabinet while the rest of the body sorts behind. Knowing the depth, we can avoid so many sorting and occlusion problems!
- No extra software to stream!
  - No need for OBS for real-time compositing and no need to spend days in After Effects doing compositing in post! That means that streaming setups can be simplified, avoiding extra apps in the content pipeline, and making the entire process of producing mixed reality content much easier!
- No compositing complications! (seams between layers and shadow issues)
  - Using the sandwiching method, you lose the ability to cast a shadow between the foreground and background cameras. With the rendering happening in-engine, shadows act as they should and there are no seams.
- Static or tracked dolly camera mode!
  - We can dolly the camera with a tracked controller to allow for shots with motion!
- All on one machine!
  - No need to use multiple machines to composite mixed reality footage.
- Doesn't actually require wearing the HMD!
  - Without needing cues from tracked controllers or tracked HMDs, you can still stand in a scene in VR. This also means you can have multiple users standing in mixed reality without hardware.
- Lighting!
  - Now that the user is in-engine, we can utilize dynamic lighting to achieve more believable results!
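The per-pixel depth sorting described above comes down to a simple rule: for each pixel, keep whichever source (the live camera feed or the engine render) is nearer to the camera. The sketch below is not Owlchemy's actual plugin code, just a minimal NumPy illustration of that idea, assuming both sources share the same resolution and a common depth scale.

```python
import numpy as np

def composite_by_depth(real_rgb, real_depth, render_rgb, render_depth):
    """Per-pixel depth compositing: at each pixel, keep the source
    that is closer to the camera (smaller depth value wins)."""
    nearer = real_depth < render_depth       # True where the performer occludes the scene
    mask = nearer[..., np.newaxis]           # broadcast the mask over the RGB channels
    return np.where(mask, real_rgb, render_rgb)

# Toy 2x2 frame: the "performer" (white) is at depth 1.0 in the left
# column (in front of the scene at depth 2.0) and at depth 3.0 in the
# right column (behind it).
real_rgb = np.full((2, 2, 3), 255, dtype=np.uint8)   # white = camera feed
render_rgb = np.zeros((2, 2, 3), dtype=np.uint8)     # black = engine render
real_depth = np.array([[1.0, 3.0],
                       [1.0, 3.0]])
render_depth = np.full((2, 2), 2.0)

frame = composite_by_depth(real_rgb, real_depth, render_rgb, render_depth)
# Left column shows the performer; right column shows the scene.
```

This is why the hand can reach forward over a filing cabinet while the rest of the body sorts behind: the decision is made independently at every pixel rather than once for the whole person.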
The technology is still being refined; once it is finished, it will be released for the public to try.
Article source