Xxx posted on 2016-8-29 03:46: "OP, your write-up doesn't explain the principle behind Valve's Lighthouse at all. Please don't mislead people."

This post was meant as a discussion piece in the first place; the principles were only touched on briefly, and the details are covered in the links attached in the article. Also, which part of the article would actually "mislead" anyone? If there is an error, I will go fix the text.
OP, your write-up doesn't explain the principle behind Valve's Lighthouse at all. Please don't mislead people. This article actually explains it: http://lib.csdn.net/article/vr/10786
ivanTai posted on 2016-3-14 17:22: "A great, professional write-up!!! The difference is obvious at a glance!"

Especially the part about hand tracking.
From what I saw on reddit — in short, the Oculus tracking system has built-in sources of instability:

Hand movement is way faster than head movement. The linear speed of the Vive lasers at the extent of their tracking range is 15 ft × 2π / (1 s / 60) ≈ 3856 mph. You can't move your hand fast enough to change that by a meaningful percentage. You can basically hook the Vive controller up to a string and whip it around as fast as you can and not lose tracking.

The Rift tracking system was optimized initially around tracking only a headset. Even at fast head-movement speeds it loses the optical lock and falls back purely to IMUs; for fast hand speeds they are having lots of trouble. Two forward-facing cameras let them re-identify the LEDs quickly and give them more signal-vs-noise to work with in the edge pixel data, which is why they are stuck with that setup for fast hand movements. By lowering the emit time of the LEDs they get a shorter exposure with less smear, but lose on signal vs. noise; they then make up for it by having two cameras in front instead of one.

With opposing cameras you can slowly walk around the room and play a point-and-click-style adventure game with Oculus in opposing-sensor mode, as long as you don't need to grab things off the ground (for FOV reasons), but you can't do things like swing swords unless you are in a small area covered by both cameras. The vertical FOV is also low enough that you have to tilt the camera to switch from seated to standing.

Photodiodes in Lighthouse don't have the reacquisition problem: each photodiode knows which photodiode it is, whereas the Rift Constellation system has to encode each LED's identifier in pulses over multiple frames. By having Touch visible through two offset front camera views, they can reacquire faster. Source: https://www.youtube.com/watch?v=asduqdRizqs&t=10m48s

Touch was delayed to put lots of computer-vision engineers on the range problem caused by the factors above ("panic piled" on Touch "increasing the scale [range]"). Source: https://www.youtube.com/watch?v=dRPn_LK2Hkc&t=4m30s

(Edit: two forward-facing sensors was billed as a way to improve occlusion resistance for hand-on-hand near interaction, but with opposing sensors that have real range like Lighthouse, you can simply stand in one of the corners without a sensor and look towards the middle of the room: bam, you now have two forward-facing sensors and all the same occlusion resistance.)
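As a quick sanity check on the arithmetic quoted above, here is a minimal Python sketch that reproduces the laser sweep speed figure from the numbers given in the post (60 sweeps per second, ~15 ft tracking radius). The hand-swing speed used for comparison is my own assumed value, not something from the original post.

```python
import math

# Figures taken from the post above: each Lighthouse rotor completes a sweep
# 60 times per second, and the quoted tracking radius is about 15 ft.
SWEEP_HZ = 60          # laser sweeps per second
RANGE_FT = 15          # tracking radius in feet
FT_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

# Distance the laser spot travels along a circle at the edge of the range,
# covered once per sweep.
circumference_ft = 2 * math.pi * RANGE_FT
speed_ft_per_s = circumference_ft * SWEEP_HZ
speed_mph = speed_ft_per_s * SECONDS_PER_HOUR / FT_PER_MILE
print(f"Laser spot speed at {RANGE_FT} ft: {speed_mph:.0f} mph")  # ~3856 mph

# Assumed figure for comparison: a very fast controller swing (~30 mph).
HAND_SPEED_MPH = 30
print(f"Hand speed as a fraction of sweep speed: {HAND_SPEED_MPH / speed_mph:.2%}")
```

Running this prints roughly 3856 mph, matching the post, and shows a fast hand swing is well under 1% of the sweep speed, which is the point being made about why hand motion barely perturbs Lighthouse tracking.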
宅您老酥 posted on 2016-3-4 11:10

There are already developers building games and software for Oculus Touch, though YouTube has few videos that show both the user's in-headset view and the real-world view at the same time.

Take a look at this introduction video for Oculus Medium (Oculus Medium is a creation tool developed by Oculus):
https://youtu.be/IreEK-abHio

Then compare it with this Tilt Brush by Google video, 【HTC Vive x 漫畫家劉明昆】神來一筆的幻想:
https://youtu.be/Ng_kNGPgFIo

These two clips make the difference in design thinking very clear. Oculus Medium places the two cameras in front of the user, who carefully sculpts objects within a relatively small area. Tilt Brush instead places the Lighthouse base stations at opposite corners, letting the user paint freely through the whole space and stand inside their own work while creating it.

To me, Tilt Brush is far more compelling than Oculus Medium. To quote Disney lead animator Glen Keane: "All the old rules are different. With a tool that lets me create in virtual space, I put on the headset and it's as if I've walked into the paper itself, creating directly inside the world of the page. North, south, east, west — every direction is open to me. Being immersed in that creative space no longer feels like drawing; it feels more like joyfully dancing my own dance, constantly marveling at what I'm making: 'How did I step into such a magical world?'"

Source: 畫出靈魂的輪廓 – HTC Vive 與迪士尼傳奇畫師
http://blog.htcvive.com/tw/2016/02/htc-vive-vr-%e8%bf%aa%e5%a3%ab%e5%b0%bc%e9%a6%96%e5%b8%ad%e5%8b%95%e7%95%ab%e5%b8%ab/

This post was last edited by luyaoting on 2016-3-4 13:31.
A great, professional write-up — bump! It seems the different technical foundations chosen at the start are what gave rise to the differences we see now. Though if I recall correctly, the Rift's positional tracking [for Touch] still hasn't shipped... has it?