To produce depth perception, two cameras send different visual data to each eye of the Vive. We achieved this by first placing two planes in Unity at the same location and rendering each camera's webcam feed onto its own plane as a texture. We then found a way to render individual objects in Unity to a single eye, so that each webcam feed is shown only to its corresponding eye.
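A minimal sketch of this setup is shown below, assuming two textured planes and two eye cameras whose culling masks hide the opposite eye's plane. The layer names "LeftEyeOnly" and "RightEyeOnly" and the field names are hypothetical; they are not taken from the project itself.

```csharp
using UnityEngine;

// Sketch: show each physical webcam's feed to only one eye of the HMD by
// texturing two co-located planes and culling each plane from the other
// eye's camera. Assumes the two layers exist in the project settings.
public class StereoWebcamFeed : MonoBehaviour
{
    public Renderer leftPlane;     // plane shown to the left eye
    public Renderer rightPlane;    // plane shown to the right eye
    public Camera leftEyeCamera;   // camera rendering only the left eye
    public Camera rightEyeCamera;  // camera rendering only the right eye

    void Start()
    {
        // Pick the two connected webcams (assumes at least two devices).
        WebCamDevice[] devices = WebCamTexture.devices;
        if (devices.Length < 2)
        {
            Debug.LogError("Two webcams are required for stereo passthrough.");
            return;
        }

        var leftFeed = new WebCamTexture(devices[0].name);
        var rightFeed = new WebCamTexture(devices[1].name);
        leftFeed.Play();
        rightFeed.Play();

        // Render each feed onto its own plane as a texture.
        leftPlane.material.mainTexture = leftFeed;
        rightPlane.material.mainTexture = rightFeed;

        // Put each plane on a layer that only one eye's camera renders.
        int leftLayer = LayerMask.NameToLayer("LeftEyeOnly");
        int rightLayer = LayerMask.NameToLayer("RightEyeOnly");
        leftPlane.gameObject.layer = leftLayer;
        rightPlane.gameObject.layer = rightLayer;

        // Each camera targets one eye and culls the other eye's layer.
        leftEyeCamera.stereoTargetEye = StereoTargetEyeMask.Left;
        rightEyeCamera.stereoTargetEye = StereoTargetEyeMask.Right;
        leftEyeCamera.cullingMask &= ~(1 << rightLayer);
        rightEyeCamera.cullingMask &= ~(1 << leftLayer);
    }
}
```

Splitting the stereo pair across layers and per-eye cameras is one way to get per-eye rendering; the project's actual mechanism may differ, but the textured-plane-per-feed idea is the same.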
Unity has limited support for webcams, so the camera output is not as good as it could be. Unity processes the camera feeds on the CPU rather than the GPU, so both the frame rate and the image quality are low. Third-party assets are available for accessing connected cameras, but they are either too expensive or unable to display both cameras at the same time. In the future we hope to use better options as Unity is updated, or to find a way ourselves to process the webcam feeds on the GPU.