During a RenderMan forum at SIGGRAPH last year (“Stupid RenderMan Tricks”), a Google VR engineer named Mach Kobayashi presented a method for rendering 360-degree stereoscopic images using Pixar’s RenderMan. At the time, I thought it was less “stupid trick” and more “awesome technique” that I couldn’t wait to try. Recently, I tried it out on a 3D scene of NASA’s earth science satellites.
360-degree stereo render:
360-degree, or omnidirectional, rendering seems pretty straightforward – the trick is accurately rendering stereo 360-degree images. As Mach points out, if you shoot rays in every direction from two static points (left/right eye), the separation between the two rays varies with the ray direction, resulting in a varying interpupillary distance (the distance between your eyes, which should remain constant). Mach’s solution was to rotate the two ray origins about a circle while rendering, producing a constant interpupillary distance between the two rays (and correct stereo/3D images).
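Mach’s rotating-origin trick is easy to prototype outside the shader. Here is a minimal Python sketch of the idea (the function name and coordinate conventions are mine, not from Mach’s code): for each viewing direction, the eye origin is offset along a circle of diameter equal to the interpupillary distance, perpendicular to the horizontal view direction, so eye separation stays constant no matter where the ray points.

```python
import math

def ods_ray(theta, phi, ipd, eye):
    """Omnidirectional-stereo ray for one eye.

    theta: azimuth in radians (0..2*pi)
    phi:   elevation in radians (-pi/2..pi/2)
    ipd:   interpupillary distance in scene units
    eye:   -1 for the left eye, +1 for the right eye
    """
    r = ipd / 2.0
    # The origin sits on a circle of radius ipd/2, offset
    # perpendicular to the horizontal view direction -- not at
    # one of two fixed eye points.
    origin = (eye * r * math.cos(theta), 0.0, -eye * r * math.sin(theta))
    # View direction for this (theta, phi) sample of the
    # equirectangular image.
    direction = (math.sin(theta) * math.cos(phi),
                 math.sin(phi),
                 math.cos(theta) * math.cos(phi))
    return origin, direction
```

Because the offset is always perpendicular to the view direction, the left and right origins for any azimuth are exactly `ipd` apart – the property that two fixed eye points lack.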
I recently had the opportunity to implement this method within our pipeline at the NASA Scientific Visualization Studio. We’re starting to explore VR content creation, and 360 videos seem like a great stepping stone toward full, interactive VR experiences. We already use Maya and RenderMan in our pipeline, so I was curious to see how hard it would be to create a stereo, omnidirectional camera. It turns out to be pretty straightforward, thanks to Mach’s work. I added a few lines to Mach’s shader to adjust the camera location and viewing direction, which allowed me to ‘fly’ the omnidirectional camera through my scene.
I started with Mach’s shader code, available here. I made a few changes to add camera-location and yaw controls, compiled the shader, then imported it into Slim.
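My additions amount to a rigid transform applied to each ray: rotate about the vertical axis by the yaw angle, then translate the origin to the camera location. In Python terms (a sketch of the math, not the actual RSL edits – the names here are mine):

```python
import math

def transform_ray(origin, direction, cam_pos, yaw):
    """Rotate a ray about the Y (up) axis by yaw radians,
    then translate its origin to the camera position."""
    c, s = math.cos(yaw), math.sin(yaw)

    def rot_y(v):
        x, y, z = v
        return (c * x + s * z, y, -s * x + c * z)

    ox, oy, oz = rot_y(origin)
    tx, ty, tz = cam_pos
    # Directions rotate with the camera but do not translate.
    return (ox + tx, oy + ty, oz + tz), rot_y(direction)
```

Keyframing `cam_pos` and `yaw` over the shot is what lets the omnidirectional camera ‘fly’ through the scene.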
In Maya, I have a locator (highlighted in green) to represent the omnidirectional camera, and a camera pointed at a plane (highlighted in white). I attach the omnidirectional camera shader to the plane and render the scene from the perspective of the camera pointed at the plane.
The resulting image is a top/bottom 360 degree stereoscopic rendering. A video in this format can be uploaded to YouTube or Facebook (after adding some spherical video metadata) and be viewed in VR using a mobile viewer like Google Cardboard (I use the Google Tech C1 Glass, but any viewer works). You can also play these videos using standard media players compatible with the Oculus Rift, HTC Vive, GearVR, etc.
If you don’t have a VR viewer, you can still look around your content on your phone (move the phone to change the view) or in certain web browsers (click and drag to pan around). In these cases, YouTube recognizes that the content should not be played back in stereo and displays only one half of the image.
One interesting note – the apparent scale when viewing an image in stereo/VR is determined by the interpupillary distance. The difference between the left and right images is how our brain judges how large an object is and how far away it is. The smaller the interpupillary distance, the less variation between the left/right images (the rendering rays originate from nearly the same positions). When I first rendered my test scene with a spinning Earth, the interpupillary distance was unrealistically large. Viewed in VR, the Earth looked like it was the size of a baseball in front of me, instead of the massive planet I expected to see. Easy enough to fix – just reduce the interpupillary distance. For real-world scale, make sure your scene/object units match the units of a real interpupillary distance. In the image below, the interpupillary distance has been lowered dramatically, so the left and right images appear very similar (less left/right variation than in the image above). In VR, the Earth will still appear to be about the same size in the video, but it will be ‘less 3D,’ telling our brains that the object is much farther away, and therefore, much larger.
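The scale effect can be quantified with the vergence angle: for a point straight ahead at distance d, the two eyes’ lines of sight differ by 2·atan(ipd / 2d). Halving the interpupillary distance halves the disparity, which the brain reads as a farther, and therefore larger, object. A quick sanity check in Python (the function name is mine):

```python
import math

def vergence_deg(ipd, distance):
    """Angle, in degrees, between the two eyes' lines of sight
    to a point straight ahead at the given distance.
    ipd and distance must be in the same units."""
    return math.degrees(2.0 * math.atan(ipd / (2.0 * distance)))

# A typical human ipd (~0.064 m) looking at an object 2 m away
# gives roughly 1.8 degrees of vergence; halving the ipd gives
# roughly half that, reading as a more distant object.
```

This is also why the scene/object units have to match the interpupillary distance units: the render bakes this ratio into the left/right image pair.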
Videos coming soon!