Oculus Rift at Convergence

Posted: December 27, 2014 at 3:27 pm

The following was originally posted on Google+ on Dec 18th, 2014.

I experienced the Oculus Rift at Convergence in Banff a few weeks ago and have been meaning to post my reflections. First, a little background: I’ve been doing computer graphics since I was a preteen (I used Digi-Paint in ’86, Imagine 3D in ’86, and multimedia with AmigaVision around ’90; I also tinkered with SGIs running Alias in the early ’90s), and I was lucky enough to go to SIGGRAPH in 1995, where I used a VR system of the time. It was so heavy that it was mounted on an armature you moved around with your hands.

Given the nearly 20 years in between, I had quite high expectations. The most prominent surprise was that I could see the individual RGB subpixels in the display, which made everything look quite blocky. In fact, I think there would be no point in using anti-aliasing: the pixels are so big that it would likely make no difference.

Sure, the graphics are cool, but they look better on an HD screen. I don’t remember seeing the subpixels in 1995, though granted, that is a long time to remember an image. I do remember that everything was flat-shaded and certainly jaggy; I expect that system was based on two 640×480 displays, one for each eye. The visible subpixels in the Oculus totally ruined the experience for me. Sure, it’s cool to be ‘immersed’, but how many people watch a movie at home and pay attention to the speakers or DVD player rather than the content on the screen? Attention allows us to immerse ourselves in content even when it does not fill our field of view. The Oculus was so bad that I wondered why they did not put some diffusion in the lenses: the image is already blurry, so why not make it smooth and blurry?

So I wondered what my expectations were based on, and realized it was screens and attention; that is, I expected pixels on a display filling a small part of my FOV to be the same angular size as those in the HMD. Using my living room (and modest 32″ LCD) as a basis, I worked out that my screen fills about 1/5th of my horizontal field of view (for one eye). Assuming an HD (1280-pixel-wide) image on that screen, an HMD filling my whole FOV (for one eye) would need an image about 6400 pixels wide to match that resolution. I hear the next Oculus Rift will have a 4K display, but that is shared between both eyes, giving only 2K per eye. Assuming the 1995 system was 640×480 per eye, that’s 1280 pixels of total display width, and the Oculus I used was a 1920-pixel-wide display: 20 years for a 50% increase in resolution (yes, it’s also lighter, the rendering is faster, etc.).
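For the curious, here is that back-of-the-envelope arithmetic as a quick Python sketch. The 1/5 FOV fraction, the 1280-pixel “HD” width, and the 640×480-per-eye guess for the 1995 system are my rough assumptions from above, not measurements:

```python
# Back-of-the-envelope check of the resolution argument above.
# Assumptions (mine, not measured): the living-room TV spans roughly
# 1/5 of one eye's horizontal FOV, and "HD" means a 1280-px-wide image.

SCREEN_WIDTH_PX = 1280       # assumed "HD" width on the living-room TV
SCREEN_FOV_FRACTION = 1 / 5  # fraction of one eye's horizontal FOV the TV fills

# Pixels needed across the full FOV to match the TV's angular pixel density
full_fov_width_px = SCREEN_WIDTH_PX / SCREEN_FOV_FRACTION
print(f"Required HMD width per eye: {full_fov_width_px:.0f} px")  # 6400 px

# Compare total display widths across the 20-year gap
width_1995 = 2 * 640   # two 640x480 displays, one per eye (assumed)
width_2014 = 1920      # the Rift's single display, shared between both eyes
print(f"Increase over 20 years: {(width_2014 - width_1995) / width_1995:.0%}")  # 50%
```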

Why so little change? Well, there are lots of reasons, but the main one is that VR is more interesting as an idea than as an experience. It’s the promise of escape and immersion, but it totally ignores the facts of perceptual attention. The Oculus is just another fad, like VR was in the ’90s, and VR will probably come back again after the Oculus fizzles out.

So I guess I’ll wait another 20 years and see what happens. I’m now in the mood to watch The Lawnmower Man.