A GEARVR NEWS ARTICLE
May 27th just so happens to be my birthday, and I think this one could be quite grand thanks to my love for VR and everything VR has done to elevate human achievement, with nVidia being no exception. More importantly, and more on point: May 27th is the date on which nVidia releases its newest flagship GeForce: the GTX 1080. Happy birthday to me!
With the GTX 1080 comes a slew of new features and special abilities, and well it should, given that the budget for these advancements ran over 1 billion dollars in R&D. There are literally dozens of major advancements, and thousands of minor ones, contributed from nearly every level of nVidia’s entire chain of workers. The outcome of all of this innovation is a new GPU that all by itself … (well, let’s let the picture speak its 1,000 words here) … is:
Well, I guess they didn’t need 1,000 words after all (more like 3 words and some numbers, ha!). More important for our purposes at GearVRNews, however, are the specific advancements which have been added for the sake of VR. There are many, and they are of such a level of excellence that it blows the mind. A billion dollars is a lot of money, but I think it went to good use, judging from what I’ve just seen watching the 2016 nVidia Special Event on YouTube, hosted by Jen-Hsun Huang (shown below), a very intelligent and eloquent speaker. I found the lecture invigorating and fun; after this rousing talk, I now understand why he is the CEO of nVidia.
Huang talked excitedly about Pascal, and even called the GTX 1080 his “child” at one point in the demo, beaming with pride over its major new capabilities.
One of those major capabilities brought forward with Pascal (the new core chip and code base for the GTX 1080) is a technology nVidia created called “Simultaneous Multi-Projection.” This feature has a STEREO option where both eye ports are rendered to… inside a SINGLE PASS. In case you think you misread that, I’ll say it again, differently: Stereoscopic 3D is now SINGLE PASS. That’s right; whole scenes no longer have to be re-rendered entirely for each eye. That alone can be a 40-50% improvement!
How is this achieved? Much of it is still mysterious at this point, but I’ll share what I know so far. The scene is built at a higher FOV inside the hardware and projected through virtual screens for each eye, all inside a single pass. No more halving the GPU’s rendering power for VR games on a PC. This technology can also drive up to 32 virtual screens, which enables an impressive additional concept I feel MUST be duplicated by Oculus and Samsung for GearVR in the future: Multi-Projection Eye Warping, instead of the current tech, which might be called Image-Stretched-Concavity Deformation.
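To make the two-pass vs. single-pass difference concrete, here is a tiny toy model in Python (my own illustration, not nVidia’s actual API): the expensive part that gets duplicated in classic stereo is the per-eye scene traversal and draw-call submission, which Simultaneous Multi-Projection performs only once.

```python
# Toy model of stereo rendering cost (illustrative only, not real GPU code).
# The "submission" count stands in for the CPU/geometry work of walking the
# scene and issuing draw calls, which is what single-pass stereo avoids doubling.

def two_pass_stereo(draw_calls):
    """Classic stereo: the whole scene is submitted once per eye."""
    submissions = 0
    for eye in ("left", "right"):
        for _ in range(draw_calls):   # full traversal repeated per eye
            submissions += 1
    return submissions

def single_pass_stereo(draw_calls):
    """SMP-style stereo: one traversal, replayed to both eye ports on-chip."""
    submissions = 0
    for _ in range(draw_calls):       # single traversal for both eyes
        submissions += 1
    return submissions

print(two_pass_stereo(1000))    # 2000 submissions
print(single_pass_stereo(1000)) # 1000 submissions, half the traversal work
```

Halving the traversal work does not halve the entire frame time (pixel shading still happens for both eyes), which is one plausible reason the real-world gains quoted are 40-50% rather than a clean 2x.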
This new multi-projection feature is very advanced. It removes the pre-warp stretching required to counteract the bending of the lenses in HMD devices. Current technology has to warp the screen concavely to counteract the convex bending of the lenses, but this means the rendering engine has to render a much larger image than what’s needed and stretch it, which becomes quite blurry at the borders. It also means 20-30% more pixels are wasted for each eye: never actually displayed, but still computed and textured as if they were. Can you imagine how the GearVR would benefit from this technology?
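Here is a rough back-of-the-envelope illustration of where a 20-30% waste figure can come from (the per-eye resolution and overscan factor below are my own assumptions, not official specs):

```python
# Illustration (assumed numbers): pre-warp rendering draws an oversized
# rectangular image, then the concave warp only samples part of it, so a
# fraction of the shaded pixels never reaches the display.

eye_width, eye_height = 1080, 1200   # assumed per-eye panel resolution
overscan = 1.15                      # assumed 15% overscan on each axis

rendered = (eye_width * overscan) * (eye_height * overscan)
displayed = eye_width * eye_height
wasted_fraction = 1 - displayed / rendered

print(f"Pixels rendered per eye: {rendered:,.0f}")
print(f"Wasted: {wasted_fraction:.0%}")  # ~24%, inside the 20-30% range cited
```

A modest 15% overscan per axis compounds to roughly 32% more pixels rendered, which is how the wasted share lands in the 20-30% neighborhood.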
Now, with multi-projection, they render 4 windows per eye, where each window is rotated (see example directly above) to create the same concavity that was previously achieved by stretching. This means the image quality of the entire image, but most notably the outer edges, will be somewhat improved for VR, with less blurriness around the “sweet spot.” This tech just may help improve VR’s visual fidelity in the long run.
Pascal operates at 2 times the performance of a Titan X card with 3 times the efficiency, as shown in the image above. In other words, it runs twice as fast while drawing only about two-thirds of the energy of a Titan X!
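A quick sanity check on those claimed ratios, worked out in a couple of lines:

```python
# Sanity check on the slide's claims: if performance doubles while
# performance-per-watt triples, the implied power draw follows directly.

perf_ratio = 2.0        # claimed performance vs. Titan X
efficiency_ratio = 3.0  # claimed performance-per-watt vs. Titan X

power_ratio = perf_ratio / efficiency_ratio
print(f"Implied power draw vs. Titan X: {power_ratio:.0%}")  # ~67%
```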
I’ve read everything I can get my hands on (which still isn’t much at this juncture), and it all still sounds impossible; I don’t quite grasp how it all really works yet. In time, I hope others will break it down for us in far more detail. For now, though, it’s enough to know that a crazily detailed Unity demo running at 65 FPS in standard mode (two passes, one for each eye) suddenly jumped to 92 FPS when Single-Pass Stereo Mode was turned on. That is a vast improvement, especially for PC-based VR, where 90 FPS is the minimum required for maximum comfort.
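The quoted demo numbers are worth checking with a little arithmetic:

```python
# The Unity demo figures as quoted: 65 FPS with two passes, 92 FPS with
# single-pass stereo. That works out to roughly a 42% frame-rate gain,
# and it crosses the 90 FPS comfort threshold for PC VR.

before_fps, after_fps = 65, 92
improvement = after_fps / before_fps - 1

print(f"Improvement: {improvement:.0%}")            # 42%
print(f"Meets 90 FPS minimum: {after_fps >= 90}")   # True
```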
Also in the mix with Pascal is a software package called Ansel. You can invoke Ansel at any point in your game with a keystroke. It is a screen-grabbing tool like nothing else: your HUD goes away so you get clean screenshots, and you can move the camera around until you find the perfect spot to take your photo. Ansel allows for images up to 65,000 pixels across; it zooms into the actual game, takes the necessary slew of photos, and stitches them all together for you into one large image. A final image could be 5-10 GB if you aren’t careful.
Ansel also lets you customize your screenshot with custom filters, vignettes, coloration, and camera angles. You can even create 360-degree photos, though no word was given on whether those could be done in Stereoscopic 3D to boot. If so, I envision a lot of people posting Stereo 360 photos of epic places inside their games for GearVR users to enjoy. That would be quite amazing to see, for sure: not just real-world 360 photos, but gaming 360 photos. And stereoscopy would make them even better in my book.
Now that a way exists to reduce the number of pixels being rendered to the screen, and further to render both eyes in a single pass, I seriously think Oculus and Samsung need to look urgently at replicating a form of this technology for GearVR and Android. First, we GearVR users could gain a theoretical 30% speed increase by rendering ONLY the pixels each eye actually needs. Then the system shifts the geometry to take two images of the same scene from two different angles simultaneously, one for each eye, inside the same pass. This effectively halves the rendering requirements for VR! And since you were already 30% more efficient to begin with, it could give GearVR 60-70% more rendering power to work with overall. For games that don’t use Vulkan, that would be awesome enough. But for those that do, stacking these speed increases on top of what Vulkan already provides would be incredible.
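The 60-70% figure can be sketched with some back-of-the-envelope math. The split between geometry work and pixel work below is my own assumption, purely illustrative; real frames vary a lot:

```python
# Illustrative model (assumed numbers): combine a ~30% pixel reduction from
# rendering only what each lens needs with single-pass stereo, which removes
# the duplicated per-eye geometry/submission work.

pixel_work = 0.70        # assume 30% fewer pixels shaded per eye
geometry_share = 0.40    # assume geometry/submission is 40% of frame cost
pixel_share = 1 - geometry_share

# Single-pass stereo halves the geometry share; pixel work shrinks by 30%.
new_cost = geometry_share * 0.5 + pixel_share * pixel_work
extra_power = 1 / new_cost - 1

print(f"Effective extra rendering power: {extra_power:.0%}")  # ~61%
```

Under these assumptions the frame costs about 62% of what it used to, freeing up roughly 61% extra headroom, which is at least in the same ballpark as the 60-70% claim.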
By 2019, Samsung is promising 12K screens. With some of the ideas nVidia has brought forth with Pascal in their new GeForce GTX 1080, I think what will be possible on GearVR by that time will leave jaws dropped on the floor for years to come. GearVR might have started off a bit slow for most people’s tastes, but by 2019, I think mobile VR will have a great footing, with far more impressive visuals than we could have ever hoped for. nVidia has shown Oculus and Samsung the way; now their talented engineers need to dig into the GTX 1080 and see what they can discover for themselves about what it does, and how it does it. The multi-projection technology is mind-blowing, and I truly hope Android will get some version of it for GearVR, because then VR, even mobile VR, could become fantastically stunning, capable of so much more than what it offers today.
Just as it should.
Categories: Hardware Reviews