• VR is 7 times more demanding of your PC than 1080p gaming, say NVIDIA
[QUOTE=paul simon;49460188]How would you solve it without rendering the scene twice?[/QUOTE] I guess you could move the camera every other frame, so one eye is delayed by a frame. While that would produce the same 3D effect, it would halve your effective frame rate, so there's no real way around it.
[QUOTE=Ylsid;49460816]You still have the head tracking[/QUOTE] Head tracking doesn't make it VR.
Good thing this won't affect me, I couldn't afford VR. Doesn't really seem that impressive to me, I've never really cared about graphics in games.
[QUOTE=Clavus;49460909]Lets get some numbers in here: 1080p at 60hz = 1920x1080x60 = 124,416,000 pixels per second. Oculus Rift = 2160x1200x90 = 233,280,000 pixels per second. But wait! You have to render at 1.4x the resolution to compensate for the barrel distortion (which in turn compensates for the lenses). So what your GPU is actually doing: 3024x1680x90 = [B]457,228,800 pixels per second[/B]. Now also mind that the scene is rendered twice. And larger FOV means more of the world is visible, meaning less scenery is occluded. There are some optimizations done of course. You can cut away some parts of the screen that get lost in the distortion transform to lower the amount of pixels that need shading. You can do several optimizations to lower the double scene rendering overhead. You can compensate for some dropped frames by warping the last complete frame. You can render the outer edges of your vision at a lower resolution. That last one is pretty important. Imagine a 4K screen for example: 3840x1.4x2160x1.4x90 = a whopping [B]1,463,132,160 pixels per second.[/B][/QUOTE] Just so everyone knows, the Rift works out to about 457M pixels per second, and the scene being rendered twice is already included in that number.
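If anyone wants to sanity-check the numbers in that quote, they're just width x height x refresh rate. A quick script reproducing them (the ~1.4x-per-axis render-target oversize is the figure from the quote above):

```python
# Sanity check of the pixel-throughput arithmetic from the quoted post.
# Each figure is pixels shaded per second: width * height * refresh rate.

def pixels_per_second(width, height, hz):
    return width * height * hz

flat_1080p = pixels_per_second(1920, 1080, 60)   # ordinary 1080p60 monitor
rift_panel = pixels_per_second(2160, 1200, 90)   # Rift panel at 90 Hz
# ~1.4x per axis render-target oversize to survive the barrel distortion:
rift_render = pixels_per_second(int(2160 * 1.4), int(1200 * 1.4), 90)

print(flat_1080p)   # -> 124416000
print(rift_panel)   # -> 233280000
print(rift_render)  # -> 457228800
print(rift_render / flat_1080p)  # -> 3.675
```

So raw pixel count alone gets you to about 3.7x over 1080p60; NVIDIA's "7x" figure presumably also bakes in per-eye overhead and the 90FPS-or-bust latency requirement.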
[QUOTE=Zezibesh;49460120]don't you have to render the scene twice though[/QUOTE] Your eyes are at two different positions. There are parts of a scene your left eye can see that your right eye cannot, and your left eye sees things at a slightly different angle than your right eye (which is what creates depth, etc). I'm not at all experienced in VR programming or rendering, but it's hard to see how it would not involve some sort of second full or partial render for the other eye.
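In pseudocode terms, the "two positions" point boils down to something like this. (A toy sketch, not how any real engine structures it: `render_scene` here is a hypothetical stand-in that just records which camera was used, and the IPD value is a typical adult figure.)

```python
# Minimal sketch of why stereo means two passes: each eye gets its own
# camera, offset by half the interpupillary distance (IPD).

IPD = 0.064  # metres; a typical adult interpupillary distance

def render_scene(camera):
    # Hypothetical stand-in for a full renderer: records the camera used.
    return {"camera": camera}

def eye_camera(head_position, eye):
    # eye is -1 for left, +1 for right; offset along the head's x axis.
    x, y, z = head_position
    return (x + eye * IPD / 2, y, z)

def render_frame(head_position):
    # The whole scene is traversed once per eye - this is the doubled cost.
    left = render_scene(eye_camera(head_position, -1))
    right = render_scene(eye_camera(head_position, +1))
    return left, right
```

Everything downstream of the camera transform (culling, rasterisation, shading) has to run per eye, which is exactly the overhead people are arguing about in this thread.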
But Oculus, as the biggest headset right now, is already playable. Besides, why can't I just do an SLI config and let one GPU render one screen and the other GPU render the other?
[QUOTE=Tools;49460060]I don't see how it's more demanding. Oculus rift's total resolution is around 2000x1000 at 90 frames/sec, I'm currently gaming smoothly on an old GTX670 and 2560x1440 at +60 frames/sec So not even close to 7 times as demanding, more like x2 if anything due to the frame rate and sensor tracking.[/QUOTE] How about you leave the math to the people who have all the data to do the math right.
[QUOTE=Sam Za Nemesis;49461222]Crytek actually solved this years ago for having stereoscopy in 7th gen consoles using what they call Screen Space Reprojection just rendering the scene once, for a lot of cases it works very well [IMG]http://www.marries.nl/wordpress/wp-content/uploads/2012/05/reprojection_anaglyph1.png[/IMG] [URL]http://www.gdcvault.com/play/1013803/CryENGINE-3-Real-time-Stereo[/URL][/QUOTE] Yes, you can use the z-depth to generate a 3D picture, but it quickly falls apart on the Rift due to the artifacts it produces around the edges of objects, where it simply lacks the information to render the two perspectives properly. AKA it's OK for 3D glasses, but won't be convincing enough for the Rift.
Guess I'll give up depth perception and render once then
[QUOTE=paul simon;49462193]Yes, you can use the z-depth to generate a 3D picture, but it quickly falls apart on the Rift due to the artifacts it produces around the edges of objects, where it simply lacks the information to render the two perspectives properly. AKA it's OK for 3D glasses, but won't be convincing enough for the Rift.[/QUOTE] There are people who use something along those lines, but only for the background, since the further away objects are, the less occlusion there is to compensate for. Having to run the vertex shader twice isn't exactly the biggest performance dump to begin with, so I'm pretty sure most developers won't consider the hacking worth it, especially since it's not going to work with VR SLI implementations.
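For anyone wondering what those "artifacts around edges" actually are: reprojection shifts each pixel by a disparity derived from its depth, and there's simply no data for the bits of background the second eye can see behind a near object. A toy one-scanline version (the scene, depths and disparity formula are invented purely for illustration):

```python
# Toy 1D depth-based reprojection: shift each pixel horizontally by a
# disparity derived from its depth, with a z-test so near pixels win.
# Holes (None) appear where the second eye sees behind a near object.

def reproject(colors, depths, disparity_scale=4.0):
    width = len(colors)
    out = [None] * width             # None marks a hole / missing data
    zbuf = [float("inf")] * width
    for x in range(width):
        shift = int(disparity_scale / depths[x])  # near pixels shift more
        nx = x + shift
        if 0 <= nx < width and depths[x] < zbuf[nx]:
            out[nx] = colors[x]
            zbuf[nx] = depths[x]
    return out

# A near object ('N', depth 1) in front of a far background ('.', depth 8):
shifted = reproject(list("..NN...."), [8, 8, 1, 1, 8, 8, 8, 8])
# The near object jumps 4 pixels while the background barely moves,
# leaving holes where the renderer has no information to fill in.
print(shifted)  # -> ['.', '.', None, None, '.', '.', 'N', 'N']
```

Real implementations fill those holes by smearing neighbouring pixels, which is exactly the edge fringing being described above.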
[QUOTE=Sam Za Nemesis;49461222]Crytek actually solved this years ago for having stereoscopy in 7th gen consoles using what they call Screen Space Reprojection just rendering the scene once, for a lot of cases it works very well [IMG]http://www.marries.nl/wordpress/wp-content/uploads/2012/05/reprojection_anaglyph1.png[/IMG] [URL]http://www.gdcvault.com/play/1013803/CryENGINE-3-Real-time-Stereo[/URL][/QUOTE] too bad it looks insanely shitty. I've seen red/blue 3D vision in games; the best one that did it with NVIDIA settings was actually Minecraft, surprisingly. It gave a small sense of depth at the cost of looking horribly bad.
[QUOTE=RichyZ;49462753]The example is using that shitty 3d but the idea is that you could do the same w/ a VR setup with that same tech, but it would have basically no depth and kinda beats the point.[/QUOTE] AKA equal to taping a screen you found to your face and calling it a day
[QUOTE=J!NX;49462570][B]too bad it looks insanely shitty[/B] I've seen red/blue 3D vision in games; the best one that did it with NVIDIA settings was actually Minecraft, surprisingly. It gave a small sense of depth at the cost of looking horribly bad.[/QUOTE] Jesus christ... It's a god damn visual example using red/blue. Do you seriously think it's somehow limited to red/blue?
[QUOTE=Sam Za Nemesis;49462804]That's just a reference for stereoscopy, you can obviously use the two buffers it can generate separately, for what I saw in their talk it's extremely clever to avoid tearing of hidden objects[/QUOTE] Thing is, I've tried a similar approach in a game when using active 3D goggles, and while it's mostly convincing, there are artifacts around the edges of objects where it has to reconstruct the missing data using the nearest pixels for reference. It falls apart in several cases, the most obvious being:

- Things that are close to your view
- Transparent textures with lots of detail (e.g. grass)

I can't imagine it being acceptable at all in VR, where these artifacts will be amplified by the superior stereo vision.
[QUOTE=Ylsid;49460222]Could you not just forgo the 3d and use one image for both eyes?[/QUOTE] How can VR be real if our eyes aren't real?
VR basically needs two screens, or a single screen with a split output (which is what the DK2 does). I've tried using my DK2 with one eye closed (thus no depth perception) and it's an extremely jarring and uncomfortable experience. If it were set up so only one image was rendered, it wouldn't feel very "VR-ey", and at that point it would quite literally be a phone screen strapped to your head.

The lenses help provide the sense of being there; if we removed the lenses or went with something monocular, it wouldn't really feel right. First and foremost, you'd basically need a screen that wraps around the majority of your field of view. Current VR prototypes can simulate the same effect at a fraction of the screen size needed, thanks to the nature of the optics and the lenses. Secondly, depth would go out the window, and I think just about anybody who's seriously used a DK2 will tell you that depth is important.

Realistically speaking, if you have a gaming PC that can mostly max your games at 1080p with an average of 40-60 FPS, your computer is ready for VR. You can always turn settings down to improve your framerate too, and 90% of the time the changes you make aren't even noticeable in VR. The only reason NVIDIA are spitting out these absurd figures is that they're "futureproofing". VR is perfectly fine at 60FPS, and when consumer VR hits, we probably won't even have any legitimate content beyond game demos and tech demos optimized well enough for the 90FPS benchmark.

[editline]5th January 2016[/editline]

Also, last I heard, AMD and NVIDIA were working on some kind of SLI/Crossfire thing so you could have two GPUs each render a separate "screen".
[QUOTE=Sand Castle;49462096]But Oculous, as the biggest headset right now, is already playable Besides, why can't I just do an sli config and let one GPU render for one screen and the other for the other screen?[/QUOTE] In a way, 3dfx did that 20 years ago with the Voodoo2, using pre-NVIDIA Scan-Line Interleave. One card rendered the odd scanlines while the other card rendered the even lines; the independently rendered halves were then synced and combined, producing 3D that required twice the hardware but gave each card half the workload of a single accelerator. While it meant paying twice the price, that didn't stop gamers from making it the most popular 3D card of the '90s.

I'd totally have no issues with a VR headset requiring two dedicated cards (either standalone or in separate SLI/Crossfire configurations) per headset. Some monitors like the IBM T221 already require that to produce absurdly high screen resolutions.
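The old Scan-Line Interleave split is simple enough to sketch in a few lines. (A toy model, assuming a hypothetical `render_row` that stands in for one card rasterising a single scanline; real SLI did this in hardware with genlocked video outputs.)

```python
# Sketch of 3dfx-style Scan-Line Interleave: two "cards" each render
# half the scanlines of a frame, and the halves are recombined.

def render_row(y, width):
    # Hypothetical per-scanline renderer: returns a row tagged with its
    # line number so we can check the recombined frame is in order.
    return [y] * width

def sli_frame(width, height):
    card_a = {y: render_row(y, width) for y in range(0, height, 2)}  # even
    card_b = {y: render_row(y, width) for y in range(1, height, 2)}  # odd
    # Recombine: each card only rasterised half the rows, so each did
    # roughly half the fill-rate work of a single card.
    return [card_a[y] if y % 2 == 0 else card_b[y] for y in range(height)]
```

Note this splits fill-rate work per frame, whereas the VR SLI idea discussed in this thread splits per eye, which maps even more naturally onto two GPUs.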
[QUOTE=pentium;49464678] I'd totally have no issues with a VR headset requiring two dedicated cards (either standalone or in separate SLI/crossfire configurations) per headset.[/QUOTE] yeah but you're a huge computer nerd [I]most people[/I] would probably have an issue with needing to dual card to run a vr headset
There's some more interesting stuff going on, though, where this may no longer be the case. NVIDIA is working on a technique called multi-resolution shading. [url]http://www.pcworld.com/article/2926083/nvidias-radical-multi-resolution-shading-tech-could-help-vr-reach-the-masses.html[/url]

The idea is that because the edges are so heavily warped, we can render the center of the screen at high resolution and the edges at a lower resolution. The eye has a natural falloff in acuity towards the edges, so you're going to be focusing towards the center 90% of the time, and the edges are so distorted anyway that the falloff in quality isn't really felt.

Another, very similar concept is foveated rendering. [url]http://research.microsoft.com/pubs/176610/foveated_final15.pdf[/url]

Microsoft is working on a system that tracks your eyes and automatically makes the center of your focal point the highest resolution, fading off into lower resolutions at the edges. When you move your eyes, the new point becomes the high-resolution one. Because of the eye's natural loss of visual quality in the periphery, as mentioned above, this isn't really noticed.

In the future as the tech matures, VR might be a lot easier to render, and if foveated rendering takes off, it might even be cheaper to render for VR than for a traditional monitor, because the eye-tracking tech can be perfected in the VR environment.
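The core of both techniques is just "pick a shading rate per screen region based on distance from the gaze point". A toy version (the ring thresholds and scale factors here are invented for illustration; NVIDIA's multi-res shading actually works on fixed viewport tiles in the GPU pipeline rather than per-pixel distances):

```python
# Sketch of the falloff idea behind multi-resolution / foveated shading:
# choose a shading-resolution scale for a screen tile based on how far
# its center is from the gaze point. Thresholds/scales are invented.

import math

def shading_scale(tile_center, gaze, inner=0.3, outer=0.7):
    # tile_center and gaze are (x, y) in normalised [0, 1] screen coords.
    d = math.dist(tile_center, gaze)
    if d < inner:
        return 1.0   # full resolution where the eye is looking
    if d < outer:
        return 0.5   # half resolution in the mid-periphery
    return 0.25      # quarter resolution at the heavily distorted edges

# With the gaze fixed at screen centre (multi-res shading), the corners
# drop to quarter resolution; with eye tracking (foveated rendering),
# the gaze point would move and the high-res region would follow it.
print(shading_scale((0.5, 0.5), (0.5, 0.5)))  # -> 1.0
print(shading_scale((0.0, 0.0), (0.5, 0.5)))  # -> 0.25
```

Multi-res shading is essentially this with the gaze hard-coded to the lens center; foveated rendering is this plus an eye tracker feeding in the real gaze point every frame.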
[QUOTE=Clavus;49460909]Lets get some numbers in here: 1080p at 60hz = 1920x1080x60 = 124,416,000 pixels per second. Oculus Rift = 2160x1200x90 = 233,280,000 pixels per second. But wait! You have to render at 1.4x the resolution to compensate for the barrel distortion (which in turn compensates for the lenses). So what your GPU is actually doing: 3024x1680x90 = [B]457,228,800 pixels per second[/B]. Now also mind that the scene is rendered twice. And larger FOV means more of the world is visible, meaning less scenery is occluded. There are some optimizations done of course. You can cut away some parts of the screen that get lost in the distortion transform to lower the amount of pixels that need shading. You can do several optimizations to lower the double scene rendering overhead. You can compensate for some dropped frames by warping the last complete frame. You can render the outer edges of your vision at a lower resolution. That last one is pretty important. Imagine a 4K screen for example: 3840x1.4x2160x1.4x90 = a whopping [B]1,463,132,160 pixels per second.[/B][/QUOTE] This math looks hugely flawed and I don't even know if counting pixels like this actually represents it properly.
I have a GTX 1060, an i5, and 8 gigs of RAM, and the SteamVR performance test shows that I'm more than capable of running VR. So I don't think this is entirely accurate.
I thought PCGamesN was back.