• Welcome to the VRAM apocalypse
    10 replies, posted
[url]http://www.pcgamesn.com/welcome-to-the-vram-apocalypse[/url]
We worked around these problems for years, but now that the consoles don't have them, developers don't seem interested in finding solutions for the PC. Consoles don't display 3GB of VRAM data on screen at once; there is no requirement to have all that data accessible from the GPU at all times. They just refuse to put in systems to manage VRAM exclusively for the PC. -- It's like they all forgot what sparse textures are after the PS4 was released.
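To make the sparse-textures point concrete, here's a toy sketch of the idea in Python: a huge logical texture whose tiles are only committed to VRAM when actually sampled, with least-recently-used tiles evicted to stay inside a budget. This is an illustration of the concept only; the class, tile size, and eviction policy are made up and don't correspond to any real graphics API.

```python
from collections import OrderedDict

PAGE_SIZE = 128 * 128 * 4  # one 128x128 RGBA8 tile, in bytes (assumed tile size)

class SparseTexture:
    """Toy model of a sparse (virtual) texture: a huge logical texture
    whose tiles are only committed to VRAM when sampled."""
    def __init__(self, vram_budget_bytes):
        self.budget = vram_budget_bytes
        self.resident = OrderedDict()  # tile coord -> True, in LRU order

    def sample(self, tile):
        if tile in self.resident:          # tile already in VRAM: cheap
            self.resident.move_to_end(tile)
            return "hit"
        # evict least-recently-used tiles until the new one fits the budget
        while (len(self.resident) + 1) * PAGE_SIZE > self.budget:
            self.resident.popitem(last=False)
        self.resident[tile] = True         # commit the tile (real code would upload here)
        return "miss"

tex = SparseTexture(vram_budget_bytes=4 * PAGE_SIZE)  # room for only 4 tiles
print([tex.sample(t) for t in [(0,0), (0,1), (0,0), (0,2), (0,3), (1,0), (0,1)]])
```

The point is that only the tiles a frame actually touches need to live in VRAM, so the logical texture can be far larger than physical memory. The real mechanism (e.g. OpenGL's ARB_sparse_texture) works at the driver/page-table level rather than in application code.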
Don't PCs take from the RAM if there isn't sufficient VRAM, though? Not to the extent that consoles do it, but I'm fairly sure GPUs can borrow a portion of your RAM. For example, my card has 3GB of VRAM but shows 7GB usable [IMG]http://i.cubeupload.com/2sjjip.png[/IMG] I'm not 100% sure on this, but I have a strong feeling I read it somewhere
[QUOTE=PredGD;46081899]Don't PCs take from the RAM if there isn't sufficient VRAM, though? Not to the extent that consoles do it, but I'm fairly sure GPUs can borrow a portion of your RAM. For example, my card has 3GB of VRAM but shows 7GB usable [IMG]http://i.cubeupload.com/2sjjip.png[/IMG] I'm not 100% sure on this, but I have a strong feeling I read it somewhere[/QUOTE] It will start swapping buffers between the CPU and the GPU, which can be problematic. If you render a frame and the buffer still resides in the motherboard's RAM, it will have to be transferred first, and transferring a big texture takes long enough to drop a frame or two.
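The frame-drop claim checks out with back-of-envelope numbers. The figures below are illustrative assumptions (one uncompressed 4K RGBA8 texture, ~8 GB/s effective PCIe bandwidth), not measurements:

```python
# How long does it take to pull one big texture across PCIe?
TEX_BYTES = 4096 * 4096 * 4   # uncompressed 4K RGBA8 texture: 64 MiB (assumed)
PCIE_BW = 8e9                 # ~8 GB/s effective link bandwidth (assumed)

transfer_ms = TEX_BYTES / PCIE_BW * 1000
frame_budget_ms = 1000 / 60   # ~16.7 ms per frame at 60 fps

print(f"transfer: {transfer_ms:.1f} ms, frame budget: {frame_budget_ms:.1f} ms")
```

A single transfer eats roughly half a 60 fps frame budget, so pulling in even two or three evicted textures in the same frame is enough to miss vsync and drop frames.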
Memory and VRAM should be pooled together. Don't know the specifics but if a PS4/Xbone can do it, why not a Windows PC?
[QUOTE=Korova;46082157]Memory and VRAM should be pooled together. Don't know the specifics but if a PS4/Xbone can do it, why not a Windows PC?[/QUOTE] Because in a PS4/Xbone the GPU is on the same chip as the CPU, while in a PC the GPU sits in a PCI-E slot, which makes the latency too high and the bandwidth too low to effectively share RAM.
[QUOTE=Korova;46082157]Memory and VRAM should be pooled together. Don't know the specifics but if a PS4/Xbone can do it, why not a Windows PC?[/QUOTE] Because you can't really pool together two different types, quantities, and locations of RAM across the motherboard without incurring a serious performance penalty.
I see a lot of people saying the same thing regarding PC memory access. Let me shine a more detailed light on the subject:

The way a modern PC works, you have the CPU, connected to its own memory over DDR3, and connected to a GPU over PCIe, which itself is connected to its own memory over GDDR5. Both of them can access the other's memory - the GPU can even access stuff that's in the CPU's memory space but isn't actually memory, like a memory-mapped file, which triggers the CPU to load data from disk or network or whatever.

GPU and CPU memory are actually different, because they're designed for different tasks. The CPU is built for insane single-threaded speed, so it has a very deep cache, lots of prefetching, and relatively low-latency memory. A CPU fetch to RAM takes about 50 nanoseconds to complete, and it can copy about 40GB per second. GPUs are designed for maximum throughput, so they have a wider, shallower cache and a very high-bandwidth, high-latency interconnect. A top-end card can push over 300GB per second... but any particular request will take around 500ns. They're designed to tolerate that, but an increase in latency will still really hurt them.

The link between them is usually pretty high-bandwidth (4-15GB/s), but also pretty high-latency (upwards of 700ns, higher if you have a PCIe switch like the ones PLX makes in the way). And if it's a memory read, it has to go through the latency on the other side as well, so a CPU read from GPU memory can literally be measured in microseconds. Which is a lot, by the standards of processing time.

The new consoles don't work quite this way. The CPU and GPU are on the same die, and crucially, share the same memory controller. On a PC CPU, there's one memory controller shared by all the cores on the chip - two or four or six or fourteen or whatever. The GPU has a separate memory controller, hooked up to all the modules or compute clusters or whatever the company has decided to call their "cores".
On these consoles (PS4, Xb1 and actually even the Xb360), both the CPU and GPU are tied to the same memory controller, and they even share a cache. This isn't uncommon on a PC - any recent integrated graphics uses the CPU's memory controller - but those integrated GPUs are very, very weak. They can't really be used for gaming.

The upshot of this is that the consoles are more flexible with their memory. If you make a game that's extremely light on CPU memory, like a deathmatch shooter, you can throw a ton of memory at textures and models. Or if you make a game that's extremely light on graphics assets, you can use that memory to cache a ton of stuff from disk, for lower loading times. That won't exactly work on the PC, because the penalty right now for going outside the GPU's dedicated memory is too harsh. It doesn't matter how much more processing power the GPU has; the problem is that it will be sitting idle waiting for the data it needs to finish its epic quest through the entire system.

That said, memory is getting cheaper all the time. 512MB was a hefty amount in 2006, but within a few years of that, it was common, and then 1GB was almost a minimum. Now any gaming card will have 2GB, and more is becoming common (both of Nvidia's current 900-series GPUs stock 4GB). We'll shortly be able to simply crush the problem with brute force - have 8GB for the CPU, 8GB for the GPU, and no matter how heavily tilted a console game is towards one or the other, it'll run just fine.

Three additional points:

1) The unified-memory system of the consoles does have some other advantages - it makes it easier to juggle tasks between the CPU and GPU. It's plausible that parts of traditionally pure-CPU tasks, like AI processing, could get offloaded to the GPU. Or complex shading could be done on the CPU, writing to a render target the GPU will use later. That's not stuff that's really happening yet, but it would be much harder to do on the PC if it ever does happen.
You couldn't just use the 8+8GB overkill solution. Things like this would need either a change to how PCs work, or some hefty reprogramming during the port.

2) AMD does sell APUs that are similar to what the consoles use, but they don't sell any with nearly as much power. They max out at four CPU cores instead of eight, and 2-8 GPU "compute units" instead of 12-18. But they do have the full linked memory system and everything. If they released one that had more GPU power, we would actually be able to match the consoles' unified memory system. Intel's integrated GPUs are even weaker, but they also have a unified memory system.

3) The PS3 was sort of halfway between these two designs. Instead of just a CPU and GPU, it also had a third processor set, which they called "SPUs" (that's where their "8-core" claim came from - the CPU itself was only single-core, dual-thread). The GPU had its own 256MB of memory, but the CPU and SPUs shared another 256MB of memory. They did this mainly because their GPU had very limited shader capabilities (the Xb360 was actually a generation ahead of the PC in that regard - until the 8800 launched, it was the most programmable GPU).

So yeah, that's how it worked, if you were wondering.
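One way to see why a fixed split isn't quite equivalent to a unified pool of the same total size: a unified pool can tilt all the way toward one workload, while a split pool caps each side separately. The sizes and the workload below are hypothetical, picked only to illustrate the difference:

```python
# Unified pool vs fixed split, same 8 GB total (hypothetical sizes).
UNIFIED = 8                     # GB shared by CPU and GPU (console-style)
SPLIT_CPU, SPLIT_GPU = 4, 4     # GB each, fixed (smaller PC-style split)

# A texture-heavy game that barely touches CPU memory (hypothetical workload):
needs = {"cpu": 1, "gpu": 6}    # GB

fits_unified = sum(needs.values()) <= UNIFIED
fits_split = needs["cpu"] <= SPLIT_CPU and needs["gpu"] <= SPLIT_GPU

print(f"unified: {fits_unified}, split: {fits_split}")
```

The same 7 GB of demand fits the unified pool but overflows the 4 GB GPU half of the split - which is why the brute-force fix has to oversize both sides (8+8) rather than match the console's total.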
PCIe 4 should have support for shared memory buses. It'd be Fucking Incredible, provided OSes support it
[thumb]http://oyster.ignimgs.com/wordpress/stg.ign.com/2014/09/ShadowOfMordor-2014-09-25-20-55-27-91.png[/thumb] I have a GTX 670, but with only 2GB of VRAM. I'm gonna wait for actual performance numbers, but I'll probably pick this up on PS4.