• AMD Releases Mantle Drive to Press
    79 replies, posted
[QUOTE=avincent;43750283]lol amd shitrace trying to brag about being able to play ONE game. Can't wait till nvidia shits drivers all over they asses[/QUOTE] yeah fuck competition right? the higher prices are worth it for fanboys. I have a 6950 that I never had one issue with. well worth the price I paid
I'm fine with 10-20 more frames when it comes to running Eyefinity on a single 680. Just barely scrapes by with BF4. AMD did say mantle was going to be for everything, right?
[QUOTE=avincent;43750283]lol amd shitrace trying to brag about being able to play ONE game. Can't wait till nvidia shits drivers all over they asses[/QUOTE] you clearly have no idea what an API is
[QUOTE=avincent;43750283]lol amd shitrace trying to brag about being able to play ONE game. Can't wait till nvidia shits drivers all over they asses[/QUOTE] Can't tell if joking or legitimately brain-dead
[QUOTE=GameDev;43750503]I'm fine with 10-20 more frames when it comes to running Eyefinity on a single 680. Just barely scrapes by with BF4. AMD did say mantle was going to be for everything, right?[/QUOTE] It [I]can[/I] be, but that would require Nvidia to write drivers to support it. From what I've heard, Mantle works best with AMD's architecture. Unless Nvidia starts making their GPUs similar to AMD, there won't be much in the way of gains. On the main topic, it's looking good. Mantle was pretty over-sold, as is usual for this kind of thing, but its gains are very real. It's designed to remove the CPU to GPU bottleneck that occurs. You'll see the greatest gains when you're using either a weak CPU, or a game that is heavily CPU bottlenecked. Even at the top end of the spectrum, you can't really complain about gaining a "free" 10-15 FPS.
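The "greatest gains when CPU-bottlenecked" point above can be sketched with a toy model: frame time is bounded by whichever side finishes last, so cutting per-draw-call submission cost (what a thin API like Mantle is meant to do) only moves the needle when the CPU side is the limit. All numbers here are made up for illustration; `overhead_us` is a hypothetical per-draw-call driver/API cost.

```python
def frame_time_ms(draw_calls: int, overhead_us: float, gpu_ms: float) -> float:
    """Frame time is bounded by whichever of CPU submission or GPU rendering finishes last."""
    cpu_ms = draw_calls * overhead_us / 1000.0
    return max(cpu_ms, gpu_ms)

# CPU-bound scene: 10,000 draw calls, GPU only needs 8 ms.
before = frame_time_ms(10_000, overhead_us=2.0, gpu_ms=8.0)   # 20 ms  (~50 fps)
after  = frame_time_ms(10_000, overhead_us=0.5, gpu_ms=8.0)   # 8 ms   (~125 fps)

# GPU-bound scene: same draw calls, but the GPU needs 25 ms regardless.
before_gpu = frame_time_ms(10_000, overhead_us=2.0, gpu_ms=25.0)  # 25 ms
after_gpu  = frame_time_ms(10_000, overhead_us=0.5, gpu_ms=25.0)  # still 25 ms
```

In the GPU-bound case the cheaper submission changes nothing, which matches the post: a weak CPU or a draw-call-heavy game sees the big wins, a GPU-limited one sees little.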
[QUOTE=Morgen;43749088]The Phenom comment was referencing how AMD's best performing CPU when it actually came to raw CPU performance was a Phenom released in 2011 (I think this might have changed with the latest line of AMD's APUs though) so I was just pointing out again a weaker CPU being able to handle it better.[/QUOTE] The latest Kaveri APUs still use the Bulldozer architecture and still have the same problems (not really quad cores, terrible single-threaded performance, and not enough cache). Most of the performance increases are coming from the increasingly powerful IGPs they keep stuffing in the APU. The latest high-end APU has an R7 something with 512 shaders at 720 MHz, compared to 256 or 384 on the past releases. One nice thing about Kaveri is they finally got the TDP down to something reasonable (65W for all except the highest unlocked part, compared to 100W before).
I just want to point out that Mantle is a significant improvement at what it's trying to do, and they aren't trying to skew the results here. It reduces the time the CPU blocks which results in significant performance improvements in that very specific area. That is a very good thing and has been a big problem with GPU and CPU design for a long time and anyone who disagrees is an idiot or bandwagoning. The ONLY problem with this for the consumer is that it might stay proprietary.
[QUOTE=NinjaTomate;43748391][QUOTE=pebkac;43748230]Keep in mind that's with pretty much the best CPU money can buy. The main point of Mantle is reducing the amount of CPU time needed to do draw calls in order to remove unnecessary CPU bottlenecks. The situation looks way better with a relatively weak CPU: [IMG]http://www.guru3d.com/index.php?ct=articles&action=file&id=9016[/IMG][/QUOTE] Isn't that kinda useless? I mean, why would you be running a $500 GPU with a cheap-ish CPU?[/QUOTE] Because AMD hasn't made a fast CPU yet. [sp]disclaimer: Since I know some of you are stupid enough to take this seriously, this is a joke. [/sp]
[QUOTE=Zephyrs;43750713]Because AMD hasn't made a fast CPU yet. [sp]disclaimer: Since I know some of you are stupid enough to take this seriously, this is a joke. [/sp][/QUOTE] The last and only time they had a clear significant lead in performance on Intel was with the socket 939 Athlon 64 from 2003-2005. Once the Core 2 came out, it was a slow downhill spiral from there. One of the biggest reasons they slipped is they got drunk on the success of finally beating Intel at their own game and slowed way down in their R&D for future processors.
I'd think it has more to do with the fact that they fired a good part of the team that created those.
If that's true, that'd be ironic. Intel was saved from the brink of oblivion caused by the awful Netburst architecture by a tiny Israeli design team and AMD blew off their best engineers while they were at the top.
They can still suck my dick until they get it running on Linux. And they can keep sucking it until they kill it and tell everyone to just use OpenGL.
I'm a bit behind on the Mantle info, does it only support the newest AMD cards or does it have support for older cards like the Radeon HD 6770?
HD 7000 and up.
[QUOTE=Toyokunari;43751216]HD 7000 and up.[/QUOTE] Radeon HD 7700 and up. Nothing below that.
[QUOTE=nikomo;43750986]They can still suck my dick until they get it running on Linux. And they can keep sucking it until they kill it and tell everyone to just use OpenGL.[/QUOTE] Mantle would probably work pretty well on linux with how much dick DRI sucks
Hopefully console games utilize mantle. It seems like it would be great for that.
My main problem with these comparison graphs is that they're using FPS instead of frametime. FPS = 1 / frametime, meaning you can't compare differences at different ranges. For example, the difference between 150 and 200 fps is actually [I]less[/I] than the difference between 30 and 32 fps. (or to be more dramatic, there is a bigger performance increase going from 30->31 fps than there is going from 300->400 fps) To apply this to the graphs in the OP, the 52->58 improvement at 1440p on Ultra is ~1.99 ms/frame, whereas the impressive-looking 170->200 improvement at 1080p on Medium is only ~0.88 ms/frame, less than half as much. The first graph's small 6 fps difference is, in reality, over twice as much improvement as the seemingly larger 30 frames on the bottom graph. (Not to say these aren't definite improvements, but the use of FPS makes the increases somewhat misleading)
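The frametime arithmetic in the post above can be checked directly: convert each FPS pair to milliseconds per frame and compare the deltas, which is the measure that actually tracks how much per-frame work was saved.

```python
def ms_saved(fps_before: float, fps_after: float) -> float:
    """Milliseconds of per-frame work saved by an FPS improvement (frametime = 1000 / fps)."""
    return 1000.0 / fps_before - 1000.0 / fps_after

print(round(ms_saved(52, 58), 2))    # 1.99 ms  (1440p Ultra)
print(round(ms_saved(170, 200), 2))  # 0.88 ms  (1080p Medium)
print(round(ms_saved(30, 31), 2))    # 1.08 ms
print(round(ms_saved(300, 400), 2))  # 0.83 ms
```

This reproduces the post's numbers: the modest-looking 52→58 gain saves over twice as much frame time as the flashy 170→200 one, and even 30→31 beats 300→400.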
[QUOTE=nomad1;43752221]Hopefully console games utilize mantle. It seems like it would be great for that.[/QUOTE] I'm not sure what you mean by that. The consoles already have their own totally separate and unrelated low level graphics libraries.
[QUOTE=SGTNAPALM;43748567] Maybe the point is that with Mantel, users can afford to cheap out on the CPU a bit making PC gaming a bit more accessible.[/QUOTE] Too bad this would only apply to Mantle games. Other normal DX11 games would be just as slow.
It's too early to tell whether Mantle will receive wide adoption beyond the intended titles. I say we give it a year.
[QUOTE=nomad1;43752221]Hopefully console games utilize mantle. It seems like it would be great for that.[/QUOTE] Console games don't need Mantle as they generally don't pass through a similar layer like OpenGL or DirectX. In a sense this is bringing console functionality to the PC.
[QUOTE=wraithcat;43753596]Console games don't need Mantle as they generally don't pass through a similar layer like OpenGL or DirectX. In a sense this is bringing console functionality to the PC.[/QUOTE] PS3 uses an OpenGL ES 2.0-like API (1.0 w/ extensions like cg shaders), and Xbox360 used an upgraded DX9. Some games might've used low-level access tho, but they did have said APIs.
[QUOTE=JohnnyOnFlame;43754429]PS3 uses an OpenGL ES 2.0-like API (1.0 w/ extensions like cg shaders), and Xbox360 used an upgraded DX9. Some games might've used low-level access tho, but they did have said APIs.[/QUOTE] most late games used low level access for performance gains
[QUOTE=NinjaTomate;43748391]Isn't that kinda useless? I mean, why would you be running a $500 GPU with a cheap-ish CPU?[/QUOTE] Remember what website you're posting on.
[QUOTE=JohnnyOnFlame;43754429]PS3 uses an OpenGL ES 2.0-like API (1.0 w/ extensions like cg shaders), and Xbox360 used an upgraded DX9. Some games might've used low-level access tho, but they did have said APIs.[/QUOTE] Both are based on those but of course support low level functions. Consoles would have a hard time lasting as long as they do without low level access. For comparison, devs have confirmed the PS4 uses a completely original graphics library, with two APIs newly made. One high level, the other low level.
Mantle is released to the public. [url]http://support.amd.com/en-us/download/desktop?os=Windows%207[/url]
[QUOTE=wraithcat;43753596]Console games don't need Mantle as they generally don't pass through a similar layer like OpenGL or DirectX. In a sense this is bringing console functionality to the PC.[/QUOTE] I remember John Carmack, when he talked about Rage PC performance issues, mentioned something about this. He said that the consoles did allow more direct access, whereas PC had a middle layer, and that middle layer is why Rage didn't perform as well on PC. So while consoles may use something like DirectX or OpenGL in a technical sense, they don't use it the way PC games do (going through an operating system like Windows). The idea of Mantle, as I understand it, is to provide that kind of direct access to PC games, if developers use it and gamers have a video card compatible with it.
[QUOTE=cecilbdemodded;43756137]I remember John Carmack, when he talked about Rage PC performance issues, mentioned something about this. He said that the consoles did allow a more direct access, whereas PC had a middle layer and that middle layer is why Rage didn't perform as well on PC. So while consoles may use something like Directx or OpenGl in a technical sense, they don't use it the way PC games do(going through the operating system like Windows). The idea of Mantle, as I understand it, is to provide that kind of direct access to PC games, if developers use it and gamers have a videocard compatible with it.[/QUOTE] The idea of Mantle is simple: low-level access to the hardware for much more efficient use. This is not a new concept; see the Glide API. The problem is that low-level access isn't normally used on PC for exactly the reason Mantle will probably never catch on: it only supports very specific hardware. Higher-level access is used on PC most of the time because, even if it is less efficient, it allows any hardware combination to be used, which is much healthier for PC gaming overall. It means you can use any computer, as opposed to only a computer with an AMD card. And yes, Nvidia can apparently, in theory, write their own drivers for Mantle, but do you see that really happening? Plus, not all hardware is supported. If any game was Mantle-only, not only does this lower the potential audience, but it also kills backwards compatibility unless we manage to get a Mantle wrapper the same way we do for Glide, which was also a low-level API used during the 90s.
[QUOTE=CakeMaster7;43756276]The idea of Mantle is simple, low level access to the hardware for much more efficient use. This is not a new concept, see the Glide API. The only issue is, low level access isn't used for exactly the reason why Mantle will probably never catch on: it only supports very specific hardware. Higher level access is used on PC most of the time because even if it is less efficient it allows any hardware combination to be used, which may be less efficient, but is much healthier for PC gaming overall. It means you can use any computer, as opposed to only a computer with an AMD card. And yes, Nvidia can apparently theoretically do their own drivers for Mantle, but do you see that really happening? Plus not all hardware is supported. [B]If any game was Mantle only[/B], not only does this lower the potential audience, but it also kills backwards compatibility unless we manage to get a Mantle wrapper the same way we do for Glide, which was also a low level API used during the 90s.[/QUOTE] Except there is never going to be a "Mantle-only" game; having engines support more than one GPU API is not a huge deal. Unreal Engine supports ~8, Frostbite supports 5, and Source supports 4. All I hope comes from this is that developers like the low-level GPU access enough for Nvidia to provide their own similar API.
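The multi-API engine point above is usually handled with a thin rendering abstraction: the engine codes against one interface and selects a backend at startup. This is a minimal sketch of that layering; the class and function names here are hypothetical, not taken from any real engine.

```python
from abc import ABC, abstractmethod

class RenderBackend(ABC):
    """Interface the engine codes against; one subclass per GPU API."""
    @abstractmethod
    def draw(self, mesh: str) -> str: ...

class D3D11Backend(RenderBackend):
    def draw(self, mesh: str) -> str:
        return f"D3D11 draw: {mesh}"

class MantleBackend(RenderBackend):
    def draw(self, mesh: str) -> str:
        return f"Mantle draw: {mesh}"

def pick_backend(vendor: str) -> RenderBackend:
    # Use the low-level path where available, fall back to the portable API.
    return MantleBackend() if vendor == "AMD" else D3D11Backend()

renderer = pick_backend("AMD")
print(renderer.draw("tank"))  # Mantle draw: tank
```

Because the rest of the engine only ever sees `RenderBackend`, adding a Mantle path (or a hypothetical Nvidia equivalent) doesn't force a "Mantle-only" game; it's just one more backend behind the same interface.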