[QUOTE=BMCHa;43752294]My main problem with these comparison graphs is that they're using FPS instead of frametime. FPS = 1 / frametime, meaning you can't compare differences at different ranges. For example, the difference between 150 and 200 fps is actually [I]less[/I] than the difference between 30 and 32 fps. (or to be more dramatic, there is a bigger performance increase going from 30->31 fps than there is going from 300->400 fps)
To apply this to the graphs in the OP, the 52->58 improvement at 1440p on Ultra is ~1.99 ms/frame, whereas the impressive-looking 170->200 improvement at 1080p on Medium is only ~0.88 ms/frame, less than half as much. The first graph's small 6 fps difference is, in reality, over twice as much improvement as the seemingly larger 30 frames on the bottom graph.
(Not to say these aren't definite improvements, but the use of FPS makes the increases somewhat misleading)[/QUOTE]
What the hell are you talking about? Comparing frametimes in absolute terms is way more misleading. You're basically saying that a frametime improvement from 60ms to 30ms is the same as a frametime improvement from 31ms to 1ms. The first case is a 2× speedup, while the second one is 31×. In what world is that the same?
Anyone with a bit of common sense will compare framerates in relative terms. In that case 58/52 ≈ 1.115 (an 11.5% improvement) and 200/170 ≈ 1.176 (a 17.6% improvement).
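To put the two readings side by side, here's a quick throwaway sketch (plain Python; the only inputs are the 52→58 and 170→200 numbers from the graphs quoted above) that computes both the absolute frametime saved and the relative speedup:
[code]
def frametime_ms(fps):
    # convert frames per second to milliseconds per frame
    return 1000.0 / fps

# (setting, fps before, fps after) -- numbers from the graphs being discussed
cases = [("1440p Ultra", 52, 58), ("1080p Medium", 170, 200)]

for label, before, after in cases:
    saved_ms = frametime_ms(before) - frametime_ms(after)  # absolute frametime saved
    gain = after / before - 1                              # relative improvement
    print(f"{label}: {saved_ms:.2f} ms/frame saved, {gain:.1%} faster")
[/code]
The first case saves more raw milliseconds per frame, while the second is the bigger relative speedup; which of those you care about is exactly what's being argued here.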
[QUOTE=booster;43749922]That's like buying a Ferrari, with a Honda civic engine.[/QUOTE]
But Vtec bro.
I'd rather see an open api standard that can be adopted by any hardware manufacturer than one that's AMD specific.
Particularly when it comes to using anything other than Windows.
[QUOTE=bord2tears;43760880]I'd rather see an open api standard that can be adopted by any hardware manufacturer than one that's AMD specific.
Particularly when it comes to using anything other than Windows.[/QUOTE]
So, OpenGL? Because the point of Mantle [B]is[/B] that it is AMD specific. You can't make a (very) low level API that works on a variety of different GPUs.
[QUOTE=Simspelaaja;43761171]So, OpenGL? Because the point of Mantle [B]is[/B] that it is AMD specific. You can't make a (very) low level API that works on a variety of different GPUs.[/QUOTE]
It's supposed to be open and not tied to AMD hardware but who knows if they have actually followed through on that or if Nvidia would be even interested in adopting it.
Back in the day, game developers had to write up support for things like sound cards. If the game developer included support for a Creative Labs Soundblaster, and you wanted to hear it, you had to either own or buy one of those cards. For obvious reasons not every different type of card was supported.
Then Microsoft comes up with their DirectX initiative. Now game developers only have to code in DirectX support and their game will work with any type of card that the hardware manufacturer has built to be DX compatible.
The downside is that because the game is running through Windows AND DirectX, there's overhead and latency to deal with. Plus, DX is under MS's control and may not be up to date on what modern cards can do.
So now we have come full circle. It makes sense for AMD to do this Mantle thing now. Game developers still have DX and OpenGL to fall back on if they wanted general, widespread compatibility. But now they have Mantle to squeeze out previously unreachable performance. The key to this whole thing is how easy it is for developers to use Mantle. If you're a game developer and you can add Mantle support with relative ease, and it allows you to show off a game that looks more impressive than your competitors' (such as CoD vs BF), why [i]wouldn't[/i] you use it?
It was different in the Glide days because Glide was 3D cards only. You still needed to have a card doing the 2D stuff. Today's video cards do it all, so getting a Mantle-enabled card isn't as limited an investment as buying a 3dfx card was back then.
[QUOTE=Morgen;43761192]It's supposed to be open and not tied to AMD hardware but who knows if they have actually followed through on that or if Nvidia would be even interested in adopting it.[/QUOTE]
This is a very low-level API, and AMD and Nvidia do not share the same hardware architecture.
It's like trying to cover both ARM and x86 in one API. It's much more likely that Nvidia will create a low-level API of their own.
Nvidia will react to this somehow. They're not slouches, and they surely have some ideas for how to compete.
Mantle gives AMD an edge in my mind for the next few years of hardware. It's software that makes better use of the hardware, improving the value of that hardware and making AMD more worthwhile. I imagine Nvidia knows this and knows they'll have to find some way to do something similar in the future.
I hope so at least because I don't really want to buy an AMD card
[QUOTE=HumanAbyss;43762870]Nvidia will react to this somehow. They're not slouches, and they surely have some ideas for how to compete.
Mantle gives AMD an edge in my mind for the next few years of hardware. It's software that makes better use of the hardware, improving the value of that hardware and making AMD more worthwhile. I imagine Nvidia knows this and knows they'll have to find some way to do something similar in the future.
I hope so at least because I don't really want to buy an AMD card[/QUOTE]
Well, something like G-Sync kinda makes it a non-issue, since generally people care about framerates so they can match their monitor's refresh rate. If you have a G-Sync monitor, all you care about is staying above whatever you consider smooth enough, e.g. 40fps.
I'd rather have ~45fps G-Sync with DirectX than ~60fps V-Sync with Mantle.
[QUOTE=HumanAbyss;43762870]Nvidia will react to this somehow. They're not slouches, and they surely have some ideas for how to compete.
Mantle gives AMD an edge in my mind for the next few years of hardware. It's software that makes better use of the hardware, improving the value of that hardware and making AMD more worthwhile. I imagine Nvidia knows this and knows they'll have to find some way to do something similar in the future.
I hope so at least because I don't really want to buy an AMD card[/QUOTE]
The next Nvidia GPU architecture, which should be out this year, is supposed to have a CPU on the GPU and is rumored to give similar gains to Mantle, but with DX and OpenGL. If the rumor holds true, I doubt Nvidia will adopt Mantle, since having it become widespread would benefit AMD more than it would Nvidia.
[QUOTE=pebkac;43758753]What the hell are you talking about? Comparing frametimes in absolute terms is way more misleading. You're basically saying that a frametime improvement from 60ms to 30ms is the same as a frametime improvement from 31ms to 1ms. The first case is a 2× speedup, while the second one is 31×. In what world is that the same?
Anyone with a bit of common sense will compare framerates in relative terms. In that case 58/52 ≈ 1.115 (an 11.5% improvement) and 200/170 ≈ 1.176 (a 17.6% improvement).[/QUOTE]
You're trying to compare percentages with a nonlinear measurement. While at the extreme low end you would realistically be running into some overhead, in theoretical terms a 60->30ms improvement is indeed the same as a 31->1ms improvement.
To use a concrete example, let's say you're running 60ms frametimes (16 fps) but you have a postprocessing shader that takes 30ms to run no matter what the input looks like. If you remove this shader you save 30ms and drop to a 30ms frametime. Now, let's say you change the scene being rendered so that the whole thing only takes 31ms to render. Taking the postprocessing shader out again saves 30ms, dropping your frametime down to 1ms.
In this example the total frametime = scene rendering time + postprocessing time. In both cases the constant-runtime postprocessing shader was removed to get a speedup; the exact same change was made. This does not mean that the second case was 30x as much of an improvement as the first. If you measure FPS that's what it looks like, but when we measure frametimes we can see the truth of the matter: it was a constant 30ms improvement in both cases.
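If it helps, here's that same toy scenario as straight arithmetic (a quick Python sketch; the 30ms shader and the 60ms/31ms scenes are the hypothetical numbers from above, not real measurements):
[code]
POST_MS = 30.0  # constant cost of the hypothetical postprocessing shader

for total_ms in (60.0, 31.0):            # frametime with the shader enabled
    without_ms = total_ms - POST_MS      # frametime after removing the shader
    fps_before = 1000.0 / total_ms
    fps_after = 1000.0 / without_ms
    print(f"{total_ms:.0f}ms -> {without_ms:.0f}ms "
          f"(saved {POST_MS:.0f}ms; FPS went {fps_before:.1f} -> {fps_after:.1f})")
[/code]
Both lines show the exact same 30ms saved; only the FPS numbers make the second case look 30x more dramatic.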
(Have you read the link in the post above yours?)
[QUOTE=alien_guy;43763082]Well, something like G-Sync kinda makes it a non-issue, since generally people care about framerates so they can match their monitor's refresh rate. If you have a G-Sync monitor, all you care about is staying above whatever you consider smooth enough, e.g. 40fps.
I'd rather have ~45fps G-Sync with DirectX than ~60fps V-Sync with Mantle.[/QUOTE]
Well, that's kind of what I mean, right? They're going to come out with their own edge due to the competition.
G-Sync is a good idea; I just hope it's not expensive and prohibitive because of that.
[QUOTE=cecilbdemodded;43762338]Back in the day, game developers had to write up support for things like sound cards. If the game developer included support for a Creative Labs Soundblaster, and you wanted to hear it, you had to either own or buy one of those cards. For obvious reasons not every different type of card was supported.[/QUOTE]
In the DOS days, while a program had to be aware of a particular sound card to use it, most games in the mid to late DOS era supported at least General MIDI or Adlib. Even rotgut cheap sound cards at the time at least tried to emulate these two standards in order to be competitive in the market. The result would often be distorted and shite, but at least it would work.
There were no sound standards in DOS, but like many other things back then, the industry adopted a loose de facto standard around emulating an MPU-401 and/or an Adlib card (which was basically a Yamaha YM3812 with glue logic). So pretty much any run-of-the-mill sound card would work (except the GUS, which is a completely different story).
[QUOTE=cecilbdemodded;43762338]Then Microsoft comes up with their DirectX initiative. Now game developers only have to code in DirectX support and their game will work with any type of card that the hardware manufacturer has built to be DX compatible.[/QUOTE]
It took a LONG time for games to adopt DirectX. It was introduced in Windows 95, and most games ignored it until 1999/2000 when it started to take off. It was ignored for a good reason: the early versions of DX were buggy, slow and lacked tons of features compared to the far more mature OpenGL and Glide.
[QUOTE=cecilbdemodded;43762338]The downside is that because the game is running through Windows AND DirectX, there's overhead and latency to deal with. Plus, DX is under MS's control and may not be up to date on what modern cards can do.[/QUOTE]
GPU manufacturers design the silicon specifically for running the DirectX and OpenGL APIs, not the other way around. Before DX, most cards adhered to VESA, VGA, CGA and EGA modes, and OpenGL was relegated to workstations with very expensive multi-chip solutions.
DX was Microsoft's strong-arm into the game market, and they've pretty much ruined cross-platform gaming over the last decade and a half. Their GFW/GFWL programs forced DX down the throats of developers, "or else". In the 90s, games supported OGL and DX almost equally; starting in 2000, almost no big games used OGL and they all used DX.
[QUOTE=cecilbdemodded;43762338]So now we have come full circle. It makes sense for AMD to do this Mantle thing now. Game developers still have DX and OpenGL to fall back on if they wanted general, widespread compatibility.[/QUOTE]
But DX isn't widely compatible; it only works on Microsoft OSes. Both OGL and Mantle are superior since they can theoretically work on any platform.
[QUOTE=cecilbdemodded;43762338]It was different in the Glide days because Glide was 3D cards only. You still needed to have a card doing the 2D stuff.[/QUOTE]
The only 3dfx cards that were specifically 3D cards were the Voodoo1 and the Voodoo2. The Voodoo3 and onwards had integrated 2D functionality.
[QUOTE=GiGaBiTe;43763210]DX was Microsoft's strong-arm into the game market, and they've pretty much ruined cross-platform gaming over the last decade and a half. Their GFW/GFWL programs forced DX down the throats of developers, "or else". In the 90s, games supported OGL and DX almost equally; starting in 2000, almost no big games used OGL and they all used DX.
But DX isn't widely compatible; it only works on Microsoft OSes. Both OGL and Mantle are superior since they can theoretically work on any platform.
The only 3dfx cards that were specifically 3D cards were the Voodoo1 and the Voodoo2. The Voodoo3 and onwards had integrated 2D functionality.[/QUOTE]
Gaming wouldn't be where it is now on the PC without DX. Standardizing allowed both gamers and game developers to bypass compatibility worries. The thing is, that's not a concern NOW, in today's market. OGL started it all, with GLQuake marking the beginning of the 3D gaming era. DX had the advantage of being integrated with Windows; that's the main reason it became the standard.
When most people use Windows for PC gaming, 'widespread' can mean Windows only. Linux support isn't even a drop of water in the ocean in the overall scheme of things. Sure, supporting all OSes would be nice, but that's certainly not a justification for abandoning potentially game-altering technology.
3dfx cards were popular when they were 3D-only. By the time they got around to doing a combo 2D/3D card it was too late; that's why the company died. Glide was happening during the 3D-only phase, and by the time 2D/3D cards were gaining market share, Glide was doomed. Mantle does not face this problem. A Mantle card can do everything a non-Mantle card can do, whereas Glide cards were only good for Glide.
14.1 is out for everyone now, but pay attention to this bit at the bottom
[quote]Mantle performance for the AMD Radeon™ HD 7000/HD 8000 Series GPUs and AMD Radeon™ R9 280X and R9 270X GPUs will be optimized for BattleField 4™ in future AMD Catalyst™ releases. These products will see limited gains in BattleField 4™ and AMD is currently investigating optimizations for them.[/quote]
Then what the fuck is the point? Those are the main video cards everyone has that can use Mantle.
[QUOTE=TheTalon;43764119]14.1 is out for everyone now, but pay attention to this bit at the bottom
Then what the fuck is the point? Those are the main video cards everyone has that can use Mantle.[/QUOTE]
Well to be fair it is a beta driver, so you can't really expect it to be perfect. I tried Mantle and while I seem to get a slight FPS boost I'm now getting stuttering, which is definitely worse than FPS dips.
[QUOTE=BMCHa;43763150]You're trying to compare percentages with a nonlinear measurement. While at the extreme low end you would realistically be running into some overhead, in theoretical terms a 60->30ms improvement is indeed the same as a 31->1ms improvement.
To use a concrete example, let's say you're running 60ms frametimes (16 fps) but you have a postprocessing shader that takes 30ms to run no matter what the input looks like. If you remove this shader you save 30ms and drop to a 30ms frametime. Now, let's say you change the scene being rendered so that the whole thing only takes 31ms to render. Taking the postprocessing shader out again saves 30ms, dropping your frametime down to 1ms.
In this example the total frametime = scene rendering time + postprocessing time. In both cases the constant-runtime postprocessing shader was removed to get a speedup; the exact same change was made. This does not mean that the second case was 30x as much of an improvement as the first. If you measure FPS that's what it looks like, but when we measure frametimes we can see the truth of the matter: it was a constant 30ms improvement in both cases.
(Have you read the link in the post above yours?)[/QUOTE]
Yes, I have read that link and I understand its argument, but you've managed to take it out of context and completely miss the point. It makes perfect sense for developers to measure their frametimes in milliseconds and break them down by each stage of rendering, in order to get a better overview of exactly how much each effect they add costs them.
But when you're looking at a finished product, it makes exactly zero difference whether you measure in FPS or ms/frame, because they tell you essentially the same thing. What you should be looking at when comparing different hardware configurations is the relative ratio of how much faster one is than the other.
To expand on my previous example: let's say you have a GPU called A and benchmark it in scene 1 of whatever game you happen to be using, and it renders the game at 60ms/frame. Now you take GPU B and benchmark it in the same scene, and it's much faster at 30ms/frame. Obviously GPU B is twice as fast as GPU A, correct? Now you benchmark GPU A in scene 2, where it performs at 31ms/frame. Then you take GPU C and benchmark it in scene 2 as well, where it performs at 1ms/frame. That means C must be 31 times faster than A, but B is twice as fast as A, and by your argument, B and C should have pretty much the same performance. See any flaws in your logic?
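For what it's worth, here's that example as a throwaway sketch (plain Python; GPUs A/B/C and the scene times are the made-up numbers from above, not real benchmarks):
[code]
# ms/frame for each (hypothetical) GPU in each (hypothetical) scene
results_ms = {
    "scene 1": {"A": 60.0, "B": 30.0},
    "scene 2": {"A": 31.0, "C": 1.0},
}

# compare each GPU against A within the same scene
for scene, times in results_ms.items():
    baseline = times["A"]
    for gpu, ms in times.items():
        if gpu == "A":
            continue
        print(f"{scene}: GPU {gpu} saves {baseline - ms:.0f}ms vs A, "
              f"i.e. it is {baseline / ms:.0f}x faster")
[/code]
Both save the same 30ms per frame against A in their respective scenes, yet one is 2x faster and the other 31x faster, which is exactly why the "constant milliseconds saved" view falls apart when you use it to compare hardware.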
[QUOTE=cecilbdemodded;43763634]3dfx cards were popular when they were 3D-only. By the time they got around to doing a combo 2D/3D card it was too late; that's why the company died. Glide was happening during the 3D-only phase, and by the time 2D/3D cards were gaining market share, Glide was doomed. Mantle does not face this problem. A Mantle card can do everything a non-Mantle card can do, whereas Glide cards were only good for Glide.[/QUOTE]
Having a 3D-only graphics accelerator wasn't what killed 3dfx; it was gross mismanagement.
They were mismanaged from the beginning, but the tipping point was around the time of the Voodoo3, when 3dfx bought STB Systems in an attempt to get into the lucrative OEM market. Before that, they were basically what Nvidia and AMD are now: they just supplied chips and reference designs.
Once they bought STB Systems, they basically ceased supplying chips to third-party card makers and started manufacturing their own cards in-house. This alienated all of their OEMs, because they no longer had multiple sources to get 3dfx cards from and were forced to buy from 3dfx directly. That meant less price control and more worry about whether 3dfx could supply the number of cards they needed. It also meant that their former partners were now their competitors.
The second and ultimately deciding factor was that they had long product development cycles in a rapidly moving industry. Their first iteration of the Voodoo4/5 was a train wreck compared to the then newly unveiled GeForce 256, which they had to fix at the expense of much-needed resources for their Rampage product. They probably could have staved off the inevitable for a bit longer if they had managed to get their Spectre line out and abandoned the awfulness that was the Voodoo5 6000.
[QUOTE=Scot;43750067]how is this getting so many agrees? are people here that tech illiterate?[/QUOTE]
It's not tech illiteracy; it's a lot of people being pedantic about graphs. It was a daily argument when I frequented overclocking websites. Every time a news article was posted there, half the thread would be full of people bitching about nothing but the graphs. Horizontal axis this, cropped superfluous data that; they just couldn't get it through their heads that those things aren't necessary.
[QUOTE=booster;43749922]That's like buying a Ferrari, with a Honda civic engine.[/QUOTE]
'Tis why, if I could only afford a $500 GPU and a crappy CPU, I'd downgrade the GPU and upgrade the CPU. Keep the system balanced. I would rather have the PC equivalent of a Mustang GT than a Fezza with a Civic motor.