• Reports: AMD Polaris 30 Launching Mid-October, RX 570/580 Successors Coming
https://wccftech.com/amd-radeon-rx-polaris-30-gpu-october-launch/
Supposedly this is to keep people happy until Navi is ready. This could be good if the GTX 2050/2060 and RTX 2070 aren't released for a while (which is how it looks right now). Watch the price points; these could beat Turing in performance/$.
Doesn't look that great, to be honest. It will finally put the new X80 above the GTX 1060 in all benchmarks, but it's another release without any big flagship. I also feel that the lower bracket of the 20 series will not be good news for this refresh once it's out. The article mentions that there is of course no GTX/RTX 2060 yet, but Nvidia's release is also in a weird spot where the 10 series is not even close to being dead or outdated. The GTX 1070, while more expensive than the current RX 580, will by the looks of it still beat the new refresh.
I think Nvidia is abandoning the low/mid-end GPU market. They bet pretty big on RT/NN, and that doesn't come cheap; on the 2080 it basically doubled their transistor count. I'm expecting the 2050 to come in at $250-300 and the 2060 around $350-400. The 2070 is already known to be $499. That leaves AMD able to seize the mid-end market, plus offer cheaper high-end cards without Nvidia's add-ons. You don't need raytracing when you're struggling to run at 1080p60 with just raster graphics. The margins are lower, and you'll still be fighting with used last-gen cards, but it's otherwise looking to be an open market.

That will backfire for one of them; not sure who yet. If nobody adopts raytracing or Nvidia's proprietary NPU offload, AMD wins big, because in practical workloads they'll be matching Nvidia cards twice their price. If raytracing or neural networks turn out to be a killer feature every next-gen game uses, AMD is hosed and scrapes by on low-end cards and CPUs for a few years while they design that stuff into their cards.

I personally suspect RT is the future, but not the present. Nvidia's jumped the gun; their hardware still isn't good enough to pull it off. Another few years and we might be able to do it cost-effectively, but by then they won't be the only ones featuring it. Neural network acceleration will probably not be used much in games, but could be revolutionary elsewhere - and that pushes towards a dedicated NPU card for whatever users need it, not packing it onto a GPU for gamers.
Low-end? Definitely. Mid-end? Not in a million years. The x50 and x60 cards earn them WAY more than the high-end cards - and both are just drops in the bucket compared to their deep-learning and industrial business. AMD already owns the low-end with their laptop APUs, even having Intel replace their own shit with AMD graphics. But they'll have to put up a fight (in the form of improved wattages!) for the mid-end, and if they want the high-end, they need to sacrifice some of that CPU profit on their GPU R&D. It's not even about whether they can at this point, as much as whether the investors just want to settle for CPU and APU domination.

But no. Nvidia already boofed in abandoning consoles, and while it's a sacrifice they can bear, it also means they can't afford to disregard the mid-end market, as that's where almost 90% of the mainstream market sits. Just go back and look at Steam surveys on GPU adoption: you'll see the 280, 470, 560, 660 Ti, 760, 970 and 1060 dominate each of their years.
They'll still sell cards they consider "mid-end". We'll still get a 2050 and 2060; they'll just cost about as much as a 1070 or 1080. The crypto boom pushed prices up and I don't think they'll fall that quickly, if ever.

Sure. But the hardware we have now still isn't really ready for it - they just can't get enough bounces per pixel. They've hidden the artifacts with a fancy AI denoiser, but that won't work in places where it matters: you won't be able to spot an enemy by their reflection or their shadow, not when a neural net is filling in all the holes and it has no knowledge of the environment. I don't know how long it'll be before raytracing is actually fit for purpose, and it's complicated by the slowdown in process node improvements. I think we need at least another order of magnitude performance increase, maybe more. So it definitely isn't this card gen, and probably isn't next card gen. Maybe the RTX 40xx will be the ones where it's finally worth it.
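To put rough numbers on that bounce budget - taking Nvidia's quoted ~10 gigarays/s figure for the top Turing card at face value, which is a marketing number and an assumption here, not a measurement:

```cpp
// Back-of-envelope ray budget. Assumes Nvidia's quoted ~10 gigarays/s marketing
// figure for the top Turing card -- an assumption, not a measured number.
#include <cstdio>

int main() {
    const double rays_per_second = 10e9;       // assumed peak ray throughput
    const double pixels = 1920.0 * 1080.0;     // 1080p
    const double frames_per_second = 60.0;

    const double rays_per_pixel_per_frame =
        rays_per_second / (pixels * frames_per_second);

    // Prints roughly 80. A few samples per pixel with 3-4 bounces each for
    // soft shadows and GI already wants hundreds, which is why current cards
    // trace a handful of rays and lean on a denoiser to hide the noise.
    std::printf("rays per pixel per frame: %.1f\n", rays_per_pixel_per_frame);
    return 0;
}
```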
They will. Adoption will be excruciatingly slow, AMD brings in their "lame" Polaris+ generation at significantly more reasonable prices, and boom - Nvidia has just pressured itself by its own hand. I believe that if AMD gets their GPU division going even at the level this rumored refresh indicates, Nvidia can't keep just slotting refresh prices on top as extensions of previous gens. People will just learn to be content with what they have. A PS5 will easily be matched by a 1080 or 780 Ti in SLI. Sure, they can try gimping drivers then, but if they get busted, that will just drive people to modify their drivers or adopt competing hardware from Intel or AMD. I don't believe this trend of ever-increasing prices that Nvidia would LOVE will last past this gen.

No. But the first DX11 cards weren't remotely ready for DX11 acceleration either. PhysX is still cancer, meanwhile VR is turning into nearly usable technology.

We won't. They'll get it going in concert with DLSS, and I'm going to bet you 500 coins that they'd rather refine DLSS scaling so they can natively render 1080p60 RTX than spend R&D on getting RTX ready at 1440p or 4K before pushing for wide adoption. Lots of PCMasterRace purists are bitching about DLSS; meanwhile I'm sitting here seeing it as a really fucking juicy option for mid-end gaming these next 10 years. I agree that ray tracing needs MUUUCH more computing power, but I expect Nvidia and AMD to figure out tricks and shortcuts to "make it work", and DLSS is one of those tricks. We'll have driver-level temporal upscaling within a generation or two and, hopefully, on-the-fly remastering of graphical assets within the next 15 years. That's the roadmap I see ahead.
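For anyone wondering what "driver-level temporal upscaling" would actually mean, the core idea underneath DLSS-style scaling is old TAA math: reproject last frame's high-res history with motion vectors and blend in the new low-res samples. A bare-bones sketch of that blend (the struct, function name and blend factor are made up for illustration - this isn't DLSS or anyone's real implementation):

```cpp
// Minimal sketch of the reproject-and-blend idea behind temporal upscaling.
// Names and the blend factor are illustrative only.
struct Color { float r, g, b; };

Color temporal_upscale(Color history,       // last frame's result, already
                                             // reprojected with motion vectors
                       Color current,       // this frame's new low-res sample
                       float alpha = 0.1f)  // how much to trust the new sample
{
    // Exponential blend: keep most of the accumulated history, fold in a bit
    // of the new frame. Real implementations also clamp the history against
    // the current frame's neighbourhood to reject stale samples (ghosting).
    return { history.r + alpha * (current.r - history.r),
             history.g + alpha * (current.g - history.g),
             history.b + alpha * (current.b - history.b) };
}
```

The blend is the easy part; rejecting stale history without reintroducing shimmer is where most of the engineering (and, presumably, the neural network) goes.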
Keep in mind that raytracing is a DX12 feature (DXR) that heavily relies on the compute side of GPUs, and that's something AMD has traditionally been far more powerful at. It's likely that AMD won't have to do much to get their newer cards doing raytracing at an acceptable level.
So are they just abandoning Vega, then? What a shame. I heard someone say recently that Vega's devs were starved of resources; their reasoning was that Sony has been pressuring AMD to move all those resources to the Navi team to get that architecture ready for the PS5, which is supposedly going to run Navi and some iteration of Zen. Any truth to this?
https://www.forbes.com/sites/jasonevangelho/2018/06/12/sources-amd-created-navi-for-sonys-playstation-5-vega-suffered/#2c57212224fd
Vega with HBCC was always an enterprise-grade product, but it seems like the massive resource drain for Navi killed any chance of DSBR and other useful gaming/raster technologies making it into Vega silicon in working order. Vega is a very large die and still a generally expensive product to build (HBM2, interposer, etc.), and it just can't be profitable at consumer prices, so I expect consumer Vega supply to slowly dry up while new enterprise products launch and ship.
This has very little to do with Vega, and the node itself is probably very similar to the current one.
I remember when PhysX came out; it was exciting as hell for me. Sadly it was hardly used at all - only Mirror's Edge used it properly. I expect RTX to share the same fate; most likely it'll get folded into DX12 entirely, since I heard Microsoft created it.
RTX at the end of the day is just NVIDIA's proprietary take on a new rendering tech. It is entirely possible to achieve the same thing in plain DX12, and I think AMD has already made a more open alternative to it in their current rendering suite - the bonus being that AMD's tech is more than likely A LOT more open to developers. I'm just left wondering what the "edge" is in NVIDIA's offering. Usually NVIDIA's tech on NVIDIA's hardware has something a bit more desirable going for it compared to AMD's offerings and the game-engine implementations of the same tech (well, except for HairWorks, which runs like ass no matter what GPU you're using). We'll just have to wait and see, I guess.
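For what it's worth, the vendor-neutral part is already in the API: a DX12 title just asks the driver whether it exposes a raytracing tier, with no "RTX" anywhere in sight. A minimal sketch (assumes an already-created device and a recent Windows SDK; error handling omitted):

```cpp
// Minimal sketch: asking a DX12 device whether the driver exposes DXR at all.
#include <windows.h>
#include <d3d12.h>

bool SupportsRaytracing(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts, sizeof(opts))))
        return false;

    // Any vendor whose driver implements DXR reports a tier here; the API
    // itself doesn't know or care about the "RTX" branding.
    return opts.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```

NVIDIA's "edge" would then have to come from what its driver and RT hardware do underneath that same call, not from the API surface.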
Mainly because NVIDIA are massive plastic dildos and usually bribed developers with money to use things like their now-dead GameWorks bullshit, which, just like PhysX, will be dead in about 3 years and replaced with far more accessible and usable open libraries.
Tell me what there is to undervalue, though. We don't have any results on the performance penalties (or potential lack thereof) of NVIDIA's take on real-time ray tracing in games yet, simply because there are no games out that currently take advantage of the technology, RTX or otherwise. We'll have to wait for more games to adopt and refine this tech before we have any idea just how much of a difference NVIDIA's RT cores make, and whether it's truly worth the extra investment to get a GPU that really offers nothing over last generation other than specific hardware for a specific graphical feature. If RTX offers substantially better performance with the same or better visuals than a game just using "raw" DXR/etc., then yeah, I could see the investment being worth it maybe another generation from now. As it stands, we don't know much at all. All we do know is that the RTX 2080 is basically just an overpriced 1080 Ti in terms of pure raster performance.
CPU PhysX runs like ass and is shoved into random games to disadvantage AMD. Also GameWorks.
CPU PhysX runs just fine, and is featured in everything from 3DS games to high-end PC games.
I worded that badly - I mean any implementation of PhysX that falls back to the CPU on AMD cards while being GPU-accelerated on Nvidia cards.
PhysX is a really janky piece of trash. It's only used in so many games because:
- it's free
- UE4 ships with it / Epic and Nvidia being buddies
Just about any project I've seen where physics plays an important part has complained heavily about how much of a pain in the ass PhysX is. Bullet (for how unsupported it is) is a much, much better alternative, and Havok is still the gold standard.
If you already have the 580, why bother?
And the only reason it was pushed to the CPU was that Nvidia was having a hard time selling an API library that half the market couldn't use.