• Rumor: AMD Ryzen 3000 runs at 4.5 GHz, beats Ryzen 2000 by 15%
https://www.techspot.com/news/79839-leak-ryzen-3000-runs-45-ghz-beats-ryzen.html
Can't believe midrange is going to be 8c16t soon. Going to build a multipurpose Unraid system, so this will be perfect! Love you AMD, pls kick Intel's ass into gear.
If my current B350 motherboard ends up supporting Ryzen 3000 via a BIOS update, this will definitely be a future investment. My Ryzen 1700 hasn't shown any signs of struggling yet, though.
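For anyone else in the same boat wondering what they're currently running before hunting for an AGESA update, here's a minimal sketch for checking the BIOS version (Linux only; the sysfs DMI paths are standard, but the values are obviously board-specific):

```python
# Minimal sketch: print the current board and BIOS info on a Linux box
# before looking for a Ryzen 3000-capable BIOS update. Uses the standard
# sysfs DMI entries; output values vary by board and vendor.
from pathlib import Path

DMI = Path("/sys/class/dmi/id")

for field in ("board_vendor", "board_name", "bios_version", "bios_date"):
    node = DMI / field
    value = node.read_text().strip() if node.exists() else "unknown"
    print(f"{field}: {value}")
```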
AMD, please kick Intel in the balls. Skip the ass entirely and administer punishing, vision-interrupting slams straight to the crown jewels. It's been a long time coming.
Considering I just bought a B450 board and a first-gen Ryzen 5 on mega clearance, I think I'll wait for Zen 2+, since that seems to be the last generation for the AM4 platform. After that we should be getting into DDR5 and PCIe 4.0 territory.
Might be worth upgrading my R5 1600. I can't get it past 3.8 GHz.
Sadly, Intel will still have better single-threaded performance, and you'll see people take the "Intel is better for gaming!!!!" approach, because people still believe game engines don't use more than one thread in 2019.
If AMD delivers these frequencies in an 8-core package at a 95W TDP, they have my money, no questions asked.
If you think it can last you another year, next year's Ryzen 4000 series will be the last AMD processors that will work with your motherboard.
It'd be nice if they added ECC RAM support; a mid-tier 8-core/16-thread processor makes the low-end Xeons look like shit.
My fear is that the 3000/4000 series will have a CPU that just fucking destroys Gen 1 mobos and tarnishes AMD's rep. They've been on such a good roll so far, keeping true to their word about AM4's universality.
The thing with AMD's CPUs is they age really well. Even the Phenom II 965 can still game reasonably well (as long as the game doesn't require the newer SSE4.1/4.2 extensions, which Phenom II lacks). It's old as balls and doesn't have SMT, but it'll still do Witcher 3 at 60 fps.
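Side note for anyone shopping used: the easy way to tell which extensions your chip actually reports is to peek at the CPU flags. A minimal Linux sketch (the flag names are the standard /proc/cpuinfo ones; a Phenom II should report sse2/sse3/sse4a but not ssse3 or sse4_1/sse4_2):

```python
# Minimal sketch: check which SSE-family extensions the CPU reports,
# e.g. before buying a used chip for a game that needs SSE4.1.
wanted = {"sse2", "sse3", "ssse3", "sse4a", "sse4_1", "sse4_2"}

flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

for ext in sorted(wanted):
    print(f"{ext:8} {'yes' if ext in flags else 'MISSING'}")
```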
Uhhhhh, you can't really say that "AMD processors age well". You can say that a specific architecture has aged well (though I wouldn't be sure Phenom or Thuban is really an example of that), but AMD processors in general? Take a look at Bulldozer: it was a rock-hard, unripe avocado at launch, and it had already spoiled by the next time you bothered to take a look. Computers aren't wine.
Oh when I say destroys mobos, I mean it quite literally: complete hardware failure, fried chip, etc. Not destroyed as in such a massive performance boost they become totally obsolete
Ryzen has always supported ECC RAM if the motherboard supports it.
I don't totally understand all of it, but her smile is incredible, so I'll rate it 10/10.
Maybe it's time to upgrade my 2600k.
I think it runs ECC in non-ECC mode though, which is okay. There's a distinct lack of Ryzen boards with enterprise features.
It runs proper ECC: 1-bit correct, 2-bit detect. Also, look at ASRock Rack's X470D4U.
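If anyone wants to sanity-check that ECC is actually active rather than silently running in non-ECC mode, here's a quick Linux sketch (assumes the kernel EDAC driver is loaded, amd64_edac on Ryzen; an empty mc directory usually means ECC isn't in use):

```python
# Minimal sketch: read the kernel EDAC counters to confirm ECC is live.
# ce_count = corrected (1-bit) errors, ue_count = uncorrectable (2-bit).
from pathlib import Path

mc_root = Path("/sys/devices/system/edac/mc")
controllers = sorted(mc_root.glob("mc*"))

if not controllers:
    print("No EDAC memory controllers found -- ECC likely not active.")
for mc in controllers:
    ce = (mc / "ce_count").read_text().strip()
    ue = (mc / "ue_count").read_text().strip()
    print(f"{mc.name}: corrected={ce} uncorrectable={ue}")
```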
Single-threaded performance is still very important. The 6-core 8700K easily beats the 8-core 2700X when you're CPU-limited.
The whole FX line aged like milk left out in the sun 24/7.
my 1700X is not even 2 years old yet and I am already ready
They can take their single core performance up their ass when it comes to recording/streaming shit.
On top of the completely accurate points you're making... AMD has basically been tied with Intel for IPC since Zen dropped. Kaby Lake was maybe a hair faster than Ryzen 1000, Ryzen 2000 was a hair faster than Coffee Lake, but they're basically equal in terms of IPC with their competitor's equivalent. Some workloads might get better IPC on one versus the other, but (outside some niche stuff using Intel-only extensions) none of it really matters. Intel kept a slim lead in single-threaded performance through clock speed. Ryzen 1000 in particular didn't clock all that well, and Intel doubled down on their own propaganda by pushing their stock clocks higher and higher.

Well, now AMD is going to be able to match Intel's non-insane processors in terms of clock speed, while having a tiny IPC advantage. In other words: faster single-threaded performance. Now, add on top of that the way AMD still puts more cores on their chips than Intel's similarly priced equivalent, meaning better multi-threaded performance... AMD wins. At least on desktop - I think they might still be losing on idle power draw and perf/watt, but not by enough to matter on desktop. (And Bobcat/Jaguar/Puma makes a lot more sense for mobile than Zen - or better yet, do 4 Zen cores and 4 Bobcat cores, the way ARM does it. I think AMD could win on this front too if they tried; they're just focused on the desktop/server market right now.)

And speaking of Jaguar, that's basically why game engines are so heavily multithreaded now (as you succinctly stated). Xb360 and PS3 weren't so multicore that you had to write your engines that way; you could get away with a mostly single-threaded engine with a few tasks offloaded to the other cores (or the SPUs, in Sony's case). The PS4/Xb1 generation, though? They're Jaguar cores - pretty slow single-threaded, and you've got eight of them. If your engine is still single-threaded, you're dead on console. And in today's market, if you're AAA and dead on console, you're dead in the market. PC gaming may be bigger than ever, but it's still dwarfed by console sales.

Single-threaded performance still matters, but mostly for legacy games. Particularly emulation - you can't emulate a single-threaded CPU with more than one thread. But the big games people are talking about these days? They're all multithreaded, because they've got to be.
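To make the "wide but slow" point concrete, here's a toy sketch of the job-style fan-out that console engines are forced into. (Processes stand in for a real engine's C++ thread pool purely to dodge Python's GIL, and the chunking scheme is made up for illustration.)

```python
# Toy illustration: split one frame's CPU work into independent jobs and
# fan them out across 8 workers, like a job system on 8 Jaguar-class cores.
from concurrent.futures import ProcessPoolExecutor

def simulate_chunk(entities):
    # Stand-in for per-entity physics/AI work running on one worker core.
    return sum(e * e % 7 for e in entities)

def run_frame(pool, all_entities, jobs=8):
    # Carve the frame's entities into one chunk per worker.
    step = len(all_entities) // jobs
    chunks = [all_entities[i * step:(i + 1) * step] for i in range(jobs)]
    return sum(pool.map(simulate_chunk, chunks))

if __name__ == "__main__":
    entities = list(range(80_000))
    with ProcessPoolExecutor(max_workers=8) as pool:
        print("frame result:", run_frame(pool, entities))
```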
Same, though I'm likely gonna give it a year after the 3000 series comes out and then get a second-hand replacement for my 1700. It's rocking along like a trooper currently, so I really wonder if it's worth swapping. It's pretty happy running at 3.6 GHz on less than stock voltage.
Bulldozer was a stain on an otherwise pretty clean track record. My 1090T (Thuban) only just started to struggle with games last year, when devs moved over to "better single-threaded performance is needed" mode. It's still amazing at productivity work though (like compiling code or video editing). It's just a shame I program ~40% of the time and spend the other 60% gaming - so I'll be moving to a 3700X when they're released and moving the 1090T into a workstation.
Bulldozer was a good-but-wrong idea, executed badly. I can see why the concept would make sense. They started design just as virtualization really took off, and it was obviously aimed at the server space. Having a lot of cores, for cheap, with decent integer/logic perf but maybe not so much FPU/vector throughput would be fairly attractive for the general cloud server market. It would mean basically forfeiting the laptop market, at least until Bobcat could hit, but it could still maybe compete in the desktop market, if programs went wide fast enough.

If they'd executed on that well, it still would have been a bit of a mistake. The desktop market went into a massive shrinking period, driven by laptops and tablets, and AMD had nothing for it. And it took a long time for consumer programs to take advantage of multicore, so the ST/MT tradeoff was wrong, or at least badly timed.

On top of that, Bulldozer was executed very, very badly. A lot of this falls on GlobalFoundries, which was already late with 28nm and utterly fucked up the follow-on node. AMD was basically an entire generation behind for the entire FX lineup - Ivy Bridge dropped in at 22nm the same year Piledriver dropped in, still at 32nm, and two years after GF finally got to 28nm, Intel got to 14nm. You just can't compete when your competitor can cram in more transistors, each more efficient and faster-switching, than you, for less money. Intel would have had to completely shoot themselves in the dick with their design work for AMD to stand a chance. And Intel... well, it wasn't anything particularly great, but they kept up incremental improvements to an already solid design. Maybe if Spectre and Meltdown had been found five years earlier, it could have made a difference, but Intel didn't fuck things up enough to forfeit their massive fabrication lead.

Right now, the situation has almost completely reversed itself. Intel has royally fucked up their 10nm node. GF isn't even trying to shrink beyond 14nm, but since AMD and GF are no longer closely coupled, AMD can just go to TSMC or Samsung for 10nm or even 7nm fabrication. AMD is the one with a shiny new uarch, while Intel is suddenly realizing that extending a uarch from the 1990s might not be such a great idea anymore, now that security is so much more critical. AMD even has a better low-power uarch (Puma is a million times better than even the newest Atom, at least in a laptop/high-end-tablet context... Atom might beat it in phones, but Atom already lost the phone market to ARM); once they actually open that front, the battle will be pretty much over.

Intel's not down and out, but they're on the ropes. Only inertia is keeping their brand on top, and that won't last forever. They've got a brief refuge in the laptop space, and existing contracts and supply-chain agreements that will keep them in servers/corporate desktops for a few years, but quality eventually wins. And Intel's got cash to burn. They can afford a crash project to build a new uarch, or even just bide their time until AMD fucks something up. We've been here before. Shit be cyclical. Most people probably remember the Pentium 4 era, but it goes back all the way to the 486 wars. Intel's been in the lead for a while, but now they're not.
What AMD must do to really kick Intel's balls is double down on the laptop market, which Intel and Nvidia still dominate. I would really love a laptop with something like the R5 2400G but with a beefier iGPU.
AMD needs to get serious traction in the OEM world, period. Volume is key for ripping Intel a new one.
I can see why people would make this mistake, but the shared FPU was not the main reason Bulldozer performed badly. It's actually integer performance that was dogshit, as it only had two ALUs per core, compared to 3 per core in Phenom/Phenom II and 4 per core in Zen. This resulted in the initial versions of Bulldozer having generally worse IPC than even Phenom II. They intentionally sacrificed per-core performance in order to deliver more cores. Yes, the FPU was competitively shared between pairs of cores, but it was also substantially upgraded compared to previous designs. Only 256-bit AVX operations really took over the entire thing; otherwise it could execute more instructions at the same time.

I wonder why they chose this specific implementation instead of merging all the execution resources from one "module" into a single fat core with SMT. That could have resulted in way better single-threaded performance. The only reason I can think of is that this choice must have significantly simplified the design. I mean, they eventually got there, because that's roughly what the Zen core looks like, but it took them way too long.
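Just to put rough numbers on those ALU counts, here's a back-of-envelope peak-issue calc (the clocks are illustrative round numbers I picked, and real-world IPC is nowhere near these peaks; it only shows the scale of the per-core handicap):

```python
# Back-of-envelope peak integer issue rates using the ALU counts from the
# post above (2/core Bulldozer, 3/core Phenom II, 4/core Zen). Clocks are
# illustrative; actual sustained IPC is far below these theoretical peaks.
designs = {
    "Phenom II (3 ALUs)": (3, 3.4e9),
    "Bulldozer (2 ALUs)": (2, 3.6e9),
    "Zen (4 ALUs)":       (4, 3.6e9),
}

for name, (alus, clock_hz) in designs.items():
    peak = alus * clock_hz  # max integer ops/second for one core
    print(f"{name}: peak {peak / 1e9:.1f} G int ops/s per core")
```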