AMD will start making high-performance consumer-level processors again
25 replies
[img]http://www.sweclockers.com/image/red/2014/05/06/AMD_ARM_Core_Roadmap_K12_2.jpg?t=paneBanner&k=d7f66872[/img]
[quote][B]
AMD reveals the Zen architecture, admits that Bulldozer was a massive failure[/B]
In early 2016, the much-criticized Bulldozer architecture will be replaced by Zen, a completely new design aimed at high-performance PCs and servers.
After a lot of ballyhoo and promises of unsurpassed performance, AMD released its first processors built on the Bulldozer architecture in 2011, and they fell flat. Andrew Feldman, the former head of AMD's server business, called the architecture a complete failure. Most of the people responsible for the architecture also got fired.
Now none other than the company's CEO, Rory Read, says that "everyone knows" Bulldozer was not as good as it was supposed to be.
"Everyone knows that Bulldozer was not the game changing part it was supposed to be when it was introduced three years ago. We have to live with that for four years."
"But [for] Zen, K12, we went out and got Jim Keller, we went out and got Raja Koduri from Apple; Mark Papermaster; Lisa Su; we’ve built and are currently building the next generation of graphics and compute technology that customers are very interested in, and they'll move to the next generation node and they’ll be ready to go."
The company has hired several veteran CPU engineers to work on its next x86-64 architecture, Zen, which will be accompanied by the high-performance ARM processor K12.
The list of names includes CPU architect Jim Keller, one of the lead engineers behind Apple's A4 and A5 processors as well as AMD's K7 and K8. He also contributed to Apple's Swift and Cyclone ARM cores.
"Servers, there's very few people who know how to create server chips. Jim Keller has a lot of experience in that space. [...] We've had very good communications with the OEMs in the server field, and with the customers, and we tested it because if we didn't see the acceptance we were looking for, we would have ended the business a year ago."
There is currently no official information about Zen. However, AMD is expected to abandon Clustered Multithreading, the fundamental idea behind its current module design, in which two cores split a module's resources between them. Instead, it will likely turn to Simultaneous Multithreading, which lets a single core run two threads at once, the technique Intel markets as Hyper-Threading.
[/quote]
Source: [url]http://www.sweclockers.com/nyhet/19310-amd-haussar-arkitekturen-zen-erkanner-bulldozer-som-ett-misslyckande[/url]
So I guess we're not having an Intel monopoly on desktop PCs after all...?
Thank fuck.
[quote]Most of the people responsible for the architecture also got fired.[/quote]
Ah yes, the perfect way not to repeat your mistakes.
[QUOTE=wickedplayer494;45973432]So I guess we're not having an Intel monopoly on desktop PCs after all...?[/QUOTE]
The reviews will tell. AMD has a lot to do to catch up.
[QUOTE=fishyfish777;45973463]The reviews will tell. AMD has a lot to do to catch up.[/QUOTE]
I'd love to see it happen though
[QUOTE=fishyfish777;45973463]The reviews will tell. AMD has a lot to do to catch up.[/QUOTE]
Not that much... Intel hasn't really been pushing performance, just efficiency. And most of that efficiency was idle power, not load. AMD could be competitive on the desktop pretty quickly.
Good. I want to see AMD compete with Intel again. Hopefully they can also hire better software engineers to improve their video drivers
[QUOTE=gman003-main;45973490]Not that much... Intel hasn't really been pushing performance, just efficiency. And most of that efficiency was idle power, not load. AMD could be competitive on the desktop pretty quickly.[/QUOTE]
Doubt they'll be on-par with the top-end performance any time soon, but at least they should try and get back to competing in price/performance.
What exactly made Bulldozer bad in the first place, though? Can anyone explain that to me?
[QUOTE=woolio1;45973545]What exactly made Bulldozer bad in the first place, though? Can anyone explain that to me?[/QUOTE]
I may be wrong, but I could've sworn it was mainly about the processor not having the amount of transistors (or something along those lines) as advertised?
[QUOTE=Llamaguy;45973585]I may be wrong, but I could've sworn it was mainly about the processor not having the amount of transistors (or something along those lines) as advertised?[/QUOTE]
[QUOTE=woolio1;45973545]What exactly made Bulldozer bad in the first place, though? Can anyone explain that to me?[/QUOTE]
This [URL="http://www.extremetech.com/computing/100583-analyzing-bulldozers-scaling-single-thread-performance"]link[/URL] I found puts it in words us simpletons can understand.
Apparently it also underdelivered on performance. It cost more than Intel's i5-2500K/i7-2600K but couldn't match them.
[QUOTE=gman003-main;45973490]Not that much... Intel hasn't really been pushing performance, just efficiency. And most of that efficiency was idle power, not load. AMD could be competitive on the desktop pretty quickly.[/QUOTE]
I've been wanting AMD to get their heads out of their asses for 5 years now purely because of this. Intel has stopped working on raw horsepower. Their processors absolutely crush anything AMD brings to the table with a modest 10-15% improvement each generation; Intel could be doing 30% if it weren't shrinking power envelopes instead. While their current path is wonderful for laptops, it does very little in the desktop world, where you're fine with a machine drawing several hundred watts. Quite literally, if you have a Sandy Bridge overclocked to 5.0 GHz, you might actually lose a fair amount of raw compute power by upgrading to Haswell. Even if you have a first-generation i7, you still probably don't NEED to upgrade it.
I don't want an eco race. I want an all out war for who has the most raw computational power. I don't care if my processor is drawing twice as much power under full load. If it's 50% faster, that's worth it.
excellent, hopefully i can dump this a4-4000 for some more amd-master race stuff. My r7 250 is surprisingly powerful, i love it.
Yes. This is what I want to see.
Ah yes good ole take-it-away-bring-it-back marketing.
[QUOTE=woolio1;45973545]What exactly made Bulldozer bad in the first place, though? Can anyone explain that to me?[/QUOTE]
TL;DR version without too much mumbo jumbo. If you want the technical reasons why, read gman's post 2 down from here.
Bulldozer made a lot of really dumb design choices that massively hurt performance in most applications. Its single-threaded performance was absolute garbage, and AMD "solved" this by putting 8 cores on the top half of its flagship lineup. The problem is that most applications can only use a single core at a time. For realistic desktop use, you get almost zero benefit from more than 2 cores, since it's exceedingly rare for two applications to be maxed out at once.
Then they took the Pentium 4 approach: rather than improving the chips, they just kept ramping up clock speeds and power consumption. Some of their chips were drawing 150 watts while Intel's were drawing 80, and Intel's quads were crushing them in single-threaded performance, even winning most fights where AMD's octocores could use all 8 cores.
They took shitty cores that couldn't compete, pretended that adding a lot of them could brute-force through problems when it couldn't, released these products at shitty, non-competitive price points, and then buried their heads in the sand for 4 years, throwing more electricity at the problem instead of redesigning a fundamentally broken product.
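To put rough numbers on the "more cores don't help serial code" point, here's Amdahl's law as a quick sketch (my own illustration, not anything from AMD; the workload fractions are made up):

```python
def amdahl_speedup(cores: int, parallel_fraction: float) -> float:
    """Amdahl's law: overall speedup from `cores` when only
    `parallel_fraction` of the work can actually run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A typical desktop app that's ~30% parallelizable barely gains
# anything from 8 slow cores over 2:
print(round(amdahl_speedup(2, 0.3), 2))  # 1.18
print(round(amdahl_speedup(8, 0.3), 2))  # 1.36
# Whereas a 40% faster single core speeds up *everything* by 1.4x.
```

That's why weak cores times eight lost to four strong ones on the desktop.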
[QUOTE=Llamaguy;45973585]I may be wrong, but I could've sworn it was mainly about the processor not having the amount of transistors (or something along those lines) as advertised?[/QUOTE]
The number of transistors actually has very little to do with processor performance these days :v: The transistor-count champion at the time was a huge Ivy Bridge Xeon, which, needless to say, was still outperformed per-thread by its i7 brothers. Performance largely comes down to clock speed, the number of cores and threads, and how instructions are executed in those threads (the instruction pipeline).
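As a back-of-the-envelope sketch (the chip numbers below are purely hypothetical, for illustration): throughput is roughly cores × clock × instructions-per-clock, which shows why a wider pipeline beats raw core count for single-threaded work:

```python
def throughput(cores, clock_ghz, ipc):
    # Rough billions of instructions/second if every core is busy.
    return cores * clock_ghz * ipc

def single_thread(clock_ghz, ipc):
    # Only one core matters when the app has a single thread.
    return clock_ghz * ipc

# Hypothetical chips, made-up numbers:
many_weak = (8, 4.0, 1.0)  # lots of narrow, fast-clocked cores
few_wide  = (4, 3.5, 2.0)  # fewer cores, better pipeline (higher IPC)

print(throughput(*many_weak), single_thread(4.0, 1.0))  # 32.0 4.0
print(throughput(*few_wide), single_thread(3.5, 2.0))   # 28.0 7.0
```

The 8-core chip only wins if software keeps all 8 cores busy; the wide quad wins everywhere else.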
The big problem was that Windows' scheduler didn't jibe as well as it should have with Bulldozer's already sub-par attempt at better multithreading.
[editline].[/editline]
What this guy said VV
[QUOTE=woolio1;45973545]What exactly made Bulldozer bad in the first place, though? Can anyone explain that to me?[/QUOTE]
Bulldozer was designed for the wrong workload. It was built for highly-multithreaded, integer-heavy workloads.
To that end, the basic unit was the "module". It combined two integer clusters (two ALUs and two AGUs each), one shared floating-point/SIMD cluster (two 128-bit FMAC pipes), a shared four-wide instruction decoder, and some cache. AMD advertised this as two full cores. If you were doing purely integer work, or even a 50-50 mix of integer and floating-point, it could act that way. But heavy floating-point or SIMD work was a problem, because the two threads would have to trade off the shared FP unit. The simple design did keep the module small, so AMD expected to cram a lot of them onto one chip. Outside of servers (some 8-module Opterons did come out), that never really happened.
For comparison, Intel's basic unit remained the "core". From Nehalem through Sandy Bridge that meant a 4-wide decoder feeding 6 execution ports (Haswell upped it to 8). Intel didn't strictly delineate between integer and float: execution port 1 had an integer ALU, an FP adder, and some vector hardware. Hyper-Threading is Intel's term for letting two threads split those decoders and execution units; because int and FP aren't segregated, any mix of workloads runs about equally fast. And if only one thread is running, it can use the extra units pretty effectively.
So while both sides had about as much raw power, AMD's was less able to put that power to work. This hurt even more when multi-threaded apps didn't magically pop into existence. Their strategy of powerful integrated GPUs for compute also meshed badly with this - if you have a big, parallel GPU, you don't need a bunch of parallel CPU cores as well. What you would want is a handful of really fast single-threaded cores.
Intel also has a manufacturing advantage. They run their own factories, and they have a pretty good lead right now in terms of fab tech. They've been building chips at 22nm. AMD sold off their factories (because they sucked and were losing money), and so they rely on others to actually make their chips. They launched using 32nm, later shrinking to 28nm. That meant both that Intel can put more stuff onto one chip at the same cost, and they can use less power. AMD can't really do anything about this anymore, but Intel's gearing up for 14nm, while nobody else has gotten below 28nm for processors (there's 1Xnm flash, but that isn't applicable here).
Good ol' days of AMD processor chip and Nvidia graphics card.
i hope they stop advertising quad core hyperthreaded cpus as 8 core cpus
cant wait for intel to get stomped on by amd
Did they even have time to stop making them? I feel like it was yesterday they announced they would stop making those.
[QUOTE=AJ10017;45974310]cant wait for intel to get stomped on by amd[/QUOTE]
Not gonna happen, but it'll at least open up a better market. It's gonna take a while for AMD to get back into shape, but if they can produce something that matches or at least approaches the current Ivy Bridge or the old top-end Sandy Bridge Intels, it'll certainly be an improvement even if Intel's next release is better.