• Researchers create ultra-fast '1,000 core' processor, Intel also toys with the idea
    59 replies
[img_thumb]http://www.maximumpc.com/files/u107541/gizmodo_0.jpg[/img_thumb] [url=http://www.engadget.com/2010/12/28/researchers-create-ultra-fast-1-000-core-processor-intel-also/]Source[/url] [release]We've already seen field programmable gate arrays (or FPGAs) used to create energy efficient supercomputers, but a team of researchers at the University of Glasgow led by Dr. Wim Vanderbauwhede now say that they have "effectively" created a 1,000 core processor based on the technology. To do that, the researchers divvied up the millions of transistors in the FPGA into 1,000 mini-circuits that are each able to process their own instructions -- which, while still a proof of concept, has already proven to be about twenty times faster than "modern computers" in some early tests. Interestingly, Intel has also been musing about the idea of a 1,000 core processor recently, with Timothy Mattson of the company's Microprocessor Technology Laboratory saying that such a processor is "feasible." He's referring to Intel's Single-chip Cloud Computer (or SCC, pictured here), which currently packs a whopping 48 cores, but could "theoretically" scale up to 1,000 cores. He does note, however, that there are a number of other complicating factors that could limit the number of cores that are actually useful -- namely, Amdahl's law (see below) -- but he says that Intel is "looking very hard at a range of applications that may indeed require that many cores." [/release] [img]http://i.dailymail.co.uk/i/sitelogos/logo_mol.gif[/img] [url=http://www.dailymail.co.uk/sciencetech/article-1342100/Scientists-unveil-1-000-core-chip-make-desktop-machines-20-times-faster.html]Source[/url] [release]Scientists have created an ultra-fast computer chip which is 20 times faster than current desktop computers. Modern PCs have a processor with two, four or sometimes 16 cores to carry out tasks. But the central processing unit (CPU) developed by the researchers effectively had 1,000 cores on a single chip. 
The developments could usher in a new age of high-speed computing in the next few years for home users frustrated with slow-running systems. And the new 'super' computer is much greener than modern machines - using far less power - despite its high speed. Scientists used a chip called a Field Programmable Gate Array (FPGA) which like all microchips contains millions of transistors - the tiny on-off switches which are the foundation of any electronic circuit. But FPGAs can be configured into specific circuits by the user, rather than their function being set at a factory. This enabled the team to divide up the transistors within the chip into small groups and ask each to perform a different task. By creating more than 1,000 mini-circuits within the FPGA chip, the researchers effectively turned the chip into a 1,000-core processor - each core working on its own instructions. The chip was able to process around five gigabytes of data per second in testing - making it approximately 20 times faster than modern computers. The team was led by Dr Wim Vanderbauwhede, of the University of Glasgow, and colleagues at the University of Massachusetts Lowell. He said: 'FPGAs are not used within standard computers because they are fairly difficult to program but their processing power is huge while their energy consumption is very small because they are so much quicker - so they are also a greener option.' While most computers sold today now contain more than one processing core, which allows them to carry out different processes simultaneously, traditional multi-core processors must share access to one memory source, which slows the system down. The research scientists were able to make the processor faster by giving each core a certain amount of dedicated memory. 
Dr Vanderbauwhede said: 'This is very early proof-of-concept work where we're trying to demonstrate a convenient way to program FPGAs so that their potential to provide very fast processing power could be used much more widely in future computing and electronics. 'While many existing technologies currently make use of FPGAs, including plasma and LCD televisions and computer network routers, their use in standard desk-top computers is limited. 'However, we are already seeing some microchips which combine traditional CPUs with FPGA chips being announced by developers, including Intel and ARM. 'I believe these kinds of processors will only become more common and help to speed up computers even further over the next few years.' He hopes to present his research at the International Symposium on Applied Reconfigurable Computing in March next year.[/release] [img]http://www.google.com/hostednews/img/ukpa_logo.gif?hl=en[/img] [url=http://www.google.com/hostednews/ukpress/article/ALeqM5j5eJ8wHnKypHvdquBem1g9UUXnUw?docId=N0349891293495424300A]Source[/url] [release]Scientists have created an ultra-fast 1,000 core computer processor which could speed up machines and make them greener. Originally, computers were developed with only one core processor, the part of a computer's central processing unit (CPU) which reads and executes instructions. Nowadays processors with two, four or even 16 cores are commonplace. However, Dr Wim Vanderbauwhede, of the University of Glasgow, and colleagues at the University of Massachusetts Lowell have now created a processor which effectively contains more than a thousand cores on a single chip. 
[/release] [img_thumb]http://profy.com/wp-content/blogs.dir/1/uploads/2007/01/WindowsLiveWriter/037e0f423a82_6C62/zdnet6.jpg[/img_thumb] [url=http://www.zdnet.co.uk/news/emerging-tech/2010/12/25/intel-why-a-1000-core-chip-is-feasible-40090968/]Source[/url] [release]Q&A Chipmaker Intel has been investigating the issue of scaling the number of cores in chips through its Terascale Computing Research Program, which has so far yielded two experimental chips of 80 and 48 cores. In November, Intel engineer Timothy Mattson caused a stir at the Supercomputer 2010 Conference when he told the audience that one of the Terascale chips — the 48-core Single-chip Cloud Computer (SCC) — could theoretically scale to 1,000 cores. Mattson, who is a principal engineer at Intel's Microprocessor Technology Laboratory, talked to ZDNet UK about the reasoning behind his views and why — while a 1,000-core chip isn't on Intel's roadmap — the path to creating such a processor is now visible. Q: What would it take to build a 1,000-core processor? A: The challenge this presents to those of us in parallel computing at Intel is, if our fabs [fabrication department] could build a 1,000-core chip, do we have an architecture in hand that could scale that far? And if built, could that chip be effectively programmed? The architecture used on the 48-core chip could indeed fit that bill. I say that since we don't have cache coherency overhead. Message-passing applications tend to scale at worst as the diameter of the network, which runs roughly as the square root of the number of nodes on the network. So I can say with confidence we could scale the architecture used on the SCC to 1,000 cores. But could we program it? Well, yes: as a cluster on a chip using a message-passing API. Is that message-passing approach something the broader market could accept? We have shared memory that is not cache-coherent between cores. 
Can we use that together with the message passing to make programming the chip acceptable to the general-purpose programmer? If the answers to these questions are yes, then we have a path to 1,000 cores. But that assumption leads to a larger and much more difficult series of questions. Chief among these is whether we have usage models and a set of applications that would demand that many cores. We have groups working on answers to that question. As I see it, my job is to understand how to scale out as far as our fabs will allow and to build a programming environment that will make these devices effective. I leave it to others in our applications research groups and our product groups to decide what number and combination of cores makes the most sense. In a sense, my job is to stay ahead of the curve. Is there a kind of threshold of cores beyond which it is too difficult to program to get maximum use of them? What is it — 100, 400? There is no theoretical limit to the number of cores you can use. It's more complicated than that. It depends on, one, how much of the program can be parallelised and, two, how much overhead and load-imbalance your program incurs. We talk about this in terms of Amdahl's law. This law says that we can break down a program into a part that speeds up with cores — the parallel fraction — and a part that doesn't — the serial fraction. If S is the serial fraction, you can easily prove with just a bit of algebra that the best speedup you can get, regardless of the number of cores, is 1/S. So the limit on how many cores I can use depends on the application and how much of it I can express in parallel. It turns out that getting S below one percent can be very hard. For algorithms with huge amounts of "embarrassingly parallel" operations, such as graphics, this can be straightforward. For more complex applications, it can be prohibitively difficult. Would Intel ever want to scale up a 1,000-core processor? 
That depends on finding applications that scale to 1,000 cores, usage modes that would demand them, and a market willing to buy them. We are looking very hard at a range of applications that may indeed require that many cores. For example, if a computer takes input from natural language plus visual cues such as gestures, and presents results in a visual form synthesised from complex 3D models, we could easily consume 1,000 cores. Speaking from a technical perspective, I can easily see us using 1,000 cores. The issue, however, is really one of product strategy and market demands. As I said earlier, in the research world where I work, my job is to stay ahead of the curve so our product groups can take the best products to the market, optimised for usage models demanded by consumers. Would the process of fabricating 1,000 cores present problems in itself? I came up with that 1,000 number by playing a Moore's Law doubling game. If the integration capacity doubles with each generation and a generation is nominally two years, then in four or five doublings from today's 48 cores, we're at 1,000. So this is really a question of how long do we think our fabs can keep up with Moore's Law. If I've learned anything in my 17-plus years at Intel, it's never bet against our fabs. Why is the 48-core processor not in Intel's product roadmap? I need to be very clear about the role of the team creating this chip. Our job is to push the envelope and develop the answer to the question: "what is possible?". This is a full-time job. Our product roadmap takes our "what is possible?" output and figures out "what does the market demand?". That is also a full-time job. Intel's product roadmap will reflect what our people in the product groups figure will be demanded by the market in the future. That may or may not look like the 48-core SCC processor. I have no idea. [/release]
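The Amdahl's-law argument Mattson makes in the interview is easy to check numerically. A minimal Python sketch (the function name is mine, not Intel's): speedup on N cores is 1/(S + (1-S)/N), which approaches the 1/S ceiling he cites as N grows.

```python
def amdahl_speedup(serial_fraction, cores):
    """Speedup predicted by Amdahl's law for a program whose
    non-parallelisable (serial) fraction is `serial_fraction`."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / cores)

# Even with 1,000 cores, the speedup can never exceed 1/S.
for s in (0.10, 0.01):
    print(f"S = {s:.2f}: 1,000 cores -> {amdahl_speedup(s, 1000):.1f}x, "
          f"ceiling 1/S = {1 / s:.0f}x")
# S = 0.10: 1,000 cores -> 9.9x, ceiling 1/S = 10x
# S = 0.01: 1,000 cores -> 91.0x, ceiling 1/S = 100x
```

This is why he says getting S below one percent matters: with a 10% serial fraction, 990 of those 1,000 cores are effectively wasted.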
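Mattson's "Moore's Law doubling game" is plain arithmetic: start from the SCC's 48 cores and double once per generation. A quick sketch of that count:

```python
# Repeatedly double the SCC's 48 cores until we pass 1,000,
# as in Mattson's Moore's-law doubling game.
cores, doublings = 48, 0
while cores < 1000:
    cores *= 2
    doublings += 1
print(doublings, cores)  # 5 1536
```

Four doublings reach 768, so the fifth overshoots to 1,536 — hence his "four or five doublings", or roughly eight to ten years at a nominal two years per generation.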
Very cool. I think it's astounding they can hook that many together, wonder what the capabilities are. inb4can'tmaxcrisis
can'tmaxcrisis!!!!!!!!!!!11
Can it run Crysis? Nice that technology is moving forward so fast. This goes nicely with that total Earth Simulator a few threads down.
No, it as a matter of fact can not run Crysis, since it has no video processor.
[QUOTE=Fish_poke;27030385]No, it as a matter of fact can not run Crysis, since it has no video processor.[/QUOTE] I doubt it would use an x86 or x64 architecture anyway, so I concur.
So just by splitting a CPU up into a thousand processors makes it 20x more efficient.
My dad works for Intel, he was explaining some of this to me. I don't understand the principles behind it, but it is a cool idea. I was skeptical until he showed me a few graphs from some of the test machines. It'd be nice to have a desktop with some of this stuff, but I'll have to wait. :saddowns:
[QUOTE=Fish_poke;27030385]No, it as a matter of fact can not run Crysis, since it has no video processor.[/QUOTE] Couldn't it just emulate it
:pcgaming:
[QUOTE=FalconKrunch;27030448]So just by splitting a CPU up into a thousand processors makes it 20x more efficient.[/QUOTE] There's quite a bit more to it than that. It isn't your average x86/x64 CPU like in a household PC. It's a specific kind originally designed with supercomputers in mind, it seems. [editline]28th December 2010[/editline] [QUOTE=TheKnife;27030688]Couldn't it just emulate it[/QUOTE] Have fun emulating x86/x64 for Crysis to actually run after you are already emulating video processing.
I.. Uh... :psyduck:
Screw x86 and x64, make a new architecture, stick it in a tablet computer running Android 3.0 and sell it for £1,500, making a profit of £1,300.
Can't we have 8 core processors that run on 128 bit? That way we're going up in a pattern.
Stop beating around the bush and develop quantum computing already.
I bet you could roast a turkey with the heat output...
[QUOTE=Master117;27031738]I bet you could roast a turkey with the heat output...[/QUOTE] Didn't you hear? That's the latest money- and energy-saving plan. Processors built after 2012 will come with an optional oven unit that can be used to cook the largest of meats in the shortest amount of time.
Mass-core chips have been made for years now. GPU's use the technology because the only way to cram that many cores onto a die is to make all their instruction sets very simple and thus constrained to doing one task very well. Think of it as an autistic CPU.
The next generation of consoles is going to be epic.
Crysis? Meh. Can it run... Doom? [editline]28th December 2010[/editline] At a proper 30fps that is. [editline]28th December 2010[/editline] No higher than 90 FPS, at that point and beyond the game will run too fast to be playable.
[QUOTE=Shoe Phone;27032465]The next generation of consoles is going to be epic.[/QUOTE] Yeah, they might actualy get multipul cores.
[QUOTE=Capn'Underpants;27033751]Yeah, they might actualy get multipul cores.[/QUOTE] Uh, they mostly do have multiple cores. Do you ever post anything informed?
I wonder how many years it will take until they decide to stop robbing us and just bring us the good stuff? My guess is that this is going to take some 10 years before we see any actual sales related to the "ultra fast processors", It's just a processor with thousand cores slammed on it for fucksake. Producing these fucktwits will cost some 2$ each, And still they're going to sell us crap with the price of gold. [editline]28th December 2010[/editline] [QUOTE=MIPS;27032244]Mass-core chips have been made for years now. GPU's use the technology because the only way to cram that many cores onto a die is to make all their instruction sets very simple and thus constrained to doing one task very well. Think of it as an autistic CPU.[/QUOTE] [img]http://www.deviantart.com/download/132877339/oh_that_chris_chan_by_INSECTIREALICIDE.png[/img] ... You sure about this?
[QUOTE=Shoe Phone;27032465]The next generation of consoles is going to be epic.[/QUOTE] It probably won't because A) It'll take ages to even make the next gen and B) The Wii proved that shitty cheap hardware sells while the PS3 proved that making a $600 system with good parts isn't a very good idea. This will probably lead to very meh consoles, especially when compared to what PCs are going to be like by that time.
[QUOTE=Master117;27031738]I bet you could roast a turkey with the heat output...[/QUOTE] [QUOTE=OP]far less power[/QUOTE] [QUOTE=First Law of Thermodynamics]In any process in an isolated system, the total energy remains the same.[/QUOTE] So if I put less energy in I must get more energy out :downswords:
1000 cores? Why not use a GPU if you need that many?
[QUOTE=Capn'Underpants;27033751]Yeah, they might actualy get multipul cores.[/QUOTE] Are you implying the Xbox 360 and Playstation 3 home video entertainment systems lack multi-core processors?
[QUOTE=Capn'Underpants;27033751]Yeah, they might actualy get multipul cores.[/QUOTE] PS3 has Microprocessors
Yes but can it compete with the power of teh cell?
I read about this 3 or 4 years ago. Too tired to find the exact article, but I'm fairly sure I've seen the earlier work done on this exact chip. Of course, I was considered a crackpot when I told Facepunch.