• Cray XK6 supercomputer smashes petaflop record, humbly calls itself 'general-purpose'
I read that by 2040, probably much sooner, an AI will have the computational power to look you in the eye and lie. That's a little unnerving.
Fucking Crysis is small time, yo. I'd be surprised if this computer could run Metro 2033 on [b]medium.[/b]
An infinite loop can end, actually, due to memory errors and all. You'd probably have to run it such an immense number of times though that... well, it would take a long time.
I'm still totally against the use of x86 in supercomputers. It's just super dirty technology-wise.
your name is MIPS, of course you're against AMD64. In other news, I want a Beowulf cluster of THOSE.
[QUOTE=deathstarboot;30049205]I read by 2040, probably much sooner, an AI will have the computational power to look you in the eye and lie. That's a little unnerving.[/QUOTE] Portal confirmed to take place in 2040
[QUOTE=MIPS;30055481]I'm still totally against the use of x86 in supercomputers. It's just super dirty technology-wise.[/QUOTE] I'd imagine it mostly runs on CUDA, considering the GPUs.
[QUOTE=macerator;30055751]your name is MIPS, of course you're against AMD64.[/QUOTE] x86 has been around for 30 years and the only reason it's still here is because it's a cheap technology and you can crank out generic software and support on it in less than an hour. Extensions to x86 made it superscalar when the Pentium was released, but it's a very rough implementation of the technology. There were much better chips out there that could beat the pants off x86 in parallelism, like Alpha, SPARC (SPARC64 was a rough spot), and of course MIPS, if they had ever managed to break the 1GHz barrier (which they didn't, and people hated them for that). [quote]I'd imagine it mostly runs on CUDA, considering the GPUs. [/quote] I can't say much for CUDA as I do not know how that technology works.
[QUOTE=MIPS;30061335]x86 has been around for 30 years and the only reason it's still here is because it's a cheap technology and you can crank out generic software and support on it in less than an hour. Extensions to x86 made it superscalar when the Pentium was released, but it's a very rough implementation of the technology. There were much better chips out there that could beat the pants off x86 in parallelism, like Alpha, SPARC (SPARC64 was a rough spot), and of course MIPS, if they had ever managed to break the 1GHz barrier (which they didn't, and people hated them for that). I can't say much for CUDA as I do not know how that technology works.[/QUOTE] So you like architectures based on parallel computing? Then you'll [B]LOVE[/B] CUDA [QUOTE][B]CUDA is a parallel computing architecture developed by Nvidia. CUDA is the computing engine in Nvidia graphics processing units (GPUs) that is accessible to software developers through variants of industry standard programming languages.[/B] Programmers use 'C for CUDA' (C with Nvidia extensions and certain restrictions), compiled through a PathScale Open64 C compiler,[1] to code algorithms for execution on the GPU. CUDA architecture shares a range of computational interfaces with two competitors - the Khronos Group's Open Computing Language[2] and Microsoft's DirectCompute.[3] Third party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, MATLAB and IDL, and native support exists in Mathematica. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs. Using CUDA, the latest Nvidia GPUs become accessible for computation like CPUs. Unlike CPUs however, GPUs have a parallel throughput architecture that emphasizes executing many concurrent threads slowly, rather than executing a single thread very quickly. This approach of solving general purpose problems on GPUs is known as GPGPU. 
In the computer game industry, in addition to graphics rendering, GPUs are used in game physics calculations (physical effects like debris, smoke, fire, fluids); examples include PhysX and Bullet. CUDA has also been used to accelerate non-graphical applications in computational biology, cryptography and other fields by an order of magnitude or more.[4][5][6][7] An example of this is the BOINC distributed computing client.[8] CUDA provides both a low level API and a higher level API. The initial CUDA SDK was made public on 15 February 2007, for Microsoft Windows and Linux. Mac OS X support was later added in version 2.0,[9] which supersedes the beta released February 14, 2008.[10] CUDA works with all Nvidia GPUs from the G8X series onwards, including GeForce, Quadro and the Tesla line. Nvidia states that programs developed for the GeForce 8 series will also work without modification on all future Nvidia video cards, due to binary compatibility.[/QUOTE] That's just the juiciest bits, more info in the full article: [url]http://en.wikipedia.org/wiki/CUDA[/url]
[QUOTE=BrainDeath;30048470]I don't think cray linux has the "bullshit" driver installed. It'd be unable to speak politician.[/QUOTE] [quote=deathstarboot]I read by 2040, probably much sooner, an AI will have the computational power to look you in the eye and lie. That's a little unnerving. [/quote] :foxnews:Skynet to stand for President in 2040 elections!:foxnews:
Finally [I]something[/I] which can run Vista.