How much of an impact does hardware make on compiling?

I have recently acquired a Dell Precision T7400 workstation tower. It has two quad-core Xeon processors @ 2.8GHz.

I was wondering if it would compile maps/vvis/vrad faster than my primary desktop, which has a quad-core i7-3770 @ 3.6GHz with hyperthreading.

I know that the tools supposedly take advantage of the extra cores, but these Xeons do not seem to have hyperthreading support.

Is it worth moving my mapping assets over to compile them on this other machine?

I think there’s a limit on how fast the tools compile stuff, but in general yes, it makes a huge difference.

Don’t vvis/vrad have GPU compute support now? I seem to recall that being added a few years ago. I’ve been out of the Hammer game for a while though.

I don’t believe that exists.

To the OP: If you want to decrease compile time, one thing that really helps is raising the lightmap scale on everything and only lowering it where it's actually needed. For example, when I map, I set everything to a lightmap scale of 128 and only decrease it on surfaces that need more detailed shadows. It can cut compile time from 10 minutes to 30 seconds in some cases.
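For a rough sense of why that helps, here's a back-of-the-envelope sketch in Python; the face sizes are made up and it assumes Hammer's default lightmap scale of 16, so treat the numbers as illustrative only:

```python
# Rough estimate of luxel counts at different lightmap scales.
# Lightmap scale is "world units per luxel", so the luxel count for a
# face scales with face_area / scale^2. The face sizes below are made up.

faces = [
    (512, 256),    # hypothetical wall, in Hammer units
    (1024, 1024),  # hypothetical floor
    (256, 128),    # hypothetical detail face
]

def total_luxels(scale):
    """Approximate number of luxels VRAD would have to light for these faces."""
    return sum((w / scale) * (h / scale) for w, h in faces)

for scale in (16, 32, 128):
    print(f"lightmap scale {scale:>3}: ~{total_luxels(scale):,.0f} luxels")

# Going from the default scale of 16 up to 128 means (128/16)^2 = 64x fewer
# luxels, which is why the lighting pass speeds up so dramatically.
```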

Thanks for all the advice. I’m not having many issues with long lighting compile times, but vvis seems to be the one that takes forever. I haven’t delved much into area portals, so that may explain the longer times.

FYI, the standard compile tools don't freeze up your PC on Windows 8.

Honestly, areaportals probably wouldn't help compile time THAT much. How much do you know about optimization? This explains most of what you'll need to know about vis: http://www.optimization.interlopers.net/index.php?chapter=notices

There is no limit. Both VVIS and VRAD are threaded and will work faster on faster hardware. If you have a server with 48 CPU cores (and they exist), compiles will be ridiculously quick.
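For what it's worth, you can control how many threads the tools use with the -threads switch when you launch them. Here's a minimal batch-style sketch in Python; the paths are placeholders and the thread count is just an example, so adjust both for your own setup:

```python
import subprocess

# Placeholder paths -- point these at your own tool and game directories.
VVIS = r"C:\sourcesdk\bin\orangebox\bin\vvis.exe"
VRAD = r"C:\sourcesdk\bin\orangebox\bin\vrad.exe"
GAME = r"C:\half-life 2\hl2"
BSP  = r"C:\maps\mymap"   # the tools take the map path without the .bsp extension

THREADS = 8  # cap or raise this to match your CPU

# vvis and vrad both accept -threads; everything else here is stock usage.
subprocess.run([VVIS, "-threads", str(THREADS), "-game", GAME, BSP], check=True)
subprocess.run([VRAD, "-threads", str(THREADS), "-final", "-game", GAME, BSP], check=True)
```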

VVIS/VRAD have had VMPI support on and off since some time in 2003 (the HL2 leak had VMPI-enabled compile tools). VRAD's VMPI on the Orange Box compile tools is very broken, but VVIS still works with some initial setup.

They have never been able to use OpenCL for GPGPU acceleration.

Areaportals are a client-side optimization; they don't affect compile times at all, because the compile tools still have to work out which portals can see which other portals. However, doing things to reduce visleaf cutting and using strategically placed func_viscluster entities will result in much faster compiles.

Anything with 16+ cores causes a compile error, even if you limit the compiler to 16 or fewer cores.
There is a limit to performance, though; in my experience it stops being worth throwing better hardware at the problem once you reach 8 cores.

Give this guide on optimization a read: http://www.optimization.interlopers.net/

If it's hanging on vvis, that normally means you need to make more (and better) use of func_detail.

I've had up to 36 cores/threads working on a single compile; mind you, not all on the same machine.

Each of the blue dots is a single core/thread (i.e. a quad-core would spawn 4 of them).