Server rates....

Assuming that I have a million internets and a bazillion CPUs… :stuck_out_tongue:

Does anyone think that I may be able to tweak these rates to get more out of my GMod server? (I have a dedicated server running just GMod: 2.4 GHz, 8 cores, 1 Gb/s upload.)

I run a sandbox server, and with around 20 people on it I see the lerp start turning yellow and choke hit about 20-30.
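For anyone wanting to watch those numbers while tuning, the lerp and choke figures mentioned above can be displayed in-game with the standard Source net graph; a minimal sketch (the comments are just my description of what it shows):

net_graph 3 // overlays fps, ping, in/out bandwidth, choke and lerp on the HUD
net_graph 0 // turns the overlay off again

The lerp readout turning yellow is the engine's hint that interpolation time is being stretched relative to the update rate actually being received.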

//Networking
net_compresspackets "1"
net_compresspacket_minsize "64"
net_compressvoice "1"
net_maxcleartime "8"
net_maxfilesize "2"
net_maxfragments "1260"
net_maxroutable "1260"
net_queued_packet_thread "1"
net_splitpacket_maxrate "1048576"
net_splitrate "3500"
rate "50000"
sv_maxcmdrate "100"
sv_mincmdrate "20"
sv_client_cmdrate_difference "80"
sv_maxupdaterate "100"
sv_minupdaterate "20"
sv_maxrate "50000"
sv_minrate "40000"

I think the choke is on the client. Type rate 60000 in the console and see what it does; it 'defaults' to 30k for some reason.

Sandbox servers suck up a mega-heap of RAM at times. How much do you have?
Also, GMod servers only run on one core. No matter how many cores you have, it won't really help that server be any faster. Just a tip so you know.

I have 16 GB of DDR3 9-9-9-24 running at 2100 MHz.

I have an i7 with load balancing, so it runs on all cores.

I definitely see a difference with 20 players on the server on 1 core vs all of them.

Doesn't matter how many cores you have: srcds (the server software) will only run on one core. There is no way around this. Load balancing won't help you. Talking like that makes you look stupid.

Sorry, not to rain on your parade, but there is a significant difference with load balancing enabled.

But I wasn't really asking for your opinion on load balancing, as I can tell you cannot read.

I was looking for assistance with rate adjustment, not your dickheadish attitude.

Archemyde was very polite with his response without being a prick, which was much appreciated. Unfortunately for you, I cannot say the same. Your useless response has provided zero information to me, as I have been seeing increased performance with my multi-core load-balanced setup. I can see the load get switched from day to day onto cores that are less loaded.

Try lowering the tickrate.

I am currently at a tickrate of 100. What should I lower it to?

I have done a lot of reading about tickrate. Isn't higher better, to allow more updates?

Higher is better up to a point, but it mainly depends on the end user's connection: forcing too many updates on a client with a bad connection raises choke, and more lag prediction is then done.

A tickrate of 66 or 100 is best, I believe; experiment with it at 66 and see what you get. There is no real right or wrong way to configure these, just experimentation to get the best out of them.

Try changing:
sv_mincmdrate "20"
sv_minupdaterate "20"
to 66.

I think you have tickrate confused with something else.

Tickrate is NOT better when higher. The tickrate defines how many physics calculations are done per second, so if he's having issues with lots of lerp then his best option would be to lower the tickrate, not raise it: more physics calculations per second == more CPU usage == more lag.

[editline]15th August 2012[/editline]

Oh, and IIRC srcds will mess up if it's set to anything higher than 66; it's recommended to use 16/33/66.

[editline]15th August 2012[/editline]

Higher is better for client performance, but it also means more CPU usage.

Maybe a stupid question; I think the answer is already no…

Is there any possible way to change the tickrate on the fly with a hack or memory editor on srcds, so that I could test it in real time just like any server cvar?

You can set it on the command line only: -tickrate 33.
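For reference, a typical srcds launch line with that flag might look like this (the game directory, map, and player count here are placeholders, not details from this thread):

srcds.exe -console -game garrysmod -tickrate 33 +maxplayers 20 +map gm_construct

Since -tickrate is only read once at startup, the server has to be restarted for a change to take effect, which is why it can't be toggled like a cvar.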

I will definitely test it. Thank you.

Nope. One thread is utilised for networking, one for the rest of the game. As such, anything beyond dual-core won't be properly utilised.

This doesn’t even happen in the beta.
Pirate.