[Suggestion] Improving Network Responsiveness

Rust's current network protocol is inferior to that of other real-time games and applications. In this post, I will illustrate how ping variability leads to game performance that is worse than it needs to be.

Rust currently has a rubber-band system for unstable connections. Even with zero packet loss and a strong download speed, variable ping (jitter) is potentially exploitable and degrades the player experience.

To break this down with an example from Counter-Strike: the optimal response to jitter (variable ping) is to keep the client and server state in agreement. In CS:GO, a player with a consistent ping of 300 can play normally, just with a smooth 300 ms delay from the server.
With jitter, a player's ping is not constant; it may fluctuate from 50 to 150, 100 to 300, 75 to 180, etc. In CS:GO, these transitions are hardly noticeable. The delay simply becomes shorter or longer; notably, there is no rubber banding or freezing, because the client never runs ahead of the server. In other words, the client is always up to date with the server.
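One common way engines achieve the behaviour described above is to render the world slightly in the past, behind an interpolation buffer, so jittery snapshot arrivals are smoothed out. Here is a minimal Python sketch of the idea; the class name, the 100 ms delay, and the snapshot format are my own assumptions for illustration, not CS:GO's actual implementation:

```python
from bisect import bisect_left

class InterpolationBuffer:
    """Illustrative client-side buffer: render at (now - delay) by
    interpolating between the two snapshots bracketing that time."""

    def __init__(self, delay=0.1):
        self.delay = delay      # render this far behind "now" (seconds)
        self.snapshots = []     # sorted list of (server_time, position)

    def add_snapshot(self, server_time, position):
        self.snapshots.append((server_time, position))
        self.snapshots.sort()

    def sample(self, now):
        """Interpolated position at render time t = now - delay."""
        t = now - self.delay
        times = [s[0] for s in self.snapshots]
        i = bisect_left(times, t)
        if i == 0:
            return self.snapshots[0][1]      # before first snapshot
        if i == len(self.snapshots):
            return self.snapshots[-1][1]     # past the newest snapshot
        (t0, p0), (t1, p1) = self.snapshots[i - 1], self.snapshots[i]
        alpha = (t - t0) / (t1 - t0)
        return p0 + alpha * (p1 - p0)
```

As long as snapshots arrive within the buffer delay, a late or early packet only shifts which pair of snapshots gets interpolated; the rendered motion stays smooth instead of freezing or snapping.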

In Rust, variable ping leads to rubber banding or freezing because the client and the server fall out of sync. Take a player whose ping was steady at 50: they run in a straight line, open a door, and run through it; then their ping fluctuates and the client desyncs from the server. In Rust, the client can run ahead of the server without being corrected when ping fluctuates. This leads to the following:

Rubber banding

We’ve all had it. Sometimes it is even the server’s fault: the server lags and falls behind, receives an influx of information, and emits updates at an uneven rate, while players keep moving client-side and are then snapped back to where the server thinks they are.
With client-side fluctuations in Rust, players can open doors two or three doors deep without having moved server-side. A player may appear to be standing still, yet due to server desync can keep going, often for 10-20 seconds at a time. The server responds to these “future” actions, only for a delayed server check to rubber-band the player back to where they were 10-20 seconds ago. Simply put, the server needs better detection of when the client has desynced from it.
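The detection being asked for amounts to capping how far the client's predicted timeline may run ahead of the last state the server has confirmed, and correcting immediately rather than after tens of seconds of drift. A hypothetical sketch (the class name and the one-second threshold are assumptions for illustration, not Rust's actual values):

```python
class DesyncGuard:
    """Illustrative server-side check: correct (rubber-band) a client as
    soon as its unconfirmed prediction exceeds a small window, instead of
    letting it drift 10-20 seconds ahead."""

    MAX_AHEAD = 1.0  # seconds of unconfirmed client prediction allowed

    def __init__(self):
        self.last_confirmed = 0.0  # newest client time the server validated

    def confirm(self, client_time):
        """Called when the server validates the client's state up to client_time."""
        self.last_confirmed = max(self.last_confirmed, client_time)

    def needs_correction(self, client_time):
        """True if the client has run too far ahead and must be snapped back now."""
        return client_time - self.last_confirmed > self.MAX_AHEAD
```

With a tight window like this, a jittering client would be pulled back after at most a second of divergence, rather than replaying 10-20 seconds of “future” actions.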


The data buildup seen in freezing can also occur with rubber banding. A player may completely desync from the server due to variable ping (say, jitter of 50 to 500 to 30 to 300, etc.). The server keeps sending information, which builds up for the client to process, causing what I like to call a “time warp”, where everything that happened from the start to the end of the freeze plays out at once. This creates further lag, both in client FPS and in data buildup, which exacerbates the network instability. The cycle can then spiral on a connection, producing either intermittent freezes (5-15 seconds) or long ones (up to a minute) where the client falls completely behind, even on a high-end machine.
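One standard remedy for the “time warp” is for the client to discard stale world updates instead of replaying the entire backlog at once, jumping to the latest state with at most a few transitional snapshots. A minimal sketch of that idea; `max_replay` is an assumed tuning knob, not an actual Rust setting:

```python
from collections import deque

def drain_backlog(queue, max_replay=3):
    """Illustrative backlog drain: keep only the newest few snapshots and
    drop the stale ones, so a recovering client snaps to the present
    instead of replaying the whole freeze at once."""
    dropped = max(0, len(queue) - max_replay)
    for _ in range(dropped):
        queue.popleft()             # discard outdated world states
    return dropped, list(queue)     # replay only what remains
```

Dropping old snapshots costs nothing in a state-synchronised protocol, because each snapshot supersedes the ones before it; only the most recent states matter for catching up.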

The network protocol needs to handle resyncing on otherwise strong but jittery connections. The buildup that arises from desync degrades the gameplay experience, while other real-time applications such as voice, video, and games run smoothly on the same connection.

I understand Rust's increased network load due to map generation, but there is still room for network optimization. That load (especially given that the game operates on 1-50 kb/s) should not be a factor on a connection that can maintain 5+ Mb/s download speed.

Note: this is based on comprehensive testing of my own internet connection over the course of two weeks and many hours of play.

I think you’re forgetting we’re still in early access. Facepunch will definitely optimise the network code in the future, but right now it’s not that great of an issue, and there are more important things to work on.