Redirect / Fake Server Report Megathread - ALL FAKE SERVERS POSTS GO HERE
286 replies
This [screenshot] vs [screenshot] comparing it with other servers:
80 ms difference between the server browser and in game; even at half the server tick-rate the gap shouldn't be this wide.
Latency factors into a server's ranking in the server list, so answering from a cache at a POP that isn't in the same location as the game server seems a bit questionable.
Oh I agree it isn't healthy in the long run for the server browser unless it's modified. It's worse in the UK:
(note I didn't sort that server browser, that's the default sort order with this system)
welcome to the party
The fact that you're getting a 60 ms jump from LAX to DAL concerns me more than your assumption that the in-game latency calculation is valid (which covers status, the tab menu, console, etc.). At most you should be getting 155-160 ms to the endpoint, and that's during high load.
Not to be rude, but why would a hosting provider ask a game developer for permission before improving their network reliability and DDoS mitigation capacity? This is a system that can be used for any game, not just Garry's Mod. One point of this system is to be able to properly filter large region-specific attacks with ease. Most sophisticated DDoS attacks come from Eastern Europe, for example.
What I would like to look into is why exactly your latency spikes that badly. I would rather look into fixing that, as it should not be like that. I'm surprised that your latency to the POP is that low, though. Why can't AUS have routing that good?
I was playing devil's advocate; I'm sure various server owners (specifically the ones who would have access to this type of setup...) would argue the server browser ping should represent the ping to the nearest POP. Otherwise it could be justified as a DDoS mitigation preventative measure, or via a straw-man argument about how broken the server browser is.
Ultimately, though, what I think doesn't matter. I think I could make a lot of money from this being allowed - but it would also require a substantial initial investment, which I'm just as happy not making if it's not allowed.
Why would a provider route a significant amount of data through the POPs to be filtered at the endpoint, causing potential congestion, when it could be handled directly at the POP as it should be? At that point Anycast might as well be scrapped, which is silly. The network would also be left open to specific upstream attacks.
There is no reason to waste a large amount of money on bandwidth, or to set up a dedicated dark-fiber link between each POP specifically to route "dirty" traffic - as you are probably already aware from OVH, who drop it at each major POP. The reason the network capacity can be so large is that handling traffic at the edge POP puts far less load on the network as a whole.
That's not entirely true. You can still drop a load of traffic at the network edge without caching there. Most attacks aren't A2S attacks - and the ones that are rarely come close to the volume of a normal attack. So if volume is the concern, you should be focusing on UDP as a whole - not a niche attack within UDP that rarely hits the volume of other, more popular methods.
Caching it at the network edge does still make sense, though. If forced to pick one or the other, I would personally opt to realistically inflate the ping to represent what a player might expect in game, as that still keeps the DDoS "absorption" capacity.
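If that re-inflation approach were taken, the edge would hold the cached reply for roughly the POP-to-origin RTT before answering, so the client-observed query latency approximates the real path. A toy sketch under that assumption - names are mine, and a real implementation would live in the POP's packet path, not in Python:

```python
import time

def serve_cached_a2s(cached_reply: bytes, pop_to_origin_rtt_ms: float) -> bytes:
    """Answer an A2S query from the edge cache, but delay the reply by the
    measured POP->origin RTT so the client's observed ping reflects the
    distance to the actual game server, not just to the POP."""
    time.sleep(pop_to_origin_rtt_ms / 1000.0)
    return cached_reply
```

The trade-off raised later in the thread applies here: an artificial hold per packet costs queue/state on the edge, which is exactly why the provider objects to it.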
Nowhere, which is why I asked for clarification. If you don't have it yet - that's fine
I think a number of people here seem to be conflating things. There is nothing wrong with using anycast for routing, loads of hosting providers do it with no ill effect. It absolutely does help with DDoS mitigation and latency to the endpoint.
The problem is that when a client queries a server, the query isn't making it all the way to that endpoint. It's being answered from a POP. This has nothing to do with anycast. It's something else GMC has decided to put in place, and I cannot fathom any reason for it other than artificially lowering ping.
There isn't even an argument to be made for network load, as a query exchange is literally ~230 B of data from what I can see in Wireshark.
In a nutshell: I would like a server to show its actual latency in the server list. I'm baffled that this is some sort of controversial opinion.
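For reference, the query exchange being discussed really is tiny. A minimal sketch of timing a single A2S_INFO round trip - the address is illustrative, and note that modern Source servers may first answer with a challenge packet, which this sketch does not handle:

```python
import socket
import time

# A2S_INFO request per the Source query protocol:
# 4-byte 0xFFFFFFFF header, 'T', then "Source Engine Query\0" - 25 bytes total.
A2S_INFO = b"\xFF\xFF\xFF\xFFTSource Engine Query\x00"

def a2s_ping(addr, timeout=2.0):
    """Return (rtt_ms, reply_len) for one A2S_INFO exchange, or None on timeout.
    With POP-side caching, this RTT measures the distance to the cache,
    not to the game server itself."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        start = time.monotonic()
        s.sendto(A2S_INFO, addr)
        reply, _ = s.recvfrom(1400)
    except socket.timeout:
        return None
    finally:
        s.close()
    return ((time.monotonic() - start) * 1000.0, len(reply))
```

The request is a fixed 25 bytes, so the ~230 B figure quoted for the whole exchange is plausible once a small info reply is included.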
Anyone doing this needs to have their servers blacklisted from within the gmod source code itself
I'm not sure why you quoted this
It was specifically in the context of mirror servers, i.e. servers that the player ends up on despite not being the one they specifically connected to. It holds no value in this argument, and I'm going to interpret it as you fishing for some sort of zinger.
I'm not going to contribute to the anycast ethics discussion. But, I think a lot of people don't really understand the networking differences between browsing servers and being connected to one. While the POPs may cache A2S packets, and while I can completely understand why people believe that creates a dishonest server listing ping, even receiving an uncached A2S reply in 30ms in the server browser absolutely does not mean in-game ping will be reflective of that at all. It's unreasonable to have such an expectation.
Off the top of my head, low tickrate (16 tick = 62.5 ms minimum ping increase) and very high counts of non-static entities and players (larger snapshot payloads increasing choke potential) will immediately ruin any prospect of matching browser and in-game pings.
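The tickrate figure above follows directly from the snapshot interval; a quick sketch of the arithmetic (function name is my own):

```python
def min_tick_latency_ms(tickrate: float) -> float:
    """Worst-case extra latency added by the simulation loop: a command can
    wait up to one full tick interval before it is processed and a snapshot
    goes back out, on top of the raw network RTT."""
    return 1000.0 / tickrate

# A 16-tick server adds up to 62.5 ms; a 66-tick server only ~15 ms.
```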
I don't think anyone is expecting the in-game latency to exactly match what's on the server list. How server owners load down their servers, pile players on, mess with tick, etc. is a whole different discussion. I think we're just expecting the ping reading in the server list to show the actual ping to the server, giving a rough idea of how close a server is to you on the network. Otherwise there's literally no point in having it there; it'll turn into a race to the bottom, with everyone and their grandmother attempting to get the lowest artificial ping.
We can discuss the difference between server-list and in-game ping, and all the various gripes with the server list, all day. At the end of the day, when I see a low ping in the server list I can pretty routinely expect decent performance on that server, whereas a high ping means a pretty shitty, frustrating experience. That's why the metric is there in the first place.
That's exactly what I still don't understand. People just don't get that no matter what they do, the in-game latency will be off by quite a lot regardless of whether they live in the data center or 250 miles away.
If the in-game latency were actually the proper value, or if the game itself were directly dependent on ping, I would completely agree with you. But I can't even join a server 5 miles from me without "having" 150-200+ ping by every detectable in-game measure. The main point of this system is to increase DDoS mitigation capacity and reliability, and people posting images of status or net_graph doesn't help their cause, since those readings are off on every single server.
Luna's picture proved that point:
I hope people don't actually believe that every single person on the server, regardless of how far away they are, has over 200 ping. That in itself is insanity. This is how every single server in Garry's Mod behaves, as the latency reading is always off by hundreds.
I can understand some people's points and would agree with some of them. What I cannot accept, however, is people treating net_graph 4, or in-game latency as a whole, as valid.
People will not understand what sort of attacks come through, especially if they've never run a server in the top 20 and been through it all themselves. People can always assume that X is only Y bytes, or that this attack is "rare" in their experience. They may argue that "you shouldn't prioritize it," but I have to give my clients the DDoS mitigation that they have come to expect from me over the years, and more.
Interesting sleight of hand...
But you do indirectly try to contribute by undermining the core principle of A2S ping reliability, and in turn the ethics at large;
However, this is still a straw man argument.
No one is expecting or saying the in-game latency should match the server-browser latency perfectly. You're moving the goalposts to weaken this argument: from people saying they want an A2S ping to the actual server, not a POP, to the point that the A2S ping to the game server doesn't match the in-game ping perfectly. Everyone knows there can be minor disparities between A2S and in-game ping in a normal use case; it's there to give you a rough idea. What is being asked for is a realistic representation, in the A2S replies, of what the in-game ping will be.
The disparities caused by the normal operation of A2S (and even your worst-case scenario) are not justification for actively reducing the ping to an unrealistic level. That's really the key word here: realistic. I won't see a ~1372% difference between the ping in the server list and in game.
Your reply seems designed to re-position the argument using subtle wordplay and worst-case scenarios: using the fact that A2S ping isn't perfectly accurate as justification for the unrealistic difference in ping caused by the anycast A2S caching. Those are two totally different things that you're trying to conflate. Maybe you aren't doing it intentionally - but I definitely get that vibe.
I see this in a few different lights, let me clarify my stance:
I actually don't see the point in POP-based query caching. We have our own query-caching solution that can run (is running? I don't know for certain) locally on the host machine. It serves its purpose of taking responsibility for A2S responses off SrcDS's shoulders and takes the brunt of A2S flooding. The latter is where I think POP A2S caching can be beneficial: a further-upstream method of filtering out these sorts of attacks.
That being said, the upstream caching, or methods of "fixing" it, are not my argument to have - as Ertug's customers, we reap the benefits, but whether he sets his system up that way is not in my control, nor do I particularly care.
I still can't agree that the ping difference between server browser and in-game is minor, as you say. I'm in central Arkansas, but my ISP's gigabit routing takes my connection to Chicago before hitting a path to dal-gmchosting-ddos-router. Even so, my ping in the server browser is in the mid-30s, yet in game I get the same 150-250 ms variation everyone else gets once connected. This isn't exclusive to SUP, either: when I connect to any highly populated server, it's the same story repeated.
The query cache you used would shave at most ~10 ms off your A2S response time - the point of anycast A2S caching is to make the ping appear lower than it otherwise would be, increasing the server's rating and in turn pulling in players who wouldn't otherwise have joined. As you pretty much admit yourself, the system isn't actually needed to handle attacks.
GMC had upstream filtering long before the anycast routing, which you guys used to use - it ran on a local "router" in front of your endpoint machine. Anycast isn't needed for this at all, unless you're literally unable to get enough bandwidth locally.
I can't even call this manipulative - it's just a lie. This is something you do have control over: the going rate is ~$200-300 per POP a month, on top of the dedicated server. Assuming SUP utilise 3 POPs, you're paying ~$600/mo for it (at a good rate) plus the dedi costs. Moat Gaming, for reference, paid ~$2500 for 3 months of this service.
I never said this; we get DDoSed very regularly. A2S flooding is no exception when there are millions of PPS, heavy enough to congest the line. And I very specifically said that POP-based caching would help against A2S flooding, keeping it away from the host machine in the same way that POPs filter other network attacks in the regions they originate from, to prevent congestion closer to home. That isn't even part of the debate here: POPs have a well-defined and accepted set of benefits with regard to network attacks.
Like I said, the rest doesn't matter to me one way or another. How the ping issue is addressed, and all that.
I appreciate your "insight," but all you're doing at this point is making rash assumptions about our DDoS mitigation, network capacity, and overall network topology. While I'm flattered that you were curious enough to look into things, making bold assumptions when you have no idea what actually goes on behind the scenes is quite silly.
I also enjoy your overzealous generalizations about in-depth DDoS mitigation. My recommendation: go ahead and attempt to do it without anycast, since you clearly know so much about it, and then get back to us when some random attacker takes out your upstream providers and leaves you bone dry.
Apologies - I've corrected my post. Guess that makes two of us.
Yeah, I understand that. But it's no dice when things are intentionally designed to achieve the same results as the mirror servers. Penguin was already planning this while the mirror servers were still going on. And the mirror servers weren't bought to handle attacks; they were bought purely to try to pull players from other regions.
Aren't you supposed to be perma-banned for faking players on the serverlist from an exploit that you bought from ley? :s
Isn't ban evasion permanent ban too? :sss
Also this screenshot makes you look quite hypocritical
https://facepunch.com/member.php?u=734091???
That's not my account buddy.
Great reply.
Oh dear this thread certainly has devolved.
I'll be honest, I went into this swinging with a lot of uninformed assumptions. After giving this whole thread a good read, I can see the reasoning for using A2S caching as a method of DDoS mitigation. That being said, is there no compromise to be made? E.g., can you not do some sort of passive filtering of A2S at the POP - watching for sources that exceed a certain PPS and simply dropping that traffic at the POP, while allowing clean traffic to proceed directly to the endpoint? That, or, as drizzy suggested, artificially re-inflating ping to its original realistic value using the latency between the POP and the endpoint.
As it stands if this system becomes commonplace ping becomes completely useless and clients will be playing guessing games trying to find a server actually close to them.
That's exactly what NFO does. Doing so allows people to easily and cheaply take your server off the server list, stop people from connecting for days on end and various other things depending on the packet. I've been there and honestly I think working on withstanding those types of attacks is what's made GMC such a popular Garry's Mod host. As for adding a send delay I personally don't see how that wouldn't severely limit throughput but I don't know enough about GMCs internal workings to have anything more than an opinion.
The issue with doing it per-PPS is that it's a rate limit, which just will not work. 99% of attacks are completely spoofed; each packet usually has its own separate source IP. There is no proper way to do heuristic-based filtering beyond what our system already does. The days of people sending mass packets from a single IP are long gone.
Even major mitigation providers such as Voxility, OVH, and Corero, who use only heuristic-based filtering, fail immediately when attacked by someone who actually knows what they're doing.
I've seen A2S-specific exploits (not just A2S_PLAYER) down some of our 10 Gbps routers multiple times in the past through sheer traffic volume. The custom system we have in place is designed to handle quite a lot of bandwidth per POP once I get all the network providers together and tweak things further. A delay would destroy that goal, since we would have to artificially hold each packet, which in itself is sketchy on my part.
Thank you both for the comprehensive explanation. I suppose that leaves this whole thing in quite the pickle. Damned if you do, damned if you don't. Here's hoping we find some reasonable compromise at some point. Until then, it appears you'll have people upset about latency on one side and hordes of kids with booters on the other.
Best of luck Ertug and god speed.
Can someone point me in the right direction on this topic?
I currently run a Gmod Day-Z server. Everything was fine until today, when we started seeing a lot of traffic and someone reached out to us saying there are redirect links in the server list. I'm trying to address this before anything stupid happens with other servers over some competition. I don't host redirects; I even confirmed it wasn't a setting by shutting down the VPS, and the links were still live. Can an admin or someone higher up please look into this? We are simply looking to host a game mode.
Server IMS (Ill Minded Servers) Day-Z
Sincerely,
Frost
After looking into those servers, some of the variables being sent to the Valve master server lists differ from actual SRCDS instances. They appear to be spoofing players and flooding the master server lists with "fake servers". Each server is joinable but connects to the same location. The game-port value sent to the Valve master servers isn't even set.
The offending IP:Ports are:
31.16.172.93:27017
31.16.172.93:27018
31.16.172.93:27019
Somebody is obviously putting fake servers onto the Valve master server lists and using forwarding to funnel them all onto one IP:Port. From my understanding, these are attempting to act as "mirror" servers and simply run the connect command (connect 192.252.223.113:27015) after the client connects to one of the "mirror" instances.
This is nothing new, but there really isn't much that can be done here other than perhaps forcing GSLT keys, and even that won't solve the issue. The issue is that the individual made three separate instances of the same server with the player count modified to boost its position in the master server listing, then for whatever reason redirects players onto your main server at 192.252.223.113. There appears to be nothing in between: the client is redirected straight to 192.252.223.113 and is not forwarded through the mirror servers, so there is no possibility of it causing "lua errors" or modifications anywhere - assuming that was even plausible, which, with how Source is, I doubt.
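The missing game-port observation can be checked from the A2S_INFO reply itself: the port only appears in the reply's Extra Data Field when the 0x80 bit is set. A rough parser sketch based on the documented reply layout - the function name is mine, and it assumes a well-formed, unchallenged 'I' reply:

```python
import struct

def parse_info_port(reply: bytes):
    """Extract the game port from an A2S_INFO reply's Extra Data Field,
    or return None if the server didn't set it (EDF 0x80 bit clear).
    Fake/mirror entries often omit this port or mismatch the query port."""
    if reply[:5] != b"\xFF\xFF\xFF\xFFI":
        return None
    i = 6  # skip 4-byte header, 'I', and the protocol byte
    for _ in range(4):                   # name, map, folder, game strings
        i = reply.index(b"\x00", i) + 1
    i += 9                               # appid(2) players maxplayers bots
                                         # type env visibility vac (1 each)
    i = reply.index(b"\x00", i) + 1      # version string
    if i >= len(reply):
        return None                      # no EDF byte at all
    edf = reply[i]
    i += 1
    if not edf & 0x80:
        return None                      # port flag not set
    return struct.unpack_from("<H", reply, i)[0]
```

A listing whose reported EDF port is absent, or differs from the port the query was sent to, is a reasonable red flag for the forwarding setup described above.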
@aStonedPenguin idk if you're an admin or someone higher up, but could you please help me find where I should post this? I'm not looking to get my server banned over some sorry competition...
The problem is that they can always just change the IP address of the redirect server, and the IP they're using looks dynamic from what I can see. Back when the whole fiasco happened, the best they could do was blacklist each server IP, then hostname. There is no proper solution here short of manually blacklisting over and over. I don't even know if anyone monitors this anymore; after the template redesign it's pretty much dead, which I assume was the intended goal.
They are also over a year late if I'm not mistaken. Whoever is doing it should just stop.