Retoast from WDYNHW in the programmers forum (because nobody answered me sadly):
I can think of too many problems with something like this. It’s too expensive. Instead, you should make it so that the connection is not lost in the first place.
- You’d have to run each query twice: once on the global database and once on the local database, so that when the global database fails, the local database is already in sync.
- You’d have to implement a query queue, so that while the global database is down, the queries that should have run are saved and re-run once it’s back online, after which the global database can be used again.
- Doing all this safely, such that all data is consistent, is going to be a pain in the ass.
- Also, if data changes in the global database WHILE a snapshot is being made, it’s gonna be a problem, because the other servers would need to be told about the change to fix it.
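For what it’s worth, the dual-write-plus-queue idea from the list above could be sketched roughly like this. This is a minimal sketch, not a real implementation: Python with SQLite standing in for both databases, and all names (`DualWriter`, `players`, etc.) are made up for illustration.

```python
import sqlite3
from collections import deque

class DualWriter:
    """Sketch of the dual-write idea: every query runs on the local
    replica immediately; queries that can't reach the global DB are
    queued and replayed, in order, once it's back."""

    def __init__(self, global_db, local_db):
        self.global_db = global_db
        self.local_db = local_db
        self.queue = deque()                  # backlog for the global DB

    def execute(self, sql, params=()):
        self.local_db.execute(sql, params)    # local copy always updated
        try:
            self._flush()                     # drain the backlog first
            self.global_db.execute(sql, params)
        except sqlite3.Error:
            self.queue.append((sql, params))  # global is down: remember it

    def _flush(self):
        while self.queue:
            self.global_db.execute(*self.queue[0])
            self.queue.popleft()              # drop only after it ran

# Illustrative usage with two in-memory databases:
global_db = sqlite3.connect(":memory:")
local_db = sqlite3.connect(":memory:")
for db in (global_db, local_db):
    db.execute("CREATE TABLE players (steamid TEXT, credits INT)")

writer = DualWriter(global_db, local_db)
writer.execute("INSERT INTO players VALUES (?, ?)", ("STEAM_0:1:1", 100))
```

Note that even this toy version shows the cost: every write is two executes plus queue bookkeeping, which is the “too expensive” part.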
Can you provide more information on what you’re saving in the database, and what these servers are running?
Things are going to get too complex, and something or other WILL break if you implement something like this. A simple approach won’t cut it.
IMO, the most robust way of going around this:
An intermediary “sync-database” application that runs alongside the database server.
Connections should not be made to the database directly, but to the intermediary application instead. The application should keep a complete transaction log. When a client connects, it should send the credentials of its own local database. The application would then block the query queue, make the snapshot, restart the queue, send the snapshot, and finally sync the client’s local database with the global one using the queries that accumulated in the queue in the meantime.
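That connect sequence (block the queue, snapshot, resume, replay the tail of the log) could look roughly like this. This is an in-memory Python sketch with made-up names and no real networking, credentials, or SQL; it only shows the ordering that makes the snapshot consistent.

```python
import copy
import threading
from collections import deque

class SyncGateway:
    """Sketch of the intermediary: all writes go through one choke
    point, so a joining client can get a consistent snapshot plus
    every change logged after the snapshot point."""

    def __init__(self):
        self.state = {}               # stands in for the global database
        self.log = deque()            # the transaction log
        self.lock = threading.Lock()  # blocks the write queue briefly

    def apply(self, key, value):
        with self.lock:               # writes wait while a snapshot is taken
            self.state[key] = value
            self.log.append((key, value))

    def snapshot(self):
        """What a connecting client gets: a copy of the data plus a log
        position, so later changes can be replayed against the copy."""
        with self.lock:
            return copy.deepcopy(self.state), len(self.log)

    def changes_since(self, pos):
        return list(self.log)[pos:]   # the queries run after the snapshot

gateway = SyncGateway()
gateway.apply("STEAM_0:1:1", {"credits": 100})
snap, pos = gateway.snapshot()                  # client joins here
gateway.apply("STEAM_0:1:1", {"credits": 150})  # change made after the snapshot
```

The point of the lock is exactly the “block the queue” step: a write that arrives mid-snapshot waits, so the snapshot plus `changes_since(pos)` together always equal the current state.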
OH WAIT, I just realized you have “servers”, not just a “server”, which means there needs to be two-way, three-way or even more synchronization depending on how many servers you have. It’s too hectic. Don’t bother.
You’ll need to anticipate every synchronization problem that can occur and avoid all of them. Everything might crumble, and before you know it all your servers are in a crashed state. That’s the problem with something like this: if one component doesn’t work, the others won’t either.
Well, the connection is only lost quite rarely as it is (a few times a month at the absolute most); the problem is that when it finally is lost, nothing works and I get a thousand people PM’ing me on Steam about it.
I’m aware, I had already planned on doing that. I don’t know about the actual performance cost of this, I don’t think it could be too expensive? I’ve never really done any performance tests with it.
I guess I could have a “cache” file on every server that accumulates SQL queries in case the connection is lost, and then runs them whenever it can (in the right order, of course). A problem with this, however: if another server (one that is still connected) changes the same data as an offline server (offline as in not connected to the database), the offline server would eventually overwrite that data, and I’m not sure how I’d get around that without including a timestamp with every single data change.
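That cache-file-plus-timestamp idea could be sketched like this. It’s a rough sketch only, with SQLite as a stand-in; the file name, table, and columns are invented for illustration, and a real version would cache arbitrary queries rather than one kind of update.

```python
import json
import sqlite3
import time

JOURNAL = "pending_queries.jsonl"   # illustrative cache-file name

def queue_change(steamid, credits):
    """While offline: append the change and the moment it happened."""
    with open(JOURNAL, "a") as f:
        f.write(json.dumps({"t": time.time(), "steamid": steamid,
                            "credits": credits}) + "\n")

def replay(conn):
    """Once reconnected: re-run cached changes in order. The timestamp in
    the WHERE clause is the trick: rows another server touched more
    recently than our cached change are left alone (last write wins)."""
    with open(JOURNAL) as f:
        for line in f:
            e = json.loads(line)
            conn.execute(
                "UPDATE players SET credits = ?, updated_at = ? "
                "WHERE steamid = ? AND updated_at < ?",
                (e["credits"], e["t"], e["steamid"], e["t"]))
    open(JOURNAL, "w").close()       # clear the cache after replaying

# Illustrative run: row 'b' was changed by a connected server "later"
# (updated_at far in the future), so the cached write must NOT win.
open(JOURNAL, "w").close()           # start from an empty cache
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE players (steamid TEXT PRIMARY KEY,"
             " credits INT, updated_at REAL)")
conn.execute("INSERT INTO players VALUES ('a', 50, 0)")
conn.execute("INSERT INTO players VALUES ('b', 50, 9e18)")
queue_change("a", 100)
queue_change("b", 100)
replay(conn)
```

The per-row `updated_at` column is exactly the “timestamp with every single data change” mentioned above; without it the replay has no way to tell a stale offline write from a fresh one.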
It doesn’t need to be 100% consistent all the time though, this is a back-up method that just needs to retain emergency usability.
I don’t see how that’d affect it, since it’s queue-based? If one server is making a snapshot and another server changes some data, that data wouldn’t actually be changed until the first server has finished reading?
And to the last question: the system I use right now stores a bunch of player data (last known name, last known IP address, SteamID, time played, when they last joined, whether a player is muted/gagged and when to un-mute/un-gag them, inventory (cosmetic items), their credits, warnings, infractions, bans).
As well as groups/ranks and a few other things.
Then it’s simple: check if the connection is lost, and remake it. If everything breaks when a connection is lost, restart everything. If restarting everything is a problem, implement a query queue and don’t run queries while a connection is unavailable.
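That reconnect-and-queue advice could look roughly like this minimal sketch. `ReconnectingDB` and `connect_fn` are made-up names; `connect_fn` stands in for whatever actually creates the real database connection, and SQLite is only used here to demo it.

```python
from collections import deque

class ReconnectingDB:
    """Sketch: detect a lost connection, try to remake it, and queue
    queries instead of running them while the database is down."""

    def __init__(self, connect_fn):
        self.connect_fn = connect_fn
        self.conn = None
        self.pending = deque()

    def ensure_connected(self):
        if self.conn is None:
            try:
                self.conn = self.connect_fn()   # remake the connection
            except OSError:
                return False                    # still down, stay queued
        return True

    def execute(self, sql, params=()):
        if not self.ensure_connected():
            self.pending.append((sql, params))  # don't run, just remember
            return
        while self.pending:                     # run the backlog in order
            self.conn.execute(*self.pending.popleft())
        self.conn.execute(sql, params)

# Illustrative usage: the first connection attempt fails, so the first
# query is queued; the second attempt succeeds and replays the backlog.
import sqlite3

_attempts = []
def _connect():
    _attempts.append(1)
    if len(_attempts) < 2:
        raise OSError("database still down")
    c = sqlite3.connect(":memory:")
    c.execute("CREATE TABLE t (x)")
    return c

db = ReconnectingDB(_connect)
db.execute("INSERT INTO t VALUES (1)")   # connection down: queued
db.execute("INSERT INTO t VALUES (2)")   # reconnects, replays the backlog
```

This is basically the third option from the reply above: queries are never dropped, they just wait for a working connection.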