net.WriteTable has problems sending data in particular forms

When I send a table to a client I’m getting “Failed to Read type” errors with bogus types like 2 (LightUserData), 33 (Video) and a couple of others I can’t remember off the top of my head. The tables always consist of strings, and not once does the error say the type is a string.

So far I’ve tested the following conditions, and they usually yield the net.ReadType error:

-If the keys of the table are SteamIDs
-If the written table contains a nested table with a SteamID entry.
-If the written table contains a nested table with non-English characters ( Russian text is especially aggressive )

You would think that the SteamIDs would be causing the error, but when I replace them with their SteamID64 equivalents, I still get the net.ReadType errors.

My initial thought was that these tables are too big and are getting cut off along the way, but splitting the main table into smaller chunks can still randomly produce the error ( I haven’t gotten to do more testing on the conditions within the chunks )

These tables can contain anywhere between 1 and 1000 entries, and I’ve had these net errors appear even from a single entry with the above conditions.

I should also point out this happens both under high server load ( tested with 63 players ) and with a single lonely player.

The table can also be sent at any time and still produce the error. Most of the time these tables are sent when the client has already loaded.

Has anyone come across this issue and know how to get around it? Or is there some better way to prevent corruption? Or hell, just have any idea why it happens? This has me completely stumped.

Yeah, I’ve gotten the same issues with net.WriteTable.

As an alternative, I just use GLON, encode the item as a string, and send them using net.WriteString.

(yes I know GLON is outdated, please don’t judge me)

Manually read and write the table, don’t use net.WriteTable. It’s much faster anyway.

I had no idea it would be faster. Does the same hold for larger tables? It feels like iterating through every entry to write the table might overflow the client.

It’s generally bad practice. It’s kind of convenient, but not performant.

The problem is that WriteTable also sends over the data types so the client can read it properly. You shouldn’t need that since you know what kind of data to expect.

net.Start( "TransferTables" )
      net.WriteUInt( table.Count( YourTable ), 16 ) --#YourTable won’t count string keys, so use table.Count
      for k, v in pairs( YourTable ) do
            net.WriteString( k )
            net.WriteString( v.StringData ) --Run for all string entries
            net.WriteInt( v.IntData, 32 ) --Run for all integer entries; net.WriteInt requires a bit count
      end
net.Send( ply )
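For reference, a matching receive side might look something like this, assuming the count is written as a 16-bit unsigned int and the integers as 32-bit (adjust the bit counts to whatever you actually write):

```lua
net.Receive( "TransferTables", function()
      local tbl = {}
      for i = 1, net.ReadUInt( 16 ) do
            local k = net.ReadString()
            tbl[ k ] = {
                  StringData = net.ReadString(), -- must be read in the exact order it was written
                  IntData = net.ReadInt( 32 )
            }
      end
end )
```

The read calls have to mirror the write calls exactly, in both order and bit width, or you’ll desync the message buffer.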

I just imagine it being a heavy burden on a client

The question you need to ask yourself is: Do I need to transfer that data? Try to put stuff in client/shared files if you can. Only transfer “dynamic” data (groups/teams, entity data, etc.) and keep static stuff (shop data like prices, items, etc.) in files.

Try to avoid strings if possible. For example: If you want to display an info/tooltip/error message, set up a table (in a file) which gives each message an index. Now if you want to display the message on the client, just send over the message id.
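A minimal sketch of that message-id idea; the table, message name, and ids here are all hypothetical examples:

```lua
-- shared file, loaded on both server and client
MESSAGES = {
      [1] = "Not enough money!",
      [2] = "Item purchased.",
}

-- server: send only the index, never the string
net.Start( "ShowMessage" ) -- hypothetical message name
      net.WriteUInt( 1, 8 ) -- 8 bits is plenty for up to 255 messages
net.Send( ply )

-- client: look the string up locally
net.Receive( "ShowMessage", function()
      local id = net.ReadUInt( 8 )
      chat.AddText( MESSAGES[ id ] or "Unknown message" )
end )
```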

All the data being transferred is dynamic information queried from an SQL database.

The information being sent is on the demand of the user rather than automated by the server.

I’ve tried working this problem a lot of different ways, and the only solution I could see that would reduce corruption errors and still get all the information would be to host the information on an external page and open it in an HTML panel. But this is limited if you want to DISPLAY the information on a screen rather than have the client do something with it. Although I can’t think of any possible reason why the client would need a large table to play with.

Right now the theoretical aspect of this is how to transfer large tables without corruption using just Lua, or figuring out what kind of characters trigger the system to send the wrong type along with WriteTable.

If you really need to send tables, use compressed JSON.

Better ideas welcome.
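A minimal sketch of the compressed-JSON approach using the stock GMod utilities (util.TableToJSON, util.Compress, net.WriteData); “SendBigTable” is a hypothetical message name:

```lua
-- server: serialize, compress, send as raw bytes
local json = util.TableToJSON( YourTable ) -- table -> JSON string
local data = util.Compress( json )         -- LZMA-compress the string
net.Start( "SendBigTable" )                -- hypothetical message name
      net.WriteUInt( #data, 32 )           -- send the byte length first
      net.WriteData( data, #data )
net.Send( ply )

-- client: read, decompress, decode
net.Receive( "SendBigTable", function()
      local len = net.ReadUInt( 32 )
      local tbl = util.JSONToTable( util.Decompress( net.ReadData( len ) ) )
end )
```

Note that JSON loses some Lua information (entities, mixed key types), so this only suits plain string/number data like the tables described above.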

Strings seem to be the worst things to network, and compressing an entire JSON table to a string seems like a baaaad idea. Compression increases the likelihood of corruption too.

I did a bit of ghetto benchmarking before and found net.Write/ReadTable and using compressed JSON take around the same time to relay.

I doubt it was very accurate but I’d be surprised to see the result of real benchmarking.

Properly send the data from the table you need?

Something like this, maybe even chunk it into multiple messages if needed.

net.Start("MyTable")
net.WriteUInt(#tbl, 10)
for k, v in ipairs(tbl) do
      net.WriteString(v.StringData)
      net.WriteUInt(v.IntData, 32)
end
net.Send(ply)

net.Receive("MyTable", function()
      local tbl = {}
      for i = 1, net.ReadUInt(10) do
            tbl[i] = {
                  StringData = net.ReadString(),
                  IntData = net.ReadUInt(32)
            }
      end
end)
I like to choose JSON over net.Write/ReadTable because of the “Couldn’t read type” errors. JSON seems to work absolutely fine when not sending entities.


Why use JSON? Compress the data itself and you won’t have any JSON encoding/decoding overhead.

I usually network tables with vON: serialize the table to a string, then util.Compress, then net.WriteData.

Is there a reason to network the table strings directly instead of compressing them first? I found that the bit length is much smaller when I compress first rather than networking them directly.

Maybe my table structure is just really bad.

I avoid sending compound types across the network because I don’t trust it to arrive in exactly the same format as it was sent. Not that it’s buggy or anything, but you have more control when you’re working with primitive data types when using the net library.

This method hasn’t produced a read type error, although with larger tables ( around 1000 entries ) with 7 values each, there was some jitter on the user’s end when they received it all at once. Splitting it into 10 chunks of 100 immediately fixed that.

It looks like the net library needs a slight delay before the next message when chunking, especially when you have a lot of chunks, but there’s basically no discontinuity. 0.05 seconds seems to work great.
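A sketch of that chunked send with the 0.05-second gap, using timer.Simple; the chunk size, message name, and field names are assumptions based on the examples above:

```lua
local CHUNK = 100 -- entries per message

local function SendChunked( ply, tbl )
      local total = #tbl
      for c = 0, math.ceil( total / CHUNK ) - 1 do
            -- stagger each chunk 0.05s apart to avoid client-side jitter
            timer.Simple( c * 0.05, function()
                  if not IsValid( ply ) then return end -- player may have left mid-send
                  local first = c * CHUNK + 1
                  local last = math.min( first + CHUNK - 1, total )
                  net.Start( "TableChunk" ) -- hypothetical message name
                        net.WriteUInt( last - first + 1, 10 )
                        for i = first, last do
                              net.WriteString( tbl[ i ].StringData )
                              net.WriteUInt( tbl[ i ].IntData, 32 )
                        end
                  net.Send( ply )
            end )
      end
end
```

The receiver would accumulate chunks into one table; you may also want a final “done” flag or chunk index so the client knows when the transfer is complete.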