[IDEA] Preventing x86 out-of-memory crashes by using VirtualAllocEx->WriteProcessMemory on an external process?

For months, players have been reporting this error: “Lua Panic: Not enough memory”

Even users with 16 GB of RAM hit this error; I bet it comes from the 15 GB of textures/geometry from car/weapon/nude mods or whatever.

The idea is to keep all the engine- and Lua-related data in the game process
and send all the textures/models to an external process (or as many as we need) when they are created, via VirtualAllocEx->WriteProcessMemory.
Then, when they become visible on screen (or a bit before :v: ), read them back into the game process and create them as D3D objects in video memory.
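The round-trip described above would look roughly like this. This is only a sketch of the Win32 calls involved, not how ENB actually does it: it assumes a “memory host” child process was already spawned and that we hold a handle to it with the `PROCESS_VM_OPERATION`/`PROCESS_VM_WRITE`/`PROCESS_VM_READ` rights; the function names are made up.

```cpp
// Windows-only sketch: park asset data in another process's address space
// and fetch it back on demand. Error handling kept minimal on purpose.
#include <windows.h>

// Copy a blob (e.g. raw texture data) into the host process.
// Returns the address of the blob *inside the host's address space*.
void* ParkInHost(HANDLE host, const void* data, SIZE_T size) {
    void* remote = VirtualAllocEx(host, nullptr, size,
                                  MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!remote) return nullptr;
    SIZE_T written = 0;
    if (!WriteProcessMemory(host, remote, data, size, &written) || written != size) {
        VirtualFreeEx(host, remote, 0, MEM_RELEASE);
        return nullptr;
    }
    return remote;
}

// Read the blob back into our own process right before the asset is needed,
// so it can be uploaded as a D3D resource.
bool FetchFromHost(HANDLE host, void* remote, void* dest, SIZE_T size) {
    SIZE_T bytesRead = 0;
    return ReadProcessMemory(host, remote, dest, size, &bytesRead)
        && bytesRead == size;
}
```

Note that `remote` is only meaningful as an address in the host process; the game process can never dereference it directly, which is part of why every access costs a full cross-process copy.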
I know it’s not going to be as simple as it sounds; a lot of D3D hooking magic is required to not fuck up the game.

That’s what ENB does on Skyrim: the game process never uses more than ~400 MB while the memory host process sits at ~2 GB.
But ENB’s DLL is protected with Yoda Protector, so we can’t really see how it works. (RIP IDA.)

That idea is horrible. First of all, you would hit an insane performance bottleneck, because WriteProcessMemory is a rather heavy operation; you don’t want something like that running every update.

The next thing is: how are you going to get past 32-bit addressing? Since you are in a 32-bit process, you can only call the 32-bit export of WriteProcessMemory, and a 32-bit target process is itself stuck with 4 GB of address space anyway.

There is actually a way to do this, but kid, you must be crazy to think about something like this.

Well, it does work in Skyrim, and Skyrim is a 32-bit game :v:

I mean, come on, Robotboy needs to make an x64 build or do something about this. (Or the 3 GB build flag at least.)

Edit: I just want to fix this fucking game :xfiles:

One radically simpler way to fix this bug (I assume it happens because the game loads all models at once) is to simply NOT load all models at once, but only when the game actually wants to spawn/use a model.

Yes, it would hang the user for a bit until the model is loaded, but that can be fixed by loading the real model on another thread (while showing a temporary placeholder model) and swapping it in when loading completes.

The thing is, async model loading basically already exists if multicore rendering is enabled; however, that is exactly why it loves to crash, because Lua was not designed to do such things.

The 3 GB boot flag is not considered on x64 operating systems.
An x86 application built with the large-address-aware flag gets 4 GB of user address space when it runs on an x64 system; on x86 systems it gets only 2 GB by default.

[editline]8th November 2015[/editline]

Lua WAS designed to do such things. lua_lock and lua_unlock are supposed to be overridden with your own implementation; they default to no-ops in stock Lua because, by default, it’s expected to run on a single thread. I’m not sure about LuaJIT though.

The point of multithreading is not for threads to cockblock each other with locks.

But crashing is?

[editline]8th November 2015[/editline]

When you access one resource from multiple threads, you need locks unless the operations are atomic.

  1. I never said it’s good, so why even bring it up?
  2. LuaJIT does not have lua_lock/lua_unlock like ordinary Lua does, and the JIT compiler behind it is not made for such operations; a unique Lua state is required per thread.