https://files.facepunch.com/forum/upload/108652/eda8521e-7476-4e7e-8302-c3c7cd76f2ab/60rZB3.png
Accidental Windows XP simulator.
Soo which WAYWO thread are we actually using now? This one or that other one?
Finally got persistent object-spawning to work in Battlefield 3. The objects are loaded at runtime and synced between players, even if a client joins after the object is spawned.
https://cdn.discordapp.com/attachments/319452109995114497/403660630181281792/Captura_de_pantalla_60.png
Can you walk around inside of those?
I'm currently working on a computer vision project to park cars more accurately using tags. I'm using OpenCV
https://i.imgur.com/ECu9bvv.gifv
Trying to bake shadows for a single light source in a huge Unity scene without having the editor keel over and throw up all over itself turned out to be rather hard. After painfully exporting all my geometry (average of 5mil triangles) and baking shadow maps slowly in 3DS Max I figured, why not just write my own shadow mapper, right in the editor? And so I did.
https://files.facepunch.com/forum/upload/132339/90bee4ad-6b4a-447c-86c5-cfaa301bd52c/image.png
Takes a while to generate all lighting points if you want decent resolution, but it is way, way faster than what Unity does, and it sure as hell does not involve tons of manual work exporting geometry and setting up renders. Just press a button and go do something else for a couple of minutes.
Yeah! The collision and everything is synced between clients and the server.
//Programming in C#
if (receiver == PeerType.Client)
    OnServerMessageReceived?.Invoke(receiver, message);
else if (receiver == PeerType.Server)
    OnClientMessageReceived?.Invoke(receiver, message);
else
    throw new ArgumentOutOfRangeException(nameof(receiver), "Invalid NetPeer class.");
//PROGRAMMING WITH CSHARP!@#!@
var result = new Dictionary<PeerType, IncomingMessageDelegate>
{
    [PeerType.Client] = OnServerMessageReceived,
    [PeerType.Server] = OnClientMessageReceived
}.TryGetValue(receiver, out var action)
    ? action
    : (IncomingMessageDelegate)((a, b) => throw new ArgumentOutOfRangeException(nameof(receiver), "Invalid NetPeer class."));
result.Invoke(receiver, message);
I guess we're using this WAYWO then; here's what I'm working on.
https://cf-e2.streamablevideo.com/video/mp4/15qsw.mp4?token=1516384432-YSYdvdhPAyclkSoZhyAkLweZwN9AFDbUxNg634fzLBk%3D
The most frustrating part about OpenGL is having to determine whether you're currently reading a tutorial for the core profile or for something deprecated. I've been trying to figure out the difference between defining the projection-view-model matrix yourself vs. using glMatrixMode for over an hour now, just to learn that glMatrixMode is the old way to do things.
The reverse can suck too.
At one point I learned that a particular piece of functionality I was using (bindless textures) was too new and didn't exist on one of the two computers I was testing on.
It was something silly like the card supporting OpenGL 4.2 but only GLSL 4.10, or something like that.
Yeah, the worst is when you read about some functionality like the scissor test and think "oh, that must be deprecated, it looks so much like glVertex and the fixed pipeline," but nope, it's not.
Here is an extremely forward looking, opinionated, and barebones overview of necessary vs unnecessary functions: OpenGL Good Parts?
It's not necessarily true that everything deemed "bad" is actually useless; most newer users, or those who do not need extreme granularity, will find things like glDrawElements useful. However, when the advice is to use another function instead, such as glDrawElementsInstancedBaseVertexBaseInstance in this case (what a mouthful), it's because that function encapsulates all the functionality glDrawElements has and more. There are a lot of redundant OpenGL functions, so it's a good idea to take a look and see if something more general is available.
I found docs.gl to be a great reference when I was doing some GL 3.2 core stuff a few years ago.
For example, you can type in glMatrixMode and notice that it got removed in 4.0, which points to it having been deprecated and then removed from the spec.
Also of course it's a good general reference for all the gl functions.
jesus have mercy
Whoa, it kinda works!
https://files.facepunch.com/forum/upload/106992/faff530d-353f-46b2-b40b-e99c28a319ba/Screenshot_2018-01-21_23-57-56.png
Unfortunately (or fortunately) this is the longest named function. What a happy coincidence.
Also does anyone else have spellcheck issues? It goes red but then undoes itself after a bit of typing.
https://www.youtube.com/watch?v=gKh0AKjbmGI
Hnnng, almost! Something is wrong in my lookAt matrix though... maybe someone can help? Why do I have to transpose orientMat for it to kinda work in the first place at all?
def lookAt(self, eye: np.array, target: np.array, up: np.array = np.array([0, 1, 0], np.float32)) -> None:
    forward = target - eye
    forward /= np.linalg.norm(forward)
    up /= np.linalg.norm(up)
    side = np.cross(forward, up)
    up = np.cross(side, forward)
    orientMat = np.transpose(np.matrix([[side[0], up[0], -forward[0], 0],
                                        [side[1], up[1], -forward[1], 0],
                                        [side[2], up[2], -forward[2], 0],
                                        [0, 0, 0, 1]], np.float32))
    translMat = np.matrix([[1, 0, 0, -eye[0]],
                           [0, 1, 0, -eye[1]],
                           [0, 0, 1, -eye[2]],
                           [0, 0, 0, 1]], np.float32)
    self.viewMatrix = orientMat * translMat
Calling it like this, so it should go in a circle around the model:
t = glfw.get_time() * 1
self.camera.lookAt(Vec3(math.sin(t)*3, 0.5, math.cos(t))*3, Vec3(0, 0.2, 0))
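The transpose is needed because that matrix literal lays the basis vectors out as columns (each row of the literal holds the x components of side, up, and -forward), while a view matrix wants them as rows. A sketch that builds the rows directly, assuming plain numpy (the name look_at and the free-function signature are my own, not the code above):

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a right-handed view matrix with the basis vectors as rows."""
    eye = np.asarray(eye, np.float32)
    forward = np.asarray(target, np.float32) - eye
    forward /= np.linalg.norm(forward)
    side = np.cross(forward, np.asarray(up, np.float32))
    side /= np.linalg.norm(side)  # cross of two unit vectors is only unit if they're orthogonal
    up = np.cross(side, forward)  # already unit: side and forward are orthonormal
    orient = np.array([[ side[0],     side[1],     side[2],    0],
                       [ up[0],       up[1],       up[2],      0],
                       [-forward[0], -forward[1], -forward[2], 0],
                       [ 0,           0,           0,          1]], np.float32)
    transl = np.array([[1, 0, 0, -eye[0]],
                       [0, 1, 0, -eye[1]],
                       [0, 0, 1, -eye[2]],
                       [0, 0, 0, 1]], np.float32)
    return orient @ transl
```

Side notes: the default `up=np.array([0, 1, 0], np.float32)` in the original is evaluated once and then mutated in place by `up /= np.linalg.norm(up)`, a classic Python pitfall (harmless here only because [0, 1, 0] is already unit length). And the call site multiplies by 3 twice, `Vec3(math.sin(t)*3, 0.5, math.cos(t))*3`, so the orbit is an ellipse with radius 9 on X and 3 on Z rather than a circle.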
Oh good, graphics stuff, so I'll be on topic! I've been working on the system my renderer uses to simplify pipeline creation. I've vaguely mentioned it before, but Vulkan requires you to declare a bunch of stuff up-front: like OpenGL, this includes vertex attribute and binding data. That's what I hadn't addressed thus far; all I'd tackled was the layout of uniforms and descriptors so that I could create a "VkPipelineLayout" object, among others. This last step gets me 99% of the way to creating an entire Vulkan graphics pipeline object by reading one JSON file.

The project itself can be found here: a few days ago I also made it compile to a DLL and use PImpl, so there's no more linking the 6+ libraries it depends on into client projects, and updating ShaderTools no longer requires recompiling client applications. I used to hate PImpl, but now I'm really liking how much it helps manage dependencies and keep the interface between clients and the library clean.
Anywho, parsing the vertex data is fairly simple actually. Getting a single vertex attribute's information looks like this:
VertexAttributeInfo attr_info;
attr_info.Name = cmplr.get_name(attr.id);                                    // variable name in the shader
attr_info.Location = cmplr.get_decoration(attr.id, spv::DecorationLocation); // layout(location = N)
attr_info.Binding = cmplr.get_decoration(attr.id, spv::DecorationBinding);
attr_info.Offset = running_offset;
attr_info.Type = cmplr.get_type(attr.type_id);
running_offset += attr_info.Type.vecsize * (attr_info.Type.width / 8);       // width is in bits, offsets in bytes
Unfortunately, not all of this works ideally. The location of an attribute is what you specify in the shader - the binding should be what VBO it's bound to, but I don't think that data is in SPIR-V at all (as far as I can tell). Querying it always returns 0 for now, and it looks like it'll have to be specified by the client (which is fine). "get_type" is something I didn't realize existed until I struggled for a while - it returns a "SPIRType" structure that contains just about everything we could need, e.g. something like this:
struct SPIRType {
    data_type_enum basetype; // i.e. "Float", "Int", etc.
    uint32_t width = 0;      // width of basetype in bits, e.g. 32 for "Float"
    uint32_t columns = 1;    // defaults to 1; not usually relevant for attributes
    uint32_t vecsize = 1;    // number of components, e.g. 3 for a vec3
};
The "vecsize", "basetype", and "width" fields give me all the information I need, such as computing the offset of an attribute like I do in the loop above. They also give me everything I need to pick the VkFormat enum value for building a VkVertexInputAttributeDescription struct for a single attribute, though the code to do that really isn't pretty.
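To make that concrete, here's a toy Python sketch of the (basetype, width, vecsize) → format lookup plus the running-offset bookkeeping. The format strings and the table itself are stand-ins I made up for illustration; real code would return actual VkFormat enum values and cover many more cases:

```python
# Toy mapping from (basetype, width_in_bits, vecsize) to a VkFormat-style name.
# Only 32-bit float/int cases are listed here; a real table needs far more.
FORMATS = {
    ("Float", 32, 1): "VK_FORMAT_R32_SFLOAT",
    ("Float", 32, 2): "VK_FORMAT_R32G32_SFLOAT",
    ("Float", 32, 3): "VK_FORMAT_R32G32B32_SFLOAT",
    ("Float", 32, 4): "VK_FORMAT_R32G32B32A32_SFLOAT",
    ("Int",   32, 1): "VK_FORMAT_R32_SINT",
}

def attribute_layout(attrs):
    """attrs: list of (basetype, width_in_bits, vecsize) tuples, in declaration order.
    Returns a list of (format_name, byte_offset) pairs for each attribute."""
    layout, offset = [], 0
    for basetype, width, vecsize in attrs:
        layout.append((FORMATS[(basetype, width, vecsize)], offset))
        offset += vecsize * (width // 8)  # width is in bits; offsets are in bytes
    return layout
```

For example, a vec3 position followed by a vec2 UV gives offsets 0 and 12.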
The plan now is to allow for the specification of "feature renderers", objects dedicated purely to rendering one type of object, entirely through JSON if I can. I'm also working on a generative shader system, so that the shaders specified in a feature renderer's JSON file are in an intermediate format. From there, I'll have to tweak things based on the type of renderpass+subpasses setup being used (i.e. making sure there are no descriptor set binding collisions). The end goal is to make this absurdly data-driven, with very little state tracking and almost nothing compiled-in for setup or logic. Which is quite a contrast to what I've had up to this point: each object type being rendered was nearly identical, and only the boilerplate code for descriptors, vertex attributes, and buffers changed.
Here's an example of the output JSON from ShaderTools thus far, in case anyone's curious. It still needs some work, like labeling the descriptor and attribute categories to make importing this JSON simpler, but it's looking pretty good so far.
https://facepunch.com/fp/emotes_std/wow.gif
Always remember to line wrap at 80 characters
I won't do what you tell me
https://files.facepunch.com/forum/upload/113105/7419e354-0f22-4842-a81b-7df937b8a6ba/image.png
LWJGL is fun, gonna work on implementing FBOs next so I can do deferred lighting.
https://www.youtube.com/watch?v=5Om1poM895c
I've added the model matrix, so now I can move the models around! I think I have to pass the model matrix to the shader separately from the MVP matrix as well, otherwise the shader won't be able to calculate the pixel world positions required for deferred shading.
Don't output world position for deferred shading; just send linear depth and calculate position from that. It'll save you a big chunk of video memory, and you can use an R32F render target for depth.
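For anyone curious what "calculate position from that" looks like, here's the reconstruction math sketched in numpy, assuming a symmetric perspective frustum and linear view-space depth (this is the general idea, not anyone's actual shader; in GLSL you'd do the same per-pixel):

```python
import numpy as np

def reconstruct_view_pos(ndc_xy, linear_depth, fov_y, aspect):
    """Rebuild the view-space position of a pixel from its linear depth.

    ndc_xy: (x, y) in [-1, 1]; linear_depth: positive distance along -Z.
    Assumes a symmetric perspective frustum with vertical FOV fov_y (radians).
    """
    tan_half = np.tan(fov_y * 0.5)
    # Scale the per-pixel view ray by the stored depth.
    x = ndc_xy[0] * tan_half * aspect * linear_depth
    y = ndc_xy[1] * tan_half * linear_depth
    return np.array([x, y, -linear_depth], np.float32)
```

Round-tripping a view-space point through projection and back recovers it exactly, which is why a single R32F depth channel is enough.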
Don't forget to adjust the normals to world-space as well, which will require the model matrix separately. Hopefully that's not what you meant and/or are already doing. That looks something like this in the vertex shader:
mat3 normalMatrix = transpose(inverse(model_matrix));
outNormal = normalMatrix * inNormal;
You can also perform the same transformation on the tangent vector.
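A quick numpy check of why it has to be the inverse-transpose: under non-uniform scale, transforming the normal by the model matrix itself breaks perpendicularity to the surface, while the inverse-transpose preserves it (standalone sketch, values made up):

```python
import numpy as np

model = np.diag([2.0, 1.0, 1.0])     # non-uniform scale on X
tangent = np.array([1.0, 1.0, 0.0])  # direction lying in the surface
normal = np.array([1.0, -1.0, 0.0])  # perpendicular to the tangent

t_world = model @ tangent                 # tangents transform by the model matrix directly
normal_matrix = np.linalg.inv(model).T    # same as transpose(inverse(model_matrix)) in GLSL

# Naively transformed normal is no longer perpendicular to the surface...
assert abs(np.dot(model @ normal, t_world)) > 1e-6
# ...but the inverse-transpose keeps the dot product at zero.
assert abs(np.dot(normal_matrix @ normal, t_world)) < 1e-9
```

For a pure rotation (or uniform scale) the two give the same direction, which is why the bug often goes unnoticed until something gets squashed.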
Homebrew N64 Rom
[IMG]http://www.kyle93.co.uk/i/e7860.png[/IMG]
What have I done... It's using 3GB of RAM, too, because I've yet to implement referenced resources.
https://files.facepunch.com/forum/upload/113105/a60506c2-c616-489a-b56b-22d3bcfc75d3/image.png
Before referenced resources, I'd make sure you're using instanced drawing. You can have a secondary instance buffer that stores the color of individual teapots.
As mentioned in WDYNHW, I don't use OpenGL anymore, so I can't offer much more help than linking the wiki page on instanced drawing: Vertex Rendering. This is still good practice though, as it reduces your drawcalls from N (where N is the number of objects) to 1. Reducing drawcalls is good on its own, and it can sometimes let the hardware optimize things further too. Also, if you're using GLFW, maybe enable some anti-aliasing ;p (that can be done by calling "glfwWindowHint(GLFW_SAMPLES, sample_count)" - 2 to 4 is a reasonable starting value)
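To make the per-instance color idea concrete, here's a sketch of the buffer-layout side, assuming numpy. The GL calls are only named in comments since they need a live context, and the function and parameter names here are my own:

```python
import numpy as np

def build_instance_buffer(offsets, colors):
    """Interleave a per-teapot offset (vec3) and color (vec3) into one array.

    Upload the result to a VBO, then for the two instance attributes call
    glVertexAttribDivisor(location, 1) so they advance once per instance,
    and draw all teapots with a single glDrawElementsInstanced call.
    """
    offsets = np.asarray(offsets, np.float32)
    colors = np.asarray(colors, np.float32)
    assert offsets.shape == colors.shape  # one color per teapot
    return np.hstack([offsets, colors]).astype(np.float32)
```

Each row is one instance: 6 floats, so a 24-byte stride with the color at byte offset 12.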
https://files.facepunch.com/forum/upload/133270/0058c772-36e5-4e37-92d6-1b03c141fff2/image.png
I did an OpenGL!
Oh we doing graphics, nice.
Playing with some rendering stuff again, too, recently had my first attempt at depth of field using bokeh blur:
https://files.facepunch.com/forum/upload/132774/e700ffcf-52ec-490e-a6d3-599e320740a3/ss+(2018-01-24+at+08.04.57).png
Also I'm totally ready for VR:
https://files.facepunch.com/forum/upload/132774/2bea21d2-8ae3-4336-b4a0-465f01afa75a/2018-01-21_14-09-04.mp4
I made a new portfolio! Check it out
It had to be done, as the old one was getting really outdated and I'd like to keep finding freelance work.