Whoops, I missed what you wrote. There is no glTextureImage3D, only glTextureSubImage3D and glTextureStorage3D. Only immutable textures with DSA.
Oh okay that makes a bit more sense
I did a bit more research and found that the core DSA functionality is a little bit weird compared to the extension
For instance, if vertex array objects are generated with glGenVertexArrays, then glEnableVertexArrayAttrib doesn't work, but glEnableVertexArrayAttribEXT does.
glEnableVertexArrayAttrib only works if the VAO is generated with glCreateVertexArrays.
For the rest of the DSA functions for vertex arrays and vertex buffers, it doesn't matter which creation function was used.
Here check this out, there is a (somewhat lackluster) write up about how to use the new DSA stuff with regards to the functions they replace: https://github.com/Fennec-kun/Guide-to-Modern-OpenGL-Functions
Hey, I don't mean to sound super dumb, but I've been trying to fiddle around with getting into programming and wanted to know where I should start. It seems like such a hefty hobby to take on, but I've been growing an interest in it for quite a while.
That source was very very helpful, thank you for posting it!
I got taught the basics of programming at school and kept experimenting from there. For me at least getting taught the basics was enough to get into it - there are probably plenty of books or online tutorials that can do that (sorry - I've personally got no recommendations). Once you've learnt the basics start working on something to teach yourself more.
Is HTML good for making games?
I'm having a little bit of trouble writing to a particular region of mapped GPU memory
I have a uniform buffer which stores an array of structs. I create it like so:
glCreateBuffers(1, &ubo);
glNamedBufferStorage(ubo, sizeof(BufferStruct) * 256, nullptr, GL_DYNAMIC_STORAGE_BIT | GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
bufferPointer = glMapNamedBufferRange(ubo, 0, sizeof(BufferStruct) * 256, GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
I created it to be the size of the struct * 256, allowing for up to 256 elements in the array
I then attempt to map each element of the array to the buffer like so:
Buffer_Struct **buffers = reinterpret_cast<Buffer_Struct**>(bufferPointer);
buffers[ INDEX ] = &data;
but inspecting the buffer in Nsight, I see it's just garbage
Am I casting this wrong?
Heyo.
I got pulled into a machine learning hackathon tomorrow. I know very little about it. Has anyone got any crash course resources that aren't the Google one, or other good resources to get through in a day?
https://i.imgur.com/uBq94wA.png
Hey,
I'm currently working on a small game and I'm trying to find a way to calculate a point that is furthest (or as far as possible) away from specific objects. Here's an example:
https://i.imgur.com/wPqyWXw.png
So the black square is the area in which the point has to be, the black circles are different objects, and the green spot would be an optimal point in this case, because it's almost as far away as it can get from the other objects. But how would I go about doing this programmatically? I imagine I could loop over the whole area and calculate the distance for each point, but I imagine this would get quite slow and be very inefficient. Any other ideas?
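As a baseline, the loop described above can be written directly: sample the area on a grid and keep the sample whose nearest object is furthest away. It's O(samples × objects), which is fine at coarse resolutions (names here are hypothetical):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Point { double x, y; };

// Return the grid sample in [0,w] x [0,h] whose squared distance to the
// nearest object is largest (the "maximin" point). `step` sets resolution.
Point furthestPoint(double w, double h, double step,
                    const std::vector<Point>& objects) {
    Point best{0.0, 0.0};
    double bestDist = -1.0;
    for (double y = 0.0; y <= h; y += step) {
        for (double x = 0.0; x <= w; x += step) {
            double nearest = 1e300;
            for (const Point& o : objects) {
                double dx = x - o.x, dy = y - o.y;
                nearest = std::min(nearest, dx * dx + dy * dy);
            }
            if (nearest > bestDist) {
                bestDist = nearest;
                best = {x, y};
            }
        }
    }
    return best;
}
```

For example, with a single object at the origin in a 10x10 area this picks the opposite corner. Squared distances are compared so no sqrt is needed inside the loop.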
Wouldn't the best point be in the middle of the right side, or the bottom left?
Yeah you're probably right, but in my case the bottom left corner would actually be as good as where the green dot is. I just need a point that has a certain distance to the closest object.
Thanks man. I decided the best course of action was to just use Azure / Google Cloud to do most of the heavy lifting as it just boils down to hooking up an API or two at that point.
Can you use it in a sentence?
Is it normal to have no ideas for programming when you're still a beginner? I've been trying to learn programming on and off for a few years now and I always stop because I don't know what I want to do with the skill. I stop seeing the point in learning when I have no clue on why I'm learning in the first place. People tend to recommend writing something that solves a problem you already have but there's nothing I can think of that I feel needs to be solved. I'm still very interested in learning it though as I've always found it to be extremely interesting, I just don't know how I'd use the skill myself. Like I already asked, is this a normal phase to go through in the beginning?
I've been teaching myself programming for a couple of years now and I still run out of ideas - the best things I ever found to work on have been similar to the often repeated "solve a problem", but I think something just as good is to make something actually useful to you or your friends/people you know - I made a little thing to help disable mouse smoothing in Fortnite and then set the file to read-only, before they made that a default setting; I made a Discord bot that helped gather people for Destiny 2 (lol) raids, etc.
Just little stuff like that can almost take you out of the mindset of being down about trying to scrape together some idea, and you just focus on making the thing. I know it can be hard too, but sometimes those ideas just pop into your head. If you really can't find anything, though, I found the first year of Advent of Code to have some really fun and interesting stuff to try and solve.
Finding stuff to make is hard, when I was getting started I mostly just did small random projects without any large goal in mind, I think it worked pretty well because I never got fatigued from any one project, and I got to try many different types of problems.
Examples of stuff I did:
Made small game engines/2D rendering experiments which taught me graphics, handling I/O, optimization.
Implemented an L-system parser which taught me parsing, text processing, more graphics.
Much more random stuff.
So basically find some small idea that interests you and just go at it for as long as it interests you; show off in WAYWO, archive, repeat.
Yeah, that page is indispensable. @Karmah if you're still stuck lmk.
@HolySnickerPuffs I don't think that LearnCPP site or the Bjarne Stroustrup books are that good. C++ Primer is by far the best C++ beginner's book I've ever read and I cannot recommend it enough. It starts right out with C++11, and that's a good thing.
If you see "new" or "delete" in a C++ tutorial, you're almost certainly in the wrong place. Same with C-style "[ ]" arrays, when std::array exists.
I think I have it figured out for the most part
The only issue I have at this point is optimizing persistently mapped buffers.
I'm getting visual artifacting when moving my camera (its buffer uses shader storage) with one of these persistently mapped buffers, despite using sync fences and having the coherent map bit set.
I'm using a wrapper class I built, and this problem doesn't seem to manifest elsewhere despite being used in quite a few places.
I'm not sure why this happens, but the problem goes away if I tell it to use bufferSubData instead of using the pointer to memory.
I've also tried triple buffering it and binding a buffer range, but the problem doesn't go away and it runs slower.
Reading more, it seems barriers in OpenGL aren't really used for syncing host writing to shader-visible memory, but shaders writing to host-visible memory. So this might not be relevant for you. Reading more into the GL docs, I think you should stop using the persistent bit:
Data written to a coherent store will always become visible to the server after an unspecified period of time.
So there's no guarantee when the data will make it to the server? I'd start using glFlushMappedNamedBufferRange and creating the buffer with the appropriate flags. A flush tells the server/device that it needs to update its version of the data - which, based on info from glMemoryBarrier, could be important, as different invocations may each cache data in different ways, and they're seemingly not required to be kept coherent? Not sure. OpenGL is confusing as fuck to me anymore. Tangled web of documentation and legacy code :V
It is best practice, btw, to use glBufferSubData for updating: this doesn't require recreating the entire data store like glBufferData does, and this may be part of the problem. This will require a wholesale transfer to the device of data likely, and the flushing of the whole range too, I bet. It's going to be more performant regardless to use glBufferSubData, though. What type of buffer are your buffers? fwiw, storage buffers are going to be really slow in general. Uniform blocks are much better, usually.
So in short, I'd stop using fences/synchronization primitives, and instead call glFlushMappedNamedBufferRange after using glBufferSubData to update the range you want to be visible in the shader.
Ah okay.
I used to use a lot of buffers, but I'm trying to cut it down a bit.
For instance, I currently have a single UBO per light, but I'm switching over to a single UBO for all lights of a common type, held in a single array.
And my cameras currently have a single SSBO each, but maybe I'll put them in a single array too.
Switching over like this adds a lot of complexity as each object will require some mediator to register/unregister.
Here's a link to my buffer-wrapper implementation if you're curious. It should be well documented.
I only map once at initialization. I've written 3 different write functions, the first just checking against the fence then memcpy'ing. The second uses subdata, and the third is my attempt at triple buffering, but it has worse perf and doesn't solve the issue.
https://github.com/Yattabyte/Delta/blob/Active/src/Utilities/GL_MappedBuffer.h
In any case, the second method (subdata) isn't terrible and works in the meantime.
For efficiency I would start dividing the area into parts.
Like, dividing it into 4 parts and checking how many objects are within each part already allows you to throw away the 3/4 of the area that you don't need to check.
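That pruning idea can also run coarse-to-fine: score a coarse grid, then shrink the search window around the winning sample and rescan, instead of testing every point at full resolution. A sketch (names hypothetical), scoring a point by its squared distance to the nearest object:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Pt { double x, y; };

// Squared distance from (x,y) to the nearest object.
double nearestDistSq(double x, double y, const std::vector<Pt>& objs) {
    double best = 1e300;
    for (const Pt& o : objs) {
        double dx = x - o.x, dy = y - o.y;
        best = std::min(best, dx * dx + dy * dy);
    }
    return best;
}

// Coarse-to-fine maximin search over [0,w] x [0,h]: scan an (n+1)^2 grid in
// the current window, recenter on the best sample, shrink, repeat.
Pt refineFurthest(double w, double h, const std::vector<Pt>& objs,
                  int n = 8, int passes = 5) {
    double cx = w / 2, cy = h / 2, hw = w / 2, hh = h / 2;
    Pt best{cx, cy};
    for (int pass = 0; pass < passes; ++pass) {
        double bestScore = -1.0;
        for (int i = 0; i <= n; ++i) {
            for (int j = 0; j <= n; ++j) {
                double x = cx - hw + 2.0 * hw * i / n;
                double y = cy - hh + 2.0 * hh * j / n;
                if (x < 0 || x > w || y < 0 || y > h) continue; // stay in area
                double s = nearestDistSq(x, y, objs);
                if (s > bestScore) { bestScore = s; best = {x, y}; }
            }
        }
        cx = best.x; cy = best.y;
        hw = 2.0 * hw / n; hh = 2.0 * hh / n; // keep one cell around winner
    }
    return best;
}
```

Cost is passes × (n+1)² × objects instead of one huge grid; it can latch onto a local optimum, so a denser first pass helps if the objects are clustered.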
You're almost certainly going to have worse performance with that third method because of the binding call, as those are always going to be rather bad for performance.
I'm still not sure you should be using fences for this, or that you'll need them. Indeed, some Stack Overflow posts say you only need explicit synchronization if you're using persistently-mapped memory. If nothing else, you should probably use one fence per frame when doing the triple buffering setup. So you'll need to find the index of the current swapchain image you're rendering to (probably 0-2), but I don't know how you do that in OpenGL.
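For what it's worth, OpenGL has no queryable swapchain index; with a triple-buffered persistent mapping you rotate through three regions using your own frame counter and fence each region as you submit it. The index/offset bookkeeping alone, with no GL calls (names hypothetical), looks like:

```cpp
#include <cassert>
#include <cstddef>

// Number of frames allowed in flight; the buffer holds one region per frame.
constexpr int kFramesInFlight = 3;

// Which of the three regions frame N should write into (0-2).
inline int frameIndex(std::size_t frame) {
    return static_cast<int>(frame % kFramesInFlight);
}

// Byte offset of that region inside the persistently mapped buffer,
// where regionSize is the per-frame slice of the total allocation.
inline std::size_t frameOffset(std::size_t frame, std::size_t regionSize) {
    return frameIndex(frame) * regionSize;
}
```

Per frame: wait on the fence previously placed for frameIndex(frame), write at mapped pointer + frameOffset(frame, regionSize), bind that range, place a new fence, then increment the counter.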
I'd just stick with buffer sub data. What happens if you don't use the fences and use write_immediate?
It works without fences, I was just trying to be ultra safe to protect against edge cases.
I try to use unique locks sparingly, I use them in separate threaded sections where I'm writing to memory that could be accessed in the main thread.
In a nutshell, it's just during asset (file) loading time, because I've opted to allow that to be multithreaded.
I'm happy with the functionality of my asset loading system, but I'm going to redo it one more time because I've tacked on a bunch of stuff to it since its inception (and its headers look quite unruly)
Okay, I gotcha. Yeah, with OpenGL fences seem mostly to work in the device->host direction, rather than guarding from host->device. I'd still try the manual flushing, myself - but you can still keep things persistently mapped.
If you're not going to defer locking, or don't need to unlock then re-lock the mutex, you might consider using lock_guard. There's not going to be a performance difference, but lock_guard can more explicitly state what you're doing since it locks on construction and unlocks on destruction. This is definitely a style thing, though. Not terribly important at all.
I can't really contribute on asset systems, as I prefer to use external libraries for that. tinyobjloader's threaded experimental version + mango have been my go-to's for asset loading stuff. I'll probably eventually expand on their system with my own, though, as I should be pooling resources and mesh data in a more ideal fashion for sure
I'm looking for a book on algorithms and data structures. Preferably a book which would provide an example for each problem and would focus more on the programming side rather than the maths of each algo. So far I've looked through:
Cracking the Coding Interview - I like how each problem has more than one solution, and the explanations are nice. However, it's too focused on the interview part and seems to be all over the place.
Introduction to Algorithms - has lots of algorithms, though a bit too complex and academic for me.
Could anyone suggest a nice book for me?
To be honest, most algo books worth their salt that I've seen shy very far away from the programming side. That's because programming is an implementation detail. It sounds weird, but it's the truth. Maybe research papers on actual implementations of an algorithm you find interesting will yield better results? They usually look more qualitatively at runtime and at the things that matter (cache coherence, locality, misses).
I may be entirely wrong and talking out of my depth here; this is just based on my university foray into algorithms. I've read parts of both those books and I agree that CTCI is more of an interview prep guide and Intro is more of a reference and rigorous material. However, as I stated, most books I looked at (involving online algorithms, graphics, quantum programming) do not have implementation details, but pseudocode in very high-level languages.
I'm taking a data structures and algorithms course at my local university right now
The textbook we are using is titled "Data Structures & Algorithms in Java"
I think it explains the topics pretty well, and it ofc provides examples of everything.
You can almost certainly find a version of this book on the internet
First few chapters seem a bit weak, though the following ones have some nice material. I'll definitely check it out.