• What Do You Need Help With? V8
    833 replies, posted
  • I'm working on my SceneKit-esque library now, since my next big task is getting things to render. I'm running into a design dilemma though, and one I want to make sure I'm not approaching in the worst possible way. I have a base triangle mesh class - intended to handle the setup and storage required for rendering a mesh that uses triangles w/ vertices+indices - that lots of my renderable objects inherit from. This is a pretty concrete class, and I feel fine inheriting from that. I also have some renderable objects that don't inherit from this class at all though - like my Billboard class uses the geometry shader, so it doesn't really need to store any vertices and indices. They all, however, have a function with the same signature: [cpp] void Render(const VkCommandBuffer& cmd_buffer, const VkCommandBufferBeginInfo& begin_info, const VkViewport& vp, const VkRect2D& sc); [/cpp] I'm realizing I could really use some way to store the renderable objects so that I can iterate through and render them, so I was wondering if using multiple inheritance for this wouldn't be a terrible idea. In the case of those that already inherit from TriangleMesh, they would now inherit from an interface class like this: [cpp] class Renderable { public: Renderable() = default; virtual ~Renderable() = default; virtual void Render(const VkCommandBuffer& cmd_buffer, const VkCommandBufferBeginInfo& begin_info, const VkViewport& vp, const VkRect2D& sc) = 0; }; [/cpp] This would at least let me store a list of the renderable objects in a scene, but I still don't much like using inheritance - let alone multiple inheritance; it really makes me question what I'm doing (and it makes me feel gross). This would solve a few issues, but it leaves a few more: - What if I needed to sort the order things are rendered in? 
(although, I plan on using a Forward+ implementation I found to do some kind of early culling and sorting) - Could this be done with some kind of delegate function instead, rather than inheriting from an interface class? - What about classes that need things like UBOs and push constants updated frequently, sometimes every frame? This is not at all exposed via the interface. I'm trying to read through all of the books, presentations, and reports I have on how to approach this problem, but I have yet to find anything that really feels like advice or input on this kind of situation or setup.
  • Inheritance isn't always the wrong approach, but you could also make do with delegates. I'd measure first and then decide; are you sure the function call is the code's hotspot?
  • [QUOTE=WTF Nuke;52952962]Inheritance isn't always the wrong approach, but you could also make do with delegates. I'd measure first and then decide; are you sure the function call is the code's hotspot?[/QUOTE] I haven't implemented either of them yet, so I have no idea if this is a hotspot or anything; I'm asking more from a design standpoint. I'm thinking of trying delegates though, maybe even something similar to the type-erasure shenanigans I pulled to make a generalized task queue system.
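For anyone curious what the type-erased delegate route could look like: here's a minimal sketch using std::function, with toy stand-ins for the Vulkan handle types (CmdBuffer, Viewport, and the class bodies here are illustrative, not from the actual codebase). Anything with a matching Render method can go into one list without inheriting from a common base:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Hypothetical stand-ins for the Vulkan handles in the thread; in real code
// these would be VkCommandBuffer, VkViewport, etc. from <vulkan/vulkan.h>.
struct CmdBuffer { std::vector<std::string> recorded; };
struct Viewport {};

// The scene stores type-erased render delegates instead of pointers to a
// Renderable base class.
using RenderFn = std::function<void(CmdBuffer&, const Viewport&)>;

struct TriangleMesh {
    void Render(CmdBuffer& cmd, const Viewport&) { cmd.recorded.push_back("mesh"); }
};
struct Billboard {
    void Render(CmdBuffer& cmd, const Viewport&) { cmd.recorded.push_back("billboard"); }
};

struct Scene {
    std::vector<RenderFn> renderables;

    // Capture by reference: the caller keeps ownership and must outlive the scene.
    template <typename T>
    void Add(T& obj) {
        renderables.emplace_back(
            [&obj](CmdBuffer& c, const Viewport& v) { obj.Render(c, v); });
    }

    void RenderAll(CmdBuffer& cmd, const Viewport& vp) {
        for (auto& fn : renderables) fn(cmd, vp);
    }
};
```

The trade-off versus an interface class is an indirection through std::function per object instead of a vtable call; whether either matters is exactly the "measure first" question above.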
  • [B]fixed - I had two Python versions installed somehow[/B] Hi guys, could anybody help me understand why I'm unable to run my Python script? It works fine when I run it from the command line (python filename.py) and the same if I 'build' from Sublime Text. But if I run it from my Documents folder I get told no module named send2trash. I installed the modules using pip. Python 3, Windows 8.1 [code]#! python3 # organise.py - Organises downloads folder. import shutil, os, send2trash ---stuff--- [/code] [editline]7th December 2017[/editline] Ah, it seems I have two versions of Python installed. If I open IDLE it's version 3.2.2; if I run python from the command prompt it's version 3.6.3
  • [QUOTE=paindoc;52952626]I'm working on my SceneKit-esque library now, since my next big task is getting things to render. I'm running into a design dilemma though, and one I want to make sure I'm not approaching in the worst possible way. I have a base triangle mesh class - intended to handle the setup and storage required for rendering a mesh that uses triangles w/ vertices+indices - that lots of my renderable objects inherit from. This is a pretty concrete class, and I feel fine inheriting from that. I also have some renderable objects that don't inherit from this class at all though - like my Billboard class uses the geometry shader, so it doesn't really need to store any vertices and indices. They all, however, have a function with the same signature: [cpp] void Render(const VkCommandBuffer& cmd_buffer, const VkCommandBufferBeginInfo& begin_info, const VkViewport& vp, const VkRect2D& sc); [/cpp] I'm realizing I could really use some way to store the renderable objects so that I can iterate through and render them, so I was wondering if using multiple inheritance for this wouldn't be a terrible idea. In the case of those that already inherit from TriangleMesh, they would now inherit from an interface class like this: [cpp] class Renderable { public: Renderable() = default; virtual ~Renderable() = default; virtual void Render(const VkCommandBuffer& cmd_buffer, const VkCommandBufferBeginInfo& begin_info, const VkViewport& vp, const VkRect2D& sc) = 0; }; [/cpp] This would at least let me store a list of the renderable objects in a scene, but I still don't much like using inheritance - let alone multiple inheritance; it really makes me question what I'm doing (and it makes me feel gross). This would solve a few issues, but it leaves a few more: - What if I needed to sort the order things are rendered in? 
(although, I plan on using a Forward+ implementation I found to do some kind of early culling and sorting) - Could this be done with some kind of delegate function instead, rather than inheriting from an interface class? - What about classes that need things like UBOs and push constants updated frequently, sometimes every frame? This is not at all exposed via the interface. I'm trying to read through all of the books, presentations, and reports I have on how to approach this problem, but I have yet to find anything that really feels like advice or input on this kind of situation or setup.[/QUOTE] Do you want a user of the library to be able to provide their own types of renderables? I assume so, because if not you could solve most of your issues by just making the renderables discriminated unions and be done with it. I would suggest that instead of interfacing with the renderables, you interface on rendering features. I.e., instead of having each renderable implement an interface of some sort, have an interface for feature renderers. The interface would provide factory functions to allow the user to create the renderables that the specific feature renderer handles, as well as a single entry point that gets called when the global renderer requests rendering work. That method would then go about performing all the steps required to render the type of renderable managed by the corresponding feature renderer. So basically, instead of each renderable implementing an interface and taking care of rendering itself, you have one object per type of renderable that takes care of how that type of renderable is to be drawn. It also takes care of allocation, ideally. You can still have a base class for renderables if you want to give out references or something, but I'd probably go with a templated base class for the feature renderers. There are some nifty tricks you can pull so that the user still only interacts with the main renderer for access to the renderables. 
Feature renderers register to the main renderer, potentially providing information about what passes they'd like to be called in, etc. Alternatively they could just provide a sorting key or whatever and submit their work in a command-based fashion. The main point though is that instead of lots of instances that render themselves, you only have a few objects that get called and that submit batches of renderables. This comes with some nice benefits:
> Storage for renderables can be optimised for visitation by the feature renderer
> The amount of virtual calls is very low compared to calling a virtual function for each instance of renderable
> Sorting is trivial because each feature renderer can easily sort its renderables (same type, same size), and you can sort by type on a higher level as well
> The logic for each type of renderable is still in one place, very concisely - you just happen to handle more of them in one spot, which also happens to be nicely data oriented while you don't lose the benefits of an object-oriented design.
> It slots in with what you have so far - your triangle mesh type can still take care of managing its vertex data and whatnot, although I'd advocate simplifying the renderable types a lot and pushing all that into the feature renderers.
> As a side effect, disabling specific types for debugging purposes or whatever becomes trivial: just set a flag on the corresponding renderer. Try that with ten thousand particle system instances that all need to be disabled. In general this simplifies type-wide configuration at runtime.
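A minimal sketch of the feature-renderer idea described above, with all type and function names illustrative (not from any real library): one object per *type* of renderable, factory functions for creation, and a single per-frame entry point per type.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Stand-in for whatever per-frame state gets passed around; illustrative only.
struct FrameContext { std::vector<std::string> submitted; };

// One object per *type* of renderable, not per instance.
class IFeatureRenderer {
public:
    virtual ~IFeatureRenderer() = default;
    virtual void RenderAll(FrameContext& ctx) = 0;  // single entry point per frame
};

class MeshRenderer : public IFeatureRenderer {
    struct MeshData { std::string name; };
    std::vector<MeshData> meshes_;  // storage laid out for linear visitation
public:
    // Factory function: users create renderables through the feature renderer,
    // which owns the storage, and get an opaque handle back.
    std::size_t CreateMesh(std::string name) {
        meshes_.push_back({std::move(name)});
        return meshes_.size() - 1;
    }
    void RenderAll(FrameContext& ctx) override {
        for (const auto& m : meshes_) ctx.submitted.push_back("mesh:" + m.name);
    }
};

class MainRenderer {
    std::vector<IFeatureRenderer*> features_;  // non-owning; registered externally
public:
    void Register(IFeatureRenderer& fr) { features_.push_back(&fr); }
    void RenderFrame(FrameContext& ctx) {
        for (auto* fr : features_) fr->RenderAll(ctx);  // one virtual call per type
    }
};
```

Note how the virtual dispatch happens once per renderable *type* per frame rather than once per instance, which is the batching benefit described above.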
  • [QUOTE=brianosaur;52952520]A couple things. I'm not super familiar with powershell, so I could be very wrong, but did you look at the documentation for the command? I presume you mean to use the command "New-MailboxExportRequest" and not "new-mailboxexoirtrequest". Assuming everything you told us is correctly typed, then this very well could be a problem. I assume you wont get the error that you specify however. Another thing. There doesn't seem to be an -Identity parameter referenced in the documentation. Do you mean to use the required parameter "Mailbox"? See: [url]https://technet.microsoft.com/en-us/library/ff607299(v=exchg.160).aspx[/url][/QUOTE] I didn't even see that, oops. It's typed correctly in the script, I just made the mistake typing it out here. In my edit I've managed to solve the issue of actually exporting a .PST, however that only works for an individual user at a time, which is a bit of a pain but may have to do. We're currently trying to create a script to export to a .CSV, but with this script which we are using for just getting a list of people: [QUOTE]Get-Mailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox | Where {(Get-MailboxStatistics $_.Identity).LastLogonTime -le (Get-Date).AddDays(-180)} | Sort -Property @{e={( Get-MailboxStatistics $_.Identity).LastLogonTime}} -Descending | Select-Object mailbox,@{n="LastLogonTime";e={(Get-MailboxStatistics $_.Identity).LastLogonTime}}[/QUOTE] It gives us a list, but of all users that have logged in rather than just those within the time that we specify. Google isn't really helping a great deal; however, we have tried this part [QUOTE]-le (Get-Date).AddDays(-180)} [/QUOTE] with -lt (less than), -le (less than or equal) and -gt (greater than), and it gives us identical results each time. Should we be looking at more of an -EndDate and -StartDate sort of command rather than the Get-Date part? I understand you're not familiar with Powershell Brianosaur, but maybe someone else could shed some light too.
  • [QUOTE=JWki;52953855]Do you want a user of the library to be able to provide their own types of renderables? I assume so, because if not you could solve most of your issues with just making the renderables discriminated unions and be done with it. I would suggest instead of interfacing with the renderables, interface on rendering features. I.e, instead of having each renderable implement an interface of some sort, have an interface for feature renderers. The interface would provide factory functions to allow the user to create the renderables that the specific feature renderer handles, as well as a single entry point that gets called when the global renderer requests rendering work. That method would then go about performing all the steps required to render the type of renderable managed by the corresponding feature renderer. So basically, instead of each renderable implementing an interface and taking care of rendering itself, you have one object per type of renderable that takes care of how that type of renderable is to be drawn. It also takes care of allocation, ideally. You can still have a base class for renderables if you want to like give out references or something but I'd probably go with like a templated base class for the feature renderers. There's some nifty tricks you can pull even so that the user still only interacts with the main renderer for access to the renderables. Feature renderers register to the main renderer, potentially providing information about in what pass they'd like to be called, etc. Alternatively they could just provide like a sorting key or whatever and submit their work in a command based fashion. The main point though is that instead of lots of instances that render themselves, you only have a few objects that get called and that submit batches of renderables. 
This comes with some nice benefits:
> Storage for renderables can be optimised for visitation by the feature renderer
> The amount of virtual calls is very low compared to calling a virtual function for each instance of renderable
> sorting is trivial because each feature renderer can easily sort their renderables (same type, same size), and you can sort by type on a higher level as well
> the logic for each type of renderable is still in one place, very concisely - you just happen to handle more of them in one spot which also happens to be nicely data oriented while you don't lose the benefits of an object oriented design.
> it slots in with what you have so far - your triangle mesh type can still take care of managing its vertex data and whatnot, although I'd advocate simplifying the renderable types a lot and pushing all that into the feature renderers.
> as a side effect, disabling specific types for like debugging purposes or whatever becomes trivial, just set a flag on the corresponding renderer. Try that with ten thousand particle system instances that all need to be disabled. In general this simplifies type wide configuration at runtime.[/QUOTE] This is a lot to grasp (at least for me), but I like it. I was thinking of doing something like feature renderers, tbh - I had at least begun thinking that grouping by rendering technique was a good idea. Discriminated unions and things like std::variant provide a ton of interesting ways to set up this system too, and it moves much of the polymorphism into being static (and more strictly defined) compile-time polymorphism, which I favor over virtual shenanigans. You also listed a ton of other advantages, so I'll be giving this a deeper look. It's at least somewhat like what PBRT (which I've been working through) uses for their rendering setup, except they still have tons of virtual functions and inheritance going on. 
Regardless, there's a split between single instances of renderables and the things doing the actual rendering. I already liked that it made setting up things like instanced rendering easier, and I bet it can also be used to make things like indirect drawing easier too. Cheers, because this is a super useful post! [editline]edited[/editline] Hell, PBRT had me already somewhat thinking along these lines. Most of my distinctly renderable objects are in a "geometries" folder, which really belies their purpose. How do you see this integrating with various passes? My idea for "feature renderers" was split among passes, not entirely like what you said. I probably need to sketch and plot more of this out visually to help me get it. Also, where are you, or where did you learn so much about this stuff? You seem to have both more knowledge and more experience in this topic than I do by a large margin, so I wonder if you're in-industry, but you've also mentioned uni courses so it seems you're in school? Makes me a little jealous of your university experiences if that's the case. Unfortunately, I'm the only applications and graphics developer at my workplace full of embedded guys (though that's set to change soon, thank fuck), so getting advice on topics like this is impossible outside of FP or reddit.
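For reference, the discriminated-union alternative mentioned above could look roughly like this in C++17, assuming a closed set of renderable types (the type names and Draw methods are illustrative, not from either codebase):

```cpp
#include <cassert>
#include <string>
#include <variant>
#include <vector>

// Toy renderable types standing in for the real ones.
struct TriangleMesh { std::string Draw() const { return "mesh"; } };
struct Billboard    { std::string Draw() const { return "billboard"; } };

// The closed set of alternatives; adding a type here is a compile-time change.
using Renderable = std::variant<TriangleMesh, Billboard>;

// Dispatch via std::visit - no virtual functions, and the generic lambda
// fails to compile if an alternative lacks a usable Draw().
std::string Render(const Renderable& r) {
    return std::visit([](const auto& obj) { return obj.Draw(); }, r);
}
```

This is the static, compile-time flavor of polymorphism mentioned above; the cost is that library users can't add their own alternatives without editing the variant.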
  • How does floorcasting work? I'm using SDL2, and the tutorials I find are confusing and often poorly worded.
  • Am I correct in my understanding that multiple OpenGL contexts can be linked or share resources to some degree, which should afford me the ability to render a scene in one thread and load textures / models, etc. in another? Are there any restrictions to this? Can I create texture objects / buffers whenever in either thread and have them be accessible in the other? I'm using GLFW for this, and they say that sharing should be possible, though I'm having no success yet. I'm trying to work on a system where one thread (the main thread) is responsible for rendering, a secondary thread is responsible for sending data to the GPU, and then a cluster of worker threads is responsible for file IO. Eg: thread 1 (main): Main rendering context active, Engine loads map -> creates material -> push request to asset manager; threads 2-7: check for active work orders, initialize one (load all data from disk, process as much as possible); thread 8: Secondary context active, asset manager -> finalize work order (generate texture object, bind, upload texture data) [URL="https://www.lucidchart.com/documents/view/786da197-9db5-44e1-9095-244e17dac062"]Here's a small pseudo diagram of what I mean[/URL] My previous system worked in a similar way, though the finalization step happened at the beginning of every frame in the main thread.
  • [QUOTE=Animosus;52953914]I didn't even see that, oops. It's typed correctly in the script, I just made the mistake typing it out here. In my edit I've managed to solve the issue of actually exporting a .PST, however that only works for an individual user at a time, which is a bit of a pain but may have to do. We're currently trying to create a script to export to a .CSV, but with this script which we are using for just getting a list of people: It gives us a list, but of all users that have logged in rather than just those within the time that we specify. Google isn't really helping a great deal; however, we have tried this part with -lt (less than), -le (less than or equal) and -gt (greater than), and it gives us identical results each time. Should we be looking at more of an -EndDate and -StartDate sort of command rather than the Get-Date part? I understand you're not familiar with Powershell Brianosaur, but maybe someone else could shed some light too.[/QUOTE] How are you generating this code? I'm questioning whether you are looking at the right documentation. I don't believe there is a "Where" command like you are using. You probably mean the "Where-Object" command. See: [url]https://technet.microsoft.com/en-us/library/ee177028.aspx[/url]
  • [QUOTE=paindoc;52954720]This is a lot to grasp (at least for me), but I like it. I was thinking of doing something like feature renderers, tbh - I had at least begun thinking that grouping by rendering technique was a good idea. Discriminated unions and things like std::variant provide a ton of interesting ways to setup this system too, and it moves much of the polymorphism into being static (and more strictly defined) compile-time polymorphism, which I favor over virtual shenanigans. You also listed a ton of other advantages, so I'll be giving this a deeper look. Its at least somewhat like what PBRT (which i've been working through) uses for their rendering setup, except they still have tons of virtual functions and inheritance going on. Regardless, there's a split between single instances of renderables and the things doing the actual rendering. I already liked that it made setting up things like instanced rendering easier, and I bet it can also be used to make things like indirect drawing easier too. Cheers, because this is a super useful post! [editline]edited[/editline] hell, PBRT had me already somewhat thinking along these lines. most of my distinctly renderable objects are in a "geometries" folder, which really belies their purpose. How do you see this integrating with various passes? my idea for "feature renderers" was split among passes, not entirely like what you said. I probably need to sketch and plot more of this out visually, to help me get it. Also, where are you or where did you learn so much about this stuff? You seem to have both more knowledge and more experience in this topic than I do by a large margin, so I wonder if you're in-industry, but you've also mentioned uni courses so it seems you're in school? Makes me jealous (a little) of your university experiences if that's the case. 
Unfortunately, I'm the only applications and graphics developer at my workplace full of embedded guys (though that's set to change soon, thank fuck), so getting advice on topics like this is impossible outside of FP or reddit.[/QUOTE] Glad that you find it helpful. WRT passes, there are multiple ways this could be approached. One would be to just have each feature renderer specify a bitmask of passes, so the main renderer could keep, per pass, a list of which renderers to call back. When calling them back it could then specify what pass is currently being rendered. Another approach would be to have the feature renderers specify callbacks for each pass they care about and have those stored instead (this also gets rid of the need to have feature renderers inherit from a base class - or have them be a class at all). It depends on how your passes are defined, really. Also, you'll sometimes want a way to share some data between renderers. A more complex approach would be to have the feature renderers just submit work to different queues in a unified format - if you think about it, all types of renderables come down to a combination of pipeline state and geometry buffers as well as some constant buffers (UBOs), so you could just have a set of queues in the main renderer; each feature renderer adds its renderables to the queues when adding and removing renderables, and when called back they have access to all the queues. It's a complex topic really; the main gist of my original explanation was that you'll want inversion of control - instead of building implicit systems via type hierarchies, make the systems explicit, and manage objects of different types explicitly. This applies to all areas really, not just rendering. Feel free to ping me on the FP programming discord or whatever if you want to discuss stuff like this; it gets a bit tedious in forums. Oh, and as to my background, I am currently doing my masters, yeah. 
We have classes that teach game related stuff, although architecture usually isn't a core concern; they tend to focus on the algorithms and theory (i.e. how do you do rigidbody simulation, how does graphics feature X work, etc.). Most of what I know comes from crawling the web for blog posts, articles etc. though, and also trying out stuff in my free time. I've been through the whole scene graph object type hierarchy bs with virtual DrawMe() methods on everything, for example, so I learned what the problems with that are. That and exchange with people from the industry via twitter or whatever helps to get a perspective. I'm fortunate that I had a few classes that basically were "make a game or tech demo or engine or whatever from scratch", too. Funnily enough the usual outcome wasn't a game but like 60 percent of an engine. If you're interested I can dig up some footage or whatever. [editline]8th December 2017[/editline] Assuming my posts will be merged here [QUOTE=Karmah;52955325]Am I correct in my understanding that multiple OpenGL contexts can be linked or share resources to some degree, which should afford me the ability to render a scene in one thread and load textures / models, etc. in another? Are there any restrictions to this? Can I create texture objects / buffers whenever in either thread and have them be accessible in the other? I'm using GLFW for this, and they say that sharing should be possible, though I'm having no success yet. 
I'm trying to work on a system where one thread (the main thread) is responsible for rendering, a secondary thread is responsible for sending data to the GPU, and then a cluster of worker threads is responsible for file IO. Eg: thread 1 (main): Main rendering context active, Engine loads map -> creates material -> push request to asset manager; threads 2-7: check for active work orders, initialize one (load all data from disk, process as much as possible); thread 8: Secondary context active, asset manager -> finalize work order (generate texture object, bind, upload texture data) [URL="https://www.lucidchart.com/documents/view/786da197-9db5-44e1-9095-244e17dac062"]Here's a small pseudo diagram of what I mean[/URL] My previous system worked in a similar way, though the finalization step happened at the beginning of every frame in the main thread.[/QUOTE] You're right, OpenGL contexts can share resources, but which resources can be shared is a fuzzy question to say the least. There's a list somewhere with what the spec says can be shared, but in practice it varies across drivers (what doesn't, really). So yes, there are restrictions and you should look up what can be shared, but iirc textures and buffers are included in that, and what you're describing is something that is done in practice sometimes. Although this is one of the areas where OpenGL really doesn't shine. WRT how your threads are structured: to actually saturate CPU and GPU, and not starve the CPU for however long it takes to submit and execute GPU commands for each frame, you'll want rendering to not happen in the simulation thread. Usually what used to be done is that you do all your simulation work in one thread, and have another thread that lags behind the sim thread by one frame or so and performs rendering. On top of that you have worker threads that perform task based work. 
Lately the buzz is to have the entire engine run in tasks, so there are no dedicated threads at all anymore, but that's somewhat more involved to set up. You could maybe even give each worker thread a shared context so you can upload resources from there directly.
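Going back to the pass-bitmask registration idea a few posts up, here's a tiny sketch of how a main renderer could filter feature renderers by pass. The enum values and class names are made up for illustration; a real version would store renderer pointers rather than plain ids.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative pass bits; a real engine would define its own pass list.
enum PassBits : std::uint32_t {
    kShadowPass      = 1u << 0,
    kOpaquePass      = 1u << 1,
    kTransparentPass = 1u << 2,
};

struct FeatureEntry {
    std::uint32_t pass_mask;  // which passes this feature renderer participates in
    int id;                   // stand-in for a pointer to the feature renderer
};

class PassScheduler {
    std::vector<FeatureEntry> features_;
public:
    // Each feature renderer registers once with a bitmask of the passes it wants.
    void Register(int id, std::uint32_t pass_mask) {
        features_.push_back({pass_mask, id});
    }
    // Collect which feature renderers should be called back for a given pass.
    std::vector<int> FeaturesForPass(std::uint32_t pass) const {
        std::vector<int> out;
        for (const auto& f : features_)
            if (f.pass_mask & pass) out.push_back(f.id);
        return out;
    }
};
```

The callback-per-pass variant described above would replace the bitmask with a small table of per-pass function objects, trading a branch per renderer for a little more registration bookkeeping.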
  • [QUOTE=brianosaur;52957703]How are you generating this code? I'm questioning whether you are looking at the right documentation. I don't believe there is a "Where" command like you are using. You probably mean the "Where-Object" command. See: [url]https://technet.microsoft.com/en-us/library/ee177028.aspx[/url][/QUOTE] I'm using this: [url]https://www.codetwo.com/admins-blog/list-of-active-mailboxes-powershell/[/url] with a combination of other websites that give information about commands we can use in PowerShell. I may give the Where-Object command you suggested a try and see if that helps out in getting what we want.
  • [QUOTE=Animosus;52958509]I'm using this: [url]https://www.codetwo.com/admins-blog/list-of-active-mailboxes-powershell/[/url] with a combination of other websites that give information about commands we can use in PowerShell. I may give the Where-Object command you suggested a try and see if that helps out in getting what we want.[/QUOTE] Yeah, I don't think the focus was on the Where filter, so he might have just thrown that in. I'd always just go to the official documentation because it might change. It's not bad and seems to be written extremely well
  • [QUOTE=JWki;52958252] You're right, OpenGL contexts can share resources, but which resources can be shared is a fuzzy question to say the least. There's a list somewhere with what the spec says can be shared but in practice it varies across drivers (what doesn't really). So yes there's restrictions and you should look up what can be shared, but iirc textures and buffers is included in that and what you're describing is something that is done in practice sometimes. Although this is one of the areas where OpenGL really doesn't shine. WRT to how your threads are structured, to actually saturate CPU and GPU and not starve the CPU for however long it takes to submit and execute GPU commands for each frame, you'll want rendering to not happen in the simulation thread. Usually what used to be done is that you do all your simulation work in one thread, and have another thread that lags behind the sim thread by one frame or so and performs rendering. On top of that you have worker threads that perform task based work. Lately the buzz is to have the entire engine run in tasks, so there's no dedicated threads at all anymore, but that's somewhat more involved to set up. You could maybe even give each worker thread a shared context so you can upload resources from there directly.[/QUOTE] Would you happen to know if bindless textures work across the contexts? Like can I generate the texture handle and make it resident in 1 thread, and then use it in the other? I don't know where I can find more concrete information on what is shareable or not.
  • [QUOTE=Karmah;52960876]Would you happen to know if bindless textures work across the contexts? Like can I generate the texture handle and make it resident in 1 thread, and then use it in the other? I don't know where I can find more concrete information on what is shareable or not.[/QUOTE] I mean you can try it out and see what happens. I don't know about texture residence, that may be local to a context but I haven't tried that really.
  • [QUOTE=JWki;52961009]I mean you can try it out and see what happens. I don't know about texture residence, that may be local to a context but I haven't tried that really.[/QUOTE] My first attempt didn't work, but I'm busy studying for finals plus going to work, so all I had time for was asking questions. Just fishing for information is all.
  • So I'm not really well versed in C#, but I am decent in C and C++. I'm debugging someone else's program where the issue is that doing a specific assignment statement is throwing an OutOfMemoryException. Thing is, I can write the exception handling, but I still need the return value of that variable assignment, and attempting to redo the assignment after a forced garbage collection isn't fixing it. I did set Visual Studio and the app config to build for x64 so memory [I]shouldn't[/I] be a problem, but I'm still getting this exception. What can I do to fix this?
  • [QUOTE=huntingrifle;52962153]So I'm not really well versed in C#, but I am decent in C and C++. I'm debugging someone else's program where the issue is that doing a specific assignment statement is throwing an OutOfMemoryException. Thing is, I can write the exception handling, but I still need the return value of that variable assignment, and attempting to redo the assignment after a forced garbage collection isn't fixing it. I did set Visual Studio and the app config to build for x64 so memory [I]shouldn't[/I] be a problem, but I'm still getting this exception. What can I do to fix this?[/QUOTE] Would really have to see the code to truly understand what's going on. This could be [i]anything[/i], and without seeing a single line of code, there's no definitive answer to your problem.
  • [QUOTE=huntingrifle;52962153]So I'm not really well versed in C#, but I am decent in C and C++. I'm debugging someone else's program where the issue is that doing a specific assignment statement is throwing an OutOfMemoryException. Thing is, I can write the exception handling, but I still need the return value of that variable assignment, and attempting to redo the assignment after a forced garbage collection isn't fixing it. I did set Visual Studio and the app config to build for x64 so memory [I]shouldn't[/I] be a problem, but I'm still getting this exception. What can I do to fix this?[/QUOTE] Could you go into some more detail about what the assignment is doing? Like is it calling a method to get the value you want to assign? If it is, I would try stepping into it to see if you can spot what's trying to allocate a huge array. ReSharper lets you decompile and debug libraries that you don't have the source to so you can step into any third party code you're calling.
  • [QUOTE=CarLuver69;52962330]Would really have to see the code to truly understand what's going on. This could be [i]anything[/i], and without seeing a single line of code, there's no definitive answer to your problem.[/QUOTE] [QUOTE=Ziks;52962339]Could you go into some more detail about what the assignment is doing? Like is it calling a method to get the value you want to assign? If it is, I would try stepping into it to see if you can spot what's trying to allocate a huge array. ReSharper lets you decompile and debug libraries that you don't have the source to so you can step into any third party code you're calling.[/QUOTE] Here's the snippet of the source code where the error occurs. I wrote an exception handler around it: [code]try { GV.Reference_Floats = new double[GV.Skeleton[x].Ref_Floats_Count]; //This is the line that throws the memory exception. } catch (System.OutOfMemoryException) { Console.WriteLine("Handled OutOfMemoryException, value of GV.Reference_Floats is " + GV.Reference_Floats); Console.WriteLine("Attempting to run Garbage Collection."); GC.Collect(); GC.WaitForPendingFinalizers(); Console.WriteLine("Garbage Collection successful, attempting again."); GV.Reference_Floats = new double[GV.Skeleton[x].Ref_Floats_Count]; //If this throws again, the exception is unhandled. }[/code] I tried forcibly calling garbage collection to free up any memory, then attempted the same allocation again, only to get the same exception thrown.
  • [QUOTE=huntingrifle;52962562]Here's the snippet of the source code where the error occurs. I wrote an exception around it: [code]try { GV.Reference_Floats = new double[GV.Skeleton[x].Ref_Floats_Count]; //This is the command that throws the memory exception. } catch (System.OutOfMemoryException) { Console.WriteLine("Handled OutOfMemoryException, value of GV.Reference_Floats is " + GV.Reference_Floats); Console.WriteLine("Attempting to run Garbage Collection."); GC.Collect(); GC.WaitForPendingFinalizers(); Console.WriteLine("Garbage Colletion successful, attempting again."); GV.Reference_Floats = new double[GV.Skeleton[x].Ref_Floats_Count]; }[/code] I tried forcibly calling the garbage collection to try to free up any memory, then attempt the same command again, only to get the same exception thrown.[/QUOTE] What is the size of Ref_Floats_Count? It's probably just way too big.
  • [QUOTE=huntingrifle;52962562]Here's the snippet of the source code where the error occurs. I wrote an exception around it: [code]try { GV.Reference_Floats = new double[GV.Skeleton[x].Ref_Floats_Count]; //This is the command that throws the memory exception. } catch (System.OutOfMemoryException) { Console.WriteLine("Handled OutOfMemoryException, value of GV.Reference_Floats is " + GV.Reference_Floats); Console.WriteLine("Attempting to run Garbage Collection."); GC.Collect(); GC.WaitForPendingFinalizers(); Console.WriteLine("Garbage Colletion successful, attempting again."); GV.Reference_Floats = new double[GV.Skeleton[x].Ref_Floats_Count]; }[/code] I tried forcibly calling the garbage collection to try to free up any memory, then attempt the same command again, only to get the same exception thrown.[/QUOTE] [QUOTE=paindoc;52962868]What is the size of Ref_Floats_Count? Its probably just way too big.[/QUOTE] It looks like this is some sort of animation importer, so I suspect there's something wrong with how Ref_Floats_Count is being read from a file. I think every OutOfMemoryException I've ever hit came from a mistake I made writing an importer, where I read the length of something from the wrong place / with the wrong endianness.
  • [QUOTE=paindoc;52962868]What is the size of Ref_Floats_Count? Its probably just way too big.[/QUOTE] [QUOTE=Ziks;52963553]It looks like this is some sort of animation importer, so I suspect there's something wrong with how Ref_Floats_Count is being read from a file. I think 100% of my OutOfMemoryExceptions were from when I've made a mistake writing an importer and read a length of something from the wrong place / with the wrong endianness.[/QUOTE] You're right, it is an animation importer. I've been trying to update [URL="https://bitbucket.org/Volfin/hkx2smd"]this tool developed by Volfin[/URL] to convert Havok HKX files into usable formats. The issue is that the tool hasn't been officially updated in quite some time; Volfin made the source code free to use, but I'm not proficient in C#. As of the last release, it worked on HKX file versions 2010.2.0 and 2012.2.0, but I'm trying to work with versions newer than that.
  • Can anyone help with a .Net Core WebAPI? I have some async actions that are returning 404 every time. I think I've got my routing working because I can get non-async actions to work in the same controller. Here's my action: [code] [HttpGet("{id}")] public async Task<string> GetProfile(int id) { CloudTable table = tableClient.GetTableReference("profiles"); await table.CreateIfNotExistsAsync(); TableOperation retrieve = TableOperation.Retrieve<Profile>("root", id.ToString()); TableResult retrievedResult = await table.ExecuteAsync(retrieve); if (retrievedResult.Result != null) return ((Profile)retrievedResult.Result).configuration; else return "{}"; } [/code] Even a simple one like [code] [HttpGet] public async Task<ActionResult> Get() { return Ok(); } [/code] returns 404. Strangely, the debugger does recognize these as the actions being requested; they just don't return properly. Does anyone know what I'm missing? [editline]11th December 2017[/editline] It was all routing. Definitely the routing
  • [QUOTE=JWki;52958252]-rendering stuff-[/QUOTE] I've done some more thinking on this, and started implementing things. I think I'll need one "base" class, but I don't feel it's a bad use of virtual dispatch. As you pointed out, pipeline state describes a lot of the unique rendering required for a unique type: so I thought, hey, I'll give each feature renderer a graphics pipeline object to store for itself. That doesn't quite work, though, since you need a handle to the renderpass the pipeline will be bound in to create the pipeline. So then maybe the renderpass should store the pipeline - but no, because feature renderers will have different shaders and pipeline layouts, and sometimes different requirements for the pipeline state (like vertex attributes). Instead, I think I'm going to have the renderpass class store one graphics pipeline object per unique feature renderer. Then, feature renderers will have to register themselves with a renderpass: this is where the base class comes in, merely defining the interface for registering some kind of feature renderer into any of the renderpasses. Inside the pass itself, creating the first pipeline will be the slowest and most wasteful step. Further pipelines can derive from this first one, or I might be able to delay initialization and create them all at once using the array syntax Vulkan allows for this (and still get the derived-pipeline benefits). Plus, with pipeline caches further reducing pipeline creation cost, this shouldn't really bog anything down. I can then store these pipeline objects in a map, with the key being a std::type_index. At render time, the overarching class managing most of the scene will be able to order renderpasses as it sees fit, and then render them either sequentially or in parallel (dependencies probably affecting this).
I'm thinking storing a callback - maybe via std::function - to a feature renderer's render function might work, so that the renderpass class can iterate through its map of enabled feature renderers and just render away. The only inheritance-related stuff is there to reduce code duplication, which definitely outweighs any slight drawbacks it might have (especially since I have no plans to make things tangled or to blur responsibilities). I think this is where I'm headed, at least. I still need to do more planning, but I'm having to balance moving forward on the general goals of my work project with leaving room for (and doing) architectural design for our rendering system, which we need to use for a number of applications (all of which will have significant performance burdens and fairly large sets of required features).
  • [QUOTE=proboardslol;52966944]Can anyone help with a .Net Core WebAPI? I have some async actions that are returning 404 every time. I think I've got my routing working because I can get non-async actions to work in the same controller. Here's my action: [code] [HttpGet("{id}")] public async Task<string> GetProfile(int id) { CloudTable table = tableClient.GetTableReference("profiles"); await table.CreateIfNotExistsAsync(); TableOperation retrieve = TableOperation.Retrieve<Profile>("root", id.ToString()); TableResult retrievedResult = await table.ExecuteAsync(retrieve); if (retrievedResult.Result != null) return ((Profile)retrievedResult.Result).configuration; else return "{}"; } [/code] Even a simple one like [code] [HttpGet] public async Task<ActionResult> Get() { return Ok(); } [/code] returns 404. However, the debugger recognizes that these are the actions that were requested, but didn't return properly. Does anyone know what I'm missing? [editline]11th December 2017[/editline] It was all routing. Definitely the routing[/QUOTE] Curious, why do you want an async controller action? Is that some requirement?
  • [QUOTE=brianosaur;52967664]Curious, why do you want an async controller action? Is that some requirement?[/QUOTE] Azure table doesn't expose a synchronous API for .net core. Maybe I could've abstracted it a bit but the problem ended up not being the async action [editline]11th December 2017[/editline] My company is putting all our future technology on .net core to maximize compatibility with Linux and cloud platforms
  • [QUOTE=brianosaur;52967664]Curious, why do you want an async controller action? Is that some requirement?[/QUOTE] You want everything to be asynchronous where possible, to maximize concurrency and make better use of system resources.