• What are you working on? January 2012
My raytracer is now in C# and supports 3D. [img]http://i.imgur.com/StZ6q.png[/img] I got line-plane intersection down; now I shall make line-sphere intersections. (Unfortunately it's really slow: this scene took 6 seconds to render.) EDIT: Now with cooler patterns, more controls and more infinite planes. [img]http://i.imgur.com/cBFqC.jpg[/img]
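The line-sphere intersection mentioned above reduces to solving a quadratic in the ray parameter. Here's a minimal sketch of the standard approach (not the poster's C# code; Python for brevity, and the function name is made up):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest non-negative hit distance t along the ray, or None.

    Solves |o + t*d - c|^2 = r^2, a quadratic in t. Assumes `direction`
    is normalised, so the quadratic's leading coefficient is 1.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    if t < 0:
        t = (-b + math.sqrt(disc)) / 2.0  # origin inside the sphere
    return t if t >= 0 else None
```

For example, a ray from (0, 0, -5) pointing down +z hits a unit sphere at the origin at t = 4.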
[QUOTE=DeadKiller987;34309104]Image looks like the guys holding the gun with his left hand, on the right side of the screen. [editline]20th January 2012[/editline] Bloody snipper ninja[/QUOTE] He's holding it with both hands. Still looks odd being on the right-hand side of the screen, though.
I ought to be revising, but Smashmaster's posts inspired me to finally start adding implicit 3D relations to my grapher :) Here's a spherical volume: [img]http://i.imgur.com/oCGnY.png[/img] And a sphere surface: [img]http://i.imgur.com/0pyeZ.png[/img] No position interpolation or normals yet, I'll add those in next. Think I've got an efficient way to get per-vertex normals from the field, same way as my raymarching shader :)
[QUOTE=NotMeh;34309883]How do you even get IntelliSense working in VC++ 2010? It just says that it isn't available for C++/CLI.[/QUOTE] It's a feature of Visual Assist X.
[QUOTE=DeadKiller987;34309104]Image looks like the guys holding the gun with his left hand, on the right side of the screen. [editline]20th January 2012[/editline] Bloody snipper ninja[/QUOTE] Yeah I stole it from somewhere, we desperately need an artist.
Position interpolation working - here's a bunch of tubes: [img]http://i.imgur.com/8zP8n.png[/img] The wireframe of that (kind of, I just changed it to GL_LINE_LOOP instead of GL_TRIANGLES): [img]http://i.imgur.com/FHaMR.png[/img] Normals next :)
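The position interpolation described here amounts to sliding each marching-cubes vertex along its grid edge to the field's estimated crossing point, assuming the field varies roughly linearly between the two corner samples. A hypothetical sketch (names are mine, not from the grapher):

```python
def interpolate_crossing(p0, p1, v0, v1, iso=0.0):
    """Estimate where the field crosses `iso` on the grid edge p0->p1.

    v0 and v1 are the field values sampled at the two corners; assumes
    approximately linear variation between them, so the crossing sits
    at the lerp parameter t = (iso - v0) / (v1 - v0).
    """
    if v0 == v1:
        return p0  # degenerate edge: fall back to a corner
    t = (iso - v0) / (v1 - v0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))
```

With the field f(x, y, z) = x - 0.3 sampled on the edge from (0, 0, 0) to (1, 0, 0), this places the vertex at x = 0.3 instead of snapping to a corner.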
[QUOTE=r0b0tsquid;34310085]I ought to be revising, but Smashmaster's posts inspired me to finally start adding implicit 3D relations to my grapher :) Here's a spherical volume: [img]http://i.imgur.com/oCGnY.png[/img] And a sphere surface: [img]http://i.imgur.com/0pyeZ.png[/img] No position interpolation or normals yet, I'll add those in next. Think I've got an efficient way to get per-vertex normals from the field, same way as my raymarching shader :)[/QUOTE] Looks like there are some resolution issues. I dunno if you have a full CAS (it looks like you're solving numerically), but if you do you could probably solve the intersection of the ray with the surface analytically in most cases. This would give you pixel-accurate images. Assuming the graphing area is 256x256, the break-even point for 3D sampling vs analytical raytracing is X^3 = 256^2, so X ≈ 40. So it would probably be faster if you plan on sampling the function at resolutions much higher than 40x40x40. I mean, this is speculation though; I haven't actually tried it. Solving numerically and sampling in 3D is definitely the 'safer' choice.
[QUOTE=NotMeh;34309883]How do you even get IntelliSense working in VC++ 2010? It just says that it isn't available for C++/CLI.[/QUOTE] C++/CLI is the .NET interop language for C++. Intellisense works just fine for a plain C++ project/solution.
[QUOTE=ROBO_DONUT;34311443]Looks like there are some resolution issues. I dunno if you have a full CAS (it looks like you're solving numerically), but if you do you could probably solve the intersection of the ray with the surface analytically in most cases. This would give you pixel-accurate images. Assuming the graphing area is 256x256, the break even point for 3D sampling vs analytical raytracing is: X^3 = 256^2 X=40 So it would probably be faster if you plan on sampling the function at resolutions much higher than 40x40x40. I, mean, this is speculation though. I haven't actually tried it. Solving numerically and sampling in 3D is definitely the 'safer' choice.[/QUOTE] I think you're under the impression I'm raymarching it? I should have made it more clear, sorry :/ I'm evaluating a 3D field (it's not really "solved" as such - it's more like an explicit relation with an extra "value" dimension on top of the x, y, z), and then I'm using marching cubes to generate triangles to pass to OpenGL. The spiky bits on that sphere are because of the vertices being snapped to the nearest gridline - that's all sorted now, thanks to a bit of reverse interpolation to guess their actual positions. You can see the new, smoothed meshes in the post with the tubes. The raymarching comment was just referring to the differential method that an old GLSL raymarcher I made used to extract normals from a field using partial derivatives. I was thinking of using a ray tracing / marching method to draw, seeing as it does scale a lot better to higher qualities, but as far as I know they're both pretty slow on CPUs vs. graphics cards? And seeing as a lot of the computers at my school are pretty old and lacking dedicated graphics hardware, fragment shader raytracers aren't much of a possibility :/ [editline]20th January 2012[/editline] Blargh, wall of text :/
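The "normals from the field using partial derivatives" trick referred to here is commonly done with central differences: for an implicit surface f(x, y, z) = 0, the gradient of f points along the surface normal. A hedged sketch of that idea (names are mine, not from the grapher):

```python
import math

def field_normal(f, p, eps=1e-3):
    """Approximate the surface normal at p as the normalised gradient of f.

    Samples f six times with central differences; eps trades accuracy
    against sensitivity to noise in the field.
    """
    x, y, z = p
    g = (
        f(x + eps, y, z) - f(x - eps, y, z),
        f(x, y + eps, z) - f(x, y - eps, z),
        f(x, y, z + eps) - f(x, y, z - eps),
    )
    length = math.sqrt(sum(c * c for c in g)) or 1.0  # avoid divide-by-zero
    return tuple(c / length for c in g)
```

For the unit sphere f = x^2 + y^2 + z^2 - 1 at the point (1, 0, 0), this returns (1, 0, 0) as expected.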
[QUOTE=r0b0tsquid;34311954]I think you're under the impression I'm raymarching it?[/QUOTE] Nope. It's really quite clear from the pictures that you're using marching cubes + rasterization. I'm [i]suggesting[/i] that raytracing might be an option, since the surface is defined by a single function. We use rasterization in video games, etc, because we have to deal with a mess of polygons and arbitrary surfaces that can't be defined by a single function. For surfaces that can be defined by a single function, solving the line-surface intersection should be feasible, so raytracing might be an option. It should also scale better because you only have to solve once for each pixel in a 2D image, whereas with marching cubes you have to sample in 3D space.
[QUOTE=Chandler;34311527]C++/CLI is the .NET interop language for C++. Intellisense works just fine for a plain C++ project/solution.[/QUOTE] Thanks. Figured it out.
[QUOTE=ROBO_DONUT;34312256]Nope. It's really quite clear from the pictures that you're using marching cubes + rasterization. I'm [i]suggesting[/i] that raytracing might be an option, since the surface is defined by a single function. We use rasterization in video games, etc, because we have to deal with a mess of polygons and arbitrary surfaces that can't be defined by a single function. For surfaces that can be defined by a single function, solving the line-surface intersection should be feasible, so raytracing might be an option. It should also scale better because you only have to solve once for each pixel in a 2D image, whereas with marching cubes you have to sample in 3D space.[/QUOTE] I would like to see that attempted, since if I remember right you're testing each pixel in the 2D view why not provide the same resolution in the 3D view?
[QUOTE=ROBO_DONUT;34312256]Nope. It's really quite clear from the pictures that you're using marching cubes + rasterization. I'm [i]suggesting[/i] that raytracing might be an option, since the surface is defined by a single function. We use rasterization in video games, etc, because we have to deal with a mess of polygons and arbitrary surfaces that can't be defined by a single function. For surfaces that can be defined by a single function, solving the line-surface intersection should be feasible, so raytracing might be an option. It should also scale better because you only have to solve once for each pixel in a 2D image, whereas with marching cubes you have to sample in 3D space.[/QUOTE] That's an alternative, and I'll keep turning it over in my head. The only thing that really worries me is the "analytically in most cases" part. I'm not really sure where I'd start with a CAS (I could probably bash my head against a wall until I came up with something that half-worked), and I'm concerned by all of the little edge cases that a CAS would choke on but that field evaluation chugs straight through. I'm also a little concerned by the idea of speed being a function of window size - I'd rather not have it grind to a halt because the window is resized :S That's thought-provoking though, I'll keep all that in mind. Thanks :) Anyway, normals: [img]http://i.imgur.com/U0Csi.png[/img] They're effectively per-face at the minute, but they're known for every grid point. I'll interpolate the normals between the points to smooth it all out, but that's gonna take a bit more groundwork.
[QUOTE=r0b0tsquid;34312573]I'm also a little concerned by the idea of speed being a function of window size - I'd rather not have it grind to a halt because the window is resized :S That's thought-provoking though, I'll keep all that in mind. Thanks :) [/QUOTE]To try and ease performance, you could split the window up into smaller sections, calculate the result of the whole section, then split that section up again and calculate each subsection until the subsections are the size of pixels. That doesn't speed up the overall drawing time, but it would help to make it seem more responsive.
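The subdivide-until-pixel-sized idea sketches out as coarse-to-fine passes over the window: sample once per block, fill the block, then halve the block size and repeat. This is my own illustrative take, not danharibo's code:

```python
def refinement_passes(width, height, coarsest=64):
    """Yield (block_size, cells) passes from coarse to fine.

    Each pass tiles the window with square blocks of side `block_size`;
    successive passes halve the block size until blocks are single
    pixels. A renderer would sample once per block per pass, giving a
    fast rough image that sharpens over time.
    """
    size = coarsest
    while size >= 1:
        cells = [(x, y, size)
                 for y in range(0, height, size)
                 for x in range(0, width, size)]
        yield size, cells
        size //= 2
```

For a 128x128 window starting at 64-pixel blocks, the first pass does only 4 samples; only the final pass pays the full per-pixel cost, so early passes stay responsive.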
[QUOTE=danharibo;34312470]I would like to see that attempted, since if I remember right you're testing each pixel in the 2D view why not provide the same resolution in the 3D view?[/QUOTE] I don't test every pixel - there's a resolution-independent grid spread across the screen (I think it's 50x50 at default?) and then I use marching squares to find the lines/polys. He's right though. O(n^2) (for screen area) is always going to be smaller than O(n^3) when the numbers get big - it's just a matter of how big the numbers get, and of how easy the two implementations are. Plus, raytracing would only see a benefit if the trace time was constant(ish) for each pixel - just raymarching into a 3D field is still order of n^3, because you'd still have to evaluate the whole volume. [editline]20th January 2012[/editline] [QUOTE=danharibo;34312737]To try and ease performance, you could split the window up into smaller sections, calculate the result of the whole section, then split that section up again and calculate each subsection until the subsections are the size of pixels. That doesn't speed up the overall drawing time, but it would help to make it seem more responsive.[/QUOTE] So reduce the sampling density while the user's interacting? It does that already with the current method. Ironically, you'd actually get a better performance boost from that with a volumetric method than with a screen-area one: n^3 grows faster than n^2, but it also shrinks faster :v: Also, I think users would be more likely to notice an image resolution change than a volume resolution change: the jagged square edges and reduced resolution would probably be a lot more jarring than slightly rougher approximations to the curves, especially with normal smoothing and whatnot - you'd have pretty much the same amount of detail, but the rasterized version would be much sharper/cleaner.
[QUOTE=r0b0tsquid;34312856]I don't test every pixel - there's a resolution-independent grid spread across the screen (I think it's 50x50 at default?) and then I use marching squares to find the lines/polys. He's right though. O(n^2) (for screen area) is always going to be smaller than O(n^3) when the numbers get big - it's just a matter of how big the numbers get, and of how easy the two implementatons are. Plus, raytracing would only see a benefit if the trace time was constant(ish) for each pixel - just raymarching into a 3D field is still order of n^3, because you'd still have to evaluate the whole volume. [editline]20th January 2012[/editline] So reduce the sampling density while the user's interacting? It does that already with the current method. Ironically, you'd actually get a better performance boost from that with a volumetric method than with a screen-area one: n^3 grows faster than n^2, but it also shrinks faster :v: Also, I think users would be more perceptive of an image resolution change than a volume resolution change: the jagged square edges and reduced resolution would probably be a lot more jarring than slightly rougher approximations to the curves, especially with normal smoothing and whatnot - you'd have pretty much the same amount of detail, but the rasterized version would be much sharper/cleaner.[/QUOTE] Well that's quite informative, thanks. I tried to create [url=http://bytecove.co.uk/jplot/]something[/url] for plotting in 2D once using <canvas>, it uses some primitive methods though so I'd like to revisit it sometime.
[IMG]http://i.imgur.com/vdjeA.gif[/IMG] Context menus!

button.lua
[lua]
local togglable = false
local down = false
local img_open = "io_button_open.png"
local img_close = "io_button_closed.png"

function Component:onSpawn()
	self:SetImage(img_open)
	self:SetToolTip("Button (Off)")
	self:SetOutputs({
		{"Out", 0, self:GetWidth()-4, self:GetHeight()/2}
	})
end

function Component:onClicked(x, y, b)
	if(b ~= 0) then return end
	if(togglable) then
		if(down) then
			self:SetImage(img_open)
			down = false
			self:SetToolTip("Button (Off)")
			self:TriggerOutput(1, 0)
			return
		end
	end
	self:SetImage(img_close)
	down = true
	self:SetToolTip("Button (On)")
	self:TriggerOutput(1, 1)
end

-- user may have unclicked outside of the button, so we use this instead of onUnclicked
function Component:onMouseReleased(x, y, b)
	if(b ~= 0) then return end
	if(down and not(togglable)) then
		self:SetImage(img_open)
		down = false
		self:SetToolTip("Button (Off)")
		self:TriggerOutput(1, 0)
	end
end
[/lua]

add.lua
[lua]
function Component:onSpawn()
	self:SetImage("math_add.png")
	self:SetToolTip("Add (Out = A + B + C + D)")
	self:SetInputs({
		{"A", 0, 4, 12}, --inport 1
		{"B", 0, 4, 20}, --inport 2
		{"C", 0, 4, 28}, --inport 3
		{"D", 0, 4, 36}  --inport 4
	})
	self:SetOutputs({
		{"Out", 0, self:GetWidth() - 4, 12} --outport 1
	})
end

function Component:onInputsTriggered()
	local a, b, c, d = self:GetInputValue(1), self:GetInputValue(2), self:GetInputValue(3), self:GetInputValue(4)
	local out = a + b + c + d
	self:TriggerOutput(1, out)
	self:SetToolTip(a .. " + " .. b .. " + " .. c .. " + " .. d .. " = " .. out)
end
[/lua]

At this point, what's left:
- More UI stuff (New, Save, Load, Exit, etc...)
- Script the rest of the default lua package (Only did the button, and some math chips)
mmmm contexual
I have a question that's maybe kinda obvious, but I'm not entirely sure how to approach it. Let's say I have 20 units I want to walk to a specific point. I don't want all 20 of them running their A* algorithms in the same frame, because that would cause a noticeable slowdown. What's a good way to partition the group so that they call the pathing algorithm in, say, 5 separate frames? I have an idea as to how to do this but it seems pretty janky.
[QUOTE=Socram;34314314]I have a question that's maybe kinda obvious, but I'm not entirely sure how to approach it. Let's say I have 20 units I want to walk to a specific point. I don't want all 20 of them running their A* algorithms in the same frame, because that would cause a noticeable slowdown. What's a good way to partition the group so that they call the pathing algorithm in, say, 5 separate frames? I have an idea as to how to do this but it seems pretty janky.[/QUOTE] Either increase the heuristic weighting so the result isn't optimal but is way faster, or plot a single path from the centre of the group and have every unit align to that path and to its neighbours (alignment, cohesion), AKA flocking.
[QUOTE=Socram;34314314]I have a question that's maybe kinda obvious, but I'm not entirely sure how to approach it. Let's say I have 20 units I want to walk to a specific point. I don't want all 20 of them running their A* algorithms in the same frame, because that would cause a noticeable slowdown. What's a good way to partition the group so that they call the pathing algorithm in, say, 5 separate frames? I have an idea as to how to do this but it seems pretty janky.[/QUOTE] Either make a list of the nodes and run 5 of them at a time, or make an AstarStep() method and call it a few hundred times a frame for each one.
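Both suggestions boil down to budgeting the pathfinder per frame. A sketch of the "run N requests per frame" variant, with a placeholder `find_path` standing in for whatever A* implementation the game already has (all names here are hypothetical):

```python
from collections import deque

class PathScheduler:
    """Spread pathfinding requests across frames.

    Units enqueue requests; each frame, at most `per_frame` full A*
    searches run. Everything else waits in FIFO order, so no single
    frame pays for the whole group.
    """
    def __init__(self, find_path, per_frame=4):
        self.find_path = find_path  # e.g. your existing A* routine
        self.per_frame = per_frame
        self.queue = deque()

    def request(self, unit, goal):
        self.queue.append((unit, goal))

    def update(self):
        """Call once per frame from the game loop."""
        for _ in range(min(self.per_frame, len(self.queue))):
            unit, goal = self.queue.popleft()
            unit.path = self.find_path(unit.pos, goal)
```

With 20 units and per_frame=4, the whole group is pathed over 5 frames, which is exactly the partitioning asked about above. The other suggestion (an AstarStep() run for a fixed number of iterations) amortises a single search instead and pairs well with this.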
[QUOTE=Nighley;34314855]either increase the heuristics such that the result isn't the best BUT way faster OR use the center of the group and plot a path and just align every unit to that path and their neighbours (adhesion, cohesion) AKA flocking[/QUOTE] While I appreciate the tips, I'm currently looking to speed things up in the way I mentioned above, any ideas for that? [QUOTE=neos300;34314937]Either make a list of the nodes and run 5 of them at a time, or make a AstarStep() method and and call it a few hundred times a frame for each or so.[/QUOTE] Okay, I was looking into the list method but for some reason it was running even slower, but perhaps I had implemented it stupidly. But the fact that you mentioned it means I'll look into it some more.
Done with exams, managed to do some work on the bot today. Specifically, I started writing the method libraries I'll be needing in the near future, plus a viewer panel thing that captures the game screen so I can draw debug info on top of it. [img]http://puu.sh/e2VA[/img]
[QUOTE=Socram;34315002]While I appreciate the tips, I'm currently looking to speed things up in the way I mentioned above, any ideas for that? Okay, I was looking into the list method but for some reason it was running even slower, but perhaps I had implemented it stupidly. But the fact that you mentioned it means I'll look into it some more.[/QUOTE] How were you implementing it?
[QUOTE=neos300;34315124]How were you implementing it?[/QUOTE] I would add all the units to a toBePathed list, and then 4 at a time I calculated their paths, removing each unit until the list was empty. I have to go now unfortunately; thanks for any more help I receive!
[QUOTE=Ploo;34315109]Done with exams, managed to do some work on the bot today. Specifically starting to write any method libraries I may be needing in the near future. Also a viewer panel thing that captures the game screen so I can draw debug on top of that. [img]http://puu.sh/e2VA[/img][/QUOTE] This is some pretty sweet work you're doing.
Overv has stopped working on FPAndroid, so I forked it. [t]http://puu.sh/e3fy[/t] [t]http://puu.sh/e3fJ[/t] [t]http://puu.sh/e3lS[/t]
[QUOTE=WalkDinosaur;34316299]Overv has stopped working on FPAndroid[/QUOTE] strange, that's very out of character for overv
Where the fuck do you go to learn the secrets of raytracing?
Working on linking control points in my animation system.