[QUOTE=Tamschi;50154531]On that note: Is there something like a "for dummies" guide on referencing a native library in VC++ (VS Community 2015 Update 2)?
(Specifically the FBX SDK in this case, but that's [I]probably[/I] fairly standard, with a .lib and a .dll, and a (ton of) header file(s) of course.)[/QUOTE]
You can't directly reference native DLLs in VC++ the way you can with managed DLLs in C#; it's still the standard .lib-and-headers procedure. You CAN make a .lib file for linking if you only have a DLL, though:
[code]
dumpbin /exports library.dll > funcs.def && lib /def:funcs.def /out:library.lib
[/code]
Headers are also nice to have but not mandatory; if the names are mangled, you can recover the full prototypes by demangling them.
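Worth noting that the raw `dumpbin /exports` output isn't a valid .def file by itself; you'd normally trim it down to a LIBRARY/EXPORTS section before feeding it to lib.exe. A rough sketch of that filter (the exact column layout of dumpbin's table is an assumption here):

```python
import re

def exports_to_def(dumpbin_output: str, libname: str) -> str:
    """Extract the name column from `dumpbin /exports` output into a .def file."""
    names = []
    in_table = False
    for line in dumpbin_output.splitlines():
        # the export table starts after the "ordinal hint RVA name" header
        if re.match(r"\s*ordinal\s+hint\s+RVA\s+name", line):
            in_table = True
            continue
        if in_table:
            # rows look like: "          1    0 00001000 SomeFunction"
            m = re.match(r"\s*(\d+)\s+[0-9A-Fa-f]+\s+[0-9A-Fa-f]+\s+(\S+)", line)
            if m:
                names.append(m.group(2))
            elif line.strip() and names:
                break  # first non-matching, non-blank line ends the table
    lines = ["LIBRARY " + libname, "EXPORTS"] + ["    " + n for n in names]
    return "\n".join(lines) + "\n"
```

The resulting file can then go through `lib /def:funcs.def /out:library.lib` as in the one-liner above.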
Speaking of which, I should continue working on my C++ -> C# binding generator; it should even be possible to merge the unmanaged library and the managed wrapper into a single DLL.
[editline]18th April 2016[/editline]
[QUOTE=Number-41;50155847]Well yeah the Kinect sort of operates on the sample principle, but it uses IR light so my guess is that it will only be able to detect deformations of >700nm (which is sufficient I think for heart beats :v:, although that's only part of a huge problem)[/QUOTE]
No, the Kinect works on another principle, it projects fancy light patterns and uses two cameras to infer depth.
[img]http://51.254.129.74/~anon/files/1461004990.png[/img]
[editline]18th April 2016[/editline]
That gives me an idea. I suppose it could be possible to infer depth using a single camera and a projector casting a grid of filled circles, then comparing the circle sizes. It wouldn't be very precise, though.
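The relation that idea hinges on is just the pinhole camera model: apparent size shrinks inversely with distance, so depth ≈ focal length × real size / apparent size. A toy sketch (all the numbers are made up):

```python
def depth_from_circle(focal_px: float, real_diameter_m: float,
                      apparent_px: float) -> float:
    """Pinhole model: z = f * D / d, with f and d in pixels, D in metres."""
    return focal_px * real_diameter_m / apparent_px

# e.g. a 5 cm projected circle seen as 50 px through a 1000 px focal length
# sits about 1 m away; seen as 25 px, about 2 m away
```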
[QUOTE=cartman300;50155890]
That gives me an idea. I suppose it could be possible to infer depth using a single camera and a projector projecting a filled grid circle pattern and comparing circle sizes. Wouldn't be very precise.[/QUOTE]
I guess it could work for surfaces whose normals are parallel to the camera axis (Giant's Causeway-ish). Sort of related is fringe-pattern analysis:
[img_thumb]http://i.imgur.com/9hFHivO.png[/img_thumb]
Don't know the details though, it's my supervisor's main area of interest.
I remade the note thingy :v: I found out I can use blur to make it look super cool, so I did!
[vid]https://dl.dropboxusercontent.com/u/65179543/ShareX/2016/04/2016-04-18_21-07-14.mp4[/vid]
It's WPF, for anyone asking. (Also, the love2d folder on my desktop has nothing to do with anything.)
[QUOTE=cartman300;50155890][...]
Talking about this, i should continue working on my C++ -> C# binding generator, it should even be possible to merge the unmanaged library and managed wrapper into a single dll.[/QUOTE]
Yes please. I'd probably still have to C#ify it manually, but that would be far less terrible than figuring out the tools myself.
(I still don't quite get where I have to add the settings (and the folders) in VS, since I don't use the command line compiler, but knowing this I can probably look it up for vanilla C++.)
[video=youtube;-NRgsUMPBWs]https://www.youtube.com/watch?v=-NRgsUMPBWs[/video]
Managed to get a fire-to-smoke gradient going. Lost transparency somewhere along the way. Far more complicated than I'd hoped.
But I actually like the look. The banding on the fire is starting to really irk me as I have no clue what's causing it. :ohno:
And a small update! Super happy with how it has turned out so far; I've spent hours just being happy with it.
[vid]https://dl.dropboxusercontent.com/u/65179543/ShareX/2016/04/2016-04-18_22-41-05.mp4[/vid]
I won't post more of this for now, because I posted the last version not long ago :v:
[QUOTE=Socram;50152492]I can't imagine programming like this, sounds miserable.[/QUOTE]
Sometimes I guess, but I at least try to figure out why what I did works afterwards.
[QUOTE=cartman300;50155890]You can't directly reference native dlls in VC++ like you can with managed dlls in C#, it's still the standard .lib and headers procedure. You CAN make a lib file for linking if you only have a dll tho
[code]
dumpbin /exports library.dll > funcs.def && lib /def:funcs.def /out:library.lib
[/code]
Headers are also nice to have but not mandatory, if the names are mangled you can get the full prototypes by demangling them.
Talking about this, i should continue working on my C++ -> C# binding generator, it should even be possible to merge the unmanaged library and managed wrapper into a single dll.
[editline]18th April 2016[/editline]
But I wasn't talking about what principle the Kinect works on, just that it returns similar data (the picture I posted), similar to Number 41's creation.
No, the Kinect works on another principle, it projects fancy light patterns and uses two cameras to infer depth.
[img]http://51.254.129.74/~anon/files/1461004990.png[/img]
[editline]18th April 2016[/editline]
That gives me an idea. I suppose it could be possible to infer depth using a single camera and a projector projecting a filled grid circle pattern and comparing circle sizes. Wouldn't be very precise.[/QUOTE]
The Kinect has only one IR camera plus a separate color camera (and an IR pattern projector, just as you said).
I managed to coax my neural network into approximating sinusoidal functions:
[IMG]http://puu.sh/onkjP/ac388ec179.png[/IMG]
Left is [y = (sin(x)+1)/2], right is [y = (cos(x)+1)/2].
I had to switch to a sinc function for neuron activation. When I tried various other activation functions, the networks converged to outputting a constant and ignoring the input. I also had to rework the mutation settings. Mutation chance now decreases as the networks become more fit. This change brought me up to 4 significant figures on linear functions. It's still not amazingly accurate on sinusoidals, but it's good enough to assure me that this network wasn't a complete waste of my weekend.
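For anyone curious, the two pieces described above might look something like this (the constants are placeholders, not the actual values used):

```python
import math
import random

def sinc(x: float) -> float:
    """sinc activation: sin(x)/x, with the removable singularity at 0 patched."""
    return 1.0 if x == 0.0 else math.sin(x) / x

def mutation_chance(fitness: float, B: float = 0.5) -> float:
    """Mutation chance = B / fitness, so fitter networks mutate less often."""
    return min(1.0, B / fitness)

def mutate(weights: list, fitness: float, step: float = 0.1) -> list:
    """Perturb each weight with a probability set by the current fitness."""
    p = mutation_chance(fitness)
    return [w + random.gauss(0.0, step) if random.random() < p else w
            for w in weights]
```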
[QUOTE=Kevin;50156518]And a small little update! Super happy with how it has turned out so far, been spending hours of being happy so far.
[vid]https://dl.dropboxusercontent.com/u/65179543/ShareX/2016/04/2016-04-18_22-41-05.mp4[/vid]
I won't post more of this for now because it isn't very long ago I posted the last version :v:[/QUOTE]
If you could put this on GitHub, that would be awesome. I have wanted to learn WPF, but every time I start it's just "ugh, screw this" because I don't have a project or anything to look at.
Working on scripting for an RPG idling game called Avabur.
There's a subreddit if you want to check it out here:
[url]http://reddit.com/r/relicsofavabur[/url]
I've made some plugins and such
[QUOTE=Fourier;50156723]Kinect has only one IR camera, and another color camera. (and IR pattern projector, just as you said)[/QUOTE]
To elaborate on that: it projects a dot pattern where the dots' respective offsets from a perfect grid form (afaik) a 2D [URL="http://datagenetics.com/blog/october22013/index.html"]De Bruijn[/URL] pattern.
This means that even when part of the image is out of range or not reflected back otherwise, it's still possible for the device to determine the correct reference location for each dot (up to a certain extent).
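The useful property here is that in a de Bruijn sequence every length-n window is unique, so seeing any n consecutive dots pins down exactly where in the pattern you are. A 1D sketch using the standard construction (the Kinect's actual 2D pattern is more involved):

```python
def de_bruijn(k: int, n: int) -> list:
    """De Bruijn sequence B(k, n): every length-n string over k symbols
    appears exactly once when the sequence is read cyclically."""
    a = [0] * k * n
    seq = []

    def db(t: int, p: int) -> None:
        # standard recursive construction via Lyndon words
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq
```

For B(2, 3) this yields a cyclic sequence of length 8 in which all eight 3-bit windows are distinct.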
[QUOTE=icantread49;50150994][vid]https://zippy.gfycat.com/SinfulAgileAxolotl.webm[/vid][/QUOTE]
So, I'm looking for a level designer. Where do you guys think I should post an ad? I was thinking of making a thread in GGD (with some more videos and info), but I PM'd the mods for permission and they haven't responded yet. Thoughts?
[vid]https://zippy.gfycat.com/LeafyBlankIberiannase.webm[/vid]
[QUOTE=CopperWolf;50156862]I managed to coax my neural network into approximating sinusoidal functions:
[IMG]http://puu.sh/onkjP/ac388ec179.png[/IMG]
Left is [y = (sin(x)+1)/2], right is [y = (cos(x)+1)/2].
I had to switch to a sinc function for neuron activation. When I tried various other activation functions, the networks converged to outputting a constant and ignoring the input. I also had to rework the mutation settings. Mutation chance now decreases as the networks become more fit. This change brought me up to 4 significant figures on linear functions. It's still not amazingly accurate on sinusoidals, but it's good enough to assure me that this network wasn't a complete waste of my weekend.[/QUOTE]
This might be a stupid question but why not run multiple parallel copies and remove the least fit every X generations (averaging two other random nets out of a set of say, ten [breeding]), or at least change the % mutation in weights to exponentially slow down as it approaches maximum fitness?
[QUOTE=Radical_ed;50157441]This might be a stupid question but why not run multiple parallel copies and remove the least fit every X generations (averaging two other random nets out of a set of say, ten [breeding]), or at least change the % mutation in weights to exponentially slow down as it approaches maximum fitness?[/QUOTE]
The primary reason (actually the only reason) I'm not running the networks in parallel is that my understanding of threading is abysmal. I really should look into that, as run times are getting annoying.
I'm working with populations of 100 networks currently, and dropping the worst-performing 50 from each generation. I do have a breeding system in place, but rather than averaging each value between the two networks to create an average network, it basically flips a coin at each value and copies it directly from one of the parent networks.
I definitely do want the % mutation to slow down, and it currently does, but figuring out exactly how fast it should slow down is tricky, and mostly trial and error. Currently fitness = A/abs(dist), where dist is the distance between the desired result and the actual result, and A is an arbitrary constant. Mutation chance is B / fitness, where B is another arbitrary constant. So mutation chance is currently C * abs(dist), with C replacing the ratio of the arbitrary constants. So as results get closer to the desired result, the network becomes linearly less likely to mutate.
As per your suggestion, I'm going to attempt a few thousand generations with fitness = A / (dist^2), so that mutation slows down quadratically rather than linearly.
Writing this out has actually helped me see a couple errors in the way I'm doing this. I might actually need to reduce the mutation size and leave the mutation likelihood the same to get more precise convergence.
If I can get this to 10 significant figures of accuracy within 5000 generations then I'm not gonna bother trying to make it more precise.
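For anyone following along, the selection and breeding scheme described above boils down to something like this (population size and constants are placeholders):

```python
import random

def fitness(dist: float, A: float = 1.0) -> float:
    """fitness = A / |dist|, with a small guard against division by zero."""
    return A / (abs(dist) + 1e-12)

def crossover(parent_a: list, parent_b: list) -> list:
    """Coin-flip crossover: each value is copied directly from one parent."""
    return [a if random.random() < 0.5 else b
            for a, b in zip(parent_a, parent_b)]

def next_generation(population: list, scores: list) -> list:
    """Drop the worst-performing half, refill by breeding random survivors."""
    ranked = [net for _, net in sorted(zip(scores, population),
                                       key=lambda t: -t[0])]
    survivors = ranked[:len(population) // 2]
    children = [crossover(random.choice(survivors), random.choice(survivors))
                for _ in range(len(population) - len(survivors))]
    return survivors + children
```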
1. Got drunk and joined IEEE. [I]wild[/I]
2. Someone check my math for me pls
[url]http://alissa.ninja/stochatta/about.html[/url]
Long time lurker, first time poster.
I just finished up my first Ludum Dare entry (and first completed game): CombiBox
[t]http://puu.sh/onzWS/f11734f759.jpg[/t]
Short in game clip:
[URL]http://gfycat.com/DisfiguredOccasionalCur[/URL]
Ludum Dare ended up being just after I finished exams this quarter, so it was perfect timing for me to spend all weekend on it.
Download links are on the Ludum Dare page here: [URL="http://ludumdare.com/compo/ludum-dare-35/?action=preview&uid=93256"]CombiBox on Ludum Dare[/URL]
More fun with orbits. I'm planning to integrate this into a game in real time, so taking two minutes to calculate some of the longer trajectories is perhaps... a little too long.
So firstly, I've drastically cut down on my search space but upped the number of recursive subdivisions a little. This means I'm less likely to find 'fun' trajectories with lots of gravitational intercepts, but it's significantly faster.
I've also got a system for adaptively reducing the number of ticks calculated, to cut wasted calculations after the intercept is achieved. There's quite a lot of margin for error built into the waste-tick removal, so that if a more optimal (but longer) intercept is discovered as we subdivide the regions we're exploring, it doesn't get mistakenly eliminated. This is particularly problematic for very distant planets like Pluto (it's my simulation, I can classify it however I like).
I've also added support for automatic circularisation: once the probe reaches its target (within a margin of error, generally the body radius plus a fixed constant distance), it automatically calculates the correct trajectory to put the probe into orbit about the target body. This is fairly easy to do: the parent's delta-pos is effectively its orbital velocity (we need the real velocity, not the mean), so you take the perpendicular between the probe and the body, calculate the orbital velocity around the body, and then add the parent's orbital velocity to this. As far as I can tell, this gives you a correct velocity to orbit at.
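The circularisation step above, sketched in 2D (mu = G × body mass; vectors are plain tuples; the choice of prograde vs retrograde direction is arbitrary here):

```python
import math

def circularise(probe_pos: tuple, body_pos: tuple,
                body_vel: tuple, mu: float) -> tuple:
    """Velocity that puts the probe on a circular orbit around the body."""
    rx, ry = probe_pos[0] - body_pos[0], probe_pos[1] - body_pos[1]
    r = math.hypot(rx, ry)
    v_circ = math.sqrt(mu / r)          # circular orbital speed at radius r
    px, py = -ry / r, rx / r            # unit vector perpendicular to radius
    # orbital velocity relative to the body, plus the body's own velocity
    return (body_vel[0] + v_circ * px, body_vel[1] + v_circ * py)
```

The resulting velocity is tangential to the radius and, relative to the body, has exactly the circular-orbit speed sqrt(mu/r).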
[IMG]https://dl.dropboxusercontent.com/u/9317774/calculated.PNG[/IMG]
This is a completely computer-calculated Earth -> some-gas-giant intercept, followed by orbit. The manoeuvres, particularly the circularisation, are not especially realistic, as the probe can sometimes just flip 180 degrees in the opposite direction (although I'm going to fix this somewhat). I've sadly given up on super-strict realism, as I'm trying to make this into a game (so no brute-force simulation; it's calculation all the way). Orbiting the rocky planets is super hard due to their high orbital velocity, low gravity, and possibly the relative inaccuracy of the simulation, so I'm trying to make that work at the moment.
[QUOTE=Number-41;50155847]Well yeah the Kinect sort of operates on the sample principle, but it uses IR light so my guess is that it will only be able to detect deformations of >700nm (which is sufficient I think for heart beats :v:, although that's only part of a huge problem)
I'm wondering if this technique is actually used for fluid dynamics measurements...
[Editline]18th April[/editline]
It [URL="http://www.acs.psu.edu/drussell/Publications/Hologram-AJP.pdf"]is[/URL]!
[Editline]18th April[/editline]
As cartman300 pointed out, it does not work with interferometry but somewhere in the process you do end up with a phase-wrapped image.[/QUOTE]
what's your thesis?
[img]http://51.254.129.74/~anon/files/1461086649.png[/img]
Ah, finally properly parsing the C++ types and putting functions in proper places
White are namespaces, green are classes and blue are functions
[QUOTE=krail9;50160071]what's your thesis?[/QUOTE]
Well, just that: implementing real-time digital holography, perhaps with a few results like measuring thermal expansion or something like that.
Some progress on my Alien Carnage remake.
Today was HUD day.
I implemented most of the HUD (no radar yet, because no enemies) and added fancy edges at the sides.
[vid]http://a.pomf.cat/nftvlx.mp4[/vid]
Next up: Doors and enemies.
[QUOTE=cartman300;50155890]You can't directly reference native dlls in VC++ like you can with managed dlls in C#, it's still the standard .lib and headers procedure. You CAN make a lib file for linking if you only have a dll tho
[code]
dumpbin /exports library.dll > funcs.def && lib /def:funcs.def /out:library.lib
[/code]
Headers are also nice to have but not mandatory, if the names are mangled you can get the full prototypes by demangling them.
Talking about this, i should continue working on my C++ -> C# binding generator, it should even be possible to merge the unmanaged library and managed wrapper into a single dll.
[editline]18th April 2016[/editline]
No, the Kinect works on another principle, it projects fancy light patterns and uses two cameras to infer depth.
[img]http://51.254.129.74/~anon/files/1461004990.png[/img]
[editline]18th April 2016[/editline]
That gives me an idea. I suppose it could be possible to infer depth using a single camera and a projector projecting a filled grid circle pattern and comparing circle sizes. Wouldn't be very precise.[/QUOTE]
Just to add, this is only true for the first-generation Kinect. The second-generation Kinect uses a time-of-flight camera to measure depth; there are some details at [url]http://www.gamasutra.com/blogs/DanielLau/20131127/205820/The_Science_Behind_Kinects_or_Kinect_10_versus_20.php[/url] but I'm sure more information is available elsewhere. It's a very interesting device which is unfortunately used for some pretty crap games.
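The time-of-flight principle itself is one line of arithmetic: the light travels out and back, so depth is half the round trip times the speed of light. A trivial sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_depth_m(round_trip_s: float) -> float:
    """Depth from a time-of-flight measurement (out-and-back, hence the /2)."""
    return C * round_trip_s / 2.0

# a round trip of roughly 6.7 nanoseconds corresponds to about 1 m of depth,
# which is why ToF sensors need picosecond-scale timing precision
```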
oi
[img]http://jesusfuck.me/di/Y5NK/die.png[/img]
Would there be any interest here in a bot tournament for Hockey, now that there's an API out?
We're making a video game about hexagons
[video=youtube;vX1s9reuULs]https://www.youtube.com/watch?v=vX1s9reuULs[/video]
[URL]http://www.pcgamer.com/mosaic-the-new-game-from-the-makers-of-among-the-sleep-may-be-hiding-an-arg[/URL]
This thread is now an ARG. I'm sorry guys.
A bit of a cross post from the UE4 Dev thread, but I'd like to introduce a project i'm working on to you guys as well :
[url=http://www.willofthegods.com/]Will of the Gods[/url] is an arcade-blended-with-RTS 1v1 God game where the goal is to guide more 'followers' to your temple than the other God, while dissuading the enemy 'heathens' from reaching theirs.
[media]https://www.youtube.com/watch?v=hwMNBk1Gtvs[/media]
The team started working on the game as part of the 2016 Global Game Jam, where it won 2nd place and the People's Choice award. A bit after that I hopped on as the marketing/web guy and we've been working on the game for about 6 weeks now.
Recently we got some [URL="https://www.youtube.com/watch?v=kN4skbFkOqY"]gamemode modifiers working[/URL] (stuff like 'random lightning strikes occur around the map'), and we got multiplayer via Steam running as well (sorta). And probably most importantly: you can now assplode the little neutral critters by zapping them.
I'd love to hear what you think.
[QUOTE=Prism;50166936]
I'd love to hear what you think.[/QUOTE]
I think that looks like a very interesting game mechanic. I especially dig the very polished art style and sound design. You guys did a really fantastic job on that!
I'm fooling around with DirectInput hooks. Small steps!
[t]http://i.imgur.com/dX9C7Ju.png[/t]
And just like that, I finished a mod that should make mouse input in Metro: Last Light 1:1, rather than that annoying x/y sensitivity difference.
Anybody up for helping me test it on different setups? If so, drop me a PM. The input hooking will eventually be open source, even if it's an ugly steaming piece of shit.
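The core of the fix is effectively one ratio: if the game applies different per-axis sensitivities, rescaling one axis by their ratio restores 1:1 movement. A toy sketch of just that arithmetic (the hook machinery and the actual sensitivity values are out of scope and assumed known):

```python
def normalise_delta(dx: float, dy: float,
                    sens_x: float, sens_y: float) -> tuple:
    """Rescale the y delta so both axes effectively share the x sensitivity."""
    return dx, dy * (sens_x / sens_y)
```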
Merge broke, sorry mods :(