[QUOTE=Darwin226;46805436]Is GLES 1.1 fixed-pipeline stuff?[/QUOTE]
Yup. Things got to the point where I need to do some nice gradients, refractions, and distance-field text rendering.
[QUOTE=voodooattack;46805859]It was too CPU intensive, even on my i7. I abandoned the idea for now. I'll wait for quantum computers before trying to model organelles.[/QUOTE]
What was the bottleneck? Physics?
[QUOTE=Ott;46805877]What was the bottleneck? Physics?[/QUOTE]
Yeah, it couldn't handle small populations. I guess it could have used GPU acceleration, but I didn't find any GPU-accelerated 2D physics libraries at the time, and I didn't have the background to make my own.
[QUOTE=Tamschi;46805099]On Windows it would be a better idea to just use .NET, considering that's updated through Windows Update (or at least Microsoft Update).[/QUOTE]
True, but it's a good idea to target the Mono platform for cross-platform projects right off the bat, so that you don't later have to change your code when you discover something Mono doesn't support (I learned the hard way). You can then build your applications for both I suppose, and distribute the .NET one to Windows users - on that note, is there anything Mono has that Windows .NET doesn't support?
I have a feeling I messed up somewhere. :v: (Give it a second)
[vid]https://dl.dropboxusercontent.com/u/27247197/galaxia3.webm[/vid]
[QUOTE=Trumple;46806080]True, but it's a good idea to target the Mono platform for cross-platform projects right off the bat, so that you don't later have to change your code when you discover something Mono doesn't support (I learned the hard way). You can then build your applications for both I suppose, and distribute the .NET one to Windows users - on that note, is there anything Mono has that Windows .NET doesn't support?[/QUOTE]
Yes: Circumventing security settings due to missing support.
(I think the only difference is that you can compile into a standalone executable using Mono and technically don't have to install it system-wide. Other than that it's a strict subset afaik.)
[QUOTE=Billy2600;46806090]I have a feeling I messed up somewhere. :v: (Give it a second)
[vid]https://dl.dropboxusercontent.com/u/27247197/galaxia3.webm[/vid][/QUOTE]
You appear to have a few bugs in your program.
Does anyone have a good resource on how to make a Doom-like game engine using SFML or the like (in C#)? I've been wanting to look into this for quite some time now, but can't find a good resource. Note that I have no OpenGL experience :P Performance isn't a top priority either; it's mainly something I've been wanting to do for a long time, so I don't want to go the Unity route, because that would be easier.
[QUOTE=Arxae;46806670]Does anyone have a good resource on how to make a Doom-like game engine using SFML or the like (in C#)? I've been wanting to look into this for quite some time now, but can't find a good resource. Note that I have no OpenGL experience :P Performance isn't a top priority either; it's mainly something I've been wanting to do for a long time, so I don't want to go the Unity route, because that would be easier.[/QUOTE]
There's not really a good resource specifically on making a "Doom-like game engine using SFML"; there are just resources on how to use SFML, and you work your way up from there. Get to know OpenGL (through, for instance, [url=http://open.gl]open.gl[/url], an official site like the OpenGL pages, or the daunting docs.gl page).
That said, there are a few books on the subject, but if you're more into getting things done fast, I'd just use Unity. If you *really* want to get your hands dirty, prepare to take your time, because these things [b]do[/b] take time, and a lot of it.
Move! You beautiful bastards.
[vid]https://dl.dropboxusercontent.com/u/27714141/cilium1.webm[/vid]
Those are spiking neural networks controlling them. They're not very good yet, but that's because the genomes are randomly generated. No evolution yet.
In case you're wondering, spiking means that neurons can fire on their own. Unlike conventional artificial neural networks, spiking neural networks are capable of autonomous behaviour. They fire at will.
[QUOTE=Amiga OS;46806812]I've just finished the API & DB caching classes, onto the UI!
[t]https://s3.amazonaws.com/pushbullet-uploads/udxZS-YCa6hVqmRAEG24U4HTgEXZqLf3nPXoxn/DFG_2014-12-28-01-02-12.png[/t][/QUOTE]
RIP
[QUOTE=Arxae;46806670]Anyone has a good resource on how to make a doomlike game engine using sfml or the like (in C#)? Been wanting to look into this for quite some time now, but can't find a good resource helping me. Do note that i do not have opengl experience :P Performance isn't a top priority either. It's mainly something ive been wanting to do for a long time. So i don't want to go the unity way because that would be easier.[/QUOTE]
I made a Wolfenstein 3D-style engine using SFML. My recommendation? Use SDL. If you want to do old-school software rendering, you'll want control over every pixel, which SDL is much better suited to.
Doom uses lines instead of tiles for level data; this will probably be your biggest challenge. To speed up rendering, Doom uses BSP trees, but there are alternatives, such as the Build Engine's area portals. Another challenge is rendering sprite rotations based on the rotation of the player. I have code that accomplishes this if you'd like to look at it, or you could look at the source to something like Wolf4SDL. Other than that, and some limited Z checking for 'things' like rockets, Doom's engine isn't that much different from a 2D game's engine.
[QUOTE=Cold;46804975]So how are you planning to lit and render the volume.
The normal approach of raycasting would have a pretty significant overhead, when you upscale the volume.[/QUOTE]
Currently the process of upscaling *itself* is fairly expensive: going from 200x200x200 -> 400x400x400 costs 16ms (though it's not tremendously optimised). Rendering a 400x400x400 volume as a point cloud costs ~1.5ms or so, so I'm not expecting rendering overall to take much longer.
At the moment I'm planning to just test how long a naive raycast costs. If that's too slow, I'll probably do some sort of isosurface extraction, possibly surface nets because they seem fast. I might also try to come up with something better that fits my requirements (not tremendously accurate, smooth), or maybe even use some kind of horrible voxel splatting.
It's next on the list of things to do, but unfortunately there are a lot of possibilities and no simple answer so far.
Currently I am restructuring some classes related to language-specific formatting into read-only and read-write versions. The more I read my old code, the more apparent the difference is between code I wrote when I had too much coffee, and code I wrote when I didn't have enough. Holy shit, what a bizarre mess.
But it's going to look [I]glorious[/I] when I'm finished.
[QUOTE=Icedshot;46807495]Currently the process of upscaling *itself* is fairly expensive: going from 200x200x200 -> 400x400x400 costs 16ms (though it's not tremendously optimised). Rendering a 400x400x400 volume as a point cloud costs ~1.5ms or so, so I'm not expecting rendering overall to take much longer.
At the moment I'm planning to just test how long a naive raycast costs. If that's too slow, I'll probably do some sort of isosurface extraction, possibly surface nets because they seem fast. I might also try to come up with something better that fits my requirements (not tremendously accurate, smooth), or maybe even use some kind of horrible voxel splatting.
It's next on the list of things to do, but unfortunately there are a lot of possibilities and no simple answer so far.[/QUOTE]
Well, if you were to, for example, put your camera right inside the smoke so that it covers your whole screen, then in the raytracing worst case you'd be doing fairly simple logic, but you'd be accessing the data ~400 times (the depth of the voxel data) for every single pixel on the screen.
Yet as far as I know it's pretty much your only option: extracting an isosurface would still require you to raytrace the contents to get transparency and lighting. Slicing would probably perform worse, and splatting and shear-warping have such significant visual issues that I'm not sure you can fix them.
[QUOTE=Amiga OS;46806812]I've just finished the API & DB caching classes, onto the UI!
[t]https://s3.amazonaws.com/pushbullet-uploads/udxZS-YCa6hVqmRAEG24U4HTgEXZqLf3nPXoxn/DFG_2014-12-28-01-02-12.png[/t][/QUOTE]
Be sure to read the [url=http://www.google.com/design/spec/components/cards.html#cards-usage]Google design guidelines[/url]
[t]http://i.imgur.com/iarNjw7.png[/t]
[QUOTE=voodooattack;46806860]Move! You beautiful bastards.
[vid]https://dl.dropboxusercontent.com/u/27714141/cilium1.webm[/vid]
Those are spiking neural networks controlling them. They're not very good yet, but that's because the genomes are randomly generated. No evolution yet.
In case you're wondering, spiking means that neurons can fire on their own. Unlike conventional artificial neural networks, spiking neural networks are capable of autonomous behaviour. They fire at will.[/QUOTE]
No wonder there's no evolution; these guys can't even swim, let alone find an egg cell.
So, I bought a router to do some trials on how well it fares as a low-spec, low-consumption file server.
Knowing that for this application I needed storage, I opted for [url=http://wiki.openwrt.org/toh/tp-link/tl-wr842nd]this one[/url] because of its USB capabilities. I'm using the v2.0 model.
First things first, the default software is terrible, so...
[img]http://i.imgur.com/lSY2fx7.png[/img]
Not too shabby! The process was painless: there was a trunk image ready for it, and I just needed to flash it using the default TP-Link software's upgrade facilities.
I decided to do a performance test, so I configured OPL on my PS2 and a samba daemon on my router, a few taps on my controller and...
[img]http://i.imgur.com/7CwUFv1.png[/img]
ISO backups of PS2 games being streamed from the USB thumbdrive connected to the router, with 30% CPU usage when streaming FMVs. It also stutters less than my notebook-as-network setup when doing so.
This is the internal memory after installing nearly all packages I'm going to use (git, ssh, sftp, smb, rsync, etc, only missing the irc bouncer)
[code]root@OpenWrt:~# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
rootfs                    4928      3460      1468  70% /
/dev/root                 2048      2048         0 100% /rom
tmpfs                    14336       584     13752   4% /tmp
/dev/mtdblock3            4928      3460      1468  70% /overlay
overlayfs:/overlay        4928      3460      1468  70% /
tmpfs                      512         0       512   0% /dev[/code]
Early conclusion: OpenWrt is awesome, and it's entirely possible to use one of these to host files and stuff if you don't need too much speed. The USB port is capable of delivering ~900mA and seems to be OK with a constant 700mA, which should be enough to power an external USB HDD, I guess. I haven't tested rsync performance or an external HDD yet.
Having the weirdest bug...
Still trying to make a path tracer, and now I have 3 different possibilities:
I get this, which makes sense as it should be very grainy:
[thumb]http://i.imgur.com/y0w9haT.png[/thumb]
However, sometimes I get this:
[thumb]http://i.imgur.com/1uNrZk4.png[/thumb]
And other times, I get this, which seems to be a combination of the two.
[thumb]http://i.imgur.com/vfy54NY.png[/thumb]
I am at a loss here so I guess I'll go adjust things and hope for the best :(
[editline]derp[/editline]
Wait what..
Removed all multi threading, and somehow it works:
[thumb]http://i.imgur.com/MCuoeJs.png[/thumb]
[QUOTE=Cold;46808536]Well, if you were to, for example, put your camera right inside the smoke so that it covers your whole screen, then in the raytracing worst case you'd be doing fairly simple logic, but you'd be accessing the data ~400 times (the depth of the voxel data) for every single pixel on the screen.
Yet as far as I know it's pretty much your only option: extracting an isosurface would still require you to raytrace the contents to get transparency and lighting. Slicing would probably perform worse, and splatting and shear-warping have such significant visual issues that I'm not sure you can fix them.[/QUOTE]
Why would I need to trace through the whole voxel data? For smoke I only need to extract and render the surface + a certain number of voxels in (if I want transparency). With the camera in the smoke, it'd just find that the surface covers the screen and terminate immediately?
[QUOTE=xThaWolfx;46810347]Having the weirdest bug...
Still trying to make a path tracer, and now I have 3 different possibilities:
I get this, which makes sense as it should be very grainy:
[thumb]http://i.imgur.com/y0w9haT.png[/thumb]
However, sometimes I get this:
[thumb]http://i.imgur.com/1uNrZk4.png[/thumb]
And other times, I get this, which seems to be a combination of the two.
[thumb]http://i.imgur.com/vfy54NY.png[/thumb]
I am at a loss here so I guess I'll go adjust things and hope for the best :(
[editline]derp[/editline]
Wait what..
Removed all multi threading, and somehow it works:
[thumb]http://i.imgur.com/MCuoeJs.png[/thumb][/QUOTE]
I had the same problem once when I tried C#'s parallelizing functions (Parallel.ForEach) and it put random garbage in the renderer.
Looks like you need thread-safe data structures or stuff gets funky.
[QUOTE=Icedshot;46810382]Why would I need to trace through the whole voxel data? For smoke I only need to extract and render the surface + a certain amount of voxels in (if I want transparency)[/QUOTE]
Well, you can prematurely end the raytrace when certain conditions have been met, such as the density being so high that continuing would have no impact.
But in the worst-case scenario (a grid that's empty with no optimization, or filled with very low-density smoke) you'll have to trace the whole thing.
[QUOTE=Fourier;46810487]I had the same problem once when I tried C#'s parallelizing functions (Parallel.ForEach) and it put random garbage in the renderer.
Looks like you need thread-safe data structures or stuff gets funky.[/QUOTE]
Yeah, I had the same thought. It's still nice to play around with, even on one thread:
[thumb]http://i.imgur.com/4FZqccB.jpg[/thumb]
I did some OpenGL stuff a while ago. I got pretty far, but it was made in such an unusable way, with no support for additions.
So I started over from scratch, making sure (or at least trying) to do it in a smart way.
For now, two shaders being used at once - which I remember having major problems with last time.
[img]https://dl.dropboxusercontent.com/u/99717/TwoShaders.gif[/img]
Currently working on MLG Pong / MLG Brick Breakers, it's 0% original and everyone will hate it, but hey, I've got spare time and I'm enjoying adding silly elements to it.
Video here:
[url]https://www.youtube.com/watch?v=Bmk-l0GGZBI[/url]
Download it here if you want:
[url]https://www.dropbox.com/sh/7qadc34n1u6wc7d/AABa_eWDciXVbjHOjzExPu6Ta?dl=0[/url]
[QUOTE=xThaWolfx;46810347]Having the weirdest bug...
Still trying to make a path tracer, and now I have 3 different possibilities:
I get this, which makes sense as it should be very grainy:
[thumb]http://i.imgur.com/y0w9haT.png[/thumb]
However, sometimes I get this:
[thumb]http://i.imgur.com/1uNrZk4.png[/thumb]
And other times, I get this, which seems to be a combination of the two.
[thumb]http://i.imgur.com/vfy54NY.png[/thumb]
I am at a loss here so I guess I'll go adjust things and hope for the best :(
[editline]derp[/editline]
Wait what..
Removed all multi threading, and somehow it works:
[thumb]http://i.imgur.com/MCuoeJs.png[/thumb][/QUOTE]
I am assuming you're using C# here, so here goes.
It's funny you mentioned this, because I encountered the same problem when implementing multithreading into my ray tracer (for reference, issues such as [URL="http://puu.sh/dumwu/ecd45105c3.jpg"]this[/URL]). Want to know why it works when you remove the multithreading? I imagine it's because you're using the same Random instance between threads. [B]Random is not thread-safe,[/B] and using the same one across threads will often corrupt it and return bogus results. The solution would be to create a separate Random instance for each thread and use that - it's how I overcame the issue.
how terrible is this on a scale from 1 to 10?
[code]void Enemy::push(float dt)
{
    if (beingPushed)
    {
        pushTime += dt;
        if (pushTime < 0.45f)
        {
            // Exponential decay: fast at first, tapering off. Fancy math, bitch.
            float speed = 256.0f * exp(-9.467f * pushTime) * dt;
            cout << speed << endl; // debug

            if (pushDirection == "right")
                x += speed;
            else if (pushDirection == "up")
                y -= speed;
            else if (pushDirection == "left")
                x -= speed;
            else if (pushDirection == "down")
                y += speed;
        }
        else
        {
            beingPushed = false;
            pushTime = 0;
        }
    }
}[/code]
it's the method for pushing things (enemies) around after they get hit.
wish I could show it, but I don't know how to .gif/.webm easily
[QUOTE=war_man333;46811030]how terrible is this on a scale from 1 to 10?
[code]
...
pushTime += dt;
...
[/code]
it's the method for pushing things (enemies) around after they get hit.
wish I could show it, but I don't know how to .gif/.webm easily[/QUOTE]
Don't.
Imagine you didn't use vsync or FPS caps and had a very high frame rate. Delta time can be 0 in such cases, and as a result your enemy would be pushed forever. I've encountered that before in my apps.
And over longer durations it won't be accurate at all, because the tiny float deltas accumulate precision loss.
I recommend you use global game time and manually calculate deltas each frame like this
[code]
pushTime = appTime - pushStartTime;
[/code]
[QUOTE=cartman300;46811277]Don't.
Imagine you didn't use vsync or FPS caps and had a very high frame rate. Delta time can be 0 in such cases, and as a result your enemy would be pushed forever. I've encountered that before in my apps.
And over longer durations it won't be accurate at all, because the tiny float deltas accumulate precision loss.
I recommend you use global game time and manually calculate deltas each frame like this
[code]
pushTime = appTime - pushStartTime;
[/code][/QUOTE]
I don't know how to get the appTime? I have one clock I use for everything, so essentially that's appTime, I guess?
It works pretty well right now; not sure how to capture it at 60 fps though.
[QUOTE=cartman300;46811277]Don't.
Imagine you didn't use vsync or FPS caps and had a very high frame rate. Delta time can be 0 in such cases, and as a result your enemy would be pushed forever. I've encountered that before in my apps.
And over longer durations it won't be accurate at all, because the tiny float deltas accumulate precision loss.
I recommend you use global game time and manually calculate deltas each frame like this
[code]
pushTime = appTime - pushStartTime;
[/code][/QUOTE]
This sounds like a really unlikely problem.
Maybe just use doubles for dt instead.