What are you working on? May 2012
So I upgraded the map generation a bit to not generate vertices that cannot be seen. The vertex count is halved right now. The largest map size has grown a lot, but generating the map takes a [I]fucking long time.[/I] I'm trying to achieve the same effect with faster generation right now. [editline]5th May 2012[/editline] Great, now I made it even slower :suicide:
[QUOTE=swift and shift;35831064]holy crap it actually works [img]http://i.imgur.com/gLRdI.png[/img] [img]http://i.imgur.com/VdXSc.png[/img][/QUOTE] It does?
[img]http://puu.sh/t1VN[/img] Currently it just doesn't render faces between adjacent blocks; now to add vertex combining.
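For anyone wondering how the adjacent-face trick works: you only emit a face when the neighbouring block in that direction is air, which is what kills all the interior faces. A minimal sketch of the idea, assuming a simple solid/air grid (the names here are invented for illustration, not GamingRobot32's actual code):

```cpp
#include <array>

// Tiny 4x4x4 chunk; true = solid block, false = air.
constexpr int N = 4;
using Chunk = std::array<bool, N * N * N>;

inline bool solidAt(const Chunk& c, int x, int y, int z) {
    // Blocks outside the chunk count as air, so boundary faces still draw.
    if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N) return false;
    return c[(z * N + y) * N + x];
}

// Count the faces that actually need to be emitted: one face per
// solid block per direction whose neighbour is air.
int visibleFaces(const Chunk& c) {
    static const int dirs[6][3] = {
        {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1}};
    int faces = 0;
    for (int z = 0; z < N; ++z)
        for (int y = 0; y < N; ++y)
            for (int x = 0; x < N; ++x) {
                if (!solidAt(c, x, y, z)) continue;
                for (const auto& d : dirs)
                    if (!solidAt(c, x + d[0], y + d[1], z + d[2]))
                        ++faces;  // neighbour is air: face is visible
            }
    return faces;
}
```

Two touching cubes emit 10 faces instead of 12, since each one culls the face it shares with its neighbour.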
[QUOTE=amcfaggot;35824845]Today, I'll be working on another mock-game realization. Following the award winning, world-renown success of [URL="http://facepunch.com/threads/1131269"]Knight Game II[/URL], comes [URL="http://www.youtube.com/watch?v=7URogxWx_aE&feature=g-all-a"]Adventure Ponies: The Video Game[/URL]. [IMG]http://img839.imageshack.us/img839/9607/46954291.png[/IMG] [media]http://www.youtube.com/watch?v=7URogxWx_aE[/media] The process involves resource ripping, so the art assets will be the same as what you've seen in the video linked above. It will most likely be another "joke" game, but we'll see. ETA: 24-48 hr completion.[/QUOTE] I just had an epiphany.. [img]http://dl.dropbox.com/u/27714141/ponyhunt.png[/img] [editline]5th May 2012[/editline] Come at me bronies! COME AT ME
[QUOTE=calzoneman;35829297]I always thought [url]http://polycode.org/[/url] looked interesting (It's a C++ API and also has Lua bindings). It has wrapped 3D support: [url]http://polycode.org/learning/3d_basics[/url][/QUOTE] It also has 101 memory leaks. It really doesn't take that much effort to set up an OpenGL project. [img]http://sae.tweek.us/static/images/emoticons/emot-irony.gif[/img]
[QUOTE=GamingRobot32;35833257][img]http://puu.sh/t1VN[/img][/QUOTE] BorgCraft
I think it's time I fixed my timestep. Everything is framerate dependent and slows down/speeds up when the FPS changes. :(
[QUOTE=NovembrDobby;35835082]I think it's time I fixed my timestep. Everything is framerate dependent and slows down/speeds up when the FPS changes.[/QUOTE] I literally just fixed this in my own C++ project. In my case I have a maximum framerate. Here's the code I use:
[cpp]// Main loop
while (gameRunning) {
    if ((glfwGetTime() - lastFrameTime) * 1000 < 1000 / maxFramerate) {
        glfwSleep(0.001);
    } else {
        currentFrameTime = glfwGetTime();
        Update((GLdouble)(currentFrameTime - lastFrameTime) * 1000);
        Draw();

        // Update the FPS counter once per second
        GLdouble time = currentFrameTime;
        if (time - fpsLastUpdate >= 1) {
            // Frame time in milliseconds; 1000 / frameTimeMs = FPS
            GLdouble frameTimeMs = (currentFrameTime - lastFrameTime) * 1000;
            glfwSetWindowTitle(toString((int)(1000 / frameTimeMs + 0.5)).c_str());
            fpsLastUpdate = time;
        }
        lastFrameTime = currentFrameTime;
    }
}[/cpp]
Maybe someone could point out any obvious flaws or stupid mistakes(c).
[QUOTE=Murkrow;35826911][URL="http://i.imgur.com/zPFG6.png"]Binary.[/URL][/QUOTE] Unicorns should be able to send data directly to the machine
[QUOTE=Mordi;35835910]I literally just fixed this in my own C++ project. In my case I have a maximum framerate. Maybe someone could point out any obvious flaws or stupid mistakes(c).[/QUOTE] Setting a maximum fps isn't how you should solve your timestep. [url]http://gafferongames.com/game-physics/fix-your-timestep/[/url] This article is a great resource for learning how to fix your fps dependent issues.
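The short version of the article: instead of capping the framerate, keep an accumulator of elapsed time and step the simulation in fixed-size increments, rendering as often as you like. A stripped-down sketch of that accumulator (hypothetical names, not the article's code verbatim):

```cpp
// Fixed-timestep accumulator, per the "Fix Your Timestep" idea.
// Feed it each frame's wall-clock delta; it tells you how many fixed
// simulation steps to run, independent of render framerate.
struct FixedTimestep {
    double dt;                 // fixed step size in seconds, e.g. 1.0 / 60.0
    double accumulator = 0.0;  // leftover time not yet simulated

    // Returns how many Update(dt) calls this frame should make.
    int consume(double frameTime) {
        accumulator += frameTime;
        int steps = 0;
        while (accumulator >= dt) {
            accumulator -= dt;
            ++steps;
        }
        return steps;
    }

    // Blend factor for interpolating between the last two sim states.
    double alpha() const { return accumulator / dt; }
};
```

Each frame you'd run Update(dt) consume(frameTime) times, then draw once, optionally blending the previous and current simulation states with alpha(); a 30 fps machine and a 300 fps machine then run identical physics.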
[img]http://puu.sh/t4TV[/img] Added some buttons, gonna work on particles or something now.
Finally getting back into programming with some XNA. [IMG]http://i.imgur.com/ekQOu.png[/IMG] SpriteBatches and the content pipeline seem really confusing to me. I tried to scale up the whole rendered image, but there was an 'Effect' parameter that demanded I pass it something I couldn't find, so I just thought fuck it, I'll learn as I go. Does anyone else find it a bit fiddly?
What was that HTTP testing tool by Apache? I need it to test my Lua webserver.
[QUOTE=Nigey Nige;35836951]Finally getting back into programming with some XNA. [IMG]http://i.imgur.com/ekQOu.png[/IMG] Spritebatches and the content pipeline seem really confusing to me. I tried to scale up the whole rendered image, but there was an 'Effect' parameter that demanded I pass it something I couldn't find so I just thought fuck it I'll learn as I go. Does anyone else find it to be a bit fiddly?[/QUOTE] Effects are just shaders. You can pass in null for that, I believe.
[QUOTE=Naelstrom;35836472]Setting a maximum fps isn't how you should solve your timestep. [url]http://gafferongames.com/game-physics/fix-your-timestep/[/url] This article is a great resource for learning how to fix your fps dependent issues.[/QUOTE] I put the limit there because I wanted it there, though. The reason is that I can't get higher than 60 fps anyway, since apparently ATI drivers cap it at the screen's refresh rate. That's why you see 50 fps in most of my screenshots. My TV is 50Hz, and my laptop monitor is 60Hz. I have the framecap set to 999 anyway. :P
[QUOTE=geel9;35837095]Effects are just shaders. You can pass in null for that, I believe.[/QUOTE] It works! [img]http://24.media.tumblr.com/tumblr_m39os5ygsf1qcvmibo4_250.gif[/img]
Just did a full test on the Android version's Google Play integration :) [img]http://i.imgur.com/3mLdE.png[/img]
[QUOTE=Map in a box;35837077]What was that HTTP testing thing by apache? I need it to test my lua webserver[/QUOTE] ApacheBench (ab) [URL]https://httpd.apache.org/docs/2.0/programs/ab.html[/URL]
[QUOTE=Mordi;35837199]Reason being because I can't get any higher fps than 60 anyway, since apparently ATi-drivers puts a cap to whatever the screen's refresh rate is. That's why you see 50 fps in most of my screens. My TV is 50Hz, and my computer monitor (laptop) is 60Hz.[/QUOTE] Um, you obviously have [url=http://i.imgur.com/YSnF0.png]vsync enabled somewhere[/url] or its equivalent. I have ATI in my laptop and have no problems. And your game loop has some problems, namely that you always update and draw at the same time. All you've really done is implement a hard framerate cap similar to what vsync does. In your particular setup, if you get a very low FPS your input will most likely feel very sluggish, and if you implement physics there will be various unexpected "explosions". Read up on the link Naelstrom posted, or [URL="http://www.koonsolo.com/news/dewitters-gameloop/"]this one[/URL], which says pretty much the same thing in a different format.
Been working on performance a bit today: [img]http://i.imgur.com/gowrg.png[/img] I'm now culling faces across chunks, as well as within them - reduced the face count by ~40%. That gives me plenty of headroom for a 15 chunk draw radius :) I've also been dabbling with lighting a bit, but that's pushed back for now. Water was added and then removed between this post and the last, because I'm gonna have to render that in a separate pass to get the translucency right. [editline]5th May 2012[/editline] That could do with some fog I think.
[QUOTE=HeroicPillow;35837345]Um, you obviously have [url=http://i.imgur.com/YSnF0.png]vsync enabled somewhere[/url] or it's equivalent. I have ATI in my laptop and have no problems. And your gameloop has some problems. Namely that you update and draw at the same time, always. All you've done really is implement a hard cap on framerate similar to what vsync does. In your particular setup, if you get a very low FPS your input will most likely feel very sluggish, and if you implement physics there will be various unexpected "explosions". Read up on the link naelstrom posted, or [URL="http://www.koonsolo.com/news/dewitters-gameloop/"]this one[/URL] which says pretty much the same thing in a different format.[/QUOTE] Hmm, you're right. That's a stupid mistake(c)! Also, I meant nVidia. Not ATi. I have turned off vsync in OpenGL. I haven't looked at my driver-settings either, but I've read that some drivers simply don't allow you to turn off vsync. Edit: Oh hey, turns out there is an option for vsync. Got 600 fps now. Nice.
Phyzicle Sandbox waiting for review from Apple. Time to submit to RIM!
OpenCL renderer now has support for one texture per object under the new model of passing everything in as one large buffer, reallocated asynchronously every time a new object is added. Essentially I assigned the w coordinate of the uvw coordinates to be the object's gID (though later I'm going to change it to objectID * mip_levels), then allocated an Image3D to contain all the different textures. The texture is fairly simply retrieved within the kernel by reading from any vertex's w coordinate in the usual position (with the inverse of the uv coordinates interpolated with respect to depth as the other two coordinates). As you can see from the picture below, I should probably implement mip mapping next to fix all the horrible artifacting at a distance: [IMG]http://dl.dropbox.com/u/33076954/New/NeedsMipmapping.PNG[/IMG] The lighting falloff also needs another look; it's not right at the moment.
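For anyone following along, the packing trick is just using the third texture coordinate as a layer index into the big Image3D. A hypothetical host-side sketch of the idea (invented names, not Icedshot's actual code):

```cpp
#include <vector>
#include <utility>

// Each vertex carries (u, v, w): u and v are the usual texture
// coordinates, w selects which slice of the shared Image3D holds
// this object's texture.
struct UVW { float u, v, w; };

// Tag every vertex of an object with that object's texture layer.
// With mipmapping, the layer would become objectId * mipLevels instead.
std::vector<UVW> tagObjectUVs(const std::vector<std::pair<float, float>>& uvs,
                              int objectId) {
    std::vector<UVW> out;
    out.reserve(uvs.size());
    for (const auto& uv : uvs)
        out.push_back({uv.first, uv.second, static_cast<float>(objectId)});
    return out;
}
```

Inside the kernel the interpolated w is constant across a triangle, so it cleanly picks the slice, while u and v sample within that slice as usual.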
[QUOTE=icantread49;35837927]Phyzicle Sandbox waiting for review from Apple. Time to submit to RIM![/QUOTE] [quote] 1Release(s) for 'Phyzicle Sandbox' have been sent for Review. [/quote] Woohoo! Now I'm preparing the Android version and as soon as the other two are accepted, all three versions are going public!
[img]http://i.snag.gy/XSmGf.jpg[/img] Not pretty at the moment, but I'm remaking a game I worked on a year ago. Pretty much Beat Hazard, with lots of random weapons and an RPG-like leveling system.
[QUOTE=icantread49;35838366]Woohoo! Now I'm preparing the Android version and as soon as the other two are accepted, all three versions are going public![/QUOTE] So it's free with paid for features?
[QUOTE=ben1066;35838559]So it's free with paid for features?[/QUOTE] iOS & Android: Free with 2 paid-for upgrades (local saving/loading, community features) PlayBook: $1.99 (Marmalade doesn't expose the PlayBook native APIs yet)
Hopefully Apple app reviewers work on weekends :v:
[QUOTE=Icedshot;35838225]OpenCL renderer now has support for one texture for every object under the new model of pass everything in as one large buffer reallocated asynchronously every time a new object is added[/QUOTE] What's the performance like with this compared to plain OpenGL? Also, do you know what OpenCL was initially created for? I thought it was to allow people to run programs (calculations and such) on the GPU instead of the CPU.
Added vi-key support to my roguelike, which is making it a lot easier to learn vim.
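For the uninitiated: vi keys map hjkl to the four cardinal directions and yubn to the diagonals, which is why roguelikes love them. A throwaway sketch of such a mapping (not from any particular codebase):

```cpp
#include <utility>

// Map a vi movement key to a (dx, dy) step; y grows downward, as on
// a terminal grid. Returns (0, 0) for keys that aren't movement.
std::pair<int, int> viStep(char key) {
    switch (key) {
        case 'h': return {-1,  0};  // left
        case 'j': return { 0,  1};  // down
        case 'k': return { 0, -1};  // up
        case 'l': return { 1,  0};  // right
        case 'y': return {-1, -1};  // up-left
        case 'u': return { 1, -1};  // up-right
        case 'b': return {-1,  1};  // down-left
        case 'n': return { 1,  1};  // down-right
        default:  return { 0,  0};
    }
}
```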