• What are you working on? v67 - March 2017
    3,527 replies
I've modified my shadow rendering logic so that the user can specify how many shadows per light type can be rendered per frame, where half of that number is dedicated to the most important lights (light radius / distance to camera) and the other half is filled with the oldest of the visible lights (longest time since their last rendering update). Since I switched all my light maps to texture_2D arrays where each light owns a specific spot in the array, my lighting won't break if a given light is flagged to not update. So if someone has a slow computer, the world won't flicker; some lighting data might just be slow to update, but it shouldn't end up racing and falling behind. This has made me think: what if I switch my local environment probes from cubemaps to dual paraboloids, just like my point light maps? Maybe I could shoehorn in this updating system so that every '12' seconds or so, the oldest paraboloid reflection nearest the camera updates. I'd like to try it out, but I'm uncertain to what extent screen-space reflections have to fall back on other cubemaps when they fail. If hardly ever, then there may be little point in pursuing this any further.
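The budgeting scheme described above can be sketched roughly like this. This is a minimal illustration, not the poster's actual engine code; the `Light` fields and `pick_shadow_updates` helper are hypothetical:

```python
# Sketch: each frame, half the shadow-update budget goes to the most
# "important" lights (radius / distance to camera), and the other half is
# filled with the stalest visible lights. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Light:
    name: str
    radius: float
    distance: float      # distance to camera
    last_update: int     # frame index of last shadow-map update

    @property
    def importance(self) -> float:
        return self.radius / max(self.distance, 1e-6)

def pick_shadow_updates(visible_lights, budget):
    """Return which lights get their shadow maps re-rendered this frame."""
    half = budget // 2
    by_importance = sorted(visible_lights, key=lambda l: l.importance, reverse=True)
    chosen = by_importance[:half]
    # Fill the remaining budget with the stalest lights not already chosen.
    remaining = [l for l in visible_lights if l not in chosen]
    by_staleness = sorted(remaining, key=lambda l: l.last_update)
    chosen += by_staleness[:budget - len(chosen)]
    return chosen
```

Because every light keeps its own slot in the texture array, a light that loses out on the budget this frame simply reuses its old shadow map rather than breaking.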
[QUOTE=Radical_ed;51916481][img]http://i.imgur.com/D5PGmpY.gif[/img] [img]http://i.imgur.com/HcTUJ1l.gif[/img] This lil guy turned out real cute[/QUOTE] [img]http://i.imgur.com/3o0ANbI.gif[/img] Working on a chat system right now, I might have a little demo up soon
Working on an OpenGL application and I've updated to 4.5; using DSA now and it feels so good. Now to figure out what programmable vertex pulling is.
Oh yeah, also got text rendering working for the first time in an OpenGL project: [img]http://imgur.com/diAUmXK.png[/img] Crap, automerge failed
[QUOTE=Radical_ed;51933657][IMG]http://i.imgur.com/3o0ANbI.gif[/IMG] Working on a chat system right now, I might have a little demo up soon[/QUOTE] Hey that's pretty cute :) We did the same thing and it worked pretty well for simple communication. [img]http://i.imgur.com/NoODm4K.gif[/img]
If anyone remembers me from WAYWO of two years ago, back then I started work on a train simulator (a subway train simulator). After that I worked for a while at a company where I couldn't post about what I did, and afterwards we developed our simulator in relative secrecy. But now we've announced our sim, so I can start posting progress from it again! There's not much to update in this post, just a reminder of some key features of our sim:

- Detailed systems model for the trains: all electric relays, control circuits and power circuits, pneumatics and so on are fully simulated. The train systems model is based on the actual schematics, verbatim.
- Advanced physics/dynamics model. The wheel vs rail collision is fully calculated, and all dynamics of motion of a profiled wheel on a profiled rail are recreated (hunting oscillations, etc.)
- Multiplayer support which permits multiple people to ride or control trains
- FPS-style movement with freedom to move around the train and around the world
- All track data and signalling layout is based on the real metro tracks with high precision
- Lots of graphics detail, although everything is still a work in progress at the moment!

[thumb]http://i.imgur.com/dlshA0R.png[/thumb][thumb]http://i.imgur.com/C2KkDZq.jpg[/thumb]
[QUOTE=BlackPhoenix;51935352]If anyone remembers me from WAYWO of two years ago, back then I started work on a train simulator (a subway train simulator). After that I worked for a certain while at a company where I couldn't post about stuff I did, afterwards we did development of our simulator in relative secrecy. But now we've announced our sim so I can start posting progress from it again! There's not much to update in this post, just gonna remind of some key features of our sim: - Detailed systems model for the trains - all electric relays, control circuits and power circuits, pneumatics and so on are fully simulated. The train systems model is based on the actual schematics, verbatim. - Advanced physics/dynamics model. The wheel vs rail collision is fully calculated and all dynamics of motion of profiled wheel vs profiled rail are recreated (hunting oscillations, etc) - Multiplayer support which permits multiple people to ride or control trains - FPS-style movement with freedom to move around the train and around the world - All track data and signalling layout is based on the real metro tracks with high precision - Lots of graphics detail - although everything is still a work in progress at the moment![/QUOTE] You say it's been announced, where can I find more info about this? I still remember the project from a long while ago.
[QUOTE=Cyberuben;51936135]You say it's been announced, where can I find more info about this? I still remember the project from a long while ago.[/QUOTE] We've got a website: [URL]http://subtrans.it/[/URL] And a devblog (which I'll get back to updating soon): [URL]http://devblog.foxworks.cz/[/URL] And just so it's a "what are you working on" post: I've recently been working a lot on CFD of the trains inside the tunnel, to estimate precise aerodynamics, especially how the train interacts with ventilation tunnels as it drives past them: [IMG]http://i.imgur.com/i1jHXoe.png[/IMG] Gonna do a post on that. The image above just illustrates the pressure gradient created by the metro train (moving from left to right): it compresses air ahead of itself and causes lower air pressure behind itself. There's actually a weird result that CFD keeps giving me consistently: for a train moving through a tunnel at 80 km/h, the wind around the train can reach as high as 150-160 km/h, seemingly because the air has to flow around the train through a narrower path. It's an interesting effect, and I'm currently researching whether it's in any way realistic.
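The narrow-gap effect described is at least consistent with a back-of-envelope mass-conservation argument. A minimal sketch, assuming incompressible 1-D flow and made-up cross-section areas (the post gives no real tunnel or train dimensions):

```python
# In the train's frame, air approaches at the train's speed through the full
# tunnel cross-section and must squeeze through the annular gap between train
# and tunnel wall. Volume-flow conservation gives the gap speed directly.
def gap_air_speed(v_train_kmh, tunnel_area_m2, train_area_m2):
    # A_tunnel * v_train = (A_tunnel - A_train) * v_gap
    return v_train_kmh * tunnel_area_m2 / (tunnel_area_m2 - train_area_m2)

# With a guessed ~20 m^2 tunnel and ~10 m^2 train cross-section (a 50%
# blockage ratio), an 80 km/h train yields a 160 km/h gap flow:
speed = gap_air_speed(80.0, 20.0, 10.0)  # → 160.0
```

So a roughly doubled air speed around the train, as the CFD reports, is plausible whenever the train blocks about half the tunnel cross-section.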
[QUOTE=BlackPhoenix;51936410]We've got a website: [URL]http://subtrans.it/[/URL] [...][/QUOTE] [quote=Greenlight][...] Other info While this version of the simulator is only very slightly different from [B]the version used for training real metro drivers[/B], there are no difference in simulation features between the two. The game includes a fully comprehensive training course on the train operation as part of the game's storyline, [B]as well as educational materials used by real metro drivers, provided in English and Russian[/B]. [...][/quote][emphasis mine] Does this mean you got a government contract? Either way, it's seriously impressive you can provide those documents. I assume there's normally a good amount of red copyright tape involved to have something like this as part of a commercial product.
[QUOTE=Tamschi;51936667][emphasis mine] Does this mean you got a government contract? Either way, it's seriously impressive you can provide those documents. I assume there's normally a good amount of red copyright tape involved to have something like this as part of a commercial product.[/QUOTE] Short story: the trains were built by a company that no longer exists, in a country that no longer exists. I can't answer on the government contract stuff. The educational materials have long been publicly available (although written in Russian); we will simply edit them somewhat and translate them into English.
I am having a SERIOUSLY hard time dealing with semitransparent clouds over other clouds, because I'm trying as hard as possible to only calculate lighting once per fragment, but I'm making progress. [IMG]http://i.imgur.com/icqeK4A.png[/IMG] If I bump up the turbulence, these should start to look real under certain conditions soon (and all at 100+ fps in VR on a Vive on a 970).
[QUOTE=BlackPhoenix;51935352]If anyone remembers me from WAYWO of two years ago, back then I started work on a train simulator (a subway train simulator). After that I worked for a certain while at a company where I couldn't post about stuff I did, afterwards we did development of our simulator in relative secrecy. But now we've announced our sim so I can start posting progress from it again! There's not much to update in this post, just gonna remind of some key features of our sim: - Detailed systems model for the trains - all electric relays, control circuits and power circuits, pneumatics and so on are fully simulated. The train systems model is based on the actual schematics, verbatim. - Advanced physics/dynamics model. The wheel vs rail collision is fully calculated and all dynamics of motion of profiled wheel vs profiled rail are recreated (hunting oscillations, etc) - Multiplayer support which permits multiple people to ride or control trains - FPS-style movement with freedom to move around the train and around the world - All track data and signalling layout is based on the real metro tracks with high precision - Lots of graphics detail - although everything is still a work in progress at the moment! [thumb]http://i.imgur.com/dlshA0R.png[/thumb][thumb]http://i.imgur.com/C2KkDZq.jpg[/thumb][/QUOTE] At first I was like "this is WAYWO, not 3D modelling", then I realised it was actually realtime, which was impressive enough. I live in Wales, so there's really no metro/underground here, but I recently went on the tube in London and was so curious as to what it looks like deep in the tunnels and how it would feel to drive a train through them. I really want this game; any word on a demo or alpha release?
[code]
public class NoValue {
    public class NullType<T> where T : class {
        public static implicit operator T(NullType<T> Val) {
            return null;
        }

        public static implicit operator NullType<T>(T Val) {
            if (Val == null)
                return null;
            throw new Exception("Can not convert from a value to null");
        }
    }

    public static implicit operator NoValue(NullType<object> Val) {
        return null;
    }
}
[/code]
Here's a C# class which "defines a type for null". It allows you to define a struct like this:
[code]
struct Pointer {
    public IntPtr Ptr;

    public static implicit operator Pointer(NoValue Null) {
        Pointer P = new Pointer();
        P.Ptr = IntPtr.Zero;
        return P;
    }
}
[/code]
which makes it possible to assign null to the Pointer type even though it's a struct:
[code]
Pointer P = null;
[/code]
There's no runtime overhead.
So clipping is a big problem. The main issue is that objects penetrate too far into the sword model, and it just looks terrible (mainly due to the inconsistency). I spent today creating a basic anti-clipping system: [media]https://www.youtube.com/watch?v=3Eu_mbb_9Xk[/media] For each bone in every finger, if the bone's position is too close to the sword (treated as a circle, I may need to fix this), the bone gets displaced. Importantly, I then rotate the previous bone by the rotation that the current bone has moved around the previous bone, *and* then rotate the current bone's rotation by that amount. This means you don't end up with weird disjointed-looking hands because all the bones have been displaced irregularly. It's not perfect, but it's a big improvement vs raw clippity clip clop
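The displacement-plus-rotation idea can be sketched in 2D. This is a hypothetical simplification (a circle collider and a single parent/child bone pair), not the actual system in the video:

```python
# If a bone tip dips inside the sword's bounding circle, push it radially out
# to the surface, then report how much the bone rotated about its parent
# joint, so the caller can apply the same rotation to the parent bone and
# keep the finger chain coherent instead of displacing each bone in isolation.
import math

def resolve_bone_clip(parent, tip, centre, radius):
    """Return (new_tip, delta_angle) for one bone tip vs a circle collider."""
    dx, dy = tip[0] - centre[0], tip[1] - centre[1]
    dist = math.hypot(dx, dy)
    if dist >= radius:
        return tip, 0.0          # not clipping: nothing to do
    scale = radius / max(dist, 1e-9)
    new_tip = (centre[0] + dx * scale, centre[1] + dy * scale)
    # Angular change of the bone about the parent joint:
    old_a = math.atan2(tip[1] - parent[1], tip[0] - parent[0])
    new_a = math.atan2(new_tip[1] - parent[1], new_tip[0] - parent[0])
    return new_tip, new_a - old_a
```

Propagating `delta_angle` up the chain is what avoids the disjointed-hand look: the whole finger rotates together rather than one knuckle snapping out on its own.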
It was a tremendous pain in the ass, but I got 3D simplex noise working (in some capacity): [vid]https://giant.gfycat.com/BeneficialAbleIchthyostega.mp4[/vid] It's low-res because I can't get the resolution higher without getting memcpy errors and weird shit like that, despite checking all my indexing and pointer arithmetic eight ways from Sunday. No idea why it breaks so hard and fails to copy sometimes, but it mostly happens when I try to increase the volume I compute above 1024x1024 in the XY plane, or above 100 in the Z axis.
So uh, my mouse movement is slower in release mode than it is in debug mode. I render faster in release mode, so I'm kind of confused why the effect is the opposite of what I expected (if I didn't account for a delta, I should turn faster when rendering is faster). Kind of an issue, but not big enough to fix; these render speeds are going to slow down over time anyway.
[QUOTE=WTF Nuke;51943651]So uh my mouse movement is slower in release mode than it is in debug mode. I render faster in release mode so I am kind of confused why the effect is opposite of what I expected (if I didn't account for a delta I should turn faster when the render is faster). Kind of an issue but not big enough to fix, these render speeds are going to slow down over time.[/QUOTE] How frequently do you sample the mouse position? If you sample once per frame, I don't think you need to scale it by the delta time, since you would expect a mouse moving at constant speed to have a difference in sampled position proportional to frame time anyway. [editline]11th March 2017[/editline] Small update on my WebGL map viewer: [media]https://www.youtube.com/watch?v=ewwvGsp_qDM[/media] Wasted a lot of time trying to get this working before I discovered you need to explicitly enable a depth buffer extension to be able to attach a depth texture component to a frame buffer in WebGL.
[QUOTE=WTF Nuke;51943651]So uh my mouse movement is slower in release mode than it is in debug mode. I render faster in release mode so I am kind of confused why the effect is opposite of what I expected (if I didn't account for a delta I should turn faster when the render is faster). Kind of an issue but not big enough to fix, these render speeds are going to slow down over time.[/QUOTE] Also make sure the delta time does not round down to 0 if you render too fast.
[QUOTE=cartman300;51944226]Also make sure the delta time does not round down to 0 if you render too fast.[/QUOTE] Yeah, I've had that, particularly when scaling movement by delta time. Even if it's practically impossible for the delta time itself to round to zero (assuming it's a 32-bit float), when you add it to a much larger value, like when adding a delta-scaled velocity to a position, you'll get precision issues for very small deltas.
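The precision issue described here is easy to demonstrate with stdlib-only float32 round-tripping: once the position is large, the tiny increment produced by a sub-millisecond delta vanishes entirely. The `f32` helper is just illustrative, not anyone's engine code:

```python
# Round-trip a double through a 32-bit float to mimic a float position in a
# game engine, then add a delta-scaled step far below the float32 ulp at
# that magnitude: the addition is silently lost. A double keeps it.
import struct

def f32(x):
    """Round a Python float (double) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

pos = f32(1_000_000.0)
tiny_step = 1e-5                 # e.g. velocity * a sub-millisecond delta
assert f32(pos + tiny_step) == pos                 # float32: step is lost
assert (1_000_000.0 + tiny_step) != 1_000_000.0    # float64: still moves
```

At 1e6 units the float32 spacing between representable values is 0.0625, so anything smaller than half that simply rounds away, which is why doubles (or a fixed timestep) help here.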
[QUOTE=Ziks;51944280]Yeah I've had that particularly when scaling movement by delta time. Even if it's practically impossible for the delta time itself to round to zero (assuming it's a 32 bit float), when you try to add it to a larger value like when adding a delta scaled velocity to a position you will get precision issues for very small deltas.[/QUOTE] One of the few times I found a use for the double datatype was with stuff like this, as I was having similar strange issues when doing planet rendering. The LearnOpenGL website is great, but the demo movement-handling code given out in one of the tutorials plays a bit fast and loose with typecasting and conversions, and I wouldn't be surprised if it was a common issue. Ever get really far into a project and then just want to facedesk because you have an epiphany? Currently, the method through which a user can generate data using my library totally fucking sucks. It's messy, unintuitive, and restrictive, plus it doesn't work well with CUDA, since 3D grids/computations get messy (and usually aren't necessary). Libnoise does this kinda neat thing where building a noisemap requires a "NoisemapBuilder". These come in various types, handling the population of a noisemap that's planar, cylindrical, spherical, etc. You can set the size, location, and bounds of the builder separately too, letting you get a pretty fine handle on the resolution and scale of the output. I made things tough, because I tried to make it so that each module requires the user to specify the dimensions of the output image when creating it. This made sense at the time: I need to make sure each module has the same dimensions so I don't index out of bounds, and doing this in the constructor lets me get a head start on allocating the managed memory a given module requires.
However, it turns into a lot of duplication of parameters that don't really matter when you're setting up a module chain: this setup would make sense if this were a library of standalone modules without much interconnectivity, but given that connectivity and modularity are pretty much the big thing I was after, I can do better. So I'm going to change how things work pretty heavily, by doing away with the need to specify [I]anything[/I] related to the space/positions/coordinates that will eventually be fed to the modules when setting modules up and creating a chain of them. I'll leave that to the builder classes. This also gives me a chance to address another issue: all of the data being sent to my CUDA kernels has to be together in one grid of points with no spacing, jumps, or such between them. This works fine for simple heightmaps or noise textures, but what if you wanted to query a subset of disparate, spatially scattered points? You couldn't. And for 3D noise, generating a planet like I do now would require computing noise for the AABB enclosing the planet (which is all kinds of gross), then filtering down to only the points on the surface of the planet when producing the final output data. So my new approach is to just use a super-simple aligned "Point" struct, which contains a float3 defining the point to be sampled and stores the value at that point. This increases memory usage for a single set of sampling locations/values, but the other advantages outweigh this, and using managed memory means the GPU will push things with low refcounts back to the CPU when it needs the VRAM. There could be some potential issues with the shuffling of points in the array, but this shouldn't happen (afaik), and if it does there are ways I can fix it (probably a "checker" or "sorter" kernel that verifies and re-sorts the array of points into the correct order).
We present this project on Monday though, so I'm not committing these changes back to master yet. I've just been making demos and running tests using the 2D noise functions while porting my dev branch to the new system. I'm pretty hyped to try it out; it should make things like globe generation pretty easy to set up, and should make the library easier to use if/when I release it (I'm only wary because I don't feel the quality will be high enough).
I've been trying for a while to figure out what is using bandwidth on my home network, and figured it might be time to write an [URL="https://en.wikipedia.org/wiki/SFlow"]sFlow[/URL] collector that can parse it and work it out (given that [URL="http://blog.sflow.com/2016/02/cloudflare-ddos-mitigation-pipeline.html"]we do[/URL] [URL="http://2015.peeringday.eu/media/document/cloudflare_levy.pdf"]that at[/URL] [URL="https://www.ietf.org/proceedings/97/slides/slides-97-ietf-sessb-how-to-stay-online-harsh-realities-of-operating-in-a-hostile-network-nick-sullivan-01.pdf"]work a lot[/URL]). So I set it up on my router and pointed it at my NAS (something that is always online, and already submits metrics on the network's behalf):
[code]
ben@edge# show system flow-accounting
 interface switch0
 interface tun0
 sflow {
     sampling-rate 10
     server 192.168.2.45 {
     }
 }
[edit]
[/code]
and with a little bit of janky, low-quality golang [url]https://gist.github.com/benjojo/a3a1ea49eb1178654c45c54a776e7fe4[/url] we have metrics in Grafana! [img]http://i.imgur.com/WL26LSn.png[/img]
[QUOTE=Ziks;51944017]How frequently do you sample the mouse position? If you sample once per frame, I don't think you need to scale it by the delta time, since you would expect a mouse moving at constant speed to have a difference in sampled position proportional to frame time anyway.[/QUOTE] Actually, you know what, yeah, it makes sense not to scale it by the delta. That fixed it, thanks. And yeah, you guys are right that really small deltas can be bad (especially with denormals), so I'll switch to a fixed logic timestep some time later.
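The fixed logic timestep mentioned here is usually implemented as an accumulator loop: render as fast as you like, but advance game logic only in constant-size steps, so tiny or zero frame deltas never reach the simulation. A minimal sketch with hypothetical names:

```python
# Standard fixed-timestep accumulator: consume the frame's wall-clock time,
# run update(dt) zero or more times in fixed steps, and carry the remainder
# into the next frame.
def advance(accumulator, frame_time, dt, update):
    """Return the leftover accumulated time after stepping the simulation."""
    accumulator += frame_time
    while accumulator >= dt:
        update(dt)
        accumulator -= dt
    return accumulator
```

A 0.05 s frame with dt = 0.02 runs two updates and carries ~0.01 s forward; a very fast 0.001 s frame runs zero updates instead of feeding the physics a near-zero delta.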
Apparently page 3 is where hand animations live. It's nearly over; the light is at the end of the tunnel. I've finished most of the rest of the hand animation work now. Everything works correctly as far as I can tell from the editor perspective. All that's left now is to record some actual hand animations, and to figure out where the right hand is meant to go on the sword. I might have to extend the handle a little. Oh, and there are some minor memory management issues :) [media]https://www.youtube.com/watch?v=m18C2-Na708[/media] It's worth noting that this is essentially just a random hand spasm; with a little care and reworking, the animations look much *much* smoother, with no snapping (not an artifact of the anti-clip measure, just sensor jitter). This was my 3rd take on the video though; I'm pretty tired today, so I'm giving up early. I'll probably be posting another 8000 videos later when I actually start integrating it into the fighter game. I also forgot to make it loop, so one of the transitions in the loop (close -> open) is entirely slerp'd. Still, it showcases everything I can do in the new animation system.
[QUOTE=Ziks;51944017]How frequently do you sample the mouse position? If you sample once per frame, I don't think you need to scale it by the delta time, since you would expect a mouse moving at constant speed to have a difference in sampled position proportional to frame time anyway. [editline]11th March 2017[/editline] Small update on my WebGL map viewer: [media]https://www.youtube.com/watch?v=ewwvGsp_qDM[/media] Wasted a lot of time trying to get this working before I discovered you need to explicitly enable a depth buffer extension to be able to attach a depth texture component to a frame buffer in WebGL.[/QUOTE] That brown water looks like the skin on milk when you heat it in a microwave.
[QUOTE=BlackPhoenix;51936750]Short story, the trains were built by a company that no longer exists, in a country that no longer exists. I can't answer on the government contract stuff. The educational materials have long been available in public (although written in Russian), we will simply edit them somewhat and translate into English.[/QUOTE] Have been following your progress since day one and have always played Metrostroi, glad everything is going well with the project!
[vid]https://dl2.pushbulletusercontent.com/IocLbXJAidyvWdb1yAtpbfP3aBpvsuq5/VID-20170312-WA0000.mp4[/vid] Doing more music visu stuffs!
That sounds really close to the experience I get when people play on the huge expensive pianos we got at school
[QUOTE=Sam Za Nemesis;51950828][IMG]http://volvosoftware.com/heavy/PBR/wtf_I_love_hbao_now.gif[/IMG] next up should be figuring out a fast way to do screenspace glossy reflections (under Shader Model 3!)[/QUOTE] There are horizontal line holes in your AO, every 32 pixels or so, if you haven't spotted them already.
Why are good, actually-working, complete models so hard to find? Even the tutorials don't have proper models; for example, learnopengl's nanosuit model has incorrect faces. The top result for another nanosuit model has a working .obj file with no .mtl, but the .3ds file is broken. Half the time I don't know if I'm doing something wrong or just feeding in bad data. See, now I'm figuring out that I'm doing something wrong (or possibly assimp is, though I doubt it), but on the other hand, some of the models I've tested on were also broken.
[QUOTE=WTF Nuke;51951865]Why are good actually working and complete models so hard to find? Even the tutorials don't have proper models, for example learnopengl's nanosuit model has incorrect faces. The top result for another nanosuit model has a working .obj file with no .mtl but the .3ds file is broken. Half the time I don't know if I am doing something wrong or I'm just feeding bad data.[/QUOTE] What does it look like when your models go wrong? I doubt the models passed around in these tutorials are actually broken; more likely you're making a fundamental assumption error, like assuming that:

- the vertex arrays are already sorted into faces
- the indices for faces are always explicitly triangles or quads (instead of checking the index count for a given face)
- triangle winding order is (counter)clockwise
- if using indices, the data is formatted correctly before being sent into the vertex buffers

I've had trouble with assimp with some models. It may have been an issue on my part, but I needed to triangulate all faces, otherwise I got very noticeable errors. Simply using assimp's triangulate flag wasn't good enough, because it tended to break models that were already triangulated in some cases, or had animations. There were also issues between different file formats. Because of this, I tend to only use .obj for static geometry (or collision models), and .fbx for animated models.
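For what "triangulate all faces" means in the simplest case, here's a hedged sketch of fan triangulation of a convex polygon face, the kind of thing a basic .obj loader might do by hand (assimp's triangulate post-process handles far more, e.g. concave faces, so this is only the basic idea):

```python
# Fan triangulation: a convex n-gon face from an .obj becomes n-2 triangles,
# all sharing the face's first vertex. Winding order of the input is kept.
def fan_triangulate(face_indices):
    """Split one convex polygon face (a list of vertex indices) into triangles."""
    if len(face_indices) < 3:
        raise ValueError("a face needs at least 3 vertices")
    first = face_indices[0]
    return [(first, face_indices[i], face_indices[i + 1])
            for i in range(1, len(face_indices) - 1)]

# A quad face [0, 1, 2, 3] becomes two triangles:
# fan_triangulate([0, 1, 2, 3]) → [(0, 1, 2), (0, 2, 3)]
```

This is also why index counts per face matter: assuming every face is a triangle or quad, instead of checking its length, is exactly the kind of assumption that makes an otherwise-fine model render garbage.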