What are you working on? v67 - March 2017
3,527 replies
it's more performant, yes, but it's also entirely unreasonable to ask me to write an additional renderer in their bastardized dialect of C++, using their vendor locked-in API, just to target a single platform. especially considering there already exists an open standard (Vulkan) with the exact same design goals and level of performance, but with support for multiple platforms. literally all they're doing is making life harder for developers, and they're doing it on purpose. we had this exact same situation in the 90s, where every vendor rolled their own API, and it looks like apple wants to take us back. apple's decision to deprecate OpenGL and push Metal also hurts OpenGL as a whole, since there's now even less of a reason to use it (instead of just, say, DirectX): you now only gain support for Linux instead of Linux and macOS. i wouldn't be surprised if microsoft followed suit with Windows and DirectX, based on what they're doing with UWP right now (you can't publish apps in the microsoft store if they're using a graphics API other than DirectX)
Added another rendering optimization today, but as with everything else I do, the trade-off is complexity. I've added a secondary class of renderable geometry to be used for static props which won't animate and won't move once placed. Previously, all my light sources did 100% dynamic shadows. So I gave my lights a second shadow map which gets used for static geometry only, and only fills after the level has finished loading (or at the light's own behest, such as the light itself changing). Now I can lessen my shadow-rendering footprint every frame; however, I now need to do 2 shadow lookups every frame just to combine the results. Worth it in geometry-heavy scenes.
At my internship I was tasked with creating several microservices from the ground up in C#/.NET Core. I got full control over the framework and could essentially do what I wanted. I recently started a job that uses a proprietary MVVM framework for everything. There's no creating RESTful APIs from scratch, determining the routing, etc. The framework uses reflection to read models and generate endpoints from that. Then it reads the viewmodel on the frontend and connects the fields. You literally just have to write a model class in Java and it handles the DB and HTTP stuff, then create a template with Handlebars on the frontend and a stylesheet, and it connects the fields you reference in your template with the fields from the backend.

In principle, I see that it significantly reduces development time for our clients (though there is overhead in training new hires such as myself with it). I can appreciate that it means less code and faster delivery time for the client (reducing costs). I see that it makes business sense. Additionally, if there is something that you have to reuse between projects, you can write it once and then import it as a Maven module (which is hell to set up, by the way). But holy fuck is it so not fun to develop in. I would VERY loosely call backend development with this thing "backend"; it holds your hand every step of the way, and the most control you get is extending preset models. There is a way to create truly custom fields in your models, but you have to write the frontend representation in JSP. As a form of therapy, I come home and code in C or C++ now, where I can control everything. I often find myself daydreaming about leaving and starting my own company
Was using a time differential to trigger an event: when (end - start) in ms exceeded 1000, it would make an enemy move in a random direction. After about 30 seconds he quit moving. I was very confused until I realized that I was storing the timestamp as a short, so it was just overflowing, and the end was always less than the start
Worked more on the shader wallpaper ding dong I posted a couple of months ago. Added the beginnings of a GUI for user tuning of shader uniforms today. Here's the current progress: https://www.youtube.com/watch?v=yodP5IjUxZc GitHub
been refactoring parts of the renderer to remove some of the (seemingly) arbitrary limitations. you can now go over 256 materials in the scene, and doing so will start a new batch. you're now only practically limited by the size of the materials UBO, which is 32 MiB at the moment. i could remove this limitation as well, but i'm not sure how useful that would be: it looks like we would have a really hard time hitting this limit, and all hardware i've tested this on has supported UBOs over 32 MiB.

on the performance side, i noticed the lighting calculations were taking a lot of time, so i added a depth pre-pass, which helped a lot. i also made depth-only passes take less time by using glColorMask instead of an empty fragment shader, which, in addition to the pre-pass, helped shadow map rendering performance.

before: https://files.facepunch.com/forum/upload/145623/b99fce9d-c556-4315-9817-64df0fd57387/image.png
after: https://files.facepunch.com/forum/upload/145623/febd2df0-3a06-4976-aa0f-3c4d272896be/image.png

i'm looking into implementing clustered forward shading next to further reduce the cost of lights, and to be able to support a larger number of them. right now i only have a quick and dirty forward shading implementation which is limited to 4 lights per object
I've been working on implementing the protocol Parrot drones use. Pretty much every library out there is lacking in some way, so I'm building a low-level API and a higher-level one on top of that. I just got WiFi connections and video streaming to work by breaking my Bluetooth support (doing a refactor). I haven't implemented their ARStream protocol because it's only supported by their fancy expensive drones. Sorry for the shitty footage; piloting a drone while recording is harder than it looks. https://shodan.me/stuff/mambo_drone_wifi.webm I'm using a PS4 controller for the input
Does anyone know of any form building software that would let users order a placement list for something like placement predictions?
i write physics models as part of my day job, and: fuck rolling your own here. i always roll my own shit, which is why i've not been posting here lol; i'm too busy with work, building big stuff that's hard to describe. regardless, don't bother. you'll have to work really hard to be competitive in performance and accuracy, and numerical simulation + computational physics is a hell of a rabbit hole to fall into.

other responses covered it well, but of particular note still is that the Vulkan-Metal shim doesn't support geometry shaders, because Metal doesn't do geometry shaders at all. cool
Just fixed a weird bug that would sometimes crop up in my engine, where objects would partially disappear when I got close to them. This was really weird with large models like terrain: only a small section of it could be seen in front of the camera, and the rest would fade out to black and disintegrate (discarding fragments with alpha of 0). I'm fairly certain this was because the OpenGL mipmap generation function doesn't finish immediately, and right afterwards I call the ARB bindless texture handle function (and once that's called, the texture mustn't be modified). During such an event, NSight would show the mipchain as garbage/incomplete. For the first time, I've found an actual practical use for OpenGL sync objects
This is the first time? Have you not used compute shaders before, or is there a way to have them work properly without the sync?
Managed to circumvent having to use them, so I never got around to trying them yet. Like with GPU-accelerated occlusion culling: you could use compute shaders, or you could just write to SSBOs using fragment shaders
Here's a very early executable for my shader wallpaper program if anyone is keen on playing with it. Only tested on Windows 10, though it should work on Win7+. There's probably a lot of problems, but any feedback is appreciated. Release alpha01 · Elizabwth/win-shader-wallpaper https://i.imgur.com/RSPyWaG.mp4
Public Domain Cookie Caveats – Collin Oswalt – Medium
I didn't really go into it during my last post, but I've been super busy doing a bunch of cool stuff related to my renderer project. I'm still shooting for a data-driven renderer that is stupendously configurable, and one of the key elements of achieving that is a robust shader system: so I've been doing tons of polishing of ShaderTools to try and get it into a finished state. It's actually most of the way done, and I'm in the midst of writing an article about it that I really want to actually finish, since I have so much to talk about when it comes to developing this library. At the end of the day though, it's about shader reflection - and until recently, it turns out, access qualifiers weren't being retrieved from the reflection system properly at all. In my Lua scripts used to process resources and shaders, one can specify a resource with qualifiers if they so choose:

Lights = {
    positionRanges = {
        Type = "StorageTexelBuffer",
        Format = "rgba32f",
        Size = dimensions.NumLights,
        Qualifiers = "restrict",
        Tags = { "HostGeneratedData" }
    },
    lightColors = {
        Type = "StorageTexelBuffer",
        Format = "rgba8",
        Size = dimensions.NumLights,
        Qualifiers = "restrict readonly",
        Tags = { "HostGeneratedData" }
    }
}

And this is definitely the ideal approach. But these resources are used in a number of different shaders, and knowing if they've had a readonly/writeonly qualifier applied by the optimizer helps me when I go to construct a rendergraph and want to make sure things are ordered properly, and that dependencies are properly understood. This way I can schedule submissions in the proper order, and require as little ordering information from the user as possible. Problem is, all my resources were coming back out of the reflection system as "readwrite". Every. Single. One.
But if I checked the output GLSL generated by re-parsing the binary with this same reflection system (spirv-cross), I'd see writeonly/readonly flags applied to resources that I didn't specify those qualifiers for. So it was clearly generating some sort of qualifiers as it saw fit, but how? And why weren't these being picked up when I queried the reflection system for the `spv::DecorationNonWritable` flag and the like? I decided to check the SPIR-V assembly for decorations applied to resources, and this gave me my first hint:

OpDecorate %Flags DescriptorSet 2
OpDecorate %Flags Binding 2
OpDecorate %Flags Restrict

So the only decoration applied in the SPIR-V is "Restrict", but in the GLSL code generated from this SPIR-V, the "Flags" resource looks like this:

layout(set = 2, binding = 2, r8ui) uniform restrict readonly uimageBuffer Flags;

um, what? Turns out that spirv-cross is able to set these flags based on how it sees the resource used: but it doesn't apply this "optimization" or decoration until it actually re-processes the given binary. This is done by calling "compile()" on the spirv-cross compiler class, but this fact is never really clarified; compile() only gets called because outputting the source string for some SPIR-V assembly/binary requires it. I was already using the compiler constructor that takes a binary blob as its parameter, but it turns out all it does when setting itself up is briefly parse/lint the given binary at a minimal level. So it only sees the given decoration of "Restrict", and the compiler I used doesn't bother applying the additional "NonWritable" or "NonReadable" flags when compiling this SPIR-V code. Not sure why, either.
It's unwieldy, but now I wonder if I should set up a really fucky loop: compile the GLSL generated by my shader generator to SPIR-V assembly or binary, feed this to spirv-cross and "recompile" it to see if it applies additional qualifiers, then compile the GLSL source generated from that binary back into assembly just so I can see if it actually applies the decorations specified. But christ, what an ugly process.
Ansible is fucking amazing and anyone who still sets up servers manually should stop doing that and start using ansible immediately.
I tried it out, but it spawns in a child window and not as my background; can't get the source to run at all either. Looks really cool, any advice on getting it working? I'm on Win10 at 1080p
Hey! I'm currently trying to pin down why that's happening. In the meantime, try this version: Release alpha02 · Elizabwth/win-shader-wallpaper There's a button there now which handles parenting the OpenGL context.
Works great now, will report any bugs I find.
You can add your own shaders by dropping them in the /shader/frag/ directory and giving the source file a *.glsl extension. GLSL Sandbox Gallery
I'm a fool. Second integer overflow problem I've caused for myself this week. In a tile-based movement system, I was saving the location that the player must move to (the location of the target tile) as a char, because I thought "the max map size is 128x128, so why not make the max tile to move to 127?" Only, I didn't store the TILE to move to, I stored the X COORDINATE to move to. So if you tried to move to coordinate 128, it would overflow around to -128. The game would check "Is my X coordinate the same as the MoveToX coordinate? No? Well then keep moving in the same direction I was going in before!", which took you even further from the -128 "coordinate". Int everything, INT EVERYTHING INT EVERYTHING!!!
I just requested an OAuth token on slack so I can make a simple app which changes my status automatically. Immediately, 4 of my bosses get notifications about it and are messaging me frantically saying "WHAT ARE YOU TRYING TO DO???"
I programmed a nut-o-meter for my game https://files.facepunch.com/forum/upload/58146/7a7d7a71-98e3-4908-9fad-b61d9865b5fb/imagen.png
Been doing some more work on my Half-Life bot and stuff has stopped working with MetaMod which is pretty annoying.
I'm stuck writing Software Requirements Specifications at work, and I'm really bitter because I know I'm the only one who will bother following our new documentation guidelines. Everyone else will say they're "too busy" and have no time for it. The worst part is that this has been really tough to write, as I don't have interfaces to conform to and the customer doesn't know what he wants. It's still beneficial I think, but the review of this is gonna suck: most of the people who will be at the review write VHDL when they write code, or low-level embedded systems stuff otherwise, so they don't really have a ton of awareness of the kind of things I'm trying to accomplish here :/

despite posting less here, my life is still the same overworked and underpaid (by at least 20k imo) mess. i'm writing two SRSs: one for the visualization frontend i'm building and one for the large-scale rewrite i'm doing of our spacecraft simulator. so far i'm the only one really working on this project, and i have probably about a year of work ahead of me. it gets exhausting to think about, and unless i get a serious bump in pay and some backup, i'm probably going to look elsewhere for work, just for the sake of my own health and not getting burnt out
(Audio programming - Loud) https://my.mixtape.moe/wfrfwu.mp4 Why does it keep changing pitch?
i feel your pain... i would have loved any type of documentation though
I'm tinkering again. Put a new coat of paint on the GUI. Added ~500 shaders to play with, plus mouse input. Of course, feedback and constructive criticism is very welcome. Download: Release alpha04 · Elizabwth/win-shader-wallpaper Here's a very long sample of some of the options available. https://youtu.be/-hDqOsvj0g0 Some day I'd like to add a collection of shaders with tweakable colours. Maybe an overlay which shifts the entire buffer colour?
You could parse the glsl and expose uniforms as sliders or value inputs or something like that.
That's a great idea. Unfortunately, the majority of shaders store their colour vec3s arbitrarily. This could be a lot of work, but what if I parsed the shader for vec3 variables and allowed the user to edit those? Is that a dumb idea?