• What are you working on? v67 - March 2017
I commute up from Fife to the main campus every day for an 11 AM to 7 PM shift. Traffic can be absolutely terrible in the 405 corridor. When I lived 4 miles from the office and my shift was 9 to 5, my drive could still take up to 30 minutes depending on whether there was a concert over in Marymoor Park. I technically work as a contractor, though I have my own office and don't deal with the contracting company at all, ever... Been here for 5 years and have been transferred around 3-4 different contracting companies, and of course the one that Microsoft selects to try to move the majority of people to is one of the worst of the bunch. I really wish contractors were used for more temporary stuff, and if you end up being good or proving your worth they would hire you on as an FTE. I'm sure this isn't just Microsoft though, and is just what the industry is like overall.
Cool stuff guys, thanks for the responses. Not sure about teams yet, will get in contact with HR tomorrow/Monday to figure that out and see if they'll be moving forward with an offer. Still got a few cycles with some other companies so we'll see where I end up.
Currently reformatting and updating my project comments because I completely forgot Doxygen was a thing. Gonna have a boring weekend, I guess
Thanks for the pixel art suggestion, that's cunning and I hadn't thought of it! I did just try fullscreening it on mobile though and it looked fine in the end - I guess because it's scaled up anyway so the subpixels aren't very noticeable. I'd like to add more persistence and bulk out the event system, but the thing it needs most urgently is more content. I'm also wary about expanding it too much because I want out of my current job ASAP and I need to be smart about my time expenditure - like, is it more valuable for me to spend a weekend adding persistent events or to spend that time building another app that will help me get a job? I dunno. IMO you're totally right to suggest storylines. Since the gameplay is just selecting one of a couple options then it needs some element of persistence for the player to buy-in, emotionally. Something like random named crew members who can be affected by events and prompt events of their own later on. Or something.
Need to work on my model loading a bit, but I got textures now. My first textured 3D model! https://my.mixtape.moe/dsvytl.png
Yay, gave my talk about Puppet-managed Kubernetes clusters without any major issues, so I now have time to spend working towards other things. Apparently I've been working on something that a lot of people have been silently wanting someone to do (a Hyper-V provider for fog / Foreman), so I might have to spend some time coding on that again. And as it turns out, writing a Kubernetes operator in C# is actually really easy - though something breaks when doing a real-time watch of CRD resources, which I need to look into. All of which is fun and interesting programming, with not even a single remotely graphic-able thing.
Have you used Graphviz with Doxygen yet? I found that worked really well for making things look a bit nicer, since you get some occasionally useful diagrams showing interactions between classes. It helped highlight a few areas I could target for cleanup.
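For anyone who hasn't tried it: the diagrams come from Doxygen's dot (Graphviz) integration, and a handful of Doxyfile options turn most of it on. These option names are from Doxygen's own documentation; defaults vary a bit by version, so treat this as a starting point:

```
HAVE_DOT            = YES
CLASS_GRAPH         = YES
COLLABORATION_GRAPH = YES
CALL_GRAPH          = YES
CALLER_GRAPH        = YES
DOT_IMAGE_FORMAT    = svg
```

Fair warning, the call/caller graphs can make generation noticeably slower on big codebases.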
You can now pass arguments to functions including memory locations and registers. There's also syntax highlighting. https://i.imgur.com/bkZeVjq.png
https://files.facepunch.com/forum/upload/133270/8583cf1f-3076-4f8e-9f97-eaab3ecd0669/image.png

I know it looks pretty much the same as the last video I posted a while ago, but there have been many changes under the hood. First, goodbye NPBehave: I'm not clever enough to write the code for Behavior Trees without a GUI. Hello Behavior Designer, $80 well spent. Fortunately, the code I wrote using NPBehave was easy to port to Behavior Designer. The architecture has gotten much neater as a result, which has allowed for Behavior Trees instead of Behavior Linked Lists. The GUI helped tremendously too. Unfortunately, I still basically have no idea what I'm doing and just kind of hope that it all works out.

https://files.facepunch.com/forum/upload/133270/86e00e98-846e-4f6d-b984-f07e515a1319/image.png

That's the Behavior Tree for the Delivery Ships. Still needs some tweaks, but overall I'm happy with it. If it runs into a Pirate on the way to a delivery, it'll (try to) run away. If it gets out of sensor range for long enough, it'll go back to trying to make the delivery. Then once the delivery is made, it'll fly back home, again running away from pirates on the way.

https://files.facepunch.com/forum/upload/133270/c0eb6daf-2625-48f4-8d65-d04191ce126b/image.png

This is the Pirate AI Behavior Tree. There's probably a lot of room for improvement, but it works. It chases targets weaker than itself, goes to their last known position if they get out of sensor range, and then gives up, or resumes the chase if they're found. It wanders between the midpoints of close moons/bases, since that's where trade ships will be passing the most.

The mining ship behavior tree is pretty much done, but needs "running away from danger" features. Then I still have to do: Escort Ships that'll accompany Delivery Ships to defend them from pirates. Military Ships that are used by the moons to conquer each other/generally fight.
Bounty Hunters that hunt down pirates (and associated systems, like bounty accrual). Scrapper Ships that 'eat' the wrecks created when a ship dies. AI for the Moons/Bases. Right now they just Tick to make resources, but I want them to construct additional pylons factories to produce fancier resources. Also Ships. Ships should come from factories. Also money still doesn't exist as a system or a concept in the game, but I'm sure that that'll resolve itself somehow.
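The pirate behavior described above (chase a weaker target, investigate its last known position, give up after a while) boils down to a small state machine. Here's a minimal C++ sketch of that decision logic; all the names and the timeout parameter are invented for illustration, not taken from the actual Behavior Designer tree:

```cpp
#include <cassert>

enum class PirateState { Wander, Chase, SearchLastKnown };

// What the pirate currently knows about a potential target.
struct Contact {
    bool inSensorRange;
    bool weakerThanUs;
};

// One decision step: chase weaker targets in range, fall back to their last
// known position when they slip away, and give up after a timeout.
PirateState nextState(PirateState s, const Contact& c,
                      float secondsSinceSeen, float giveUpAfter) {
    if (c.inSensorRange && c.weakerThanUs)
        return PirateState::Chase;                // start or resume the chase
    if (s == PirateState::Chase)
        return PirateState::SearchLastKnown;      // lost them; investigate
    if (s == PirateState::SearchLastKnown && secondsSinceSeen > giveUpAfter)
        return PirateState::Wander;               // give up, back to wandering
    return s;
}
```

A behavior tree buys you the same transitions declaratively, but it can help to know what flat logic the tree is supposed to reproduce when debugging it.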
Sorry to blog here so much. So, I got an offer for $54k in my current lower-COL city, and an offer for $60k in the same city. I was dead set on the $60k and probably would've accepted it, but they're taking so long to send me the letter. Last Friday I went up to a job interview in my hometown (higher COL) on a whim, and interviewed with a company I'd never heard of before. Their office is really nice and their company is bigger than my current company (the $54k one). I really nailed the interview, especially after an anecdote involving a particular personal project. I didn't think much of this interview because the position I was applying for paid $75-80k, and that's about equal to $65,000 in my current city, plus I'd have to move. I kind of took this interview because of some advice you guys gave me: that I was working on mid-level stuff at work and that my current company was lowballing me based on the kind of responsibilities I was given. Well, I just heard back from that last company, and they aren't offering me "Entry Level Software Engineer" for $75-80k, they're offering me "Software Engineer I" for $90k + a 10% performance-based bonus and a $3k signing bonus. So, I wanted to say thank you to the people who told me that I could do better than the offer I had already gotten. I feel much better about reneging on the $54k offer knowing that I'm leaving for almost double what they offered me.
I started pulling terrain data from the real world instead of generating it, to speed up my workflow, and it turned out rather nicely! By exporting terrain at a progressively lower resolution the farther away from the play field you get, it is possible to have absolutely huge scenes, which work really well with the atmospheric scattering solution I am using. Anyone want to take a guess where this is? https://puu.sh/zmhhg.jpg
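One way to get that resolution falloff (a guess at the scheme, not necessarily what's used here) is to halve the export resolution for each ring of tiles away from the play field:

```cpp
#include <algorithm>
#include <cassert>

// Ring 0 is the play field at full resolution; every ring outward exports at
// half the resolution of the previous one, clamped to at least 1 sample.
int exportResolution(int ring, int baseResolution) {
    return std::max(1, baseResolution >> ring);
}
```

Each coarser ring covers exponentially more ground for the same memory budget, which is what makes the distant mountains nearly free.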
That looks awesome! You rarely see those big dramatic slopes in procedural terrain, it's really nice to see. My guess is that famous castle in Austria, the one that Overwatch map is based on (?)
That's Castle Neuschwanstein in Germany: https://files.facepunch.com/forum/upload/106992/fb42f63a-eacc-4716-8543-328042e3262b/image.png
That looks awesome!
Thanks! It is indeed Neuschwanstein. I managed to squeeze some super-cheap shadows out of my trees by doing an additional pass with a shader that expands the billboard along the surface normal, based on the sun direction. It is just a black color blended on top of the shaded scene, so it is not technically correct lighting, but it does work really well for placing the trees in the world. https://puu.sh/zmyJG.jpg
Seems pretty cool. Is this an RTS type thing, a MOBA, or an RPG? For the stealth, each ship could have "MaxDetection", "CurrentDetection", "MaxStealth" and "CurrentStealth" int properties. Every x ticks/seconds, you set CurrentStealth to a random number up to MaxStealth, and do the same for the detection stats. Then, when you loop through the colliders to see who sees whom, compare the numbers: if the spotter's CurrentDetection is higher than the spotted's CurrentStealth, do the detected stuff.
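A minimal C++ sketch of that scheme, with the property names taken from the suggestion above (the RNG choice and reroll cadence are placeholders):

```cpp
#include <cassert>
#include <random>

struct StealthStats {
    int maxStealth;
    int currentStealth;
    int maxDetection;
    int currentDetection;
};

// Every x ticks/seconds: reroll the current values up to their maximums.
void reroll(StealthStats& s, std::mt19937& rng) {
    s.currentStealth   = std::uniform_int_distribution<int>(0, s.maxStealth)(rng);
    s.currentDetection = std::uniform_int_distribution<int>(0, s.maxDetection)(rng);
}

// In the collider loop: the spotter sees the target only if its current
// detection beats the target's current stealth for this interval.
bool spots(const StealthStats& spotter, const StealthStats& target) {
    return spotter.currentDetection > target.currentStealth;
}
```

The nice property is that a high-MaxStealth ship is hard to spot on average but never perfectly safe, since both sides reroll each interval.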
RPG, basically going for Mount&Blade in space. Ship to ship combat will be a twin-stick shooter though.
Glad to see that you found a company willing to pay you the proper rate for the type of work you're doing.
Haven't touched Android in over a year. Found some very helpful libraries, and removed the keyboard layout from some testing apps (so I can salvage connectors): https://www.youtube.com/watch?v=ttSHx3HvhoY Now I get to have cheap portable MIDI presets over USB-C, and I can pretend I know how to play the piano. It takes generic .mid files from external storage, but a lot is also just hardcoded because, goddamn, UI just makes me nuts
Oh man, OpenGL is one hell of a thing to learn. It took me quite a while, but I was finally able to bring all the parts together to be able to have "objects" as distinct units that are totally encapsulated. You just point the constructor at an .obj file and it automagically generates/binds all of the attribute/element buffers, loads all of the image files from the .mtl, etc., and then you call .render on it and all of the sub-meshes properly render while only having to swap around the element buffer for each sub-mesh, so it's fairly fast. Anyway, have cs_havana: https://i.imgur.com/GirQDCB.jpg I'm looking forward to getting better shaders worked out, this is coming together extremely nicely so far.
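For anyone curious what the sub-mesh split involves, here's a hedged sketch (not the poster's actual loader) of grouping .obj faces by `usemtl` directive, which is the part that decides how many element buffers the render loop ends up swapping between:

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// Count faces per material group in .obj text. A real loader would collect
// the face indices into one element buffer per group instead of counting.
std::map<std::string, int> facesPerMaterial(const std::string& objText) {
    std::map<std::string, int> counts;
    std::string current = "default";   // faces before any usemtl line
    std::istringstream in(objText);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "usemtl") ls >> current;   // switch active material group
        else if (tag == "f") ++counts[current];
    }
    return counts;
}
```

Each `usemtl` run becomes one sub-mesh, so the number of groups is also roughly the number of texture binds per frame for that model.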
I'm submitting some work applications for the summer so I decided it was time to grab some beauty shots of my current project. Hope you like it! http://fewes.se/imgnew/p_uhawk_03.jpg http://fewes.se/imgnew/p_uhawk_04.jpg http://fewes.se/imgnew/p_uhawk_05.jpg http://fewes.se/imgnew/p_uhawk_06.jpg
Is that HOME - Resonance? If so, huge respect to ya.
The clouds seem a bit spotty - they don't really show off the awesome thing you created
Yeah, they're made by Robert James Mieta: https://www.dropbox.com/sh/s1n2m6983x0fg6f/AAD6itVgkZWxJ_lY7S50Jswpa?dl=0
These are bloody gorgeous, dude! Really, really impressive work here.

I'm finally getting down and dirty with some of the more powerful things Vulkan can do - right now, I'm working on a clustered forward pipeline setup test. I have that mostly in place and ready to go, but I realized my current .obj file importer is lacking. This kinda led down a rabbit hole as I realized I had no real good way to handle material files or multiple texture imports, and that my current pipeline setup for binding resources was gonna suck. So I had to write a (currently primitive) texture pool class. My .obj importer just made everything one single shape and assumed one single "part" per .obj file, which is a really broken and really dumb assumption that reveals how simple things had been thus far.

So I worked on the material and texture pooling system, and got that to a satisfying spot. But then I got to reading and realized I could do better, as my current setup was like:

    bindBuffers();
    for (auto& entry : materialList) {
        auto partsUsingMaterial = objFileParts.equal_range(entry.idx);
        vkCmdBindDescriptorSets(...); // bind the material for the current range of parts
        for (auto it = partsUsingMaterial.first; it != partsUsingMaterial.second; ++it) {
            vkCmdDrawIndexed(...); // draw a single part
        }
    }

So I've got a bunch of binding calls - 24 in the case of this test Sponza setup - and then 380 (!!!) draw calls. That seemed really gross, so I decided to pool my textures into three descriptor objects inside one set, using a single 2D array per type of texture (so, an array each for diffuse, specular, normal, and roughness maps). That gets me down to one binding call before I get to dispatching draw commands, and then all I have to do is find some way to get the material index of the current model part in a shader. I still needed to reduce the amount of draw calls though, and that last problem had me stuck, so I began tackling that.
By using indexed indirect drawing, and performing a ton of sorting, I managed to reduce 380 model parts into 68 indirect drawing commands - pretty good! This is where I get really excited, though, because if the physical device supports it (and most do at this point), Vulkan will just let you dispatch all of your indirect drawing commands stored in a buffer at once. So I could reduce 380 draw calls to one draw call in total. Holy hell!

Now I still need to figure out the right layer of my material texture arrays to index into, right? Well, that itself is also going to be piss-easy. There's a built-in variable available in SPIR-V called "gl_DrawIDARB". This gives the current indirect draw index, unsurprisingly, and then I can do the following in the shader:

    layout (constant_id = 0) const uint numLayers = 10;
    layout (set = 1, binding = 0) uniform sampler2DArray diffuse;
    layout (set = 1, binding = 1) uniform material_indices {
        int data[numLayers];
    } materialIndices;

    void main() {
        vec3 uv = vec3(fragUV.x, fragUV.y, float(materialIndices.data[gl_DrawIDARB]));
        outColor = texture(diffuse, uv);
    }

All told, my drawing code will now be:

    void ObjModel::RenderObject(const VkCommandBuffer cmd) {
        bindBuffers(cmd); // binds the underlying VBOs+EBOs this class has been told to use by the asset system
        vkCmdBindDescriptorSets(...); // bind all the sets we need, including the global shared sets
        if (multiDrawIndirect) { // set as a const member variable on init of this class
            vkCmdDrawIndexedIndirect(...);
        } else {
            // welp, now we have to draw one-by-one
        }
    }

The downside is that leaving the number of layers as a specialization constant will make it harder to later unload and stop drawing various segments of the model (by modifying segments of the buffer holding the indirect draw data), so that'll be next.
I need to get my clustered forward pipeline setup first, though, then I'll be stress testing it to see how it works and trying to optimize it a bit by coalescing renderpasses together into subpasses if I can. Feeling pretty satisfied right now, though
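The sorting-and-merging step that collapsed 380 parts into 68 commands could look something like this sketch. The struct mirrors the field layout of VkDrawIndexedIndirectCommand from the Vulkan spec; the merge rule (runs that share a material and are contiguous in the index buffer) is my guess at the approach, not the poster's actual code:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Same field layout as VkDrawIndexedIndirectCommand in the Vulkan spec.
struct DrawIndexedIndirectCommand {
    uint32_t indexCount;
    uint32_t instanceCount;
    uint32_t firstIndex;
    int32_t  vertexOffset;
    uint32_t firstInstance;
};

// One sub-mesh: where its indices live and which pooled material it uses.
struct Part {
    uint32_t materialIdx;
    uint32_t firstIndex;
    uint32_t indexCount;
};

// Sort parts by material, then merge index-contiguous runs into one command
// each. cmdMaterials[i] holds the material of cmds[i] - that's the lookup
// table the shader indexes with gl_DrawIDARB.
std::vector<DrawIndexedIndirectCommand> coalesce(std::vector<Part> parts,
                                                 std::vector<uint32_t>& cmdMaterials) {
    std::sort(parts.begin(), parts.end(), [](const Part& a, const Part& b) {
        if (a.materialIdx != b.materialIdx) return a.materialIdx < b.materialIdx;
        return a.firstIndex < b.firstIndex;
    });
    std::vector<DrawIndexedIndirectCommand> cmds;
    for (const Part& p : parts) {
        if (!cmds.empty() && cmdMaterials.back() == p.materialIdx &&
            cmds.back().firstIndex + cmds.back().indexCount == p.firstIndex) {
            cmds.back().indexCount += p.indexCount;  // extend the previous run
        } else {
            cmds.push_back({p.indexCount, 1, p.firstIndex, 0, 0});
            cmdMaterials.push_back(p.materialIdx);
        }
    }
    return cmds;
}
```

The resulting vector is exactly what gets memcpy'd into the indirect buffer that vkCmdDrawIndexedIndirect consumes, so dropping a model segment later means rewriting a slice of this array.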
So we had to implement a 3d camera for class. Why is mine wonky? https://files.facepunch.com/forum/upload/133270/8c2ca7bf-e8cd-4d3a-b877-e939bf6fbacf/2018-02-16_23-44-02.mp4
I'm not sure why, but try double-checking the method that generates the projection matrix: "Perspective()"
Implemented IMGUI and got a console working (it's just redirected stdin/stdout, but it supports ~colors~). Majorly tightened up my resource management system: the full spread of data in the .mat file is now accessible to the program, so that includes support for multiple channels, emission, etc. (I also popped in a few magic values for some channels that let me do stuff like cutout transparency). Also got some shitty vertex lighting working: https://i.imgur.com/rsWdXjn.jpg
ImGUI is pretty dope. It let me do a lot of cool things in my last project.
imgui sucks [img]https://i.imgur.com/V47GR7Y.png[/img]