[QUOTE=Karmah;52424699]What are deferred cubemaps? Just generating a cubemap using 6 deferred rendering passes?[/QUOTE]
No, applying cubemaps as if they were point sources with an influence radius, so you can seamlessly blend reflections together. Should be fairly trivial to implement I guess if everything else is already running deferred.
I've been playing around with attractors
[t]https://i.imgur.com/UqHFnsv.png[/t]
[t]https://i.imgur.com/3MqV0Fj.png[/t]
[t]https://i.imgur.com/BQgESTU.png[/t]
[t]https://i.imgur.com/FlgHxAZ.png[/t]
[QUOTE=Sam Za Nemesis;52427842]Do you have a paper or even samples for that? I can have a look and this is certainly something I'd want to do with env probes[/QUOTE]
Best way to figure it out would be to look at how Epic are doing theirs in Unreal. Simple enough: point radius, sample the reflection cubemap with gbuffer normals, roughness and eye vector, render it into another gbuffer somewhere, and just keep accumulating them (with some kind of influence mask so you don't get double reflections), then apply them at the end with your lighting pass.
The issue is the accumulation part. There might be some simple solution that I've overlooked, but environment probes look really bad when they overlap.
They also look bad on borders, but having them fade out to the skybox helps a lot in that regard.
implemented that stratified poisson sampling you were talking about
[t]http://i.imgur.com/BcYnvpy.png[/t]
performance is awful though; i figure that's because i'm doing the bilinear filtering in the shader instead of in hardware
Anyone here proficient in Maple? I'm collaborating with a professor on a simulation for a new energy storage device, and I'm trying to verify the analytical solution of the differential equation. I used odetest, which outputs a long expression, and I was wondering what that output expression means.
I've solved the differential equation on MATLAB 2016b, and I'm verifying the solution on Maple, but right now, they have different analytical solutions.
[QUOTE=gangstadiddle;52428212]Anyone here proficient in Maple? I'm doing a collaboration with a professor on a simulation for a new energy storage device, and I'm trying to verify the analytical solution of the differential equation so I used odetest, and it outputs a long expression, and I was wondering what that output expression meant.
I've solved the differential equation on MATLAB 2016b, and I'm verifying the solution on Maple, but right now, they have different analytical solutions.[/QUOTE]
I doubt many here are proficient in mathematics, but post your problem, otherwise we can't help.
[QUOTE=gangstadiddle;52428212]Anyone here proficient in Maple? I'm doing a collaboration with a professor on a simulation for a new energy storage device, and I'm trying to verify the analytical solution of the differential equation so I used odetest, and it outputs a long expression, and I was wondering what that output expression meant.
I've solved the differential equation on MATLAB 2016b, and I'm verifying the solution on Maple, but right now, they have different analytical solutions.[/QUOTE]
Doing an MMath in 3rd year, hit me up.
I can't post too much on the topic since this is unpublished work, but I can post the differential equation of the problem.
[IMG]http://i.imgur.com/X961VEV.png[/IMG]
I've gotten different solutions from MATLAB and Maple so far, and I'm trying to verify the calculations.
[QUOTE=Karmah;52428067]The issue is the accumulation part. There might be some simple solution that I've overlooked, but environment probes look really bad when they overlap.
They also look bad on borders, but having them fade out to the skybox helps a lot in that regard.[/QUOTE]
Store a gradient in the alpha channel of the reflection accumulation texture (which should probably be an HDR texture) and compare with max when accumulating captures; each pixel will only have one reflection that way :)
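The max-blend idea above, sketched CPU-side in Python (all names here are hypothetical; the real thing would live in the lighting shader and work on gbuffer data, with the weight coming from the alpha channel):

```python
# CPU-side sketch of max-based probe accumulation. Each capture contributes
# (r, g, b, weight) per pixel, where weight is the influence gradient stored
# in the alpha channel. Keeping the capture with the highest weight (instead
# of summing) means overlapping probes never produce double reflections.

def accumulate_probes(width, height, captures):
    """captures: list of row-major per-pixel (r, g, b, weight) buffers."""
    # Start with zero weight everywhere; any probe beats an empty pixel.
    out = [(0.0, 0.0, 0.0, 0.0)] * (width * height)
    for capture in captures:
        for i, (r, g, b, w) in enumerate(capture):
            # max on the weight channel: the strongest influence wins.
            if w > out[i][3]:
                out[i] = (r, g, b, w)
    return out
```

A shader version would do the same compare per fragment, only writing a capture where its alpha beats what's already in the accumulation target.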
So I got my fancy schmancy memory system all working, and then went on and spent most of yesterday doing long-overdue refactoring and maintenance of much of my core rendering/vulkan-abstraction code (both to clean it up, and to get it using my memory system).
This has led to me realizing how bad and inflexible my current terrain system is, so I'm pretty sure I need to refactor that almost entirely. On the one hand, this stuff is always super satisfying when done. On the other hand, I wish I'd had the insight to write it better the first time.
I'm also trying to get coroutines working for managing file I/O since I need to start saving/loading height data from a file. Does anyone have any decent tutorials on that stuff? I can't find any, and the stuff I've found on the visual studio blog isn't [I]bad[/I], per se, but it falls just short of being useful and/or applicable to my goals.
Working on some pointcloud data processing, and I finally got the viewer working with PCL 1.8 after having to rebuild it from scratch to get PCAPs working.
[video=youtube;Ypp52yVLzUg]https://www.youtube.com/watch?v=Ypp52yVLzUg[/video]
[QUOTE=paindoc;52429142]So I got my fancy shcmancy memory system all working, and then went on and spent most of yesterday doing long-overdue refactoring and maintenance of much of my core rendering/vulkan-abstraction code (both to clean it up, and to get it using my memory system).
This has led to me realizing how bad and inflexible my current terrain system is, so I'm pretty sure I need to refactor that almost entirely. On the one hand, this stuff is always super satisfying when done. On the other hand, I wish I'd had the insight to write it better the first time.
I'm also trying to get coroutines working for managing file I/O since I need to start saving/loading height data from a file. Does anyone have any decent tutorials on that stuff? I can't find any, and the stuff I've found on the visual studio blog isn't [I]bad[/I], per se, but it falls just short of being useful and/or applicable to my goals.[/QUOTE]
What exactly are your goals with coroutines?
[QUOTE=Mr;52429695]Working on some pointcloud data processing, and I finally got the viewer working with PCL 1.8 after having to rebuild it from scratch to get PCAPs working.
[video=youtube;Ypp52yVLzUg]https://www.youtube.com/watch?v=Ypp52yVLzUg[/video][/QUOTE]
What is this for, self driving car?
[code]arm-none-eabi-gcc: error: unrecognized command line option '--cpu=Cortex-M0'; did you mean '-mcpu=cortex-m0'?[/code]
How did GCC know that
[QUOTE=r0b0tsquid;52430899][code]arm-none-eabi-gcc: error: unrecognized command line option '--cpu=Cortex-M0'; did you mean '-mcpu=cortex-m0'?[/code]
How did GCC know that[/QUOTE]
Levenshtein distance to a defined list of strings? It doesn't need to actually be smart :v: [url]https://en.wikipedia.org/wiki/Levenshtein_distance[/url]
Very common in compilers, for matching misspelled identifiers or flags like that.
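A minimal sketch of that "did you mean" behavior (illustrative only; GCC's real spellcheck code is fancier about option prefixes and arguments):

```python
# Suggest the closest known flag by Levenshtein (edit) distance, but only
# if it's within a small threshold, so wildly wrong input gets no suggestion.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete from a
                           cur[j - 1] + 1,               # insert into a
                           prev[j - 1] + (ca != cb)))    # substitute
        prev = cur
    return prev[-1]

def did_you_mean(unknown, known_flags, max_distance=4):
    """Return the closest known flag, or None if nothing is close enough."""
    best = min(known_flags, key=lambda f: levenshtein(unknown.lower(), f.lower()))
    return best if levenshtein(unknown.lower(), best.lower()) <= max_distance else None
```

Here `'--cpu'` is only one edit away from `'-mcpu'`, which is why the suggestion looks so clever despite being a dumb string comparison.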
holy shit normal offset shadows are amazing
it completely got rid of all shadow acne and peter panning in this scene
[t]http://i.imgur.com/BcYnvpy.png[/t][t]http://i.imgur.com/phel0vv.png[/t]
slope scale on the left, normal offset on the right
[QUOTE=Fourier;52430823]What is this for, self driving car?[/QUOTE]
Close, connected vehicles actually. Mainly crash prevention with other vehicles, pedestrians, and wildlife.
The grad students are working on the object motion detection, intersection/roadway geometry detection, and background filtering in MATLAB, which they're handing to me to implement in C++ (Thank god for the MATLAB Coder App). We got the go-ahead from the state and US DOT to work on the project and test the data collection in public, and an almost-go-ahead on testing the algorithms; we just have to test them ourselves first with real scenarios.
I'll probably do a before and after once I get those implemented.
I am continuing to bend Lua to my will. I made the following syntax work in Lua 5.3.4:
[lua]
local transform = _x | x:upper() .. "!"
print(transform("hello world")) -- HELLO WORLD!
[/lua]
and with [URL="https://gist.github.com/bmwalters/28aa036e99b6e459be250a8ce3599af4#file-comp_fp-lua"]a bit more effort[/URL] the following also works:
[lua]
local data = { { foo = "bar" }, { foo = "as" }, { foo = "baz" }, { foo = "qux" }, { foo = "cr" } }
f(data)
:filter(_x | (#x.foo > 2))
:map(_x | x.foo:sub(1, 1))
:foreach(print) -- b \n b \n q
[/lua]
It's pretty limited - you can't do the following for example:
[lua]
local compare = _x | x.foo == "bar" -- Lua greatly limits the ability to overload ==
local log = _x | print("[INFO]" .. _x) -- haven't thought about how to use globals
[/lua]
but it's been a pretty fun challenge.
[URL="https://gist.github.com/bmwalters/28aa036e99b6e459be250a8ce3599af4#file-comp-lua"]Link to the implementation[/URL]
Picking further into Overwatch's highlight format. Finally caught the high-level structure of the actual replay data (which was compressed with a non-zlib algorithm that tripped me up for a while). Next is to figure out the actual replay data format.
[t]http://kirk.by/s/raw/qoskR9R.png[/t]
I'm a very visual developer; that's why I always liked Flash. But boy howdy, what a time to be a dev: there are so many resources to teach you how to code, model, and make music. It's beautiful. I remember going on Newgrounds in high school and begging people to fix my broken code. Anyway, I've made the jump to Unity, and it's great: easy to use and understand, great tools. But I don't like using placeholders, which honestly cripples my ability, I'll admit. I took up learning Blender, and thanks to the great tutorials online ([URL]https://www.youtube.com/watch?v=JYj6e-72RDs&t=723s[/URL], [URL]https://www.youtube.com/watch?v=DiIoWrOlIRw&index=1&list=PLFt_AvWsXl0fEx02iXR8uhDsVGhmM9Pse[/URL]), again something that just wasn't available when I started wanting to develop games, I was able to create a sketchy but decent-looking model/animation.
[IMG]https://media.giphy.com/media/xUA7b83Yg46aCS7rMc/giphy.gif[/IMG]
Not exactly programming, but it gets me pumped up to code when I've got my own assets to use. Since I'm used to 2D graphics, I'm hoping the time spent animating and drawing will be cut in half: 3D is easier to manipulate, with no need to draw multiple angles; they're just there. :buckteeth:
[QUOTE=DarKSunrise;52430907]holy shit normal offset shadows are amazing
it completely got rid of all shadow acne and peter panning in this scene
[t]http://i.imgur.com/BcYnvpy.png[/t][t]http://i.imgur.com/phel0vv.png[/t]
slope scale on the left, normal offset on the right[/QUOTE]
Do you have any resources for this? I incidentally found myself wanting to improve my shadows a bit, and thought I'd give it a go.
I've gotten the basic premise to work, though I'm not sure if it's entirely accurate. I don't know if it is possible to do any of it in the vertex shader.
This is all I do:[code]
float NdotL = dot(LightDirection, World_Normal); // the dot product is already the cosine of the angle
float slopeScale = clamp(1.0 - NdotL, 0.0, 1.0); // bigger offset at grazing angles
vec3 scaledNormalOffset = World_Normal * slopeScale;
[/code]
Where the scaledNormalOffset gets added to the frag world position before transforming into light space to calculate the shadow factor.
However, it seems to squish the shadows a bit
[t]http://i.imgur.com/fE9iqnj.png[/t][t]http://i.imgur.com/MhqulXi.png[/t]
[QUOTE=Karmah;52433667]Do you have any resources for this? I incidentally found myself wanting to improve my shadows a bit, and thought I'd give it a go.
I've gotten the basic premise to work, though I'm not sure if it's entirely accurate. I don't know if it is possible to do any of it in the vertex shader.
This is all I do:[code]
float NdotL = dot(LightDirection, World_Normal); // the dot product is already the cosine of the angle
float slopeScale = clamp(1.0 - NdotL, 0.0, 1.0); // bigger offset at grazing angles
vec3 scaledNormalOffset = World_Normal * slopeScale;
[/code]
Where the scaledNormalOffset gets added to the frag world position before transforming into light space to calculate the shadow factor.[/QUOTE]
that's pretty much it. there's not much to it. i don't even bother scaling it based on the angle, i just apply a constant offset, since that's what seems to work for my scene
[code]const float normal_offset_scale = 0.15;
vec3 offset_world_position = vertex_world_position + attribute_normal * normal_offset_scale;
vec4 light_space_position = uniform_light_space_matrices[i] * vec4(offset_world_position, 1.0);[/code]
from what i've seen, some people also like to combine this technique with a traditional depth bias but use a smaller value for the bias in that case
the squishing is to be expected i think, but it looks a bit drastic in your picture
[QUOTE=Karmah;52433667]
I don't know if it is possible to do any of it in the vertex shader.
[/QUOTE]
[del]it's possible to move the light space transform to the vertex shader, do all these things there, and then just pass the light space position vector to the fragment shader. this is how i'm currently doing it[/del]
actually don't do this, it'll break your perspective divide
[editline]5th July 2017[/editline]
some resources that i found useful
[url]http://www.dissidentlogic.com/old/images/NormalOffsetShadows/GDC_Poster_NormalOffset.png[/url]
[url]https://ndotl.wordpress.com/2014/12/19/notes-on-shadow-bias/[/url]
[url]https://mynameismjp.wordpress.com/2013/09/10/shadow-maps/[/url] (not the whole post, just the "biasing" section)
[editline]5th July 2017[/editline]
wait, are you applying scaledNormalOffset as is, or are you multiplying it by anything? you're probably going to want to multiply that by 0.15 or something, try to find the smallest value that works
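Putting both snippets together, a CPU-side Python sketch of the slope-scaled normal offset (names are illustrative; note dot(L, N) is already the cosine, so no trig is needed):

```python
# Slope-scaled normal offset: push the shadow-map sample point out along the
# surface normal, more at grazing angles (where acne is worst), scaled by a
# small constant tuned per scene.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normal_offset(world_pos, world_normal, light_dir, scale=0.15):
    """Return the position to use when sampling the shadow map."""
    n_dot_l = dot(light_dir, world_normal)        # cosine of the angle, in [-1, 1]
    slope = min(max(1.0 - n_dot_l, 0.0), 1.0)     # 0 facing the light, 1 at grazing
    return tuple(p + n * slope * scale
                 for p, n in zip(world_pos, world_normal))
```

A surface facing the light gets no offset at all; a surface at 90 degrees to the light gets the full scale, which matches the behavior both posts describe.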
[QUOTE=JWki;52430779]What exactly are your goals with coroutines?[/QUOTE]
Asynchronous I/O - being able to suspend/resume the method reading data from disk would've been rather useful, I believe. I can think of a few other places it would be useful too - like recording my compute jobs on a background thread between frames. I ended up learning the most from a talk given on the subject, but the documentation is honestly damn near non-existent even though it's implemented to a stable level in MSVC. They seem neat, and really useful, but trying to set them up and create the "wrapper" classes for them just made me want the Concepts TS already.
The coroutines video I watched did drive me kind of crazy though: the dude giving the presentation pronounced "std" as "stud", and "nullptr" as "null-putter" :what:
[QUOTE=paindoc;52435315]Asynchronous I/O - being able to suspend/resume the method reading data from disk would've been rather useful, I believe. I can think of a few other places it would be useful too - like recording my compute jobs on a background thread between frames. I ended up learning the most from a talk given on the subject, but the documentation is honestly damn near non-existent even though its implemented to a stable level in MSVC. They seem neat, and really useful, but trying to set them up and create the "wrapper" classes for them just made me want the Concepts TS already
The coroutines video I watched did drive me kind of crazy though: the dude giving the presentation pronounced "std" as "stud", and "nullptr" as "null-putter" :what:[/QUOTE]
I've come across both pronunciations a lot.
Do you really need suspend/resume for I/O? What you're describing for compute also sounds like something regular task-pool-based threading would handle well.
[QUOTE=JWki;52435505]I've come across both pronounciations a lot.
Do you really need suspend/resume for I/O? What you're describing for compute also sounds like regular task pool based threading would work well.[/QUOTE]
I have a task pool system now; it's not a great implementation, so I need to improve that. Streaming I/O would be really useful for loading pre-existing data from disk, saving me from having to recompute things I've already computed and (if it's streaming) not causing the rendering thread to lock up or hitch much. Using it for compute jobs was just a thought - I'd like to try it, because why not, but I'm not counting on it. The way the coroutine stack frame is stored (afaik) and reused might mean I run into issues with thread ownership of command buffers and pools though (maybe? idunno).
There was a neat implementation (using boost::coroutine) of a streaming file loader that flushed chunks to a buffer in between suspend-resume cycles, and let you read the data in that buffer pretty easily. I figure if I save height data with a begin/end flag I can just check the I/O buffer for that data, take it and clear that section of the buffer, and go do something with it.
There's still just soooo much I need to figure out, though. I keep forgetting how hard what I'm trying to do is - which is why I've hidden from this problem by spending the last several days just cleaning up and refactoring my codebase, while I plan what I'm doing next.
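A hedged sketch of that begin/end-flag chunk parsing in Python, whose generators give the same suspend/resume shape as the coroutines being discussed (marker values and names are made up for illustration):

```python
# A parser that suspends whenever it runs out of buffered data and resumes
# when the I/O side sends the next chunk. Records are framed with begin/end
# markers; partial records simply stay in the buffer until completed.
BEGIN, END = b"<REC>", b"</REC>"

def record_parser():
    """Generator: send() it raw chunks, it yields lists of complete records."""
    buffer = b""
    records = []
    while True:
        chunk = yield records           # suspend until more data arrives
        records = []
        buffer += chunk
        while True:
            start = buffer.find(BEGIN)
            end = buffer.find(END, start + len(BEGIN))
            if start == -1 or end == -1:
                break                   # incomplete record: wait for next chunk
            records.append(buffer[start + len(BEGIN):end])
            buffer = buffer[end + len(END):]
```

The caller primes the generator with `next()` once, then `send()`s each chunk as it comes off disk; the height data would come back only once its end flag has been seen.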
[QUOTE=paindoc;52435558]I have a task pool system now, its not a great implementation so I need to improve that. Streaming I/O would be really useful for loading pre-existing data from disk, saving me from having to compute things I've already computed and (if its streaming) not causing the rendering thread to lock up or hitch much. Using it for compute jobs was just a thought - I'd like to try it, because why not, but I'm not counting on it. The way the coroutine stack frame is stored (afaik) and reused might mean that I run into issues with thread ownership of command buffers and pools though (maybe? idunno).
There was a neat implementation (using boost::coroutine) of a streaming file loader that flushed chunks to a buffer in between suspend-resume cycles, and let you read the data in that buffer pretty easily. I figure if I save height data with a begin/end flag I can just check the I/O buffer for that data, take it and clear that section of the buffer, and go do something with it.
There's still just soooo much I need to figure out, though. I keep forgetting how hard what I'm trying to do is - which is why I've hidden from this problem by spending the last several days just cleaning up and refactoring my codebase, while I plan what I'm doing next.[/QUOTE]
To clarify this a bit for me, are we talking fibers here?
[QUOTE=Sam Za Nemesis;52436585]I'd rather not use code that comes from the engine to avoid violating its terms of service. I've found that the always-excellent Sebastien Lagarde wrote a paper for it and even more O:
[URL]https://seblagarde.wordpress.com/2012/09/29/image-based-lighting-approaches-and-parallax-corrected-cubemap/[/URL][/QUOTE]
Well it's not like there are that many ways of writing it; just look at how it's done and implement it the same way and you'll be fine.
But yeah, Lagarde is a good fella.
Made a little [url=https://gitlab.com/Nabile/ServerNotifier]Steam game server notifier[/url] for myself.
You just feed it server addresses, and optionally parameters: minimum players for triggering a notification (defaults to 1), and the update delay (defaults to 30 seconds).
[img]https://nabile.s-ul.eu/2C4SBSER.png[/img]
[sp]It's very lonely. I'm very lonely.[/sp]
Incidentally, [url=http://www.ndesk.org/Image:NotifySharp.png]I found Asgard[/url] when looking for a library I used.
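The polling behavior described could be sketched like this (all names are hypothetical; the actual tool queries Steam's server browser, which isn't shown here and is passed in as a callback):

```python
import time

def watch_servers(servers, query_players, notify,
                  min_players=1, delay=30.0, rounds=None):
    """Poll each server address; call notify() when a server's player count
    rises to min_players or above (rising edge only, to avoid spamming)."""
    was_above = {addr: False for addr in servers}
    n = 0
    while rounds is None or n < rounds:
        for addr in servers:
            count = query_players(addr)      # hypothetical query callback
            above = count >= min_players
            if above and not was_above[addr]:
                notify(addr, count)          # fire once when threshold crossed
            was_above[addr] = above
        n += 1
        if rounds is None or n < rounds:
            time.sleep(delay)                # the update delay between sweeps
```

The defaults match the ones described above (1 player, 30 seconds); `rounds` is only there so the loop can be bounded for testing.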
[QUOTE=JWki;52436289]To clarify this a bit for me, are we talking fibers here?[/QUOTE]
Nope - [url]https://blogs.msdn.microsoft.com/vcblog/2014/11/12/resumable-functions-in-c/[/url]
If you're interested in the talk I watched that helped illuminate the topic, this is it: [url]https://www.youtube.com/watch?v=ZTqHjjm86Bw[/url]. The talk by Gor Nishanov goes into more detail about them and has been fairly interesting, but I'm not very far in yet. I'm still agitated by how hard it has been to find example code for this stuff.