I just learned of the existence of Parallax Occlusion mapping and happened to google it out of curiosity.
Turned out I just lumped together displacement mapping and parallax mapping. I didn't realize it was possible to fake depth effects without generating additional geometry. This is definitely getting bumped up on my to do list.
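The core of the trick is just a ray march through the height field in the pixel shader: step along the view ray in texture space until it dips below the stored surface, then sample the texture at that offset. Here's a toy 1D version in Python purely to illustrate the idea (not shader code; all names are mine):

```python
def pom_offset(depth_at, view_xy, view_z, num_steps=32, scale=0.1):
    """March a view ray through a depth field and return the texture-coordinate
    offset where the ray first dips below the surface.

    depth_at(u) -- sampled depth in [0, 1] at texture coordinate u
    view_xy     -- tangent-space horizontal component of the view direction
    view_z      -- tangent-space vertical component (toward the eye)
    """
    layer_step = 1.0 / num_steps                  # depth descended per step
    uv_step = view_xy / view_z * scale / num_steps  # UV advanced per step
    u, ray_depth = 0.0, 0.0
    for _ in range(num_steps):
        if ray_depth >= depth_at(u):              # ray went below the surface
            break
        u += uv_step
        ray_depth += layer_step
    return u                                      # shifted texture coordinate
```

A flat surface (depth 0 everywhere) produces no offset, while deeper regions shift the sample further along the view direction, which is what sells the illusion of depth with no extra geometry. "Occlusion" variants refine the hit point and reuse the march for self-shadowing.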
[QUOTE=Karmah;52417065]I just learned of the existence of Parallax Occlusion mapping and happened to google it out of curiosity.
Turned out I just lumped together displacement mapping and parallax mapping. I didn't realize it was possible to fake depth effects without generating additional geometry. This is definitely getting bumped up on my to do list.[/QUOTE]
Parallax mapping is the neatest; it totally breaks my mind, but the effect it generates is super cool, especially given how it doesn't require extra geometry. I got my first "not unsuccessful" print using my slicer today:
[t]http://i.imgur.com/3r0LdON.jpg[/t]
The fancy infill pattern is just the Hilbert space-filling curve. It's incredibly satisfying to watch it print this pattern, even if it seems like it's going to shake the printer apart. This isn't actually useful for much, but it mostly serves as a demo of how I'm trying to get the infill system working: any mathematical function you can plug in will work. I've got a ton of polishing to do, but I'm also working on the Python interface for my program, wherein a user can specify a Python script to load, which will then be used to generate the infill. One of the biggest problems with the last slicer was that it's in MATLAB, and adding a new infill type required extensive modifications to several sections of the poorly documented, poorly written, and horribly slow code. This way, the science types at my work will be able to easily experiment with new infill types - and more, once I add more locations to "inject" these scripts into the pipeline. I might actually release this once it's done, so long as NASA and my boss give the okay! It'd be pretty powerful.
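For reference, the Hilbert pattern above can be generated with the classic index-to-coordinate mapping; this is the textbook algorithm, not the slicer's actual code, and the names are mine:

```python
def hilbert_point(order, d):
    """Map index d along a Hilbert curve covering a 2**order x 2**order grid
    to its (x, y) cell coordinate. Standard iterative bit-twiddling version."""
    x = y = 0
    s = 1
    while s < (1 << order):
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:             # rotate the quadrant so sub-curves connect
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx             # move into the correct quadrant
        y += s * ry
        d //= 4
        s *= 2
    return x, y
```

Walking `d` from 0 to `4**order - 1` gives the toolpath order; consecutive points are always grid-adjacent, which is why it extrudes as one continuous (if violent-looking) line once scaled to the layer's bounding box.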
I've also been working on moving my LOD scheme's calculations to the GPU, using as many Vulkan compute queues as I can grab on the current device. On my 1070, this means I have 8 LOD tiles calculating per launch. On devices without dedicated compute queues, I take the last half of the graphics queues as my "pool" of compute queues, which works well enough on my Quadro at work. I use a ring buffer of sorts to assign queues from the pool to pending data requests, with one request per queue. I then record commands and launch the queues all at once, with fences and memory barriers to make sure I can't read data back until things are all clear. I'm working on a task pool of sorts too, so that I can hand this off to another thread (and I should probably thread buffer recording, too), at which point it should hopefully let me generate terrain at MUCH higher detail without any lock-ups as I load/upsample height data.
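The queue-assignment scheme described above, boiled down to just its scheduling logic. This is a toy model in Python, not Vulkan code; all of these names are illustrative:

```python
from collections import deque

class QueuePool:
    """Toy model: a fixed pool of compute queues, one pending request per
    queue, with requests drained in batches (one batch = one launch)."""

    def __init__(self, num_queues):
        self.num_queues = num_queues
        self.pending = deque()          # requests waiting for a queue

    def submit(self, request):
        self.pending.append(request)

    def next_batch(self):
        """Pair up to num_queues requests with queue indices 0..n-1.
        In the real thing you'd record command buffers for each pair,
        submit them all, then wait on fences before reading results."""
        batch = []
        while self.pending and len(batch) < self.num_queues:
            batch.append((len(batch), self.pending.popleft()))
        return batch
```

With 10 tile requests and 8 queues, the first launch services 8 requests and the leftover 2 roll into the next launch, which matches the "8 LOD tiles per launch" behavior described above.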
So far, I'm running it so that each data tile has a resolution of 512 samples. This is 16x the previous size I used for the CPU implementation and it runs in the blink of an eye (not surprising, given how simple the computations for this are and how easily they parallelize). Unfortunately, the bane of my existence is back: that fucking rhombus/diamond artifacting I got in my CUDA project:
[t]https://images.discordapp.net/attachments/263886432110772225/330229631884197888/root_noise.png[/t]
I do know the upsampling is working well, though. It preserves all the details! i.e., the artifacts ;~;
[t]https://images.discordapp.net/attachments/263886432110772225/330229762603745280/compute_test_node_0.png[/t]
[editline]edited[/editline]
jesus christ I am terrible at typing, the amount of edits it took to make this post not awful is embarrassing
[QUOTE=elitehakor;52416191]was curious if i could get this running on my iphone (7+, running the iOS 11 beta, safari) and it would crash for this map
but work perfectly for this map (although I can't interact with it)
[t]https://dl.dropboxusercontent.com/u/5168294/screencaps/Photo%20Jun%2029%2C%207%2058%2044%20PM.png[/t]
it also gets my iphone a little hot[/QUOTE]
Yeah I get the same on my phone. On cbble it's probably related to the frame buffer bug.
I'll add some mobile controls later
[QUOTE=Ziks;52411557]How well does this run for you guys? You'll need to leave it to load for a bit, I'll add a loading bar soon.
[url]https://metapyziks.github.io/SourceUtils/maps/de_cbble/[/url]
Oh also you can press F to go fullscreen[/QUOTE]
50 fps on an R9 Fury.
[t]https://my.mixtape.moe/tbozco.png[/t]
Skybox seems to be broken.
[QUOTE=Ziks;52411855]That's a weird artifact, I think it's caused by an oversight on my part where it tries to read from and write to the same frame buffer if there's water in the skybox, this map should work better:
[url]https://metapyziks.github.io/SourceUtils/maps/de_overpass/[/url][/QUOTE]
Solid 66 fps on both maps at around 15% GPU usage at 75% of boost clock on a GTX 1060 with a 6600k.
[QUOTE=Karmah;52417065]I just learned of the existence of Parallax Occlusion mapping and happened to google it out of curiosity.
Turned out I just lumped together displacement mapping and parallax mapping. I didn't realize it was possible to fake depth effects without generating additional geometry. This is definitely getting bumped up on my to do list.[/QUOTE]
Well yeah, you're essentially doing "software rendering" inside the shader. If that amazes you, you should visit Shadertoy.com more often. :v:
EDIT: REALLY like how your tools are coming along btw.
[QUOTE=Tobba;52416788]Why the hell do people keep writing big UI-heavy apps in Java? The fact that I can tell it's Java within seconds of just looking at it is pretty indicative that something is [i]severely[/i] wrong with the libraries that are available. Not to mention the asinine memory usage and noticeable GC pauses, which would be the other sign in case anyone ever wrote a real UI toolkit for it.
[...][/QUOTE]
I think it's partially a problem with the developers. Java is commonly taught in uni, and a lot of the devs just graduating aren't exactly great...
There are exactly three good Java (desktop) apps I've used so far. JDownloader 2 (starts [U]slowly[/U], but then it's alright), yEd Graph Editor (really quite good!) and XMind, which I haven't used much but can tell it was made sensibly.
Not that the GUI libraries help any. You can set e.g. Swing to look (mostly) native, but no-one bothers to do so because it's not the default.
It's pretty crazy that Java has a worse end user experience than [I]JS browser apps[/I] now.
I mean, those normally don't look native by any stretch either, and they (too) use way too much RAM, but at least they usually work.
At least with Java you can tune the JVM for a smaller memory footprint and architect your software to allocate less on the heap. The problem with the language is the cargo-cult community around it never letting it breathe.
Electron is a Lovecraftian horror of bad software practice built on a browser stack plus Node.js; I don't think you can tame such an awful combination of a beast. By design, it's destined to consume memory to compensate for speed, and there's nothing in it that appeals to a systems developer. I just hope people will flock to at least Rust/D/C++ and learn how much they've bullshitted in the past with the devil that is JS development.
To be fair, it's not that awful, but for anything serious it should be avoided at all costs.
Made a quick Levenshtein demo page:
[url]http://studygroupfinder.com/levenshtein.html[/url]
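For anyone curious, the distance behind a page like that is usually the classic two-row dynamic-programming edit distance; a minimal sketch (not the demo's actual source):

```python
def levenshtein(a, b):
    """Edit distance between strings a and b (insert/delete/substitute),
    keeping only the previous DP row for O(len(b)) memory."""
    prev = list(range(len(b) + 1))       # distance from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]                       # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute ca -> cb
        prev = curr
    return prev[-1]
```

The two-row trick matters in a browser demo too: the full matrix is O(n*m) memory, which gets noticeable fast on long inputs.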
[QUOTE=Karmah;52417065]I just learned of the existence of Parallax Occlusion mapping and happened to google it out of curiosity.
Turned out I just lumped together displacement mapping and parallax mapping. I didn't realize it was possible to fake depth effects without generating additional geometry. This is definitely getting bumped up on my to do list.[/QUOTE]
If you target DX11 cards (which are required to have tessellation hardware) it may be worth it to just throw actual triangles at the problem instead of using up tons of texture bandwidth and compute. As long as you use hardware tessellation and don't store too many vertices in memory it should work a lot better.
[QUOTE=Tobba;52417819]If you target DX11 cards (which are required to have tessellation hardware) it may be worth it to just throw actual triangles at the problem instead of using up tons of texture bandwidth and compute. As long as you use hardware tessellation and don't store too many vertices in memory it should work a lot better.[/QUOTE]
I could try both. I'm not entirely sure on the implementation yet and I may need to adjust my rendering pipeline to get it to work. I'm probably going to need to separate models that have height maps from those that don't, performing 2 separate passes to fill my GBuffer. If that is the case, I could probably make a special third case for displacement mapped geometry.
[QUOTE=Tobba;52417819]If you target DX11 cards (which are required to have tessellation hardware) it may be worth it to just throw actual triangles at the problem instead of using up tons of texture bandwidth and compute. As long as you use hardware tessellation and don't store too many vertices in memory it should work a lot better.[/QUOTE]
Compute bandwidth on most GPUs and in most applications using the GPU is [I]drastically[/I] under-utilized. There was a neat article from AMD detailing how they profile applications to look for spots they can optimize for that application - and in nearly every case, the application's usage of the compute capabilities of a GPU was practically zero. That being said, texture fetches are expensive as hell, and that [I]was[/I] something that most applications saturated. I'm also completely unfamiliar with DirectX, tbh, so I have no idea how it exposes these various abilities and could be entirely wrong. The Vulkan hardware database shows what queues each device exposes, and these correlate well to hardware: most enthusiast cards nowadays feature at least a handful of dedicated compute queues.
If there's one thing that upsets me about Python, it's the GIL. I thread tons of things in my slicer because the majority of it is embarrassingly parallel: not only am I able to max out the CPU, it cuts the time to execute by a few orders of magnitude (especially after I went and cleaned things up with threading in mind). If I'm using an injected Python script to make infill, though, I can't thread that section of the pipeline. And threaded infill generation was unfortunately where I benefited the most from threading. I'm going to make the best of it, though, and allow for functions that vary vertically. This can result in neat things, like printing literal coiled springs.
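One common way around the GIL for CPU-bound user scripts is to run them in worker processes instead of threads, at the cost of having to pickle the callback and its inputs. A minimal sketch; none of these names come from the slicer itself:

```python
# ProcessPoolExecutor sidesteps the GIL because each worker is a separate
# interpreter process; the callback must be picklable (module-level).
from concurrent.futures import ProcessPoolExecutor

def infill_for_layer(z):
    """Stand-in for a user-supplied infill function: returns a toy list of
    (x, z) toolpath points for one layer height z."""
    return [(x, z) for x in range(4)]

def generate_infill(layers):
    """Farm one layer out per worker process and collect results in order."""
    with ProcessPoolExecutor(max_workers=2) as pool:
        return list(pool.map(infill_for_layer, layers))
```

The catch for an "inject a script" design is that the user's function has to live at module level (lambdas and closures don't pickle), and per-layer data gets copied to the workers, so it only wins when the per-layer compute outweighs the serialization cost.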
[QUOTE=ichiman94;52417736]At least with java you can tune the JVM for smaller memory footprint and architect your software to allocate less on heap, the problem with the language is the cargo cult community around it never letting it breathe.[/QUOTE]
JS seems to have the opposite problem. There's a ton of trendy libraries, but a lot of them are falling apart at the seams (edge cases) from what I experienced so far.
[QUOTE]Electron is a Lovecraftian horror of bad software practice built on a browser stack plus Node.js; I don't think you can tame such an awful combination of a beast. By design, it's destined to consume memory to compensate for speed, and there's nothing in it that appeals to a systems developer. I just hope people will flock to at least Rust/D/C++ and learn how much they've bullshitted in the past with the devil that is JS development.
To be fair, it's not that awful, but for anything serious it should be avoided at all costs.[/QUOTE]
The thing I posted two pages ago is actually a browser app :worried:
In my defence, I'm primarily targeting browsers though.
I want to make it installable, but that's not the primary target.
So far React is treating me well, but then again I'm writing this like a low-level imgui app.
Ironically enough, the only place where I used 'advanced' React features is currently broken:
[vid]https://cdn.discordapp.com/attachments/179908939083677696/330383268627349515/2017-06-30_18-24-12.mp4[/vid]
I have no idea what causes this, but it's probably going to fix itself when I make the app stateless and use my own invalidation event exclusively.
I've been happily forgetting everything about C until my girlfriend started a course on it.
I didn't forget about pointers, but man do they suck.
And having to terminate strings.
I really, really don't like C. I don't understand why you would do this unless you get paid a lot to fool around with this.
[QUOTE=war_man333;52418550]I've been happily forgetting everything about C until my girlfriend started a course on it.
I didn't forget about pointers, but man do they suck.
And having to terminate strings.
I really, really don't like C. I don't understand why you would do this unless you get paid a lot to fool around with this.[/QUOTE]
Null-terminating strings is indeed a bit annoying, but I really don't get what issues people have with pointers. The concept of a memory address is pretty fundamental to programming, isn't it? I'd hate to not be able to have them when I need them.
RAII makes pointers that much easier to work with. I dropped pure C at the start of this semester, and I don't plan on looking back too soon either.
[QUOTE=JWki;52418607]Null terminating strings is indeed a bit annoying, but I really don't get what issues people have with pointers. [b]The concept of a memory address is pretty fundamental to programming isn't it?[/b] I'd hate to not be able to have them when I need them.[/QUOTE]
I honestly don't need it, but I don't normally do low-level programming.
I think for me it's just the syntax. When am I dealing with a pointer? An array? A character?
If you stick to C++11 you should have no trouble figuring that out:
[code]
std::unique_ptr<char> ptr;     // Pointer to a single character (unless created with new char[] and a custom deleter, which takes a conscious effort)
std::unique_ptr<char[]> ptr2;  // Pointer to an array of characters
std::array<char, SIZE> array;  // Fixed-size array of SIZE characters
char c2;                       // Single character
char& c = c2;                  // Reference to a single character (must be bound at declaration; technically could refer to the first element of an array, but I doubt anyone does that)
std::string str;               // String of characters; .c_str() returns a null-terminated string
char16_t c16;                  // 16-bit character
char32_t c32;                  // 32-bit character
std::u16string str16;          // 16-bit character string
std::u32string str32;          // 32-bit character string
[/code]
Unfortunately the course is in pure C. I think (?) that makes it a little harder.
C is a hell of a lot [I]simpler[/I], which can be a blessing or a curse depending on your viewpoint and use cases.
[t]http://i.imgur.com/9W9193G.png[/t]
Shadows! Filtering is still pretty poor, but it'll have to do for now.
[QUOTE=war_man333;52418806]Unfortunately the course is in pure C. I think (?) that makes it a little harder.[/QUOTE]
I think it's just a matter of getting used to it, especially if you're only having troubles with the syntax.
[QUOTE=Sam Za Nemesis;52421852]I've completely refactored my PBR shaders to use a metallic workflow if an artist so desires as opposed to a specular workflow and I've made it so Roughness/Specular/Metallic/AO in addition to Selfillum and Colormaps can be packed in a single texture to save VRAM, previously each of these maps were separate
[t]https://cdn.discordapp.com/attachments/111951182947258368/330780862050467850/bilde.png[/t]
I had lost my original working texture files for the gascan so I've had to rework over the specular maps for this test, so it might look a bit inaccurate compared to a true metalness based asset[/QUOTE]
Oh, that's actually really cool. Have you considered doing deferred similar to how Kris did it with Swarm? Might be worth trying a different approach now that more refined methods are available, although maybe there's not much hope with DX9. Then again, it's been done in other DX9 engines, so maybe Source is too much of a limitation.
Got my projectile system working (mostly!) :), although the current charge bar looks hideous :D
I'm planning on converting it to a click-and-charge system instead of pressing Enter twice (once to confirm angle, and then once to confirm power).
Also having fun creating the menu system, as I'm basically attempting to remake playerunknown's drop-in style system where everyone can see everyone's character in the menu.
[vid] https://i.gyazo.com/d274e09fa64d94110ce1cc418fbfca1c.mp4 [/vid]
So it's not a pretty picture like the previous post, but I got my allocator [I]soooo[/I] close to working that I can taste it. Regardless, it's already accomplishing one thing really [I]really[/I] well: this vector of 232 Suballocation objects lies within one VkDeviceMemory object, saving a ridiculous number of allocations and undoubtedly helping performance loads.
[t]http://i.imgur.com/rg9L89O.png[/t]
Unfortunately, there is one problem: I somehow keep mixing up device-local and host-visible/coherent allocations, which is rather bad. Host allocations are used for UBOs that I update from the CPU, and trying to map the little range of my UBO (entry 3, here) causes a crash, since the parent VkDeviceMemory object is device-local and can't be mapped. There were also some initial errors with me placing optimally tiled and linearly tiled image data on the same memory page, which also caused odd behavior and crashes. Turns out a misplaced set of parentheses in my page-alignment function caused that.
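The usual shape of that fix is the standard memory-type selection loop: a type is only acceptable if the resource's memoryTypeBits allow it AND it carries every required property flag. Sketched here in Python for illustration (the flag values mirror Vulkan's; the function name and structure are mine, not the allocator's):

```python
# Property flag values matching Vulkan's VkMemoryPropertyFlagBits.
DEVICE_LOCAL  = 0x1  # VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT
HOST_VISIBLE  = 0x2  # VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT
HOST_COHERENT = 0x4  # VK_MEMORY_PROPERTY_HOST_COHERENT_BIT

def find_memory_type(memory_types, type_bits, required_flags):
    """memory_types: list of property-flag ints, in device order.
    type_bits: memoryTypeBits from the resource's memory requirements.
    Returns the first index that the resource allows AND that has every
    required property flag - checking a subset, not equality, is the key."""
    for i, flags in enumerate(memory_types):
        allowed = (type_bits >> i) & 1
        has_all = (flags & required_flags) == required_flags
        if allowed and has_all:
            return i
    raise ValueError("no suitable memory type")
```

The classic mix-up is passing DEVICE_LOCAL as the requirement for a UBO you intend to map from the CPU; the loop then happily returns a device-local type, and the later vkMapMemory fails exactly as described above. Mappable UBOs need HOST_VISIBLE (and usually HOST_COHERENT) in `required_flags`.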
Oops. It should be a quick tune-up to my memory type index finder to fix the mixing of types, then back to work again - but I was far too proud of my progress thus far not to share, along with how easily it plugged into my pre-existing code (in fact, it actually simplified a good chunk of it!)
[QUOTE=Sam Za Nemesis;52421852]I've completely refactored my PBR shaders to use a metallic workflow if an artist so desires as opposed to a specular workflow and I've made it so Roughness/Specular/Metallic/AO in addition to Selfillum and Colormaps can be packed in a single texture to save VRAM, previously each of these maps were separate
[t]https://cdn.discordapp.com/attachments/111951182947258368/330780862050467850/bilde.png[/t]
I had lost my original working texture files for the gascan so I've had to rework over the specular maps for this test, so it might look a bit inaccurate compared to a true metalness based asset[/QUOTE]
Is it feasible to use an array texture for an atlas like this? Textures for topologically complex models have always bothered me more than they should because you get terrible seams (left of the middle handle for example, though that's mostly a texturing error) and your mipmaps are going to bleed horribly between different parts of the mesh if you go too far away.
Though since you're working on Source you can't use them anyways, but still.
[QUOTE=Sam Za Nemesis;52422966]We are using his deferred solution as base but I have practically rewritten the most of the pipeline since then, hopefully this can be available to the community once this project is released
I've got hold of an asset made by [URL="https://facepunch.com/member.php?u=202761"]ZombieDawgs[/URL] that was made with proper metalness from the start and the results are even better
[t]http://image.noelshack.com/fichiers/2017/26/7/1498958230-pbr-1.png[/t]
[t]http://image.noelshack.com/fichiers/2017/26/7/1498958235-pbr-2.png[/t][/QUOTE]
Oh, good work :)
Any plans for deferred cubemaps? I know those were something Kris wanted to experiment with but didn't get around to it.
Dumped DigitalOcean. Vultr offers a 25gb ssd w/ 1gb ram for the same cost as a 20gb ssd w/ 512mb ram at DO.
[QUOTE=Legend286;52423905]Oh, good work :)
Any plans for deferred cubemaps? I know those were something Kris wanted to experiment with but didn't get around to it.[/QUOTE]
What are deferred cubemaps? Just generating a cubemap using 6 deferred rendering passes?
[QUOTE=proboardslol;52424255]Dumped DigitalOcean. Vultr offers a 25gb ssd w/ 1gb ram for the same cost as a 20gb ssd w/ 512mb ram at DO.[/QUOTE]
Dumped DO for cost issues as well. Moved over to Scaleway where I got the $40 DO VPS specs for about €12 :v: