• What are you working on? v66 - February 2017
    558 replies
[QUOTE=paindoc;51879174]What precomputed table are you referring to though, just the permutation table of various char values, the gradient vector table, or something like what is made in the Scape example I linked? I'm not sure I understand what you need.[/QUOTE] Ah, sorry. Since the FBM output is the exact same for any given position, I decided to try precomputing all of the FBM output values into a buffer, so that instead of generating it on the fly per fragment with [code]
float noise(float3 p)
{
    // return simplex noise
}

float FBM(float3 p)
{
    float f;
    f  = 0.5000   * noise(p); p = p * 3.02;
    f += 0.2500   * noise(p); p = p * 3.03;
    f += 0.1250   * noise(p); p = p * 3.01;
    f += 0.0625   * noise(p); p = p * 3.03;
    f += 0.03125  * noise(p); p = p * 3.02;
    f += 0.015625 * noise(p);
    return f;
}
[/code] I could just call [code]
float FBM(float3 p)
{
    return tex3D(_MapNoise, float3(p.x, p.z, p.y) / 10).r;
}
[/code] saving me all the noise computation during the actual rendering. I wasn't looking to store part of it; I was hoping to store the entire output and then read from it as one would read from a texture (since I figured that it would be faster, but you seem to be implying otherwise). Also, sorry if my terminology is off; HLSL beyond the basics is somewhat new to me. My actual noise and FBM methods are fairly fast, I would just rather not compute them at runtime at all.
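For what it's worth, the bake-once idea can be sketched outside of HLSL. This is a minimal Python sketch, not the poster's actual code: the hash noise below is a cheap deterministic stand-in (not simplex), and the grid resolution and spacing are made-up numbers, but the structure of "fill a 3D table once, then index it instead of recomputing six octaves" is the same:

```python
import math

def noise(x, y, z):
    # Cheap hash-style noise stand-in: NOT simplex, just deterministic per position.
    n = math.sin(x * 12.9898 + y * 78.233 + z * 37.719) * 43758.5453
    return n - math.floor(n)

def fbm(x, y, z):
    # Same halving-amplitude octave structure as the shader FBM above.
    f, amp = 0.0, 0.5
    for _ in range(6):
        f += amp * noise(x, y, z)
        x, y, z = x * 3.02, y * 3.02, z * 3.02
        amp *= 0.5
    return f

N = 16  # grid resolution per axis (a Texture3D would typically be much larger)
# Bake once, like filling a Texture3D on the CPU and uploading it.
table = [[[fbm(i * 0.1, j * 0.1, k * 0.1) for k in range(N)]
          for j in range(N)] for i in range(N)]

# At "render time", one table lookup replaces six noise() evaluations.
assert table[3][4][5] == fbm(3 * 0.1, 4 * 0.1, 5 * 0.1)
```

One caveat this sketch glosses over: indexing the list is nearest-neighbour only, whereas sampling a real Texture3D with tex3D gives you trilinear filtering between grid points for free.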
[QUOTE=Berkin;51878698]Well fuck. My external HDD with all my projects dating back to high school, years' worth of collected books and research papers, and countless other important things decided to become corrupt. Despite my best recovery efforts, I lost half my filenames and my entire directory tree which contained over 63,000 files. All of my uncommitted Rant work (which amounted to quite a lot) is pretty much fucking gone... along with everything else that wasn't pushed to GitHub. It was really difficult to gain back my motivation to program, and now I'm not sure where to go from here.[/QUOTE] That sucks. I used to keep all of my stuff on OneDrive in addition to Git, but moved it to local-only since Git and dev work in general really suck for cloud storage (Visual Studio can make hundreds of megs of intermediate files and databases for even a small project). You can always do --separate-git-dir with git clone to keep the Git stuff off your cloud drive, and set VS up to use a temp directory for intermediate files, but that's a lot of work just to feel safe. When I was working on something I'd just pause syncing so that I wasn't constantly uploading the small changes I made to code files and builds. You could also set up a nightly robocopy from your dev directory into cloud storage, and just have it ignore intermediate stuff and git folders. Better safe than sorry, eh? As always, this is motivation to push even unfinished stuff, use lots of branches, and maybe have a local remote (I used to have a Raspberry Pi set up with GitLab as a local remote of all my GitHub repos, for when I wanted to keep WIP stuff private).
[QUOTE=phygon;51878946]I [I]think[/I] I need a precomputed table. I'm trying to swing realtime volumetric clouds in VR and enough GPU resources are consumed by the raymarching itself that I'd like to offload the other expensive part of the shader to something that can just be stored and called back to. I mean, isn't that what professional game devs do? I was initially using the noise from the shader itself but I was wondering why I'd bother doing the exact same computations several hundred thousand times per frame when I didn't have to. [IMG]https://dl.dropboxusercontent.com/u/12024286/Dev%20stuff/Volumetric%20shader%20balls.gif[/IMG][/QUOTE] What's the performance of this shader like? Can you go "inside it" with the camera? Would you mind sharing the source or making an asset for the Unity store :p?
[QUOTE=Sidneys1;51879280]That sucks. I used to keep all of my stuff on OneDrive in addition to Git, but moved it to local-only since Git and dev in general really sucks for cloud storage (Visual Studio can make hundreds of megs of intermediate files and databases for even a small project). You can always do --separate-git-dir with git clone though to keep the Git stuff off your cloud drive and set VS up to use a temp directory for intermediate files, but that's a lot of work just to feel safe. When I'm working on something I'd just pause syncing so that I'm not constantly uploading the small changes I make to code files and builds. You could also just set up a nightly robocopy from your dev directory into a cloud storage, and just have it ignore intermediate stuff and git folders. Better safe than sorry, eh? As always, this is motivation to push even unfinished stuff, use lots of branches, maybe have a local remote (I used to have a Raspberry Pi set up with Gitlab to have a local remote of all my GitHub repos, for when I wanted to keep WIP stuff private)[/QUOTE] [url]https://github.com/github/gitignore/blob/master/VisualStudio.gitignore[/url]
[QUOTE=cartman300;51880831][url]https://github.com/github/gitignore/blob/master/VisualStudio.gitignore[/url][/QUOTE] It sounds like the problem was more about OneDrive syncing those big temporary files generated by VS.
[QUOTE=paindoc;51878322]but... what? how? I've included all the headers I could and never got any operators ;_;[/QUOTE] They removed it with CUDA 5.0 for reasons I don't understand.
[QUOTE=Berkin;51878698]Well fuck. My external HDD with all my projects dating back to high school, years' worth of collected books and research papers, and countless other important things decided to become corrupt. Despite my best recovery efforts, I lost half my filenames and my entire directory tree which contained over 63,000 files. All of my uncommitted Rant work (which amounted to quite a lot) is pretty much fucking gone... along with everything else that wasn't pushed to GitHub. It was really difficult to gain back my motivation to program, and now I'm not sure where to go from here.[/QUOTE] I had this happen once as well, except it wasn't really my projects, it was personal data, family pictures, and so on. The best we can do in these situations is to learn from it. I now have multiple backup locations, and redundancy on them. I know that it isn't a matter of "if", but a matter of "when". These things do happen, and we can only act accordingly afterwards.
[QUOTE=Berkin;51878698]Well fuck. My external HDD with all my projects dating back to high school, years' worth of collected books and research papers, and countless other important things decided to become corrupt. Despite my best recovery efforts, I lost half my filenames and my entire directory tree which contained over 63,000 files. All of my uncommitted Rant work (which amounted to quite a lot) is pretty much fucking gone... along with everything else that wasn't pushed to GitHub. It was really difficult to gain back my motivation to program, and now I'm not sure where to go from here.[/QUOTE] You mentioned your recovery efforts... did you try external services? They can be pretty expensive, but last time I used one they even recovered complete virtual machines from a basically burned hard drive. Some of those labs have amazing skills; they can even transplant the disks and predict missing bits here and there to get your files back. Again, those services can be a bit expensive, but if it gets you your motivation back, and all your hard work, you might want to look into that.
[QUOTE=Fourier;51880827]What is the performance of this shader? Can you go "inside it" with camera? Would you mind sharing source or making asset on Unity store :p?[/QUOTE] The performance is pretty high, but not quite as high as I'd like it to be yet. With the premade noise it's significantly faster, although I can't figure out how to get a proper buffer into the shader (other than with a Texture3D, which has somewhat slow read times and a resolution limit of 1024 on the Y axis). You can go inside it, it's fully volumetric. It's important to remember that this only works because it's rendered as a post effect; if it's on an object, the shader stops executing when the object gets frustum culled, so it disappears as soon as you get near it. I'd be more than willing to share my resources and some of my source code, but I am planning on chucking this on the asset store when it's done. Gotta pay for those textbooks somehow. In other news: lighting is working fairly well. [vid]http://i.imgur.com/SgGof1C.mp4[/vid]
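For anyone curious what the expensive part looks like, here's a toy Python sketch of the kind of front-to-back ray march a volumetric cloud shader does per pixel. Everything here is hypothetical: the density field is just a soft sphere, and the step count, step size, and absorption constant are made-up numbers, but the accumulate-transmittance-and-early-out structure is the standard approach:

```python
def density(x, y, z):
    # Hypothetical density field: a soft sphere of radius 1 at the origin.
    d = 1.0 - (x * x + y * y + z * z) ** 0.5
    return max(d, 0.0)

def march(origin, direction, steps=64, step_size=0.1, absorption=1.5):
    # Front-to-back accumulation: each sample along the ray attenuates the
    # remaining transmittance; bail out once the ray is effectively opaque.
    ox, oy, oz = origin
    dx, dy, dz = direction
    transmittance = 1.0
    for i in range(steps):
        t = i * step_size
        d = density(ox + dx * t, oy + dy * t, oz + dz * t)
        transmittance *= max(0.0, 1.0 - d * absorption * step_size)
        if transmittance < 0.01:
            break  # the early-out real shaders have to be careful with
    return 1.0 - transmittance  # accumulated opacity for this pixel

# A ray through the sphere's centre picks up far more opacity than a miss.
hit = march((-2.0, 0.0, 0.0), (1.0, 0.0, 0.0))
miss = march((-2.0, 3.0, 0.0), (1.0, 0.0, 0.0))
assert hit > miss
```

Running this per pixel per frame is exactly why the density (noise) evaluation inside the loop is the thing worth precomputing.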
How dirty is this? [code]
map = macro data a:
    result = range 0
    each v in (get data):
        result | append (call a v)
    result

fold = macro data a b:
    accum = null
    each v in (get data):
        if accum == null:
            accum = v
        else:
            accum = call a accum b v
    accum

x = range 21 | map a: a*2 | fold a b: a+b
check x == 420
[/code] Rate from [img]https://facepunch.com/fp/ratings/programming_king.png[/img] (dirty) to [img]https://facepunch.com/fp/ratings/box.png[/img] ("I died looking at this") [sp]macros take a hidden parameter (the block after ":"), and "call" calls it while mapping each odd parameter to the name contained in the even one[/sp]
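For comparison, here's one reading of what those two macros expand to, written as plain Python (the helper names `my_map`/`my_fold` are mine, just mirroring the macro bodies):

```python
def my_map(data, fn):
    # Mirrors the `map` macro: start with an empty list, append fn(v) per element.
    result = []
    for v in data:
        result.append(fn(v))
    return result

def my_fold(data, fn):
    # Mirrors the `fold` macro: accum starts as null/None, then folds left.
    accum = None
    for v in data:
        accum = v if accum is None else fn(accum, v)
    return accum

# range 21 | map a: a*2 | fold a b: a+b
x = my_fold(my_map(range(21), lambda a: a * 2), lambda a, b: a + b)
assert x == 420  # sum(0..20) is 210, doubled
```

Read that way it's a pretty ordinary map/reduce; the "dirt" is mostly in the hidden block parameter and positional name binding, not the semantics.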
Skin rendering! [t]http://puu.sh/umDxW.jpg[/t] Showing the transmission/SSS: [t]https://puu.sh/umCy2.jpg[/t] Blending normals: [t]https://puu.sh/umCA3.jpg[/t] [t]https://puu.sh/umCAP.jpg[/t] Ambient only: [t]https://puu.sh/umCBV.jpg[/t] On vs off: [t]http://puu.sh/umCPX.jpg[/t]
[QUOTE=fewes;51883176][t]https://puu.sh/umCy2.jpg[/t][/QUOTE] AFAIK subsurface scattering occurs where there are no bones getting in the way of the light. I know the effect is exaggerated, but that dude's head is a big chunk of flesh with no skull bones :v:
[QUOTE=ichiman94;51883221]AFAIK Subsurf occurs where there's no bones getting in the way of light I know the effect is exaggerated but that dude's head is a big chunk of flesh with no skull bones :v:[/QUOTE] I have removed the man's skull and will be using it to showcase my bone renderer next.
In the midst of trying to fix my shadow mapping code, I realized that my .obj loader was processing normals oddly. I spent a good few hours across a couple of days looking through the code and shaders trying to pinpoint the problem. The issue turned out to be that I was creating the VBO for the normals with the size of a vec2 instead of a vec3, so it was cutting off a third of the coordinates. Which sadly means that this issue has been there for two years and I never noticed anything wrong with my models until now...
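That class of bug is easy to reproduce outside OpenGL: size a buffer for 2 components per normal instead of 3 and the upload silently drops a third of the data. A hypothetical sketch with plain lists standing in for the VBO upload (the names and the glBufferData-style slicing are mine, for illustration only):

```python
# Three unit normals, 3 floats each, interleaved the way a VBO expects.
normals = [0.0, 0.0, 1.0,
           0.0, 1.0, 0.0,
           1.0, 0.0, 0.0]
count = 3

COMPONENTS_VEC2 = 2  # the bug: buffer sized as if normals were vec2
COMPONENTS_VEC3 = 3  # the fix

# Standing in for glBufferData(..., count * components * sizeof(float), ...):
buggy_upload = normals[:count * COMPONENTS_VEC2]  # only 6 of 9 floats survive
fixed_upload = normals[:count * COMPONENTS_VEC3]  # all 9 floats

assert len(buggy_upload) == 6  # a third of the coordinates are cut off
assert fixed_upload == normals
```

The nasty part, as the post says, is that the remaining floats still parse as "normals", so the result looks plausibly wrong rather than obviously broken.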
I'm working on literally not curling up in a ball and crying while debugging code that compiles perfectly in both Code::Blocks and VS2015 rn. The story behind this: one of my required classes this semester is an advanced microprocessor dev course. It's also apparently an introduction to C++ for some reason, and idk why; it's not even a refresher course in C++. It's a twice-a-week 3-hour lecture/lab plus one 2-hour lab a week, where the 2-hour lab is literally C++ for beginners, while the 3-hour lecture/lab has content you'd only understand if you already knew at least some basic C++ or C, though most of it is based in assembly. Which would be fine, but the real horror show is... the fucking college uses [I][B]Bloodshed Dev-C++[/B][/I]... for all their C++ courses. Even the computer science guys have to use it apparently. What the fuck. WHY. You ever wondered what it was like to write C++ circa 2005 on a bootleg, ugly-ass, poorly designed program written because someone didn't want to pay for Visual Studio? Download Dev-C++. Oh, and remember to install it on an older version of Windows, because it's apparently hit or miss whether it will even compile code on Win8+.
[QUOTE=fewes;51883176]Skin rendering! -skin-[/QUOTE] I do think it's funny that those of us working with graphics stuff are all kind of working towards similar goals. Also, it looks great!
[QUOTE=Berkin;51878698]Well fuck. My external HDD with all my projects dating back to high school, years' worth of collected books and research papers, and countless other important things decided to become corrupt. Despite my best recovery efforts, I lost half my filenames and my entire directory tree which contained over 63,000 files. All of my uncommitted Rant work (which amounted to quite a lot) is pretty much fucking gone... along with everything else that wasn't pushed to GitHub. It was really difficult to gain back my motivation to program, and now I'm not sure where to go from here.[/QUOTE] [QUOTE=quincy18;51881263]You said your recovery efforts... Did you try external services? They can be pretty expensive but last time I used them they even recovered complete virtual machines back from a basically burned hard-drive. Some of those labs have amazing skills where they can even transplant the disks and predict missing bits here and there to get your files back. Again those services can be a bit expensive but if it gets your motivation back / all your hard work you might want to look into that.[/QUOTE] I'd donate to the "Get Berkin's Data Recovered" Fund
It's a devastating feeling to lose data and hard work. I'm sure most of us have been there, and if not then you're lucky to have avoided it so far! It won't help you, Berkin, but for everyone else: it's World Backup Day soon. Also, learn from GitLab: [url]http://checkyourbackups.work/[/url] Check that your backups actually [I]work[/I], too! If anyone is suddenly interested in backing stuff up, I can recommend CrashPlan. You can even back up to your own Linux server for free if you're so inclined, or to an external HDD. I prefer to use their hosting, but it all works through the same interface, and it's been quite reliable for me. What's nice about it is that it also versions all your files, so you can restore a specific version. Kinda like version control, but for your PC. [url]https://store.crashplan.com/store/[/url] I realise I sound like I'm being paid by them at this point...
Speaking of backups. Anybody got a "hopefully free" hosting/backup solution for my files?
[QUOTE=sarge997;51884633]Speaking of backups. Anybody got a "hopefully free" hosting/backup solution for my files?[/QUOTE] I think any backup solution worth having will have some cost associated with it, unless you're not concerned by size, in which case go for Google Drive. Cheapest in the long run is to have an external HDD and use Cobian Backup or CrashPlan, or just roll your own script if you don't trust anyone else.
GPU programming is weird - local variables are an example of how much you need to change your mindset. Local variables are [I]really[/I] not free, and while they can make your code readable, they positively massacre my GPU registers right now. A small change from this: [t]http://i.imgur.com/HTuLMQz.png[/t] To this (removing the switch case and the local variables "xs", "ys", and "zs"): [t]http://i.imgur.com/DlRBPv0.png[/t] Caused my GPU local memory usage to change like so: [t]http://i.imgur.com/KVCc1hn.png[/t] [t]http://i.imgur.com/ZBQw0c6.png[/t] That simple change saved me 1.5gb of storage to the SM registers, and 50 [I]million[/I] transactions to the SM memory! Holy fuck. It looks ugly as hell, and this is what most of my code is beginning to look like, but it's practically required or you just run out of memory in most of your kernels. Those variables are only used once though, and the benefits are clearly worth the obfuscation. Maybe I'm doing something wrong though, I dunno. This just feels weird, and what I end up doing to my code to save memory makes me feel filthy. I'd already gone through and removed a ton of local variables - the original version of this code was using ~6gb of local memory alone. [editline]27th February 2017[/editline] Note: it does slow things down, of course, but it's already so fast that it hardly matters - those changes saved me tons of memory and only cost me 2000us.
Small update with a nicer lighting setup, think I am satisfied with it for now! [img]https://puu.sh/un01l.jpg[/img]
[QUOTE=Trumple;51884655]I think any backup solution worth having will have some cost associated with it, unless you're not concerned by size in which case go for Google Drive. Cheapest long run is to have an external HDD and use Cobian Backup or CrashPlan or just roll your own script if you don't trust anyone else[/QUOTE] If you speak German (or have someone to translate the instructions for you), you can also use [url]https://www.heise.de/ct/artikel/c-t-Windows-Backup-Image-auf-einen-Klick-3119441.html[/url]. It creates incremental system drive backups that double as Windows installation media, so you can restore your computer (or another one) to exactly where you left off in one go, for free. The main downside is that you need a bootable hard drive to use the installer function and, if you do that, you can't store more than 2TB in a single backup.
[QUOTE=paindoc;51884733]GPU programming is weird - local variables are an example of how much you need to change your mindset. Local variables are [I]really[/I] not free and while they can make your code readable, they positively massacre my GPU registers right now. A small change from this: To this: (removing the switch case and the local variables "xs", "ys", and "zs") Caused my GPU local memory usage to change like so: That simple change saved me 1.5gb of storage to the SM registers, and 50 [I]million[/I] transactions to the SM memory! Holy fuck. It looks ugly as hell, and this is what most of my code is beginning to look like, but its practically required or you just run out of memory in most of your kernels. Those variables are only used once though, and the benefits are clearly worth the obfuscation. maybe I'm doing something wrong though, I dunno. This just feels weird and what I end up doing to my code to save memory makes me feel filthy. I'd already gone through and removed a ton of local variables - the original version of this code was using ~6gb of local memory alone. [editline]27th February 2017[/editline] note, it does slow things down of course. but its already so fast that it hardly matters - those changes saved me tons of memory and only cost me 2000us[/QUOTE] At least on AMD/OpenCL, the compiler is absurdly aggressive about removing redundant calculations; chances are the switch meant that something semi-constant involving xs/ys/zs (i.e. a previous common subexpression) could no longer be optimised, because it's unknown which branch will be taken. There was also a bug in your original implementation, I think: zs = [b]p.y[/b] - floorf(p.z) (the last set of any copy-paste strategy is the most likely to contain an error), which may have affected the optimiser. As far as I can tell, SM to cache means that your shader now stores less data to the cache (i.e. lower memory bandwidth usage), but I'm confused why this is an issue?
[QUOTE=paindoc;51884733]GPU programming is weird - local variables are an example of how much you need to change your mindset. Local variables are [I]really[/I] not free and while they can make your code readable, they positively massacre my GPU registers right now. A small change from this: [t]http://i.imgur.com/HTuLMQz.png[/t] To this: (removing the switch case and the local variables "xs", "ys", and "zs") [t]http://i.imgur.com/DlRBPv0.png[/t] Caused my GPU local memory usage to change like so: [t]http://i.imgur.com/KVCc1hn.png[/t] [t]http://i.imgur.com/ZBQw0c6.png[/t] That simple change saved me 1.5gb of storage to the SM registers, and 50 [I]million[/I] transactions to the SM memory! Holy fuck. It looks ugly as hell, and this is what most of my code is beginning to look like, but its practically required or you just run out of memory in most of your kernels. Those variables are only used once though, and the benefits are clearly worth the obfuscation. maybe I'm doing something wrong though, I dunno. This just feels weird and what I end up doing to my code to save memory makes me feel filthy. I'd already gone through and removed a ton of local variables - the original version of this code was using ~6gb of local memory alone. [editline]27th February 2017[/editline] note, it does slow things down of course. but its already so fast that it hardly matters - those changes saved me tons of memory and only cost me 2000us[/QUOTE] Seems to me like the switch would be a cause for concern more than the variables. If you do even a single tiny thing related to the switch that the compiler can't determine the result of, it won't be able to flatten it correctly and your code will run like crap. I managed to improve the speed of my cloud shader by a factor of 2 by removing a single if statement [I]designed to break out of the loop early to save time[/I].
Spent the better part of the day figuring out partial ordering of function templates (which also applies to class templates). At the stroke of midnight I had my eureka moment and just like Cinderella my ambiguous template deduction errors went away.
[B]Should we do another WAYWO for March? [/B]Would make it easier to keep consistent highlights. I can do the highlights again if nobody else wants to. [IMG]https://facepunch.com/fp/ratings/tick.png[/IMG] For Monthlies [IMG]https://facepunch.com/fp/ratings/cross.png[/IMG] For 2-month monthlies (Feb & March) [IMG]https://facepunch.com/fp/ratings/box.png[/IMG] For post-limit, shitty-highlights, never-ending, anarchy monthlies.
Started a programming job on Monday; so nice to be getting paid to do what I love. I now fully understand why people prefer coding on 9:16. I do have to deal with the servers being named after failed singers, and connecting to bieber every time I need to check a database schema.
FUCKING YES [url=https://streamcomputing.eu/blog/2017-02-22/nvidia-enables-opencl-2-0-beta-support/]NVIDIA have officially released preliminary OpenCL 2.0 support[/url] *screams and runs around with arms flailing in the air* [editline]85th February 2017[/editline] This post originally contained a lot more expletives. This means I can remove a dirty hack that's been lying around in my renderer for *years*, one that made the number of triangle fragments you render probabilistic rather than letting you render the actual number of triangle fragments you have. OpenCL 2.0 also supports depth buffers, which means that if I build an OpenGL backend it'll be super easy to interop, as well as a bajillion other things that come in the new OpenCL standard. Give me C++ kernels or give me death; I need templates so unbelievably hard (sadly that's 2.1). Also read_write support, so no more dirty double-binding hax.
[QUOTE=powback;51886114][B]Should we do an other WAYWO for March? [/B]Would make it easier to keep consistent hightlights. I can do the highlights again if nobody else wants to. [IMG]https://facepunch.com/fp/ratings/tick.png[/IMG] For Montlies [IMG]https://facepunch.com/fp/ratings/cross.png[/IMG] For 2-month monthlies. (Feb & March) [IMG]https://facepunch.com/fp/ratings/box.png[/IMG] For Post-limit, shitty highlights, never ending, anarchy monthlies.[/QUOTE] I'd say just make a monthly thread whenever, it's not like we're gonna stick to either format anyway.