• What are you working on? v66 - February 2017
    558 replies
[thumb]http://carp.tk/$/firefox_2017-02-25_09-18-03.png[/thumb] [thumb]http://carp.tk/$/firefox_2017-02-25_09-22-39.png[/thumb] Github, what are you doing?
[QUOTE=phygon;51872421]that doesn't sound right[/QUOTE] No, it doesn't. I'm assuming it's something to do with that particular implementation: I haven't really done extensive profiling yet. I'm trying really hard, for the sake of completing this project, to avoid my usual habit of over-optimizing instead of actually making features. I finished the Turbulence module, I'm working on the Curve module (that had struct-passing issues; I let users pass in curve points), added a ridged-multi generator, made it suck less to save modules to an image arbitrarily, and I just unified the majority of the wrapper functions by using an enum to decide on simplex vs. perlin (lots of code bloat without this).
[QUOTE=Sidneys1;51871601]Whoops, you're right, that should be: [code] void foo(char*** bar) { for (char* baz = **bar, (*bar) = 0; *baz; baz++, (*bar)++); } [/code] Good catch. And you're correct: [code] void strlen(char*** strptrptr) { for (char* str = **strptrptr, (*strptrptr) = 0; *str; str++, (*strptrptr)++); } void main() { char* str = "test"; char** strptr = &str; strlen(&strptr); long len = (long)strptr / 8; printf("The length of the string is %ld\n", len); } [/code] Now figure out why it doesn't work. ;)[/QUOTE] What on Earth are you trying to accomplish?
[QUOTE=cartman300;51872583][thumb]http://carp.tk/$/firefox_2017-02-25_09-18-03.png[/thumb] [thumb]http://carp.tk/$/firefox_2017-02-25_09-22-39.png[/thumb] Github, what are you doing?[/QUOTE] The first one is counted by bytes of code and the second is counted by files. This is the library used to calculate percentages: [URL="https://github.com/github/linguist"]https://github.com/github/linguist[/URL]
[QUOTE=Dr Magnusson;51872787]What on Earth are you trying to accomplish?[/QUOTE] Strlen in a single line. The length of str is stored in strptr after strlen returns. And it would work, except for one tiny little quirk in the syntax parser. [editline]25th February 2017[/editline] ** As a multiple of 8, because pointer arithmetic
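For anyone curious, here's a sketch of what that one-liner is reaching for, rewritten so it actually compiles (the names are mine, not from the quoted post). The "tiny quirk in the syntax parser" is that the comma inside a for-loop's init declaration can only introduce more declarators, it can't assign to an existing lvalue like (*strptrptr) = 0, so the assignment has to move out of the declaration:

```cpp
#include <cstdio>

// Hypothetical rewrite of the quoted one-liner: the assignment to the
// caller's pointer moves out of the for-init declaration, because a
// comma there can only declare more variables, not assign.
static void strlen_hack(char ***strptrptr) {
    char *str = **strptrptr;  // read the string before clobbering it
    *strptrptr = nullptr;     // the caller's char** now doubles as a counter
    for (; *str; ++str, ++(*strptrptr))
        ;  // each ++ on a char** advances it by sizeof(char*) bytes
}

// Mirrors the main() from the quoted post: strptr ends up holding
// len * sizeof(char*) reinterpreted as an address, so dividing by the
// pointer size recovers the length. Incrementing a null pointer is
// formally undefined behaviour, but it's exactly what the joke relies
// on, and it behaves as expected on typical compilers.
long strlen_hack_demo(const char *text) {
    char buf[64];
    std::snprintf(buf, sizeof buf, "%s", text);
    char *str = buf;
    char **strptr = &str;
    strlen_hack(&strptr);
    return (long)strptr / (long)sizeof(char *);
}
```

On a typical 64-bit build, strlen_hack_demo("test") comes back as 4, which is the "divide by 8" step from the post.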
[QUOTE=Sidneys1;51873012]Strlen in a single line. Length or str is stored in strptr after strlen returns. And it would work, except for one tiny little quirk in the syntax parser. [editline]25th February 2017[/editline] ** As a multiple of 8, because pointer arithmetic[/QUOTE] I can't tell if this is a serious attempt at implementing strlen, or if you're purposefully trying to write obfuscated code, but you're effectively just molesting the language. I don't know why, but I'm just getting irrationally angry when looking at that code, even though I believe it's done in jest.
[QUOTE=paindoc;51872656]No, it doesn't. I'm assuming its something to do with that particular implementation: I haven't really done extensive profiling yet. I'm trying really hard, for the sake of completing this project, to avoid my usual habit of over-optimizing in lieu of actually making features. Like I finished the Turbulence module, I'm working on the Curve module (that was passing struct issues, I let users pass in curve points), added a ridged multi generator, made it suck less to save modules to an image arbitrarily, and I just united the majority of the wrapper functions by using an enum to decide on simplex vs perlin (just lots of code bloat w/o this).[/QUOTE] So I might have missed this, but what does your project do / what is it for, exactly? It's pretty looking, but what's it going to be used for? I'm fairly curious.
[QUOTE=Dr Magnusson;51873033]I can't tell if this is a serious attempt at implementing strlen, or if you're purposefully trying to write obfuscated code, but you're effectively just molesting the language. I don't know why, but I'm just getting irrationally angry when looking at that code, even though I believe it's done in jest.[/QUOTE] I assume the point is trying to write it as a single line statement just for the fun of it.
Been working on an online 2D RPG for a while now titled Voslom. I've done some small-scale (~20 people) private tests with some players from my community and it has gone well! The game itself has rather simple gameplay, but it's pretty fun to play with friends. Some outlines of the game: Voslom is a 2D (side-scrolling) multiplayer online hack-n-slash game (it can support 100+ people per server right now), where you and other players roam the world fighting various monsters, gaining experience, and hunting loot. When you level up your character you get a stat point, which you can use to increase any of the various character stats, listed below:

Strength - Increases how hard you can hit and helps you penetrate higher defenses.
Dexterity - Increases your hit accuracy (fewer 0's, basically) and also helps you penetrate higher defenses.
Stamina - Increases your max stamina and the rate at which your stamina regenerates.
Vitality - Increases your max health.
Agility - Increases your move speed and jump height. It may be a good idea to either farm for agility-specific items or train this first, as you need a decent agility level to reach some areas in the world.
Defense - Can only be increased from gear; the more defense, the more likely your opponent will hit 0's or lower damage on you.

Various armors and weapons give different bonuses to these stats. In the game there are what I call "Abilities". Abilities in Voslom are mechanics that change how your character can interact with the world, and using them has a downside (such as consuming stamina). Each ability has its own required stats to unlock, and they all do different things. Do note, these will dynamically lock/unlock if you put on armor that gives you just enough stat bonuses to meet their requirements. Abilities will consist of things like Wall Jump, Wall Slide, Dash, etc.
Each of these has its own requirements to unlock; for example, Wall Jump could require 15 Agility, 10 Strength, and 2 Stamina. Don't worry, there will be armor pieces designed to give large boosts to specific stats, so you can hunt those down if you want to use an ability but don't have the right character stat build for it. These abilities open up a whole new aspect of the game and make it more exciting. One possible application is dungeons with puzzles requiring certain abilities: you can either grind out the levels to unlock an ability, or hunt down the gear you need to unlock it with your stat build. Do note the game is still fairly early in development, and I'm only working on it in my spare time, but it's coming along :). I don't have a whole lot of screenshots of the game just yet, but here are some: Login screen: [t]https://i.gyazo.com/93525293f3feeba6ff923fd1819dae5e.png[/t] Random screenshots: [t]http://s11.postimg.org/ks0uujzk3/voslom_screen1.png[/t] [t]http://i.imgur.com/tlkh1O5.png[/t] Random stuff: [vid]https://i.gyazo.com/c3808a6087bb71de9a7bae256f3f733e.mp4[/vid] [vid]https://i.gyazo.com/0758e3fc2ff0ab07e5f9cd74c758623d.mp4[/vid] [vid]https://i.gyazo.com/ffcdcec31bce9517eb0585cc70d27765.mp4[/vid] [vid]https://i.gyazo.com/733147f9be805097142d4fb8c9707305.mp4[/vid] I don't have many screenshots or videos of the older private tests (with more people), unfortunately; I didn't think to grab any during them. Although I didn't show them here, there are a number of different monsters to fight currently, most of which have a rare chance upon death to spawn a boss variant for that zone. These are essentially world bosses and have a chance to drop rare unique loot, so the more monsters you kill, the more likely a boss will spawn. There are also quite a few weapons and armor sets to collect from the various monsters (I didn't showcase very many).
A group of my testers even put together a Steam group so they could post announcements when a boss spawned. There was one really large and super-tanky boss that took all of us about an hour to kill; it was pretty painful. Oh, and for those who are interested, I developed the game in C#/XNA, then ported it to MonoGame.
Very cool. Is there a playable version around?
[QUOTE=phygon;51873118]So I might have missed this, but what does your project do / what is it for, exactly? It's pretty looking, but what's it going to be used for? I'm fairly curious.[/QUOTE] It's mostly for the hell of it, to be honest. But I've also found quite a bit of utility in using libraries like libnoise or AccidentalNoise in my own projects, and I was always kinda frustrated with their performance - it's not the fault of the library developers, but going single-threaded for noise generation and image writing is just kinda silly! My end goal is really just libnoise with CUDA acceleration. Building a library is hard though, especially with CUDA, as I don't want to exclude people from being able to use the library just because their GPU is out of date. It's especially tough because things like unified memory could simplify handling datasets (and make chaining modules together less rough on the GPU), and other features like half-precision support can greatly increase speed (not just in calculation, but also in transferring and storing intermediary data). I haven't added any preprocessor macros or switches yet, but I was going to start doing that today. I've finished most of the difficult modules, so I need my partner to work on the easy modules while I work on turning this into a better library. The modules left won't need much changing to parallelize and get working in CUDA, I hope. I know there are uses for something like this too, so I need to get the code base into a healthy state and then start "marketing" it, so to speak.
[QUOTE=paindoc;51873987]It's mostly for the hell of it to be honest. But I've also found quite a bit of utility in using libraries like libnoise or AccidentalNoise in my own projects, but I was always kinda frustrated with their performance - its not the fault of the library developer, but going single threaded for noise generation and image writing stuff is just kinda silly! My end goal is really just libnoise with CUDA acceleration. Building a library is hard though, especially with CUDA, as I don't want to exclude people from being able to use the library just because their gpu is out of date. It's especially tough because things like unified memory could simplify handling datasets (and make chaining modules together less rough on the gpu), and other features like half precision support can greatly increase speed (not just in calculation, but also in transferring data and storing intermediary data). I haven't added any preprocessor macros or switches yet, but I was going to start doing that today. I've finished most of the difficult modules, so I need my partner to work on the easy modules while I work on turning this into a better library. The modules left won't need much changing to parallelize and get working in Cuda, I hope. I know there are uses for something like this too, so I need to get the code base into a healthy state and then start "marketing" it, so-to-speak.[/QUOTE] Why use CUDA, out of interest? A CUDA GPU libnoise is great... except it only works on Nvidia, so in practice nobody is able to use it. This is what makes me so sad about CUDA: great idea, but it's strangling the industry now that we have OpenCL as well.
[QUOTE=Icedshot;51874198]Why use CUDA out of interest? A CUDA GPU libnoise is great... except it only works on nvidia, so in practice nobody is able to use it This is what makes me so sad about CUDA. Great idea, but strangling the industry now that we have OpenCL as well[/QUOTE] CUDA because I'm taking a course on it: therefore, I have to program [I]a[/I] project in CUDA. I'm not a big fan of the closed-source option over the open-source one; 3D printing has left me with an interest in supporting open over closed source when possible. However, part of the "problem" with CUDA being a commercial product is that Nvidia has invested quite a bit of time into it, so it seems to perform extremely well on Nvidia hardware. Memory allocation and being able to use cudaMallocManaged come to mind in this respect; I imagine allocations are more costly with OpenCL, both in terms of performance and dev time. I don't know enough about OpenCL though. I would like to port it, eventually. That means looking at HIP (another reason I'm keeping the featureset sparse) for a lighter port, or actually doing a full port to pure OpenCL. I have to look more into HIP though, and see what kind of features it supports, and whether there are options to change things depending on whether someone is compiling for CUDA or OpenCL. I don't use any really advanced features of CUDA tbh; about as advanced as I get is using texture memory. This is also all dependent on how much time I have available - I'm re-starting school full-time next quarter, it will be a bitch of a commute (about as bad as you can get in Seattle, rip), and I've got a doggo to tend to. So I imagine I'll be pretty limited for time starting in late March. All my code is open source though, and I'm going to make sure to give it a really permissive license. It's only because of a tremendous number of open source projects that I'm even able to get as far as I have (which only makes me more unhappy about CUDA tbh).
I wish AMD or someone would invest more in OpenCL though, as there's a lot I like about CUDA but a lot I don't (that being said, fucking vector operators, goddamnit! for fucks sake nvidia! I'm at 550 LOC in just operators :v) [editline]edit[/editline] Texture/surface functions and objects aren't supported by HIP, and CUDA array-type storage isn't either, so that pretty much removes it from being a contender, since that's the vast majority of what I use in my code. I can use preprocessor macros to switch features on/off based on the compiler, but this is still going to require pretty much writing a separate branch of my codebase for HIP/OpenCL. Which yeah, might have been expected, but this was a class project so I'm kinda trapped by that. And the other options/ideas for projects didn't interest me, tbh.
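For context on the operator complaint: CUDA's built-in vector types (float3, float4, and so on) ship with no arithmetic operators at all, so a library ends up hand-writing every operator for every type/scalar combination. A minimal sketch of that boilerplate in plain C++ (float3 here is a stand-in struct; in actual CUDA code each function would also be marked __host__ __device__):

```cpp
// Stand-in for CUDA's built-in float3 vector type.
struct float3 { float x, y, z; };

// Component-wise vector/vector and vector/scalar arithmetic. CUDA
// provides none of these out of the box, which is how a noise library
// ends up with hundreds of lines of pure operator boilerplate.
inline float3 operator+(float3 a, float3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline float3 operator-(float3 a, float3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline float3 operator*(float3 a, float s)  { return {a.x * s, a.y * s, a.z * s}; }
inline float3 operator*(float s, float3 a)  { return a * s; }
inline float3& operator+=(float3 &a, float3 b) { a = a + b; return a; }
// ...and so on for every operator, for every vector/scalar ordering,
// repeated across float2/float4/int2/int3/int4/..., which is how the
// line count balloons into the hundreds.
```

The cutil_math.h header linked below is essentially exactly this, pre-written.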
[QUOTE=paindoc;51874246]CUDA because I'm taking a course on it: therefore, I have to progam [I]a[/I] project in CUDA. I'm not a big fan of the closed-source option over the open-source option, as 3D printing has left me with an interest in supporting open vs closed source options when possible. However, part of the "problem" with CUDA being a commercial product is that Nvidia has invested quite a bit of time into it, so it seems to perform extremely well on Nvidia hardware. Memory allocation and being able to use cudaMallocManaged is something that comes to mind in this respect, I imagine allocations are more costly with OpenCL, both in terms of performance and dev time. I don't know enough about OpenCL though. I would like to port it, eventually. That means looking at HIP (another reason I'm keeping featureset sparse) for a lighter port, or actually doing a full port to pure OpenCL. I have to look more into HIP though, and seeing what kind of features it supports, and if there are options to change things depending on if someone is compiling for CUDA or OpenCL. I don't use any really advanced features of CUDA tbh, about as advanced as I get is using texture memory. This is also all dependent on how much time I have available - I'm re-starting school full-time next quarter, and it will be a bitch of a commute (about as bad as you can get in Seattle rip), and I've got a doggo to tend to. So I imagine I'll be pretty limited for time starting in late March. All my code is open source though, and I'm going to make sure to give it a really permissive license. Its only because of a tremendous amount of open source projects that I'm even able to get as far as I have been (which only makes me more unhappy about CUDA tbh). I wish AMD or someone would invest more in OpenCL though, as there's a lot I like about CUDA but a lot I don't (that being said, fucking vector operators godddamnit! for fucks sake nvidia! I'm at 550 LOC in just operators :v)[/QUOTE] Hmm. 
OpenCL 2.0 supports shared memory shenanigans, but as far as I know there are no performance benefits on hardware that doesn't have unified memory (AMD does on its HSA platforms like the consoles; Nvidia does not, although it may in the future). This is part of what makes me sad: OpenCL 2.x is a lot better than OpenCL 1.2, with all the flashy features (give me fucking device-side enqueuing, I've needed it for 4 years, for fucks sake :P), but *sigh* nvidia. From benchmarks that I've seen, properly ported CUDA applications are just as fast as OpenCL ones, but poorly ported applications may suffer performance problems because the two aren't identical, obvs. If you do decide to do an OpenCL port, hit me up and I'd be happy to help; plus I've spent 1000% of my life optimising OpenCL, so I'm sure I can help you make it go fastest like speed of sound. AMD actually does invest a lot in OpenCL (so do ARM and a bunch of other folk; even Intel is pretty alright at it these days), but unfortunately it's the other side that's being a huge troll. [quote=paindoc]Texture+Surface functions and objects aren't supported by HIP, and CUDA array-type storage isn't either, so that pretty much removes it from being a contender since that's the vast majority of what I use in my code. I can use preprocessor macros to switch features on/off based on compiler, but this is still going to require pretty much writing a seperate branch of my codebase for HIP/OpenCL. Which yeah, might have been expected, but this was a class project so I'm kinda trapped by that. And the other options/ideas for projects didn't interest me, tbh. [/quote] Hmm. Odd that HIP doesn't support texture memory; it can be useful (although frankly it's 100% been rubbish in every use case I've ever used it for, including hardware-accelerated linear interpolation and even bloody texturing! :P).
OpenCL 1.2 has read and write textures, and while in 1.2 you're technically only allowed to bind a texture as either read-only or write-only (as a kernel argument, e.g. __read_only image2d_t arg_name), you can pass the same texture as two separate kernel arguments, one read-only and one write-only, and do both :v:
We always used to joke that even though OpenCL is cross-platform, most things you write in it probably won't be. This was over 3 years ago (and I've heard things have gotten a lot better since) and some of the kernels were very big and complex. But what started off as "let's use OpenCL so we won't be stuck on one platform" ended up with a small list of supported configurations. It was very reminiscent of high feature-level OpenGL from 5+ years ago, where there was constantly different behavior between vendors, drivers, and GPU series. Only much more extreme.
Recently I've been working on [URL="https://github.com/bmwalters/facepunch-glua-codemirror"]a browser extension for the GMod Lua subforum[/URL], and while testing the linting functionality (which looks like this): [IMG]https://i.imgur.com/KXuCTD7.png[/IMG] I believed I had a bug because this [IMG]https://i.imgur.com/SCJYZSs.png[/IMG] certainly isn't valid Lua! It's CSS! Then I started to think about it... and I realized that that is actually totally valid Lua! Here's how you could make it run without any runtime errors: [lua] local body = function(tab) end local background = setmetatable({}, { __sub = function(self, other) end }) local color = { rgb = function(self, r, g, b) end } body { background-color: rgb(12,12,12) } [/lua] and here's a differently-formatted yet equivalent version which demonstrates how it works: [lua] body({ -- call body, passing a table/array as the first argument background - color:rgb(12,12,12) -- the first value in the table/array is the result of background - the result of color:rgb(12,12,12) }) [/lua] :chem101:
[QUOTE=paindoc;51874246](that being said, fucking vector operators godddamnit! for fucks sake nvidia! I'm at 550 LOC in just operators :v) [/QUOTE] [url]http://conteudo.icmc.usp.br/pessoas/castelo/CUDA/common/inc/cutil_math.h[/url]
[vid]https://zippy.gfycat.com/HideousUnfoldedEgret.webm[/vid] [vid]https://zippy.gfycat.com/TanHospitableAmericanredsquirrel.webm[/vid] Here are some unfortunately low-quality videos of my AI tests. Does anyone know a good host for webms that doesn't rape your video to shreds?
First contact with Unity and C#: made a cube from code: [QUOTE][IMG]http://blog.sigsegowl.xyz/wp-content/uploads/2017/02/unity_first_cube.jpg[/IMG][/QUOTE] and a while later Unity was broken: [QUOTE][IMG]http://blog.sigsegowl.xyz/wp-content/uploads/2017/02/when_u_break_unity.jpg[/IMG][/QUOTE] Job done... EDIT: wait... I got it: [IMG]http://blog.sigsegowl.xyz/wp-content/uploads/2017/02/seamles_sphere.jpg[/IMG] aaand multiple noise layers: [IMG]http://blog.sigsegowl.xyz/wp-content/uploads/2017/02/seamles_sphere_multi_noise.jpg[/IMG]
I can't believe Unity is fucking dead
[QUOTE=alien_guy;51876643][url]http://conteudo.icmc.usp.br/pessoas/castelo/CUDA/common/inc/cutil_math.h[/url][/QUOTE] but... what? how? I've included all the headers I could and never got any operators ;_;
[QUOTE=Xerios3;51876730] Here's some unfortunate low quality videos of my AI tests. Does anyone know a good host for webms that doesn't rape your video to shreds?[/QUOTE] [url]https://mixtape.moe/[/url]
Well fuck. My external HDD with all my projects dating back to high school, years' worth of collected books and research papers, and countless other important things decided to become corrupt. Despite my best recovery efforts, I lost half my filenames and my entire directory tree which contained over 63,000 files. All of my uncommitted Rant work (which amounted to quite a lot) is pretty much fucking gone... along with everything else that wasn't pushed to GitHub. It was really difficult to gain back my motivation to program, and now I'm not sure where to go from here.
So I'm working on precomputing an FBM noise table so a shader can look it up later, and I have to ask: what would be my best bet for storing the data in a way that takes up as little space as possible, while maintaining compatibility with less-than-state-of-the-art machines? I'm using Unity, which uses HLSL/Cg. I know compute buffers are a thing, but AFAIK they can't be used by hardware that's even remotely older, and 3D textures are horribly laggy in general when it comes to tile sampling, plus they can only have a depth of 512.
[QUOTE=phygon;51878829]So I'm working on precomputing a FBM noise table so a shader can look it up at a later date, and I have to ask; what would be my best bet for actually storing the data? I'm using Unity, which uses HLSL / Cg. What would be my best bet for storing the data in a way that will take up as little space as possible, while mantaining compatability on less than state-of-the-art machines? I know that compute buffers are a thing but afaik they can't be used by hardware that's even remotely older. 3D textures are horribly laggy in general when it comes to tile sampling, and they can only have a depth of 512.[/QUOTE] Do you absolutely need a 3D table? There are a number of ways of generating noise that don't require a table, and those work best for shaders imo. You can use the simplex noise from [URL="https://github.com/ashima/webgl-noise/tree/master/src"]here[/URL] (1d, 2d, 3d, 3d w/ derivative, and 4d) and then just plug that into your FBM accumulation loop. I use that code for generating the surfaces of my stars, and it works well enough. Avoiding a LUT is the best way to avoid hardware issues. You're already leveraging the parallelism of a GPU: don't hurt yourself by trying to wedge in a table or the newest/fastest feature. I've spent ages doing that with my CUDA library and it's not worth the effort. The GPU is so much faster at this stuff anyway. Ironically, the easiest method for me is using a LUT, since I can use __device__ __constant__ to place it in constant GPU memory and then just declare the array like you would normally declare a LUT. Not sure how to do that with a shader though, and my choice of LUT over the above simplex is because simplex gave me really fucking weird artifacting, for some reason.
If you really want to use a LUT, this article goes through generating texture LUTs using Cg, so it should be pretty easy to translate (bottom of the article): [url]http://www.decarpentier.nl/scape-procedural-basics[/url] In other news, I was multiplying by the wrong value when scaling the position in ALL of my fractal noise generators. This produced noise that wasn't ugly or bad or anything, it just didn't look like what you'd expect, and increasing the number of octaves didn't actually increase the "detail", so to speak. Now it does! :D [t]http://i.imgur.com/eLhdppu.jpg[/t] [t]http://i.imgur.com/YtNU5lT.jpg[/t] I also made gifs that go from a single octave to about the max you can see (12): [URL="http://i.imgur.com/7pqfbWD.gifv"]1[/URL], [URL="http://i.imgur.com/WfTKoEN.gifv"]2[/URL]. I put a ton of work into using as few local variables as possible, along with tuning the block size of a kernel launch to balance utilization against register spilling. Outputting a 4096x4096 image with 12 octaves completes in about 40 ms; a 16384x16384 image completes in about a second. This used to be a gamble, since kernels would sometimes mysteriously fail due to memory overflows, but that's gone away and the max image size is now limited only by your VRAM (and regular RAM, if you want to save the output to an image).
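For anyone unsure what the "FBM accumulation loop" mentioned above actually is: it's just a sum of noise octaves with increasing frequency and decreasing amplitude. A sketch in plain C++, using a cheap hash-based value noise as a stand-in for the simplex function from the linked webgl-noise sources (all names here are mine, for illustration):

```cpp
#include <cmath>
#include <cstdint>

// Cheap deterministic hash -> [0,1); stands in for a real noise basis.
static float hash_noise(int x, int y) {
    uint32_t h = (uint32_t)x * 374761393u + (uint32_t)y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (float)(h & 0xffffff) / (float)0x1000000;
}

static float mixf(float a, float b, float t) { return a + t * (b - a); }

// One octave of 2D value noise: bilinear blend of the four lattice
// corners with a smoothstep fade, so there are no grid-line creases.
static float value_noise(float x, float y) {
    int xi = (int)std::floor(x), yi = (int)std::floor(y);
    float xf = x - (float)xi, yf = y - (float)yi;
    float u = xf * xf * (3.0f - 2.0f * xf);
    float v = yf * yf * (3.0f - 2.0f * yf);
    float n00 = hash_noise(xi, yi),     n10 = hash_noise(xi + 1, yi);
    float n01 = hash_noise(xi, yi + 1), n11 = hash_noise(xi + 1, yi + 1);
    return mixf(mixf(n00, n10, u), mixf(n01, n11, u), v);
}

// FBM: each octave doubles the frequency (lacunarity) and halves the
// amplitude (gain); dividing by the summed amplitude keeps the result
// in [0,1) no matter how many octaves are used. Scaling the position by
// lacunarity each pass is exactly the step that, when done with the
// wrong value, makes extra octaves add no visible detail.
float fbm(float x, float y, int octaves,
          float lacunarity = 2.0f, float gain = 0.5f) {
    float sum = 0.0f, amp = 1.0f, norm = 0.0f;
    for (int i = 0; i < octaves; ++i) {
        sum  += amp * value_noise(x, y);
        norm += amp;
        x *= lacunarity;
        y *= lacunarity;
        amp *= gain;
    }
    return sum / norm;
}
```

In a shader you'd keep only the loop and swap value_noise for the simplex call; in CUDA the same loop runs per pixel inside the kernel.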
[QUOTE=Berkin;51878698]Well fuck. My external HDD with all my projects dating back to high school, years' worth of collected books and research papers, and countless other important things decided to become corrupt. Despite my best recovery efforts, I lost half my filenames and my entire directory tree which contained over 63,000 files. All of my uncommitted Rant work (which amounted to quite a lot) is pretty much fucking gone... along with everything else that wasn't pushed to GitHub. It was really difficult to gain back my motivation to program, and now I'm not sure where to go from here.[/QUOTE] Jesus.. That's one of my big fears. I feel really sorry for you. Just don't give up though. As bad as it seems now, you can keep going.
[QUOTE=paindoc;51878896]Do you absolutely need a 3D table? There are a number of ways of generating noise that doesn't require a table, and these ones work best for shaders imo. You can use the simplex noise from [URL="https://github.com/ashima/webgl-noise/tree/master/src"]here[/URL] (1d, 2d, 3d, 3d w/ derivative, and 4d) and then just plug that into your FBM accumulation loop. I use that code for generating the surfaces of my stars, and it works well enough. Avoiding a LUT is the best way to avoid hardware issues. You're already leveraging the parallelism of a GPU: don't hurt yourself by trying to wedge in a table or the newest/fastest feature. I've spent ages doing that with my CUDA library and its not worth the effort. The GPU is so much faster at this stuff anyways. If you really want to use a LUT, this article goes through generating texture LUTs and does so using Cg, so it should be pretty easy to translate (bottom of article): [url]http://www.decarpentier.nl/scape-procedural-basics[/url] In other news, I was multiplying by the wrong value when scaling the position in ALL of my fractal noise generators. This caused noise that wasn't ugly or bad or anything, it just didn't look like what you'd expect, and increasing the number of octaves used didn't actually increase the "detail", so to speak. Now, it does! :D [t]http://i.imgur.com/7pqfbWD.gif[/t] [t]http://i.imgur.com/WfTKoEN.gif[/t][/QUOTE] I [I]think[/I] I need a precomputed table. I'm trying to swing realtime volumetric clouds in VR and enough GPU resources are consumed by the raymarching itself that I'd like to offload the other expensive part of the shader to something that can just be stored and called back to. I mean, isn't that what professional game devs do? I was initially using the noise from the shader itself but I was wondering why I'd bother doing the exact same computations several hundred thousand times per frame when I didn't have to. 
[IMG]https://dl.dropboxusercontent.com/u/12024286/Dev%20stuff/Volumetric%20shader%20balls.gif[/IMG]
[QUOTE=phygon;51878946]I [I]think[/I] I need a precomputed table. I'm trying to swing realtime volumetric clouds in VR and enough GPU resources are consumed by the raymarching itself that I'd like to offload the other expensive part of the shader to something that can just be stored and called back to. I mean, isn't that what professional game devs do? I was initially using the noise from the shader itself but I was wondering why I'd bother doing the exact same computations several hundred thousand times per frame when I didn't have to. [IMG]https://dl.dropboxusercontent.com/u/12024286/Dev%20stuff/Volumetric%20shader%20balls.gif[/IMG][/QUOTE] What precomputed table are you referring to though, just the permutation table of various char values, the gradient vector table, or something like what is made in the Scape example I linked? I'm not sure I understand what you need.
[QUOTE=Berkin;51878698]Well fuck. My external HDD with all my projects dating back to high school, years' worth of collected books and research papers, and countless other important things decided to become corrupt. Despite my best recovery efforts, I lost half my filenames and my entire directory tree which contained over 63,000 files. All of my uncommitted Rant work (which amounted to quite a lot) is pretty much fucking gone... along with everything else that wasn't pushed to GitHub. It was really difficult to gain back my motivation to program, and now I'm not sure where to go from here.[/QUOTE] I had a hard drive in a very similar situation: all of my programming stuff, mostly, but also a lot of personal things. It died unexpectedly and I couldn't even bring myself to turn on my computer for a few days; I was devastated. I did, however, eventually find the strength to begin programming again (now saving everything to Dropbox), and I eventually re-made or came to terms with losing everything that was lost. Just yesterday, I plugged the hard drive in again for shiggles, and by some miracle it spun up and worked perfectly after months of gathering dust on my desk. Point being: losing a hard drive like that sucks, but one should never give up. You have my sincerest condolences; best of luck with recovery and recreation!
Doing some color space testing in OpenGL. Lighting in different color spaces and linear lighting can be difficult to explain but is very important, so to make it easier to understand I decided to do a little write-up I can refer to. There are a lot of resources on the net on how to do linear lighting, often including shader code as well. I think properly explaining [B]why[/B] it's so important is often overlooked though, so here goes! The basic gist of why it's necessary to do any sort of correction before doing lighting calculations has to do with how images (and therefore textures) are stored on a computer. When doing our lighting calculations we assume the inputs are [B]linear[/B], e.g. a pixel value of 0.5 (127 out of 255) equals 50% brightness. Non-linear inputs will produce an incorrect output (most easily spotted as inexplicably oversaturated values, particularly in specular highlights). The incoming light is most likely linear, since we are calculating it inside the shader (not true for image-based lighting, though!), but what about the textures? It turns out images are not stored in a linear color space but are [B]gamma encoded[/B]. [img]http://puu.sh/ulsg5.png[/img] This is because our eyes can differentiate between dark values more easily than between bright values, so to make the most of the storage space the values are modified slightly before saving an image (storing images in a linear format would require a lot more storage space to achieve acceptable quality). Whenever you view an image on your computer screen it has actually been gamma corrected first to regain its intended appearance! The curves used to go from gamma to linear space and back look like this: [img]http://www.tomdalling.com/images/posts/modern-opengl-07/gamma_correction.png[/img] In shader code this amounts to: [code] // Sample texture float4 color = texture(sampler, uv); // Convert to linear space color.rgb = pow(color.rgb, 2.2f); // Do lighting calculations... 
// Back to gamma space before returning! color.rgb = pow(color.rgb, 1 / 2.2f); [/code] Modern hardware even supports doing this automatically (sRGB texture formats and framebuffers), so no extra shader computation is necessary. Now let's look at what difference this can make. The left image is a naive lighting implementation; the right is gamma-corrected (linear) lighting + Uncharted 2 tonemapping. Both images were rendered with the same light properties and tweaked in Photoshop to appear similar. Some obvious improvements in the right image are richer blacks and less crushed whites (especially around the face), as well as a complete lack of oversaturated colors: [img]http://puu.sh/ulnrI.jpg[/img] The above images are very similar though, and the lighting environment can be tweaked somewhat to "correct" the incorrect lighting in the non-linear shader. If we bump up the exposure, however, things turn nasty: [img]http://puu.sh/ulnSy.jpg[/img] Yikes! That really does not look very realistic at all. It's very troublesome for an artist to work with such sensitive lighting, and bright lights in general are not viable. Let's bring it into linear space: [img]https://puu.sh/ulnTo.jpg[/img] Looks better, but still washed out. The oversaturated colors are gone but the image is still very bright. The problem here is that the dark values receive a lot of light and look good, but the bright values turn super bright! This is akin to how a camera sees the world (take a picture indoors by a window and the exterior will appear extremely bright). We could bring the exposure down to compensate, but that would rid us of the nice details in the dark areas! What we want instead is to try to mimic how our eyes work (sort of like an 'HDR' photo, which is really just several photos taken at different exposure levels mixed together). This is where tonemapping comes in. I'm using a version of the "filmic" tonemapping that was used in Uncharted 2 and can be found on the net: [img]https://puu.sh/ulnTN.jpg[/img] Now we're talking! 
Even though the exposure is really high, we still get a lot of details and nice colors. Working in a properly tonemapped, linear lighting environment not only improves visuals but also greatly reduces the time spent tweaking lights, materials etc. Even in terrible, terrible lighting setups it still looks a lot more realistic: [img]http://puu.sh/ulox1.jpg[/img]