• What are you working on? v67 - March 2017
[QUOTE=Socram;52393752]Not to drag this off topic (sorta?) thing on much longer, but how do people always immediately know it is him? Every time verideth posts with an alt someone calls it out within a couple posts, and this time, immediately after his first post. Teach me your shit poster detecting ways.[/QUOTE] Because who the hell claims to write an entire voxel engine in two days? My first attempt at just [I]rendering[/I] shitty voxels took me damn near a month to write. Engines take time. Also, voxel engines seem to be the thing he claims the most lately. Just look for a post with vague details where someone claims to write something in X days (where X <= 3), combine with a few carefully selected screenshots, and add in way too many exclamation points. Bam, babe, you've got a verideth [del]stew[/del] post goin!

On the topic of voxel "engines", I'm still in pain looking back at my old code. Desperately resisting the urge to say "rewrite!", which I've already done 3-4 times - it's hardly ever turned out to be worth the effort, and usually I end up over-estimating how much I need to re-implement, leaving me with a ton of header stubs with no sources and a poor idea of what in the hell I was planning to do. I think I was trying to do actual lighting last time, and trying to look towards networking, but I dunno tbh.
I finished up the first release of the Book Cover Generator Demo for Rant. You can download it [URL="https://github.com/TheBerkin/Rant.Demos/releases/tag/v1.0"]here[/URL] and try it out for yourself. I'd love some feedback on how I can make the covers better. [t]http://i.imgur.com/ArpPnRB.png[/t][t]http://i.imgur.com/ZSr93NR.png[/t][t]http://i.imgur.com/Tn0mp0y.png[/t] It is very simple to modify to include ~extra spicy~ [URL="https://github.com/RantLang/Foulmouth"]dictionaries[/URL], but I'll refrain from uploading that build here.
Give me the spice Berkin
[QUOTE=Socram;52393885]Give me the spice Berkin[/QUOTE] A very spicy file has appeared on the release page. Archive password is your post.
I'm starting to implement terrain in my engine. I've attempted this once before, a couple years ago, but I took too naive of an approach and essentially created a shitty model editor that was awful to use. To that end, I've implemented a new level object of terrain type. Internally it's a single quad subdivided 'x' amount of times, but it generates its own smooth mesh around those points fairly quickly. It also uses the same material system I use for my models, meaning each triangle can have its own material, and the materials can be loaded in at any time. I just need to implement some way for each face to have multiple materials and blending factors, for obvious reasons. Further, I need to create a new tool system. I currently have translation/scaling/rotating gizmo things that the user can drag around, but I should expand the system to allow for new tools, e.g. deforming terrain. [t]http://i.imgur.com/AjSoXdc.png[/t][t]http://i.imgur.com/cIbL41f.png[/t] There's so much I want to do but I have to sleep and work and stuff :frown:
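For anyone curious, the subdivided-quad part boils down to emitting an (n+1)x(n+1) vertex grid plus two triangles per cell. A minimal Python sketch of the idea (function name and layout are invented for illustration, not the engine's actual API):

```python
# Hypothetical sketch of a "subdivided quad" terrain mesh: a flat
# (n+1)x(n+1) vertex grid with triangle indices. Heights start at 0 and
# would later be deformed by terrain tools.

def make_terrain_grid(subdivisions, size=1.0):
    """Return (vertices, indices) for a quad split into subdivisions^2 cells."""
    n = subdivisions
    step = size / n
    # (n+1)*(n+1) vertices laid out row-major; y (height) starts flat at 0
    vertices = [(x * step, 0.0, z * step)
                for z in range(n + 1)
                for x in range(n + 1)]
    indices = []
    for z in range(n):
        for x in range(n):
            i = z * (n + 1) + x  # top-left vertex of this cell
            # two triangles per cell
            indices += [i, i + n + 1, i + 1,
                        i + 1, i + n + 1, i + n + 2]
    return vertices, indices
```

With 4 subdivisions this yields 25 vertices and 32 triangles; per-triangle materials would then just be a parallel array over the index buffer.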
I registered the domain klikmadding.dk (it's a stupid literal translation of clickbait), so now I'm working on trying to classify and categorize types of clickbait. I had a script running for 2 days retrieving RSS feeds from some of the biggest Danish online newspapers and tabloids, only to find out today that the titles they give articles in the RSS feed are completely tame and matter-of-fact, and way less clickbaity than the filth they post on their website, so now I have to write scrapers for them instead. I'm using n-grams to try and find common recurring phrases like "you won't believe what happened next" in the titles of the articles, and will hopefully be able to identify different types of clickbait based on that kind of signature in the title. Right now my sample size is so small, however, that recent events introduce so much noise that it's hard to find the underlying clickbait tendencies. Hopefully the noise will get filtered out in the end.
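The n-gram counting side of that is simple enough to sketch with the standard library (the example headlines below are invented, not from the actual dataset):

```python
# Minimal sketch of the n-gram idea: count recurring word sequences across
# headline titles to surface repeated clickbait signatures.
from collections import Counter

def ngrams(words, n):
    """All contiguous n-word sequences from a token list."""
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def common_phrases(titles, n=4, min_count=2):
    """Return (phrase, count) pairs that occur at least min_count times."""
    counts = Counter()
    for title in titles:
        counts.update(ngrams(title.lower().split(), n))
    return [(" ".join(g), c) for g, c in counts.most_common() if c >= min_count]
```

Running this over a scraped title corpus should float formulaic endings ("he should never have done that" and friends) to the top, even with naive whitespace tokenization.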
Basically scrape the whole of buzzfeed (doesn't matter what) and use that as the ground truth for training a machine learning algo. You will probably have no false positives after that
[QUOTE=LennyPenny;52396729]Basically scrape the whole of buzzfeed (doesn't matter what) and use that as the ground truth for training a machine learning algo. You will probably have no false positives after that[/QUOTE] I don't really want to automate it completely; the point is to identify stupid common trends like this one that was posted on r/denmark the other day and inspired this whole adventure in the first place: [img_thumb]http://i.imgur.com/fatwXr2.png[/img_thumb] The titles are "Puts head in crocodile's mouth - he should never have done that" and "Man amok in Burger King - he should never have done that". The idea being that I'll set up a subdomain specifically for this type of clickbait, like shouldnthavedonethat.klikmadding.dk, and then have a feed on it of the latest items across a whole range of websites, similar to another Danish website, [URL]https://rasende.dk/[/URL], which has a feed of the latest articles with the words "angry" or "raging" in them to gauge the anger in Denmark. Basically I'm in the exploration phase right now, where I'm manually looking for common types of clickbait; then I'll manually set up the classification later. I know it's dumb :v:
Not strictly waywo, but. Note to self: don't join startups. I'm now officially unemployed again (I left; sadly no cocaine this time) Edit: This means I'm back to making crappy indie games again. W00t!
UBOs are fun. [t]http://i.imgur.com/mqI0MKu.png[/t]
[QUOTE=Dr Magnusson;52396703]I registered the domain klikmadding.dk (it's a stupid literal translation of clickbait), so now I'm working on trying to classify and categorize types of clickbait. I had a script running for 2 days retrieving RSS feeds from some of the biggest Danish online newspapers and tabloids, only to find out today that the titles they give articles in the RSS feed are completely tame and matter-of-fact, and way less clickbaity than the filth they post on their website, so now I have to write scrapers for them instead. I'm using n-grams to try and find common recurring phrases like "you won't believe what happened next" in the titles of the articles, and will hopefully be able to identify different types of clickbait based on that kind of signature in the title. Right now my sample size is so small, however, that recent events introduce so much noise that it's hard to find the underlying clickbait tendencies. Hopefully the noise will get filtered out in the end.[/QUOTE] Make sure you also scrape the Open Graph and Twitter Card information. I wonder if some sites make those extra clickbaity since they're shared through social networks.
[QUOTE=Tamschi;52397900]Make sure you also scrape the Open Graph and Twitter Card information. I wonder if some sites make those extra clickbaity since they're shared through social networks.[/QUOTE] Oh wow, I had no idea this was a thing. Very interesting. Right now I'm just scraping the front pages (usually also extremely clickbaity) for titles and links, but I might look into deep scraping of content, like actually visiting the article links, in the future. Also I put the project up on gitlab: [URL]https://gitlab.com/klikmadding/scraper[/URL] The Tabloid class is super simple and easy to configure since it's just using bs4 under the hood to find and extract information; tabloids.py contains the configurations I'm using to scrape articles from Danish newspapers.
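For what it's worth, pulling those Open Graph / Twitter Card titles out doesn't strictly need bs4; a stdlib-only sketch of the idea (the sample HTML and class name are made up for illustration):

```python
# Sketch: extract og:title / twitter:title from a page's <meta> tags using
# only the standard library. A real scraper would fetch the article HTML
# first; here we just parse a given string.
from html.parser import HTMLParser

class MetaTitleParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.titles = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        # OG uses property=, Twitter Cards use name=
        key = attrs.get("property") or attrs.get("name")
        if key in ("og:title", "twitter:title"):
            self.titles[key] = attrs.get("content", "")

def social_titles(html):
    """Return a dict of the social-share titles found in the HTML."""
    parser = MetaTitleParser()
    parser.feed(html)
    return parser.titles
```

Comparing these against the on-page headline and the RSS title per article would show directly whether sites juice the social versions.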
[QUOTE=Dr Magnusson;52397960]Oh wow I had no idea this was a thing. Very interesting. Right now I'm just scraping the front pages (usually also extremely clickbaity) for titles and links, but I might look into deep scraping of content, like actually visiting the article links in the future. Also I put the project up on gitlab: [url]https://gitlab.com/klikmadding/scraper[/url] The Tabloid class is super simple and easy to configure since it's just using bs4 under the hood to find and extract information, tabloids.py contains the configurations I'm using to scrape articles from Danish newspapers.[/QUOTE] It would be interesting to have a table with the different titles on the website instead of just having a list. Personally I'd probably bold the entries that match the query and use some abbreviation if some of the fields are the same (e.g. 'Facebook: Same as Twitter'). Then again, I won't be able to read this anyway :v: On a somewhat related note, you can also check the meta tags for advertising topics. Some sites will for example tag disasters or other negative headlines so that advertising plugins can avoid disgruntling advertisers who don't want to see their brands associated with that.
Doing some voice recognition shit. Cortana isn't very good at recognizing names, but she's fine at recognizing whitebread names like Sarah Johnson. Problem is, if you say "Screenpop Sara Johnson", Cortana doesn't know if it's Sara or Sarah. So, I'm going to load a CSV file of all the contacts we need to use. Then, I want to get the name like this:
1.) Find all the contacts with the same name as what Cortana heard
2.) Get the Levenshtein distance of each contact from the input
3.) Try similarly spelled names (e.g. if Cortana heard "Christine", search for "Kristine" as well)
4.) Use the name with the lowest Levenshtein distance
Last names should help. While there could be 1000 Christines, there will probably be fewer "Christine Martin"s
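The matching steps above can be sketched in a few lines (plain Python; the contact names are made up, and a real build would probably use an edit-distance library rather than hand-rolling it):

```python
# Sketch of the lookup described above: compare what Cortana heard against
# a contact list with Levenshtein distance and take the closest full name.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def best_contact(heard, contacts):
    """Pick the contact with the smallest edit distance to the heard name."""
    return min(contacts, key=lambda name: levenshtein(heard.lower(), name.lower()))
```

The "Christine"/"Kristine" case comes out at distance 2, so it ranks well above unrelated names; running the distance over full "First Last" strings gives the last-name disambiguation for free.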
[QUOTE=proboardslol;52398014]Doing some voice recognition shit. Cortana isn't very good at recognizing names, but she's fine at recognizing whitebread names like Sarah Johnson Problem is, if you say "Screenpop Sara Johnson", Cortana doesn't know if it's Sara or Sarah. So, I'm going to load a CSV file of all the contacts we need to use. Then, I want to get the name like this: 1.) Find all the contacts with the same name as what Cortana heard 2.) Get the Levenshtein distance of each contact from the input 3.) Try similarly spelled names (e.g: if Cortana heard "Christine", search along "Kristine" as well.) 4.) Use the name with the lowest Levenshtein distance Last names should help. While there could be 1000 Christines, there will probably be fewer "Christine Martin"s[/QUOTE] If I'm not mistaken, you can load the contacts as grammar directly into Cortana. That way, the system should handle picking the best available name for you. I don't know if it will do automatic disambiguation prompts if necessary, though.
[QUOTE=Tamschi;52398056]If I'm not mistaken, you can load the contacts as grammar directly into Cortana. That way, the system should handle picking the best available name for you. I don't know if it will do automatic disambiguation prompts if necessary, though.[/QUOTE] As far as I know, it will do this with native skills (email, etc.) but I've never had any luck with it for programmed skills. I'll try on Monday though
[QUOTE=proboardslol;52398063]As far as I know, It will do this with native skills (email, etc.) but I've never had any luck with it for programmed skills. I'll try on monday though[/QUOTE] I admit I never got anything to work either. I don't know if that's because of the MSDN tutorial I followed or some issues with my system settings though.
GPUOpen/AMD released a [URL="https://github.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator"]Vulkan memory allocator[/URL] that I've been integrating (although rewritten to work more in line with what I need/want) into my common rendering code. I keep finding ways to run into the allocation limit really quickly in Vulkan, and I've been pleasantly surprised at how simple this code is and how intuitive it is in general. I will say that I'm not sure how to handle fragmentation, though. That would probably involve unbinding used resources, notifying the owner of that resource, then moving the destination of that resource and telling the owner to resend that data to the new location. I'm wondering if it'd just be wiser to split the pools on memory type and memory size, so that I have like 3 different pools for small-medium-large allocations - reducing fragmentation inherently as part of the design of the system, rather than trying to correct it later. idk, i'm just talking to myself at this point. Someone pointed out that you're too far gone when you write your own allocators :v:
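The size-class split is easy to prototype before committing to it in the renderer; a toy sketch of the bucketing logic (thresholds and names are invented, nothing Vulkan-specific):

```python
# Toy sketch of size-class pooling: route each allocation request to one of
# three pools by size, so same-sized blocks stay together and fragmentation
# stays bounded by design. Thresholds here are arbitrary illustrations.

def size_class(size):
    """Map an allocation size in bytes to a pool name."""
    if size <= 64 * 1024:          # <= 64 KiB
        return "small"
    if size <= 4 * 1024 * 1024:    # <= 4 MiB
        return "medium"
    return "large"

class PooledAllocator:
    """Stand-in allocator that tracks requests per size class."""
    def __init__(self):
        self.pools = {"small": [], "medium": [], "large": []}

    def alloc(self, size):
        pool = size_class(size)
        self.pools[pool].append(size)  # a real pool would suballocate here
        return pool
```

The nice property is that a freed slot in the "small" pool is always reusable by the next small request, so compaction/defragmentation passes become far less urgent.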
Today I tried my luck with [url=http://www.karlsims.com/rd.html]reaction-diffusion[/url]. It was a lot of guesswork until I suddenly got something that evolved chaotically, but still with some sort of an initial pattern. It will produce a different pattern every time you run it. [url=https://www.shadertoy.com/view/lssfD2]drl0014[/url] [url=https://www.shadertoy.com/view/lssfD2][img]https://my.mixtape.moe/vzcjfh.png[/img][/url] [vid]https://my.mixtape.moe/nykeom.webm[/vid] [vid]https://my.mixtape.moe/whvcft.webm[/vid]
[QUOTE=DrDevil;52398203]Today I tried my luck with [url=http://www.karlsims.com/rd.html]reaction-diffusion[/url]. It was a lot of guesswork until I suddenly got something that evolved chaotically, but still with some sort of a initial pattern. It will produce a different pattern every time you run it. [url=https://www.shadertoy.com/view/lssfD2]drl0014[/url] [url=https://www.shadertoy.com/view/lssfD2][img]https://my.mixtape.moe/vzcjfh.png[/img][/url][/QUOTE] that looks like a grayscale cross section of large intestine
Working on the library code for "Yelo Sauce"... [t]http://i.imgur.com/PGJvHz3.gif[/t] [i](Yeah, next time I'll record an MP4, my bad)[/i] Who's the fucking idiot that thought this was a good idea?!
[QUOTE=CarLuver69;52399007]Working on the library code for "Yelo Sauce"... [t]http://i.imgur.com/PGJvHz3.gif[/t] [i](Yeah, next time I'll record an MP4, my bad)[/i] Who's the fucking idiot that thought this was a good idea?![/QUOTE] A codebase I was working on essentially had this in 3 different places (i.e. each parameter duplicated 3 times, for 140 parameters). You had to manually add up each definition for a function that was just return {value}, and for each parameter you wanted to pass to a shader you had to duplicate it a further 2-3 times, as well as specifying it in the shader. 7 definitions per parameter to get it to a shader, including some manual adding up. I did not regret fixing that code. This was a general problem though: huge cascading if statements repeated in many places, often doing very similar things
[QUOTE=Icedshot;52399013]Codebase I was working on essentially had this in 3 different places (ie each parameter duplicated 3 times for 140 parameters), you had to manually add up each definition for a function that was just return {value}, and for each parameter you wanted to pass to a shader you had to duplicate it a further 2-3 times, as well as specifying it in a shader. 7 definitions per parameter to take it to a shader, including some manual adding up. I did not regret fixing that code. This was a general problem though, huge cascading if statements repeated in many places, often doing very similar things[/QUOTE] It really blows my mind people manage to write this kind of code without stopping to think "you know, maybe there's an easier way..." because let's face it, we all write hacky code at one point in our lives, but this is just blasphemy! There is literally no reason (unless you're sadistic) to write this spaghetti code. Ever. Oh and I'm realizing now marshalling isn't the proper way to do this, since not all exports are defined anyways. It literally broke my mind looking at this code...:v:
[QUOTE=CarLuver69;52399027]It really blows my mind people manage to write this kind of code without stopping to think "you know, maybe there's an easier way..." because let's face it, we all write hacky code at one point in our lives, but this is just blasphemy! There is literally no reason (unless you're sadistic) to write this spaghetti code. Ever. Oh and I'm realizing now marshalling isn't the proper way to do this, since not all exports are defined anyways. It literally broke my mind looking at this code...:v:[/QUOTE] Yup. It was 20 lines of code without exaggeration to fix the majority of the issue as well, although later I realised you could do it in even fewer with string_view for that particular problem :v: this was cpp though
Ahhhh, this looks so much better... [img]http://i.imgur.com/6iEvPhZ.png[/img] Just gotta write the code that grabs the attribute's value, and possibly move the exports to their own static class. This library definitely has potential!
[QUOTE=CarLuver69;52399027]It really blows my mind people manage to write this kind of code without stopping to think "you know, maybe there's an easier way..." because let's face it, we all write hacky code at one point in our lives, but this is just blasphemy! There is literally no reason (unless you're sadistic) to write this spaghetti code. Ever. Oh and I'm realizing now marshalling isn't the proper way to do this, since not all exports are defined anyways. It literally broke my mind looking at this code...:v:[/QUOTE] honestly with how much my work has involved pulling what i can from open-source projects, stuff like this doesn't surprise me anymore. You'll find little bits of really good code in a project and think you're safe and then BAM you get consecutive while(1)'s mixed with goto's mixed with for(;;)
I've literally been brought to this point because of how horrible the codebase is. [t]http://i.imgur.com/oDd15qB.png[/t] Hope the devs don't mind being roasted!
[QUOTE=paindoc;52398193]GPUOpen/AMD released a [URL="https://github.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator"]Vulkan memory allocator[/URL] that I've been integrating (although, rewritten to work more in line with what I need/want) in my common rendering code. I keep finding ways to run into the allocation limit really quickly in Vulkan, and I've been pleasantly surprised at how simple this code is and how intuitive it is in general. I will say that I'm not sure how to handle fragmentation, though. That would probably involve unbinding used resources, notifying the owner of that resource, then moving the destination of that resource and telling the owner to resend that data to the new location. I'm wondering if it'd just be wiser to split the pools on memory type and memory size, so that I have like 3 different pools for small-medium-large allocations - reducing fragmentation inherently as part of the design of the system, rather than trying to correct it later. idk, i'm just talking to myself at this point. Someone pointed out that you're too far gone when you write your own allocators :v:[/QUOTE] Pooling by type and size sounds like a good idea. Way better than going in after the fact and trying to move resources around. Honestly though, I couldn't disagree more with the sentiment about allocators - I mean, you're already writing as low-level graphics code as you can on PCs currently; writing allocators isn't really that insane. Writing your own general-purpose allocator probably is, but you should have very specific allocation needs anyways. As I said, for GPU resources, pooling by memory type and size is probably good.
[QUOTE=CarLuver69;52399007]Working on the library code for "Yelo Sauce"... - [i](Yeah, next time I'll record an MP4, my bad)[/i] Who's the fucking idiot that thought this was a good idea?![/QUOTE] I've inherited code so much worse
[QUOTE=BlkDucky;52399549]I've inherited code so much worse[/QUOTE] It's not the same level but I've been dumped with changing payroll recently at work and it's a fucking nightmare. Only part of our software package we had outsourced and every time I go deeper it just gets more and more fucked. New client wanted to keep the 6 character employee codes from their old system - can't do it, we're 5 character. Figure we'll generate new 5 digit codes and stick the old one in alternate code so they can still search by it - alternate code also arbitrarily 5 characters. It's also in its own table despite being a 1-1 relationship - a single text box per employee - what the actual fuck. But whatever, I'm moving that shit back to the employee table, making it longer and I'll just have to write a script to go out in the update to move the data over and eventually drop the bitch. Payroll, not even once.