• What are you working on? v67 - March 2017
I finally got it! I got screen space reflections to work [t]http://i.imgur.com/eC7ec2R.png[/t] Debugging this took a very long time. I am SO glad I took the time to build a shader editor in my engine, otherwise this would have taken at least 50 times longer. I can now start ironing out the artifacts and have it fall back on the other reflection systems. :excited:
[QUOTE=Karmah;51966769]I finally got it! I got screen space reflections to work [...][/QUOTE] [IMG]https://i.makeagif.com/media/9-28-2015/AkJS46.gif[/IMG] [QUOTE]Everything is chrome in the future.[/QUOTE]
[QUOTE=Adelle Zhu;51966929] [quote]Everything is chrome in the future.[/quote][/QUOTE] Haha, it's just a crude, naive implementation. I'm implementing the niceties now that take the surface properties into account (the one in my post just uses the surface normals).
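For anyone curious what such a naive SSR pass boils down to: the core is just a linear ray march through the depth buffer. This is not Karmah's shader, just a toy 1-D sketch in Python (function names and step sizes are made up for illustration) showing the hit test and the fall-back cases:

```python
def trace_ssr(depth_buffer, start_x, start_depth, dx, dz, max_steps=64):
    """Naive linear ray march against a 1-D depth buffer.

    Each step advances one texel horizontally (dx) and dz in depth;
    a hit is reported when the ray passes behind the stored depth.
    """
    x, z = float(start_x), float(start_depth)
    for _ in range(max_steps):
        x += dx
        z += dz
        ix = int(round(x))
        if not (0 <= ix < len(depth_buffer)):
            return None  # ray left the screen: fall back to env probe / skybox
        if z >= depth_buffer[ix]:
            return ix    # ray went behind the depth surface: reflection hit
    return None          # no hit within the step budget: fall back

# A flat "floor" at depth 5 with a wall at texel 8 sticking up to depth 2.
depth = [5.0] * 8 + [2.0] * 4
print(trace_ssr(depth, start_x=0, start_depth=4.0, dx=1.0, dz=-0.2))  # → 8
```

A real shader marches in screen space with a view-space depth compare and then refines the hit with a binary search, but the two fall-back branches above are exactly where "ironing out the artifacts" happens.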
[QUOTE=PelPix123;51966815]i lifted an old kontakt script to port most of the functions to kontakt. I baked the modeling down to IRs and samples and i modified the script to trigger the samples at the right time. If there ever is a kontakt release (doubtful, I'd rather release the original) obviously I'm going to write a new script from scratch Do any pianists want to give this a test drive?[/QUOTE] I'd love to get my hands on this, the cleanest piano sound I have access to is a 14 year old digital piano and it's still not very good. I try to record my upright piano sometimes but I only have 1 mic so results are limited.
[QUOTE=Tamschi;51964761]Facepunch destroys emoji when you edit posts. There are also lots and lots of them that don't fit into a single [I]char[/I] (or even two) in .NET, so it's probably a good idea to not use that type at all for (directly) manipulating text that's not purely Latin.[/QUOTE] Yes, it seems that's the reason TMP uses int-sized "characters"... Anyway, I got emoji working yesterday; now all that's missing is enabling them in chat and letting people spam them over their head in-game (I've seen that somewhere in a WAYWO thread here but I can't find it again). Next up: actually working on getting the tooltip to use all the new code I wrote, and "teams". Not sure if I should go with the quick and easy way of just putting all units of the same type in the same team, or actually make real teams where every team can be "at war" / "neutral" / "friendly" with each other. I finally got the swipe effect done as well: [url]https://streamable.com/hdceu[/url]
[QUOTE=Felheart;51967578]Yes, it seems that's the reason TMP uses int-sized "characters"... [...][/QUOTE] That's unfortunately not enough. Distinct (not splittable) visual fragments in text can be arbitrarily long in Unicode.
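To make Tamschi's point concrete: even int-sized code points aren't enough, because a single visual "character" can be a whole sequence of them. A quick sketch (in Python, but the counts carry over directly to .NET, whose char is one UTF-16 code unit):

```python
# A family emoji: three person emoji joined by zero-width joiners (U+200D).
# On screen it renders as ONE glyph, yet it spans many code points/units.
family = "\U0001F468\u200D\U0001F469\u200D\U0001F467"

code_points = len(family)                           # Python counts code points
utf16_units = len(family.encode("utf-16-le")) // 2  # .NET's char is a UTF-16 unit

print(code_points, utf16_units)  # → 5 8
```

So an int per "character" handles any single code point, but grapheme clusters like this one still need dedicated segmentation (UAX #29) to avoid splitting them.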
[QUOTE=Ott;51967325]I'd love to get my hands on this, the cleanest piano sound I have access to is a 14 year old digital piano and it's still not very good. I try to record my upright piano sometimes but I only have 1 mic so results are limited.[/QUOTE] Count me in as well, I only have one good Kontakt piano that I really enjoy using and even it can sound a bit stale if I use it too much.
[QUOTE=PelPix123;51966815][...] Do any pianists want to give this a test drive?[/QUOTE] I play piano and my brother went to music college to play the piano; we would both quite like a try!
So we finally wrapped up our small game project for uni. We are going to polish up some minor stuff and then release it on itch.io for free, but in the meantime here is a trailer. Made with OpenGL and SFML. We are all programmers, so I ended up with the task of making the majority of the art, but you know what they say: programmer art best art. We had to run our games on some shitty little Intel NUCs, so we were a bit limited when it came to performance, and none of us are optimization wizards just yet, but for what it is I think it still looks nice and runs really well with four players at around 100 fps at 1920x1080. (Even more on proper PCs, unsurprisingly.) [video=youtube;s5mDvwd2E1M]https://www.youtube.com/watch?v=s5mDvwd2E1M[/video] [URL="http://gameranx.com/wp-content/uploads/2016/10/no-mans-sky-08-18-16-1.jpg"]BILLIONS[/URL]
I have to make a design decision for my ecs and I was hoping to get some feedback from you guys about what you think would be the best approach. I laid it out in an issue so if anyone can take a look and weigh in that would be great [url]https://github.com/Yelnats321/EntityPlus/issues/13[/url]
Ported across all the hand animation stuff into the game. It wasn't too bad. I had a small issue where interrupting animations with a new animation caused some snapping (no time to smooth everything out, or something like that), so I swapped it to simply append the next animation onto the list. Now it works. I have an idle animation and a nice finger-tapping animation, and I can add animations *fairly* easily too, which is nice. Edit: Turns out you can't embed Dropbox images anymore? Edit 2: I forgot that Dropbox decided to arbitrarily break all links for no adequate reason, other than to make more money (srsly fuck you dropbox). RIP everything I've put on Facepunch for the past 5 years, which is largely pictures hosted on Dropbox. Couldn't they have at least preserved existing links? You can still bloody share them, the links have just all changed >.> . Anyone know any good alternatives? Edit 3: [media]https://www.youtube.com/watch?v=QJ1x9EBR1pw[/media] Because I know if you don't get your hand animation fix, you'll all go spare. I may need to elongate the 'wrists' a little. Edit 4: Thanks Tamschi!
[vid]https://fat.gfycat.com/CarelessBasicAustraliancattledog.webm[/vid] Added walljumping/sliding. Still gotta add traction loss when you hit the wall while descending. You can test out my semi-functional character physics for yourself [URL="https://chonks.itch.io/project-fulcrum-alpha-101?secret=0JBFIBYVlidA6Htp9MByM7rhk"]here[/URL]. Also, [QUOTE=Icedshot;51971276] Turns out you can't embed dropbox images anymore? Edit 2: I forgot that dropbox decided to arbitrarily break all links for no adequate reason, other than to make more money (srsly fuck you dropbox). RIP everything I've put on facepunch for the past 5 years, which is largely pictures hosted on dropbox. Couldn't they have at least preserved existing links? You can still bloody share them, the links have just all changed >.> . Anyone know any good alternatives? [/QUOTE] I feel your pain. I've had dozens of embedded images nuked :(
I added several fade-out parameters so that the SSR doesn't cut off too sharply. I've also made it possible to fall back on local environment probes. Right now they stencil out and fill in similarly to lights, but I think I can optimize it further. Lastly, once I reimplement my skybox, I will have the local env maps themselves fall back onto the skybox for reflections. The screen space reflections and env probes line up really closely. In the following example I have a probe haphazardly placed somewhere near the puddle; its bounding box doesn't extend all the way to the lion, so everything between the lion and the pillar appears flat, but the curtain and the pillars appear in the correct place. [t]http://i.imgur.com/aUQT1cG.png[/t] [t]http://i.imgur.com/6oWOkCX.png[/t] Unfortunately, I haven't managed to work the BRDF into the SSR, so the hack I found appears a bit too dark compared to the env map counterpart.
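That SSR → local probe → skybox priority chain can be resolved per pixel with a couple of blend weights. A rough sketch; the edge-fade margin and all names here are invented for illustration, not taken from Karmah's engine:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def edge_fade(u, v, margin=0.1):
    """1.0 in the screen interior, fading to 0 within `margin` of each edge,
    so SSR blends out instead of cutting off where rays leave the screen."""
    def axis(x):
        return max(0.0, min(1.0, min(x, 1.0 - x) / margin))
    return axis(u) * axis(v)

def resolve_reflection(ssr_color, probe_color, sky_color, hit_uv, ssr_hit, probe_valid):
    """Blend SSR over the local probe over the skybox, in priority order."""
    fallback = probe_color if probe_valid else sky_color
    if not ssr_hit:
        return fallback
    w = edge_fade(*hit_uv)  # SSR weight shrinks near the screen border
    return tuple(lerp(f, s, w) for f, s in zip(fallback, ssr_color))

# SSR hit in the screen centre: the SSR colour wins outright.
print(resolve_reflection((1, 0, 0), (0, 1, 0), (0, 0, 1), (0.5, 0.5), True, True))
```

In a deferred renderer the same logic usually runs as a full-screen pass, with the probe term already stenciled in the way lights are.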
It still looks good in the first one. If it was a dark liquid like oil or grease, I wouldn't expect to see much of the lion wall.
[QUOTE=Icedshot;51971276][...] [vid]https://www.youtube.com/watch?v=QJ1x9EBR1pw[/vid] [...][/QUOTE] YouTube is [noparse][media][/noparse] or [noparse][hd][/noparse]. [QUOTE=chonks;51971365][vid]https://gfycat.com/CarelessBasicAustraliancattledog[/vid] [...][/QUOTE] You have to grab the video url from the Gfycat page (via context menu), in your case either [url]https://thumbs.gfycat.com/CarelessBasicAustraliancattledog-mobile.mp4[/url] (small) or [url]https://fat.gfycat.com/CarelessBasicAustraliancattledog.webm[/url] (HD).
[img]http://i.imgur.com/EuOY4z2.gif[/img] Starting to add some modules to my asset so users can dive right in to making their games and not worry about the back-end. I'm starting with dynamic messages, in case you need to tell your player something after you've already deployed your game.
[QUOTE=PelPix123;51972312][...] [vid]http://picosong.com/cdn/50063b00846b208d4d097f36e082e385.mp3[/vid][/QUOTE] Now that one just plain doesn't allow hotlinking afaict. What's going on today, were you all affected by the Dropbox public link discontinuation? :v:
[QUOTE=CommanderPT;51969869]So we finally wrapped up our small game project for uni. [...][/QUOTE] That looks absolutely sick. Would be a great mode in Audiosurf.
[QUOTE=CommanderPT;51969869]*supernautic racing*[/QUOTE] This is great stuff. Also, that music is an amazing addition to that trailer - try watching without sound then again with sound.
[QUOTE=Icedshot;51971276][...] [...] Anyone know any good alternatives? [...][/QUOTE] For specialised video hosting, I'd say YouTube, Vimeo or Gfycat. Gfycat gives you the most flexibility, but they have terrible compression and I don't know whether they preserve sound at all. Vimeo probably has the best quality (looks pretty much lossless to me), but their video file size is large, so it's not easy to watch in HD on a connection as slow as mine. For images, the usual suspect is Imgur. You can also embed their galleries with [noparse][media][/noparse] tags here, I think. [URL="http://alternativeto.net/software/imgur/"]There are a lot of options depending on your requirements, though.[/URL] If you're promoting things that way anyway, then Twitter is actually decent for throwing a video or a set of images online. Just note that images [I]need transparency[/I] to not get utterly wrecked by compression. GIFs are replaced by strongly compressed mp4, I think. On the flip side, videos you upload keep their sound (once the user enables it), have decent quality, and the allowed video length should easily be enough for WAYWO purposes now. The main drawbacks are that the embed widget Facepunch uses is fairly small, doesn't work if the user has the 'Twitter Button' blocked, and iirc doesn't let you zoom without visiting their website. For hotlinking files verbatim you'll need an anime-hoster, because that's a terrible business strategy and no one else is insane enough to do it for free for the general public. For large verbatim [I]downloads[/I], Mega is probably by far the most hassle-free free option, even though it has been shadily taken over. You can also use Dropbox and Google Drive for these, but the latter probably rate-limits, and their interfaces are less convenient for the downloader. (Dropbox lets you copy the new link type in one click from the Explorer context menu now, though, so for uploading it's still the easiest.)
If you have spare money, then renting some space from Amazon or whatever is most likely the most economical option, with great versatility, but I don't know whether they charge for bandwidth. [editline]edit[/editline] Now with 100% less accidentally shilling Berkin's new Patreon page[URL="https://www.patreon.com/login?ru=%2FBerkin"].[/URL]
(woo first post in the programming subsection!) Currently working on a fairly basic C++ program for shits and giggles to test how surnames are lost/gained over time, after watching [URL="https://www.youtube.com/watch?v=5p-Jdjo7sSQ"]this[/URL]. Do note this is really, really rudimentary and I just based it on roughly how surnames work in the US, to my understanding. In general it flows like this:

1.) The program starts with (number) of "people" who are all assigned a random gender and surname from a big list of surnames. Duplicates are intentionally possible.
2.) You set the total number of generations to simulate, whether names can merge, and whether or not to display all the people with their corresponding names and genders at the end of each generation.
3.) Within each generation, every person in the list is run through and given a random person to marry (unless they've already been married) until everyone has been married off. Two "children" of random gender take their place. What happens next for surnames depends on the couple's genders:
3a.) If person A is male and B is female, both children take A's name.
3b.) Vice versa.
3c.) 10% chance to ignore normal naming conventions and take the mother's name.
3d.) If person A's gender = person B's gender, the name is chosen randomly (between A/B).
3e.) 10% chance to ignore all other options and combine the first half of A's name with the second half of B's name (for those scenarios where they combine last names).
4.) The two children take their place and the next generation happens.

(Note: for simplicity's sake, assume a same-sex match = adopted kids from an outside source, given the parents' surname. I don't know if it works exactly that way IRL.)

So far it's done exactly what the video claimed: a large number of names die out within a few (tens of) generations. Sometimes a bunch of weird combinations happen and oddball names get glued together, or others "technically" get revived.
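The steps above can be sketched in a few lines. This is Python rather than the author's C++, the pairing scheme is simplified (shuffle and pair neighbours rather than repeated random draws), and the helper names are made up, but the naming rules follow the list:

```python
import random

def next_generation(people, rng):
    """One generation: pair everyone off; each couple is replaced by two children.

    people: list of (surname, gender) tuples, gender "M" or "F".
    """
    pool = people[:]
    rng.shuffle(pool)
    children = []
    # Pair off neighbours in the shuffled pool (an odd person out is dropped).
    for (name_a, sex_a), (name_b, sex_b) in zip(pool[::2], pool[1::2]):
        roll = rng.random()
        if roll < 0.1:
            # Rule 3e: combine first half of A's name with second half of B's.
            surname = name_a[: len(name_a) // 2] + name_b[len(name_b) // 2 :]
        elif roll < 0.2:
            # Rule 3c: take the mother's name regardless of convention.
            surname = name_a if sex_a == "F" else name_b
        elif sex_a == sex_b:
            # Rule 3d: same-sex couple, pick a name at random.
            surname = rng.choice([name_a, name_b])
        else:
            # Rules 3a/3b: conventional, take the father's name.
            surname = name_a if sex_a == "M" else name_b
        children += [(surname, rng.choice("MF")), (surname, rng.choice("MF"))]
    return children

rng = random.Random(0)
pop = [(n, rng.choice("MF")) for n in ["Smith", "Jones", "Lee", "Diaz"] * 25]
for _ in range(30):
    pop = next_generation(pop, rng)
print(sorted({name for name, _ in pop}))  # surviving (and merged) surnames
```

Run repeatedly, the surviving-name set shrinks over generations, which is the die-out effect the video describes (a Galton-Watson extinction process).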
For the past few weeks I've been working on a game for the Vive/VR. There isn't enough gameplay yet to show off, but one aspect I have been hard at work on, and an area I think I have made a lot of improvements to, is lighting. I'm using Valve's Lab Renderer for Unity, and while it is pretty sweet, there are some serious drawbacks to it, the biggest one being very limited shadows.

Traditionally in a modern engine with real-time lighting, for sun shadows you render several shadow buffers at increasing sizes, essentially giving you high-res shadows close to the camera and lower-res shadows at a distance. This technique is called Cascaded Shadow Mapping, and its major drawback is that for every shadow map (usually around 3-5), you need to render the scene to a depth buffer again. Valve have (understandably) opted out of doing this with The Lab Renderer, and instead recommend (at least from what I gather) using a single, huge shadow map. The problem with that is resolution: high-resolution shadows in VR look fucking stunning. It's hard to describe just what a boost in spatial awareness you get from them, so obviously that's something I'd like to have. I want high-resolution real-time shadows close to the camera; I'm not particularly bothered about shadows being high-res or real-time at a distance, but I am unable to use CSM.

Something Unity and The Lab Renderer support is dual light maps, where direct lighting is stored in one map and indirect lighting in another. After tinkering a bit with Valve's light shader, here's a very simple solution I came up with that I think works great: bake direct lighting for [B]only a single directional light: the sun[/B], treat the [B]direct[/B] part of the light map as a shadow map for the sun, then fade to high-res real-time shadows close to the camera. After testing this for a while, I can confidently say that I have finally found a solution to my biggest gripe with graphics in VR.

I had almost given up on real-time shadowing up until a few days ago, but with this system you get good performance and nice visuals. The drawback, of course, is that you cannot bake any other lights to the light map. It is possible, but it will look odd, as the lighting information will fade away when you get close to the light. Works for distant lights you won't get close to, though, I suppose.

[img]http://puu.sh/uNAda.jpg[/img]

Even if you set the fade distance to something really small (= higher-res shadows), you don't really think about it much, since you rarely move a lot in VR without teleporting anyway.

Another gripe I had with The Lab Renderer is that for point lights, shadows are rendered as [B]6[/B] (!) spotlights. This absolutely [B]wrecked[/B] performance whenever I used them, as it required a whopping 6 additional depth buffer passes per light. I figured I would use single spot lights for real-time shadows indoors, but they look really lackluster since they only light the environment in a cone. For light sources like a ceiling chandelier, it would be possible to use a combination of a spot light with shadows and a point light for omni-directional lighting, but I really did not want to waste one of the 18 available lights in The Lab Renderer. Instead, I came up with what I call [B]hybrid lights[/B]: a hybrid light acts like a point light (lights the environment in all directions), [B]but casts shadows in a single direction only[/B]. Shadows are faded using the spotlight cone, so there is no harsh drop-off where shadows disappear. This has turned out to be incredibly useful for ceiling lights, as well as handheld light sources (if you hold a torch, for instance, it casts shadows only in front of you).

Below is a scene which has 1 directional sun light casting a 4096x4096 shadow map, as well as 4 hybrid lights, each casting a 1024x1024 shadow map. The scene is fully lit with light probes, indirect light maps and a reflection probe. The wall/floor materials each sample 3 unique textures as well as 2 detail textures, and evaluate full Blinn/Phong shading. The scene runs at full speed (90fps) at about 1.2x resolution with 4x MSAA on a GTX 1070.

[img]https://puu.sh/uNy5O.jpg[/img]

I plan on doing a little write-up on how to get this running, and also sharing the modified shaders/scripts so anyone interested can try this out. I really do think it greatly improves lighting options for VR in Unity. Also, now to actually make a game.
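The baked-to-realtime fade described above comes down to a single distance-based blend between two shadow attenuation terms. A hedged sketch, not the actual shader: the smoothstep window and the function names are illustrative only:

```python
def smoothstep(e0, e1, x):
    """Standard GLSL-style smoothstep: 0 below e0, 1 above e1, smooth between."""
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3.0 - 2.0 * t)

def sun_shadow(baked, realtime, dist, fade_start=8.0, fade_end=10.0):
    """Use the realtime shadow near the camera, the baked direct lightmap far away.

    baked / realtime are shadow attenuation terms in [0, 1];
    dist is the fragment's distance from the camera.
    """
    w = smoothstep(fade_start, fade_end, dist)  # 0 near the camera, 1 at distance
    return realtime + (baked - realtime) * w

print(sun_shadow(baked=1.0, realtime=0.0, dist=0.0))   # → 0.0 (fully realtime)
print(sun_shadow(baked=1.0, realtime=0.0, dist=20.0))  # → 1.0 (fully baked)
```

Because the indirect lightmap is stored separately in a dual-lightmap setup, only the direct term participates in this blend; bounce lighting stays untouched at every distance.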
[QUOTE=PelPix123;51966815]i lifted an old kontakt script to port most of the functions to kontakt. I baked the modeling down to IRs and samples and i modified the script to trigger the samples at the right time. If there ever is a kontakt release (doubtful, I'd rather release the original) obviously I'm going to write a new script from scratch Do any pianists want to give this a test drive?[/QUOTE] If you still need help - I'm in. I'm not a professional player but I can play somewhat decently.
[QUOTE=fewes;51975267]For the past few weeks I've been working on a game for the Vive/VR. [...] Instead, I came up with what I call [B]hybrid lights[/B]: Acts like a point light (lights the environments in all directions), [B]but casts shadows in a single direction only[/B]. [...][/QUOTE] Those hybrid lights are pretty cool. When I was making my new game, I found early on that realtime point lights with shadows really shit on performance, so I assume it's because of the 6 spotlights for shadows. I ended up switching to fully-baked lighting because of that. Also, those hybrid lights would be super useful for every game, not just VR. Also, with the baked-realtime shadow fading, I assume GI and light bounce would be lost when you get close and transition to realtime shadows? Unless there's realtime GI/light bounce that I'm not aware of.
[QUOTE=Pelf;51976967]Those hybrid lights are pretty cool. [...][/QUOTE] In my engine I use dual paraboloids for real-time point light shadows. The trade-off in quality can be mitigated by shadow filtering and using a bigger map. The only problem is a seam around the border where the two maps meet. I've been thinking of using this approach instead of cubemaps for local environment probes, but obviously updating at a much reduced frequency, say every X seconds or frames or whatever. Also, real-time GI does exist, like radiance hints, which can be done in a similar fashion to cascaded directional lights (I've implemented this), but other techniques exist too, like LPVs. Both rely on reflective shadow maps: shadow maps that also store radiant flux, aka albedo.
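The dual-paraboloid mapping mentioned above is compact enough to sketch: each unit direction projects onto one of two paraboloid faces covering a hemisphere each, and the seam problem falls exactly where the hemispheres meet. A toy version in Python (names invented for illustration; a shader would do the same arithmetic per fragment):

```python
import math

def paraboloid_uv(direction):
    """Project a unit direction onto a dual-paraboloid map.

    Returns (face, u, v): the front map covers the +z hemisphere, the back
    map the -z hemisphere; u, v lie in [-1, 1]. The seam sits where d.z is
    near 0: adjacent directions land on different maps, and filtering can't
    reach across the two textures, hence the visible border.
    """
    x, y, z = direction
    if z >= 0.0:
        return ("front", x / (1.0 + z), y / (1.0 + z))
    return ("back", x / (1.0 - z), y / (1.0 - z))

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

print(paraboloid_uv((0.0, 0.0, 1.0)))             # → ('front', 0.0, 0.0)
print(paraboloid_uv(normalize((1.0, 0.0, 0.0))))  # on the seam itself
```

Two maps instead of a cubemap's six faces is the whole appeal for point-light shadows and low-frequency probes; the cost is distortion toward the rim and that unfilterable seam.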
[QUOTE=Pelf;51976967] Also, with the baked-realtime shadow fading, I assume GI and light bounce would be lost when you get close and transition to realtime shadows? Unless there's realtime GI/light bounce that I'm not aware of.[/QUOTE] Because Unity can store direct and indirect baked lighting in separate light maps, I've made it so indirect lighting is evaluated like normal and does not fade away. There is realtime GI in Unity (Enlighten) but frankly it is quite awful and in my opinion not really practical for a real game, especially not for a small one being developed by only a few people, and especially especially not for a VR game. [QUOTE=Karmah;51977261]In my engine I use dual paraboloids for real time point light shadows. The trade off in quality can be mitigated by shadow filtering and using a bigger map The only problem is a seam around the border where the 2 maps meet. I've been thinking of using this approach instead of cubemaps for local environment probes, but obviously updating at a much reduced frequency, say like X seconds or frames or whatever. [/QUOTE] This is how I assumed point light shadows were rendered at first (I think that's how Unity does it by default but not sure), but I'm going to take a guess that the people who made The Lab Renderer just didn't think of it as a priority at the time. The comment above the current point light shadow implementation reads: [code] // Point lights are just 6 fake spotlights for now [/code]
I've been working on improving performance for my game as well as improving the AI a bit. I managed to fix up part of the animated sprite system, which improved performance a lot and allows the game to handle almost twice as many enemies on screen. That helps make the Mayhem and Gatling Defence mini-games more fun and challenging. Also, tweaking the AI to make the zombies not walk up/down towards their target until they're a certain distance away helped spread them out more in these modes. [video=youtube;lBDXOX1zGBc]http://www.youtube.com/watch?v=lBDXOX1zGBc[/video] The zombies also now look at and point their arms in the direction of their target. Before, it didn't look very good when a horde of zombies instantly turned around when you ran/dodged past them (since they all had their arms pointing straight forward when they flipped). I also think hordes of zombies look cooler now that you can see all the arms reaching out for you.
I wrote a magic 8-ball bot for Telegram that uses sentiment analysis to determine whether the question is positive or negative, and responds in the opposite manner. So if you ask "am I an asshole?", it'll pick up on the negativity of the sentence because of "asshole" and reply positively with "absolutely" or something similar. If you ask "Am I a good person?", it'll likewise pick up on the positivity of the sentence and reply "fuck no". Of course I've added a few exceptions: if the sentence contains the word "you" (so when the question is about the bot), it flips again and replies positively to positive messages and negatively to negative messages, to feed its own ego. I may have added similar exceptions for questions about myself. I call it the magic h8-ball.
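The flip logic is easy to sketch with a toy lexicon. This is not the actual bot; the word lists, reply strings and function names are stand-ins, and a real version would use a proper sentiment library rather than set intersection:

```python
import random

POSITIVE = {"good", "great", "awesome", "nice"}
NEGATIVE = {"asshole", "bad", "terrible", "awful"}

YES = ["absolutely", "without a doubt", "yes"]
NO = ["fuck no", "definitely not", "no"]

def h8ball(question, rng=random):
    words = set(question.lower().strip("?!.").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    # Questions about the bot itself get the flip flipped, to feed its ego.
    if "you" in words:
        score = -score
    # Answer in the opposite manner of the question's sentiment.
    return rng.choice(NO) if score > 0 else rng.choice(YES)

print(h8ball("Am I a good person?") in NO)   # positive question → negative answer
print(h8ball("Am I an asshole?") in YES)     # negative question → positive answer
```

Neutral questions fall through to a positive answer here; the real bot presumably has a third, noncommittal bucket for those.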
[QUOTE=Torrunt;51977691]I've been working on improving performance for my game as well as improving the AI a bit. [...][/QUOTE] Watched the video before reading what you changed (I'm not familiar with the game), and the zombie turning indeed seems pretty natural. However, I think you could improve on that by adding a random delay to the turn rate of the zombies; that way it could look even more natural. You could use a simple low-pass filter with a time constant randomly chosen from a Gaussian distribution.
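That suggestion amounts to a first-order low-pass filter on each zombie's heading, with the time constant drawn per zombie from a Gaussian. A minimal sketch; all the constants here are made up for illustration:

```python
import math
import random

def make_turner(rng, mean_tau=0.25, jitter=0.1):
    """Per-zombie heading smoother with a randomised time constant.

    A first-order low pass: each frame the heading moves toward the target
    by a fraction 1 - exp(-dt / tau), so every zombie turns at a slightly
    different, natural-looking rate instead of snapping in unison.
    """
    tau = max(0.05, rng.gauss(mean_tau, jitter))  # clamp away degenerate taus
    state = {"heading": 0.0}

    def update(target_heading, dt):
        alpha = 1.0 - math.exp(-dt / tau)
        state["heading"] += (target_heading - state["heading"]) * alpha
        return state["heading"]

    return update

rng = random.Random(42)
turn = make_turner(rng)
heading = 0.0
for _ in range(60):                  # one second at 60 fps
    heading = turn(math.pi, dt=1 / 60)
print(round(heading, 2))             # most of the way toward pi, never past it
```

For angles that wrap around, the delta should be taken through the shortest arc before applying alpha, but the per-zombie tau is what produces the staggered, organic turning.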
Trying my hand at a [URL="https://github.com/TheBerkin/Rant/issues/92"]typo generator[/URL]. Currently two modes are implemented: transposition and mistyping based on the keyboard layout. Example string: [CODE]the quick brown fox jumps over the lazy dog 1234567890 -+ [] \ ; ' , . /[/CODE] Transposing letters: [CODE]teh qucik borwn fxo jupms ovre hte layz dgo 1234568790 +- ][ \ ; ' , . / hte qucik rbown ofx jmups ovre hte layz odg 1235467890 +- ][ \ ; ' , . / hte uqick borwn fxo jmups ovre teh layz dgo 1243567890 +- ][ \ ; ' , . / teh qucik borwn ofx jupms oevr hte alzy dgo 1234657890 +- ][ \ ; ' , . / teh qucik borwn fxo jmups oevr teh alzy odg 1243567890 +- ][ \ ; ' , . / hte qiuck borwn fxo ujmps oevr hte alzy dgo 1234567980 +- ][ \ ; ' , . / hte uqick rbown ofx jumsp voer hte layz odg 1234567809 +- ][ \ ; ' , . / teh quikc brwon ofx jumsp oevr hte lzay odg 1234567809 +- ][ \ ; ' , . / teh quikc rbown ofx ujmps ovre teh alzy odg 1234568790 +- ][ \ ; ' , . / hte qucik borwn fxo ujmps oevr hte alzy odg 1234576890 +- ][ \ ; ' , . / [/CODE] Mistyping Keys: [CODE]the quick b3own fox jumps ove4 5he lasy d9g 1234567890 -+ [] \ ; ' , . / the quick brown rox jumps pver the lqzy dog 12345u7890 -+ [] \ ; ' , . / the quick brown fox jumps over the laay dog 1234567890 -+ [] \ ; ' , . / the quick brown fox jumpc obe5 fhe lazy dov 1134567890 -+ [] \ ; ' , . / ty4 qyick br;wn fox kumps ofer the .wzy dog 123e567890 -+ [] \ ; ' , . / the 2uick bdiwn fox jujps ov45 the lazy dog 223456u8o- -+ [] \ ; ' , . / the quicm browb fox jumps over thf lazy dog 1234567890 -+ [] \ ; ' , . / th3 quick browj fox jumpa ;ver the lazy doy 12e45y7990 -+ [] \ ; ' , . / the quick vr;wn fpx jumps over the lazy dog 1234567890 -+ [] \ ; ' , . / the 1uick brown f0z jumps over the lazy dog 1234567890 -+ [] \ ; ' , . / [/CODE] This one probably needs some adjustments but it looks pretty good so far.
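Both modes shown above reduce to a couple of small string transforms. A sketch with a deliberately tiny, hypothetical neighbour table (the real generator presumably maps the full keyboard layout, including the number row and punctuation seen in its output):

```python
import random

# Hypothetical QWERTY neighbour table; a real one would cover the full layout.
NEIGHBOURS = {
    "q": "wa", "w": "qes", "e": "wrd", "r": "etf", "t": "ryg",
    "a": "qsz", "s": "awedxz", "d": "serfcx", "f": "drtgvc",
    "o": "ip", "x": "zsdc",
}

def transpose(text, rng, rate=0.1):
    """Swap adjacent characters at random positions (the 'teh' class of typo)."""
    chars = list(text)
    for i in range(len(chars) - 1):
        if rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def mistype(text, rng, rate=0.1):
    """Replace characters with a random keyboard neighbour (fat-finger typos)."""
    out = []
    for c in text:
        if c in NEIGHBOURS and rng.random() < rate:
            out.append(rng.choice(NEIGHBOURS[c]))
        else:
            out.append(c)
    return "".join(out)

rng = random.Random(7)
print(transpose("the quick brown fox", rng))
print(mistype("the quick brown fox", rng))
```

Transposition preserves the multiset of characters while mistyping preserves the length, which is a handy invariant for testing either mode.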