• SrcDemo² - Render smoother Source engine movies, faster.
[QUOTE=Dog;32535528]It keeps producing jumpy physics and NPCs. (On both Gmod and Ep2)[/QUOTE] There are some commands that change your clientside tickrate (in a sense); I'd start there.
Released a new build today. You can download it from [url]http://code.google.com/p/srcdemo2/downloads/list[/url] as usual. Changes include:
- Bundled an extra .dll file that should allow Dokan to be detected properly on 32-bit systems (@ory25, you probably want this).
- Use of a single PNG saving thread (PNG saving tasks are still done in the background and queued up, but now there's just one thread handling them and being reused, rather than a lot of threads being recreated, so it's faster).
- Added a new debugging command-line switch: --dokan-debug, to debug the Dokan layer of things (warning: very verbose).
- Added a new not-so-much-debugging command-line switch: --srcdemo-hide-files, which makes the mountpoint display a completely empty directory. This has the advantage of not being memory-hungry (someone on the Steam forums had problems with that), but has the disadvantage that you can't see your files in the mountpoint (though they're still in the output folder, of course). It makes things slightly faster, but not significantly so.
- Fixed the "Quit" button saying "Deactivate" on the Dokan error message dialog.
Oh man, I like this idea a lot. Thanks! I'll play around with it in TF2 later on. :)
I compiled most of the videos I have made since this came out; it's one of the most useful tools I have used for making Source movies, tbh. [media]http://www.youtube.com/watch?v=IhCwCd_spsk[/media] [url=http://redcityreloaded.com/jwplayer/source_demo2_videos.html](Or be a man and watch the HD version)[/url]
[QUOTE=glitchvid;32534491]It would not look proper anyway, as I have stated before, L4D and L4D2 use optimized netcode that, when played back at anything slower than 120 FPS, will stutter, the higher the FPS playback, the worse the stutter gets.[/QUOTE] uh I've never seen this, what are you talking about
[QUOTE=Odellus;32579042]uh I've never seen this, what are you talking about[/QUOTE] [media]http://www.youtube.com/watch?v=x17KRjzyAYA[/media] And specifically this one V [media]http://www.youtube.com/watch?v=32beaZeNLlQ[/media]
60 looks great
[QUOTE=Odellus;32584694]60 looks great[/QUOTE] I doubt the SrcDemo2 recorder (this program) supports 60, because of the method it uses. It basically takes a lot of screenshots at a high FPS and blends them down; it needs a high FPS to work properly, and I doubt 60 is a large enough sample pool.
[QUOTE=glitchvid;32585963]I doubt the SrcDemo2 recorder (this program) supports 60, because of the method it uses. It basically takes a lot of screenshots at a high FPS and blends them down; it needs a high FPS to work properly, and I doubt 60 is a large enough sample pool.[/QUOTE]Indeed it isn't, and if you try to do a recording at 60fps (30fps / blendrate 2) with a 180 degree shutter angle, that would be exactly the equivalent of a regular 30fps render.
[QUOTE=glitchvid;32585963]I doubt the SrcDemo2 recorder (this program) supports 60, because of the method it uses. It basically takes a lot of screenshots at a high FPS and blends them down; it needs a high FPS to work properly, and I doubt 60 is a large enough sample pool.[/QUOTE] It supports any framerate; it'll just look bad if the blending rate isn't high enough.
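To make the "take a lot of screenshots and blend them down" idea concrete, here is a minimal Python sketch of the blending step. The function, the file names, and the use of numpy/Pillow are purely illustrative assumptions on my part, not SrcDemo²'s actual code; it simply averages a group of consecutive captured frames into one output frame.
[code]
# Minimal sketch of the blending step: average N consecutive captured frames
# into one output frame. Purely illustrative; not SrcDemo2's actual code.
import numpy as np
from PIL import Image

def blend_frames(paths):
    """Average the equally-sized frames at `paths` into a single image."""
    acc = None
    for p in paths:
        frame = np.asarray(Image.open(p), dtype=np.float64)
        acc = frame if acc is None else acc + frame
    return Image.fromarray(np.uint8(np.round(acc / len(paths))))

# Example (hypothetical file names): 30 fps output with a blend rate of 32
# means the game renders at 960 fps and every 32 consecutive captures become
# one output frame:
# blend_frames([f"capture_{i:04d}.png" for i in range(32)]).save("frame_0000.png")
[/code]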
[b]EDIT: This has been moved to [url]http://code.google.com/p/srcdemo2/wiki/ShutterAngle[/url].[/b]

:eng101: I think I should take a moment and explain what the shutter angle really is, as it seems it is not a clear notion to some people. To kick things off and give people a reason to read this wall of text, here's a striking example of how important getting the right shutter angle is. This is the same scene, recorded at multiple shutter angles:

[img]http://upload.wikimedia.org/wikipedia/commons/thumb/b/b2/Windflower-05237-nevit.JPG/800px-Windflower-05237-nevit.JPG[/img]

Interested? Good, let's begin. This is a camera with a 180° shutter angle:

[img]http://upload.wikimedia.org/wikipedia/commons/a/ad/Moviecam_schematic_animation.gif[/img]

As you can see, each physical frame of the film is only exposed 50% of the time. For a 30 frames per second video, a new physical frame is made every 1/30th of a second. However, since it is only exposed half of the time, only 1/60th of a second's worth of exposure has gone onto the frame. The other 1/60th of a second is simply gone, not recorded. As you can imagine, a higher shutter angle means a longer exposure time. For example, with a 270° shutter angle, the frame would get 3/120th of a second's worth of exposure, and 1/120th of a second would be gone. With a 360° shutter angle, the frame gets the full 1/30th of a second's worth of exposure, and nothing is lost.

Now you may wonder: "I don't like data loss! Why not always use 360°? And why is 180° the default?" The reason for that is the human brain. Unlike with computers, where "more data == better" applies, the brain doesn't see animation that way. The eye sees a series of still images, from which the brain intuitively and subconsciously computes the motion between frames. This final "moving" scene is then what you see. The perception of motion is only due to your brain doing some pretty intensive work (that computers take ages to do!) just for you. The brain is very good at this; even from a low-framerate video, you can usually tell what is moving where, and how fast it is moving, without much conscious effort.

However, it is true that if you artificially feed the brain more frames than it needs in order to perceive motion, the overall motion will appear smoother. Thus, a higher shutter angle means more frames blended in, which means smoother motion, right? No. Despite the fact that feeding more frames helps, the brain still knows that images arrive at a constant rate, thus that [b]there is a delay between each frame[/b]. It is used to that delay. If you use a 360° shutter angle, [b]the perceived delay between frame n-1 and frame n is much smaller than what the brain expects[/b], because the last blended frame in frame n-1 is much closer temporally to the first blended frame in frame n. As a result, the brain will get confused and will make you feel like things are moving too fast, making you a bit disoriented. This is why too high a shutter angle is a bad thing.

Wanna see how much of a difference it makes? Here's an example, though it is not the clearest one:

[video=vimeo;5249682]http://vimeo.com/5249682[/video]

As you can see, the longer the exposure time (higher shutter angle), the smoother the water movement looks, but when you go too far (1/30, which is a 360° shutter angle), it looks blurry and kinda crappy. Another good way to see this is simply to look at existing videos made by the TF2 replay renderer! :rolleyes: It uses a 360° shutter angle (which means it doesn't care about shutter angle at all; it just records and blends all frames). Here is a sample one (not my replay, not trying to get anyone a Frontline Field Recorder, just thought this replay demonstrates nicely how sickening the motion blur on TF2 replays is):

[video=youtube;cQUNhV6UIGI]http://www.youtube.com/watch?v=cQUNhV6UIGI[/video]

Watch in fullscreen to get a better feel. If you don't feel it, try looking at the sides/corners of the screen especially. If you still don't feel it, you've probably watched too many replays by now and are immune to the effect or something D:

As you can see, while the video does look very, very smooth, it looks smooth to the point of being sickeningly blurry. And that, my friends, is why you will [url=http://tylerginter.com/post/11480534977/180-degree-shutter-learn-it-live-it-love-it]Learn, Live, and Love the 180° shutter angle[/url]. It is still not a completely natural effect, but the inter-frame time is much closer to what the brain expects, and the motion blur quality is optimal. It is what is used on film too, and even though today's cameras obviously don't use rotary shutters anymore, they still simulate a shutter angle, because that's simply what people are used to and find comfortable. This is also why, for once, loss of data (in this case, half of the frames) is a [b]good thing[/b].

I hope this was clear :eng101: It is not a very easy-to-explain concept, but it does affect the output of SrcDemo² a lot, so I thought I'd define it more clearly.
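To put numbers on the exposure arithmetic above, here is a tiny Python sketch (plain arithmetic written for illustration only, not taken from SrcDemo²'s code) that reproduces the 180°/270°/360° figures from the post:
[code]
# Shutter angle -> exposure per 30 fps output frame, using the numbers above.
# Plain arithmetic for illustration only; not taken from SrcDemo2's code.
def exposure_time(fps, shutter_angle):
    """Seconds of game time actually exposed in each 1/fps output frame."""
    return (1.0 / fps) * (shutter_angle / 360.0)

for angle in (180, 270, 360):
    exposed = exposure_time(30, angle)
    skipped = (1.0 / 30.0) - exposed
    print(f"{angle} deg: exposed {exposed:.5f} s, skipped {skipped:.5f} s")

# 180 deg: exposed 0.01667 s (1/60),  skipped 0.01667 s (1/60)
# 270 deg: exposed 0.02500 s (3/120), skipped 0.00833 s (1/120)
# 360 deg: exposed 0.03333 s (1/30),  skipped 0.00000 s
[/code]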
Also, wouldn't it follow that the lower your shutter angle, the more frame samples you need, or you will start seeing ghost frames because it no longer blurs them?
[QUOTE=glitchvid;32619070]Also, wouldn't it follow that the lower your shutter angle, the more frame samples you need, or you will start seeing ghost frames because it no longer blurs them?[/QUOTE]No. Here's an example: say you have an output FPS of 30 and a blendrate of 50 (effective FPS 1500).
- With a 360 degree angle, each output frame would be sampled from 50 game frames, spread over 1/30th of a second. -> Each output frame represents 1/30th of a second's worth of movement with 50 blends, so each game frame has "lasted" (1/30)/50 = 1/1500th of a second (which is normal; that's the framerate we told the game to render at).
- With a 180 degree angle, each output frame would be sampled from 25 consecutive game frames out of the 50 frames the game provides, and the 25-frame sample would be spread over 1/60th of a second. -> Each output frame represents 1/60th of a second's worth of movement with 25 blends, so each game frame has "lasted" (1/60)/25 = 1/1500th of a second (again, the framerate we told the game to render at).
- With a 90 degree angle, each output frame would be sampled from 13 consecutive game frames out of the 50 frames the game provides, and the 13-frame sample would be spread over 1/120th of a second. -> Each output frame represents 1/120th of a second's worth of movement with 13 blends, so each game frame has "lasted" (1/120)/13 = 1/1560th of a second (not exactly 1/1500th, but pretty close; the small error is due to the fact that 50 is not divisible by 4).
As you can see, in all cases, each game frame effectively lasts the same time, so there is the same amount of movement between every game frame, no matter the shutter angle. So no, the shutter angle does not need to be adjusted to avoid "ghost frames". But that's a very good question still!
To be clear: the sample of frames is always [b]consecutive[/b] (otherwise it'd ruin the point; you might as well just render at half the framerate). So basically, for 30fps/180° angle, it records 1/60th of a second, saves that to frame 0, then ignores the next 1/60th of a second, then records the next 1/60th of a second, saves that to frame 1, ignores the next 1/60th of a second, records the next 1/60th of a second, saves that as frame 2, etc.
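Here is a small Python sketch of that sampling logic, for anyone who prefers code to prose. The round-half-up rule is an assumption on my part to match the "13 frames at 90 degrees" figure above; this is illustrative arithmetic, not SrcDemo²'s actual implementation.
[code]
# Which captured game frames end up in each output frame, following the
# explanation above. Illustrative only; the round-half-up rule is an assumption
# made to match the "13 frames at 90 degrees" figure, not SrcDemo2's code.
def frames_per_output(blend_rate, shutter_angle):
    """Number of consecutive game frames blended into one output frame."""
    return max(1, int(blend_rate * shutter_angle / 360.0 + 0.5))

def sampled_indices(output_frame, blend_rate, shutter_angle):
    """Global indices of the game frames blended into the given output frame."""
    start = output_frame * blend_rate           # each output frame "owns" blend_rate game frames
    used = frames_per_output(blend_rate, shutter_angle)
    return list(range(start, start + used))     # the first `used` are blended, the rest are ignored

# 30 fps output, blendrate 50 (game rendered at 1500 fps):
print(sampled_indices(0, 50, 360))   # frames 0..49  (all 50 are blended)
print(sampled_indices(0, 50, 180))   # frames 0..24  (25 blended, 25 ignored)
print(sampled_indices(1, 50, 90))    # frames 50..62 (13 blended, 37 ignored)
[/code]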
Thank you for fixing the problem I was having; it works fine now. It's very, very good :)
Thanks ory25~ I have [url=http://forums.steampowered.com/forums/showpost.php?p=26099385&postcount=75]uploaded an unfinished build[/url] for those who are running into the out-of-memory issues (which seems to be no one at all in this thread, but pretty much everyone on the Steam forums. Coincidence...?). I'm also posting it here because I'd love to get feedback on it, as I'll be releasing that version (except finished, of course) soon. On an unrelated note, I would appreciate it a lot if all videos made using SrcDemo2 and uploaded to YouTube were tagged with "srcdemo2". It just gives me an idea of how popular this utility is, and it gives people a cool way to find awesome-looking videos.
I ran the last demo in debug and I have the log it saved. TF2 stopped the movie with the error: Couldn't write movie snapshot to file rawfs/h3allowhoene2_159332.tga. Stopped recording movie... [url=http://s3.amazonaws.com/stocno.glitchvid.com/logs/srcdem2/runlog_tf2.7z]Here is the log.[/url] (It's a .txt compressed with .7z; the text file was ~18 MB)
[QUOTE=glitchvid;33042757]I ran the last demo in debug and I have the log it saved. TF2 stopped the movie with the error: Couldn't write movie snapshot to file rawfs/h3allowhoene2_159332.tga. Stopped recording movie... [url=http://s3.amazonaws.com/stocno.glitchvid.com/logs/srcdem2/runlog_tf2.7z]Here is the log.[/url] (It's a .txt compressed with .7z; the text file was ~18 MB)[/QUOTE]The log looks cut off; it stops in the middle of a line... But that looks like one of those out-of-memory errors that people in the Steam thread are complaining about. Can you try the same with --srcdemo-hide-files?
I don't know if anyone's been following [url=http://code.google.com/p/srcdemo2/source/list]the changes[/url], but the next version will be a big one. :3 Stay tuned.
[QUOTE=WindPower;33268700]I don't know if anyone's been following [url=http://code.google.com/p/srcdemo2/source/list]the changes[/url], but the next version will be a big one. :3 Stay tuned.[/QUOTE] Make a Twitter account so we can follow you :)
Nah, to be honest I'm not very comfortable with all those social networking thingies. (I know I have a Twitter link right there in my Facepunch profile, but I don't really use it regularly). However, you can [url=http://code.google.com/feeds/p/srcdemo2/downloads/basic]watch the "Downloads" RSS feed[/url] to get notifications of new releases, or [url=http://code.google.com/feeds/p/srcdemo2/svnchanges/basic]watch the SVN changelog RSS feed[/url] to get notified of all changes. I guess that can be hooked into Twitter and all that stuff if necessary (read: if there is demand). Anyway, new release incoming within the next few minutes.
[QUOTE=WindPower;33268700]I don't know if anyone's been following [url=http://code.google.com/p/srcdemo2/source/list]the changes[/url], but the next version will be a big one. :3 Stay tuned.[/QUOTE] I'm excited!!
Released a new build. You can download it from [url]http://code.google.com/p/srcdemo2/downloads/list[/url] as usual. Changes include:
- The [b]--srcdemo-hide-files[/b] argument no longer has any effect, as this is now the default behavior.
- New user interface, with separate tabs for audio, video, rendering, and other misc. stuff.
- Support for more video formats: [b]JPEG[/b] (with adjustable quality), [b]TGA[/b] (with optional RLE compression).
- Added a "[b]video disabled[/b]" mode (when you just want the audio).
- Support for more audio formats: [b]Ogg Vorbis[/b] (with adjustable quality), [b]FLAC[/b].
- Added an "[b]audio disabled[/b]" mode (when you just want the video frames).
- Added support for [b]audio buffering[/b] (condenses the sound file I/O into a few big chunks rather than lots of small chunks).
- Added a [b]rendering tab[/b], featuring more statistics about the video being rendered, a live preview of the last saved frame (no need to alt-tab to the game to see the progress anymore), and audio buffer status/control (shows buffer size and capacity, and lets you manually flush it if you feel like it).
- Added update checking.
- Added (optional, disabled by default) automatic update checking on application startup.
The screenshot on the first page has been updated, but here it is as well if you don't feel like clicking away: [img]http://srcdemo2.googlecode.com/svn/trunk/img/screenshot.png[/img]
This release has not been tested as extensively as the previous ones (read: only tested with Portal/Portal 2). I encourage you to try it out, and remember that you can always install the older one if something doesn't work right anymore in this version. The source code is available [url=http://code.google.com/p/srcdemo2/source/browse/?r=74#svn%2Ftrunk%2Fsrc%2Fnet%2Fsrcdemo]here[/url].
I'm getting an error, Couldn't write movie snapshot to file scrcdemo/test0000.tga. WaveFixupTmpFile( 'scrcdemo/test.WAV' ) failed to open file for editing Any idea? I named it test, if that's any help [editline]10th December 2011[/editline] Shoot never mind. Is there any way this will work with L4D2?
[QUOTE=cardboardtheory;33658227]I'm getting an error, Couldn't write movie snapshot to file scrcdemo/test0000.tga. WaveFixupTmpFile( 'scrcdemo/test.WAV' ) failed to open file for editing Any idea? I named it test, if that's any help [editline]10th December 2011[/editline] Shoot never mind. Is there any way this will work with L4D2?[/QUOTE] Read the previous page and watch its [url=http://www.youtube.com/watch?v=32beaZeNLlQ]video[/url]~ Although in your case it simply sounds like you just misspelled "srcdemo". Make sure you type whatever name you actually gave the folder you created.
If you're going to add AVI support in the next update (complete with OpenDML so it doesn't corrupt above 2GB), I would recommend using this: [url]http://umezawa.dyndns.info/archive/utvideo/[/url]
[QUOTE=Max of S2D;33661676]If you're going to add AVI support in the next update (complete with OpenDML so it doesn't corrupt above 2GB), I would recommend using this: [url]http://umezawa.dyndns.info/archive/utvideo/[/url][/QUOTE]My first priority is to fix the BSOD bug when trying to mount/unmount a directory whose path contains an NTFS symlink.

Then, I've received an email asking for a Mac OS X version using MacFUSE or Fuse4x, which should be relatively easy as all other libraries (all but Dokan) are cross-platform, and the API is the same as FUSE on Linux, which I am used to. It would also automatically be compatible with FUSE on Linux, should Valve release the Source engine there or should someone want to render stuff in Wine (which can be useful, since you can create virtual X buffers, so you could do fast offscreen rendering on Linux).

But then yes, direct video support would be nice. For lossless codecs, Ut Video is good, but there's also H.264. The x264 encoder has a lossless mode which provides the best compression of all lossless codecs. It's not standard, but it's still useful and it's accepted by YouTube, so it is the best option to upload lossless video to YouTube right now. H.264 would also probably be the most popular option for lossy encoding as well, so all in all it is well worth implementing. Other than that, there's HuffYUV/Lagarith/FFV1, whichever is easier to implement... or perhaps all of them, or perhaps neither.

And then, probably a way to automatically put those raw video streams together with the audio stream in a container format. The Matroska container format (.mkv) is probably the best choice as it is the most flexible as to which codecs it can contain, but .mp4 and .avi containers should probably also be supported because of their popularity.

Anyway, this is all long-term planning and I wouldn't want to make any commitments, so consider the content of this message as a tease rather than a promise.
Couldn't write movie snapshot to file srcdemo/test0000.tga. WaveFixupTmpFile( 'srcdemo/test.WAV' ) failed to open file for editing Same error, no reason. I'm sure I spelled everything right. I put in hostframerate_90. I activated it right after. Then I submitted startmovie srcdemo/test in the console. All I want is proof that it works, no matter how bad it plays. So I'm probably doing something wrong. Do you have any idea what?
[QUOTE=cardboardtheory;33666698]Couldn't write movie snapshot to file srcdemo/test0000.tga. WaveFixupTmpFile( 'srcdemo/test.WAV' ) failed to open file for editing Same error, no reason. I'm sure I spelled everything right. I put in hostframerate_90. I activated it right after. Then I submitted startmovie srcdemo/test in the console. All I want is proof that it works, no matter how bad it plays. So I'm probably doing something wrong. Do you have any idea what?[/QUOTE] You probably mean "host_framerate 90", not "hostframerate_90". That isn't what is causing this error, though. Could you document the exact procedure you are going through, each click and each command, in SrcDemo, in Windows Explorer, and in the Source Engine game? Could you also provide the log file generated by SrcDemo when you start it in debug mode? Any extra detail would help :3
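For reference, the usual console sequence looks roughly like this (assuming the virtual folder is mounted as "srcdemo" inside the game directory and "test" is used as the movie name; adjust the framerate to your output FPS multiplied by your blend rate):
[code]
// In the game console (cfg-style comments for readability)
host_framerate 90
// render at a fixed 90 fps, e.g. 30 fps output x blend rate 3
startmovie srcdemo/test
// dump frames into the mounted SrcDemo2 folder, with prefix "test"
// ...play back your demo...
endmovie
// stop writing frames
host_framerate 0
// back to real-time rendering (0 = not fixed)
[/code]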
I'm not sure, but I believe lossless H.264 is not really widely supported by editing applications, and performance with it isn't too good
[QUOTE=Max of S2D;33668526]I'm not sure, but I believe lossless H.264 is not really widely supported by editing applications, and performance with it isn't too good[/QUOTE] Indeed it isn't; it is not part of the H.264 spec (so most video editing applications don't support it), and it definitely isn't meant for editing. It works like H.264 and most lossy codecs do, in that it has a bunch of sparse I-frames (full picture data of that frame) followed by tiny P-frames and B-frames (packets that only contain the difference in pixel data compared to the previous frame). This means that when asking the video editor to jump to a specific frame, the video editor has to read backwards into the file to find the nearest previous I-frame, then go frame by frame to apply all the deltas contained in all of the subsequent P-/B-frames, to finally arrive at the requested frame. This is a long process, and it is made especially heavy here when dealing with lossless, usually high-resolution pictures, so it is very slow. However, it is the most compressed lossless format, making it the best choice for minimizing disk space and for upload (minimizing bandwidth), so it is still worth implementing. Of course, the other lossless compressed formats with easy seeking should be implemented too~
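To illustrate why that seeking pattern is expensive, here is a toy Python sketch of the access pattern a decoder conceptually follows. The frame representation (keyframes storing a value, delta frames storing a difference) is made up purely for illustration; real decoders are vastly more complex.
[code]
# Conceptual sketch of seeking in a keyframe + delta stream. Toy data only.
def seek(frames, target):
    """Reconstruct frame number `target` from a list of (is_keyframe, value) pairs."""
    # 1. Walk backwards to the nearest preceding keyframe (the "I-frame").
    start = target
    while start > 0 and not frames[start][0]:
        start -= 1
    # 2. Decode it, then apply every delta ("P-/B-frame") up to the target.
    value = frames[start][1]
    for i in range(start + 1, target + 1):
        value += frames[i][1]            # each delta only encodes the change
    return value

# Keyframe every 4 frames; jumping to frame 6 forces decoding frames 4, 5 and 6.
stream = [(True, 10), (False, 1), (False, 2), (False, -1),
          (True, 15), (False, 3), (False, 1), (False, 2)]
print(seek(stream, 6))   # 15 + 3 + 1 = 19
[/code]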