• Question about speed and movement in a 2d game
    23 replies
I'm working on a scrolling 2d space shooter with Pygame, which is just a Python wrapper for SDL. The way I have movement set up right now is to move x number of pixels every frame. I'm assuming that if you run it on a faster or slower computer the game will handle much differently. The faster computer will have a higher FPS, and objects will move faster, and a slower computer will have fewer frames so things would move slower. Is this a legitimate concern? If so, is there a way to account for this in SDL/Pygame? Some code: [code]
import sys, pygame

pygame.init()
size = width, height = 800, 600
move_right = [1, 0]
move_left = [-1, 0]
black = 0, 0, 0

screen = pygame.display.set_mode(size)
pygame.key.set_repeat()

pship = pygame.image.load("res/player_ship.png").convert()
pshiprect = pship.get_rect()
# moves ship to bottom of the screen
pshiprect.move_ip(width/2, height - 25)

while 1:
    pygame.event.pump()
    key = pygame.key.get_pressed()
    if key[pygame.K_RIGHT] and pshiprect.right < width:
        pshiprect = pshiprect.move(move_right)  # [1, 0]
    if key[pygame.K_LEFT] and pshiprect.left > 0:
        pshiprect = pshiprect.move(move_left)  # [-1, 0]
    screen.fill(black)
    screen.blit(pship, pshiprect)
    pygame.display.flip()
[/code] I cleaned this up a bit and took out unnecessary stuff to make it easier to read, because it's a mess right now. Where it says pshiprect = pshiprect.move(move_right), it moves one pixel to the right for that frame, I believe. If you keep holding the right arrow, it keeps moving to the right at a rate of one pixel per frame. Correct me if I'm wrong.
The general solution is to multiply the movement size (1px in your case) by the frame time (the time it took the last frame to render).
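A minimal sketch of this rule in the thread's own language; the names speed, dt, and x are illustrative, not from the original code. It shows why multiplying by frame time makes the distance covered independent of frame rate:

```python
# Z_guy's rule: distance per frame = speed (px/sec) * frame time (sec).
# 'speed', 'dt', and 'x' are illustrative names, not from the OP's code.

speed = 375.0  # pixels per SECOND, not per frame

def step(x, dt):
    """Advance position x by one frame of duration dt seconds."""
    return x + speed * dt

# A fast machine (4 ms frames) and a slow one (10 ms frames) cover the
# same distance over the same second of wall-clock time:
x_fast = 0.0
for _ in range(250):        # 250 frames * 4 ms = 1 second
    x_fast = step(x_fast, 0.004)

x_slow = 0.0
for _ in range(100):        # 100 frames * 10 ms = 1 second
    x_slow = step(x_slow, 0.010)

print(x_fast, x_slow)       # both roughly 375.0 pixels
```

Without the dt factor, the fast machine would have moved 250 px in the same time the slow one moved 100 px, which is exactly the problem the OP describes.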
[QUOTE=Z_guy;16336371]The general solution is to multiply the movement size (1px in your case) by the frame time (the time it took the last frame to render).[/QUOTE] Something like that had crossed my mind, but I was thinking more like using the FPS, which is sort of the inverse of the render time. :v: I'll have to see if Pygame has a way to get the render time. I'm sure it can do it, I've seen tutorials talk about how long it takes to render frames.
[QUOTE=PvtCupcakes;16336494]Something like that had crossed my mind, but I was thinking more like using the FPS, which is sort of like the render time. :v: I'll have to see if I can find a way to find the render time. I'm sure Pygame can do it, I've seen tutorials talk about how long it takes to render frames.[/QUOTE] Store the current time in a variable, wait one frame, compare the time.
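A sketch of this store-wait-diff approach using Python's time.perf_counter(), a high-resolution monotonic clock available since Python 3.3; the sleep call is just a stand-in for a frame's worth of work:

```python
import time

# ROBO_DONUT's method: store the time, run a frame, diff the two readings.
# time.perf_counter() is a high-resolution monotonic clock, so it sidesteps
# the 1 ms floor that comes up later in the thread.

previous = time.perf_counter()
frame_times = []
for frame in range(5):
    time.sleep(0.004)                  # stand-in for a ~4 ms frame of work
    now = time.perf_counter()
    frame_times.append(now - previous) # last frame's duration, in seconds
    previous = now

# each entry is one frame's duration; multiply movement by it each frame
print(frame_times)
```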
[QUOTE=ROBO_DONUT;16336571]Store the current time in a variable, wait one frame, compare the time.[/QUOTE] That was so obvious. :bang: Thanks though. :v:
[QUOTE=ROBO_DONUT;16336571]Store the current time in a variable, wait one frame, compare the time.[/QUOTE] Oh, if only it were that simple. Most timers have a resolution of only 1 ms, which is like an eternity in computer time. Just think if your program were running on a computer fast enough to get more than 1000 fps. That isn't even that fantastic; hell, glxgears runs at 15000 fps on my machine. Using your method of subtracting the time, anything over 1000 fps would read as if no time at all had passed, and all of your movement stuff would just halt because you are multiplying the velocities by 0. There are ways to get the number of nanoseconds using a high-resolution timer (using CPU time, that is, reading the clock count register and multiplying by the clock period), but these methods aren't nearly as accurate as the RT clock. (For reference, use the clock() function in time.h for a high-resolution timer)
[QUOTE=Cathbadh;16342057]Oh, if only it were that simple. Most timers have a resolution of only 1 ms, which is like an eternity in computer time. Just think if your program were running on a computer fast enough to get more than 1000 fps. That isn't even that fantastic. Hell, glxgears runs at 15000 fps on my machine. Using your method of subtracting the time, anything over 1000 fps would react as if no time at all had passed, and all of your movement stuff would just halt because you are multiplying the velocities by 0.[/QUOTE] Good thing my rendering time on my PC is ~40 milliseconds. :v: I think it'd be much faster if I did SDL in C, even though most of Pygame runs at C speed through SDL. [editline]12:10AM[/editline] Also, I think it'd be pretty easy to check if the time is 0 and then set it to some arbitrarily low number. But clock() works too.
Unfortunately, timing a single frame (or a fixed multiple of frames) like this is just hacky and not future-proof. Back in the olden days of computer games, before RT clocks, they would control the speed of the game by putting the CPU in a tight loop that just decremented a number until it reached zero. Of course, faster machines would blaze right through those loops, so they implemented the even hackier solution of adding a 'turbo' button to the computer. The only way I can think of to get consistently accurate frame times across all machines, now and forever, is to count the number of frames within a set period of time using an alarm interrupt or something. But then you run into the opposite problem of having a machine too [I]slow[/I], and you end up rendering 0 frames in the 100 ms interval or whatever. At least the program will crash with a divide-by-zero when it tries to compute the frame period.
This might sound really stupid, but is it possible to limit the fps to just below 1000, the limit of a millisecond timer, and use robodonut's method? Nobody will really need more than 1000 fps in a 2d real-time game.
[QUOTE=Cathbadh;16344232]Unfortunately, this timing a single (or fixed multiple) of frames is just hacky and not future-proof. Like in the olden days of computer games before RT clocks, they would control the speed of the game by putting the CPU in a tight loop that just decremented a number until it reached zero. Of course, faster machines would blaze right through those loops, so they implemented the even hackier solution of adding a 'turbo' button to the computer.[/QUOTE] :psyduck: Also, I said 40 milliseconds, I meant 4. :v: I was reading it as 0.0040, which is where the 40 was coming from. Also, Python is cool with its big numbers; this is what I'm getting for render times: 0.00400400161743
[QUOTE=PvtCupcakes;16350412] Also Python is cool with it's big numbers, this is what I'm getting for render times: 0.00400400161743[/QUOTE] [I]How is that possible?[/I] Computers should not be able to get .01 picosecond resolution. It's gotta be either averaged or a rounding/casting error. [editline]11:39AM[/editline] [QUOTE=Christarp4;16349036]This might sound really stupid, but is it possible to limit the fps just below the 1000 fps limit of the millisecond, and use robodonut's method? Nobody will really need more than 1000 fps in a 2d real time game.[/QUOTE] It is possible with timer interrupts. I'm not sure how happy your OS will be when it is constantly having to catch and pass a torrent of timer interrupts set for 1 ms, but it could work. Only draw the scene [I]immediately after[/I] the timer interrupt goes off and you have computed all the geometry and game logic and stuff. This would effectively round up your frame period to a positive integer number of milliseconds. The only problem is that the OS on your PC is in no particular rush to deliver your regular timer interrupt, so it could be inaccurate (it probably isn't an RT OS like Green Hills), and you want to be able to handle all sorts of contingencies in games as fast as possible, not wait for some arbitrary period of time to pass. Really, games should run as fast as physically possible.
[QUOTE=Cathbadh;16352018]Really, games should run as fast as physically possible.[/QUOTE] Yeah, but the rendering doesn't need to.
[QUOTE=jA_cOp;16352417]Yeah, but the rendering doesn't need to.[/QUOTE] Good point. Most high-class engines are multi-threaded anyway to keep rendering from stalling game logic, but that still doesn't solve our problem. In fact, it makes it worse by taking the rendering out of the game loop we are timing (making the period smaller and more susceptible to the quantization error).
[QUOTE=Cathbadh;16352665]Good point, most high-class engines are multi-threaded anyways to keep rendering from stalling game logic, but still doesn't solve our problem. In fact, it makes it worse by taking the rendering out of the game loop for which we are timing (making the period smaller and more susceptible to the quantization error)[/QUOTE] [code]
loop:
    if TimeToRender():
        Render()
    GameLogic()
[/code]
I propose a relatively simple compromise on ROBO_DONUT's suggestion, to keep the timing accurate, if not instantaneously up to date. You are trying to compute the frame period, ftime. I suggest you also keep track of an integer variable, num_of_frames. Do the timing like ROBO_DONUT described and record a current_time and a previous_time. After each frame is computed, look at the difference between current_time and previous_time. If the difference is below some threshold, say 10 ms, then don't update ftime and don't store current_time into previous_time; just increment num_of_frames. Keep doing this every loop until the difference between current_time and previous_time is at least 10 ms. At that point, set ftime = (current_time - previous_time) / num_of_frames, set num_of_frames back to 1, and store current_time into previous_time. You want the threshold to be large enough to avoid the quantization error of, say, a frame taking 1.425 ms to compute but the timer rounding it down to 1 ms, an error of about 30%. Eight of those frames take 11.4 ms, which gets rounded down to 11 ms, an error of only 3.5%, and ftime comes out as 1.375 ms, not too far off. Of course, this assumes the frame period doesn't change too quickly, since you use the same value of ftime for a whole 10 ms. [editline]12:26PM[/editline] [QUOTE=jA_cOp;16353115][code]
loop:
    if TimeToRender():
        Render()
    GameLogic()
[/code][/QUOTE] That's nasty, though. Frame time would spike whenever it has to render the scene, and it still stalls game logic.
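The averaging scheme above, sketched against a simulated 1 ms clock. THRESHOLD_MS and the 1.425 ms frame time are the illustrative values from the post, and the bookkeeping here is one possible reading of the description, not a definitive implementation:

```python
# Average the frame period over a >= 10 ms window to beat the 1 ms timer
# quantization. The "clock" here is simulated: true_now is real elapsed
# time, quantized_now is what a 1 ms timer would actually report.

THRESHOLD_MS = 10
true_frame_ms = 1.425      # actual time one frame takes

true_now = 0.0
previous_time = 0          # last quantized reading we committed
num_of_frames = 0
ftime = None               # estimated frame period, in ms

while ftime is None:
    true_now += true_frame_ms
    quantized_now = int(true_now)       # timer rounds down to whole ms
    num_of_frames += 1
    elapsed = quantized_now - previous_time
    if elapsed >= THRESHOLD_MS:
        ftime = elapsed / num_of_frames # averaged frame period
        previous_time = quantized_now
        num_of_frames = 0

print(ftime)  # 1.375 ms vs the true 1.425 ms, about a 3.5% error
```

After 8 simulated frames the quantized clock reads 11 ms, matching the numbers in the post: 11 / 8 = 1.375 ms, far closer than the 1 ms a naive single-frame reading would give.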
Quick question. I had it set up wrong: it was changing the speed, but I was resetting it back to normal the next frame, so the movement speed didn't really change. :v: When I do it the right way, my movement speed is so small that there is no movement. I'm using 1.5 as the starting move speed, and when I multiply by the render time it obviously comes out really small. I've found that if I add 1 to the render time, it gets things moving again; it brings my move speed back to ~1.5. Is this okay, or will it screw things up on faster/slower systems? Or will I just have to deal with big numbers for initial speed, like 500?
1.5 what? That is supposed to be a speed in units/second. Are you sure it isn't coming out as 0 ms passed? That behavior is very consistent with what I already described.
[QUOTE=Cathbadh;16387139]1.5 what? That is supposed to be the speed in units/second. Are you sure that it isn't coming out at 0 ms passed, because that behavior is very consistent with what I already described.[/QUOTE] I meant 1.5 pixels per frame. That's my initial speed before adjusting it for the render time. I'm not getting 0 ms; the render time is about 0.004 seconds, and when I multiply it by the initial speed (1.5) it comes out to about 0.006 pixels per frame. So it basically doesn't move, as it'd take 166 frames to move 1 pixel. If I add 1.0 to the render time, it turns into 1.004 x 1.5 = ~1.5 pixels/frame. Would adding the 1 cause any problems on a slower system? I don't think it would, since I'd still end up with something like 1.01 x 1.5, which would speed it up a little to make up for fewer frames being drawn. I'd just prefer to use smaller numbers for initial speeds rather than something like 500 x 0.004. [editline]02:29AM[/editline] I don't think I worded the question that well, but I did some math. If I set the initial speed to 375 pixels per second and multiply by a render time of 0.004 seconds, I get 1.5 pixels per frame. If I bump the render time up to 0.010 seconds for a slower system, my adjusted speed is 3.75 pixels/frame; whereas if I did 1.5 * 1.010 (adding 1 to keep things from being totally minuscule) it comes out to 1.510. So I'm guessing it's better to not add the 1 and to use bigger initial speeds. Keeping low speeds and adding 1 to the time doesn't make any sort of noticeable impact on the adjusted speed.
If you get 0 ms, just don't do anything in that frame, including not resetting the timer. After a few frames (no matter how fast it runs) you will eventually get to 1 ms, and it's running at the intended speed. :buddy:
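One way to read this suggestion in code, with a hypothetical frame_dt_ms helper: when the 1 ms clock reports zero elapsed time, skip the update and leave the stored timestamp alone, so the next nonzero reading covers the whole gap:

```python
# Skip frames where a 1 ms clock reports no time passed. Crucially, the
# stored 'previous' timestamp is NOT reset on a skip, so the eventual
# nonzero reading accounts for all the skipped frames' time.
# frame_dt_ms and the sample readings are illustrative, not from the thread.

def frame_dt_ms(now_ms, state):
    """Return elapsed ms since the last committed reading, or None to skip."""
    elapsed = now_ms - state['previous']
    if elapsed == 0:
        return None                # too soon; do nothing this frame
    state['previous'] = now_ms     # commit only on a nonzero reading
    return elapsed

state = {'previous': 0}
readings = [0, 0, 1, 1, 2]         # a 1 ms clock sampled at ~2500 fps
dts = [frame_dt_ms(t, state) for t in readings]
print(dts)  # [None, None, 1, None, 1]
```

Movement never multiplies by zero; it just happens in 1 ms bursts, which at these frame rates is invisible.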
[QUOTE=Cathbadh;16342057]Oh, if only it were that simple. Most timers have a resolution of only 1 ms, which is like an eternity in computer time. Just think if your program were running on a computer fast enough to get more than 1000 fps. That isn't even that fantastic. Hell, glxgears runs at 15000 fps on my machine. Using your method of subtracting the time, anything over 1000 fps would react as if no time at all had passed, and all of your movement stuff would just halt because you are multiplying the velocities by 0.[/QUOTE] You don't need more than 1 ms resolution for movement, or indeed for a lot of game logic. You can run the movement every 5 ms and have the rendering, which isn't capped, interpolate between positions.
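A sketch of this fixed-timestep-plus-interpolation idea, loosely following the well-known accumulator pattern; TICK, integrate steps, and the advance helper are illustrative names, not from the thread:

```python
# Movement runs in fixed 5 ms ticks; rendering is uncapped and draws a
# position interpolated between the last two tick positions.

TICK = 0.005               # seconds per movement update (5 ms)
speed = 375.0              # pixels per second

accumulator = 0.0
prev_x = curr_x = 0.0

def advance(frame_dt):
    """Consume frame_dt seconds in fixed TICK-sized movement steps, then
    return a render position interpolated into the leftover partial tick."""
    global accumulator, prev_x, curr_x
    accumulator += frame_dt
    while accumulator >= TICK:
        prev_x = curr_x
        curr_x += speed * TICK         # one fixed movement tick
        accumulator -= TICK
    alpha = accumulator / TICK         # 0..1 fraction into the next tick
    return prev_x + (curr_x - prev_x) * alpha

# A 12 ms render frame runs two full ticks and interpolates 2 ms into the
# third, so on-screen motion stays smooth however uneven the frames are:
render_x = advance(0.012)
print(render_x)   # about 2.625 pixels
```

The movement logic sees identical 5 ms steps on every machine, so physics is deterministic; only the drawn in-between positions depend on frame rate.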
Cupcakes, why are you trying to specify the velocity in pixels/frame? I thought the entire point of this thread was to divorce on-screen speed from how long it takes to render a single frame. You should specify speed in pixels/sec, not pixels/frame. To know how far to move the sprite, you need to translate that into pixels/frame, and that's why we do this multiplication. Pay attention to how the units cancel out: (1.5 pixels/sec) * (0.004 sec/frame) = 0.006 pixels/frame. And don't forget, you need to store this pixel-level location in a float (or at least a sufficiently precise fixed-point number), otherwise it is just going to get rounded down to 0.000 pixels/frame and movement halts again.
[QUOTE=Cathbadh;16395700]Cupcakes, why are you trying to specify the velocity in pixels/frame? I thought the entire point of this thread was to divorce speed on-screen with how long it takes to render a single frame. You should specify speed in pixels/sec, not pixels/frame. Pay attention to how the units cancel out: (1.5 pixels/sec) * (0.004 sec/frame) = .006 pixels/frame And don't forget, you need to store this pixel-level location in a float.[/QUOTE] Oh okay. I currently have the initial speed set to 375 pixels/sec, so it comes out to 1.5 pixels/frame. I've always ignored units. :v: Not really sure how to store something like 0.006 in a specific fixed-point type since Python is dynamically typed. :ohdear: Might be able to do something like pixels_per_frame = fixedpoint(pixels_per_second * time). Either way, 0.006 pixels per frame is much too slow.
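A sketch of the float-position idea: keep the authoritative position as a float and only round when updating the integer Rect for drawing (pygame Rects store whole pixels), since truncating a sub-pixel step every frame freezes movement entirely. All values here are illustrative:

```python
# Keep the real position as a float; round only when writing it into the
# integer pixel position used for drawing (e.g. a pygame Rect's .x).

speed = 375.0            # pixels per second
dt = 0.004               # seconds per frame

x = 100.0                # authoritative float position
for _ in range(10):
    x += speed * dt      # 1.5 px per frame at this dt
pixel_x = int(round(x))  # round only at draw time

# By contrast, truncating a sub-pixel step every frame never moves at all:
stuck = 100
for _ in range(10):
    stuck += int(0.006)  # int(0.006) == 0, so this stays at 100 forever

print(pixel_x, stuck)    # 115 and 100
```

This is why the 0.006 px/frame result "didn't move": the fraction was being thrown away each frame instead of accumulating in a float.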
On another note, instead of moving x pixels (depending on the framerate) maybe use a simple physics system, with friction and stuff to smooth the movement out.
[QUOTE=ZomBuster;16405891]On another note, instead of moving x pixels (depending on the framerate) maybe use a simple physics system, with friction and stuff to smooth the movement out.[/QUOTE] No, that doesn't solve anything. The problem is that we are trying to figure out how far to move a sprite each frame such that it appears to move at the same speed, no matter how fast the machine running it is. Sure, you should have a physics system in place, but it won't let you get around the problem of varying framerate. The physics is good for letting the system determine the velocity in pixels/sec, but you still need this frame time stuff to translate that into pixels/frame.