• Transforming a light's position in world space to view space in OpenGL
  • I've posted about this in the [i]"What do you need help with?"[/i] thread a couple of times, but I don't think it's really getting noticed, so hopefully a new thread will help.

I'm coding a deferred renderer that uses view-space normals and positions, so I'm trying to transform my light's world position into view space as well. However, things have not been going too smoothly. It's my understanding that to transform a position into view space I need to multiply the world-space position by my View matrix, but this hasn't been working out for me. After too many hours of frustration, I've come here to ask for help. The problem is that the light's position seems to shift slightly based on the camera's rotation.

[img]http://i.imgur.com/LXu2b.png[/img]

Full images: [url=http://i.imgur.com/DQkFv.png]left[/url], [url=http://i.imgur.com/jb3fb.png]right[/url]

Currently I'm lighting using full-screen quads. However, even if I were to switch to light volumes I would still have this problem when transforming directional lights' world-space directions to view space, so I'd really prefer to find the source of the problem rather than just avoid it.

Creating my matrix and loading it into OpenGL:
[code]
glm::mat4 Projection = glm::perspective(fov, aspectRatio, zNear, zFar);
glm::mat4 RotateX = glm::rotate(glm::mat4(1.0f), xrot, glm::vec3(1.0f, 0.0f, 0.0f));
glm::mat4 RotateY = glm::rotate(RotateX, yrot, glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 View = glm::translate(RotateY, glm::vec3(-xpos, -ypos, -zpos));
glLoadMatrixf(glm::value_ptr(View));
[/code]

Transforming my light's world-space position to view space:
[code]
glm::vec4 LightPos = glm::vec4(5.0f, 1.0f, -15.0f, 1.0f);
LightPos = View * LightPos;
[/code]

My best guess was that the light's position is being affected by the FOV, as that's kind of what it looks like--everything's fine in the center of the screen; it's just around the edges that the problem occurs. However, I'm not touching the Projection matrix for anything other than transforming gl_Vertex positions in my geometry's vertex shader. I'm almost positive the matrices I'm creating are fine, because I'm using the same ones to rotate my geometry and they work there. I'm using [url=http://glm.g-truc.net/]GLM[/url] for matrices and vectors in C++.

Just to isolate the variables here: what needs to be in view space for my lighting to work? Vertex positions, normals, and my light position, right? I've been questioning whether I'm handling the other two correctly (in view space), but from my tests they both seem to check out okay. View-space normals are just [i]gl_NormalMatrix * gl_Normal[/i], right? As for my positions, I'm getting them from depth; however, if I visually compare the output of my positions from depth against [i](gl_ModelViewMatrix * gl_Vertex).xyz[/i] output directly from the geometry shader and stored in a render target, they're nearly identical--the only difference being that the empty background in mine isn't black.

Any ideas? I'm really on my last straw here. I've been trying to fix this problem for several days now, and it truly is driving me insane.

[b]Update:[/b] It's been suggested by a friend that I may be reconstructing geometry positions wrong in my light shader. Since it's a deferred renderer, I'm rendering the geometry to a series of textures, then sampling them while lighting to get the information needed to light properly.

This is how I'm reconstructing view-space positions from depth:
[code]
float depthToZ = -zNear / (zFar - depth.x * (zFar - zNear)) * zFar;
vec3 position = vec3(((gl_FragCoord.x / screenSize.x) - 0.5) * 2.0,
                     ((-gl_FragCoord.y / screenSize.y) + 0.5) * 2.0,
                     depthToZ);
position.x *= -position.z;
position.y *= position.z;
[/code]

My theory is that since I'm setting the [i]gl_Position[/i] of each vertex to [i]ModelViewProjection * gl_Vertex[/i] in the geometry stage (so it gets projected onto the texture) rather than just [i]ModelView * gl_Vertex[/i], it's being projected with FOV, but I'm not taking FOV into account while reconstructing the positions. I'm not sure if this is actually the cause of the problem, though, and I'd have no clue how to fix it if it were.
  • For sampling textures: [code]vec4 normal = texture2D( tNormals, gl_TexCoord[0].xy );[/code] The texture coordinates come from the full-screen quad I'm rendering on. For positions, I've just edited my original post. Now that I think of it, position reconstruction may well be the issue here. I'm pretty sure I'm no longer doing anything wrong when transforming the light position, and I'm pretty sure the normals are fine too. I'm just not sure how I'd fix it.
  • Your positions are bad. gl_FragCoord is in screen space. You want view-space coordinates, which occur [i]before[/i] perspective division. You can reverse the whole process (scale, offset, un-perspective-divide, then multiply by the inverse projection matrix), but there's a much easier way.

What you do is add an extra vertex attribute to the full-screen quad: a vector parallel to the edges of the view frustum with a z-component of 1.0, like <dx, dy, 1.0>, where:

dx = near_w/(2.0*near) = far_w/(2.0*far) = tan(fov/2)
dy = near_h/(2.0*near) = far_h/(2.0*far) = tan(fov/2)/aspect_ratio

with the signs of the x- and y-components differing depending on which corner it is. You pass it to the pixel shader as a varying attribute so that the vector is linearly interpolated automatically by the hardware. This vector points directly away from the camera for every pixel. To get view-space position from depth, then, you simply multiply the depth by this vector and add the view offset:

position = depth * view_ray + view_offset;

This, of course, assumes that your depth is linear and in view coordinates (after projection it is normalized, which you don't want). I recommend rendering it out to an R32F attachment.
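
Roughly, the shader side of this might look like the sketch below. It isn't anyone's actual code, just an illustration of the idea: viewRay, vViewRay, tDepth, and lightPosView are made-up names, it assumes the quad's vertices are already in clip space, and it assumes the geometry pass wrote linear view-space depth to tDepth (watch your sign convention for the stored depth and the ray's z).

[code]
// Full-screen quad vertex shader (sketch).
// viewRay is the per-corner <dx, dy, 1.0> frustum vector described above.
attribute vec3 viewRay;
varying vec3 vViewRay;

void main()
{
    vViewRay = viewRay;                  // interpolated across the quad by the hardware
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_Vertex;             // assumes the quad is already in clip space; otherwise use ftransform()
}
[/code]

[code]
// Light-pass fragment shader (sketch).
varying vec3 vViewRay;
uniform sampler2D tDepth;                // linear view-space depth written by the geometry pass
uniform vec3 lightPosView;               // light position already multiplied by the View matrix

void main()
{
    float viewZ = texture2D(tDepth, gl_TexCoord[0].xy).x;     // raw view-space depth, not normalized
    vec3 position = viewZ * vViewRay;                         // depth * view_ray, as above
    vec3 toLight = lightPosView - position;
    // ...the usual diffuse/specular math with the view-space normals goes here...
    gl_FragColor = vec4(normalize(toLight) * 0.5 + 0.5, 1.0); // placeholder output
}
[/code]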
  • Sounds great, thank you ROBO_DONUT. I really appreciate your help. However, I'm a bit lost. How do I calculate dx and dy? I'm confused about all the equals signs in those formulas. Also, where do near_w/h and far_w/h come from? I'm assuming that's width and height, but of the screen? And view_offset, is that the camera position, the camera matrix, or something entirely different? Thanks for your patience in putting up with me; this problem alone has been a real learning experience. I'm just glad the problem could finally be identified; seriously, this was driving me insane. :v:
  • [QUOTE=dvondrake;34189209]Sounds great, thank you ROBO_DONUT. I really appreciate your help.[/QUOTE]

No prob :)

[QUOTE=dvondrake;34189209]However, I'm a bit lost. How do I calculate dx and dy? I'm confused about all the equals signs in those formulas.[/QUOTE]

I gave you a bunch of different formulas which should all be equivalent, so that you'd have some options. You can use whichever one is most convenient. The only catch is that the one using fov and the aspect ratio may differ depending on whether fov defines the horizontal angle or the vertical angle.

[QUOTE=dvondrake;34189209]Also, where do near_w/h and far_w/h come from? I'm assuming that's width and height, but of the screen?[/QUOTE]

They are values that define the view frustum: 'near_w' is the width of the near clip plane in view coordinates, 'near_h' is the height of the near clip plane in view coordinates, 'near' is the distance to the near clip plane in view coordinates, and so on.

[QUOTE=dvondrake;34189209]And view_offset, is that the camera position, the camera matrix, or something entirely different?[/QUOTE]

Sorry, ignore view_offset entirely. I forgot we were working in view coordinates. You'd need the position of the camera if you were working in world coordinates, but view coordinates are probably the best choice in this case.

[editline]13th January 2012[/editline]

Hold up, I'm going to draw up a quick diagram because I know these things can be hard to visualize.

[editline]13th January 2012[/editline]

[IMG_thumb]http://img823.imageshack.us/img823/7752/viewfrustum.png[/IMG_thumb]

That's what it looks like; you're looking for a vector along that edge with z=1.

The projection process is probably the most difficult part to wrap your head around. Matrix multiplication alone cannot produce a perspective projection, so OpenGL does an extra step where each component of the coordinate is divided by the w-component. What the projection matrix does is apply some basic scale/skew/translation and copy a scaled version of the z-coordinate into the w-coordinate. When you divide everything by w, everything in the distance gets smaller because it has a greater depth. After all this, you end up with normalized device coordinates, which are in the range -1 to 1 on x, y, and z (and z is mapped non-linearly). Finally, NDC coordinates are scaled to the width and height of the display, producing screen coordinates. When you access gl_FragCoord, it's giving you screen coordinates.

I highly recommend taking a look at these two pages. They explain the process clearly with plenty of diagrams:
[url]http://www.songho.ca/opengl/gl_transform.html[/url]
[url]http://www.songho.ca/opengl/gl_projectionmatrix.html[/url]
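
Since reversing the process came up earlier: if you ever did want to go that route instead of the frustum-ray trick, a rough sketch could look like the following. invProjection, screenSize, and tDepth are assumed names (nothing from your code), and it assumes the default glDepthRange of [0, 1].

[code]
// Reconstructing view-space position the long way, by undoing the pipeline described above.
uniform mat4 invProjection;   // inverse of the projection matrix
uniform vec2 screenSize;
uniform sampler2D tDepth;     // the regular (non-linear) depth attachment

vec3 viewPositionFromDepth()
{
    vec2 uv = gl_FragCoord.xy / screenSize;       // screen coordinates -> 0..1
    float depth = texture2D(tDepth, uv).x;        // window-space depth, 0..1
    vec3 ndc = vec3(uv, depth) * 2.0 - 1.0;       // undo the viewport/depth-range mapping -> NDC in -1..1
    vec4 view = invProjection * vec4(ndc, 1.0);   // undo the projection
    return view.xyz / view.w;                     // undo the perspective divide
}
[/code]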
  • [QUOTE=ROBO_DONUT;34189593]I gave you a bunch of different formulas which should all be equivalent, so that you'd have some options. You can use whichever one is most convenient.[/quote]

Ah! Whoops, now I feel a bit silly. I'm trying the [i]tan(fov/2)[/i] method, but it only seems to work when I'm really close to the object. Unless I'm right next to it, it doesn't get lit; once I am right next to it, the lighting is correct (on the correct sides of the object). Maybe I should try building a proper view frustum and taking that approach.

How should I be creating depth? Using GL_DEPTH_ATTACHMENT_EXT straight from OpenGL on a texture seems to be all over the place, so I'm guessing I'll have to calculate it myself. Am I taking a vertex's Z and scaling it between 0 and 1, or is that what I need to avoid doing?

I'll start working on a view frustum tomorrow, mainly because I'll need one for culling anyway. Hopefully that'll help fix the position problem. Projection space is still a bit of a strange concept for me to fully grasp, but I think it's finally starting to make some sense. Your explanation really helped; it's nice to see one in plain English for once.
  • [QUOTE=dvondrake;34190105]Ah! Whoops, now I feel a bit silly. I'm trying the [i]tan(fov/2)[/i] method, but it only seems to work when I'm really close to the object. Unless I'm right next to it, it doesn't get lit; once I am right next to it, the lighting is correct (on the correct sides of the object). Maybe I should try building a proper view frustum and taking that approach.[/QUOTE]

You're using non-linear depth from the depth buffer.

[QUOTE=dvondrake;34190105]How should I be creating depth? Using GL_DEPTH_ATTACHMENT_EXT straight from OpenGL on a texture seems to be all over the place, so I'm guessing I'll have to calculate it myself.[/QUOTE]

The depth buffer result is non-linear. This is a (sometimes useful) side effect of projection: because OpenGL has traditionally used normalized integer depth buffers, the non-linearity gives the foreground more precision than the background. It is, however, not what you want for deferred shading. What you want is linear view-space depth. You can reconstruct view-space depth from the depth attachment value, but that is a tad convoluted and you end up having to do it all over the place. I recommend rendering view-space depth to a floating-point attachment because it makes things much easier.

[QUOTE=dvondrake;34190105]Am I taking a vertex's Z and scaling it between 0 and 1, or is that what I need to avoid doing?[/QUOTE]

No, do not normalize the depth. Just make a GL_R32F (or similar) color attachment and write the view-space z-component to it.
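
As a rough sketch of what that could look like in the geometry pass (the variable names and attachment index are assumptions, not anything from your project):

[code]
// Geometry-pass vertex shader (sketch).
varying float vViewZ;

void main()
{
    vec4 viewPos = gl_ModelViewMatrix * gl_Vertex;
    vViewZ = viewPos.z;                        // raw, linear view-space z; do not normalize it
    gl_Position = gl_ProjectionMatrix * viewPos;
}
[/code]

[code]
// Geometry-pass fragment shader (sketch).
varying float vViewZ;

void main()
{
    gl_FragData[1] = vec4(vViewZ, 0.0, 0.0, 0.0);  // assuming attachment 1 is the GL_R32F linear-depth target
    // ...albedo, normals, etc. go to the other attachments...
}
[/code]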