I've seen all these marketing campaigns about tessellation, but how does it work? Does it actually make things smoother? Do they just make a very high-poly model and tessellate it down to fewer polys? Do they use some "map" where they tessellate areas that are supposed to be rounder? Won't this cause problems by smoothing objects that are meant to be blocky?
Tessellation = smoothing objects by adding more polygons.
Tessellation + Displacement = what the "Heaven" demo is all about.
It basically makes everything very smooth, though the edges seem to have fuckloads of unnecessary polygons.
Looks nice though.
Also, not every object gets tessellated; a box that shouldn't be smooth won't be tessellated.
It's not like turning on tessellation makes every single object tessellated.
[editline]01:30AM[/editline]
They also need to stop tessellating the fuck out of every goddamn object; a goddamn rock doesn't need to be all white when you enable wireframe.
Tessellation seems kinda unoptimized.
Why not just model a high enough res object and then add LODs to it? That way you can optimize it more, and lower-end cards can still use the lower LOD.
Is it possible to "de-tessellate"? I could see this as a replacement for LODs
Tessellation was basically on everything in that demo. And even then, it wasn't very well optimized.
[img]http://unigine.com/devlog/090928-dragon_tesselation_wire1.jpg[/img]
In real games, it won't be on everything, and it won't be so detailed.
He asked how it works, not what it does.
The problem with hugely detailed models is that rendering them costs massive bandwidth. Often we have to load several versions of a model at different LODs, and load huge amounts of data if the model happens to be highly detailed.
Tessellation removes these concerns, because the model is represented as patches instead of raw polygons. A patch represents a curve or region; it can be a triangle, but the more common representation is a quad, as used in many 3D authoring applications.
What all this means is that DirectX 11 hardware can procedurally generate complex geometry out of relatively sparse data sets, improving bandwidth and storage requirements. This also helps animation, since moving the control points of a patch changes the final output each frame.
Even cooler, this can all be done with one model, which can then be stripped of detail based on the level of detail needed. No more LODs; simply tell the engine not to subdivide detail-heavy patches as much, for better performance.
Putting it simply, the shader pipeline can subdivide polygons to create higher detail with real geometry, instead of faking it with normal or bump maps like we do now. The result works with anti-aliasing and filtering (remember the jagged, blurry edges of Crysis' parallax maps?) and it's pretty quick to render and store.
And this will hopefully work better in games than in the Heaven demo.
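To make the patch idea concrete, here's a rough Python sketch (not real D3D11 code; the function and its bilinear-interpolation scheme are my own illustration) of what the tessellator stage conceptually does: expand a sparse quad patch of 4 control points into dense geometry, driven by a single tessellation factor.

```python
def tessellate_quad(p00, p10, p01, p11, factor):
    """Uniformly subdivide a quad patch defined by 4 corner control points.

    Returns (vertices, triangles): the generated vertex positions and the
    triangle index list. factor=1 reproduces the original quad (2 tris).
    """
    def lerp(a, b, t):
        return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

    verts = []
    for j in range(factor + 1):
        v = j / factor
        left = lerp(p00, p01, v)    # interpolate down the patch's left edge
        right = lerp(p10, p11, v)   # ...and down its right edge
        for i in range(factor + 1):
            verts.append(lerp(left, right, i / factor))

    tris = []
    stride = factor + 1
    for j in range(factor):
        for i in range(factor):
            a = j * stride + i
            tris.append((a, a + 1, a + stride))
            tris.append((a + 1, a + stride + 1, a + stride))
    return verts, tris
```

The point is the data ratio: only 4 control points ever need to be stored or uploaded, but a factor of 8 already expands them into 81 vertices and 128 triangles on the fly. Real hardware curves the result via the domain shader instead of flat bilinear interpolation, but the expansion principle is the same.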
Does anyone have any idea about how much the framerate drops in Stalker Call of Pripyat?
To add to thisispain's post:
Yeah, you can't just use a highly-detailed model because of bandwidth issues; otherwise it would make more sense just to tessellate everything on the CPU for more efficient use of polys.

When you use a graphics API like DirectX or OpenGL, you can send the GPU commands to draw individual polygons every frame (immediate-mode rendering). There are ways to store the vertex data on the GPU and just tell it to render the buffered object, but there is still the initial upload, and then there's the problem of animation.

While some games are moving to shader-based hardware animation, many still use CPU-based animation, which means the CPU transforms the vertices based on the bone transformations and then sends the model to the GPU for rendering. If every model were tessellated before this, the CPU would have many more calculations to do (it's all per-vertex), and the overhead of sending the model through the northbridge would cause slowdowns and be inefficient.

Hardware tessellation lets developers send sparse models to the GPU to be tessellated and rendered there (possibly with effects like vertex displacement, as shown in Dr. Nick's post) by hardware that's designed for vertex data and operations, unlike the general-purpose CPU.
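Here's a back-of-the-envelope sketch of that bandwidth argument. All the numbers (vertex size, counts, factor) are illustrative assumptions, not measurements: a CPU-animated model gets re-uploaded every frame, so pre-tessellating on the CPU multiplies the per-frame transfer, while hardware tessellation keeps the upload small.

```python
VERTEX_BYTES = 32  # assumed layout: position + normal + UV, packed


def per_frame_upload(base_verts, tess_factor, tessellate_on_cpu):
    """Bytes sent over the bus each frame for one CPU-animated model."""
    if tessellate_on_cpu:
        # Each patch's vertex count grows roughly with factor^2,
        # and every one of those vertices must cross the bus.
        verts = base_verts * tess_factor ** 2
    else:
        # GPU tessellation: only the sparse control mesh crosses the bus.
        verts = base_verts
    return verts * VERTEX_BYTES
```

With these made-up numbers, a 10,000-vertex model at tessellation factor 8 costs 64x more bus traffic per frame when tessellated on the CPU (about 20 MB vs 320 KB), which is the slowdown the post describes.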
I learnt something in this thread.
In layman's terms... isn't this just like some sort of reverse LOD? Taking a technically low-res model with extra info to improve its detail as needed?
Yea...
Something like that.
[QUOTE=zanraptora;20035103]In layman's terms... isn't this just like some sort of reverse LOD? Taking a technically low-res model with extra info to improve its detail as needed?[/QUOTE]
Yeah, but the cool thing is that it's completely procedural. Developers don't need to program in high quality LODs.
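The "no more LODs" idea boils down to computing a per-patch tessellation factor from camera distance instead of swapping between hand-authored meshes. A hedged sketch (the clamp range and linear falloff here are invented for illustration; real engines pick their own curves):

```python
def tess_factor(distance, near=5.0, far=100.0, max_factor=64, min_factor=1):
    """Linearly fade the tessellation factor from max at `near` to min at `far`."""
    if distance <= near:
        return max_factor          # up close: full detail
    if distance >= far:
        return min_factor          # far away: just the base control mesh
    t = (distance - near) / (far - near)
    return round(max_factor + (min_factor - max_factor) * t)
```

Because the factor varies continuously, detail fades in and out per patch every frame, with no discrete LOD "pop" and no extra meshes for artists to author.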
seriously? i could just put in a geometry shader to my cute little virtual level, and i'd be on my way?
I had never heard of this until now. Holy fuck did that Heaven demo rape my system. My card was running at around 100°C.
[img_thumb]http://img524.imageshack.us/img524/255/wowb.png[/img_thumb]
I got an OK average FPS, but there were times when it was running at like 20 FPS.
[QUOTE=zanraptora;20035103]In layman's terms... isn't this just like some sort of reverse LOD? Taking a technically low-res model with extra info to improve its detail as needed?[/QUOTE]
It's a more effective alternative to normal mapping and bump mapping. It's closer to displacement mapping than to those methods, because bump mapping and normal mapping only affect the normals of an object, i.e. how the surface facing the camera is lit. The effect distorts when you move because the geometry never changes; we're only modifying the shading to create an illusion of depth.
Take F.E.A.R., for instance: shoot a wall and it creates a hole; the hole is lit realistically and appears to have depth due to parallax mapping.
Now move to the side and you'll see there is no depth; we only simulated it by shifting the texture.
An even better example is a solid sphere. Add a normal map or a parallax map to the sphere, and you'll find none of that map's detail appears in the shadow, because there simply is no depth information. There's no Z; it's only trickery. And because there's no Z, AA and AF don't work on it either; there's no information for them to use. Load up the parallax maps in Crysis and you'll find that even with AA and AF enabled, neither is being applied to them.
[b]
Tessellation changes that and adds actual depth by doing it in the model rendering stage, not the post-model texture rendering stage.[/b]
Therefore: proper AF and AA on models that look as sophisticated as high-poly models, but are as memory-friendly and easy on the graphics card as normal-mapped ones.
in layman's terms: tessellation fucking rocks.
[editline]01:28AM[/editline]
and rules
[editline]01:28AM[/editline]
my sox
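The "actual depth" part of the post above is displacement: after tessellation has generated the dense vertices, each one is pushed along its normal by a height sampled from a map, so the geometry itself changes and shadows, AA, and AF all see real Z. A minimal sketch (the flat data layout here is made up for illustration; on hardware this runs in the domain shader):

```python
def displace(vertices, normals, heights, scale=1.0):
    """Offset each vertex along its normal by its sampled height value."""
    out = []
    for (vx, vy, vz), (nx, ny, nz), h in zip(vertices, normals, heights):
        # Real geometric offset, not a shading trick: the silhouette,
        # shadow, and depth buffer all change with the height map.
        out.append((vx + nx * h * scale,
                    vy + ny * h * scale,
                    vz + nz * h * scale))
    return out
```

Contrast this with normal/parallax mapping, where the vertex positions never change and only the per-pixel shading is perturbed.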
[QUOTE=thisispain;20036702]
[b]
Tessellation changes that and adds actual depth by doing it in the model rendering stage, not the post-model texture rendering stage.[/b]
Therefor, sweet AF and AA in models that are sophisticated like high-poly models but are actually memory friendly and easy on the graphics card like normal mapping.[/QUOTE]
But relief mapping does the exact same thing; it just does it with textures (basically it's a parallax-mapping method that actually projects the mapped detail off the plane, rather than just grabbing tangent information and shifting the texture). These are flat objects here:
[IMG]http://img218.imageshack.us/img218/9785/relief1.jpg[/IMG]
Wouldn't that be cheaper than having several hundred thousand textured tris onscreen? Not trying to contradict; I'm actually curious, as I'm a game dev and I want to know.
[QUOTE=mrhippieguy;20037010]
Wouldn't that be cheaper than having several hundred thousand textured tris onscreen? Not trying to contradict; I'm actually curious, as I'm a game dev and I want to know.[/QUOTE]
Your card has a certain budget, as I'm sure you're aware. Each card can generate a specific number of tris per second, and as long as the game is developed well, the number of tris in the scene will scale according to what the card can handle.
It's also worth noting that the idea of relief mapping was around long before parallax mapping. In fact, I would go as far as to say that parallax mapping is a cheap way of faking full relief mapping. The problem with relief mapping is that it's extremely expensive, because it handles everything on a per-pixel basis, using a method not dissimilar to raytracing.
Long story short, whenever games make the switch from rasterization to raytracing, relief mapping will become unbelievably cheap, because the cards will be optimized for it. But until then, full hardware tessellation is cheaper, and produces a result that's almost as good.
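To show why relief mapping is so expensive: for every shaded pixel it marches a ray through a height field until the ray dips below the stored surface, which is effectively a miniature raytrace per pixel. A simplified linear-search sketch (the step count, depth convention, and grid sampling are illustrative assumptions; real implementations refine the hit with a binary search):

```python
def relief_march(heightfield, start_uv, view_dir_uv, depth_scale=1.0, steps=32):
    """Step the view ray through the height field (stored as depth below the
    surface) and return the (u, v) where the ray first falls below it."""
    u, v = start_uv
    du, dv = view_dir_uv
    for i in range(1, steps + 1):
        t = i / steps
        ray_depth = t * depth_scale
        # Clamp the sample point so the grid lookup stays in range.
        su = min(max(u + du * t, 0.0), 0.999)
        sv = min(max(v + dv * t, 0.0), 0.999)
        h = heightfield[int(sv * len(heightfield))][int(su * len(heightfield[0]))]
        if ray_depth >= h:
            return (su, sv)   # ray entered the surface: shade this texel
    return (u + du, v + dv)   # no hit within the search range
```

That loop runs once per pixel, every frame, which is why the post above calls it "not dissimilar to raytracing"; the tessellator instead pays a per-vertex cost once and lets the ordinary rasterizer do the rest.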