• 3D Vectors
This idea was motivated by the analogy PIXELS:POLYGONS::VECTORS:_____

Pixels are an easy way to display images since they break them down into the simplest case: monochrome squares. Similarly, polygons make 3D graphics easier since they break complex shapes into the simplest case. The downside of both methods is that they make it impossible to represent many natural objects with high accuracy. For example, only vertical and horizontal lines can be precisely represented by pixels; a line at any other angle can only be approximated. Similarly, perfect spheres (or any smoothly curved surface) cannot be represented by polygons.

In order to "perfectly" represent 2D shapes one has to use vectors - i.e. define the shapes using mathematical equations (I know that vector images still need to be displayed on the screen using pixels, but that's beside the point). However, I don't know of any analog for 3D. It seems feasible to me that one could modify a vector rendering engine to account for perspective and such in order to render 3D vectors. Then it would be possible to have 3D graphics with perfectly defined shapes - shapes whose surfaces are defined by vector curves. (Interestingly, such an engine would also be able to render the usual polygon-based models, since they could be redefined using vectors.)

Does something like this already exist? If not, is there any reason why it couldn't be created?
There are a few applications of this; Google SketchUp and other CAD programs come the closest. Basically they use basic architectural and mechanical engineering maths to generate grids, shapes, curves, etc. as 3D geometry. But implementing that in a live-rendering 3D environment (for a game or some kind of live simulation using advanced graphical technology - SketchUp and other CAD programs have their own live renderers with basic textures and shading) would be impractical. Even in a film rendering situation. And even these programs themselves need to break the curves down into vertex geometry. So for creating detailed models, yes; for having them re-detail themselves in real time, no. It could be done, but it would be very inefficient.

Though there are some special cases. Ever heard of the small indie developer by the name of .theprodukkt? They made this really cool, literally only 96kb game called .kkrieger. Its model compression technique was very innovative: basically, it took a very simple mesh and used a [url=http://en.wikipedia.org/wiki/Subdivision_surface]subdivision[/url] algorithm with edge-sharpening data to create detailed models. (Coincidentally, their texture system is also based on vectors.) So it has sort of been implemented in your case, but not by all the same principles.
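For intuition, here's a rough 2D analog of what a subdivision scheme does. This is Chaikin corner-cutting on a polyline, not the actual surface scheme .kkrieger used, and all the names here are my own:

```python
# One step of Chaikin corner-cutting: each segment (P, Q) is replaced by
# two points at 1/4 and 3/4 along it, so sharp corners get progressively
# rounded off -- the 2D analog of smoothing a coarse mesh by subdivision.
def chaikin_step(points):
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        out.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        out.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return out

poly = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]  # sharp corner at (1, 1)
smooth = chaikin_step(poly)
# After one step the corner point is gone and the polyline hugs a smooth curve;
# repeat the step and it converges to a quadratic B-spline.
```

The "edge-sharpening data" part would then be extra per-edge flags telling the subdivider which corners to leave uncut.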
The 3D analog of pixel images is voxel models. The 3D analog of vector images is polygon models. For the former, the higher the resolution, the prettier the result. For the latter, the more vectors/polygons, the prettier the result. A technique for making graphics-hardware-friendly 3D models out of arbitrary bodies is [url=http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter07.html]tessellation[/url].
Mathematical curves can be three-dimensional (or more) just as easily as two-dimensional. Rendering them probably requires converting them to polygons first, though.
Programs like Illustrator and Inkscape do not convert 2D vectors to polygons before rendering - they render directly to pixels for maximum accuracy. But these programs can also easily create a perspective effect for 2D vector objects using simple transformations (skewing and resizing) - so it's definitely possible to create an engine that renders smooth vector-based 2D shapes in 3D. It isn't a huge leap of the imagination to then think about rendering 3D objects with smoothly-defined surfaces. Granted, Inkscape and Illustrator both take a while to create final renders of vector images, so such a 3D engine would need to employ a lot of time-saving tricks (such as skipping AA) to render in near real time - especially since the engine probably couldn't take advantage of the GPU.
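The skew/resize trick works because affine transforms commute with Bézier evaluation: transforming the control points transforms the whole curve exactly, so the renderer never has to touch the curve samples. A quick sketch of that property (de Casteljau evaluation; the function names and numbers are my own):

```python
# Evaluate a cubic Bezier at parameter t via de Casteljau's algorithm
# (repeated linear interpolation of the control points).
def bezier(p0, p1, p2, p3, t):
    lerp = lambda a, b: tuple((1 - t) * ai + t * bi for ai, bi in zip(a, b))
    a, b, c = lerp(p0, p1), lerp(p1, p2), lerp(p2, p3)
    d, e = lerp(a, b), lerp(b, c)
    return lerp(d, e)

# A simple skew + scale, i.e. an affine transform like Inkscape's perspective fake.
def skew_scale(p):
    x, y = p
    return (2.0 * x + 0.5 * y, 1.5 * y)

ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
t = 0.3
# Transforming the control points first...
via_ctrl = bezier(*[skew_scale(p) for p in ctrl], t)
# ...gives exactly the same point as transforming the evaluated curve point,
# because the Bezier point is an affine combination of the control points.
via_point = skew_scale(bezier(*ctrl, t))
```

True perspective is a projective transform, not an affine one, so it only commutes like this for rational curves - which is one reason a real 3D vector engine is harder than it first looks.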
You could render 3d curves exactly by testing which curves a ray cast from each pixel intersects. It would be slow as fuck though. For realtime rendering polygons seem like a much better idea. I've never used any professional modelling software, so I don't know how they render those curves. I'm fairly sure that Blender converts NURBS to polygons for realtime rendering though.
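For a surface with a closed-form equation the per-pixel test doesn't even need marching - you can solve for the intersection directly. A minimal sketch for a sphere (names are my own; the ray direction is assumed normalized):

```python
import math

# Analytic ray-sphere intersection: substitute the ray o + t*d into the
# sphere equation |p - c|^2 = r^2 and solve the resulting quadratic in t.
def ray_sphere(o, d, center, r):
    oc = tuple(oi - ci for oi, ci in zip(o, center))
    b = sum(x * y for x, y in zip(oc, d))   # half of the quadratic's b term
    c = sum(x * x for x in oc) - r * r
    disc = b * b - c
    if disc < 0:
        return None                          # ray misses the sphere entirely
    t = -b - math.sqrt(disc)                 # nearer of the two roots
    return t if t >= 0 else None

# A ray down +z from the origin hits a unit sphere centered at (0,0,5) at t=4.
hit = ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)  # -> 4.0
```

One of these per pixel is cheap for spheres; it's general implicit surfaces (no closed-form roots) where you're forced into iterative methods and the "slow as fuck" territory.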
Zeeky, tessellation isn't quite what I'm talking about, since it still ultimately relies on creating polygons. Pebkac, I'm not exactly sure how Illustrator and the like render their vectors, but I imagine they use some kind of raycasting like you described. However, I'm sure the process could be sped up by skipping steps (like AA) and by only looking at the edges of objects. Specifically, I was thinking it could work by taking a complex 3D object and determining which 2D vector curve defines the edge of the object when viewed from a given direction. E.g. an ellipsoid is bounded by a family of ellipses, but when rendered the engine would determine which specific 2D curve defines the outline and render only that. Since it has the equation for the curve (and we aren't bothering with AA), you could either cut down the area that needs to be raycast (skipping pixels far from the curve) or render the curve parametrically (though that would sacrifice fidelity).
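For the sphere case that outline actually has a closed form, so the idea checks out at least there. Assuming a pinhole camera at the origin with the sphere on the optical axis (my own setup, purely for illustration), the silhouette on the image plane is a circle whose radius you can write down directly:

```python
import math

# Silhouette of a sphere of radius r, centered on the optical axis at
# distance d, seen by a pinhole camera with image plane at z = f.
# Tangent rays make an angle theta with the axis where sin(theta) = r/d,
# so the outline is a circle of radius f * tan(theta) = f*r / sqrt(d^2 - r^2).
def sphere_outline_radius(r, d, f):
    assert d > r, "camera must be outside the sphere"
    return f * r / math.sqrt(d * d - r * r)

# Example: r=3, d=5, f=1 gives an outline circle of radius 0.75.
rho = sphere_outline_radius(3.0, 5.0, 1.0)
```

Off-axis the silhouette becomes an ellipse (a conic in general), but it's still one exact 2D curve per sphere - which is exactly the "only render the outline equation" shortcut you're describing.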
Vector-graphics programs usually use a software rasterizer, though with the advent of programmable shader hardware I think hardware solutions are on the way. On most handheld devices OpenVG is adopted, afaik. Tessellation is a good choice at the moment, because hardware 3D acceleration does rely on triangles. Now, I'm not exactly a pro in terms of graphics hardware either, but what you are saying seems to stem from less knowledge than I possess. Does it just seem that way because of my limited knowledge, or do you really not know much about it?
[QUOTE=ZeekyHBomb;26835469]Vector-graphic programs usually use a software rasterizer, though with the ability of programmable shader hardware I think hardware solutions are on the way.[/QUOTE] [url=http://alice.loria.fr/publications/papers/2005/VTM/vtm.pdf]Already here, actually.[/url] You could also use OpenCL (or CUDA, etc.) to utilize a GPU for rendering tasks that aren't Direct3D or OpenGL, like pixel-accurate NURBS rasterization.
displaying mathematical equations with infinite accuracy in 3d is entirely possible, but the only known method for doing so is raymarching and that is slow as fuck. i've done some raymarching experiments before: [IMG]http://i.imgur.com/EwkEf.gif[/IMG] quantized static prng [IMG]http://i.imgur.com/T8QD9.gif[/IMG] perlin noise although there's some people like [url=http://www.iquilezles.org/prods/index.htm]iq^rgba[/url] who take it to [url=http://www.iquilezles.org/www/articles/terrainmarching/terrainmarching.htm]another dimension[/url]. [editline]20th December 2010[/editline] or nurbs, apparently.
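The core loop of a raymarcher (the sphere-tracing variant - sketched from memory, function names are mine) is just "step forward by the distance-field value until it's tiny":

```python
import math

# Signed distance from point p to a sphere: negative inside, positive outside.
def sdf_sphere(p, center, r):
    return math.dist(p, center) - r

# Sphere tracing: from origin o along unit direction d, advance by the
# scene's distance-field value each step; that step size is always safe
# because nothing in the scene is closer than it. Stop when the distance
# is ~0 (hit) or the ray has gone too far (miss).
def raymarch(o, d, sdf, max_t=100.0, eps=1e-4, max_steps=200):
    t = 0.0
    for _ in range(max_steps):
        p = tuple(oi + t * di for oi, di in zip(o, d))
        dist = sdf(p)
        if dist < eps:
            return t                 # hit: surface reached within tolerance
        t += dist
        if t > max_t:
            break
    return None                      # miss

scene = lambda p: sdf_sphere(p, (0.0, 0.0, 5.0), 1.0)
t_hit = raymarch((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene)  # converges to ~4
```

The "slow as fuck" part is that this loop runs per pixel, and grazing rays can need hundreds of steps - which is also why moving it into a pixel shader helps so much.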
Thanks for all of the contributions. And piss off to all the people rating me dumb without contributing to the conversation.
Here's something pretty cool: [url=http://www.devmaster.net/forums/showthread.php?t=4448]raytracing Julia fractals on a GPU[/url]. And this was done half a decade ago, using Cg (basically, GL/D3D pixel shaders) because there was no CUDA/OpenCL yet. I've been thinking about trying this out as an OpenCL practice project at some point.
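The heart of that demo is the quaternion Julia iteration q → q² + c: a point belongs to the set if |q| never escapes, and the renderer raytraces the boundary of that set. A toy membership test (the constant c and starting points are my own picks, nothing from the linked code):

```python
import math

# Square a quaternion (w, x, y, z): writing q = w + v with v the vector
# part, q^2 = (w^2 - |v|^2) + 2*w*v.
def qsquare(q):
    w, x, y, z = q
    return (w * w - x * x - y * y - z * z, 2 * w * x, 2 * w * y, 2 * w * z)

# Iterate q -> q^2 + c; report whether |q| stays below the escape radius.
def in_julia(q, c, max_iter=50, escape=2.0):
    for _ in range(max_iter):
        q = tuple(a + b for a, b in zip(qsquare(q), c))
        if math.sqrt(sum(v * v for v in q)) > escape:
            return False
    return True

c = (0.1, 0.1, 0.1, 0.1)   # small |c| keeps the origin's orbit bounded
inside = in_julia((0.0, 0.0, 0.0, 0.0), c)    # bounded orbit: in the set
outside = in_julia((1.5, 0.0, 0.0, 0.0), c)   # escapes immediately: not in it
```

The GPU version evaluates something like this (or a distance estimate derived from it) along every camera ray, which is why it maps so well onto pixel shaders and later OpenCL.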
NURBS. You could render a geometric shape without converting to polygons, however this would be very slow and done in software as graphics hardware only accelerates polygons. And most offline software renderers are probably made with polygons in mind. [editline]22nd December 2010[/editline] Volumetrics don't really qualify I think.
[QUOTE=BmB;26864183]as graphics hardware only accelerates polygons.[/QUOTE] raymarching can be (and is usually) done in a pixel shader, and is MUCH FASTER if so.
I'm pretty sure I saw something somewhere about the Xbox being able to tessellate the required level of polygonal detail from a vector base model.
[QUOTE=deloc;26838063]displaying mathematical equations with infinite accuracy in 3d is entirely possible, but the only known method for doing so is [b]raymarching[/b][/QUOTE] [img]http://www.iquilezles.org/www/articles/terrainmarching/ray_05_g.jpg[/img] Holy fuck, if it wasn't for the low detail shadows I would've thought the aliens were here :ohdear:
What about sparse voxel octrees? Is that applicable here? I don't know much about the technique, but I've seen it used in some high-detail (maybe unlimited detail?) applications
are you talking about the unlimited detail technology?
Which unlimited detail technology? The last one I heard actually touted as "unlimited detail" was just point clouds.
[QUOTE=HubmaN;26878574]Which unlimited detail technology? The last one I heard actually touted as "unlimited detail" was just point clouds.[/QUOTE] Fractals, maybe?
The only "unlimited detail technology" I can find is a video of some strange-sounding aussie-sounding guy saying the words "unlimited detail" and "unlimited point cloud data" (pronounced 'darta', is that really how they say it over there?) over and over again with a few other words stuck in between occasionally.
There can't be unlimited data; maybe it's a small data set that is recreated over and over with procedural generation methods.
Raymarching is as close as you'll get to unlimited detail, since it does numerous point intersection tests of mathematical equations per pixel, stepping until a hit condition is met.
I don't think that "unlimited detail" is a good way to describe what I'm going for... Like I said in my OP, the motivation was to think up a rendering method that could render objects with smooth surfaces (like spheres) in a way analogous to how vector images render smooth curves in 2D. That doesn't necessarily require "unlimited detail"; it could instead be an alternative approach to rendering that does not rely on polygons. Point cloud rendering is one such alternative, but really it's just a different way of approximating smooth surfaces.
[QUOTE=Larikang;26893283]Like I said in my OP, the motivation was to think up a rendering method that could render objects with smooth surfaces (like spheres) in a way analogous to how vector images are used for rendering smooth curves in 2D. That doesn't necessarily require "unlimited detail", and could instead be an alternative approach to rendering that does not rely on polygons.[/QUOTE] In principle, direct (no tessellation) rasterization of NURBS is possible, but I'm not aware of anything that implements it. The mathematical complexity would probably make it too slow for real-time use (though I'm speculating).
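For reference, the expensive part is evaluating the basis functions for every sample. Here's the Cox-de Boor recursion for a B-spline curve point (a NURBS with all weights equal to 1 reduces to exactly this); a sketch of the math, not anyone's production rasterizer:

```python
# Cox-de Boor recursion for the B-spline basis function N_{i,p}(u).
def basis(i, p, u, knots):
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * basis(i + 1, p - 1, u, knots)
    return left + right

# A curve point is the control points weighted by their basis values.
def bspline_point(ctrl, p, knots, u):
    x = sum(basis(i, p, u, knots) * cx for i, (cx, _) in enumerate(ctrl))
    y = sum(basis(i, p, u, knots) * cy for i, (_, cy) in enumerate(ctrl))
    return (x, y)

# A clamped quadratic with knots [0,0,0,1,1,1] is just a Bezier arc;
# at u = 0.5 the basis weights come out to (0.25, 0.5, 0.25).
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
pt = bspline_point(ctrl, 2, [0, 0, 0, 1, 1, 1], 0.5)  # -> (1.0, 1.0)
```

Doing this recursion per pixel per surface (in 2D parameter space, plus the rational weights and a root-find for the ray intersection) is the cost that makes direct rasterization look impractical next to tessellating once and letting the triangle hardware run.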