[QUOTE=Ziks;52124218]I think that was me, if you mean this:
[vid]http://files.facepunch.com/ziks/2015/June/23/2015-06-23-1527-57.mp4[/vid]
It was done with all the wires using the same cylinder geometry, with the vertices deformed in [url=https://gist.github.com/Metapyziks/85c08387830011690937]this[/url] shader. _A and _D are the start and end points of the wire, and _B and _C are like anchor positions that make sure the wire is pointing the right direction at the start and end.[/QUOTE]
Someone has taken this concept and turned it into a full game; check out Soundstage on Steam.
[QUOTE=Fourier;52124585]Damn, that was fast! Thanks Ziks! Will take a look at it, looks slick!
Anyway, how should the 3D cylinder look? I mean, what are the dimensions, orientation, and radius of the 3D model?[/QUOTE]
I think it was 1x1x1, oriented with the Z axis going through the middle.
I find it really strange that, given the number of things GLM supports and its ubiquity, it doesn't have a preprocessor flag like "#define GLM_USING_QT" or something that allows for conversion to/from Qt matrix/vector types. Same with Qt not supporting GLM in some fashion, be it through a preprocessor define or something. It adds an odd bit of bloat and boilerplate interface code between parts of my application, and it gets verbose when I have to use lots of calls to glm::value_ptr and such.
It's almost driving me to create my own vector type to use in the relevant code, so that I can define conversions from the other two types in my own struct. But then I'd have three separate vector types in my code, which just seems silly (and probably isn't worth the potential issues).
[editline]edit[/editline]
I also can't help but appreciate GLM more and more all the time. It's certainly not a fancy or attention-grabbing library, but it's absolutely bulletproof and does everything you need it to do, even without the GTX extensions.
GLM is awesome
I too have redundant vector and matrix classes with glm::mat4x4, aiMatrix4x4, and btMatrix4x4; I'm sure I even have a fourth one hiding somewhere else.
whyyy not just define conversion operators?
[QUOTE=JWki;52125656]whyyy not just define conversion operators?[/QUOTE]
Can't do that outside of the scope of the original class declaration/definition, afaik. So I'd have to go tinker with GLM's/Qt's source code for these items, and then I'd have to make sure to package my edits with my project and so on, and make sure that the Qt elements are included in the right GLM elements, and vice versa. OpenGL stuff can get tangled too, because you end up with multiple definitions pretty easily if you're not really careful.
[editline]19th April 2017[/editline]
I mean, correct me if I'm wrong, because just defining a conversion operator would be great.
All I do is use GLM as the standard in my application, and only use the others when absolutely necessary.
Like when I'm updating my own objects at the end of a physics tick, I grab their transformation matrix and convert it to glm's mat4. Or when I'm grabbing animations from an assimp model, I'll convert them to GLM.
I've added ratings to my [del]facepunch clone[/del] game about internet forums!
[t]http://i.imgur.com/oMuepwd.png[/t]
[t]http://i.imgur.com/zERFG6S.png[/t]
[QUOTE=paindoc;52126162]Can't do that outside of the scope of the original class declaration/definition, afaik. So I'd have to go tinker with GLM's/Qt's source code for these items, and then I'd have to make sure to package my edits with my project and so on, and make sure that the Qt elements are included in the right GLM elements, and vice versa. OpenGL stuff can get tangled too, because you end up with multiple definitions pretty easily if you're not really careful.
[editline]19th April 2017[/editline]
I mean, correct me if I'm wrong, because just defining a conversion operator would be great.[/QUOTE]
Oh you're right. That's actually pretty sucky.
I guess I'd go with a custom vector type then, just for the conversions; an implementation of the adapter pattern, if you will. If you define conversions for that type from and to all the other corresponding vector types, you can use it everywhere in your code and plug it into the GLM algorithms as well as the Qt stuff. Seems pretty reasonable to me.
I don't really see an issue with having multiple vector types. Think of it this way: your custom type is the main one for your application, GLM provides you with algorithms, and, well, Qt is a lib you use, so it's pretty normal that it defines some things you already have. Code duplication is not inherently evil; it sometimes has reasonable justifications, like in this case.
[QUOTE=cam64DD;52127652]I've added ratings to my [del]facepunch clone[/del] game about internet forums!
[t]http://i.imgur.com/oMuepwd.png[/t]
[t]http://i.imgur.com/zERFG6S.png[/t][/QUOTE]
I'm legitimately interested to see the game mechanic you make out of that.
Trying to simulate soft body physics with Box2D
[vid]https://thumbs.gfycat.com/NastyFittingFlamingo-mobile.mp4[/vid]
[QUOTE=Ziks;52125347]I think it was 1x1x1, oriented with the Z axis going through the middle.[/QUOTE]
Ok thank you :). :saxout: Gonna post results (will take some time tho)
Vulkan continues to be fun, if your idea of fun is slightly masochistic and involves getting to (or having to, depending on your opinion of it lol) explicitly control something as detailed as GPU memory allocation, reallocation, and freeing. I was about to start rendering a triangle, but then I found out that Vulkan gives you so much control that I might as well have fun implementing a [URL="https://accu.org/index.php/journals/389"]circular buffer[/URL] to use for "staging memory" right now, and texture streaming down the road (if it works out, or makes sense). But then I also realized that I'd probably get fragmentation, or at least inefficient use of the buffer's space, if I didn't take time to make sure that things were optimally arranged. And Vulkan expects everything to be aligned, so you have to make sure that (if shifting things around to defrag/optimize, for example) memory addresses all stay aligned.
This alignment changes based on the type of data in the buffer too, which might be because an image allocation could explicitly use texture memory on the GPU, whereas something like uniform buffers or mesh data can reside in one of the caches (if available) or in constant memory or something, I dunno. It is neat that you can easily get things like the max VRAM size, the default alignment, or the max number/size of allocations for each GPU in a system, and that these values can be easily fetched at runtime. I can't think of a way you could do that in OpenGL, and you never really had to think about managing memory usage like this in OpenGL, but I can see how it could be really quite useful for automatically configuring the memory layout/usage depending on the hardware running your program. So writing a circular buffer quickly meant also writing a defragmenter that I can run in between frames (I think; I'm still horribly new at trying to do fancy rendering stuff) so I don't end up with a shitload of pointlessly wasted space in my buffer. I'm also imagining that there is just no way I can ever outdo the driver when it comes to this stuff, either. I've dealt with allocating GPU resources before with CUDA, but that was literally as simple as using something like cudaMalloc, cudaMemset, cudaFree, etc., not something that makes me feel like the world's worst GPU driver programmer.
My head's all mush though, and I wish I had more to show for this effort, but I've been super busy at work and more than a little tense as we wait to negotiate a [I]really[/I] big programming contract. I'm not sure how much I can disclose, but it's pretty much in the vein of work I'd like to do for a career, and it'd be awesome to have on my portfolio. So I've been practicing other things in C++ that would be related to this potential project and haven't had time to work on Vulkan ;c
Downside of the aforementioned contract: the codebase terrifies our C++ guru, and he asked me if I was okay trying to read comments in a foreign language, so uh, yeah, this could be interesting
So, just messing around, I wondered how much of a difference it would make in a for loop if array.Length was hoisted into a local, as in for(int i = 0, len = array.Length; i < len; i++). I am sure it's been done before, but I wanted to do it myself and thought someone else might find it interesting too. This was done in C#, by the way.
The loops looked like the following -
Type 1.
[code]
for (int x = 0; x < arr.Length; x++)
{
...
}
[/code]
Type 2.
[code]
for (int x = 0, len = arr.Length; x < len; x++)
{
...
}
[/code]
Here are charts showing the data; they go from an array size of 100 to 2 million. Each array size was looped over 1000 times for both types and the average was taken.
[img]http://puu.sh/vqNUT/7c39beabc8.png[/img]
[img]http://puu.sh/vqP1L/a94591c4d7.png[/img]
Just found it interesting and thought I would post it.
[img_thumb]http://i.imgur.com/99xDFQX.jpg[/img_thumb]
Voxel planet part three-hundred made in the fantastic Unreal Engine 4™
Gonna add that greedy meshing and networked levels next probably
[QUOTE=TH3_L33T;52130629]So, just messing around, I wondered how much of a difference it would make in a for loop if array.Length was hoisted into a local, as in for(int i = 0, len = array.Length; i < len; i++). I am sure it's been done before, but I wanted to do it myself and thought someone else might find it interesting too. This was done in C#, by the way.
The loops looked like the following -
Type 1.
[code]
for (int x = 0; x < arr.Length; x++)
{
...
}
[/code]
Type 2.
[code]
for (int x = 0, len = arr.Length; x < len; x++)
{
...
}
[/code]
[/QUOTE]
I guess .Length is a property with some logic built into the getter, while len is just a local variable on the stack, which is super fast to access.
Interesting to see the comparison, though.
[QUOTE=paindoc;52130561]Vulkan continues to be fun, if your idea of fun is slightly masochistic and involves getting to (or having to, depending on your opinion of it lol) explicitly control something as detailed as GPU memory allocation, reallocation, and freeing. I was about to start rendering a triangle, but then I found out that Vulkan gives you so much control that I might as well have fun implementing a [URL="https://accu.org/index.php/journals/389"]circular buffer[/URL] to use for "staging memory" right now, and texture streaming down the road (if it works out, or makes sense). But then I also realized that I'd probably get fragmentation, or at least inefficient use of the buffer's space, if I didn't take time to make sure that things were optimally arranged. And Vulkan expects everything to be aligned, so you have to make sure that (if shifting things around to defrag/optimize, for example) memory addresses all stay aligned.
This alignment changes based on the type of data in the buffer too, which might be because an image allocation could explicitly use texture memory on the GPU, whereas something like uniform buffers or mesh data can reside in one of the caches (if available) or in constant memory or something, I dunno. It is neat that you can easily get things like the max VRAM size, the default alignment, or the max number/size of allocations for each GPU in a system, and that these values can be easily fetched at runtime. I can't think of a way you could do that in OpenGL, and you never really had to think about managing memory usage like this in OpenGL, but I can see how it could be really quite useful for automatically configuring the memory layout/usage depending on the hardware running your program. So writing a circular buffer quickly meant also writing a defragmenter that I can run in between frames (I think; I'm still horribly new at trying to do fancy rendering stuff) so I don't end up with a shitload of pointlessly wasted space in my buffer. I'm also imagining that there is just no way I can ever outdo the driver when it comes to this stuff, either. I've dealt with allocating GPU resources before with CUDA, but that was literally as simple as using something like cudaMalloc, cudaMemset, cudaFree, etc., not something that makes me feel like the world's worst GPU driver programmer.
My head's all mush though, and I wish I had more to show for this effort, but I've been super busy at work and more than a little tense as we wait to negotiate a [I]really[/I] big programming contract. I'm not sure how much I can disclose, but it's pretty much in the vein of work I'd like to do for a career, and it'd be awesome to have on my portfolio. So I've been practicing other things in C++ that would be related to this potential project and haven't had time to work on Vulkan ;c
Downside of the aforementioned contract: the codebase terrifies our C++ guru, and he asked me if I was okay trying to read comments in a foreign language, so uh, yeah, this could be interesting[/QUOTE]
You can implement circular buffers on textures though, can't you? A 512x512 texture can hold 512 circular buffers of ~512 elements each, for example (slightly fewer than 512 because you need some space for the circular buffer's control variables).
So I found out Node.js now supports async/await out of the box and decided to try it, only to find out that my TypeScript Node project transpiles async/await by itself, so upgrading wasn't required.
Anyway, I failed to rewrite one of my endpoints to use async/await. I'm not sure if the feature itself is restrictive or my architecture doesn't fit, but I needed nested async/await calls.
[QUOTE=suXin;52131624]So I found out Node.js now supports async/await out of the box and decided to try it, only to find out that my TypeScript Node project transpiles async/await by itself, so upgrading wasn't required.
Anyway, I failed to rewrite one of my endpoints to use async/await. I'm not sure if the feature itself is restrictive or my architecture doesn't fit, but I needed nested async/await calls.[/QUOTE]
Async/await in JS is basically just a fancy wrapper around Promises: an async function returns a Promise, and you can await other Promises inside of it.
[QUOTE=suXin;52131624]So I found out Node.js now supports async/await out of the box and decided to try it, only to find out that my TypeScript Node project transpiles async/await by itself, so upgrading wasn't required.
Anyway, I failed to rewrite one of my endpoints to use async/await. I'm not sure if the feature itself is restrictive or my architecture doesn't fit, but I needed nested async/await calls.[/QUOTE]
You can probably tell the TypeScript compiler to target a newer ES version to make it emit async/await directly.
[QUOTE=Fourier;52131475]You can implement circular buffers on textures though, can't you? A 512x512 texture can hold 512 circular buffers of ~512 elements each, for example (slightly fewer than 512 because you need some space for the circular buffer's control variables).[/QUOTE]
Sorry, I'm not quite sure what you mean? Most of the memory allocation and such calls have the VKAPI flag, so I'm pretty sure they get called by the GPU itself and such, so I'm not sure how control variables would work. The idea is to allocate a large chunk of space (sized based on the hardware I'm running on) and use that to stream textures. One of the problems I'm realizing I'm going to have is texture compression, because the raw output from my compute jobs isn't going to be compressed, and I don't know if I'm going to be able to compress it in time. Texture streaming is down the road, though. Like, y'know, after I make a triangle appear.
For now, it's going to be useful as a "FIFO"-type queue to help stage resources. It's a common approach that I found several people using when I was browsing /r/vulkan, just that no one uses the term "circular buffer", even if they're effectively using one. It's not a very well-known structure, it would seem.
Also, if anyone's interested in trying out Vulkan, I highly recommend [URL="https://www.khronos.org/assets/uploads/developers/library/2016-vulkan-devu-seoul/4-Vulkan-HPP.pdf"]vulkan.hpp[/URL] as the [URL="https://github.com/KhronosGroup/Vulkan-Hpp"]main include file[/URL] you draw from (it's packaged with the API now, alongside vulkan.h). It's something NVIDIA did a fairly decent job with, as it adds:
- typesafe enums and flags
- support for standard library containers
- ability to use references instead of pointers in many locations
- optional exception support
- "CreateInfo" structs no longer have issues with uninitialized values
Everything is also encapsulated in a "vk" namespace, the vk prefix is removed from everything, and the enum and struct names are seriously cleaned up. It's not a high-level wrapper by any means, but it does make the API handle more like C++ and less like outdated C (which is odd, tbh, because Vulkan accommodates C++14 programming techniques just fine, but is conceptually laid out like old C). The only problem with vulkan.hpp is that it's not documented or demoed like regular ol' vulkan.h is, so the best bet is to just read through the header and figure out how things translate yourself. Unfortunately, this also makes it easy to miss features, like the ability to turn VkResults into more descriptive strings, or vulkan.hpp's ability to take care of grabbing debugging function pointers for you.
[video=youtube;g0FUQW6h-bE]https://www.youtube.com/watch?v=g0FUQW6h-bE[/video]
Added Percentage-Closer Soft Shadows to Bulletin.
So now, shadows can be SUPER SOFT or SUPER HARD.
And guess what, the implementation actually gave me an FPS boost compared to my old shitty custom shadow casting solution!
Some experiments I've been doing with planet rendering lately. I'm trying to simulate a gravitational shift so that objects can get caught in a planet's orbit.
[video=youtube;1lR99sIBwiU]http://www.youtube.com/watch?v=1lR99sIBwiU[/video]
[QUOTE=HiredK;52134752]Some experiments I've been doing with planet rendering lately. I'm trying to simulate a gravitational shift so that objects can get caught in a planet's orbit.
[video=youtube;1lR99sIBwiU]http://www.youtube.com/watch?v=1lR99sIBwiU[/video][/QUOTE]
rated winner because that's also a really good song.
How do you render the grass currently? I want to explore this technique when the time comes around [url]http://outerra.blogspot.com/2012/05/procedural-grass-rendering.html[/url]
[QUOTE=paindoc;52134799]rated winner because that's also a really good song.
How do you render the grass currently? I want to explore this technique when the time comes around [url]http://outerra.blogspot.com/2012/05/procedural-grass-rendering.html[/url][/QUOTE]
I actually tried to replicate what Brano is explaining in the comments of this article, the "use vertex id in shader to drive the generation process" comment. I'm currently using a virtual vbo (basically just a static index buffer) to send 160K (400 * 400) GL_POINTS to the GPU, then in a vertex shader I place them in a grid using gl_VertexID, like this:
[code]
float px = u_Offset.x + (gl_VertexID % 400) * u_Offset.z;
float py = u_Offset.y + (gl_VertexID / 400) * u_Offset.z;
float h = texTile(s_HeightNormal_Slot0, vec2(px, py), u_HeightNormal_TileCoords, u_HeightNormal_TileSize).x;
gl_Position = vec4(px, h, py, 0);[/code]
Then in a geometry shader, I add some random offsets to the grid and build 3 intersecting planes like this:
[code]
for(int i = 0; i < 3; i++)
{
vec3 vBaseDirRotated = (rotationMatrix(vec3(0, 1, 0), sin(u_Timer * 0.7f) * 0.1) * vec4(vBaseDir[i], 1.0)).xyz;
....
// Grass patch top left vertex
vec3 vTL = vGrassFieldPos - vBaseDirRotated * fGrassPatchSize * 0.5f + vWindDirection * fWindPower;
vTL.y += fGrassPatchHeight;
vec3 vTL_deformed;
gl_Position = getDeformedPos(vTL, vTL_deformed);
fs_TexCoord = vec2(fTCStartX, 1.0);
fs_Deformed = vTL_deformed;
EmitVertex();
// Grass patch bottom left vertex
vec3 vBL = vGrassFieldPos - vBaseDir[i] * fGrassPatchSize * 0.5f;
vec3 vBL_deformed;
gl_Position = getDeformedPos(vBL, vBL_deformed);
fs_TexCoord = vec2(fTCStartX, 0.0);
fs_Deformed = vBL_deformed;
EmitVertex();
// Grass patch top right vertex
vec3 vTR = vGrassFieldPos + vBaseDirRotated * fGrassPatchSize * 0.5f + vWindDirection * fWindPower;
vTR.y += fGrassPatchHeight;
vec3 vTR_deformed;
gl_Position = getDeformedPos(vTR, vTR_deformed);
fs_TexCoord = vec2(fTCEndX, 1.0);
fs_Deformed = vTR_deformed;
EmitVertex();
// Grass patch bottom right vertex
vec3 vBR = vGrassFieldPos + vBaseDir[i] * fGrassPatchSize * 0.5f;
vec3 vBR_deformed;
gl_Position = getDeformedPos(vBR, vBR_deformed);
fs_TexCoord = vec2(fTCEndX, 0.0);
fs_Deformed = vBR_deformed;
EmitVertex();
}
[/code]
And finally, I use a threshold value to discard pixels using a grass atlas texture in the fragment shader. I'm pretty sure rendering the actual blade geometry like Outerra does is a better approach, but I haven't found a good way to do it yet. The performance is really good; I hardly notice a dip in the FPS when it's toggled on/off. I actually managed to get some shadows working in an older build; here's what it looks like:
[img]http://i64.tinypic.com/24130xu.png[/img]
[QUOTE=HiredK;52134968]I actually tried to replicate what Brano is explaining in the comments of this article, the "use vertex id in shader to drive the generation process" comment. I'm currently using a virtual vbo (basically just a static index buffer) to send 160K (400 * 400) GL_POINTS to the GPU, then in a vertex shader I place them in a grid using gl_VertexID, like this:
[code]
float px = u_Offset.x + (gl_VertexID % 400) * u_Offset.z;
float py = u_Offset.y + (gl_VertexID / 400) * u_Offset.z;
float h = texTile(s_HeightNormal_Slot0, vec2(px, py), u_HeightNormal_TileCoords, u_HeightNormal_TileSize).x;
gl_Position = vec4(px, h, py, 0);[/code]
Then in a geometry shader, I add some random offsets to the grid and build 3 intersecting planes like this:
[code]
for(int i = 0; i < 3; i++)
{
vec3 vBaseDirRotated = (rotationMatrix(vec3(0, 1, 0), sin(u_Timer * 0.7f) * 0.1) * vec4(vBaseDir[i], 1.0)).xyz;
....
// Grass patch top left vertex
vec3 vTL = vGrassFieldPos - vBaseDirRotated * fGrassPatchSize * 0.5f + vWindDirection * fWindPower;
vTL.y += fGrassPatchHeight;
vec3 vTL_deformed;
gl_Position = getDeformedPos(vTL, vTL_deformed);
fs_TexCoord = vec2(fTCStartX, 1.0);
fs_Deformed = vTL_deformed;
EmitVertex();
// Grass patch bottom left vertex
vec3 vBL = vGrassFieldPos - vBaseDir[i] * fGrassPatchSize * 0.5f;
vec3 vBL_deformed;
gl_Position = getDeformedPos(vBL, vBL_deformed);
fs_TexCoord = vec2(fTCStartX, 0.0);
fs_Deformed = vBL_deformed;
EmitVertex();
// Grass patch top right vertex
vec3 vTR = vGrassFieldPos + vBaseDirRotated * fGrassPatchSize * 0.5f + vWindDirection * fWindPower;
vTR.y += fGrassPatchHeight;
vec3 vTR_deformed;
gl_Position = getDeformedPos(vTR, vTR_deformed);
fs_TexCoord = vec2(fTCEndX, 1.0);
fs_Deformed = vTR_deformed;
EmitVertex();
// Grass patch bottom right vertex
vec3 vBR = vGrassFieldPos + vBaseDir[i] * fGrassPatchSize * 0.5f;
vec3 vBR_deformed;
gl_Position = getDeformedPos(vBR, vBR_deformed);
fs_TexCoord = vec2(fTCEndX, 0.0);
fs_Deformed = vBR_deformed;
EmitVertex();
}
[/code]
And finally, I use a threshold value to discard pixels using a grass atlas texture in the fragment shader. I'm pretty sure rendering the actual blade geometry like Outerra does is a better approach, but I haven't found a good way to do it yet. The performance is really good; I hardly notice a dip in the FPS when it's toggled on/off. I actually managed to get some shadows working in an older build; here's what it looks like:
[img]http://i64.tinypic.com/24130xu.png[/img][/QUOTE]
I was wondering if you did something like that, because it looked quite nice and had an impressive amount of detail. I can't say that I've found a lot of resources on grass rendering and generation, so I'm going to have to add this post to my list of resources. I imagine I'll be trying a number of techniques too.
After getting window creation working yesterday, I got logical device and command queue creation working today, and sent some basic transfer commands to/from the GPU. Next up is buffer objects and basic shader functionality, at which point I should be able to get a couple of rendering tests working.
[editline]22nd April 2017[/editline]
Oh god, almost forgot that I still need to set up the swap chain sometime soon too, since double-buffered rendering is close to essential
been working on a rigid body skateboard test thingie
didn't expect it to go rogue
[media]https://www.youtube.com/watch?v=AeXfsOQtnws&feature=youtu.be[/media]
I uh
I just got a really awesome job offer???
I am super exite
[QUOTE=Ac!dL3ak;52135793]I uh
I just got a really awesome job offer???
I am super exite[/QUOTE]
Well don't leave us hangin, what do they do?
I wrote a prescription drug name generator last night because I was bored.
[code][rs:20;\n]
{
[case:first]
{
is{o|a}|
l{e|i}{v|t|m}{i|o|a}|
ox{y|i|o}|
n{e|i}x|
ins{u|o}|
{z|s}{e|i}t{i|o|y|a}|
ne{o|a}|
ep{o|i|a}|
{t|p}r{u|o|a}|
di{o|a}|
{v|f}{i|e}{a|o}|
r{i|a}{t|v|d|n|m}|
sy{n|m}|
pr{e|i}|
d{i|e}x|
se{n|m{|i}}
av{o|i|o{l|n|m|r}}
b{e|a|i}n
}
{c|t|x|th|z}
{
or|ia|il|
{r|l}
{
a|o|
{o|i|a}{x|x{y|i|a}{l|m|lt}}
}|
i{u|a}m|tin|fi{b|m|l{|um}}
}
{|{fin|la|pra|min|ya|se|ide|gen|ci{a|um}}}
[reg]
}[/code]
Output:
[quote]Setocil®
Isoztinpra®
Isocra®
Synthorse®
Sitythlo®
Limoxtin®
Ramziaide®
Epathiumcium®
Litacfim®
Dixciumfin®
Neacil®
Insuzia®
Diotfilumya®
Epotilya®
Pritfilum®
Dextfim®
Epaclo®
Oxyxlaxilt®
Senavobinztinmin®
Neaziamcium®[/quote]