[QUOTE=Legend286;52189887]Okay, so I'm using Java to create my own little rendering thingy with LWJGL as a foundation. I followed some tutorials and then kinda went off about it by myself...
The only problem is that I seem to have broken something related to sending variables to the shaders (or it's simply a fuck-up somewhere else in the code that I've not found), and now either meshes don't render, or they render as white with no matrix transforms and the shader code seemingly does nothing. A lot of my code is unfinished, but it *should* work because I didn't really do anything to break it.
Anyone good with GL and Java fancy helping me? Code is available here: [url]https://github.com/Legend286/DeferredRenderer[/url][/QUOTE]
Okay, so I don't use Java and I'm not sure how you'd hook it in (it should just work with an executable that you build, tbh), but try [URL="https://renderdoc.org/"]RenderDoc[/URL] if you haven't already. It's been indispensable to me in fixing bugs like this, because you can capture a frame and then check the values of your uniform variables. That makes it really easy to see if something is set to zero, or if one of your matrices glitched and has all its entries set to NaN or something. You can also check the vertex input (to make sure mesh data is being sent correctly) and the vertex output (to make sure it's going through the vertex shader properly).
I'd look more but I'm at work and having a bit of a hard time navigating your repo, but I'd give RenderDoc a shot.
Hmm, I've heard RenderDoc is good for just about anything, so I will give it a go. Maybe there's something I messed up somewhere that I overlooked but there shouldn't be any invalid stuff being sent to the shader cos that code hasn't changed (besides me making some inherited classes from the base shader handler, but nothing should have been broken :S)
Okay, so apparently RenderDoc can't even detect the OpenGL API. The renderer is drawing this:
[t]http://i.imgur.com/I1AXt3v.png[/t]
I would be eternally grateful if someone finds something leading to me resolving this, because so far I've really enjoyed learning what I have with Java and I'm kinda looking forward to getting down to implementing some interesting things, starting off with physically based deferred shading.
[QUOTE=Legend286;52190238]Hmm, I've heard RenderDoc is good for just about anything, so I will give it a go. Maybe there's something I messed up somewhere that I overlooked but there shouldn't be any invalid stuff being sent to the shader cos that code hasn't changed (besides me making some inherited classes from the base shader handler, but nothing should have been broken :S)[/QUOTE]
I'd be careful with object handles and OpenGL though; for a long time I was having issues with shaders because I forgot to properly implement the rule of three/five for shader objects. I'm pretty sure Java itself takes care of many aspects of this, so it's mainly a C++ problem, but I'd use RenderDoc to see if it catches any API errors like invalid shader object handles or something.
[QUOTE=F.X Clampazzo;52187764]stuff[/QUOTE]
Typically the lower level memory management stuff doesn't make learning to program easier for most people.
I solved my previous problem; 'X' and 'Y' are the pixel's tangent and bitangent ('ax' and 'ay' are roughness on both axes).
Isn't it possible to derive bitangent from the normal and tangent since they are orthogonal to each other? I'd rather not have to store another 12 bits of data per pixel storing the bitangent if I can just recalculate it from the normal and tangent.
When I try:
[code]
// Raw texture information
vec4 View_Normal_Texture = texture(ViewNormalMap, TexCoord);
vec4 View_Tangent_Texture = texture(ViewTangentMap, TexCoord);
// Transform from view space into world space
vec3 World_Normal = normalize( ( Inverse_View_Matrix * vec4( View_Normal_Texture.xyz, 0.0f ) ).xyz );
vec3 World_Tangent = normalize( ( Inverse_View_Matrix * vec4( View_Tangent_Texture.xyz, 0.0f ) ).xyz );
vec3 World_Bitangent = normalize( cross( World_Normal, World_Tangent ) );
[/code]
When comparing the actual bitangent and the derived bitangent, I get skewed results.
'correct' bitangent vs 'derived' bitangent
[t]http://i.imgur.com/mo4q0bT.png[/t] [t]http://i.imgur.com/cZ99dBS.png[/t]
[editline]5th May 2017[/editline]
Actually, I think I know why this is happening now.
The tangent and bitangent were originally calculated relative to the vertex normal, but in my graphics pipeline the normal that is stored as a texture is modulated by the bump-map.
Although the tangent is unmodified, the normal may be modified, giving a slightly different bitangent.
Welp, I guess I have no way to circumvent storing the bitangent
[QUOTE=Karmah;52191346]I solved my previous problem; 'X' and 'Y' are the pixel's tangent and bitangent ('ax' and 'ay' are roughness on both axes).
Isn't it possible to derive bitangent from the normal and tangent since they are orthogonal to each other? I'd rather not have to store another 12 bits of data per pixel storing the bitangent if I can just recalculate it from the normal and tangent.
When I try:
[code]
// Raw texture information
vec4 View_Normal_Texture = texture(ViewNormalMap, TexCoord);
vec4 View_Tangent_Texture = texture(ViewTangentMap, TexCoord);
// Transform from view space into world space
vec3 World_Normal = normalize( ( Inverse_View_Matrix * vec4( View_Normal_Texture.xyz, 0.0f ) ).xyz );
vec3 World_Tangent = normalize( ( Inverse_View_Matrix * vec4( View_Tangent_Texture.xyz, 0.0f ) ).xyz );
vec3 World_Bitangent = normalize( cross( World_Normal, World_Tangent ) );
[/code]
When comparing the actual bitangent and the derived bitangent, I get skewed results.
'correct' bitangent vs 'derived' bitangent
[t]http://i.imgur.com/mo4q0bT.png[/t] [t]http://i.imgur.com/cZ99dBS.png[/t]
[editline]5th May 2017[/editline]
Actually, I think I know why this is happening now.
The tangent and bitangent were originally calculated relative to the vertex normal, but in my graphics pipeline the normal that is stored as a texture is modulated by the bump-map.
Although the tangent is unmodified, the normal may be modified, giving a slightly different bitangent.
Welp, I guess I have no way to circumvent storing the bitangent[/QUOTE]
Why are you storing your normals as a view space texture only without any geometry data? That won't work for lighting.
[QUOTE=Legend286;52193298]Why are you storing your normals as a view space texture only without any geometry data? That won't work for lighting.[/QUOTE]
I do though? I do all the typical g-buffer stuff for deferred shading; it's just that, iirc, you typically don't have to store view-space tangents and bitangents as textures because traditional lighting models are isotropic. Of course I use them to build a TBN matrix and then apply bump maps. However, now I'll store all three in view-space textures.
Hey, I'm programming stuff with tkinter/Python. I'm doing a thing where I need to pull items from 3 different RSS feeds. The user has the option of selecting 0-5 items from each feed via a spinbox. I have that all working; it pulls the items as needed. But another requirement is that it shows the progress of the items being pulled. This is my code:
[code]
loading_label['text'] = 'Loading Car Parts'
if int(carparts_spin.get()) > 0:
    carparts = RSS_Scraper(car_parts_rss, int(carparts_spin.get()))

loading_label['text'] = 'Loading Safes'
if int(safes_spin.get()) > 0:
    safes = RSS_Scraper(safes_rss, int(safes_spin.get()))

loading_label['text'] = 'Loading Pillows'
if int(pillows_spin.get()) > 0:
    pillows = RSS_Scraper(pillows_rss, int(pillows_spin.get()))

loading_label['text'] = 'Done'[/code]
So what should happen is that the label says it's loading something, it goes and pulls the information, and continues on. What actually happens is that you press the button to trigger all of this, the window locks up, and then the label changes to 'Done'. In the background it's loading the RSS stuff okay; it's just not changing the label. It should change the label before it even thinks about starting to pull information, but it doesn't, and I don't get why.
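The label never repaints because Tk only redraws when the event loop runs, and your button handler blocks the loop until everything is finished. The quick fix is calling `loading_label.update_idletasks()` right after each text change; the more robust fix is doing the scraping on a worker thread and feeding progress back through a queue. Here's a minimal, tkinter-free sketch of that worker/queue pattern (the `scrape` function and feed names are stand-ins for your `RSS_Scraper` calls, not your real code):

```python
import queue
import threading

def scrape(feed, count):
    # placeholder for the real RSS_Scraper(feed, count) call
    return [f"{feed}-item-{i}" for i in range(count)]

def worker(jobs, progress):
    """Run each scrape job and report progress; in the real app a
    root.after() poll would read this queue and update loading_label."""
    results = {}
    for label, feed, count in jobs:
        progress.put(f"Loading {label}")   # what loading_label['text'] would show
        if count > 0:
            results[label] = scrape(feed, count)
    progress.put("Done")
    return results

progress = queue.Queue()
jobs = [("Car Parts", "car_parts_rss", 2),
        ("Safes", "safes_rss", 0),
        ("Pillows", "pillows_rss", 1)]

t = threading.Thread(target=worker, args=(jobs, progress))
t.start()
t.join()

messages = []
while not progress.empty():
    messages.append(progress.get())
print(messages[0], messages[-1])  # first and last progress updates
```

The key point is that the main thread (and therefore Tk's event loop) never blocks; it just polls the queue periodically and updates the label.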
Total C++ noob here. I've only been studying it for the past 5 months, though I've done some PHP with Joomla for a web dev job I had, so I'm not totally unfamiliar with OOP. I've been playing about with SFML, and I'm trying to write an algorithm to render a square of X amount of sprites, but I'm not too sure how to proceed.
[code]
#include <SFML/Graphics.hpp>
#include <cmath>
#include <vector>

int main(int argc, char** argv)
{
    const int width = 800;
    const int height = 600;
    sf::Vector2f size(32, 32);
    sf::RenderWindow window(sf::VideoMode(width, height), "Sprite Test");
    std::vector<sf::Sprite> grass(64);
    sf::Texture t_grass;
    if (!t_grass.loadFromFile("textures/grass.bmp", sf::IntRect(0, 0, size.x, size.y)))
        return 1;
    /*
    sf::Sprite s_grass;
    s_grass.setOrigin(0, 0);
    s_grass.setTexture(t_grass);
    */
    // Counter for loop to increase height
    int counter = 1;
    for (std::vector<sf::Sprite>::iterator it = grass.begin(); it != grass.end(); it++)
    {
        // Position of object in vector
        auto pos = it - grass.begin();
        it->setOrigin(0, 0);
        it->setTexture(t_grass);
        if (pos > (counter * ((int)sqrt(grass.size()))))
        {
            counter++;
        }
        it->setPosition(pos * size.x, height - (counter * size.y));
    }
    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();
        }
        window.clear();
        for (std::vector<sf::Sprite>::iterator it = grass.begin(); it != grass.end(); it++)
        {
            window.draw(*it);
        }
        window.display();
    }
    return 0;
}
[/code]
I'll eventually get around to putting this into its own header/source files, I'm just prototyping to get used to SFML at the minute.
This is what it does so far, which is not the desired intent, though expected with the code I've written:
[t]https://puu.sh/vKx6G/3752f59e4f.png[/t]
Questions I have are:
• What is the recommended way to render X and Y sprites? I have them all in one vector for now, but would it be better to have separate containers for separate rows, or separate columns?
• Is a vector an appropriate container for storing these types of objects?
[QUOTE=CMB Unit 01;52205659]Total C++ noob here. Only been studying it for the past 5 months, though I've done some PHP with Joomla for a web dev job I had, so I'm not totally unfamiliar with OOP. I've been playing about with SFML, and I'm trying to write an algorithm to render a square of X amount of sprites, however I'm not too sure on how to proceed.
(((code)))
I'll eventually get around to putting this into its own header/source files, I'm just prototyping to get used to SFML at the minute.
This is what it does so far, which is not the desired intent, though expected with the code I've written:
[t]https://puu.sh/vKx6G/3752f59e4f.png[/t]
Questions I have are:
• What is the recommended way to render X and Y sprites? I have them all in one vector for now, but would it be better to have separate containers for separate rows, or separate columns?
• Is a vector an appropriate container for storing these types of objects?[/QUOTE]
If I understand correctly, it looks like in your for loop you've got:
[code]it->setPosition(pos * size.x, height - (counter * size.y));[/code]
pos * size.x is the issue. Since size.x is basically constant, and pos increases without bound, you need a way for it to go back to the beginning of the row every time the counter increases by one.
It's a bit difficult to fit it into your code, but a good general way to do this is (in pseudocode):
[code]
pos_x = offset.x + ((counter % width) * size.x)
pos_y = offset.y + ((int)(counter / height) * size.y)
[/code]
where size.x and size.y are the size of your blocks,
width and height are the total number of blocks per row/column,
and offset.x and offset.y are the offset (in pixels) where you want the blocks to start.
% is the modulus operator, it gets the remainder between two numbers, e.g:
1 % 2 = 1
4 % 2 = 0
7 % 3 = 1 (because 7/3 = 2 and 1/3)
What this effectively does is start the pos_x over every time the counter reaches some number (width).
so the table for counter % width where width = 4 is:
[code]
*----*-------------------------------*
|x |0 |1 |2 |3 |4 |5 |6 |7 |
*----*-------------------------------*
|f(x)|0 |1 |2 |3 |0 |1 |2 |3 |
*----*-------------------------------*
[/code]
so the X value starts over every time counter % width = 0
---
Now for y, what we do is take the floor of the y-value. By doing (int)(counter / height) we know that if counter is less than height, then counter / height will be between 0 and 1. When we take the floor of that (round down), it will be 0 until counter reaches height, and 1 when it does.
For values of counter greater than height, the trend continues: if height is 20 and counter is 25, the value is 25/20 = 1.25, which rounds down to 1.
If height is 20 and the counter is 45, the value is 2.25; rounded down this value is 2.
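To make the index arithmetic concrete, here's the same row-major placement worked through in plain Python (the grid dimensions and block size are made-up numbers, not taken from the SFML code; since the poster's grass grid is square, the blocks-per-row count serves as the divisor for both axes):

```python
BLOCK = 32        # size of each block in pixels (size.x == size.y here)
COLS = 4          # blocks per row ("width" in the pseudocode above)

def block_position(i, offset_x=0, offset_y=0):
    """Row-major grid placement: x wraps every COLS blocks, y steps
    down one row each time i passes a multiple of COLS."""
    x = offset_x + (i % COLS) * BLOCK
    y = offset_y + (i // COLS) * BLOCK
    return x, y

# first row: indices 0..3 share y=0 and advance in x
print([block_position(i) for i in range(4)])   # [(0, 0), (32, 0), (64, 0), (96, 0)]
# index 4 wraps back to x=0 on the second row
print(block_position(4))                       # (0, 32)
```

This also answers the "separate containers per row?" question: one flat vector is fine, because the row and column fall straight out of the index.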
[QUOTE=Karmah;52191346]I solved my previous problem; 'X' and 'Y' are the pixel's tangent and bitangent ('ax' and 'ay' are roughness on both axes).
Isn't it possible to derive bitangent from the normal and tangent since they are orthogonal to each other? I'd rather not have to store another 12 bits of data per pixel storing the bitangent if I can just recalculate it from the normal and tangent.
When I try:
[code]
// Raw texture information
vec4 View_Normal_Texture = texture(ViewNormalMap, TexCoord);
vec4 View_Tangent_Texture = texture(ViewTangentMap, TexCoord);
// Transform from view space into world space
vec3 World_Normal = normalize( ( Inverse_View_Matrix * vec4( View_Normal_Texture.xyz, 0.0f ) ).xyz );
vec3 World_Tangent = normalize( ( Inverse_View_Matrix * vec4( View_Tangent_Texture.xyz, 0.0f ) ).xyz );
vec3 World_Bitangent = normalize( cross( World_Normal, World_Tangent ) );
[/code]
When comparing the actual bitangent and the derived bitangent, I get skewed results.
'correct' bitangent vs 'derived' bitangent
[t]http://i.imgur.com/mo4q0bT.png[/t] [t]http://i.imgur.com/cZ99dBS.png[/t]
[editline]5th May 2017[/editline]
Actually, I think I know why this is happening now.
The tangent and bitangent were originally calculated relative to the vertex normal, but in my graphics pipeline the normal that is stored as a texture is modulated by the bump-map.
Although the tangent is unmodified, the normal may be modified, giving a slightly different bitangent.
Welp, I guess I have no way to circumvent storing the bitangent[/QUOTE]
Late response to this, but you should be able to calculate one component given the other two. I had issues doing this in maths at my last uni; TNB vectors are a kind of hard concept at first. Make sure your tangent and normal vectors are normalized (made into unit vectors), then the bitangent is just the cross product of your unit normal and unit tangent. Forgetting to normalize things can break the calculations, iirc. It's been [I]ages[/I] since I've had to do them though.
If your tangent is unmodified, then the unit normal can be recovered from the unit tangent vector: take the derivative of the unit tangent and divide it by its own magnitude:
[t]http://tutorial.math.lamar.edu/Classes/CalcIII/TangentNormalVectors_files/eq0024M.gif[/t]
Thus leading to the bitangent as:
[t]http://tutorial.math.lamar.edu/Classes/CalcIII/TangentNormalVectors_files/eq0042M.gif[/t]
If you want to just calculate the normal, [URL="https://en.wikipedia.org/wiki/Normal_%28geometry%29#Calculating_a_surface_normal"]that's pretty easily done in the shader too[/URL]
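For what it's worth, one standard workaround for the skew Karmah saw (not something from his code) is Gram-Schmidt re-orthogonalization: project the stored tangent onto the plane of the bumped normal before taking the cross product, so the derived bitangent matches the perturbed normal. A plain-Python sketch of the vector math (the vectors are made-up example values):

```python
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)

def norm(a):
    m = sum(x * x for x in a) ** 0.5
    return tuple(x / m for x in a)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# bump-mapped normal no longer matches the vertex normal the tangent was built against
n = norm((0.1, 0.2, 1.0))   # perturbed normal (made-up values)
t = (1.0, 0.0, 0.0)         # original tangent, unit length

# Gram-Schmidt: remove the part of t that leaks into n, then renormalize
t_ortho = norm(sub(t, scale(n, dot(t, n))))
b = cross(n, t_ortho)       # bitangent consistent with the bumped normal

print(dot(n, t_ortho))      # effectively zero: the TBN frame is orthonormal again
```

The same three lines translate directly to GLSL (`normalize(t - n * dot(t, n))`, then `cross`), so it runs per pixel without storing a third texture.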
Given the lateness of this post, I'm not sure if you found a solution or not, but I had actually posted a reply to this that was welped by my work DNS server several days ago. I just didn't re-post it because I had to go to a meeting and forgot from there. I'd imagine, though, that the calculation approach is optimal, since the GPU can chew through those in a shader, and minimizing the amount of data you have to transfer to the GPU is always ideal.
Thanks. I've decided on removing anisotropic lighting altogether, for several reasons. The biggest is all the extra per-pixel, per-light calculations for the BRDF; secondly, having varying roughness per axis actually poses a problem for my reflection algorithms. Env maps are convolved and stored in mip chains, and I would use the roughness of a texel to look up which mip level of the environment to reflect from. I could average the two axes, but then highly anisotropic surfaces would appear too glossy in one direction and too rough in the other.
Anyone using .NET Core here? I'm looking for an HTTP(S) library to retrieve page sources and download files, with support for multithreading (not fucking everything up like HttpClient does if you have many instances would be enough for me), cookies, and header modification. The built-in HttpClient is good, but I can't edit headers in one thread without affecting other threads (and you're not supposed to make more than one HttpClient instance in an app).
I'm making an editor app and looking into Polymer for the GUI/composition. (I'm currently using Vue.js, but there's an unpleasant tendency of that library being 'smart'. I had to implement a few workarounds against imprecise behaviour so far.)
I expect to write a bunch of mostly independent components or at least modules for the program (base, editor, runtime...) that aren't always necessary. With .NET development I usually separate that out into distinct assemblies, but how do I concurrently develop a bunch of JS/TypeScript modules?
Additionally, I know there's some way to dynamically update code with Webpack without reloading the page or discarding backing state, which should work pretty nicely with Web Components and Incremental DOM (assuming I can invalidate the state, but I think there's a mixin to do that globally). However, I can't seem to find a good getting-started page on how to set this up properly.
Does anyone know if/how it's possible to share data between kernels in a compute shader? I'd like to basically fill an array with data in one kernel and have it be read by another kernel afterwards. I feel like this ought to be possible but I don't know any way other than to send the data from kernel #1 to the CPU and then upload it to kernel #2 before dispatch.
[QUOTE=srobins;52217353]Does anyone know if/how it's possible to share data between kernels in a compute shader? I'd like to basically fill an array with data in one kernel and have it be read by another kernel afterwards. I feel like this ought to be possible but I don't know any way other than to send the data from kernel #1 to the CPU and then upload it to kernel #2 before dispatch.[/QUOTE]
Are you talking about CUDA kernels? if so I can help more in full when my work computer finishes updating, but if it's OpenCL I can't help much
[QUOTE=paindoc;52219127]Are you talking about CUDA kernels? if so I can help more in full when my work computer finishes updating, but if it's OpenCL I can't help much[/QUOTE]
It's OpenCL unfortunately. Thanks for the offer though!
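For what it's worth, in OpenCL you shouldn't need the CPU round-trip at all: a `cl_mem` buffer lives in device memory and persists between kernel launches, so you can hand the same buffer to both kernels and enqueue them back-to-back on the same in-order command queue, and the second kernel will see the first one's writes. Roughly, in host-side pseudocode (kernel names illustrative, most arguments elided):

```
buf = clCreateBuffer(context, CL_MEM_READ_WRITE, size, NULL, &err)

clSetKernelArg(fill_kernel, 0, sizeof(cl_mem), &buf)
clSetKernelArg(consume_kernel, 0, sizeof(cl_mem), &buf)

// same in-order queue: consume_kernel starts after fill_kernel finishes
clEnqueueNDRangeKernel(queue, fill_kernel, ...)
clEnqueueNDRangeKernel(queue, consume_kernel, ...)
```

Only read the buffer back with `clEnqueueReadBuffer` once the final kernel is done; everything in between stays on the GPU.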
Is it possible to pass an entire HashMap as a parameter in Java? I have a method that uses a foreach loop to print out the keys and values in the map. It worked when the whole process of collecting and printing the data was in one method, but as soon as I migrated the printing to a separate method, the foreach broke.
[code]
private static void displayResults(HashMap students){
    //using a foreach loop from the Maps interface, print the student's ID and a
    //list of the courses they are enrolled in.
    for(Map.Entry map : students.entrySet()){
        System.out.println(map.getKey()+" "+map.getValue());
    }
}
[/code]
I get the error:
[code]Error:(55, 44) java: incompatible types: java.lang.Object cannot be converted to java.util.Map.Entry[/code]
[B]Fixed it:[/B]
I had to use a feature that's new to Java 8 to get it to work. It looks like some weird alchemy honestly but at least it works, and it's fewer lines to read. Here's what works:
[code]
private static void displayResults(HashMap students){
    //using Map.forEach (new in Java 8) with a lambda to print the entire map. *This only works in Java 8*
    students.forEach((key,value) -> System.out.println(key + ": " + value));
}
[/code]
Long shot, anyone know of a flowgraph/node graph editor control, preferably for .NET with SkiaSharp, but I'd accept anything .NET if it had some way I could plug another renderer in. Really trying to avoid embedding a WebView.
[QUOTE=Adelle Zhu;52224960]Is it possible to pass an entire HashMap as a parameter in Java? I have a method that uses a foreach loop to print out the keys and values in the map. It worked when the whole process of collecting and printing the data was in one method, but as soon as I migrated the printing to a separate method, the foreach broke.
...
[/QUOTE]
The compiler was complaining that you didn't specify the types of the keys and values contained by the HashMap. You can fix the error like so if you don't like lambdas (change the key and value types to the correct ones for your case):
[code]
private static void displayResults(HashMap<Long, Student> students) {
    for (Map.Entry<Long, Student> map : students.entrySet()) {
        System.out.println(map.getKey() + " " + map.getValue());
    }
}
[/code]
[QUOTE=Adelle Zhu;52224960]Is it possible to pass an entire HashMap as a parameter in Java? I have a method that uses a foreach loop to print out the keys and values in the map. It worked when the whole process of collecting and printing the data was in one method, but as soon as I migrated the printing to a separate method, the foreach broke.
[code]
private static void displayResults(HashMap students){
    //using a foreach loop from the Maps interface, print the student's ID and a
    //list of the courses they are enrolled in.
    for(Map.Entry map : students.entrySet()){
        System.out.println(map.getKey()+" "+map.getValue());
    }
}
[/code]
I get the error:
[code]Error:(55, 44) java: incompatible types: java.lang.Object cannot be converted to java.util.Map.Entry[/code]
[B]Fixed it:[/B]
I had to use a feature that's new to Java 8 to get it to work. It looks like some weird alchemy honestly but at least it works, and it's fewer lines to read. Here's what works:
[code]
private static void displayResults(HashMap students){
    //using Map.forEach (new in Java 8) with a lambda to print the entire map. *This only works in Java 8*
    students.forEach((key,value) -> System.out.println(key + ": " + value));
}
[/code][/QUOTE]
The Java 8 code is way more intuitive anyway. I'm all for modern languages having straightforward method names; it's why I love Swift 3 over Swift 2 (despite the lack of documentation for a lot of it). I haven't used Java in a long time, but the first snippet looks like it could have been fixed too.
I am reading a book about computer architecture and I am stuck on one of the ideas.
This is from section 1.6 of the book:
[QUOTE]Designers refer to the length of a clock period both as the time for a complete clock cycle (e.g., 250 picoseconds, or 250ps) and as the clock rate (e.g., 4 gigahertz, or 4GHz), which is the inverse of the clock period. [/QUOTE]
So, the book doesn't explain why the clock rate is the inverse of the clock period. Could someone please explain this to me?
Thank you! :smile:
NOTE:
The book I am reading is:
Computer Organization and Design ARM Edition: The Hardware Software Interface
[QUOTE=Reflex F.N.;52228050]I am reading a book about computer architecture and I am stuck on one of the ideas.
This is from section 1.6 of the book:
So, the book doesn't explain why the clock rate is the inverse of the clock period. Could someone please explain this to me?
Thank you! :smile:
NOTE:
The book I am reading is:
Computer Organization and Design ARM Edition: The Hardware Software Interface[/QUOTE]
I just finished a semester using the MIPS version. That publisher is awful.
[QUOTE=Reflex F.N.;52228050]I am reading a book about computer architecture and I am stuck on one of the ideas.
This is from section 1.6 of the book:
So, the book doesn't explain why the clock rate is the inverse of the clock period. Could someone please explain this to me?
Thank you! :smile:
NOTE:
The book I am reading is:
Computer Organization and Design ARM Edition: The Hardware Software Interface[/QUOTE]
1 second / 250 picoseconds = 4 billion hertz = 4 GHz.
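The same arithmetic in a couple of lines of Python, since rate and period really are just reciprocals of each other:

```python
period_s = 250e-12           # clock period: 250 picoseconds, in seconds
rate_hz = 1.0 / period_s     # clock rate is the reciprocal of the period
print(rate_hz / 1e9)         # ~4.0, i.e. 4 GHz
```

Going the other way works too: 1 / 4 GHz = 250 ps.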
[QUOTE=helifreak;52228111]1 second / 250 picoseconds = 4 billion hertz = 4 GHz.[/QUOTE]Oh, now I get it!
Thanks a lot for your help! :smile:
[QUOTE=Adelle Zhu;52228108]I just finished a semester using the MIPS version. That publisher is awful.[/QUOTE]Why?
Does this mean that I am going to have a lot of occurrences like this where I don't completely understand something?
The book wasn't cheap! :(
[QUOTE=Reflex F.N.;52228165]Oh, now I get it!
Thanks a lot for your help! :smile:
Why?
Does this mean that I am going to have a lot of occurrences like this where I don't completely understand something?
The book wasn't cheap! :([/QUOTE]
Possibly. Everyone's different. Also my book may have had different authors but it sounds like it's from the same series. The homework exercises are impossible to relate to the material.
[QUOTE=Adelle Zhu;52228237]Possibly. Everyone's different. Also my book may have had different authors but it sounds like it's from the same series. The homework exercises are impossible to relate to the material.[/QUOTE]The authors are David Patterson and John L. Hennessy.
There is both a MIPS edition and an ARM edition of this book.
This is the fifth edition of the book and it is the first edition to get an ARM version.
According to the book's preface, there isn't much of a difference between the ARM edition and the MIPS edition, except for the assembly instructions used and things that are specific to ARM; i.e. the explanation of concepts and other things that are not exclusive to either ARM or MIPS are the same in both editions of the book.
By the way, what do you mean by the homework exercises? Do you mean the exercises in the book, or the ones that your instructor gave you?
[QUOTE=Reflex F.N.;52228322]The authors are David Patterson and John L. Hennessy.
There is both a MIPS edition and an ARM edition of this book.
This is the fifth edition of the book and it is the first edition to get an ARM version.
According to the book's preface, there isn't much of a difference between the ARM edition and the MIPS edition, except for the assembly instructions used and things that are specific to ARM; i.e. the explanation of concepts and other things that are not exclusive to either ARM or MIPS are the same in both editions of the book.
By the way, what do you mean by the homework exercises? Do you mean the exercises in the book, or the ones that your instructor gave you?[/QUOTE]
The exercises at the end of each chapter were assigned as homework. It shows you what paragraph to read to understand the question but the information is usually irrelevant.
Looking for someone who knows Vulkan well enough to continue a project/software product that one of my coders left off on. Add me on steam "Jangalomph" or discord jangafx#1318 to talk to me about joining the team!
Not sure if "jobs" can be posted so not gonna make a job description. The software you'd be helping with is [URL="https://jangafx.com/vectoraygen-sales/"]VectorayGen[/URL].
C++ & Vulkan needed. Software is already generating sales and payments would be royalties or equity depending on how active you want to be in le company. I'm looking for two or more programmers.
Here's something the software helped me create:
[video]http://youtube.com/watch?v=72abXopPJ-g[/video]
[QUOTE=jangalomph;52229749]Looking for someone who knows Vulkan well enough to continue a project/software product that one of my coders left off on. Add me on steam "Jangalomph" or discord jangafx#1318 to talk to me about joining the team!
Not sure if "jobs" can be posted so not gonna make a job description. The software you'd be helping with is [URL="https://jangafx.com/vectoraygen-sales/"]VectorayGen[/URL].
C++ & Vulkan needed. Software is already generating sales and payments would be royalties or equity depending on how active you want to be in le company. I'm looking for two or more programmers.
Here's something the software helped me create:
[video]http://youtube.com/watch?v=72abXopPJ-g[/video][/QUOTE]
Goddamn, that's neat as hell. I was wondering a bit about different ways you could do this stuff, and using something like compute shaders was top of the list. If I excelled at something beyond breaking vulkan, this would be a fun position :v: