• What do you need help with? v9
Hoping someone with experience in the industry can give me some advice. I've got a baby on the way and need a better-paying job than my current one (receptionist). I have a Bachelor of IT majoring in software development, but I have no formal experience and my skills are rusty as fuck; I've done barely any programming for about a year. What are some things I should be doing in preparation for finding a job, and what can I do to improve my chances of getting an IT position? The kind of work I'm looking for is anything related to IT that pays at least 50k per year.
I'd be getting up to speed on programming - even if it's not strictly required for the job you're going for it's going to be useful.
100% fill out your LinkedIn profile and set yourself to "looking for work" or something like that. Connect with everyone you possibly can. If you live near your former college (or any college, really), watch out for career fairs. LinkedIn has given me more interviews than any other job site (which have all given me a net total of 0 interviews). If you want to be a software developer, consider working on side projects to show both your passion for software development and your technical experience/knowledge. And, of course, fill out 10,000 applications a day. Get your resume critiqued if you're having trouble getting responses. Good luck!
any tutorials for practical programming to get me back into the swing of things? i've got a couple of website ideas i'd like to explore and some game ideas i'm trying to work up the enthusiasm for.
Depends on what you want to do. Do you want to be a web developer, write drivers for GPUs, write software for robots, make videogames, etc.? Additionally, consider your area. Do you live in a place with a lot of government contractors, like I do? Then consider learning C# or Java. Do you live on the west coast with a lot of consumer-oriented companies? Then you can be more liberal and learn things like Node.js, Python, or Ruby.
Getting a web-dev job will probably be easiest (it seems to me that that's where most of the jobs are). I agree that picking a server-side framework and learning it is a good idea, and learning React will help too.
As someone with a huge background in C++/C, graphics stuff, etc. (most of you here know the shit I do): absolutely get some balance by picking up knowledge of web stuff, imo. It's fairly easy to pick up, a bit easier to find jobs in, and those jobs all seem to pay really well; often better than what I'd get doing my sort of stuff, it seems. Personal projects aren't just a great way to beef up your resume or to get talking points for your cover letter, either: they're absolutely a great way to drive rapid improvement, especially if you're super interested in the topic.
I'm trying to implement shader program caching in OpenGL 4.5 (using program binaries instead of SPIR-V & shader binaries), but I seem to be making a mistake somewhere, given that the driver won't accept the binaries I'm giving it. I can't seem to find any real resources on the topic. Here's how I'm saving the data to disk:

```cpp
struct Header {
    GLenum binaryFormat = 0;
    GLsizei binaryLength = 0;
};

// After successful shader compilation and linkage (from text files)
Header header;
glGetProgramiv(glProgramID, GL_PROGRAM_BINARY_LENGTH, &header.binaryLength);
std::vector<char> binary(header.binaryLength);
glProgramParameteri(glProgramID, GL_PROGRAM_BINARY_RETRIEVABLE_HINT, GL_TRUE);
glGetProgramBinary(glProgramID, header.binaryLength, NULL, &header.binaryFormat, binary.data());

FILE* file = fopen((shaderPath + ".shader").c_str(), "w+");
if (file != NULL) {
    // Write header information first
    fwrite(&header, sizeof(char), sizeof(Header), file);
    // Write the rest of the program data
    fwrite(binary.data(), sizeof(char), header.binaryLength, file);
    fclose(file);
}
```

And here's how I'm reading it back:

```cpp
Header header;
std::vector<char> binary;

FILE* file = fopen((shaderPath + ".shader").c_str(), "r");
if (file != NULL) {
    // Read header information first
    fread(&header, sizeof(char), sizeof(Header), file);
    // Read the rest of the program data
    binary.resize(header.binaryLength);
    fread(binary.data(), sizeof(char), header.binaryLength, file);
    fclose(file);
}

glProgramID = glCreateProgram();
glProgramBinary(glProgramID, header.binaryFormat, binary.data(), header.binaryLength);

GLint success;
glGetProgramiv(glProgramID, GL_LINK_STATUS, &success);
if (success == 0) {
    // ... Always reach here ...
}
glValidateProgram(glProgramID);
```
Solved the problem. When working with binary files, a special flag needs to be set when opening the file: the C-style fopen(...) needs the "rb" or "wb" mode for reading/writing respectively, and the C++ input/output file streams need the std::ios::binary flag set when opening the file.
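For anyone who hits the same thing: a minimal sketch of the difference, using a hypothetical Header struct like the one in the post above. In text mode ("w+"/"r"), newline translation on Windows can silently mangle arbitrary bytes, so a binary blob round-trips correctly only with "wb"/"rb":

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for the shader cache header discussed above. */
typedef struct {
    unsigned int binaryFormat;
    int binaryLength;
} Header;

/* Writes a header in binary mode and reads it back; returns 1 on a
   byte-exact round trip. */
int roundtrip(const char *path) {
    Header out = { 0x8740u, 1234 };  /* arbitrary example values */

    FILE *f = fopen(path, "wb");     /* "wb", not "w+": no newline translation */
    if (!f) return 0;
    fwrite(&out, sizeof(Header), 1, f);
    fclose(f);

    Header in;
    memset(&in, 0, sizeof in);
    f = fopen(path, "rb");           /* "rb" to read the raw bytes back */
    if (!f) return 0;
    fread(&in, sizeof(Header), 1, f);
    fclose(f);

    return in.binaryFormat == out.binaryFormat
        && in.binaryLength == out.binaryLength;
}
```

On POSIX systems text and binary modes behave identically, which is why this class of bug tends to only show up once the code runs on Windows.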
yup! this is a simple thing to get caught on, and I actually had the exact same problem with my shader cache system lmao. Also, where are you saving these binaries to? Are they precompiled or compiled at runtime? And have you tried messing with specialization constants yet? The way you can set them is quite a bit easier than in Vulkan, but you can still get some pretty neat benefits out of them (since they're effectively compiled into the pipeline state before being used).
Right now my implementation is basic and naive. I'm not using SPIR-V yet, so I'm just saving the program binaries instead of the individual shader binaries. When creating the shader object, I check whether the cached binary exists (same shader name, different extension). If it does, it attempts to load it; if it doesn't, it attempts loading the plain-text shaders. In that case, if the shader gets successfully compiled, linked, and verified, I export its binaries to disk for next time. I think I save a little bit of performance; I was using the Nsight profiler and I was getting a lot of variance between launches, so I don't have any real numbers for this one. I was also implementing pixel buffer objects at the same time, and I saw a ~1ms improvement in load time. In my next version I'll tackle SPIR-V. I'll have to make some project changes so that the shaders are added to my project source code, and add a shaderc compilation step so I can ship the SPIR-V intermediate rather than the plain-text versions.
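The cache-then-fallback lookup described above can be sketched roughly like this (the GL compile/link calls are left out since they need a live context; the names here are hypothetical, not from the actual project):

```c
#include <stdio.h>

/* Hypothetical sketch of the lookup order described above: try the cached
   program binary first (same name, different extension), otherwise fall
   back to compiling the plain-text shaders and exporting a fresh cache. */
typedef enum { LOAD_FROM_BINARY, LOAD_FROM_SOURCE } LoadPath;

LoadPath choose_load_path(const char *shaderPath) {
    char cached[512];
    snprintf(cached, sizeof cached, "%s.shader", shaderPath);

    FILE *f = fopen(cached, "rb");  /* binary mode, per the fopen fix above */
    if (f) {
        fclose(f);
        return LOAD_FROM_BINARY;    /* cached program binary exists */
    }
    return LOAD_FROM_SOURCE;        /* compile + link, then export the cache */
}
```

One caveat worth noting with this scheme: drivers are allowed to reject a previously saved program binary (e.g. after a driver update), so the source fallback path has to stay reachable even when the cache file exists.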
if you decide to use shaderc, you have to set the options you like in the options struct like so, then apply them before compiling like so. I compile to assembly before translating to binary for a couple of reasons (mostly debug-related, and because the last step is fairly cheap, so separating them is nbd). You can skip that and go right to binary, though. My only caution is that linking directly to shaderc can involve dragging in a lot of dependencies. Also, for baking shaders into the binary, use glslangValidator with the command-line option "-x" to save a text-based format encoding the binary as uint32s given in hex: then you can just copy and paste these big blobs into your constexpr uint32_t arrays and bam, baked-in compile-time constant shader binaries.

I'm dealing with some issues myself atm: I'm having a really fucking irritating bug with delegates. I'm trying to use this delegate implementation with C++17: https://www.codeproject.com/Articles/1170503/The-Impossibly-Fast-Cplusplus-Delegates-Fixed My implementation and usage are here: https://gist.github.com/fuchstraumer/3e4f119b6375e19a1256394c2da7e036 Usage is for the swapchain callbacks, at line 24/25 of main.cpp. I can't see any differences between my implementation and the original, but it never fucking works on static methods.
It gives me the following errors:

```
1>c:\users\(xx)\documents\softwaredev\diamonddogs\modules\common\signal\delegate.hpp(119): error C2440: '<function-style-cast>': cannot convert from 'initializer list' to 'delegate_t<void (VkSwapchainKHR,uint32_t,uint32_t)>'
1>c:\users\(xx)\documents\softwaredev\diamonddogs\modules\common\signal\delegate.hpp(119): note: No constructor could take the source type, or constructor overload resolution was ambiguous
1>c:\users\(xx)\documents\softwaredev\diamonddogs\tests\integration_tests\triangletest\main.cpp(25): note: see reference to function template instantiation 'delegate_t<void (VkSwapchainKHR,uint32_t,uint32_t)> delegate_t<void (VkSwapchainKHR,uint32_t,uint32_t)>::create<BeginRecreateCallback>(void)' being compiled
1>c:\users\(xx)\documents\softwaredev\diamonddogs\tests\integration_tests\triangletest\main.cpp(25): note: see reference to function template instantiation 'delegate_t<void (VkSwapchainKHR,uint32_t,uint32_t)> delegate_t<void (VkSwapchainKHR,uint32_t,uint32_t)>::create<BeginRecreateCallback>(void)' being compiled
```

I'm not sure at all why the fuck this is happening. I'm not using bracket initializers anywhere, which is usually what fucks up overload resolution between constructors and initializer_list, and despite my recent reading on initializer lists and initialization issues, I have no idea why this is breaking. I'm completely stymied and it's really fuckin irritating.
so i'm trying to use pthreads in C and i'm using a barrier to synchronize what i'm doing, but i'm not seeing any speed increase when using more than 1 thread. does anyone know a common mistake in using multi-threading with synchronization? pretty new to it so i'm guessing i'm doing something stupid. and just in case it's an issue with how i'm recording my time, pretty sure it's not this though:

```c
double getTime() {
    struct timeval tv;
    gettimeofday(&tv, (struct timezone *)0);
    return (double)tv.tv_sec + (double)tv.tv_usec / 1000000.0;
}

int main() {
    // blah blah
    double t = getTime();
    // do threading stuff
    printf("%.2f\n", getTime() - t);
    // do stuff
}
```
Post your threading code. Your timekeeping code looks overly complicated; you should be able to get the time taken with the same time-difference approach, but using the clock function: https://www.tutorialspoint.com/c_standard_library/c_function_clock.htm. Unless you're timing something that's going to take longer than 72 minutes, you shouldn't have a problem with it.
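One caveat with clock() worth flagging for this particular case: on POSIX systems it measures processor time, which accumulates across all of your threads, so a perfectly parallelized program can show roughly the same clock() total with 1 thread or 8. For measuring speedup, wall-clock time is what you want. A minimal sketch (wall_seconds is a hypothetical helper name), which also folds in the tv_usec field that the getTime() in the earlier post dropped:

```c
#include <stdio.h>
#include <sys/time.h>

/* Wall-clock seconds since the epoch, with microsecond resolution.
   Both fields of struct timeval must be combined: tv_sec alone only
   gives whole seconds, and tv_usec alone is meaningless. */
double wall_seconds(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (double)tv.tv_sec + (double)tv.tv_usec / 1000000.0;
}
```

Usage is the same difference approach as before: call it once before the threaded section, once after, and print the difference.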
i did take a look at a couple other time methods and they gave me similar results, varying only by fractions of a second, so not a big deal. please excuse this, it's pretty messy because i've been messing around with it:

```c
void* sub(void *arg) {
    pthread_mutex_lock(&m);
    struct threadParams *readParams = (struct threadParams *)arg;
    pthread_mutex_unlock(&m);
    double difference = 0;
    double maxDifference = 0;
    double testDifference = 0;
    double hold = 0;
    int count = 0;
    int test = 1;
    while (test) {
        difference = 0;
        for (int row = 1; row < readParams->numRows-1; row++) {
            for (int col = 1; col < readParams->numCols-1; col++) {
                readParams->hotPlateCopy[row][col] =
                    ((readParams->hotPlate[row-1][col] +
                      readParams->hotPlate[row+1][col] +
                      readParams->hotPlate[row][col+1] +
                      readParams->hotPlate[row][col-1]) / 4.0);
                maxDifference = fabs(readParams->hotPlate[row][col]
                                     - readParams->hotPlateCopy[row][col]);
                if (maxDifference > difference)
                    difference = maxDifference;
            }
        }
        for (int row = 1; row < readParams->numRows-1; row++) {
            for (int col = 1; col < readParams->numCols-1; col++) {
                readParams->hotPlate[row][col] = readParams->hotPlateCopy[row][col];
            }
        }
        pthread_barrier_wait(&barr);
        if (difference < readParams->epsilon) {
            if (count != 0)
                //printf("\t%d\t %f\n", count, difference);
            break;
        }
        if (isPowerOfTwo(count) && count != 0) {
            //printf("\t%d\t %f\n", count, difference);
        }
        count++;
    }
}
```
From the look of it, you're probably not splitting up the work, so your threads are running the same instructions in parallel. Post how you initialize the threadParams and the threads themselves.
```c
t = getTime();
pthread_barrier_init(&barr, NULL, numThreads);
pthread_mutex_init(&m, NULL);
for (int i = 0; i < numThreads; i++) {
    rc = pthread_create(&thrdid[i], NULL, sub, (void*)readParams);
}

for (int i = 0; i < numThreads; i++)
    pthread_join(thrdid[i], NULL);
```
Anyone know enough shader-coding to fix Garry's outline shaders? Shaders don't compile · Issue #1 · Facepunch/Facepunch.Highlight..
Wanna add some more details to that ticket?
Sure, done.
Is there a compile log with errors that you can add?
what the. I dragged the files into the project: magenta. Removed them, dragged them in again: magenta. Made a new empty project, dragged them in: magenta. Removed them. Then I wanted to see if I could find some compile log by dragging them in again: no magenta anymore. Put them in my working project: no magenta. thanks proboardslol, your aura fixed the compiler
https://media.giphy.com/media/Q7y3K35QjxCBa/giphy.gif
Seems like you're passing the same threadParams struct to each thread, so they're all going to do the same work. Usually in this type of thing you base each thread's subset of the overall data on the thread id of the thread you created. Also, you don't need a mutex where you have one unless you're going to be updating that bit of memory from multiple threads and need to prevent a race condition.
Ah, I think I get it. I'll have to work with it a little; I had a feeling I wasn't distributing the work but wasn't entirely sure.
would it be possible to go through the array row by row, with each thread doing a calculation or am i thinking about this incorrectly?
You certainly could. Change your loops to use the thread ID and the number of threads, so every iteration increments by the number of threads.
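The strided split described above can be sketched like this, on a simpler problem (a parallel sum; all of the names here are made up for the example, not from the hotplate code). Each thread gets its own params struct carrying its id, so no two threads do the same work, and no mutex is needed because each thread writes only its own slot:

```c
#include <pthread.h>

#define NUM_THREADS 4
#define N 1000

/* One of these per thread, so each pthread_create gets a distinct arg. */
typedef struct {
    int id;               /* this thread's index, 0..NUM_THREADS-1 */
    const int *data;
    long partial;         /* written only by the owning thread */
} Work;

static void *sum_strided(void *arg) {
    Work *w = (Work *)arg;
    w->partial = 0;
    /* Strided split: thread i handles i, i+NUM_THREADS, i+2*NUM_THREADS, ... */
    for (int i = w->id; i < N; i += NUM_THREADS)
        w->partial += w->data[i];
    return NULL;
}

long parallel_sum(const int *data) {
    pthread_t tid[NUM_THREADS];
    Work work[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++) {
        work[i].id = i;
        work[i].data = data;
        pthread_create(&tid[i], NULL, sum_strided, &work[i]);
    }
    long total = 0;
    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_join(tid[i], NULL);
        total += work[i].partial;   /* combine after join: no race */
    }
    return total;
}
```

For the hotplate case the same idea applies to the row loop: start at `row = 1 + id` and step by the thread count, with the barrier keeping the iterations in sync.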
i think i'm following, but not 100%, lemme sleep on it since it's 1 am here lol
I managed to get it working in my project; unfortunately this method doesn't work well with ARKit projects, it's quite performance-heavy.
Still stuck. Any ideas?