-snip, page stretcher. Was solved in private chat. I'd share a solution but there's nothing to share so dw about me being a DenverCoder9-
[QUOTE=Asgard;52079672][...]
But if you have a suggestion on how to handle this matching decoders with opcodes and returning the correct data, please let me know. Again, the encoder and handler work just fine because I have type information, but I don't know how to handle the decoder part.
[...][/QUOTE]
You'll have to send some metadata with type information, ideally out of band compared to the payload.
If the type information is known at top level at least, then you can also make the encoders/decoders take care of that.
[editline]9th April 2017[/editline]
[QUOTE=Blasphemy;52079423]Working on writing a bot that uses machine learning to solve technical support tickets for a hosting company that I work for.
The first step, write a python back-end with an android app to allow the bot to automatically two-factor authenticate:
[VID]http://paradigm-network.com/files/whmcs_auth_testing.webm[/VID][/QUOTE]
Is that with or without you having to push a button on the phone after the request comes in?
If not, then I recommend adding a confirmation button that has a delay of at least two seconds before becoming active.
In any case, the app really should display some more information regarding where the authentication code is sent [I]before[/I] the user manually confirms it.
[QUOTE=Tamschi;52079689]You'll have to send some metadata with type information, ideally out of band compared to the payload.
If the type information is known at top level at least, then you can also make the encoders/decoders take care of that.[/QUOTE]
What can I pragmatically do with type information? I'd still have issues with storing all the decoders and casting the decoded result to the correct message type
[QUOTE=Tamschi;52079689]You'll have to send some metadata with type information, ideally out of band compared to the payload.
If the type information is known at top level at least, then you can also make the encoders/decoders take care of that.
[editline]9th April 2017[/editline]
Is that with or without you having to push a button on the phone after the request comes in?
If not, then I recommend adding a confirmation button that has a delay of at least two seconds before becoming active.
In any case, the app really should display some more information regarding where the authentication code is sent [I]before[/I] the user manually confirms it.[/QUOTE]
No confirmation is required, this is intended to automatically authenticate with a fully functional two-step auth system
[QUOTE=Blasphemy;52080247]No confirmation is required, this is intended to automatically authenticate with a fully functional two-step auth system[/QUOTE]
That kind of defeats the purpose, by making the back-end a single point of failure regarding access to the support system.
I reworked my collision passthrough system for my portal asset so it's uber fast
[IMG]http://i.imgur.com/VqSrmnn.gif[/IMG]
I hope Unity pushes the update before my sale
([URL="https://www.assetstore.unity3d.com/en/#!/content/81638"]Also, it's going on sale on the 19th! Check it out![/URL])
I got my new shader compiler to work with separable shader objects and program pipelines. It's capable of processing #includes, and with a bit of tuning can handle #defines too. One of the trickier things about setting up #include support is that you still need to place the #version directive on the first line of the file, and you also have to make sure you erase the #include once you've added the actual source code you're including. I'm also kinda happy with the fact that the compiler object tracks what types of shader objects have been added to the program that will be linked, and uses this to set up the correct bitfield for binding a program to a pipeline (so that OpenGL knows what shader stages the program is using). The source is [URL="https://gist.github.com/fuchstraumer/b56ddd5ce1fcf64b7e93f9af388be956"]here[/URL] in a wee little gist, and is mainly taken from [URL="https://github.com/g-truc/ogl-samples/blob/b856b8ca236e1b46d40cf514784a25d3eaec5f4f/framework/compiler.hpp"]this[/URL], but I use std::regex in place of their method for finding the actual #includes and #defines.
Weirdly, you have to use gl_PerVertex in your shader stages for setting up the output. This whole thing is also making me want to move to uniform buffer objects, as those work well with DSA and feel more modern than having tons of individual explicit uniforms.
At this point, I've got a good chunk of "DiamondDogs" set up to start using DSA entirely, and just need to take care of a few more details and cleanup items (like removing my horrendously over-complicated mesh class) before I start working on neat space stuff for that project again. I'm pretty sure DSA should make using CUDA to generate texture data a lot easier, since most CUDA->OpenGL interop stuff comes down to ping-ponging access to device texture memory back and forth using the texture memory's device pointer.
So last week at work I backed up the wrong file and reverted 21 hours of work on a report. Code was a mess anyway, redid the lot in 8 hours and refactored the shit out of it. Line count went from 3700 to 1800.
Sure am glad dynamic works in .NET 4.0 on Windows Server 2008 R2, can't see any other way of filtering an enumerable of anonymous types in a function.
I participated in a game jam last weekend, my first one. I'd never worked in Unity, with C#, or with VR, but now I've experienced all three. :v: We managed to get first place too - there were only three teams, but a win is a win, right?
The theme was "Emergency?", including the question mark. We came up with Bunker Buddies; the best way to describe it is as a co-op VR Papers, Please where you have to determine whether the orders you receive are genuine or not by comparing names and identification codes.
[video=youtube;oPbE9Jw4VfY]https://www.youtube.com/watch?v=oPbE9Jw4VfY&t=1s[/video]
Here is some off-screen footage. I keep switching to showing the players since it was intended for my parents, but hopefully you'll get the gist of it regardless.
And in case you want to hear what is up with that FMV in the game, starring me as the glorious instructor with an authentic Russian accent, you can check it out here.
[url]http://pahlavan.se/dump/bunkerbuddies.webm[/url]
The game is rather quiet since we didn't have a lot of time to add audio but all in all I'm very happy with what we made.
In OpenGL, is it normal to have more than one shader? Or do people just cram everything into a single shader?
[QUOTE=false prophet;52086230]In OpenGL, is it normal to have more than one shader? Or do people just cram everything into a single shader?[/QUOTE]
It is common to have a single shader for "general" shading but specialized shaders for less common effects. Usually, you'll have a master shader with keywords (#defines), which enables you to compile shaders that include only what you need for a certain material. This makes sure you have a versatile all-purpose material which is also optimized (real branches are often to be avoided in shaders if possible, for performance reasons). The alternative to this, assuming you want to avoid branches, is having multiple shaders which support certain types of inputs, but the number of permutations quickly becomes a pain to manage once you expand your feature set.
Say for instance you are setting up a material for a wooden crate object. You have an albedo texture, a normal map, but no specular/metallic map, even though your shader supports it. Based on what inputs are active, you set the appropriate keywords and compile a shader which has just the features you want/need. This is all assuming you are handling the compilation of shaders yourself somehow, though. The obvious downside to this is that shader compilation can take a while, depending on the complexity of your master shader.
Holy shit, I got reflection working well enough to nearly fully automate the process of binding attributes and VAO+VBO together upon initialization of a renderable object type. It's hacky as fuck though, and I need to implement detection of the number of elements being bound (so it'll work with custom vector, array, or matrix types), along with figuring out some way to generalize things a bit so I don't have so many specialized template methods for calling the vertex attribute format binding method. The binding process is literally as simple as constructing a render_object, where the template parameter is whatever type of vertex you'd like to use. The constructor of the render_object takes care of the rest. So whereas an operation like setting up three renderable items with OpenGL, each with their own kind of vertex, would take a ridiculous number of (mostly redundant) calls to various OpenGL API methods (even with DSA it's 3 lines per attribute), this entire process is simplified to three lines for a "user":
[cpp]
int main(){
    init_gl();
    render_object<base_vertex_t> base_render_object_t;
    render_object<textured_vertex_t> textured_render_object_t;
    render_object<lod_vertex_t> lod_render_object_t;
    return 0;
}
[/cpp]
Vertex types are defined something like this, and the BOOST_FUSION_ADAPT_STRUCT macro gets them working with the reflection system:
[cpp]
struct base_vertex_t {
    glm::vec3 position;
    glm::vec3 normal;
};
BOOST_FUSION_ADAPT_STRUCT
(
    base_vertex_t,
    (glm::vec3, position)
    (glm::vec3, normal)
)
[/cpp]
The constructor of a device object then calls the binding system, which does a bunch of the aforementioned template metaprogramming hackery to decide which binding function to call to let the VAO know about a given member of the vertex struct. I need to work on this, though, as I'm not sure if keeping the ability to also bind custom structs is worth it - most of the hackery is because my first enable_if check caught glm vector types as structs to iterate through, then broke when the glm vectors didn't have things like a "size" for them to query. Not to mention that I want to bind a whole glm vector, not each individual element :v:
I inserted a shitload of good ol' terrible calls to printf to see if the attribute index was changing between attributes, and to make sure that the VAOs had actual names/IDs, and things seem to be working:
[t]http://i.imgur.com/YrLAKoO.png[/t]
If anyone wants to see the horrifying code that does this, [URL="https://gist.github.com/fuchstraumer/52153c11915289dfc61aad77234b5157"]it's here[/URL]. It's probably not even very useful, and I didn't really do that much to get it working - I just did my usual script kiddie [URL="http://jrruethe.github.io/blog/2015/05/21/boost-fusion-json-serializer/"]modifications of someone else's work[/URL] (a JSON serializer like this in C++ is pretty impressive, though). I'm just stuck on how to better filter out various structs and types: unlike what the JSON serializer assumes, the struct whose members we're iterating through is actually far simpler than the members it contains, so it kinda turns the whole enable_if() process upside down. I wish there were a way I could detect if a struct was contained in a certain namespace; that'd make things really simple for handling GLM stuff at least.
[QUOTE=fewes;52086297]It is common to have a single shader for "general" shading but specialized shaders for less common effects. Usually, you'll have a master shader with keywords (#defines), which enables you to compile shaders that include only what you need for a certain material. This makes sure you have a versatile all-purpose material which is also optimized (real branches are often to be avoided in shaders if possible, for performance reasons). The alternative to this, assuming you want to avoid branches, is having multiple shaders which support certain types of inputs, but the number of permutations quickly becomes a pain to manage once you expand your feature set.
Say for instance you are setting up a material for a wooden crate object. You have an albedo texture, a normal map, but no specular/metallic map, even though your shader supports it. Based on what inputs are active, you set the appropriate keywords and compile a shader which has just the features you want/need. This is all assuming you are handling the compilation of shaders yourself somehow, though. The obvious downside to this is that shader compilation can take a while, depending on the complexity of your master shader.[/QUOTE]
Actually, branching is somewhat fine by now - at least uniform branches are, since most compilers can optimize them quite well. There have been some presentations recently where studios moved away from shader permutations to more branching-based approaches.
@paindoc,
Your multiple calls to printf could be massively simplified like so:
[code]printf("GL renderable item with vertex type '%s' and VAO ID %i bound successfully.\n",
typeid(T).name(),
vao_name);
printf("'%s' bound at VAO location %i\n", typeid(T).name(), current_attribute_idx);[/code]
I may be wrong, but doing multiple calls to printf can be unnecessarily expensive. Other than that, nice work!
[QUOTE=Berkin;52087505][vid]https://my.mixtape.moe/bzuktr.webm[/vid][/QUOTE]
No audio, except for steam notifications?
[QUOTE=Karmah;52087591]No audio, except for steam notifications?[/QUOTE]
Whoops
[QUOTE=CarLuver69;52087070]@paindoc,
Your multiple calls to printf could be massively simplified like so:
[code]printf("GL renderable item with vertex type '%s' and VAO ID %i bound successfully.\n",
typeid(T).name(),
vao_name);
printf("'%s' bound at VAO location %i\n", typeid(T).name(), current_attribute_idx);[/code]
I may be wrong, but doing multiple calls to printf can be unnecessarily expensive. Other than that, nice work![/QUOTE]
No, I agree - I absolutely should have done a single call instead of chaining them unnecessarily but I was working quickly (lazily) and just wanted to see what would happen if I tried to make it work. I forget that printf and console output in general can be costly, and I have the even worse habit of forgetting to remove these or even do something like flagging them for debug builds only.
Unfortunately, that code won't work. I think I updated the gist to include the size of the individual attribute, but I can't think of a way to account for the size of the whole struct - that's needed for
[cpp]
glVertexArrayVertexBuffer(vao_name, current_attribute_idx, vbo_name, 0, sizeof(T));
[/cpp]
where sizeof(T) is the size of the vertex struct itself, as that's the stride through the VBO. Alternatively, I could try to split the single VBO I use into one VBO per attribute (which has its advantages). I'm going to have to play with both, I think, since the multiple-VBO approach is needed for rendering-relative-to-eye stuff (the position VBO gets updated frequently, the rest not really).
My problem is I'm still thinking in terms of general runtime computation and methodology, not type-level computations and the second language that is C++ template metaprogramming. It hurts my head.
[editline]edited[/editline]
this continued experimentation with templates on my part is less me asking [I]why[/I] I should use templates and more me just doing it because I (kinda) can, and it's been a fun diversion away from my abuse of inheritance. in the case of OpenGL, a surprising amount of stuff can be done at compile time, so I figured why not do it at compile time? that, and I kinda want to make it as easy as possible to add in new shaders and such - that'd be neat to play with, especially given that I'm shooting for a solar system renderer
Some players have been having a white screen problem in my game. Sounds like sometimes the whole screen is white and sometimes just some textures are:
[IMG]http://i.imgur.com/7EbBhqL.jpg[/IMG]
I guessed that it might have been due to the game loading the levels/assets in a separate thread, which I do so the loading screen can animate rather than hang while everything is loading. I think that was making the textures not stay in memory on some systems, causing the sprites to display as white. I released an update that added a "singleThreadLoading" option which stops the game using new threads when loading levels/assets. A couple of guys who were having problems tried it and it fixed it for them.
Not sure if there's something else I can do to fix the problem or if I should make this setting the default. If I did make it the default, the loading screens wouldn't be very smooth and there would be fps drops when changing levels...
I also noticed another problem that occurs if you try to change the fps limit (which you can only do by going into config.txt). By default the game has an fps limit of 60, but due to the way the game logic works, changing it can cause problems. For example: a zombie attacks and deals damage when its animation is on frame 9, but if you're playing with a 120fps limit it's on frame 9 of its animation for 2 frames. This causes the zombie to deal damage twice. And if you play with an fps lock of 30, it can skip frames, which causes a lot of problems. Having the game locked at 60 all the time also seems to cause the game to go into slow motion on slower computers...
I'm thinking I should probably rework a lot of the game logic that relies on animations so that it happens on animation frame change rather than in the normal game update loop. That way the game wouldn't be forced to lock the fps to 60, and it might also help with performance in general.
Can't you retarget the animation speed based on the max fps?
Has anyone ever had problems running Vulkan + ImGui on AMD cards? Some of our testers are getting some really fucked up results only on AMD. (geometry shaders not working, hall of windows effects etc.)
[QUOTE=gonzalolog;52089918]Can't you retarget the animation speed based on the max fps?[/QUOTE]
I think the issue is he asks "what frame is this on" every tick and if the FPS is high enough it'll stay on the same frame for multiple ticks.
[QUOTE=jangalomph;52090545]Has anyone ever had problems running Vulkan + ImGui on AMD cards? Some of our testers are getting some really fucked up results only on AMD. (geometry shaders not working, hall of windows effects etc.)[/QUOTE]
I'm running ImGui with Vulkan on AMD and it seems to be fine so far using the example glfw/vulkan binding they have in the repo.
-snip-
I made a small node module that lets you draw basic graphics straight from NodeJS: [url]https://github.com/qwook/graphical[/url]
A few friends and I were working with accelerometer and eye-tracking data and were forced to read the values that were spat out every 100 milliseconds. That made me come up with the idea of creating a small tool to be able to see all that data using graphics.
You create a few primitives and define their attributes, then that data gets streamed over to a web browser.
It isn't too fully-featured because it isn't meant to be more than just a debugging tool.
Here's an example of it in action:
[img]http://i.imgur.com/E0ES7IA.gif[/img]
The code for that gif can be found here: [url]https://github.com/qwook/graphical/blob/master/test.js[/url]
There's probably something like this that exists, but I haven't found it.
[QUOTE=Ott;52090610]I think the issue is he asks "what frame is this on" every tick and if the FPS is high enough it'll stay on the same frame for multiple ticks.[/QUOTE]
Yep, that's the problem. Because of that the game is forced to always be locked at 60fps.
The slow motion probably happens because when the game drops down to 30 or 15fps on a slower computer, it's still trying to update and render the game at 60fps. The whole game goes slow motion in that case, not just the animations.
I added a simple abbreviation function to Rant.
[img]http://i.imgur.com/6ijquGH.png[/img]
[QUOTE=Berkin;52094951]I added a simple abbreviation function to Rant.
[img]http://i.imgur.com/6ijquGH.png[/img][/QUOTE]
Does it have a hardcoded lookup list for things or does it ignore words like 'of', 'the', 'to', and 'for'?
[QUOTE=MattJeanes;52095942]Does it have a hardcoded lookup list for things or does it ignore words like 'of', 'the', 'to', and 'for'?[/QUOTE]
It ignores short prepositions, articles, etc. It's pretty basic.
Hey, I feel super stupid, but I'm having some trouble understanding how to run shit with Python (brand new to this).
I'm trying to install Python, pip and virtualenv using the instructions on [URL="https://github.com/slackapi/Slack-Python-Onboarding-Tutorial/blob/master/README.md#pythonboarding-bot"]this page[/URL] so that I can author some custom bot behavior in Slack.
I have Python installed, which came with the latest version of pip. Do I really need to navigate to the Python install directory in order to run Python commands in the command prompt? If I run a command prompt from a win+s search, it runs from like C:\Users\my username and acts like it doesn't know Python stuff anymore. Also, trying to run
[CODE]$ [sudo] pip install virtualenv[/CODE]
returns an error
[CODE]C:\Python27>python $ [sudo] pip install virtualenv
python: can't open file '$': [Errno 2] No such file or directory[/CODE]
What am I doing wrong here?