[QUOTE=Tamschi;52189457]Afaict it looks good, assuming that's actually a Promise(-compatible) return value.
Have you considered providing TypeScript typings? [URL="https://github.com/Microsoft/dts-gen"]You may be able to partially auto-generate it[/URL][URL="http://archive.is/8PFLx"].[/URL]
Is there a serious reason for the hijacking though? It seems like you'd normally just fail the Promise there.[/QUOTE]
Yes, there are other methods, such as a pop-up, but this one just hijacks the current page. It just works™. Every method is documented with JSDoc, so type hinting should work.
I just put the docs online; right now I'm making sure that builds are published, and then I'll make the repo open source.
Authentication examples: [url]https://mapcreatoreu.github.io/m4n-api/manual/example/examples.authentication.html[/url]
Not even close to the awesome stuff you guys post here, but I've been working on a small time-handling [URL="https://github.com/Skoparov/Helpers/tree/master/temporal"]class[/URL] intended to combine the comfort of Qt's QDateTime with the template-ratio approach used in std::chrono, plus some small additions to make things easier to work with. For end users it just looks like a set of typedefs similar to those from the standard, but they all use the same inner class, which basically just stores seconds since Julian year 0 plus nanoseconds. That allows you to work with the wrapper pretty freely:
[code]
using namespace temporal;
millisec_t ms{ days_t{ 666 } + hours_t{ 666 } - minutes_t{ 666 } + nanosec_t{ 666 } };
time_type ms_count{ ms.count() };
time_type ns_count{ nanosec_t{ ms }.count() };
ms.from_timeval( timeval{ 666, 666 } );
ms.from_timespec( timespec{ 666, 666 } );
ms.from_std_duration( std::chrono::seconds{ 666 }, since::epoch ); // may treat it as a duration since epoch or since julian day 0
auto dms = ms.to_std_duration< std::chrono::milliseconds >();
// there's also a simple date_time class capable of basic stuff like calculating date, time as well as some other stuff using the said internal timestamp
date_time dt{ 1666, date_time::jun, 6, 6, 6, 6, 666000000 };
ms = dt;
date_time dt_now{ millisec_t::now().to_date_time() };
std::string date{ dt_now.to_string() }; // also accepts time patterns
time_type hour{ dt_now.hour() };
bool is_leap{ dt_now.is_leap() };
int day_of_week{ dt_now.day_of_week() };
[/code]
A lot of things are constexpr, so it's possible to do some basic stuff at compile time as well.
:snip:
[QUOTE=proboardslol;52189471]Thinking of writing a Chrome plugin for work. We do things with computer-telephony integration, and sometimes we have to make a real call via phone just to test frontend menus, which gets annoying. Of course, we should test with actual calls, but when we're just testing a menu option we can skip that part. I want to write a little popup menu that I can use to trigger different types of UI events more quickly.
I'd have to do this in my free time though
[editline]5th May 2017[/editline]
I think I'll call it "blue box"[/QUOTE]
Why disagree :(
edit fuck meant to click edit not reply sorry drunk
A client lodged a support ticket because a piece of software in our suite was taking too long to load. Optimised the SQL queries from n+1 down to 4 (973 queries to 4 in this case) and parallelised them, reducing load time from 3 minutes to 15 seconds. Still some more optimisations to go; aiming for 10 seconds if I can.
[QUOTE=helifreak;52192300]-snip-[/QUOTE]
I still want to go back and do this to my Markov text generator. It does all the processing in PHP and pulls the data from MySQL, so it takes 1-2 minutes to generate a paragraph of text. If I can rewrite the logic in MySQL it should be much faster, but I can only do simple CRUD stuff in SQL
Got Phong material values loading from file, and textures working. I will probably skip normal mapping, as this needs to be a playable game by next week and I haven't started on gameplay yet.
[IMG]http://imgur.com/nsxsJ8Dl.png[/IMG]
The floor and wall textures are from the [URL="https://glitchvid.com/hl2emtextdls/"]HL2 enhancement mod by glitchvid[/URL].
Also does anybody know where to get free textured models?
Does anybody know of an easy, thread-safe SQL library for C++? I need to do some web-crawling work and I want to store results sooner rather than later.
[QUOTE=MaPi_svk;52193606]Also does anybody know where to get free textured models?[/QUOTE]
3D Models coming right up!
[url]https://opengameart.org/art-search-advanced?keys=&field_art_type_tid%5B%5D=10&sort_by=count&sort_order=DESC[/url]
[url]http://collinoswalt.com/blending/[/url]
stupid hacky thing dunno what to call it. There's probably a better way to do this
working with a friend
[vid]https://s1.webmshare.com/JG9LE.webm[/vid]
haha I wonder which game engine we're using
Processing is really fun
[media]http://www.youtube.com/watch?v=acKOaEbua2A[/media]
[editline]7th May 2017[/editline]
youtube's compression is shit tho
This is the most satisfied I've ever felt with a simple triangle:
[t]http://i.imgur.com/PMNZ8js.png[/t]
Also, I've been in various states of sobriety throughout the day while working on Vulkan stuff and I found this reason for why program is kill:
[t]http://i.imgur.com/2rvZQZ4.png[/t]
I think I was upset with myself :v:
[QUOTE=paindoc;52200630]-snip-[/QUOTE]
Congratulations on your black triangle success. (geddit?)
First time publishing/building a JavaScript library; wondering if any of you have pointers. It's not really usable unless you use our service (costs dosh). I'm mostly wondering whether my API syntax is any good, etc.
[url]https://github.com/MapCreatorEU/m4n-api[/url]
[url]https://www.npmjs.com/package/@mapcreator/maps4news[/url]
When I went to your website ([url]https://mapcreator.eu/[/url]) for a second I thought it was about projection mapping but then I was let down :(
[QUOTE=LennyPenny;52201330]When I went to your website ([url]https://mapcreator.eu/[/url]) for a second I thought it was about projection mapping but then I was let down :([/QUOTE]
We make actual maps. Like these: [url]https://maps4news.com/us/gallery/[/url]
[QUOTE=Mega1mpact;52201373]We make actual maps. Like these: [url]https://maps4news.com/us/gallery/[/url][/QUOTE]
I thought some of these seemed familiar, then moused over. Good job!
[URL="https://gist.github.com/fuchstraumer/d2df96ae0d1379ae40536a74d18d67a5"]I made my version of the polygon generation code that I was ranting about a few pages ago[/URL]. Upsides include: no infinite loops, no gotos, much greater simplicity, a list instead of a vector, and 2 fewer containers (the extra containers consumed a large amount of RAM for more complex layers). I'm trying to make a cleaner, parallelizable slicing system, but unfortunately my initial tests with std::async resulted in [I]greatly[/I] decreased performance. I'm not sure why; I tried to read up on the proper way to use std::async, but it just wasn't working. And since this application is supposed to be cross-platform and work on a wide range of hardware and OSes, std::async could have made things easier, I guess.
As a whole though, this part of the project is making me realize where my education is lacking. I have no background in algorithms and that sort of thing, and there's only so much I've been able to pick up in the past few days; I hadn't really needed to learn about them before, since I was doing almost nothing but GPU/graphics programming. I'm having a hard time now because I need to do a better job sorting the paths, then need to do motion planning (which I'm outright stealing from another repo, but I'd like to actually understand it), and still have to get infill generation working in a sane fashion. I also need to separate my GUI and slicing engine into two distinct projects. I don't like how tightly tied they are now, even if it makes debugging slightly easier.
Kinda terrifying because I have 5wks to go until this is supposed to be done and I'm not sure I'll make it. And I don't think it'd be a HUGE issue if I didn't, but I'd like to get off this project because I've been on it since last August.
You're probably running into async issues at any point that shares resources, since calling async takes a fair bit of time and any mutex is held much longer with async because of it. You could also have issues if you're storing your futures in vectors or other slower containers, but it looks like you're using lists, so that probably isn't it. You could have issues simply because your workload might not be large enough to make up for the overhead of async calls, but idk if that's even likely considering the scope of your project. If you're able to, try splitting the reference/source material into enough containers that your threads/async tasks can have their own sources, or at least a few more than one to use with mutexes. The optimal number is something you'll have to play with, but even 2 copies of the source material, with each async task using try_lock to see if it can use the source or one of the copies, should gain some performance. In my experience, you're almost always trading memory usage for speed in parallel/async programming in order to see any real tangible benefit.
What OS's are you looking to use it on?
Because if they support C++11, std::thread might be the solution you're looking for. IIRC, it works flawlessly on Windows and Linux, and mostly works well enough on Android. I think timed_mutex still isn't implemented on Android, but that's something you can work around easily. You could also look into pthreads if you're worried about signal handling (imo that's mostly irrelevant, but idk your deployment environments).
Async also has issues in Microsoft's implementation: it might run up to hardware concurrency tasks in parallel but then put the rest of your tasks in series on one thread, or it might not, which is bogus and fucking weird. Async is weird honestly, I hate it. With std::thread (or pthreads) you can at least ensure that all tasks get their own threads, and all logical cores will pick a thread from the pool when they're done with one and get shit done until there are no more threads to process.
[QUOTE=F.X Clampazzo;52202066]-snip-[/QUOTE]
I'm not a huge fan of std::async as it is right now, but I want to like it for how well it could abstract away a lot of the mess of dealing with std::thread.
I'm betting it's thread storage and such, though, because sharing resources could be a problem. Each layer of a 3D print can theoretically have its polygons generated independently - buuuut you need thread-safe access to the underlying mesh data. I'm just using admesh, so I'm not sure if that's the case or not. As it is, my new solution is already [I]much[/I] faster than the methods I'd used in the past, and is a hybrid of how Mandoline, Slic3r, and Cura all do their magic. Admesh alone saves me tons of time - I can have it generate shared vertex data and do a lot of the repair work I previously had to manage (which is how Cura does it). Warning: these files are a little under 2 MB, but here's an example of closed vs. unclosed:
[URL="http://fuchstraumer.github.io/unclosed_example.html"]unclosed[/URL], [URL="http://fuchstraumer.github.io/closed_example.html"]closed[/URL]
This was a "stress-test" of sorts, as this mesh is about 22mb and is the most complex mesh I've tossed at my application so far. Those SVGs both have several thousand points. Previously, no matter what I did, the output always looked a bit like the unclosed example and had tons of gaps and holes which made toolpath generation and infill generation a tremendous pain in the ass. Regardless, generating the 200+ layers to print this octopus took a few seconds (if that), so threading probably isn't worth the hassle right now.
[editline]edited[/editline]
the unfun part: I had made all of these changes in a "dev" namespace to avoid breaking functionality and now I'm going to roll it back in and see how hard it breaks my unit tests
Yeah if you're down to a few seconds for that then you're probably good to just leave it single-threaded and not have to deal with the headache. Unless you're planning on it doing some massive ass prints for some really big 3d printers, that probably have their own proprietary software for that anyway, you're probably good tbh. A few seconds won't kill anyone.
[QUOTE=Matthew0505;52195707]Seems like they were specifically chosen to have little sigfigs in binary.
-snip-
I'd guess XY and Z are converting floats to fixed precision numbers.[/QUOTE]
Heh, never heard of a "sigfig" before. Looks like you managed to extract a hell of a lot more information than I did:
[QUOTE=CarLuver69;52178068]Basically, a 10-bit signed value covers -512..511, and 12 bits gives -2048..2047 signed.
Then we have to account for the conversion back to a float. In this case, the format allows for X and Y to be -3.0>3.0 (3 / 512 = 0.005859375) and Z to be -7.5>7.5 (7.5 / 1024 = 0.00732421875).[/QUOTE]
This is how I'd repack the data:
[code]
float fX = -0.75f;
float fY = 0.725f;
float fZ = 1.925f;
int iX = (int)(fX / 0.0058593750f); // -128
int iY = (int)(fY / 0.0058593750f); // 123 (truncated from 123.73)
int iZ = (int)(fZ / 0.0073242188f); // 262 (truncated from 262.83)
// 10-bit X at bits 0-9, 10-bit Y at bits 10-19, 12-bit Z at bits 20-31
uint32_t result = (iX & 0x3FF) | ((iY & 0x3FF) << 10) | ((uint32_t)(iZ & 0xFFF) << 20);
[/code]
When unpacking the data, we shift the field we want into the topmost bits of the 32-bit word and then arithmetic-shift it back down into the lower 10 bits, which sign-extends the original value:
[code]
1111 1111 1111 1111 1111 1111 1000 0000 // -128, 32-bit: original value
0000 0000 0000 0000 0000 0011 1000 0000 // -128, 10-bit: compressed value (=896)
1110 0000 0000 0000 0000 0000 0000 0000 // -128, 10-bit: shifted into upper 32-bits
1111 1111 1111 1111 1111 1111 1000 0000 // -128, 32-bit: shifted back to lower 10-bits
[/code]
I'm surprised this kind of vector compression (if you want to call it that) isn't more commonplace!
[QUOTE=CarLuver69;52203081]-snip-[/QUOTE]
Protip: if you have a normalized vector, you only need to store two components, since you can calculate the third from the other two (up to its sign, so keep one sign bit). You could even store the magnitude at lower precision, if you want a more accurate direction and a less accurate magnitude.
[QUOTE=Fourier;52203100]Protip, if you have normalized vector, you need to store only 2 components. You can calculate third component out of other two. Maybe you can even store magnitude with less precision, if you wish to have more accurate direction and less accurate magnitude.[/QUOTE]
Ah true, but this is just some reverse engineering of a format I haven't been able to figure out until recently. It's used to store what I believe are offsets relative to the center of an object, which in turn is used to render a bullet hole on a vehicle:
[t]https://qxdtqa-sn3301.files.1drv.com/y4mYXJG42WcOrRAiJlFzxnOEqwP2gBOePmvvvua7AcK2vpjTUlzh2whtTTmf4XJz1Ah2ViEGKaFye4fyhHJO1dVBkdRKZGGXAWQHjXIFIpN3uTKxprr3WBsfLzxE6NxVj2cpe76QSvyrPsR4Deqw3daOwrvCBTna98EsKpjjO6g5LiSzy6NQ_vXfm0wfHlxZEGBd5mJeCo6Yt_gSeufC7ZLDg?width=1030&height=797&cropmode=none[/t]
If anyone's interested, [url=https://pastebin.com/KQDPsP0s]here's[/url] a dump of the data that's compressed using that crazy algorithm above :P
Is anyone here familiar with CryEngine, specifically with extracting assets from .pak files and finding them via the XML files?
I want to rip some sound clips from Prey for a future project.
[thumb]http://carp.tk/$/Discord_2017-05-09_02-19-32.png[/thumb]
You would think changing Console.Title in a very fast loop would be safe. Well, nope....
Who wins: 40,000 lines of OpenCL, or 1,000 lines of OpenGL?
[IMG]https://i.imgur.com/3rjIilq.png[/IMG]
I may be the only person in the known universe who thinks this, but jesus christ OpenGL is easy and straightforward compared to dicking about in OpenCL. No random holes in your geometry? Lots of freely available tooling? An easy way to dynamically set variables from the host in gl code? I've not had to manually parse any device side code for randomly inserted macros to stop myself from wanting to die, at all! I do miss OpenCL's explicit queue, and as it turns out OpenGL has a bunch of abstractions that literally don't mean anything in the modern day, as uniforms, textures, buffers, buffer nonsense, framebufferrendernadgers bufferrenderpfngltextureimagearraycubemap5fdij are actually all just bits of memory in the real world. These do explain a few OpenCL bugs I'd experienced - Nvidia crashing with unused textures is because OpenGL optimises unused uniforms away
Also, shitposting on reddit might have gotten me a job. I spent a couple of weeks building a space game (because RIP shitty swordfighting game you will be missed), and then someone sent me a message asking me if I wanted a job in suspiciously exactly the area that I'd like to work in (which is why I'm now finally getting to grips with OpenGL) after writing a long detailed post complaining about CSGO's networking. Interviews went well, and now I'm just waiting for final confirmation. Unfortunately its the sort of hush hush thing that you guys will probably enjoy in a few months
[QUOTE=Icedshot;52204553]-snip-[/QUOTE]
Nice score on the job! And I have to agree about OpenGL. Especially after setting up uniform buffers in Vulkan ([URL="https://gist.github.com/fuchstraumer/cf559058005434c4f322176df473f0b6"]link[/URL]) vs setting them up in OpenGL. Admittedly, how I laid out the structure initializer lists makes it appear bigger here but it still feels more painstaking and verbose than it does in OpenGL. Getting used to setting up the command queue is different, though. Not unpleasant, but it reminds me of immediate-mode OpenGL tbh. One thing I miss about CUDA was explicit access to different types of memory - like surface, texture, constant, and thread-local memory. It was useful being able to leverage each type. It was unfun, though, trying to debug register pressure and figure out why in the hell CUDA just shoved everything in the registers (to the point of bottlenecking the SM due to decreasing block size and threadcount to give each thread more register space)
The benefit of that UBO stuff is now [URL="https://gfycat.com/MammothThornyJenny"]yay spinning square[/URL]
[media]https://www.youtube.com/watch?v=yjpKeIzZpIw[/media]
It's getting there :')
[QUOTE=Icedshot;52204553]I may be the only person in the known universe who thinks this, but jesus christ OpenGL is easy and straightforward compared to dicking about in OpenCL. No random holes in your geometry? Lots of freely available tooling? An easy way to dynamically set variables from the host in gl code? I've not had to manually parse any device side code for randomly inserted macros to stop myself from wanting to die, at all! I do miss OpenCL's explicit queue, and as it turns out OpenGL has a bunch of abstractions that literally don't mean anything in the modern day, as uniforms, textures, buffers, buffer nonsense, framebufferrendernadgers bufferrenderpfngltextureimagearraycubemap5fdij are actually all just bits of memory in the real world. These do explain a few OpenCL bugs I'd experienced - Nvidia crashing with unused textures is because OpenGL optimises unused uniforms away[/QUOTE]
The opinion that OpenGL is more straightforward than OpenCL [I]when you're doing graphics[/I] isn't unpopular at all. Likewise, OpenCL, with all its problems, is a lot more convenient when your problem maps poorly to the graphics abstractions (after all, the reason these platforms were invented is that people tried doing GPGPU with Direct3D and OpenGL at first and it really sucked).