[QUOTE=paindoc;52914517]You get some overhead with that, though I'm not entirely sure how much. I'd weigh it based on how often you find yourself doing light copies.
unique_ptr, however, has practically no overhead to speak of. There's honestly very little reason to have raw pointers as class or struct members, anymore[/QUOTE]
Non-owning pointers?
[QUOTE=paindoc;52914517]You get some overhead with that, though I'm not entirely sure how much. I'd weigh it based on how often you find yourself doing light copies.
unique_ptr, however, has practically no overhead to speak of. There's honestly very little reason to have raw pointers as class or struct members, anymore[/QUOTE]
Yeah, I know about the control block overhead and the whole lot of problems it may cause; that's exactly why I said I'm not sure about it. I guess it all depends on how big the stored data is, so sharing huge vectors might prove useful (pretty much the same technique is literally omnipresent in Qt, by the way). It would obviously be inefficient for storing integral types or small strings, but since a type-erasure class just inherits the pros and cons of the underlying pointer, it may be a good idea to actually give users control over which pointer type is used.
Also, like the guy above posted, with all the smart pointers around, a raw pointer pretty much implies that it just doesn't own the data it points to. Pretty useful for showing your intent sometimes.
[QUOTE=gmoddertr;52914085]I am working on a Steam bot in C# at the moment. I want to be able to use the console constantly to send messages. I did that using something like:
[CODE]
Start:
string command = Console.ReadLine();
if (command == "clear")
{
    Console.Clear();
    goto Start; // to handle new commands
}
[/CODE]
This code works; the problem is that the "Start:" loop and Console.ReadLine block other callbacks and methods. For example, I can't receive messages from people while I'm stuck in Console.ReadLine. Whenever I exit the loop without using the goto, the other methods/callbacks start working again. How would you go about that?[/QUOTE]
You should spawn a new thread, handle the Steam stuff there, and do the console stuff in the main thread.
[editline]22nd November 2017[/editline]
[code]
using System;
using System.Threading;

class Program {
    static void Main() {
        Thread t = new Thread(Func);
        t.IsBackground = true;
        t.Start();

        while (true) {
            string input = Console.ReadLine();
            switch (input) {
                case "clear":
                    Console.Clear();
                    break;
                default:
                    break;
            }
        }
    }

    // handle the Steam bot callbacks here, off the main thread
    static void Func() {
    }
}
[/code]
[QUOTE=WTF Nuke;52913368]
Use a unique_ptr instead for m_payload.[/QUOTE]
But can I even dynamic_cast a unique_ptr safely in this context?
I started going that route and found some posts talking about how it was unsafe, among other things.
My payload is stored as the interface class, and the derived type is a template class, so I need to downcast it from interface to derived of the appropriate template type
Currently I'm doing it like this:
[code]
class Message {
private:
    struct Interface {
        virtual ~Interface() = default;
        // ... pure virtual methods ...
    };

    template <typename T>
    struct Derived : Interface {
        Derived(const T &t) : m_data(t) {}
        // ... override the virtual methods ...
        T m_data;
    };

    shared_ptr<Interface> m_payload;

public:
    template <typename T>
    Message(const T &t) : m_payload(new Derived<T>(t)) {}

    template <typename T>
    T &GetPayload() const {
        return dynamic_pointer_cast<Derived<T>>(m_payload)->m_data;
    }
};
[/code]
I'm trying to parse some NewPunch JSON, but I'm a little confused on this specific part:
[code]
"{\"ops\":[{\"insert\":\"haven't posted for 3 years\\nyet look at fp most days\\nmigrated today\\nnow back to lurking\\nsee you in 3 years\\n\"}]}"
[/code]
What the heck is ops?
I've googled a bit about it, and it seems to be related to JSON Patch, but all the examples only use "op" and are structured a bit differently from the example above.
Can someone help me understand this?
[QUOTE=tisseman890;52915059]I'm trying to parse some NewPunch JSON, but I'm a little confused on this specific part:
[code]
"{\"ops\":[{\"insert\":\"haven't posted for 3 years\\nyet look at fp most days\\nmigrated today\\nnow back to lurking\\nsee you in 3 years\\n\"}]}"
[/code]
What the heck is ops?
I've googled a bit about it, and it seems to be related to JSON Patch, but all the examples only use "op" and are structured a bit differently from the example above.
Can someone help me understand this?[/QUOTE]
This is only a guess, but it looks like an array of operations to run, with that specific example having an insert command to insert text into the post. Probably from an edit or automerge.
[QUOTE=Alice3173;52915162]This is only a guess, but it looks like an array of operations to run, with that specific example having an insert command to insert text into the post. Probably from an edit or automerge.[/QUOTE]
Makes sense, though it's the same for each post message.
It also seems that only insert is used.
[QUOTE=tisseman890;52915170]Makes sense, though it's the same for each post message.
It also seems that only insert is used.[/QUOTE]
I'm not certain of the exact details but an example:
[url]https://forum.facepunch.com/f/meta/gvqb/mention-so-people-get-a-notification/2/?json=1[/url]
Specifically the first post, the post itself:
[img]https://i.imgur.com/141Tc7s.png[/img]
the json:
[code]{
"PostId": "bfhlt",
"Country": "United Kingdom ",
"UserId": "cyqw",
"Username": "Jetamo",
"Message": "{\"ops\":[{\"insert\":{\"postquote\":{\"text\":\"twitch does not\\n\\n\\nEdit:\\nRemember, twitch is just an IRC server, not a forum\\n\",\"forumid\":\"48\",\"threadid\":\"70069\",\"postid\":\"499708\",\"postnumber\":\"22\",\"username\":\"Scratch\",\"userid\":\"1755\"}}},{\"insert\":\"Ah, I must be getting it mistaken with functionality that BetterTTV does - typing @ and will start bringing up the list of users as you type.\\n\"}]}",
"Meta": {
"Votes": null,
"Bans": null
},
"CreatedDate": "2017-11-18T18:07:07.8057924",
"LastEditDate": null,
"Deleted": false
}[/code]
Inside the message itself we've got this escaped JSON:
[code] "{
\"ops\": [
{
\"insert\": {
\"postquote\": {
\"text\": \"twitch does not\\n\\n\\nEdit:\\nRemember, twitch is just an IRC server, not a forum\\n\",
\"forumid\": \"48\",
\"threadid\": \"70069\",
\"postid\": \"499708\",
\"postnumber\": \"22\",
\"username\": \"Scratch\",
\"userid\": \"1755\"
}
}
},
{
\"insert\": \"Ah, I must be getting it mistaken with functionality that BetterTTV does - typing @ and will start bringing up the list of users as you type.\\n\"
}
]
}"[/code]
Looks like insert will most likely be the primary command for everything, but there are going to be subcommands as well. "postquote" is most likely a command itself, using the relevant data given there. ([url=https://forum.facepunch.com/f/meta/gvqb/mention-so-people-get-a-notification/2/#postbfhlt]This[/url] is the post I used for that example, btw.)
Another example: ([url=https://forum.facepunch.com/f/meta/cxif/Embed-Test/1/?json=1]JSON[/url], [url=https://forum.facepunch.com/f/meta/cxif/Embed-Test/1/]post[/url])
That one has other subcommands, most of which seem to be variations of the hotlink command, but even then the insert command seems to be the primary one regardless.
[QUOTE=Karmah;52915035]But can I even dynamic_cast a unique_ptr safely in this context?
I started going that route and found some posts talking about how it was unsafe, among other things.
My payload is stored as the interface class, and the derived type is a template class, so I need to downcast it from interface to derived of the appropriate template type
Currently I'm doing it like this:
[code]
class Message {
private:
    struct Interface {
        virtual ~Interface() = default;
        // ... pure virtual methods ...
    };

    template <typename T>
    struct Derived : Interface {
        Derived(const T &t) : m_data(t) {}
        // ... override the virtual methods ...
        T m_data;
    };

    shared_ptr<Interface> m_payload;

public:
    template <typename T>
    Message(const T &t) : m_payload(new Derived<T>(t)) {}

    template <typename T>
    T &GetPayload() const {
        return dynamic_pointer_cast<Derived<T>>(m_payload)->m_data;
    }
};
[/code][/QUOTE]
A failed dynamic cast returns nullptr, so couldn't you just test this on a few things and see if it ever returns one? If it does, things broke somehow and you can't use that approach. If you've already tried that and it hasn't broken yet, it should be fine.
edit: also use std::make_shared if you can; it allocates the control block and the object together. Small tweak, but worth doing.
OK, sounds good. It does work with a unique_ptr.
I never really need to share the pointer to the derived class; I only care about the reference to the underlying data (which only exists for as long as the message exists, which isn't very long anyway).
So I just made it a unique_ptr, and when down-casting it to return its underlying data, I dynamic_cast the raw pointer and return its member's stored data.
I also reworked your suggestion in this case to std::make_unique (but I will use make_shared for future shared pointers from now on)
It's always a good rule to use make_shared/make_unique. In this case it's just efficiency (for shared; for unique there isn't really a difference except avoiding a raw new), but other cases, like a vector of unique_ptrs, are different. For example, my_ptrs.emplace_back(new foo()); is objectively worse than my_ptrs.emplace_back(make_unique<foo>()); because emplace_back may throw before your heap-allocated foo gets wrapped in a unique_ptr, and you get a memory leak.
[QUOTE=WTF Nuke;52915941]It's always a good rule to use make_shared/make_unique. In this case it's just efficiency (for shared; for unique there isn't really a difference except avoiding a raw new), but other cases, like a vector of unique_ptrs, are different. For example, my_ptrs.emplace_back(new foo()); is objectively worse than my_ptrs.emplace_back(make_unique<foo>()); because emplace_back may throw before your heap-allocated foo gets wrapped in a unique_ptr, and you get a memory leak.[/QUOTE]
Damn, I didn't even know that. Good to know! I also forget that you can use unique_ptr for C-style arrays (why, I dunno, maybe someone can inform me).
There are so many features in C++ since C++11 that it's really hard to keep up with them. Like I keep forgetting std::variant is a thing, or that I can use std::valarray for maths at work.
[QUOTE=WTF Nuke;52915941]It's always a good rule to use make_shared/unique. [/QUOTE]
There are exceptions, though, that must be kept in mind. Like the one where you allocate some big object with make_shared, and then even if there's only one small weak_ptr left, the whole chunk of data is still kept allocated due to the nature of make_shared's allocation (one piece instead of two). Also, no custom deleters.
[QUOTE=Alice3173;52915238] Long post [/QUOTE]
Makes good sense.
So it seems it's pretty easy to handle this format.
Update:
I just found out why garry is formatting it like that.
[URL]https://quilljs.com/docs/delta/[/URL]
He seems to be using the Delta format, which the text editor uses.
[QUOTE=cartman300;52914977]you should spawn a new thread and handle steam stuff there and do the console stuff in the main thread
[editline]22nd November 2017[/editline]
[code]
using System;
using System.Threading;

class Program {
    static void Main() {
        Thread t = new Thread(Func);
        t.IsBackground = true;
        t.Start();

        while (true) {
            string input = Console.ReadLine();
            switch (input) {
                case "clear":
                    Console.Clear();
                    break;
                default:
                    break;
            }
        }
    }

    // handle the Steam bot callbacks here, off the main thread
    static void Func() {
    }
}
[/code][/QUOTE]
Thank you, but I couldn't figure out how to move the whole bot section into a function. There's a load of callbacks etc. I tried to move only the "LogIn" function to a new thread, but it doesn't work.
Anyway, looks like I fixed it using this:
[CODE]
static void Main(string[] args)
{
    Thread T = new Thread(getCommands);
    T.IsBackground = true;
    T.Start();
    Console.Title = "fxBot";
    username = "myusername";
    pass = "mypass";
    LogIn();
}

static void getCommands()
{
    while (true)
    {
        string Input = Console.ReadLine();
        switch (Input)
        {
            case "clear":
                break;
            default:
                break;
        }
    }
}
[/CODE]
I'm kinda curious if anyone knows of an earlier extension for bindless textures in OpenGL. For a while I've been using GL_NV_bindless_texture, GL_ARB_gpu_shader5, and GL_ARB_gpu_shader_int64; however, one of my machines doesn't seem to fully support GLSL version 4.50.
[QUOTE=antianan;52916343]There are exceptions, though, that must be kept in mind. Like the one where you allocate some big object with make_shared, and then even if there's only one small weak_ptr left, the whole chunk of data is still kept allocated due to the nature of make_shared's allocation (one piece instead of two). Also, no custom deleters.[/QUOTE]
True about the weak_ptr drawback. I never use shared_ptr, but it's definitely something to think about. I would say that having no custom deleter is kind of the point, since we're trying to remove new+delete. If you need a custom deleter, then you probably don't want to allocate with new either. And if you use a custom allocator, you can use allocate_shared.
Having some issues with C# index lookup.
I'm using list.IndexOf(item), which I assume is what I should be using if I want the index of an item within the list.
However, it returns an index of -1, but the item resides at position 1957 of the list:
[t]https://tenryuu.blob.core.windows.net/astrid/2017/11/17-11-24_16-24-23-twitchBot_(Debugging)_-_Microsoft_Visual_Studio_.png[/t]
This works, but it's just yuck:
[code]
await getUrlAsync("/default/?cmd=QueueItems&param1=" + Playlist.FindIndex(song => song.ToString().Equals(Song.ToString())));
[/code]
[QUOTE=Scratch.;52918582]Having some issues with C# index lookup.
I'm using list.IndexOf(item), which I assume is what I should be using if I want the index of an item within the list.
However, it returns an index of -1, but the item resides at position 1957 of the list:
[t]https://tenryuu.blob.core.windows.net/astrid/2017/11/17-11-24_16-24-23-twitchBot_(Debugging)_-_Microsoft_Visual_Studio_.png[/t]
This works, but it's just yuck:
[code]
await getUrlAsync("/default/?cmd=QueueItems&param1=" + Playlist.FindIndex(song => song.ToString().Equals(Song.ToString())));
[/code][/QUOTE]
IndexOf uses the default equality comparer, which for most objects will only check whether it's the same reference (instance). Can't say this for sure without the rest of your code, though.
If you still want to use IndexOf, you'd have to implement IEquatable<Song> or override object.Equals on Song so that it checks whether two Songs have the same value.
[QUOTE=horsedrowner;52918635]IndexOf uses the default equality comparer, which for most objects will only check whether it's the same reference (instance).[/quote]
That would make sense, since it's doing a completely new lookup from the source, meaning the list I'm fetching from would have completely different references.
So I'm working on upgrading my old modular noise library. Mostly, I'm trying to add different backends instead of just the CUDA backend. First, I'll just be doing a simple CPU backend that works as a fallback; eventually, I'll probably add a Vulkan backend purely for launching compute jobs.
I'm trying to keep the function signatures used to launch a module the same across backends, and to keep them very nearly C-style so they're easy to compile into a dynamic library. My question relates to optimization, though:
[cpp]
void AddLauncher(float* output, const float* input0, const float* input1, const int width, const int height);
[/cpp]
Given that I'll be using C-style arrays, what's the best way I can help the compiler optimize this (here, output = input0 + input1 per element)? This is pretty much how all the functions look (a float* output and two const float* inputs). I already can't use SIMD stuff or something like std::valarray, since I'd have to copy the values into a new container. The CPU implementation will always be slower, but I still want it to be reasonably fast.
Having more trouble with my database. I've already exhausted most of the solutions from SO. My Flask app is losing connection overnight to the database because of inactivity. I've already ensured that SQLAlchemy's engine is set to recycle connections in the pool every 30 minutes, which is far less than MySQL's global wait_timeout of eight hours. I'm trying not to have to rewrite all of my shit with Flask-SQLAlchemy.
[QUOTE=Adelle Zhu;52920462]Having more trouble with my database. I've already exhausted most of the solutions from SO. My Flask app is losing connection overnight to the database because of inactivity. I've already ensured that SQLAlchemy's engine is set to recycle connections in the pool every 30 minutes, which is far less than MySQL's global wait_timeout of eight hours. I'm trying not to have to rewrite all of my shit with Flask-SQLAlchemy.[/QUOTE]
Use a site that does reliability pings and point it at a page that hits the DB.
Is there any possible way I could query for CUDA support at runtime? I'm trying to figure out how to handle this for what I mentioned in my previous post, as there are three backends: CUDA, Vulkan, and the dumb/brute-force CPU fallback. I know I can delay-load DLLs on Windows, so I could delay-load the DLL containing my CUDA implementation, but I don't know how to query for CUDA support at all.
The only way I can think of doing this would involve redoing a significant portion of my code, but I feel like there are products and games that do this somehow. There has to be some way to get this info at runtime, right?
[editline]edited[/editline]
Answered my own question after some serious Google-fu to word the question right. I can call cudaGetDeviceCount() on a non-CUDA PC as long as I package cudart.dll with my application. It'll just report zero devices on PCs that don't have any CUDA hardware, and the rest is easy to manage.
With GLEW, I know you're supposed to initialize it after you create the window, but for some reason having it set up this way in my class doesn't let it correctly initialize. When I call glGenBuffers(), it throws an exception. Is it because of how my class is designed? Setting it up manually in main() works fine, but the code below throws it off.
[cpp]
Engine() : sf::RenderWindow(){
// setup the SFML renderwindow
string windowTitle = "OpenGL";
sf::ContextSettings contextSettings;
contextSettings.antialiasingLevel = 4;
contextSettings.depthBits = 64;
sf::RenderWindow::create(sf::VideoMode(800, 600), windowTitle, sf::Style::Resize | sf::Style::Close, contextSettings);
// setup GLEW
glewExperimental = GL_TRUE;
if(!glewInit()) printf("ERROR: GLEW failed to initialize.\n");
glViewport(0, 0, sf::RenderWindow::getSize().x, sf::RenderWindow::getSize().y);
...
}
[/cpp]
[QUOTE=The Inzuki;52925761]With GLEW, I know you're supposed to initialize it after you create the window, but for some reason having it set up this way in my class doesn't let it correctly initialize. When I call glGenBuffers(), it throws an exception. Is it because of how my class is designed? Setting it up manually in main() works fine, but the code below throws it off.
[cpp]
Engine() : sf::RenderWindow(){
// setup the SFML renderwindow
string windowTitle = "OpenGL";
sf::ContextSettings contextSettings;
contextSettings.antialiasingLevel = 4;
contextSettings.depthBits = 64;
sf::RenderWindow::create(sf::VideoMode(800, 600), windowTitle, sf::Style::Resize | sf::Style::Close, contextSettings);
// setup GLEW
glewExperimental = GL_TRUE;
if(!glewInit()) printf("ERROR: GLEW failed to initialize.\n");
glViewport(0, 0, sf::RenderWindow::getSize().x, sf::RenderWindow::getSize().y);
...
}
[/cpp][/QUOTE]
I'd change your glewInit() call to look more like this:
[cpp]
const GLenum init = glewInit();
if (init != GLEW_OK) {
const std::string err_str = std::string("GLEW failed to init: ") + reinterpret_cast<const char *>(glewGetErrorString(init)) + "\n";
throw std::runtime_error(err_str);
}
[/cpp]
If GLEW fails to init for some reason, it's good to throw an exception right there so that you catch the error (mostly for your own purposes) as it happens instead of later on, especially because failing to initialize GLEW means things aren't going to work. Note that glewInit() returns GLEW_OK (which is 0) on success, so your if (!glewInit()) check is actually inverted: it prints the error when init succeeds and stays silent when it fails. Also, it's generally preferable to avoid printf in most C++ code; output to std::cerr instead, or use a logging library like easylogging++. I don't see anything else outright wrong here, though.
I have a Git question: if I host my repository on my own hardware and then expose it to select users/machines myself, do I have to pay anything to GitHub? Trying to set up a private repo without having to pay a fee.
Bonus points: is there any legal way to do the same thing but using Perforce software instead?
Sorry if this post doesn't fit here, wasn't sure but also not sure where else to ask.
I'm pretty sure you can set up your own private repo on your own server.
GitHub just won't host any private repos for free, IIRC, so you have to do it yourself (but you can still use their software).
Bitbucket offers free private repos if that's all you're after.
[QUOTE=SataniX;52927269]Bitbucket offers free private repos if that's all you're after.[/QUOTE]
I was more wondering about licensing for the software. I can host the repo on one of my machines, but I didn't know if I'd still have to pay to use Git or Perforce. It seems Git is free and Perforce Helix is free for teams of 5 or fewer, so I think I have my answer.
Cheers friends! <3