[QUOTE=WTF Nuke;51803138]He says "Yeah but "modern C++" specifically means "don't use bare pointers, use RAII everywhere, shitloads of templates, etc"." which is not at all what modern C++ means to most people. RAII everywhere is [B]the[/B] mantra of C++, modern and otherwise. Bare pointers are not frowned upon in modern C++, just owning bare pointers (for which smart pointers are preferred). I really don't see why anyone would be against smart pointers and ownership semantics. And modern C++ as most people know it (lambdas, smart pointers, move semantics, auto, constexpr) has almost no reliance on templates, or rather has no more reliance than C++03 and below. The standard library is chock full of templates and if people don't like templates they really aren't using C++ to the fullest, regardless of how modern it is. All in all this just seems a bit like the ramblings of a guy who uses C++ as C with classes, I mean really what are you being indoctrinated into? Making the compiler do more work for you?[/QUOTE]
I missed that JBlow response.
[QUOTE] I mean really what are you being indoctrinated into? Making the compiler do more work for you? [/QUOTE]
Roasted!
I don't know any other way, as when I was introduced to the language last year at Uni it was with the C++14 standard, and usage of features such as smart pointers and templated algorithms with lambdas was pretty much mandated. Honestly I don't understand JBlow. He's writing his own language, Jai, which supposedly has the primary purpose of making writing games easier. How a low-level C-style language is supposed to aid in that, I have no idea. Oh well, I guess he'd have a better idea than most, given he's spent a cumulative decade-plus making two.
[QUOTE=JWki;51803232]One reason why most people that aren't really that convinced of C++ as a good language in general tend to avoid the modern flavour might be that a LOT of the new language features and STL elements are either a) a clear attempt to fix obvious design flaws that happened earlier in the development of the language or b) embracing the things people tend to not like about C++.
While a) is just a symptom that C++ has some issues in its basic design, b) is what might be off-putting about recent language developments to people like Jon Blow.[/QUOTE]
I don't think the issue is that people who don't like C++ avoid modern C++, but rather that people who don't like C++ haven't encountered modern C++. I'm wondering what you consider to be design flaws fixed by the standard library (specifically the newer features). I don't mean the new standards themselves, because obviously defects and flaws are fixed at the language level, but standard library additions that are supposed to fix previous mistakes. Most of the standard library additions, such as the threading library, have been things that didn't exist before rather than redesigns of things that were done poorly. I'm also not sure what you mean by embracing what people don't like about C++. The only things introduced into the standard are approved papers written by users of C++. Now granted, the average C++ user isn't going to write a paper, much less get it approved, but clearly someone wants these new features. What sort of standard library support would you want?
[QUOTE=gayNightOwl;51803235]I missed that JBlow response.
Roasted!
I don't know any other way, as when I was introduced to the language last year at Uni it was with the C++14 standard, and usage of features such as smart pointers and templated algorithms with lambdas was pretty much mandated. Honestly I don't understand JBlow. He's writing his own language, Jai, which supposedly has the primary purpose of making writing games easier. How a low-level C-style language is supposed to aid in that, I have no idea. Oh well, I guess he'd have a better idea than most, given he's spent a cumulative decade-plus making two.[/QUOTE]
Making two of the most critically acclaimed indie games in history, including one of the first indie games ever to get as much attention as AAA titles at the time, yeah.
I'd be very very careful with what they teach in C++ courses at Uni. I don't know anything about your course specifically, but someone designing and giving a language lecture at Uni doesn't necessarily have a good grasp on how that language is effectively used in production. Those are the same people advocating big O notation as the fundamental metric for performance. If they tell you std::deque is a great thing, stop listening.
Jai isn't meant to make writing games easier in general; it's meant to make it easier to write games in a good way. The tech for games will ALWAYS be written in a low-level language, because you need control over memory and you need native execution, and Jai is designed to be a better approach to a systems language, one that makes writing the low-level code you NEED underneath any game easier. And it's getting pretty good at that - just look at how easy it is to switch between SoA and AoS storage for data structures, something that's a painful transformation to do in C++ most of the time.
[editline]10th February 2017[/editline]
[QUOTE=WTF Nuke;51803241]I don't think the issue is that people who don't like C++ avoid modern C++, but rather that people who don't like C++ haven't encountered modern C++. I'm wondering what you consider to be design flaws fixed by the standard library (specifically the newer features). I don't mean the new standards themselves, because obviously defects and flaws are fixed at the language level, but standard library additions that are supposed to fix previous mistakes. Most of the standard library additions, such as the threading library, have been things that didn't exist before rather than redesigns of things that were done poorly. I'm also not sure what you mean by embracing what people don't like about C++. The only things introduced into the standard are approved papers written by users of C++. Now granted, the average C++ user isn't going to write a paper, much less get it approved, but clearly someone wants these new features. What sort of standard library support would you want?[/QUOTE]
I'll just hope the forum will auto-merge my two posts; if it doesn't, I'll do it by hand.
So, I have encountered modern C++. I don't like C++ very much anymore.
I used to be evangelistic about it - always use the most recent compilers to get the newest language features, advocate smart pointers, etc. - but the more I used those things, the less I liked them. So I don't think that's a valid assumption. Funnily enough, Jon Blow also often states that he doesn't like game development on Linux, which always leads to funny discussions where people try to ease him into the Linux mindset and comfort him that it'll get easier as he gets used to Linux - while he's been using Unix systems for servers and the like since before Linux even existed, so it's clearly not a matter of not being used to the Unix way.
So, when I say design flaws being fixed, what comes to mind are the most recent proposals about reflection and introspection. That's a thing that has been missing from C++ from the very start. Languages with introspection support have existed for as long as C++, and I think everybody will agree that it's an incredibly helpful feature to have - I'd argue that any advanced C# user in this forum relies on it for a lot of things, and rightfully so. So not having it in the first place is a design flaw right there.
Same goes for additions like std::aligned_storage - explicit control over alignment is crucial in many domains where C++ is the primary language being used, and always has been, so when trying to "fix C", you'd expect that to be something that's considered in the design of the language.
When I said fixing flaws I didn't necessarily mean fix existing things that are broken, but rather introduce things that should have been first class language features from the very start.
On the topic of embracing language features that people don't like - that was poorly phrased, sorry. I wasn't referring to the average C++ user (average not being used in a judgmental fashion here; please mind English is not my first language), but rather to, well, people who don't like C++ very much in the first place even though they might still use it.
An often-heard complaint is that the committees that propose and approve new standard additions are often led by, and sometimes largely consist of, people who aren't necessarily very close to actual production code. Or to phrase it more controversially, there are voices saying that the C++ committee has no clue what the language would actually benefit from in production usage.
I don't have a solid opinion on that because I don't know the whole C++ committee's biographies but I will say that some of them do not actually have a lot of experience in the world of software development.
Got CS:GO lightmap reading working, although I still haven't figured out how the layout specified in the BSP works so I'm having to pack the atlas myself:
[t]http://files.facepunch.com/ziks/2017/February/09/lightmap4.png[/t]
[editline]10th February 2017[/editline]
[vid]http://files.facepunch.com/ziks/2017/February/10/lightmaps.mp4[/vid]
[QUOTE=Ziks;51803303]Got CS:GO lightmap reading working, although I still haven't figured out how the layout specified in the BSP works so I'm having to pack the atlas myself[/QUOTE]
Are you using any specific middleware or algorithm?
[QUOTE=JohnnyOnFlame;51803492]Are you using any specific middleware or algorithm?[/QUOTE]
I was using [url=https://github.com/ChevyRay/RectanglePacker]this[/url], but the result wasn't dense enough (it would produce something like [url=http://files.facepunch.com/ziks/2017/February/09/lightmap2.png]this[/url]) so I adapted it a bit by changing what order it considers potential positions to place each subrect.
[editline]10th February 2017[/editline]
Here's my modified version:
[url]https://github.com/Metapyziks/SourceUtils/blob/master/SourceUtils/RectanglePacker.cs[/url]
The result seems pretty good considering how short it is.
[QUOTE=Ziks;51803507]I was using [url=https://github.com/ChevyRay/RectanglePacker]this[/url], but the result wasn't dense enough (it would produce something like [url=http://files.facepunch.com/ziks/2017/February/09/lightmap2.png]this[/url]) so I adapted it a bit by changing what order it considers potential positions to place each subrect.
[editline]10th February 2017[/editline]
Here's my modified version:
[url]https://github.com/Metapyziks/SourceUtils/blob/master/SourceUtils/RectanglePacker.cs[/url]
The result seems pretty good considering how short it is.[/QUOTE]
Seems pretty close to what I'm using, still that's an interesting approach to that problem, thanks!
[QUOTE=JohnnyOnFlame;51803582]Seems pretty close to what I'm using, still that's an interesting approach to that problem, thanks![/QUOTE]
Just made the result much more compact now, it used to produce this for de_dust2:
[url=http://files.facepunch.com/ziks/2017/February/10/de_dust2_lightmap_old.png][img]http://files.facepunch.com/ziks/2017/February/10/de_dust2_lightmap_old_thumb.png[/img][/url]
And now produces this:
[url=http://files.facepunch.com/ziks/2017/February/10/de_dust2_lightmap.png][img]http://files.facepunch.com/ziks/2017/February/10/de_dust2_lightmap_thumb.png[/img][/url]
[QUOTE=JWki;51803242] I don't know anything about your course specifically, but someone designing and giving a language lecture at Uni doesn't necessarily have a good grasp on how that language is effectively used in production. Those are the same people advocating big O notation as the fundamental metric for performance. If they tell you std::deque is a great thing, stop listening.
[/QUOTE]
I don't know about your uni, but in my experience, if you thought big O directly correlates with and predicts performance then you really didn't understand asymptotic notation and how/why it's used. My professors made it very clear that you might have an algorithm with a great big O but huge constants, making it intractable in reality. Alternatively, you might have an algorithm with terrible big O, like insertion sort, that something like merge sort switches to when the numbers get small enough. They tell us why we gloss over the constants, but if we forget about them then that's a massive failure of understanding.
[QUOTE=gayNightOwl;51802954]snip[/QUOTE]
Hey, I think I have this same keyboard (CM MasterKeys Pro L)! I'm going to try this out.
It's too damn cold here so some tropical psychosis sounds good right about now.
[QUOTE=Ziks;51803507]-lightmaps-[/QUOTE]
You're supposed to pack it yourself. The space in between the lightmaps is full of weird (mostly useless) data.
[QUOTE=JWki;51803242]-long post on C++-[/QUOTE]
As I understand it, many of the major OS manufacturers have spots on the committee. The convener is currently a Microsoft dude, and Google, Red Hat, and Intel have a number of spots on the committee too, so there are a lot of people on the standards committee who contribute to the Linux kernel. Edison Design Group has a number of seats as well, but they mostly make front-ends, iirc.
I think though that there are differences in priorities still between OS producer and your general software engineer. Being the person that writes software that runs on an OS would naturally change your priorities, I imagine.
I don't deny that C++ is missing some big features. Aligned storage and allocation is one I've run into as well, in the various SIMD projects I've mentioned.
I agree that there's a bit of a problem with many people being familiar only with the older style of C++ instead of the newer style. And there are those against the newer style who have poor justification or no justification for their stance. When I was first getting started, many of the tutorials I found referenced hilariously out-of-date methods despite being relatively recent. Take the "[URL="https://sites.google.com/site/letsmakeavoxelengine/"]Lets Make a Voxel Engine[/URL]" tutorial. Block data is allocated using this old method:
[cpp]
class Chunk
{
public:
    Chunk();
    ~Chunk();

    void Update(float dt);
    void Render(OpenGLRenderer* pRenderer);

    static const int CHUNK_SIZE = 16;

private:
    // The blocks data
    Block*** m_pBlocks;
};

Chunk::Chunk()
{
    // Create the blocks
    m_pBlocks = new Block**[CHUNK_SIZE];
    for (int i = 0; i < CHUNK_SIZE; i++)
    {
        m_pBlocks[i] = new Block*[CHUNK_SIZE];
        for (int j = 0; j < CHUNK_SIZE; j++)
        {
            m_pBlocks[i][j] = new Block[CHUNK_SIZE];
        }
    }
}

Chunk::~Chunk()
{
    // Delete the blocks
    for (int i = 0; i < CHUNK_SIZE; ++i)
    {
        for (int j = 0; j < CHUNK_SIZE; ++j)
        {
            delete[] m_pBlocks[i][j];
        }
        delete[] m_pBlocks[i];
    }
    delete[] m_pBlocks;
}
[/cpp]
Coming from Python, I really wasn't a fan of this. When I showed this to a friend, he asked me why I wasn't using std::vector. I didn't even know std::vector was a thing. That code was replaced, then, with this:
[cpp]
#include <cstdint>
#include <vector>

static constexpr unsigned int CHUNK_SIZE_XZ = 32;
static constexpr unsigned int CHUNK_SIZE_Y = 128;

using BlockContainer = std::vector<uint8_t>;

class Chunk
{
public:
    Chunk();

    // Dtor can be handled by the compiler, since we're using a std::vector
    ~Chunk() = default;

    BlockContainer Blocks;
};

Chunk::Chunk()
{
    // Allocate space for this chunk's block container
    Blocks.reserve(CHUNK_SIZE_XZ * CHUNK_SIZE_XZ * CHUNK_SIZE_Y);
}
[/cpp]
That whole tutorial series is chock-full of (in my unqualified opinion) poor choices when it comes to utilizing C++. But choices like that are not uncommon. std::vector does have costly reallocation calls (although with primitive types most compilers do fine, falling back to a fast memmove rather than element-by-element copies), but a simple call to reserve() can help avoid them. Blocks.resize() works even better in some cases, since it value-initializes the elements too, meaning you can index through them like a C-style array created with "new"; reserve() doesn't allow that, as it only allocates capacity without changing the size. Modern C++ also has a feature-rich library, and Sean Parent's talk "C++ Seasoning" does a better job of expanding on this than I can, but a common example is preferring standard library algorithms over your own, and using them to replace raw "for" loops wherever possible. Example:
[cpp]
// (b is an "InterSectionPoint", one of many we will use for connections to the perimeter)

// Append points from b.NextPoints into newPath.
std::copy(b.NextPoints.begin(), b.NextPoints.end(), std::back_inserter(newPath));

// This lambda decides which direction we offset the point in. Best to use
// explicit types to avoid possible conversion issues due to platform architecture.
auto offset = [this, b](ClipperLib::IntPoint pt) -> ClipperLib::IntPoint {
    pt.Y += this->PerimeterOverlap * (b.Type == ipType::Upper ? 1 : -1);
    return pt;
};

// Iterate through newPath and apply the lambda to each point, making sure
// our path's points overlap with the perimeter.
std::transform(newPath.begin(), newPath.end(), newPath.begin(), offset);
[/cpp]
The original version of this code used several loops. The first was just copying elements with "newPath.push_back()"; std::copy reduces that to one line, and std::back_inserter() keeps the push_back semantics, so I'm still not overwriting existing elements. std::transform is ridiculously powerful because it works with lambda expressions. In this case, there was a loop iterating over "newPath" and applying the offset logic to each point in the loop body itself. The function isn't complex, but it's actually used a ton in the larger method this snippet is from. Moving it into a lambda means I can call it repeatedly instead of rewriting the same code over and over, and also means I can easily reuse it with std::transform in other locations.
I dunno, Modern C++ felt a lot better to use once I learned to use it.
I agree entirely with what WTF Nuke said:
[quote]I mean really what are you being indoctrinated into? Making the compiler do more work for you?[/quote]
Modern C++ removes a lot of the work from the individual programmer and moves it to the compiler. Now, that being said, it means you have to be able to trust the compiler and the standard library implementers. A few users in this thread have run into issues with that, so there are some downsides to modern C++ paradigms. Still, those are edge cases, and when one of the users reached out to MSVC's main standard library implementer, he offered clarification. For 90%+ of use cases and users, modern C++ is entirely the way to go.
[QUOTE=DoctorSalt;51804039]I don't know about your uni, but in my experience, if you thought big O directly correlates with and predicts performance then you really didn't understand asymptotic notation and how/why it's used. My professors made it very clear that you might have an algorithm with a great big O but huge constants, making it intractable in reality. Alternatively, you might have an algorithm with terrible big O, like insertion sort, that something like merge sort switches to when the numbers get small enough. They tell us why we gloss over the constants, but if we forget about them then that's a massive failure of understanding.[/QUOTE]
I understand big O notation, thanks. What I was implying is that the upper complexity bound of your algorithm doesn't matter when you run it on a data structure with shitty memory access patterns. An O(log n) algorithm isn't much use when you waste hundreds of cycles waiting for stuff to get loaded into cache, while you could run an O(n) one over a linear data structure without nearly as many cache misses.
I'm quite excited right now. After doing some quick research on bindless textures, I've just realized that I can effectively store texture handles within a vertex attribute buffer (alongside the vertices, normals, tangents, bitangents, etc.) for a mesh.
If I understand this right, this could mean that each triangle in a mesh could have its own unique texture with little work. Importantly, this means I wouldn't have to group objects together by common materials and then render, I could just render an entire mesh with 1 vao binding + 1 draw call and be done.
Even more importantly, if 1 model happened to have many small meshes, I could just aggregate them all into a single vertex array object, meaning 1 draw call for an entire ~400 mesh object with roughly ~30 different materials built up of ~80 different textures
Holy shit
Apparently this technique is also useful for deferred rendering, but I'm not quite sure yet. I'm going to have to study this some more to find the implications
[QUOTE=JWki;51805439]I understand big O notation, thanks. What I was implying is that the upper complexity bound of your algorithm doesn't matter when you run it on a data structure with shitty memory access patterns. An O(log n) algorithm isn't much use when you waste hundreds of cycles waiting for stuff to get loaded into cache, while you could run an O(n) one over a linear data structure without nearly as many cache misses.[/QUOTE]
Yeah, you're both agreeing with each other here. He was saying your characterization was unfair - the one where Algorithms and Complexity lecturers supposedly teach that big O is the only thing to consider.
[QUOTE=Karmah;51806109]I'm quite excited right now. After doing some quick research on bindless textures, I've just realized that I can effectively store texture handles within a vertex attribute buffer (along side the vertices, normals, tangents, bitangents, etc) for a mesh
If I understand this right, this could mean that each triangle in a mesh could have its own unique texture with little work. Importantly, this means I wouldn't have to group objects together by common materials and then render, I could just render an entire mesh with 1 vao binding + 1 draw call and be done.
Even more importantly, if 1 model happened to have many small meshes, I could just aggregate them all into a single vertex array object, meaning 1 draw call for an entire ~400 mesh object with roughly ~30 different materials built up of ~80 different textures
Holy shit
Apparently this technique is also useful for deferred rendering, but I'm not quite sure yet. I'm going to have to study this some more to find the implications[/QUOTE]
I've been looking at some of the stuff you brought up - thanks for doing so. Some of the texturing work feels really useful for me, as I think my planet generation stuff will be switching textures a lot and using a lot of textures in general. The current texture system in OpenGL feels really dated, like an artifact of the immediate-mode era.
Got the dither generator working well with limited palettes
[IMG]https://dl.dropboxusercontent.com/u/12024286/Dev%20stuff/Dither%20Improvements.PNG[/IMG][img]https://my.mixtape.moe/myqanb.gif[/img]
[QUOTE=phygon;51806685]Got the dither generator working well with limited palettes
[IMG]https://dl.dropboxusercontent.com/u/12024286/Dev%20stuff/Dither%20Improvements.PNG[/IMG][/QUOTE]
Wow, you're on a [I]roll.[/I]
[editline]asd[/editline]
As in, roll.exe.
What dithering algorithm are you using? Dithering is the bee's knees in my opinion - fun to play around with. I was thinking of using it as an artistic choice in a game, because it can look super stylistic, but I never got around to making anything.
[QUOTE=WTF Nuke;51806742]What dithering algorithm are you using? Dithering is the bees knees in my opinion, fun to play around with. I was thinking of using it as an artistic choice in a game because it can look super stylistic but I never got around to making anything.[/QUOTE]
I guess it makes it easy to make your game look like some late 90s render sequence.
Yeah but sadly my art skills are from bad early 90s games. Actually that's too generous, early 80s is more accurate.
[QUOTE=WTF Nuke;51806742]What dithering algorithm are you using? Dithering is the bees knees in my opinion, fun to play around with. I was thinking of using it as an artistic choice in a game because it can look super stylistic but I never got around to making anything.[/QUOTE]
I'm not using any existing one that I know of. I'm just going through the color palette, finding which two colors, when combined, are the closest to a given pixel (comparing at however many intervals of strength as there are dither patterns), then baking that all into a texture where the R value is the first color index, the G value is the second color index, and the B value is the index of the pattern to use. Then it sends that texture + the palette (turned into a texture) + the patterns (turned into another texture) into a shader that just reads off the pixel color and then selects the colors + pattern from that.
It takes a second to compute when applying the material but it's ~computationally free~ once the game is running.
[editline]s[/editline]
If you want to take a look at the source, I'm going to post it on GitHub when it's done.
[QUOTE=paindoc;51805433]Lots of C++ stuff[/QUOTE]
Can you explain what you mean when you say that C++ doesn't have aligned storage and allocation?
GitHub changed their theme
[editline]ohno[/editline]
uuuuuuhhhhhhHHHHHHHHHH doing butt stuff
[img]http://i.imgur.com/r9cB6Ce.png[/img]
[QUOTE=Dr Magnusson;51807499]Can you explain what you mean when you say that C++ doesn't have aligned storage and allocation?[/QUOTE]
Yeah, it was perhaps a bit ill-worded. What I meant is that there was no built-in way of specifying alignment for your allocations; you used to have to use compiler-specific calls like _aligned_malloc or align your pointers by hand. They added std::aligned_storage at some point to help with that.
Ofc default alignments for objects were always respected, but there are a lot of use cases where you want custom alignment for stuff.
Can I join the dithering club?
[code]void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // You need to load the Bayer matrix texture first! On the JS console:
    // gShaderToy.SetTexture(1, {mSrc:'http://i.imgur.com/6dbb0p7.png', mType:'texture', mID:1, mSampler:{ filter: 'nearest', wrap: 'repeat', vflip:'true', srgb:'false', internal:'byte' }});

    float steps = 1.0 + floor((iMouse.x / iResolution.x) * 32.0);
    vec3 luminance = vec3(0.299, 0.587, 0.114); // B&W luminance coefficients

    float dither = texture2D(iChannel1, fragCoord.xy / 8.0).r;
    vec4 tex = texture2D(iChannel3, fragCoord.xy / iResolution.xy);

    float c = dot(tex.rgb, luminance.rgb);
    c = floor(dither + c * steps) / steps;

    fragColor = vec4(c);
}[/code]
[vid]https://s1.webmshare.com/BOwZW.webm[/vid]
So I decided to try my rendering engine on the new laptop I got; it has an nVidia GeForce GTX 650M and an integrated Intel GPU.
Integrated:
[thumb]http://carp.tk/$/LibTech_2017-02-11_14-16-02.png[/thumb]
nVidia:
[thumb]http://carp.tk/$/LibTech_2017-02-11_14-16-18.png[/thumb]
k.
[editline]11th February 2017[/editline]
Obviously I'm running the latest stable drivers for both of them. Yet the nVidia card tends to fuck up arbitrary shit. I even get diagonal screen tearing when scrolling in Firefox on ALL websites.
Done with the dithering
[IMG]https://my.mixtape.moe/gcxyuz.gif[/IMG]
[IMG]https://my.mixtape.moe/ffnmwu.gif[/IMG]
[QUOTE=cartman300;51808353]So i decided to try my rendering engine on the new laptop i got, it has nVidia GeForce GTX 650M and an integrated intel GPU.
Integrated:
[thumb]http://carp.tk/$/LibTech_2017-02-11_14-16-02.png[/thumb]
nVidia:
[thumb]http://carp.tk/$/LibTech_2017-02-11_14-16-18.png[/thumb]
k.
[editline]11th February 2017[/editline]
Obviously i'm running the latest stable drivers for both of them. Yet the nVidia card tends to fuck up arbitrary shit. I even get diagonal screen tearing when scrolling in firefox on ALL websites.[/QUOTE]
Interesting, were you developing on a machine with an AMD card before by chance?
If not, that's weird, my 660M tended to replicate what I got on a desktop Nvidia quite consistently but I guess you never know with GPUs.
If yes, it's not surprising.