[QUOTE=Darwin226;46752484]Let's see. 4 bytes per entity. So that's like 4 extra megabytes per MILLION entities. Hardly a space bottleneck.
Secondly, if you're iterating over your entities and they need to interact by jumping around in memory, you're gonna get cache misses anyways. I'm having a hard time imagining a scenario where you literally only need to read sequential data about each of those EXCEPT to render them.
[editline]19th December 2014[/editline]
CPUs have a cache because elementary operations on collections benefit from them greatly (summing up numbers and things like that). Anything you do with an entity will be too complex to utilize the sequential access.[/QUOTE]
What? Many things involve sequential access. One of the most important things (and most costly) for games, physics, is done constantly and is usually sequential access.
Also, it depends on what you are storing. In a doubly-linked list, you can often have more pointer data than actual payload data!
[QUOTE=Tommyx50;46752543]What? Many things involve sequential access. One of the most important things (and most costly) for games, physics, is done constantly and is usually sequential access.
Also, it depends on what you are storing. In a doubly-linked list, you can often have more pointer data than actual payload data![/QUOTE]
If you're accessing your physical objects in the game sequentially, you're doing something very wrong.
[QUOTE=mastersrp;46753187]If you're accessing your physical objects in the game sequentially, you're doing something very wrong.[/QUOTE]
Same if your entities have less than 4 bytes of data.
[QUOTE=mastersrp;46753187]If you're accessing your physical objects in the game sequentially, you're doing something very wrong.[/QUOTE]
How else would you perform Euler integration? Random access?
Have you ever created a physics system, even a rudimentary one, before?
[editline]19th December 2014[/editline]
[QUOTE=Darwin226;46753250]Same if your entities have less than 4 bytes of data.[/QUOTE]
Even if it's a relatively small waste of space, it's a huge waste of computing potential. Cache misses are a big thing...
Even when you don't sequentially access your data you'll get a big performance increase from packing it tightly.
Entities will be more likely to already be present in the cache, resulting in fewer cache misses.
Cache is big; you can probably fit your whole gamestate in the 16MB of L3 cache on modern processors, as long as it's tightly packed in memory.
EDIT: Just to put in perspective how important hitting the cache can be [url]http://danluu.com/3c-conflict/[/url]
[QUOTE=Cold;46753800]Even when you don't sequentially access your data you'll get a big performance increase from packing it tightly.
Entities will be more likely to already be present in the cache, resulting in fewer cache misses.
Cache is big; you can probably fit your whole gamestate in the 16MB of L3 cache on modern processors, as long as it's tightly packed in memory.
EDIT: Just to put in perspective how important hitting the cache can be [url]http://danluu.com/3c-conflict/[/url][/QUOTE]
Indeed! Usually it's best to store all useful information tightly together - so, for example, if you have the vectors x, y and z corresponding to position, velocity and acceleration, it tends to be better to store them as
xyzxyzxyz:
[code]
struct Entity {
    vec3 position;
    vec3 velocity;
    vec3 acceleration;
};
vector<Entity> Entities;
[/code]
instead of xxxyyyzzz:
[code]
struct Entities {
    vector<vec3> positions;
    vector<vec3> velocities;
    vector<vec3> accelerations;
};
[/code]
Because the latter risks the cache "flip-flopping", with accesses to one vector evicting the others from the cache.
And then you want to store the irrelevant information out of the way, so cache space isn't wasted loading it when it isn't needed - essentially keeping blocks of relevant info together.
For example:
[code]
struct Balloons {
    vector<Entity> physics;
    vector<Colour> colours;
};
[/code]
Which would store the data in memory as such, where c is now colour:
xyzxyzxyzccc
Compare this to the traditional object-orientated approach:
[code]
struct Balloon : public Entity {
    Colour colour;
};
vector<Balloon> balloons;
[/code]
Which would be like this in memory:
xyzcxyzcxyzc
As you can imagine, colour information is usually useless during physics operations such as integration, so there's no point loading it in - but with a more typical approach that ignores the underlying hardware, it's intermixed with the rest of the data and loaded into the cache nonetheless, meaning that in the end you get more cache misses. This is a trivial example of course, but imagine a fuller class with dozens of instance variables - you'd load in tonnes of worthless data!
[QUOTE=Tommyx50;46753385]How else would you perform Euler integration? Random access?
Have you ever created a physics system, even a rudimentary one, before?[/QUOTE]
So, a physics engine where bodies don't interact? Or are you expecting that the iterative step where you apply the velocities will be as expensive as the actual collision resolution?
[editline]20th December 2014[/editline]
[QUOTE=Cold;46753800]Even when you don't sequentially access your data you'll get a big performance increase from packing it tightly.
Entities will be more likely to already be present in the cache, resulting in fewer cache misses.
Cache is big; you can probably fit your whole gamestate in the 16MB of L3 cache on modern processors, as long as it's tightly packed in memory.
EDIT: Just to put in perspective how important hitting the cache can be [url]http://danluu.com/3c-conflict/[/url][/QUOTE]
With 16MBs of cache, I'm wondering how important the whole thing is. I can't imagine the automatic allocator would scatter things around more than that. That's just a pure guess though.
It's premature optimisation, anyway
To everyone arguing with Tommyx50--
He's right, you're wrong.
[URL="http://gameprogrammingpatterns.com/data-locality.html"]http://gameprogrammingpatterns.com/data-locality.html
[/URL]
[QUOTE="Go read the above article"]If our goal was to take a whirlwind tour around the game’s address space like some “256MB of RAM in Four Nights!” cheap vacation package, this would be a fantastic deal. But our goal is to run the game quickly, and traipsing all over main memory is not the way to do that. Remember that sleepFor500Cycles() function? Well this code is effectively calling that all the time.[/QUOTE]
[THUMB]http://i.imgur.com/k0t1e.png[/THUMB]
The key detail is at the top of the second column.
A fetch from main memory is the penalty for a cache miss.
For a linked list, you have a guaranteed cache miss on every single pointer traversal. When you have thousands of entities for particles, state, animation, physics, this all adds up. Using linked lists isn't just bad optimization, it's bad coding. There is no reason to use a linked list. You don't need O(1) append because you should be using static arrays and object pools.
Your code isn't immune to this. Just because your frame update runs in 10 ms or w/e doesn't mean that it couldn't run in <1 ms for the same number of objects if you just refactored how you iterate through your entities.
-snip- Bad mood. Didn't read.
He is actually mostly right: linked lists are slower (for access) than static arrays.
But past the collection type (which is important), micro-optimizing cache usage should be more of an afterthought, as it takes a lot of time and isn't all that hard to come back and do if it becomes an issue. Unless you are writing, say, embedded stuff that needs to be as fast as possible.
[editline]20th December 2014[/editline]
Of course it's good to adopt good practices like this.
[code]
struct Entity {
    vec3 position;
    vec3 velocity;
    vec3 acceleration;
};
vector<Entity> Entities;
[/code]
This is actually taught as the usual way of storing data for a reason, but it's also rarely explained further, for a reason: it's not THAT important for most use cases.
[QUOTE=reevezy67;46755106]micro optimizing cache[/QUOTE]
This isn't micro optimization and it's kinda annoying to "do later". Even with Lua, where I don't really have control over memory at all I get significant performance boosts when I do the xxxyyyzzz way instead of the xyzxyzxyz one. I'm not the only one who does this either: [URL]http://bitsquid.blogspot.com.br/2013/01/garbage-collection-and-memory.html[/URL]
[QUOTE]
One thing I've noticed as I delve deeper and deeper into data-oriented design is that I tend to allocate memory in much larger chunks than before. It's a natural consequence of trying to keep things continuous in memory, treating resources as large memory blobs and managing arrays of similar objects together.
This has interesting consequences for garbage collection, because when the garbage collector only has to keep track of a small number of large chunks, rather than a large number of small chunks, it can perform a lot better.
Let's look at a simple example in Lua. Say we want to write a class for managing bullets. In the non-data-oriented solution, we allocate each bullet as a separate object:
[code]
function Bullet:update(dt)
    self.position = self.position + self.velocity * dt
end

function Bullets:update(dt)
    for i, bullet in ipairs(self.bullets) do
        bullet:update(dt)
    end
end
[/code]
In the data-oriented solution, we instead use two big arrays to store the position and velocity of all the bullets:
[code]
function Bullets:update(dt)
    for i = 1, #self.pos do
        self.pos[i] = self.pos[i] + dt * self.vel[i]
    end
end
[/code]
I tested these two solutions with a large number of bullets and got two interesting results:
The data-oriented solution runs 50 times faster.
The data-oriented solution only needs half as much time for garbage collection.
That the data-oriented solution runs so much faster shows what cache coherence can do for you. It is also a testament to how awesome LuaJIT is when you give it tight inner loops to work with.
[/QUOTE]
Yeah, that's a pretty good case where you'd want to throw away what I said - here it's not micro-optimization at all.
It depends entirely on the situation and the language. In a lot of cases the OOP way makes more sense for productivity and maintainability.
[QUOTE=adnzzzzZ;46755277]This isn't micro optimization and it's kinda annoying to "do later". Even with Lua, where I don't really have control over memory at all I get significant performance boosts when I do the xxxyyyzzz way instead of the xyzxyzxyz one. I'm not the only one who does this either: [URL]http://bitsquid.blogspot.com.br/2013/01/garbage-collection-and-memory.html[/URL][/QUOTE]
This is the main thing to understand.
[B]This not a micro-optimization.[/B]
For those of you who saw my posts in WAYWO discussing Duff's device, [B]that[/B] is a micro-optimization in almost any code base. I outlined the one example I could think of where it would be worthwhile to manually unroll a loop:
[URL="http://facepunch.com/showthread.php?t=1439640&p=46751661&viewfull=1#post46751661"]http://facepunch.com/showthread.php?t=1439640&p=46751661&viewfull=1#post46751661[/URL]
This on the other hand is a fundamental problem that you will encounter if you try to build a high-performance entity-based system with a sufficiently large number of entity instances using traditional "object-orientated" design where every entity contains all of the necessary pieces relevant to itself. [B]Cache misses in critical loops will cause those loops to run an order of magnitude slower.[/B] The solution is not something that you can simply refactor or magic away with some clever code. [B]It's a full redesign of your architecture.[/B] You have to split apart your entities into their components and rewrite your update loop to work on the arrays of components, not your list of entities. It's far easier to do it right the first time than it is to try and fix your entire code-base.
edit: Solved my problem. After serializing my object into a byte array I passed it to my SendMessage(object obj) method, which serializes the object itself before networking it. So all this time I was sending a serialized byte array which got deserialized on the other end as a byte array rather than the intended object. Huge headache over a dumb mistake.
Does anybody know how I could retrieve a byte array sent through an asynchronous socket operation? Here's a snippet of what I'm working with:
[url]http://pastebin.com/JjWzZ3jN[/url]
The size of state.buffer is always 1024, as specified in the StateObject class. bytesRead appears to be the correct size (I'm sending a struct with a length of 136, and my application reads 136), but I'm not sure whether state.buffer has all the data it needs in order to begin deserialization. This is giving me such a headache right now.
The MSDN example for the asynchronous server socket loops BeginReceive until "<EOF>" is reached, so it knows all data has been received. I'm not working with strings, though, and I'm not quite sure how I could implement something similar for sending objects.
[QUOTE=Dvd;46755652][B]It's a full redesign of your architecture.[/B] You have to split apart your entities into their components and rewrite your update loop to work on the arrays of components, not your list of entities. It's far easier to do it right the first time than it is to try and fix your entire code-base.[/QUOTE]
You can get away with a lot here so that this work is diminished considerably. Instead of redesigning your architecture entirely (the part that updates all entities, etc.), you keep it the same, but instead of updating 1000 objects you'll be updating 10 objects that have 100 objects each. Does this make sense? If you do this, you only have to recode each class. And usually you only do this for the classes you absolutely need to spam, like particles, bullets, effects, maybe enemies depending on the game, so the job of doing it later isn't as big. It's still annoying, though.
[QUOTE=Darwin226;46754217]So, a physics engine where bodies don't interact? Or are you expecting that the iterative step where you apply the velocities will be as expensive as the actual collision resolution?
[editline]20th December 2014[/editline]
With 16MBs of cache, I'm wondering how important the whole thing is. I can't imagine the automatic allocator would scatter things around more than that. That's just a pure guess though.[/QUOTE]
Collision resolution isn't the expensive part, collision detection is. In the dumber O(n²) system to detect collisions, you are constantly looping through every single entity! When you use a more intelligent broad phase, such as sweep and prune, data coherency is still a large factor, however.
16 MB of cache exists only at the last-level caches. L1 cache (which is by far the fastest) typically only holds a few tens of kilobytes, which in reality isn't all that much when you consider how much data even a simple entity uses nowadays.
[QUOTE=esalaka;46754282]It's premature optimisation, anyway[/QUOTE]
That's just a buzzword. Firstly, this is a vital part of engine architecture, not an afterthought. Secondly - how is it premature? This isn't referring to any solid defined project. This is a thought experiment, since somebody asked what the best way of doing it was. Saying that this is premature optimization is meaningless, since we aren't talking about a real project. Not everything is ALWAYS premature optimization...
[QUOTE=adnzzzzZ;46755277]This isn't micro optimization and it's kinda annoying to "do later". Even with Lua, where I don't really have control over memory at all I get significant performance boosts when I do the xxxyyyzzz way instead of the xyzxyzxyz one. I'm not the only one who does this either: [URL]http://bitsquid.blogspot.com.br/2013/01/garbage-collection-and-memory.html[/URL][/QUOTE]
The data being xxxyyyzzz isn't inherently better than xyzxyzxyz. In fact, it can be slower - in the first layout the cache can constantly flip-flop between two areas, which wouldn't happen with the second. What's important is that useless data is kept out of the way. So, as I said before, when you have colour information it'd be better to store it as xyzxyzxyzccc instead of as xyzcxyzcxyzc, because it's information that you don't want nearby, clogging up the other info. Holding the data sequentially is a decent rule-of-thumb if you're having trouble thinking about how it should all be laid out in memory, but it's not the absolute best way.
The reason it was faster in that simple Lua example was more likely the easier garbage collection (freeing three arrays, not thousands of objects) and the lack of function indirection (in the DOD approach he loops directly over the bullet data, whereas with the OO design he first goes through a method call per bullet). That's more a test of data-orientated design vs traditional OO than a direct comparison of memory layouts and the effect they have on the cache.
[QUOTE=reevezy67;46755106]-snip- Bad mood. Didn't read.
He is actually mostly right: linked lists are slower (for access) than static arrays.
But past the collection type (which is important), micro-optimizing cache usage should be more of an afterthought, as it takes a lot of time and isn't all that hard to come back and do if it becomes an issue. Unless you are writing, say, embedded stuff that needs to be as fast as possible.
[editline]20th December 2014[/editline]
Of course it's good to adopt good practices like this.
[code]
struct Entity {
    vec3 position;
    vec3 velocity;
    vec3 acceleration;
};
vector<Entity> Entities;
[/code]
This is actually taught as the usual way of storing data for a reason, but it's also rarely explained further, for a reason: it's not THAT important for most use cases.[/QUOTE]
It can be hard to come back to. It's low-level engine stuff, typically, and ripping it out can cause a lot of damage if done very late, and require large sweeping changes over the codebase.
About the good practices - that's the usual object-orientated way, to store as xyzxyzxyz with each object being in one place in memory: |xyz|xyz|xyz|. It's usually taught that way because of architectural and design reasons, not performance reasons. The part where OO fails is here:
[code]
struct Balloon : public Entity {
    Colour colour;
};
vector<Balloon> balloons;
[/code]
This would store the data as xyzcxyzcxyzc, which means that now you are loading this worthless colour data into the cache when you are just wanting to do some physics integrations - and likewise, while you are performing calculations with the colour data, you've got all this useless physics data. The best way, performance wise, is this:
[code]
struct Balloons {
    vector<Entity> physics;
    vector<Colour> colours;
};
[/code]
xyzxyzxyzccc
which would store the colour data out of the way. This is the way which typically isn't taught.
When your game loop budget is 16 ms, and your entity code runs in 2 ms while the rest busy-waits because the game is not designed to run faster than 60 frames per second, then optimizing the 2 ms entity loop to run under 1 ms [B]is premature, useless optimization[/B].
[QUOTE=cartman300;46756752]When your game loop budget is 16 ms, and your entity code runs in 2 ms while the rest busy-waits because the game is not designed to run faster than 60 frames per second, then optimizing the 2 ms entity loop to run under 1 ms [B]is premature, useless optimization[/B].[/QUOTE]
Again, how does the word "premature" even make sense here? We are talking about a thought experiment, not a project with an actual lifecycle.
The only point I've been trying to make is that data coherency DOES matter. You can't say it doesn't matter by thinking of a theoretical game where you do your entire update cycle in 2ms, and saying that thus somehow ALL data coherency is worthless optimization...
This highly depends on the game - if you have a lot of stuff going on then almost undoubtedly you're gonna get huge savings from properly laying out memory. There's no doubt in my mind that games like The Last Of Us and GTA V that ran on last-gen consoles had their data laid out practically perfectly for performance! Even PC games like Total War most likely do it, so they can handle thousands of soldiers.
Or a game like this: [url]https://www.youtube.com/watch?v=eKQYRFYXCWk[/url] (made by a good friend of mine)
Do you think he could've achieved that performance using linked lists to store voxels? Nope. In fact, it'd use up shitloads more memory and use up loads of performance. The only reason he gets the performance he does is because he's intelligent about how he does things, such as storing the light data separately from the voxel data.
EDIT:
Maybe all of you just want games with new gameplay, leaving interesting tech and optimization behind - that's fine. Nothing wrong with that. However, you'd never see a AAA game developer, much less one for the last-gen consoles, having a discussion like this, because the answer is immediately obvious.
[QUOTE=Tommyx50;46756705]Collision resolution isn't the expensive part, collision detection is. In the dumber O(n²) system to detect collisions, you are constantly looping through every single entity! When you use a more intelligent broad phase, such as sweep and prune, data coherency is still a large factor, however.[/QUOTE]
Yeah, but you're not going to use a dumb physics engine, are you? How do you fit ANY spatial hashing or tree structure into the cache? You're going to be using pointers there whether you like it or not because the gains of reducing the complexity of the algorithm far outweigh the losses from cache misses.
I understand the advantages of tight packing. I really do. I'm just having trouble coming up with an example where you can ACTUALLY utilize them in a game engine.
[QUOTE=Darwin226;46756830]Yeah, but you're not going to use a dumb physics engine, are you? How do you fit ANY spatial hashing or tree structure into the cache? You're going to be using pointers there whether you like it or not because the gains of reducing the complexity of the algorithm far outweigh the losses from cache misses.
I understand the advantages of tight packing. I really do. I'm just having trouble coming up with an example where you can ACTUALLY utilize them in a game engine.[/QUOTE]
You can use it almost anywhere. It's a perfect fit for particle systems, for example. Not only does the system give great cache usage, but it can allow for easier multi-threading and allow you to implement SIMD far more easily.
With physics engines, the more compact all the useful data is, the more likely it'll be in the cache after it passes the broad phase. If you do typical OO, you'll store all the entity data such as NPC AI variables, health, stats and so on in the same object, and you might be able to fit perhaps 15 physics objects in the cache. If you only handle the physics data, even if you are jumping around due to spatial coherency, you may be able to fit 60 physics objects in the cache, which greatly increases the chances that colliding objects will be in the same cache line!
It can be used practically anywhere. In programming, when there's one, there's many. You don't have 1 animation, one physics object, one mesh or one sound sample. You've got hundreds! All of these immediately present themselves as obvious candidates. Obviously, depending on the system, the performance impact may be too small to be worth it (which is why you should always do profiling before optimization), but it should be obvious that these large streams of data make obvious candidates for separating apart, instead of storing in some huge monolithic objects.
[editline]20th December 2014[/editline]
Thing is, RAM IS terribly slow. Back in the 80s, you could access it in one cycle, and now it takes over 400. How much work can you do in 400 cycles? How much time are you wasting? CPUs are far faster than memory is, so optimizing for memory isn't a bad idea.
[QUOTE=Tommyx50;46756877]You can use it almost anywhere. It's a perfect fit for particle systems, for example. Not only does the system give great cache usage, but it can allow for easier multi-threading and allow you to implement SIMD far more easily.
With physics engines, the more compact all the useful data is, the more likely it'll be in the cache after it passes the broad phase. If you do typical OO, you'll store all the entity data such as NPC AI variables, health, stats and so on in the same object, and you might be able to fit perhaps 15 physics objects in the cache. If you only handle the physics data, even if you are jumping around due to spatial coherency, you may be able to fit 60 physics objects in the cache, which greatly increases the chances that colliding objects will be in the same cache line!
It can be used practically anywhere. In programming, when there's one, there's many. You don't have 1 animation, one physics object, one mesh or one sound sample. You've got hundreds! All of these immediately present themselves as obvious candidates. Obviously, depending on the system, the performance impact may be too small to be worth it (which is why you should always do profiling before optimization), but it should be obvious that these large streams of data make obvious candidates for separating apart, instead of storing in some huge monolithic objects.
[editline]20th December 2014[/editline]
Thing is, RAM IS terribly slow. Back in the 80s, you could access it in one cycle, and now it takes over 400. How much work can you do in 400 cycles? How much time are you wasting? CPUs are far faster than memory is, so optimizing for memory isn't a bad idea.[/QUOTE]
The only context I've seen the word "entity" used in game programming is in component-based systems, so you really won't have everything in one object. Things like particle systems will probably consist of one entity with a particle emitter component attached to it. That component might as well have its particles neatly packed, because it's the perfect candidate for it: it's consistent, self-contained and non-interactive. How you store the actual entity is of little importance.
An NPC character may typically store:
Health
Damage
Armour
Mesh
Texture
AI Variables
Physics Variables
Why load ALL that in when you only need to do some Euler integration at the time? Or why load in the physics stuff when doing health and armour calculations? It's wasteful for the cache.
[quote]How you store the actual entity is of little importance.[/quote]
Such a statement is laughable. It's of HUGE importance.
Have you read any books about engine architecture? Have you ever tried creating any simple game engines? Saying that how you store data is of little importance is incredibly wrong. There's no way you'd see any AAA game developers, especially for last gen consoles, saying any of this!
[QUOTE=Tommyx50;46757026]An NPC character may typically store:
Health
Damage
Armour
Mesh
Texture
AI Variables
Physics Variables
Why load ALL that in when you only need to do some Euler integration at the time? Or why load in the physics stuff when doing health and armour calculations? It's wasteful for the cache.
Such a statement is laughable. It's of HUGE importance.
Have you read any books about engine architecture? Have you ever tried creating any simple game engines? Saying that how you store data is of little importance is incredibly wrong. There's no way you'd see any AAA game developers, especially for last gen consoles, saying any of this![/QUOTE]
Are you listening to me? Entities are just collections of components. They don't carry around the data you mention. Components do.
A Physics component may just be a pointer into some large structure whose only concern is physics (which is usually the case anyway, because the physics engine you use won't actually interface with your data but will require that you provide separate meshes). A Render component may just be an integer that represents an OpenGL buffer. A ParticleEmitter component may be a tightly packed array of particles. None of these things do you need to carry around with the entity itself. Why would you want to?
[QUOTE=Darwin226;46757123]Are you listening to me? Entities are just collections of components. They don't carry around the data you mention. Components do.
A Physics component may just be a pointer into some large structure whose only concern is physics (which is usually the case anyway, because the physics engine you use won't actually interface with your data but will require that you provide separate meshes). A Render component may just be an integer that represents an OpenGL buffer. A ParticleEmitter component may be a tightly packed array of particles. None of these things do you need to carry around with the entity itself. Why would you want to?[/QUOTE]
I'm not talking about entities in that way. I meant a game entity, as in the traditional usage in game programming, which is just something that "exists" in the game world.
However, even with component design, you still tend to have stuff carried around with the object itself.
You have the inheritance method:
[code]
struct Entity : public Physics {
    // extra data...
};
[/code]
and the composition method:
[code]
struct Entity {
    Physics physics;
};
[/code]
Both still have the data held in the same way in the class (unless, of course, you hold a pointer to the data instead).
[QUOTE=Tommyx50;46757237]I'm not talking about entities in that way. I meant a game entity, as in the traditional usage in game programming, which is just something that "exists" in the game world.
However, even with component design, you still tend to have stuff carried around with the object itself.
You have the inheritance method:
[code]
struct Entity : public Physics {
    // extra data...
};
[/code]
and the composition method:
[code]
struct Entity {
    Physics physics;
};
[/code]
Both still have the data held in the same way in the class (unless, of course, you hold a pointer to the data instead).[/QUOTE]
Yeah, I tend to work in languages where pass-by-value doesn't really happen so it's all pointers except where it's explicitly not.
[editline]20th December 2014[/editline]
Also, "traditional usage in game programming". [url]https://www.google.hr/search?q=game+entity&qscrl=1&gws_rd=cr,ssl&ei=O5-VVM6rMsmwUavUgMAL[/url]
[QUOTE=Darwin226;46757294]Yeah, I tend to work in languages where pass-by-value doesn't really happen so it's all pointers except where it's explicitly not.
[editline]20th December 2014[/editline]
Also, "traditional usage in game programming". [url]https://www.google.hr/search?q=game+entity&qscrl=1&gws_rd=cr,ssl&ei=O5-VVM6rMsmwUavUgMAL[/url][/QUOTE]
Ah well. Anyway, it's not really all that relevant to my point. Cache misses do matter; this entity stuff is mostly beside the point.
Anyone experienced with SFML?
Trying to get the player to move while I hold down any of the WASD keys. Right now it just responds to a key press rather than to holding the key down:
[b]main.cpp[/b]
[code]
while (window.pollEvent(event))
{
    switch (event.type)
    {
    case sf::Event::Closed:
        window.close();
        break;
    case sf::Event::KeyPressed:
        input.update(player, event);
        break;
    }
}
[/code]
it calls input, which is my inputmanager, which then sets the position of the player once:
[b]InputManager.cpp[/b]
[code]
void InputManager::update(sf::Sprite &player, sf::Event event)
{
    this->event_ = event;
    keyHandle(player, event.key.code);
}

void InputManager::keyHandle(sf::Sprite &player, int key)
{
    float playerMoveSpeed = 48;
    sf::Vector2f playerPos = player.getPosition();
    switch (key)
    {
    case sf::Keyboard::W:
        player.setPosition(playerPos.x, playerPos.y - playerMoveSpeed);
        break;
    case sf::Keyboard::A:
        player.setPosition(playerPos.x - playerMoveSpeed, playerPos.y);
        break;
    case sf::Keyboard::S:
        player.setPosition(playerPos.x, playerPos.y + playerMoveSpeed);
        break;
    case sf::Keyboard::D:
        player.setPosition(playerPos.x + playerMoveSpeed, playerPos.y);
        break;
    default:
        player.setPosition(playerPos.x, playerPos.y);
        break;
    }
}
[/code]
Of course, changing the move speed will be necessary if I manage to change the functionality.
I need to change it in main.cpp, but I'm not sure how.
Live input doesn't work because sf::Keyboard::isKeyPressed rarely works.
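For what it's worth, the usual way to handle held keys in SFML is to poll the key state once per frame in the game loop (real-time input) and scale the movement by frame time, rather than reacting to KeyPressed events, which only fire on the initial press and then at the OS key-repeat rate. A minimal sketch, with the movement math pulled into a plain helper; the SFML wiring in the comment uses assumed names (clock, player) that you'd adapt to your own code:

```cpp
#include <utility>

// Per-frame movement: compute this frame's displacement from the current
// key states and the elapsed time. Opposite keys cancel out.
std::pair<float, float> moveDelta(bool w, bool a, bool s, bool d,
                                  float speed, float dt)
{
    float dx = 0.f, dy = 0.f;
    if (w) dy -= speed * dt;
    if (s) dy += speed * dt;
    if (a) dx -= speed * dt;
    if (d) dx += speed * dt;
    return {dx, dy};
}

// In the game loop (after the event-poll loop, once per frame) you would
// feed it real-time key states, roughly:
//
//   float dt = clock.restart().asSeconds();
//   std::pair<float, float> delta = moveDelta(
//       sf::Keyboard::isKeyPressed(sf::Keyboard::W),
//       sf::Keyboard::isKeyPressed(sf::Keyboard::A),
//       sf::Keyboard::isKeyPressed(sf::Keyboard::S),
//       sf::Keyboard::isKeyPressed(sf::Keyboard::D),
//       200.f, dt);
//   player.move(delta.first, delta.second);
```

Note that speed here is units per second, not per keypress, since it gets multiplied by dt. Also, if isKeyPressed seems to "rarely work", check that the window actually has focus when you poll; it reads global key state regardless of events.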
Not sure if this is the right place to ask, but a year or so ago I started learning C++ basics as my first language in school. Mostly really basic things like reading from a txt file, moving things around with simple maths, or counting the number of characters in a line, then writing the results to a different txt file.
Now that I'm done with school I'm left without much sense of direction over what to learn next, and I was also told that C++ isn't a good language to start out with. So I'm curious: should I keep learning C++ like I want to, or should I consider a different language instead? And either way, could someone throw a book or tutorial (preferably non-video) at me so that I'd have a sense of direction?
[QUOTE=WaRRioRTF;46758567]Not sure if this is the right place to ask, but a year or so ago I started learning C++ basics as my first language in school. Mostly really basic things like reading from a txt file, moving things around with simple maths, or counting the number of characters in a line, then writing the results to a different txt file.
Now that I'm done with school I'm left without much sense of direction over what to learn next, and I was also told that C++ isn't a good language to start out with. So I'm curious: should I keep learning C++ like I want to, or should I consider a different language instead? And either way, could someone throw a book or tutorial (preferably non-video) at me so that I'd have a sense of direction?[/QUOTE]
This is such a ridiculously ambiguous topic. It all depends on what you want to do with the code, whether C++ comes easily to you, and whether you want to keep doing it. In my experience, all languages are "bad to start with", mainly because each one works differently from what you're used to. I started with Lua and then went straight to C++, which was extremely difficult: C++ requires a very different mindset from Lua, not to mention static typing.
Bjarne Stroustrup has written several books on C++ (being the creator and all). I would recommend reading through his [b]A Tour of C++[/b].
Scott Meyers recently published [b]Effective Modern C++[/b], which I would also strongly recommend.
You could also skim the tutorials on [URL="http://en.cppreference.com/w/"]cppreference[/URL]. Note that since C++11 and 14 were released, a lot of tutorial sites like [URL="http://learncpp.com"]learncpp[/URL] are quite out of date and teach deprecated ways of doing things.
[QUOTE=WaRRioRTF;46758567]Not sure if this is the right place to ask, but a year or so ago I started learning C++ basics as my first language in school. Mostly really basic things like reading from a txt file, moving things around with simple maths, or counting the number of characters in a line, then writing the results to a different txt file.
Now that I'm done with school I'm left without much sense of direction over what to learn next, and I was also told that C++ isn't a good language to start out with. So I'm curious: should I keep learning C++ like I want to, or should I consider a different language instead? And either way, could someone throw a book or tutorial (preferably non-video) at me so that I'd have a sense of direction?[/QUOTE]
C++ might or might not be a good language to start with, but you've already started, right? :D
Don't worry too much about the language.
As for books, I've heard good things about [url]http://www.amazon.com/Accelerated-C-Practical-Programming-Example/dp/020170353X[/url]
Just wait until someone else confirms that before you dig in.