Speed Analysis of pairs and ipairs


If you don't know what pairs and ipairs are, don't bother reading this.

So which is actually faster?

Most say ipairs is, and for real-world use they are correct. But once a table nears 15k keyvalues, pairs becomes the better option.

That said, the speed difference between ipairs and pairs is negligible (really, really small).

For example, on a table with 100 keyvalues the difference between pairs and ipairs was 3.41E-07 seconds, meaning ipairs was faster than pairs by 0.000000341 seconds.

1,000 keyvalues, ipairs wins in speed.
If the purple line is in the positive, it means ipairs was faster.

http://img707.imageshack.us/img707/4135/1000m.png

100,000 keyvalues, pairs wins in speed.
If the purple line is in the negative, it means pairs was faster.

http://img204.imageshack.us/img204/7159/100000z.png

The same 100,000 keyvalue test, plotted as the difference between the two.
Since the line is almost always negative, pairs is faster.

http://img689.imageshack.us/img689/1583/100000diff.png

All the data was collected just after a garbage collection.
All the timing values were recorded with the C++ functions QueryPerformanceCounter and QueryPerformanceFrequency on Windows, which give the most precise timing available there.
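
For anyone who wants to reproduce the general shape of the test, here is a rough Lua-only sketch. The real measurements used QueryPerformanceCounter from C++, so os.clock below is only a stand-in with much lower resolution, and buildTable / timeLoop are names made up for this example.

[lua]
-- Rough Lua-only sketch of the test shape (assumption: os.clock stands in
-- for the far more precise QueryPerformanceCounter timing used for the data).
local function buildTable(n)
	local t = {}
	for i = 1, n do t[i] = i end
	return t
end

local function timeLoop(t, iterator)
	collectgarbage("collect")      -- measure just after a garbage collection
	local start = os.clock()
	for k, v in iterator(t) do
		local value = v * 10       -- simulate the loop doing something
	end
	return os.clock() - start
end

local t = buildTable(1000)
print("pairs :", timeLoop(t, pairs))
print("ipairs:", timeLoop(t, ipairs))
[/lua]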


Now people can shut up about how ipairs is so much faster.

Wow thank you. Fly away, ipairs people.

I’ll keep using ipairs when appropriate (when the table consists entirely of numeric keys), and pairs when appropriate (any other time). You know, like before I saw this thread.


Since you’ve got the tool there, is this any faster / better than pairs?

for k, v in next, t do

Saw that in PIL and never got around to testing it myself.

I’ll have a look later.
I will also look at while loops, for i=1,bla etc.

-snip-

[editline]09:45AM[/editline]

this.

It should do exactly the same as pairs, except with one less function call: pairs(t) just returns next, t.
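
A minimal sketch of why the two loops are interchangeable (this is stock Lua 5.1 behaviour; a modified environment could override pairs):

[lua]
local t = { a = 1, b = 2, c = 3 }

-- pairs(t) returns next, t, nil, so the generic for gets the same
-- iterator function, state and control value either way.
for k, v in pairs(t) do
	print(k, v)
end

for k, v in next, t do
	print(k, v)
end
[/lua]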

I know this. I’m asking if that function call is going to make a difference across the long term, not if it functions differently.

It appears to just be called once, so it shouldn’t matter beyond the expense of a single function call, though.

This is very interesting. So it actually doesn’t matter?
Could you do a speed analysis for the 1 to 200 mark please? I suspect very few people will use a table with more than 200 elements in it in gmod.

http://img80.imageshack.us/img80/691/10002.png

[lua]
-- The loop forms compared above; each body reads the element and multiplies it by 10.
for k, v in pairs(tableToTraverse) do
	local value = v * 10
end

for k, v in ipairs(tableToTraverse) do
	local value = v * 10
end

local k = 0
while (k < #tableToTraverse) do
	k = k + 1
	local value = tableToTraverse[k] * 10
end

for i = 1, #tableToTraverse do
	local value = tableToTraverse[i] * 10
end

for k, v in next, tableToTraverse, nil do
	local value = v * 10
end
[/lua]

for i=1,#tableToTraverse do
is definitely the best way, but you need keys that are sequential from 1 to n

In all honesty, pairs or ipairs is the best: pairs works on every table, while ipairs only works on integer-keyed tables.
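
A small sketch of that difference, using a made-up table with a gap and a string key:

[lua]
local t = { "a", "b", nil, "d", name = "e" }

-- ipairs (and for i = 1, #t) only cover a sequential run of integer keys,
-- so it stops at the gap and only prints indices 1 and 2.
for i, v in ipairs(t) do print(i, v) end

-- pairs visits every key, including 4 and the string key "name".
for k, v in pairs(t) do print(k, v) end
[/lua]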

Wow, enlightening

Questions:

What’s with the sudden bump in while? Was that due to something on your PC, or does it happen every time?
Does anyone know the reason?

Are you doing:
[lua]
for i=1,#tableToTraverse do
[/lua]
or
[lua]
for i=1,#tableToTraverse do
local value = tableToTraverse[i]
[/lua]
for the fori test?

(Edit: As with the while loop. Are you indexing the table?)

Thanks!

-snip-

Every test gets the value and multiplies it by 10.
Just to simulate it doing something.

[lua]
for i = 1, #tableToTraverse do
	local value = tableToTraverse[i] * 10
end
[/lua]

Is exactly what it does.

As for that spike, I would have to run the test again. It doesn’t appear to be CPU related, otherwise all the other results at the same table size would be up as well.

How many times are you running the algorithm to get the results? It would smooth out the bumps if you ran it more often and averaged the results.
The bumps would be caused by your computer’s scheduler pre-empting the garrysmod process to do something else, which would add the delay.

One sample at every increment of 10 in table size.

It is probably the thread being suspended.

So once.
He’s suggesting you run the tests 10 times and take an average, for a cleaner test. :3:
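
For the record, the averaging being suggested looks roughly like this (a sketch only; the 10-run count and the os.clock timer are stand-ins for whatever the real test uses):

[lua]
local function averageTime(t, iterator, runs)
	local total = 0
	for run = 1, runs do
		collectgarbage("collect")
		local start = os.clock()
		for k, v in iterator(t) do
			local value = v * 10
		end
		total = total + (os.clock() - start)
	end
	return total / runs
end

local t = {}
for i = 1, 1000 do t[i] = i end
print("pairs avg :", averageTime(t, pairs, 10))
print("ipairs avg:", averageTime(t, ipairs, 10))
[/lua]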

while could be a bit faster if you don’t do the table-size check in the while statement every time. Try
[lua]local size = #tableToTraverse;
local k = 0;
while (k < size) do k = k + 1; end[/lua]

Ahh good point, let me adjust the test.

Thanks!
Still, the fori will definitely beat everything: fixed loop size and no “counting” in Lua (it happens directly in C++).

Yes, you are right.

This is the moving average of the new test. 10 samples of each table size are averaged, so this is much better data.

http://img207.imageshack.us/img207/4914/1000movavgnew.png

Same data, but this time with a linear trendline; I have also shown the r^2 value to show how closely it fits the data.

http://img249.imageshack.us/img249/9748/1000linearnew.png

You can see clearly that pairs(table) is just an alias for next, table, nil.
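
A quick way to check that claim (true in stock Lua 5.1; a sandboxed or modified environment may wrap pairs differently):

[lua]
local f, state, ctrl = pairs({ 1, 2, 3 })
print(f == next, ctrl)   -- prints: true   nil  (pairs handed back next itself)
[/lua]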

In your fori and while tests, are you actually accessing the table’s value at index “i”?