15 Sorting algorithms visualized and "audibilized" in 6 minutes
12 replies, posted
[media]http://www.youtube.com/watch?v=kPRA0W1kECg[/media]
This is weird, but somehow very interesting.
This is so fucking cool. I wonder if there's a way, with an insane amount of effort, to make it sort in such a way (like the ending) that it plays a full song
1:56 has to be the most interesting one of the bunch.
I love being able to visually see the algorithms at work. It's really interesting to see how they sort data.
How do the ones with 0 comparisons work?
Bogo sort sounds kinda cool
Even if it doesn't work
There is actually a sorting method designed to be so inefficient that even if it kept trying until the heat death of the universe, it still couldn't finish.
Found this cool video in the related that explains two of the algorithms.
[video=youtube]http://www.youtube.com/watch?v=aXXWXz5rF64[/video]
Some of the algorithms sound like pacman
[QUOTE=ellibob;42986990]How do the ones with 0 comparisons work?[/QUOTE]
Really efficiently? :v:
[QUOTE=ellibob;42986990]How do the ones with 0 comparisons work?[/QUOTE]
The numbers are placed into "buckets" based on significant digit instead of being compared to one another.
802 & 52 are both in the same bucket on the first pass, for the ones digit (2).
67 & 269 are in the same bucket on the next pass, for the tens digit (6).
And so on. Just look into Radix sort for more details.
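To make the bucket idea concrete, here's a minimal LSD (least-significant-digit) radix sort sketch in Python for non-negative integers, using the same numbers from the example above. Note there's no `<` or `>` between elements anywhere:

```python
def radix_sort(nums):
    """Sort non-negative integers without ever comparing two elements."""
    if not nums:
        return nums
    place = 1
    while max(nums) // place > 0:
        # Distribute into ten buckets by the current digit.
        buckets = [[] for _ in range(10)]
        for n in nums:
            buckets[(n // place) % 10].append(n)
        # Collect buckets in order; stability preserves earlier passes.
        nums = [n for bucket in buckets for n in bucket]
        place *= 10
    return nums

print(radix_sort([802, 52, 67, 269]))  # [52, 67, 269, 802]
```

Each pass is one trip through the data, and the number of passes is the number of digits in the largest value, which is why radix sort can beat comparison sorts on big arrays of small keys.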
So from what I understand, the one with the fewest comparisons is the lightest on resources, but it will take longer to finish?
[QUOTE=Sally;42987965]So from what I understand, the one with the fewest comparisons is the lightest on resources, but it will take longer to finish?[/QUOTE]
Sorting algorithms are very complex.
There are two main things that take "resources" - comparisons, and memory (array) accesses. Memory bandwidth is often a dominating factor, depending on the architecture you're running on and how your access pattern interacts with the various tiers of cache. And comparisons can also be affected by vectorization - I believe that's why GCC's sort has that last polishing-up step, so it can use SSE instructions if the CPU has them.
The other thing to be aware of is scaling. This is given as something like "O(n)", where n is the number of items in the list. For a sort algorithm with a complexity of O(n), sorting 1000 elements takes ten times as long as sorting 100. For an O(n^2) algorithm, sorting 1000 elements would take 100 (10^2) times as long as sorting 100.
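You can see the O(n^2) scaling directly by instrumenting a simple sort. A sketch using selection sort, which always does exactly n*(n-1)/2 comparisons regardless of input:

```python
import random

def selection_sort_comparisons(n):
    """Selection-sort a shuffled list of n items, counting comparisons."""
    data = list(range(n))
    random.shuffle(data)
    comparisons = 0
    for i in range(len(data)):
        for j in range(i + 1, len(data)):
            comparisons += 1
            if data[j] < data[i]:
                data[i], data[j] = data[j], data[i]
    return comparisons

# 10x the input means roughly 100x the work.
print(selection_sort_comparisons(100))   # 4950
print(selection_sort_comparisons(1000))  # 499500
```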
Some algorithms also have "worst-case" and "average-case" complexity. Quicksort, for instance, is normally O(n log n), but can be O(n^2) if you give it just the right kind of wrong data.
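The "right kind of wrong data" is easy to demonstrate. A sketch of a naive quicksort (first element as pivot, iterative so Python's recursion limit doesn't get in the way) that counts one comparison per element per partition; already-sorted input makes every partition completely lopsided:

```python
import random

def quicksort_comparisons(data):
    """Naive quicksort, returning a comparison count instead of the result."""
    comparisons = 0
    stack = [list(data)]
    while stack:
        chunk = stack.pop()
        if len(chunk) <= 1:
            continue
        pivot = chunk[0]          # naive pivot choice: first element
        rest = chunk[1:]
        comparisons += len(rest)  # each element is compared to the pivot
        stack.append([x for x in rest if x < pivot])
        stack.append([x for x in rest if x >= pivot])
    return comparisons

print(quicksort_comparisons(random.sample(range(1000), 1000)))  # grows like n log n
print(quicksort_comparisons(range(1000)))  # n(n-1)/2 = 499500, the O(n^2) worst case
```

Real implementations dodge this with smarter pivot choices (median-of-three, random pivots) or by switching to heapsort when recursion gets too deep.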
There may also be space constraints. On a small embedded system, or with pathologically large arrays, a sort that uses additional memory may be unusable. So in-place sorts, while usually slower, may be your only option.
And of course there's the consideration of how sorted the array already is. Many sorts are basically instant if the data is already sorted, while some will take the same amount of time as on fully-random input just to confirm that it's sorted. And there are degrees of unsortedness - 1,3,2,4,5,8,6,7,9 is obviously better-sorted than 7,4,1,8,5,2,9,6,3 is.
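Insertion sort is the classic example of an algorithm that rewards presortedness. A sketch that counts comparisons: sorted input costs one comparison per element, nearly-sorted input barely more, random input roughly n^2/4:

```python
import random

def insertion_sort_comparisons(data):
    """Insertion-sort a copy of data, returning the comparison count."""
    data = list(data)
    comparisons = 0
    for i in range(1, len(data)):
        j = i
        while j > 0:
            comparisons += 1
            if data[j - 1] <= data[j]:
                break                       # already in order: stop early
            data[j - 1], data[j] = data[j], data[j - 1]
            j -= 1
    return comparisons

print(insertion_sort_comparisons(range(1000)))                       # 999: one cheap pass
print(insertion_sort_comparisons([1, 3, 2, 4, 5, 8, 6, 7, 9]))      # nearly sorted: cheap
print(insertion_sort_comparisons(random.sample(range(1000), 1000)))  # roughly 250,000
```

That early-exit behaviour is why hybrid sorts (like the GCC one in the video) hand nearly-sorted leftovers to insertion sort for the final pass.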
And then there's the question of parallelism - if you have multiple threads (i.e. a multi-core CPU), mergesort can be quite fast, but something like heapsort only really works single-threaded.
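Mergesort parallelizes nicely because the two halves are completely independent. A sketch using a thread pool (note: CPython's GIL means threads won't actually speed up pure-Python code; a real win needs processes, or a language with truly parallel threads):

```python
from concurrent.futures import ThreadPoolExecutor
import heapq
import random

def merge_sort(data):
    """Plain recursive mergesort."""
    if len(data) <= 1:
        return list(data)
    mid = len(data) // 2
    return list(heapq.merge(merge_sort(data[:mid]), merge_sort(data[mid:])))

def parallel_merge_sort(data, workers=2):
    """Sort the two halves on separate workers, then merge the results."""
    mid = len(data) // 2
    with ThreadPoolExecutor(max_workers=workers) as pool:
        left = pool.submit(merge_sort, data[:mid])
        right = pool.submit(merge_sort, data[mid:])
        return list(heapq.merge(left.result(), right.result()))

data = random.sample(range(10000), 10000)
assert parallel_merge_sort(data) == sorted(data)
```

Heapsort, by contrast, funnels everything through one shared heap, so there's no similarly obvious way to split the work.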
So yeah, there's a reason programmers have come up with dozens of sorting algorithms. But if you point a gun at a coder's head and tell them to implement a sorting algorithm, you'll usually get quicksort, just because it's quick to run *and* quick to write.
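And "quick to write" is no exaggeration. The gun-to-the-head version is a few lines of Python (a sketch, not tuned - real library sorts add careful pivot selection, in-place partitioning, and fallbacks for small or pathological inputs):

```python
def quicksort(xs):
    """Three-way quicksort: partition around a pivot, recurse on both sides."""
    if len(xs) <= 1:
        return list(xs)
    pivot = xs[len(xs) // 2]
    return (quicksort([x for x in xs if x < pivot])
            + [x for x in xs if x == pivot]
            + quicksort([x for x in xs if x > pivot]))

print(quicksort([7, 4, 1, 8, 5, 2, 9, 6, 3]))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```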