Okay, so I'm interested in Genetic Algorithms, have read some articles about it, watched videos and experimented with it, but there is one thing I just do not quite understand.
These algorithms are supposed to find the best solution to a problem through a somewhat trial-and-error process: they iterate over the problem's possible solutions with different parameters, eliminating failed attempts and improving on the parameter constellations that work. While this is cool, it is rather limited by definition, isn't it? There is no room to evolve a true "intelligence", since all the possible parameters are fixed and defined right from the beginning, right? Also, the solution only works for the static problem it iterates over, right?
For example: I have written a small simulation where creatures move around and eat plants to survive. Each creature has a life expectancy, hunger level, movement speed, etc. The plants only grow near other plants and wither after a certain amount of time. Creatures do reproduce, and new creatures spawn with the mixed attributes of their parents, plus a slight randomized variation. Now it is indeed true that their movement speed increases with each generation, so they reach more plants before they starve, but this leads to plants being eaten more quickly than they can spread, until they are all eaten and the creatures are extinct. There is no way for them to "learn" NOT to eat every single one of them, but to let them grow. Also, I made some plants poisonous, leading to instant death of the one eating it. There is no way for them to "learn" NOT to eat those poisonous plants, unless I explicitly give them a flag "doEatPoisonousPlants" which will move towards "false" through evolution, but this kind of defeats the purpose of the algorithm, doesn't it? What is the purpose of the algorithm if I, the programmer, have to build in every possible way to act? I KNOW that they should not eat poisonous plants, so why not just set it to "false"? Can someone explain this situation to me? What am I missing?
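For what it's worth, the reproduction rule described here is already the crossover + mutation core of a GA. A minimal Python sketch (the attribute names are invented for illustration):

```python
import random

def make_offspring(parent_a, parent_b, mutation_scale=0.05):
    """Mix the parents' attributes, then add a slight randomized
    variation - the reproduction rule described in the post."""
    child = {}
    for trait in parent_a:
        # Averaging is one simple form of crossover...
        base = (parent_a[trait] + parent_b[trait]) / 2.0
        # ...and a few percent of noise is the mutation.
        child[trait] = base * (1.0 + random.uniform(-mutation_scale, mutation_scale))
    return child

# Hypothetical attribute names, just for illustration:
mum = {"speed": 1.2, "hunger_rate": 0.8, "life_expectancy": 100.0}
dad = {"speed": 1.6, "hunger_rate": 1.0, "life_expectancy": 90.0}
kid = make_offspring(mum, dad)
```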
Also, how would one implement an algorithm that learns to play tic-tac-toe against a human? If the "chromosomes" are just a bitfield, how can they contain any decision logic that reacts to different game situations? How do they learn when the player plays differently every time?
These are the questions that always come up when I think about this topic. Help me understand it, Facepunch!
How about adding a chance that a poisonous plant doesn't kill the animal, so the survivors can pass on a tendency toward "doEatPoisonousPlants" being false?
Genetic algorithms are about fine-tuning a specific set of parameters that undergo a very complex process to reach an end result. For a genetic algorithm to be worth using rather than tuning by hand, the process must be complex enough that there is no obviously correct solution.
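The whole loop is small, by the way. A generic sketch (truncation selection, one-point crossover, Gaussian mutation - one of several standard choices), shown on a toy problem of pushing every gene toward 1.0:

```python
import random

def evolve(fitness, random_genome, generations=100, pop_size=30, mut_sigma=0.1):
    """Generic GA skeleton: genomes are lists of floats; higher fitness is better."""
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]                # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(len(a))              # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, mut_sigma) for g in child]  # mutation
            children.append(child)
        pop = survivors + children                      # survivors act as elites
    return max(pop, key=fitness)

# Toy problem: fitness rewards genes close to 1.0.
best = evolve(lambda g: -sum((x - 1.0) ** 2 for x in g),
              lambda: [random.uniform(-5, 5) for _ in range(4)])
```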
You need to make movement cost energy, give the creature the capacity to decide whether or not to spend that energy (is it worth it?), and add some sort of plant recognition system rather than having them eat every plant on contact.
Animals should be able to become immune to certain poisons, and plants should be able to develop new types of poison.
So it's about finding the perfect balance for a rather large number of variables, and not really a "learning" kind of thing?
What about the tic-tac-toe situation? Shouldn't a GA be able to learn how to play it? I cannot imagine what the "large number of variables" would look like in such a very dynamic case...
[QUOTE=Dienes;35277703]So it's about finding the perfect balance for a rather large number of variables, and not really a "learning" kind of thing?
What about the tic-tac-toe situation? Shouldn't a GA be able to learn how to play it? I cannot imagine what the "large number of variables" would look like in such a very dynamic case...[/QUOTE]
You are confusing genetic algorithms and neural networks.
A GA provides parameters for a NN, which it then uses to solve a problem.
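To make that concrete: the chromosome can simply be a flat list of floats that gets chopped up into the weight matrices of a small network. A sketch assuming a 9-input, 9-output net for tic-tac-toe (the layer sizes and the lack of biases are arbitrary simplifications):

```python
import math
import random

def decode(chromosome, n_in=9, n_hidden=6, n_out=9):
    """Split a flat chromosome of floats into the two weight matrices
    of a tiny one-hidden-layer network."""
    k = n_in * n_hidden
    w1 = [chromosome[i * n_in:(i + 1) * n_in] for i in range(n_hidden)]
    w2 = [chromosome[k + j * n_hidden: k + (j + 1) * n_hidden] for j in range(n_out)]
    return w1, w2

def forward(chromosome, board):
    """board: 9 values in {-1, 0, +1}; returns one score per square."""
    w1, w2 = decode(chromosome)
    hidden = [math.tanh(sum(w * x for w, x in zip(row, board))) for row in w1]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w2]

# The GA would breed and mutate this flat vector; the net never "learns"
# directly - better genomes simply out-reproduce worse ones.
genome = [random.uniform(-1, 1) for _ in range(9 * 6 + 6 * 9)]
scores = forward(genome, [1, 0, 0, 0, -1, 0, 0, 0, 0])
```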
[QUOTE=neos300;35289740]You are confusing genetic algorithms and neural networks.
A GA provides parameters for a NN, which it then uses to solve a problem.[/QUOTE]
You can use a genetic algorithm to learn how to play tic tac toe
[QUOTE=Icedshot;35290553]You can use a genetic algorithm to learn how to play tic tac toe[/QUOTE]
A genetic algorithm won't play tic tac toe for you.
You're technically a genetic algorithm! :saddowns:
[QUOTE=Map in a box;35291131]You're technically a genetic algorithm! :saddowns:[/QUOTE]
No you aren't, you're a neural net evolved by genetic algorithms
[QUOTE=neos300;35290809]A genetic algorithm won't play tic tac toe for you.[/QUOTE]
It won't be able to learn to play tic-tac-toe on its own, but if you design an AI for tic-tac-toe, you can give it several parameters that evolve across generations to develop the most efficient tic-tac-toe AI.
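One way to read that: keep the decision logic hand-written and let the GA tune only the weights. A sketch where the features (completing a line, blocking the opponent, centre, corner) are hand-picked for illustration, and the weight vector is what would evolve:

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def features(board, square, player):
    """Hand-written features; a GA would only tune their weights."""
    wins = blocks = 0
    for line in LINES:
        if square not in line:
            continue
        others = [board[i] for i in line if i != square]
        if others == [player, player]:
            wins += 1                      # taking this square wins
        if others == [-player, -player]:
            blocks += 1                    # taking it blocks the opponent
    centre = 1 if square == 4 else 0
    corner = 1 if square in (0, 2, 6, 8) else 0
    return [wins, blocks, centre, corner]

def choose_move(board, weights, player=1):
    """Pick the empty square with the highest weighted feature score."""
    empty = [i for i, v in enumerate(board) if v == 0]
    return max(empty, key=lambda s: sum(w * f for w, f in
                                        zip(weights, features(board, s, player))))

# With sensible weights the picker blocks O's threat on the top row:
board = [-1, -1, 0,
          0,  1, 0,
          0,  0, 1]
move = choose_move(board, weights=[10.0, 5.0, 1.0, 0.5])
```

A GA would discover weight vectors like this one by having candidate vectors play generations of games against each other and breeding the winners.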
GAs are for problems with more breadth or depth than traditional search algorithms can handle. Since tic-tac-toe can be trivially brute-forced by a TI-84, it's not a great candidate.
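For scale, here is the brute force in question - plain minimax over the whole game tree, memoised, which confirms that perfect play from the empty board is a draw:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return 0

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of the position for +1, with `player` to move:
    +1 = win for +1, 0 = draw, -1 = win for -1."""
    w = winner(board)
    if w:
        return w
    moves = [i for i, v in enumerate(board) if v == 0]
    if not moves:
        return 0                    # board full, no winner: draw
    values = []
    for m in moves:
        nxt = list(board)
        nxt[m] = player
        values.append(minimax(tuple(nxt), -player))
    return max(values) if player == 1 else min(values)

result = minimax((0,) * 9, 1)       # 0: perfect play is a draw
```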
Oh my, this reminds me of the AI classes I had, pain in the neck :S Freaky stuff... could never really get the hang of it.
I think your experiment demonstrates an important point about evolution, not the insufficiency of genetic algorithms. Evolution is merely the change in allele frequencies in a population over time - in this case, the frequencies of movement speeds. Movement speed is the only thing that will ever improve, because it is what allows an individual to reproduce more successfully.
This is something that is emphasized in biology - [i]evolution is not for the good of the group, it only happens because an individual in the previous generation was capable of producing more offspring.[/i] Because of this, it is natural for many species to evolve their way into extinction.
As for the poisonous plants, this is something that would not be a part of evolution. Remember that DNA maps only to physical characteristics, not to worldly knowledge. Since there are no physical characteristics that determine whether the animals are better or worse at eating the poisonous plants, each generation will eat them just as much as the previous generation. However, if you had some variable (like beak length, for example) where a length of 5 allowed animals to eat the poisonous plants and a length of 4 didn't, the beak-length allele frequency would shift toward values below 5 over time.
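That shift is easy to watch in a toy simulation. A sketch following the beak-length example (the survival rule and all numbers are made up: beaks below 5 can't eat the poisonous plants, so those animals survive to breed):

```python
import random

def next_generation(population, can_survive):
    """One round of selection: only survivors breed, and each offspring
    inherits its parent's beak length plus a small mutation."""
    survivors = [beak for beak in population if can_survive(beak)]
    return [random.choice(survivors) + random.gauss(0, 0.1)
            for _ in range(len(population))]

# Start with beak lengths spread evenly either side of the lethal threshold.
pop = [random.uniform(3, 7) for _ in range(200)]
for _ in range(20):
    pop = next_generation(pop, can_survive=lambda beak: beak < 5)

mean_beak = sum(pop) / len(pop)     # drifts below 5 - no "learning" involved
```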
Also, are you familiar with Smart Rockets [url]http://blog.blprnt.com/blog/blprnt/project-smart-rockets[/url] ?
[QUOTE=Dienes;35277196]
There is no way for them to "learn" NOT to eat every single one of them, but to let them grow. Also, I made some plants poisonous, leading to instant death of the one eating it. There is no way for them to "learn" NOT to eat those poisonous plants, unless I explicitly give them a flag "doEatPoisonousPlants"...
[/QUOTE]
Such extinction events occur in nature all of the time.
Rats were introduced to a Pacific island where a giant bug lived; the rats proceeded to wipe the bugs out, and then starved.
I'm not too familiar with the processes of evolution, but I imagine that there have been many near-extinction events in nature which have ended with both species dying out, or at least shrinking a lot.
For example, the rabbit population of Australia was reduced through the use of poisons, until the rabbits began to resist them; because the reproduction time of rabbits is so short, the survivors (who had what it took to survive the poison, whether it was their immune system or a food they ate that helped them resist it) were able to re-establish the population.
If the reproduction time of the insects on the island had been fast enough, I imagine those who were better at avoiding the rats or ate something that made them poisonous to the rats would have been able to survive and compete for their lives.
That leads onto the idea of survival "chance".
It's been shown that birds that live near the cities (including new baby birds without life experience) are better at dodging cars than those who do not.
Now, assuming getting hit by a car is either 'survive' or 'die' (no 'recover')... then only those who assume the car is dangerous and evade it will survive and reproduce. Hence, through the natural process of evolution, the population will tend towards evading cars.
I'm not sure how those that 'recover' affect it... I suppose being injured makes them less attractive to females, and hence they are less likely to reproduce.
The problem with your simulation is that the population is too small. Any trends that emerge either affect the population too little (random survival chance and breeding pairing) or too much (stagnation, meaning they adapt poorly to future changes).
As it can be difficult to simulate a large enough population (and sometimes it doesn't make sense, e.g. finding a GA for an NN that can play Mario), a common solution is to divide the simulation into generations and to 'score' each creature.
In Mario, the score would be how far it gets through the level, perhaps with a slight influence from how quickly it does it and how many coins it collects (you need to prioritise).
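In code, such a score is just a weighted sum; the weights encode the prioritisation (the values here are invented for illustration):

```python
def score(run, distance_weight=1.0, time_weight=0.1, coin_weight=0.05):
    """Combine objectives into one fitness number; distance dominates."""
    return (distance_weight * run["distance"]
            - time_weight * run["time"]
            + coin_weight * run["coins"])

# Reaching the same distance faster beats collecting a few extra coins:
fast = {"distance": 900, "time": 60, "coins": 3}
slow = {"distance": 900, "time": 90, "coins": 10}
```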
In [URL="http://boxcar2d.com/"]this awesome demonstration[/URL] (leave it running for a while, they evolve relatively quickly), the score is simply how far it gets.
By using NNs and scoring, the solutions evolve without you having to consider each aspect. Who cares how Mario deals with Bullet Bills or Koopas? As long as he gets to the end, the genome is good... and when you have two good genomes, you make them have babies to get even better ones!
Although, you do need to be careful. If you perform selective breeding too much, you may find you get a stagnated population (no variance in crossovers); to solve this, use roulette selection and a decent mutation rate.
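Roulette (fitness-proportionate) selection is only a few lines; note it assumes non-negative fitness values:

```python
import random

def roulette_select(population, fitnesses):
    """Pick an individual with probability proportional to its fitness,
    so weaker individuals still occasionally breed and variance survives."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]           # guard against floating-point drift

# "c" is 8x fitter than "a", so it gets picked roughly 8x as often -
# but "a" and "b" are not eliminated outright.
pop = ["a", "b", "c"]
fits = [1.0, 1.0, 8.0]
picks = [roulette_select(pop, fits) for _ in range(2000)]
```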
If you want to score your simulation where 'survival' and 'reproduction' are the goals (like real life), just have the score increase linearly over time, with bonuses for when the creatures pair up.
Be careful, though... if you don't tweak the bonuses right, you'll either have some very greedy creatures eating everything in sight and ignoring each other, or a world full of super-horny creatures constantly having sex with one another in a huge orgy pile.
[QUOTE=Dienes;35277196]
Also, how would one implement an algorithm that learns to play tic-tac-toe against a human? If the "chromosomes" are just a bitfield, how can they contain any decision logic that reacts to different game situations? How do they learn when the player plays differently every time?
[/QUOTE]
Neural Nets, or another decision system that has analogue decision thresholds.
[QUOTE=Dienes;35277196]There is no room to evolve a true "intelligence" since all the possible parameters are fixed and defined right from the beginning, right?
[/QUOTE]
The "genetic algorithm" in humans operates on genes, which control how cells behave, including replication.
In initial replication, they produce a neural net: the brain. Over a long, long time of evolution, our brains have become awesome at 'thinking' with concurrency (like a GPU rather than a CPU), modularity (e.g. the motor control region), and dynamism (tweaked learning rates, efficient information storage).
Another thing is that our brains have a [I]state[/I]. The neurons in the brain itself affect other neurons in the next... 'frame'(?).
(I'm really interested in trying out a neural net that has some outputs linked back in as inputs, and seeing how good it is at solving and remembering locations of objects, paths, and such.)
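That feedback idea is easy to prototype. A single recurrent neuron whose previous output is fed back as an input (weights hand-picked here just to show the effect) will 'latch' onto a brief input pulse - a tiny form of memory:

```python
import math

def step(weights, external_input, state):
    """One 'frame' of a single recurrent neuron: the previous output is
    fed back in alongside the external input, giving the unit a memory."""
    w_in, w_back, bias = weights
    return math.tanh(w_in * external_input + w_back * state + bias)

# With a strong feedback weight, a single input pulse at frame 3 leaves
# a lasting trace in the state long after the input has gone back to 0.
state = 0.0
trace = []
for x in [0, 0, 1, 0, 0, 0]:
    state = step((2.0, 1.5, 0.0), x, state)
    trace.append(state)
```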
The thing is, this state doesn't just affect the behaviour, it affects the neurons that [I]make[/I] the behaviour, that is: whenever a neuron is stimulated, it is affected ever so slightly.
Some hormones within the body affect the modification rates and such (ever wonder why you can't forget that one time you did something [I][B]really[/B][/I] stupid?).
([b]Edit:[/b] Another thing to add is that animals have evolved the ability to communicate information to their offspring (with varying degrees of success).... something that is hard to replicate in a simulation).
That leads to another thing... seeing as the neural net can evolve itself over time (something that would be hard to program, and IMPOSSIBLE to evolve for (you'd have to create a cellular-level simulation and run it for years)), there is a benefit in using 'encouragement'.
That is: if the nerves in your phallus are stimulated, you must be having sex, which means however you got here was good... release chemicals to stimulate the brain, so that it learns from the experience!
Sometimes, however... encouragement can go wrong, e.g: masturbation tricks the brain ([URL="http://www.boingboing.net/2010/09/29/why-squirrels-mastur.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+boingboing%2FiBag+(Boing+Boing)"]or does it?[/URL]).
[editline]27th March 2012[/editline]
This site is really good: [url]http://www.ai-junkie.com/[/url]
It's where I first learnt about both GAs and NNs.
Great and elaborate responses, guys! Really helpful - things are much clearer to me now about what to expect from a GA and where to actually use it. I will have a look at the links you posted after work today. Thanks!