Genetic Algorithms: Converge to best solution for one, few or many environments?
Problem
Let's say we take someone's garden (A) as the environment. We want a robot to pick up a series of eggs that chickens have laid in the garden, while covering as little ground as possible (those heavy rover tracks damage the turf, you know). So we evolve a solution whereby the genes representing the robot's behaviour (i.e. we are not just evolving some chain of path states/positions) work great in that garden.
All freshly laid eggs in (A) are collected in record time.
Our algorithm has learned some tricks about navigating gardens; like driving a car for the first time. But there is more.
If I now put the same robot in a different garden (B), it may or may not do OK. That depends, I suppose, on how similar (B) is to (A) in certain key factors, i.e., whether any of the tricks learned in (A) apply here. The likelihood is that it will need new tricks, so I'm going to have to evolve it further in (B) if I want anything like an optimal solution. Yet if I don't continue to test it in (A) too at every iteration, then I may be losing fitness for (A) with every new generation, yes?
So if I want an algorithm that performs well under a wide variety of circumstances, I am going to have to test it in all of those N circumstances, for each of the M candidates; will I thus need to run M×N tests on each generation? What about circumstances I didn't cover?
Solution
Unless you define "wide variety" very precisely and prove that your algorithm is able to abstract over all varieties in the way you want, then yes, you'll have to test it every time. I think for most interesting problems you won't be able to produce such a proof.
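To make the M×N cost concrete, here is a minimal sketch of one generation's evaluation loop. The fitness function and the two-number "genome" are stand-ins invented for illustration (a real egg-collecting controller would be far richer); the point is only the structure: every candidate is scored in every environment, and the per-environment scores are aggregated pessimistically with `min()` so a genome that fails badly in any one garden cannot hide behind a good average.

```python
import random

def evaluate(genome, environment):
    # Hypothetical stand-in fitness: negative squared distance from the
    # environment's "ideal" parameter vector. In the garden example this
    # would instead be eggs collected minus ground covered.
    return -sum((g - e) ** 2 for g, e in zip(genome, environment))

def multi_env_fitness(genome, environments):
    # Run the candidate in all N environments and keep the worst score,
    # so fitness gained in garden (B) cannot mask fitness lost in (A).
    return min(evaluate(genome, env) for env in environments)

def rank_population(population, environments):
    # M candidates x N environments => M*N evaluations per generation.
    return sorted(population,
                  key=lambda g: multi_env_fitness(g, environments),
                  reverse=True)

random.seed(0)
environments = [[0.0, 0.0], [1.0, 1.0]]  # two "gardens", A and B
population = [[random.uniform(-1.0, 2.0) for _ in range(2)]
              for _ in range(5)]
best = rank_population(population, environments)[0]
```

Swapping `min` for `sum` or a weighted average changes the trade-off: averaging tolerates specialists that excel in some environments and fail in others, while the worst-case aggregation above pushes evolution toward generalists.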
For a related entertaining read, check Neural Network Follies. The author describes a project where a neural network was trained to spot tanks in images. Instead the network learned how to tell whether it's sunny or not.
Context
StackExchange Computer Science Q#62505, answer score: 3