Evolutionary algorithm in stochastic environment
Tags: algorithm, stochastic, evolutionary, environment
Problem
Consider the following model problem:
I want to use an evolutionary algorithm to optimize the starting point of particles for which it is a priori clear where they would start in state space, but not when. Once a particle is placed, it is part of the dynamics, which is stochastic, i.e. whether certain particles interact is modeled by a random variable. A mutation of the evolutionary algorithm corresponds to slightly changing the time at which the particle is introduced into the system.
I have now two choices:
1) I let the stochastic dynamics evolve while the evolutionary algorithm is optimizing, i.e. after every mutation I draw a new interaction environment from the distribution. That means the algorithm evaluates every situation in a different environment (drawn from one fixed distribution).
2) I draw an interaction pattern for every particle a priori to have one fixed environment (we assume they can be drawn independently). Then I let the algorithm optimize my problem in that deterministic environment. I do that for several environments and compute some statistic over the solutions.
Does anyone have experience with these two approaches, and can you tell me their advantages and disadvantages from a practical point of view?
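The two options amount to two different fitness-evaluation schemes. A minimal sketch (here `draw_environment`, `simulate`, and the Gaussian mutation are hypothetical stand-ins for the actual particle dynamics, not part of the question):

```python
import random

def draw_environment(rng):
    """Draw one interaction pattern from the fixed distribution (stand-in)."""
    return [rng.random() < 0.5 for _ in range(10)]

def simulate(start_time, environment):
    """Stand-in for the stochastic particle dynamics; returns a fitness."""
    return -sum(e * abs(start_time - i) for i, e in enumerate(environment))

def fitness_option1(start_time, rng):
    # Option 1: every evaluation sees a fresh environment draw,
    # so the fitness function itself is noisy.
    return simulate(start_time, draw_environment(rng))

def optimize_in_fixed_env(environment, rng, generations=100):
    # Option 2: optimize against one frozen environment (deterministic fitness),
    # here with a trivial (1+1)-style hill climber.
    best = rng.uniform(0, 10)
    for _ in range(generations):
        candidate = best + rng.gauss(0, 0.5)   # mutation: shift insertion time
        if simulate(candidate, environment) > simulate(best, environment):
            best = candidate
    return best

rng = random.Random(42)
# Option 2, repeated over several environments, then a statistic over solutions:
solutions = [optimize_in_fixed_env(draw_environment(rng), rng) for _ in range(5)]
mean_solution = sum(solutions) / len(solutions)
```

The sketch makes the trade-off visible: option 1 pushes the stochasticity into the fitness function, while option 2 moves it outside the optimizer and into the post-hoc statistic over `solutions`.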
Solution
I'd go with option 1.
A dynamic training environment is a good way to produce a robust solution (after all, natural evolution, the inspiration for EAs, is itself dynamic).
Moreover, "taking some statistic over the solutions" can be a very hard problem: you shouldn't take for granted that there is a meaningful way to combine or extract information from multiple solutions.
There are some important aspects to consider:
- population diversity has to be carefully monitored. EAs are often designed for static optimization problems, and a quick loss of diversity can be a problem in a dynamic environment;
- it seems that you have a system in which the environment changes rapidly. Try slowing it down: do not draw a new interaction environment after every mutation, but only after a sequence of mutations (in Genetic Programming this is the approach taken in Dynamic Training Subset Selection {1}). I'm not sure this will work, but it's worth a try;
- supply the EA with an explicit memory (sometimes called a Valhalla) so that it can recall useful information from past generations (e.g. {5}).
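The last two points can be combined in one loop: redraw the environment only every `k` generations instead of after every mutation, and keep an archive of past champions that is reinjected whenever the environment changes. A hedged sketch, assuming the same stand-in `draw_environment` and `simulate` as above (all parameter values are illustrative, not taken from the cited papers):

```python
import random

def draw_environment(rng):
    """Stand-in: draw one interaction pattern from the fixed distribution."""
    return [rng.random() < 0.5 for _ in range(10)]

def simulate(start_time, environment):
    """Stand-in for the particle dynamics; returns a fitness."""
    return -sum(e * abs(start_time - i) for i, e in enumerate(environment))

def evolve_slowed(rng, pop_size=20, generations=200, k=10, memory_size=5):
    population = [rng.uniform(0, 10) for _ in range(pop_size)]
    memory = []                      # explicit memory ("Valhalla") of champions
    env = draw_environment(rng)
    for gen in range(generations):
        if gen > 0 and gen % k == 0:
            # Slowed environment: redraw only every k generations.
            env = draw_environment(rng)
            # Reinject remembered champions so old knowledge can be recalled.
            population[:len(memory)] = memory
        scored = sorted(population, key=lambda t: simulate(t, env), reverse=True)
        memory = (scored[:1] + memory)[:memory_size]
        # Truncation selection + Gaussian mutation of the insertion time.
        parents = scored[:pop_size // 2]
        population = parents + [p + rng.gauss(0, 0.3) for p in parents]
    return memory[0]

best = evolve_slowed(random.Random(0))
```

Tuning `k` trades robustness against convergence speed: a small `k` approaches option 1 (very noisy fitness), a large `k` approaches option 2 (a sequence of nearly static problems).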
This is a small compendium of interesting papers.
When looking for further resources, consider that there are many kinds of dynamic environments, e.g. noisy fitness functions, environments whose optimum changes over time, and so on. They are somewhat different problems, but they share many useful techniques.
It also seems to me that you're describing a so-called co-adaptive environment (each individual's evolution drives the other individuals' evolution, so individuals keep reshaping each other's fitness landscape). You may find useful techniques in papers about co-evolution.
- {1} Dynamic Training Subset Selection for Supervised Learning in Genetic Programming by Chris Gathercole, Peter Ross (1994).
- {2} Genetic Algorithm for Adaptation to Dynamic Environments - A Survey by Naoki Mori, Hajime Kita (2000).
- {3} Evolutionary Dynamic Optimization - A Survey of the State of the Art by Juergen Branke, Trung Thanh Nguyen, Shengxiang Yang (2012).
- {4} An Evolutionary Approach for Time Dependant Optimization by Philippe Collard, Cathy Escazut, Alessio Gaspar (1997).
- {5} Memory Enhanced Evolutionary Algorithms for Changing Optimization Problems by Juergen Branke (1999).
- {6} Evolutionary Algorithms and Dynamic Optimization Problems by Karsten Weicker (2003).
Context
StackExchange Computer Science Q#57549, answer score: 2