Metaheuristic

In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information or limited computation capacity.[1] Metaheuristics sample a subset of a solution space that is too large to be enumerated completely. Metaheuristics may make few assumptions about the optimization problem being solved, and so they may be usable for a wide variety of problems.[2]

Compared to optimization algorithms and iterative methods, metaheuristics do not guarantee that a globally optimal solution can be found for some classes of problems.[2] Many metaheuristics implement some form of stochastic optimization, so that the solution found is dependent on the set of random variables generated.[1] In combinatorial optimization, by searching over a large set of feasible solutions, metaheuristics can often find good solutions with less computational effort than optimization algorithms, iterative methods, or simple heuristics.[2] As such, they are useful approaches for optimization problems.[1] Several books and survey papers have been published on the subject.[1][2][3][4][5]

Most literature on metaheuristics is experimental in nature, describing empirical results based on computer experiments with the algorithms. But some formal theoretical results are also available, often on convergence and the possibility of finding the global optimum.[2] Many metaheuristic methods have been published with claims of novelty and practical efficacy. While the field also features high-quality research, unfortunately many of the publications have been of poor quality; flaws include vagueness, lack of conceptual elaboration, poor experiments, and ignorance of previous literature.[6]

Properties

These are properties that characterize most metaheuristics:[2]

  - Metaheuristics are strategies that guide the search process.
  - The goal is to efficiently explore the search space in order to find near-optimal solutions.
  - Techniques which constitute metaheuristic algorithms range from simple local search procedures to complex learning processes.
  - Metaheuristic algorithms are approximate and usually non-deterministic.
  - Metaheuristics are not problem-specific.

Classification

Figure: Euler diagram of the different classifications of metaheuristics.

There are a wide variety of metaheuristics[1] and a number of properties along which to classify them.[2]

Local search vs. global search

One approach is to characterize the type of search strategy.[2] One type of search strategy is an improvement on simple local search algorithms. A well-known local search algorithm is the hill climbing method, which is used to find local optima. However, hill climbing does not guarantee finding globally optimal solutions.
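As a concrete illustration, here is a minimal hill-climbing sketch in Python; the objective and neighborhood functions are hypothetical stand-ins chosen for the example, not part of any particular library:

    import random

    def hill_climb(objective, neighbors, start, max_iters=10000):
        # Greedy local search: repeatedly move to the best neighbor
        # until no neighbor improves the objective (a local optimum).
        current = start
        for _ in range(max_iters):
            best = max(neighbors(current), key=objective, default=current)
            if objective(best) <= objective(current):
                return current  # local optimum reached
            current = best
        return current

    # Toy usage: maximize f(x) = -(x - 3)^2 over the integers, stepping by 1.
    f = lambda x: -(x - 3) ** 2
    print(hill_climb(f, lambda x: [x - 1, x + 1], start=random.randint(-100, 100)))

Started from any integer, this sketch terminates at x = 3, which is both the local and the global optimum of the toy objective; on multimodal objectives it would stop at whichever local optimum it reaches first.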

Many metaheuristic ideas were proposed to improve local search heuristics in order to find better solutions. Such metaheuristics include simulated annealing, tabu search, iterated local search, variable neighborhood search, and GRASP.[2] These metaheuristics can be classified as both local search-based and global search metaheuristics.
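Simulated annealing, for example, escapes local optima by occasionally accepting a worse neighbor, with a probability that shrinks as a temperature parameter decays. A minimal sketch, assuming a maximization problem and a hypothetical neighbor function:

    import math
    import random

    def simulated_annealing(objective, neighbor, start,
                            t0=1.0, cooling=0.995, steps=10000):
        # Accept improving moves always; accept worsening moves with
        # Boltzmann probability exp(delta / T), where T decays geometrically.
        current, best, temp = start, start, t0
        for _ in range(steps):
            candidate = neighbor(current)
            delta = objective(candidate) - objective(current)
            if delta > 0 or random.random() < math.exp(delta / temp):
                current = candidate
            if objective(current) > objective(best):
                best = current
            temp *= cooling
        return best

At high temperature the walk is nearly random; as the temperature falls, the behavior approaches pure hill climbing, which is what lets the method first explore broadly and then settle.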

Global search metaheuristics that are not local search-based are usually population-based. Such metaheuristics include ant colony optimization, evolutionary computation, particle swarm optimization, and genetic algorithms.[2]

Single-solution vs. population-based

Another classification dimension is single-solution vs. population-based searches.[2][5] Single-solution approaches focus on modifying and improving a single candidate solution; single-solution metaheuristics include simulated annealing, iterated local search, variable neighborhood search, and guided local search.[5] Population-based approaches maintain and improve multiple candidate solutions, often using population characteristics to guide the search; population-based metaheuristics include evolutionary computation, genetic algorithms, and particle swarm optimization.[5] Another category of metaheuristics is swarm intelligence, which is the collective behavior of decentralized, self-organized agents in a population or swarm. Ant colony optimization,[7] particle swarm optimization,[5] social cognitive optimization, the penguins search optimization algorithm, and the artificial bee colony algorithm[8] are examples of this category.
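To make the contrast concrete, here is a toy population-based search: a minimal genetic algorithm on bit strings. The fitness function and all parameter values are illustrative assumptions, not canonical choices:

    import random

    def genetic_algorithm(fitness, n_bits=20, pop_size=50,
                          generations=100, p_mut=0.05):
        # Evolve a population of bit strings by tournament selection,
        # one-point crossover, and bit-flip mutation.
        pop = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]

        def select():
            # Tournament of size 2: keep the fitter of two random parents.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        for _ in range(generations):
            nxt = []
            while len(nxt) < pop_size:
                p1, p2 = select(), select()
                cut = random.randrange(1, n_bits)  # one-point crossover
                child = p1[:cut] + p2[cut:]
                child = [b ^ (random.random() < p_mut) for b in child]
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    # OneMax: fitness is the number of ones; the optimum is all ones.
    print(sum(genetic_algorithm(sum)))  # typically prints 20 or close to it

Note the difference from the single-solution sketches above: the state of the search is an entire population, and information flows between candidates through crossover.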

Hybridization and memetic algorithms

A hybrid metaheuristic is one which combines a metaheuristic with other optimization approaches, such as algorithms from mathematical programming, constraint programming, and machine learning. Both components of a hybrid metaheuristic may run concurrently and exchange information to guide the search.

On the other hand, memetic algorithms[9] represent the synergy of evolutionary or any population-based approach with separate individual learning or local improvement procedures for problem search. An example of a memetic algorithm is the use of a local search algorithm instead of a basic mutation operator in evolutionary algorithms.
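A minimal sketch of one generation of such an algorithm appears below; the crossover and local_search arguments are hypothetical callables (local_search could, for instance, be a hill-climbing routine like the one sketched earlier):

    import random

    def memetic_step(population, fitness, crossover, local_search):
        # One generation of a memetic algorithm: recombine parents, then
        # refine each offspring by local search ("individual learning")
        # instead of relying on a basic mutation operator alone.
        offspring = [crossover(*random.sample(population, 2))
                     for _ in population]
        refined = [local_search(child) for child in offspring]
        # Elitist replacement: keep the best of parents and refined offspring.
        combined = population + refined
        combined.sort(key=fitness, reverse=True)
        return combined[:len(population)]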

Parallel metaheuristics

A parallel metaheuristic is one which uses the techniques of parallel programming to run multiple metaheuristic searches in parallel; these may range from simple distributed schemes to concurrent search runs that interact to improve the overall solution.
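One of the simplest parallel schemes is independent multi-start: launch several searches concurrently and keep the best result. The sketch below uses Python's standard library and assumes search is a picklable top-level function returning a (solution, score) pair:

    from concurrent.futures import ProcessPoolExecutor

    def parallel_restarts(search, starts, n_workers=4):
        # Run one independent search per starting point, in parallel
        # processes, and return the (solution, score) pair with best score.
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            results = list(pool.map(search, starts))
        return max(results, key=lambda pair: pair[1])

Interacting schemes, such as island-model evolutionary algorithms that periodically migrate solutions between subpopulations, require communication between the runs but follow the same overall pattern.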

Nature-inspired metaheuristics

A very active area of research is the design of nature-inspired metaheuristics. Many recent metaheuristics, especially evolutionary computation-based algorithms, are inspired by natural systems. Such metaheuristics include simulated annealing, evolutionary algorithms, ant colony optimization and particle swarm optimization. A large number of more recent metaphor-inspired metaheuristics have started to attract criticism in the research community for hiding their lack of novelty behind an elaborate metaphor.

Applications

Metaheuristics are used for combinatorial optimization, in which an optimal solution is sought over a discrete search space. An example problem is the travelling salesman problem, where the search space of candidate solutions grows faster than exponentially as the size of the problem increases, which makes an exhaustive search for the optimal solution infeasible. Additionally, multidimensional combinatorial problems, including most design problems in engineering[10][11][12] such as form-finding and behavior-finding, suffer from the curse of dimensionality, which also makes them infeasible for exhaustive search or analytical methods. Metaheuristics are also widely used for jobshop scheduling and job selection problems.[13] Popular metaheuristics for combinatorial problems include simulated annealing by Kirkpatrick et al.,[14] genetic algorithms by Holland et al.,[15] and scatter search[16] and tabu search[17] by Glover. A literature review on metaheuristic optimization[18] suggested that it was Fred Glover who coined the word metaheuristics.[19]
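On the travelling salesman problem, for instance, metaheuristics typically explore the space of tours through a local move such as 2-opt, which reverses a segment of the tour. The sketch below assumes a tour given as a list of city indices and a precomputed distance matrix d; it is an illustrative building block, not a method prescribed by the sources above:

    def two_opt(tour, d):
        # Improve a TSP tour in place by reversing segments until no
        # 2-opt move shortens it (a local optimum for this neighborhood).
        n = len(tour)
        improved = True
        while improved:
            improved = False
            for i in range(n - 1):
                for j in range(i + 2, n - (i == 0)):
                    a, b = tour[i], tour[i + 1]
                    c, e = tour[j], tour[(j + 1) % n]
                    # Replace edges (a,b) and (c,e) by (a,c) and (b,e).
                    if d[a][c] + d[b][e] < d[a][b] + d[c][e]:
                        tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                        improved = True
        return tour

A metaheuristic such as simulated annealing or iterated local search would wrap a move like this, deciding when to accept worse tours or when to restart from a perturbed one.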

Contributions

Many different metaheuristics are in existence and new variants are continually being proposed. Some of the most significant contributions to the field are:

  - 1951: Robbins and Monro work on stochastic optimization methods.[20]
  - 1954: Barricelli carries out the first simulations of the evolution process and uses them on general optimization problems.[21]
  - 1963: Rastrigin proposes random search.[22]
  - 1965: Matyas proposes random optimization.[23]
  - 1965: Nelder and Mead propose a simplex heuristic for function minimization.[24]
  - 1966: Fogel et al. propose evolutionary programming.[25]
  - 1970: Hastings proposes the Metropolis–Hastings algorithm.[26]
  - 1970: Cavicchio proposes adaptation of control parameters for an optimizer.[27]
  - 1970: Kernighan and Lin propose a graph partitioning method, related to variable-depth search and prohibition-based (tabu) search.[28]
  - 1975: Holland proposes the genetic algorithm.[15]
  - 1977: Glover proposes scatter search.[16]
  - 1978: Mercer and Sampson propose a metaplan for tuning an optimizer's parameters by using another optimizer.[29]
  - 1980: Smith describes genetic programming.[30]
  - 1983: Kirkpatrick et al. propose simulated annealing.[14]
  - 1986: Glover proposes tabu search, first mention of the term metaheuristic.[17]
  - 1989: Moscato proposes memetic algorithms.[9]
  - 1992: Dorigo introduces ant colony optimization in his PhD thesis.[7]
  - 1995: Wolpert and Macready prove the no free lunch theorems.[31] Subsequent work examines which classes of functions the theorems do and do not cover.[32][33][34]

References

  1. Bianchi, Leonora; Dorigo, Marco; Gambardella, Luca Maria; Gutjahr, Walter J. (2009). "A survey on metaheuristics for stochastic combinatorial optimization". Natural Computing. 8 (2): 239–287. doi:10.1007/s11047-008-9098-4.
  2. Blum, C.; Roli, A. (2003). "Metaheuristics in combinatorial optimization: Overview and conceptual comparison". ACM Computing Surveys. 35 (3): 268–308.
  3. Goldberg, D.E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley. ISBN 0-201-15767-5.
  4. Glover, F.; Kochenberger, G.A. (2003). Handbook of metaheuristics. 57. Springer, International Series in Operations Research & Management Science. ISBN 978-1-4020-7263-5.
  5. Talbi, E-G. (2009). Metaheuristics: from design to implementation. Wiley. ISBN 0-470-27858-7.
  6. Sörensen, Kenneth. "Metaheuristics—the metaphor exposed" (PDF). International Transactions in Operational Research. doi:10.1111/itor.12001.
  7. Dorigo, M. (1992). Optimization, Learning and Natural Algorithms (PhD thesis). Politecnico di Milano, Italy.
  8. Karaboga, Dervis (2010). "Artificial bee colony algorithm". Scholarpedia. 5 (3): 6915. doi:10.4249/scholarpedia.6915.
  9. Moscato, P. (1989). "On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms". Caltech Concurrent Computation Program (report 826).
  10. Tomoiagă B, Chindriş M, Sumper A, Sudria-Andreu A, Villafafila-Robles R. Pareto Optimal Reconfiguration of Power Distribution Systems Using a Genetic Algorithm Based on NSGA-II. Energies. 2013; 6(3):1439-1455.
  11. Ganesan, T.; Elamvazuthi, I.; Ku Shaari, Ku Zilati; Vasant, P. (2013-03-01). "Swarm intelligence and gravitational search algorithm for multi-objective optimization of synthesis gas production". Applied Energy. 103: 368–374. doi:10.1016/j.apenergy.2012.09.059.
  12. Ganesan, T.; Elamvazuthi, I.; Vasant, P. (2011-11-01). "Evolutionary normal-boundary intersection (ENBI) method for multi-objective optimization of green sand mould system". 2011 IEEE International Conference on Control System, Computing and Engineering (ICCSCE): 86–91. doi:10.1109/ICCSCE.2011.6190501.
  13. Mohammad Hossein Zarei, Mehdi Davvari, Farhad Kolahan, and Kuan Yew Wong, “Simultaneous Selection and Scheduling with Sequence-Dependent Setup Times, Lateness Penalties, and Machine Availability Constraint: Heuristic Approaches”, International Journal of Industrial Engineering Computations, vol. 7 no. 1, pp. 147-160, 2016. DOI: 10.5267/j.ijiec.2015.7.001
  14. Kirkpatrick, S.; Gelatt Jr., C.D.; Vecchi, M.P. (1983). "Optimization by Simulated Annealing". Science. 220 (4598): 671–680. doi:10.1126/science.220.4598.671. PMID 17813860.
  15. Holland, J.H. (1975). Adaptation in Natural and Artificial Systems. University of Michigan Press. ISBN 0-262-08213-6.
  16. Glover, Fred (1977). "Heuristics for Integer Programming Using Surrogate Constraints". Decision Sciences. 8 (1): 156–166. doi:10.1111/j.1540-5915.1977.tb01074.x.
  17. Glover, F. (1986). "Future Paths for Integer Programming and Links to Artificial Intelligence". Computers and Operations Research. 13 (5): 533–549. doi:10.1016/0305-0548(86)90048-1.
  18. X. S. Yang, Metaheuristic optimization, Scholarpedia, 6(8):11472 (2011).
  19. Glover F., (1986). Future paths for integer programming and links to artificial intelligence, Computers and Operations Research,13,533-549 (1986).
  20. Robbins, H.; Monro, S. (1951). "A Stochastic Approximation Method". Annals of Mathematical Statistics. 22 (3): 400–407. doi:10.1214/aoms/1177729586.
  21. Barricelli, N.A. (1954). "Esempi numerici di processi di evoluzione". Methodos: 45–68.
  22. Rastrigin, L.A. (1963). "The convergence of the random search method in the extremal control of a many parameter system". Automation and Remote Control. 24 (10): 1337–1342.
  23. Matyas, J. (1965). "Random optimization". Automation and Remote Control. 26 (2): 246–253.
  24. Nelder, J.A.; Mead, R. (1965). "A simplex method for function minimization". Computer Journal. 7: 308–313. doi:10.1093/comjnl/7.4.308.
  25. Fogel, L.; Owens, A.J.; Walsh, M.J. (1966). Artificial Intelligence through Simulated Evolution. Wiley. ISBN 0-471-26516-0.
  26. Hastings, W.K. (1970). "Monte Carlo Sampling Methods Using Markov Chains and Their Applications". Biometrika. 57 (1): 97–109. doi:10.1093/biomet/57.1.97.
  27. Cavicchio, D.J. (1970). "Adaptive search using simulated evolution". Technical Report. University of Michigan, Computer and Communication Sciences Department. hdl:2027.42/4042.
  28. Kernighan, B.W.; Lin, S. (1970). "An efficient heuristic procedure for partitioning graphs". Bell System Technical Journal. 49 (2): 291–307. doi:10.1002/j.1538-7305.1970.tb01770.x.
  29. Mercer, R.E.; Sampson, J.R. (1978). "Adaptive search using a reproductive metaplan". Kybernetes. 7 (3): 215–228. doi:10.1108/eb005486.
  30. Smith, S.F. (1980). A Learning System Based on Genetic Adaptive Algorithms (PhD Thesis). University of Pittsburgh.
  31. Wolpert, D.H.; Macready, W.G. (1995). "No free lunch theorems for search". Technical Report SFI-TR-95-02-010. Santa Fe Institute.
  32. Igel, Christian; Toussaint, Marc (June 2003). "On classes of functions for which No Free Lunch results hold". Information Processing Letters. 86 (6): 317–321. doi:10.1016/S0020-0190(03)00222-9. ISSN 0020-0190.
  33. Auger, Anne; Teytaud, Olivier (2010). "Continuous Lunches Are Free Plus the Design of Optimal Optimization Algorithms". Algorithmica. 57 (1): 121–146. doi:10.1007/s00453-008-9244-5. ISSN 0178-4617.
  34. Droste, Stefan; Jansen, Thomas; Wegener, Ingo (2002). "Optimization with Randomized Search Heuristics – The (A)NFL Theorem, Realistic Scenarios, and Difficult Functions". Theoretical Computer Science. 287 (1): 131–144. doi:10.1016/s0304-3975(02)00094-4.
