Metaheuristics

Last Updated on February 1, 2021 by Editorial Team

Author(s): Saman Siadati

Computer Science, Optimization

Magnificent lessons learned from nature

Photo by James Wainscoat on Unsplash

Heuristic algorithms typically search for an optimal answer by trial and error. There is no guarantee that they will find a solution, and other methods may be more effective or efficient. Heuristic algorithms are generally considered local search methods because they explore the neighborhood of the current solution; even so, they are often among the best ways to tackle optimization problems, especially when time constraints matter. As discussed in the previous story (heuristic solutions: the shortcut for optimization problems), one of the major weaknesses of these algorithms is getting trapped in local optima with no chance of escape. To overcome this, a new generation of approaches, metaheuristics, was developed: algorithms that explicitly or implicitly trade off search diversification (moving away when there are clues that the search has drifted into poor regions of the search space) against search intensification (homing in on the best answer within a promising region of the feasible space).
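To make the local-optimum trap concrete, here is a minimal Python sketch of a hill-climbing heuristic. The test function, starting point, and step size are illustrative assumptions, not from the original text:

```python
import math
import random

def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy local search: accept a random neighbor only if it improves f."""
    best_x, best_f = x, f(x)
    for _ in range(iters):
        candidate = best_x + random.uniform(-step, step)
        if f(candidate) > best_f:  # maximizing
            best_x, best_f = candidate, f(candidate)
    return best_x, best_f

# A multimodal function: global peak near x = 0, smaller local peak near x = 3.
f = lambda x: math.exp(-x**2) + 0.7 * math.exp(-(x - 3)**2)

# Starting near the local peak, the search climbs it and never escapes,
# even though the global optimum sits near x = 0.
print(hill_climb(f, x=2.5))
```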

The term metaheuristic (or meta-heuristic) refers to advanced heuristic algorithms: the prefix "meta" means beyond, so a metaheuristic finds optimal solutions using higher-level techniques layered on top of trial and error. A heuristic approach requires knowledge of the problem and is tailored to that particular problem. For example, the method for finding the best house to buy may differ from the method for finding a suitable car; a heuristic designed to find a house at a reasonable price is effective only for that task. This is exactly where metaheuristic algorithms shine: they can search for optimal points without problem-specific knowledge, solving a problem with reasonable speed and accuracy by providing a general-purpose search strategy.

Swarm thinking and collective decision-making can often lead to a favorable answer, and the behavior of creatures in nature reflects a kind of joint effort to reach an optimal outcome. Social behavior such as birds finding the optimal path while traveling is an example of swarm intelligence in nature. The particle swarm optimization (PSO) algorithm is one of the best-known swarm search algorithms, modeled on the social behavior of flocks of birds. Suppose a flock of birds is searching an area that contains a single piece of food. Not every bird knows where the food is, but in each iteration, every bird knows how far it is from the food. So what is the best strategy for finding it? Follow the bird that is closest to the food.

The PSO algorithm starts with a group of random particles (candidate solutions) and then searches by updating them generation after generation. In each iteration, each particle is updated using two "best" values. The first is the best solution (by fitness) that the particle itself has found so far; it is stored and called "pbest". The other is the best value achieved so far by any particle in the population, the global best, called "gbest". When a particle takes only part of the population as its topological neighbors, the best value within that neighborhood is the local best, called "lbest". After finding "pbest" and "gbest", the particle updates its velocity and position.
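As a concrete illustration, here is a minimal sketch of the "gbest" variant in Python. It is a toy 1-D implementation under assumed hyperparameters (the inertia weight w and acceleration coefficients c1, c2 are conventional choices, not values given in this story), reusing the multimodal test function from the hill-climbing sketch above:

```python
import math
import random

def pso(f, n_particles=20, bounds=(-5, 5), iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal 1-D particle swarm optimizer (maximization, gbest topology)."""
    lo, hi = bounds
    x = [random.uniform(lo, hi) for _ in range(n_particles)]  # positions
    v = [0.0] * n_particles                                   # velocities
    pbest = x[:]                               # best position of each particle
    pbest_f = [f(p) for p in pbest]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]      # best position of the whole swarm

    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # Velocity update: inertia + pull toward pbest + pull toward gbest.
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] += v[i]
            fx = f(x[i])
            if fx > pbest_f[i]:                # new personal best
                pbest[i], pbest_f[i] = x[i], fx
                if fx > gbest_f:               # new global best
                    gbest, gbest_f = x[i], fx
    return gbest, gbest_f

# Same multimodal function as in the hill-climbing sketch.
f = lambda x: math.exp(-x**2) + 0.7 * math.exp(-(x - 3)**2)
print(pso(f))  # typically returns x close to 0, the global optimum
```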

PSO is a general-purpose optimization method suited to problems whose answer is a point or surface in n-dimensional space. Particles are seeded in this space with initial positions and velocities, along with communication channels between them. The particles then move through the solution space, and their results are evaluated against a fitness criterion after each time step. Over time, particles accelerate toward those particles in their communication group that have higher fitness. The main advantage of this method over other optimization strategies is that a large swarm of particles makes it more resilient to the problem of local optima.
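Continuing the sketch above, here is a quick, illustrative check of that claim: with more particles, it becomes more likely that at least one particle starts in (or wanders into) the basin of the global peak, so the swarm escapes the local optimum more reliably. The swarm sizes, bounds, and run counts are arbitrary choices for demonstration:

```python
# Reusing f and pso from the sketch above: count how often runs with
# different swarm sizes end near the global peak at x = 0.
for n in (2, 5, 20):
    runs = [pso(f, n_particles=n, bounds=(-1, 4), iters=100) for _ in range(20)]
    hits = sum(abs(x_best) < 0.5 for x_best, _ in runs)
    print(f"{n:>2} particles: {hits}/20 runs reached the global optimum")
```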

Other nature-inspired metaheuristics include the ant colony optimization algorithm, the bee colony algorithm, the cuckoo search algorithm, the grey wolf optimizer, the dragonfly algorithm, the flower pollination algorithm, the whale optimization algorithm, the locust optimization algorithm, and the emperor penguin colony optimization algorithm. In almost all of them, the underlying approach draws on Darwin's theory of evolution, which also forms the foundation of a new generation of algorithms: evolutionary algorithms. Please read the next story for more on this wonderful family of algorithms.


Metaheuristics was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.

Published via Towards AI
