A Fundamental Introduction to Genetic Algorithms - Part Two
Last Updated on April 16, 2026 by Editorial Team
Author(s): Hossein Chegini
Originally published on Towards AI.

“A 100-Queen solution” (picture from ‘repo/images/solutions’)
Code Investigation
In the previous introduction, I provided a detailed explanation of the fundamental steps involved in training a Genetic Algorithm (GA). I discussed important concepts such as mutation, genes, chromosomes, and genetic population, and presented a case study on solving the N-Queen problem using a GA.
Following the publication of the article, I proceeded to create a repository and converted my previously written Matlab code into Python code. In this article, my focus is on explaining the different components of the repository and how the main file is structured to set up the GA scenario and find the optimal solution. You can find the repo here.
The main file (n_queen_solver.py) serves as the entry point for setting up the GA model and initiating the training process. It prompts the user to provide essential parameters that are crucial for configuring the GA model. These parameters include:
- Chromosome size or Chessboard Size: The size of the chessboard, which represents the number of queens and the dimensions of the board.
- Population Size: The number of chromosomes in the population, representing the number of candidate solutions.
- Epochs: The number of iterations or generations for which the GA model will be trained.
import argparse

parser = argparse.ArgumentParser(description='Computation of the GA model for solving the n-queen problem.')
parser.add_argument('chromosome_size', type=int, help='The size of a chromosome')
parser.add_argument('population_size', type=int, help='The size of the population of chromosomes')
parser.add_argument('epoches', type=int, help='The number of iterations to train the GA model')
args = parser.parse_args()
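Assuming the script keeps the name n_queen_solver.py, an invocation might look like `python n_queen_solver.py 8 100 500` (board size 8, population 100, 500 epochs). The snippet below rebuilds the same parser and feeds it that sample argument list, just to show how the three values arrive in `args`:

```python
import argparse

# Rebuild the same parser as in n_queen_solver.py and parse a sample
# command line ('8 100 500') instead of reading sys.argv.
parser = argparse.ArgumentParser(description='Computation of the GA model for solving the n-queen problem.')
parser.add_argument('chromosome_size', type=int)
parser.add_argument('population_size', type=int)
parser.add_argument('epoches', type=int)

args = parser.parse_args(['8', '100', '500'])
print(args.chromosome_size, args.population_size, args.epoches)
```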
After obtaining the parameters, the next block of code is responsible for initializing the population. The ‘init_population()’ method generates the population based on the specified number of individuals, using the encoding explained in the previous article. It returns the initialized population back to the main method of the file.
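The repository's actual ‘init_population()’ follows the encoding from the previous article; since the fitness function below only checks diagonals, row and column clashes are presumably ruled out by the encoding itself. A minimal sketch under that assumption, with each chromosome a random permutation of row indices (queen i sits in column i, row chrom[i]):

```python
import random

def init_population(population_size, chromosome_size):
    # Illustrative sketch, not the repo's code: each chromosome is a
    # random permutation of 0..chromosome_size-1, so no two queens
    # ever share a row or a column.
    population = []
    for _ in range(population_size):
        chrom = list(range(chromosome_size))
        random.shuffle(chrom)
        population.append(chrom)
    return population

pop = init_population(10, 8)
```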
The fitness function and the calculation of the fitness score play a crucial role in selecting the best parents and guiding their reproduction to ensure the program progresses along the optimal path. For the simplicity and clarity of this implementation, I have chosen a straightforward fitness method. The following code block demonstrates the ‘fitness()’ method:
def fitness(chrom, chromosome_size):
    q = 0
    # Count pairs of queens sharing a "\" diagonal (i - chrom[i] is constant)
    for i1 in range(chromosome_size):
        tmp = i1 - chrom[i1]
        for i2 in range(i1 + 1, chromosome_size):
            q = q + (tmp == (i2 - chrom[i2]))
    # Count pairs of queens sharing a "/" diagonal (i + chrom[i] is constant)
    for i1 in range(chromosome_size):
        tmp = i1 + chrom[i1]
        for i2 in range(i1 + 1, chromosome_size):
            q = q + (tmp == (i2 + chrom[i2]))
    return 1 / (q + 0.001)
The ‘fitness()’ method in this block receives an individual chromosome and its size as input parameters and returns its fitness score. The function checks whether any two queens in the chromosome attack each other along a diagonal; each time such a collision is found, the variable ‘q’ is incremented by one. The chromosome's fitness is thus evaluated based on the number of queen collisions.
The expression ‘1 / (q + 0.001)’ turns this count into a fitness score. Because it is the reciprocal of ‘q + 0.001’, the score is higher for chromosomes with fewer queen collisions (i.e., a lower value of ‘q’), and the added ‘0.001’ avoids division by zero.
The fitness score is used to assess the quality of each chromosome in the population: the higher the score, the better the chromosome. A fitness score of 1000 (i.e., 1 / 0.001, reached when ‘q’ is zero) signifies that a solution has been found, and the program can terminate without further operations.
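As a quick check of the scoring, the snippet below applies the ‘fitness()’ method above to two 4-queen chromosomes: one valid solution and one with every queen on the main diagonal. The board size 4 and the sample chromosomes are illustrative choices, not taken from the repository:

```python
def fitness(chrom, chromosome_size):
    q = 0
    for i1 in range(chromosome_size):
        tmp = i1 - chrom[i1]
        for i2 in range(i1 + 1, chromosome_size):
            q = q + (tmp == (i2 - chrom[i2]))
    for i1 in range(chromosome_size):
        tmp = i1 + chrom[i1]
        for i2 in range(i1 + 1, chromosome_size):
            q = q + (tmp == (i2 + chrom[i2]))
    return 1 / (q + 0.001)

print(fitness([1, 3, 0, 2], 4))  # 1000.0 -- no collisions, a valid solution
print(fitness([0, 1, 2, 3], 4))  # ~0.167 -- all 6 pairs collide on one diagonal
```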
import numpy as np
from tqdm import tqdm

def train_population(population, epoches, chromosome_size):
    num_best_parents = 2
    ft = []
    success_boolean = False
    population_size = len(population)
    for i1 in tqdm(range(epoches)):
        # Score every chromosome in the current population
        fitness_score = []
        for i2 in range(population_size):
            fitness_score.append(fitness(population[i2], chromosome_size))
        ft.append(sum(fitness_score) / population_size)  # average fitness of this generation
        # Sort the population by fitness (ascending), best chromosomes last
        pop = np.concatenate((population, np.expand_dims(fitness_score, axis=1)), axis=1)
        sorted_indices = np.argsort(pop[:, -1])
        pop_sorted = pop[sorted_indices]
        pop = pop_sorted[:, :-1]
        # Mutate the best parents and let their offspring replace the worst chromosomes
        best_parents = pop[-num_best_parents:]
        best_parents_muted = [mutation(best_parents[i], chromosome_size) for i in range(num_best_parents)]
        pop[0:num_best_parents] = best_parents_muted
        population = pop
        if ft[-1] == 1000:  # the model might pass the optimum, so stop as soon as the solution's score is touched
            print('Woowww, the model could find the solution!!')
            print('Here is an example of a solution : ', population[-1])
            success_boolean = True
            break
    return population, ft, success_boolean
The genetic algorithm (GA) employs a selection process to choose parents with higher fitness scores for reproduction through mutation or crossover. The resulting offspring are added to the population, while chromosomes with lower fitness scores are excluded from the next training round. The line ‘if ft[-1] == 1000’ checks whether the latest fitness score indicates convergence to a solution. If the condition is true, the loop breaks and the method terminates, making way for the execution of subsequent blocks.
HINT: After reaching the solution or finding the global optimum of the solution space, it is possible for the program to continue executing operations. To ensure that the program terminates after finding the best fitness score, a break statement is used. The break statement exits the loop and ensures that no further operations are performed.
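The repository's ‘mutation()’ method is not shown here. A common choice that keeps a permutation encoding valid is a swap mutation, sketched below as an illustrative stand-in (the actual implementation in the repo may differ):

```python
import random

def mutation(chrom, chromosome_size):
    # Hypothetical swap mutation: exchange the rows of two randomly
    # chosen queens, so the offspring stays a valid permutation.
    child = list(chrom)
    i, j = random.sample(range(chromosome_size), 2)
    child[i], child[j] = child[j], child[i]
    return child

child = mutation([0, 1, 2, 3, 4], 5)
```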

In the figure above, we observe the step-wise behavior of the learning curve. The program remains at a fitness score of 0 for the first 28 epochs and then suddenly jumps to a fitness score of 100. In this run, the solution is reached after 70 epochs, with a brief period where the program gets stuck at a fitness score of 600. You can run the program to generate additional learning curves, or check the “repo/images/learning_curve” directory for existing ones. After training the model, two additional methods, “fitness_curve_plot” and “n_queen_plot”, are called to display the learning curve and visualize the positions of the queens on the chessboard.
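The repository's ‘n_queen_plot’ presumably draws the board with a plotting library; as a dependency-free illustration of the same idea, the helper below renders a chromosome as a text chessboard. The function name and rendering are my own, not from the repo:

```python
def board_as_text(chrom):
    # Render a chromosome as a text chessboard: queen i sits in
    # column i, row chrom[i]; 'Q' marks a queen, '.' an empty square.
    n = len(chrom)
    rows = []
    for r in range(n):
        rows.append(' '.join('Q' if chrom[c] == r else '.' for c in range(n)))
    return '\n'.join(rows)

print(board_as_text([1, 3, 0, 2]))
```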
Conclusion and Questions
This article presented various blocks of Python code for solving the N-Queen problem. The code can be modified and enhanced in different ways to improve efficiency or add additional capabilities for finding solutions. If you have any questions or suggestions about the code repository, feel free to share them in the comments.
Additionally, I invite you to consider the following questions and provide your insights:
- Can you propose another problem that could be solved using a genetic algorithm?
- Please share your thoughts on the encoding process, which is a crucial aspect of genetic algorithms.
In the next article, I plan to explore a more challenging case that goes beyond the N-Queen problem and may require more iterations to find a solution. Stay tuned for more exciting developments.
Published via Towards AI