
Monte Carlo Simulation: An In-Depth Tutorial with Python

Last Updated on October 21, 2021 by Editorial Team


Figure 1: The Monte Carlo Simulation methods are used in many industries, from the stock market to finance, energy, banking, and other forecasting models. | Source: Pexels

An in-depth tutorial on the Monte Carlo Simulation methods and applications with Python

Author(s): Pratik Shukla, Roberto Iriondo

What is the Monte Carlo Simulation?

A Monte Carlo method is a technique that uses random numbers and probability to solve complex problems. The Monte Carlo simulation, or probability simulation, is a technique used to understand the impact of risk and uncertainty in financial sectors, project management, costs, and other forecasting machine learning models.

Risk analysis is part of almost every decision we make, as we constantly face uncertainty, ambiguity, and variability in our lives. Moreover, even though we have unprecedented access to information, we cannot accurately predict the future.

The Monte Carlo simulation allows us to see the range of possible outcomes of our decisions and assess the impact of risk, which in turn allows for better decision-making under uncertainty.

In this article, we will go through five different examples to understand the Monte Carlo Simulation methods.

📚 Resources: Google Colab Implementation | GitHub Repository 📚

Applications:

  • Finance.
  • Project Management.
  • Energy.
  • Manufacturing.
  • Engineering.
  • Research and Development.
  • Insurance.
  • Oil and Gas.
  • Transportation.
  • Environment.
  • And others.

Examples:

  1. Coin Flip Example.
  2. Estimating PI Using Circle and Square.
  3. Monty Hall Problem.
  4. Buffon’s Needle Problem.
  5. Why Does the House Always Win?

1. Coin Flip Example:

Figure 2: Heads and tails, mathematical representation.

While flipping a coin:

Figure 3: Formula for heads and tails coin example.

Next, we are going to prove this formula experimentally using the Monte Carlo Method.

Python Implementation:

  1. Import required libraries:
Figure 4: Import the required libraries for our coin flipping example.

2. Coin flip function:

Figure 5: A simple function randomizing the results between 0 and 1, 0 for heads and 1 for tails.

3. Checking the output of the function:

Figure 6: Running the coin_flip() function.

4. Main function:

Figure 7: Calculating the probability and appending the probability values to the results.

5. Calling the main function:

Figure 8: Calling the Monte Carlo main function, along with plotting final values.

As shown in figure 8, after 5000 iterations the estimated probability of getting tails is 0.502, very close to the theoretical value of 0.5. This is how we can use the Monte Carlo Simulation to find probabilities experimentally.
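Since the implementation itself only appears in the figures, here is a minimal, self-contained sketch of the same idea. The names coin_flip and monte_carlo follow the figure captions, but the exact code in the figures may differ.

```python
import random
import matplotlib.pyplot as plt

def coin_flip():
    """Return 0 for heads or 1 for tails, each with probability 1/2."""
    return random.randint(0, 1)

def monte_carlo(n):
    """Flip the coin n times and return the running estimate of P(tails)."""
    tails = 0
    probabilities = []
    for i in range(1, n + 1):
        tails += coin_flip()
        probabilities.append(tails / i)  # estimate after i flips
    return probabilities

probabilities = monte_carlo(5000)
plt.axhline(y=0.5, color="r", linestyle="-")  # theoretical probability
plt.plot(probabilities)
plt.xlabel("Iterations")
plt.ylabel("Estimated P(tails)")
plt.show()
print("Final estimate:", probabilities[-1])
```

The running estimate fluctuates for small numbers of flips and settles near 0.5 as the iteration count grows.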


2. Estimating PI Using Circle and Square:

Figure 9: Simple area of a circle and of a square.
Figure 10: Calculation of the area of a circle and square, respectively.

To estimate the value of PI, we need the area of a square and the area of the circle inscribed in it. Rather than computing these areas exactly, we randomly place dots on the surface and count the dots that fall inside the circle and the dots that fall inside the square. The counts act as estimates of the two areas, and since the ratio of the circle's area to the square's area is PI/4, four times the ratio of the counts gives us an estimate of PI.

In the following code, we used the turtle module of Python to see the random placement of dots.

Python Implementation:

  1. Import required libraries:
Figure 10: Importing required libraries for our π example.

2. To visualize the dots:

Figure 11: Drawing the figures.

3. Initialize some required data:

Figure 12: Initializing data values.

4. Main function:

Figure 13: Implementing the main function.

5. Plot the data:

Figure 14: Plotting the data values.

6. Output:

Figure 15: π approximations using the Monte Carlo methodology.
Figure 16: Data visualization of the values.
Figure 17: Data visualization of the values.

As shown in figure 17, after 5000 iterations we get a good approximation of PI. Notice also that the estimation error shrinks as the number of iterations grows; for Monte Carlo methods it decreases roughly in proportion to 1/√N.
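The figures use Python's turtle module to animate the dot placement; the sketch below drops the animation and keeps only the estimator, so the function name estimate_pi and the plotting choices are mine rather than the article's.

```python
import math
import random
import matplotlib.pyplot as plt

def estimate_pi(n):
    """Scatter n random points in a 2x2 square centred at the origin and use
    the fraction that lands inside the inscribed unit circle to estimate pi."""
    inside_circle = 0
    estimates = []
    for i in range(1, n + 1):
        x = random.uniform(-1, 1)
        y = random.uniform(-1, 1)
        if x * x + y * y <= 1:           # the point falls inside the circle
            inside_circle += 1
        # circle area / square area = pi/4, so pi ~ 4 * (hits / total points)
        estimates.append(4 * inside_circle / i)
    return estimates

estimates = estimate_pi(5000)
plt.axhline(y=math.pi, color="r", linestyle="-")
plt.plot(estimates)
plt.xlabel("Iterations")
plt.ylabel("Estimated value of pi")
plt.show()
print("Final estimate:", estimates[-1])
```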


📚 Check out an overview of machine learning algorithms for beginners with code examples in Python. 📚

3. Monty Hall Problem:

Suppose you are on a game show, and you have the choice of picking one of three doors: Behind one door is a car; behind the other doors, goats. You pick a door, let’s say door 1, and the host, who knows what’s behind the doors, opens another door, say door 3, which has a goat. The host then asks you: do you want to stick with your choice or choose another door? [1]

Is it to your advantage to switch your choice of door?

Based on probability, it turns out it is to our advantage to switch the doors. Let’s find out how:

Initially, for all three doors, the probability (P) of getting the car is the same (P = 1/3).

Figure 18: A simulation of the three doors for our game show, showcasing each of the possible outcomes.

Now assume that the contestant chooses door 1. The host then opens the third door, which has a goat, and asks the contestant whether they want to switch doors.

Let's see why it is more advantageous to switch:

Figure 19: A figurative outcome for the door game show.

In figure 19, we can see that doors 2 and 3 together have a 2/3 probability of hiding the car. Once the host opens door 3 and reveals a goat, that entire 2/3 probability shifts to door 2, while door 1 keeps its original 1/3. Hence, it is more advantageous to switch doors.

Now we are going to use the Monte Carlo Method to perform this test case many times and find out its probabilities in an experimental way.

Python Implementation:

  1. Import required libraries:
Figure 20: Importing required libraries for our game show example.

2. Initialize some data:

Figure 21: Initializing the doors and empty lists to store the probability values.

3. Main function:

Figure 22: Implementing the main function with a Monte Carlo Simulation method.

4. Calling the main function:

Figure 23: Calling the main function of our game show example, iterating 1000 times.

5. Output:

Figure 24: Approximate winning probabilities of sticking with your choice or switching doors.

In figure 24, we can see that after 1000 iterations, the winning probability when switching doors is 0.669, close to the theoretical value of 2/3. Therefore, we can be confident that switching doors works to our advantage in this example.
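As with the previous examples, the code only appears in the figures; the following is a minimal sketch of the simulation, with monty_hall as a hypothetical function name. It relies on the fact that switching wins exactly when the initial pick was wrong.

```python
import random
import matplotlib.pyplot as plt

def monty_hall(n):
    """Simulate n rounds and return running winning probabilities for the
    'switch' and 'stick' strategies."""
    doors = [1, 2, 3]
    switch_wins = stick_wins = 0
    p_switch, p_stick = [], []
    for i in range(1, n + 1):
        car = random.choice(doors)      # door hiding the car
        pick = random.choice(doors)     # contestant's initial pick
        # The host always opens a goat door, so switching wins whenever
        # the initial pick was not the car.
        if pick == car:
            stick_wins += 1
        else:
            switch_wins += 1
        p_switch.append(switch_wins / i)
        p_stick.append(stick_wins / i)
    return p_switch, p_stick

p_switch, p_stick = monty_hall(1000)
plt.axhline(y=2 / 3, color="r", linestyle="-")
plt.plot(p_switch, label="Switch")
plt.plot(p_stick, label="Stick")
plt.xlabel("Iterations")
plt.ylabel("Winning probability")
plt.legend()
plt.show()
print("P(win | switch) ~", p_switch[-1], " P(win | stick) ~", p_stick[-1])
```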


4. Buffon’s Needle Problem:

The French nobleman Georges-Louis Leclerc, Comte de Buffon, posed the following problem in 1777 [2] [3].

Suppose that we drop a short needle on ruled paper: what would be the probability that the needle comes to lie in a position where it crosses one of the lines?

The probability depends on the distance (d) between the lines of the ruled paper and on the length (l) of the needle that we drop, or rather, on the ratio l/d. For this example, we assume the short-needle case, l ≤ d, so that a needle cannot cross two different lines at the same time. Surprisingly, the answer to Buffon’s needle problem involves PI.

Here we are going to use the solution of Buffon’s needle problem to estimate the value of PI experimentally using the Monte Carlo Method. Before doing so, we will derive that solution, which makes the example more interesting.

Theorem:

If a short needle, of length l, is dropped on a paper that is ruled with equally spaced lines of distance d β‰₯ l, then the probability that the needle comes to lie in a position where it crosses one of the lines is:

Figure 25: Buffon’s needle problem theorem.

Proof:

Figure 26: Visualizing Buffon’s needle problem.

Next, we need to count the number of needles that cross any of the vertical lines. For a specific value of theta, the following are the maximum and minimum possible values for which a needle can intersect a vertical line.

  1. Maximum Possible Value:
Figure 27: Maximum possible value.

2. Minimum Possible Value:

Figure 28: Minimum possible value.

Therefore, for a specific value of theta, the probability for a needle to lie on a vertical line is:

Figure 29: Probability for a needle to lie on a vertical line formula.

The above probability formula holds only for a single value of theta; in our experiment, theta ranges from 0 to pi/2. Next, we find the actual probability by integrating over all values of theta.

Figure 30: Probability formula by integrating all possible values for theta.
Figure 31: PI estimation.
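Because the formulas in figures 29-31 are only visible as images, here is the short-needle derivation written out, following the same steps (fixed-theta probability, then averaging theta over [0, pi/2]):

```latex
% For a fixed angle \theta, the needle crosses a line with probability
% \frac{l\sin\theta}{d}. Averaging over \theta uniform on [0, \pi/2]:
\[
P \;=\; \frac{1}{\pi/2}\int_{0}^{\pi/2}\frac{l\sin\theta}{d}\,d\theta
  \;=\; \frac{2l}{\pi d}\,\bigl[-\cos\theta\bigr]_{0}^{\pi/2}
  \;=\; \frac{2l}{\pi d},
\qquad\text{and therefore}\qquad
\pi \;\approx\; \frac{2l}{P\,d}.
\]
```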

Estimating PI using Buffon’s needle problem:

Next, we are going to use the above formula to find out the value of PI experimentally.

Figure 32: Finding the value of PI.

Now, notice that we have the values for l and d. Our goal is to find the probability P first so that we can then get the value of PI. To find P, we need the count of needles that hit a line and the total count of needles. Since we already have the total count of needles, the only thing we still require is the count of hits.

Below is the visual representation of how we are going to calculate the count of hit needles.

Figure 33: Visual representation to calculate the count of needles.

Python Implementation:

  1. Import required libraries:
Figure 34: Importing the required libraries for our problem.

2. Main function:

Figure 35: Implementing the Monte Carlo Simulation method to our Buffon problem.

3. Calling the main function:

Figure 36: Calling the Monte Carlo Method’s main function to our Buffon’s problem.

4. Output:

Figure 37: Data visualization of 100 iterations using the Monte Carlo Method.

As shown in figure 37, after 100 iterations we are able to get a value quite close to PI using the Monte Carlo Method.
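Once more the implementation lives in the figures; below is a minimal sketch of the estimator under the short-needle assumption. The needle length l = 1, line spacing d = 2, and the function name buffon_needle are illustrative choices, not values taken from the article.

```python
import math
import random
import matplotlib.pyplot as plt

def buffon_needle(total_needles, l=1.0, d=2.0):
    """Drop needles of length l onto lines spaced d apart (l <= d) and
    return running estimates of pi from the hit count."""
    hits = 0
    estimates = []
    for i in range(1, total_needles + 1):
        x = random.uniform(0, d / 2)             # centre-to-nearest-line distance
        theta = random.uniform(0, math.pi / 2)   # angle with the lines
        if x <= (l / 2) * math.sin(theta):       # the needle crosses a line
            hits += 1
        if hits:                                 # avoid dividing by zero early on
            estimates.append((2 * l * i) / (d * hits))
    return estimates

estimates = buffon_needle(5000)
plt.axhline(y=math.pi, color="r", linestyle="-")
plt.plot(estimates)
plt.xlabel("Needles dropped")
plt.ylabel("Estimated value of pi")
plt.show()
print("Final estimate:", estimates[-1])
```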


How does the Monte Carlo Simulation apply to casinos, and why does the house always win?
Source: Pexels

5. Why Does the House Always Win?

How do casinos earn money? The trick is straightforward: "The more you play, the more they earn." Let us take a look at how this works with a simple Monte Carlo Simulation example.

Consider an imaginary game in which a player has to choose a chip from a bag of chips.

Rules:

  1. There are chips containing numbers ranging from 1–100 in a bag.
  2. Users can bet on even or odd chips.
  3. In this game, 10 and 11 are special numbers. If we bet on evens, then 10 will be counted as an odd number, and if we bet on odds, then 11 will be counted as an even number.
  4. If we bet on even numbers and we get 10 then we lose.
  5. If we bet on odd numbers and we get 11 then we lose.

If we bet on odds, the probability that we win is 49/100, and the probability that the house wins is 51/100. Therefore, for an odd bet the house edge is 51/100 - 49/100 = 2/100 = 0.02 = 2%.

If we bet on evens, the probability that the user wins is again 49/100, and the probability that the house wins is 51/100. Hence, for an even bet the house edge is also 51/100 - 49/100 = 2/100 = 0.02 = 2%.

In summary, for every $1 bet, $0.02 goes to the house. For comparison, the house edge on single-zero roulette is about 2.7%. Consequently, you have a slightly better chance of winning at our imaginary game than at roulette.
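As a quick sanity check of the 2% figure, the house edge is just the negative of the player's expected profit per bet; a tiny sketch (the $1 stake is assumed for illustration):

```python
p_win, p_lose = 49 / 100, 51 / 100           # from the rules above
expected_value = p_win * 1 + p_lose * (-1)   # profit on a $1 bet
print(expected_value)                        # -0.02, i.e. a 2% house edge
```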

Python Implementation:

  1. Import required libraries:
Figure 38: Importing the required libraries for our casino problem.

2. Player’s bet:

Figure 39: Placing bets on odd and even numbers.

3. Main function:

Figure 40: Applying the Monte Carlo Methodology to our casino problem.

4. Final output:

Figure 41: Calculating and displaying the final values.

5. Running it for 1000 iterations:

Figure 42: Running our function 1000 times.

6. Number of bets = 5:

Figure 43: Data visualization of results when the number of bets equals five.

7. Number of bets = 10:

Figure 44: Data visualization of results when the number of bets equals ten.

8. Number of bets = 1000:

Figure 45: Data visualization of results when the number of bets equals 1000.

9. Number of bets = 5000:

Figure 46: Data visualization of results when the number of bets equals 5000.

10. Number of bets = 10000:

Figure 47: Data visualization of results when the number of bets equals 10000.

From the above experiment, we can see that the player has a better chance of coming out ahead when they place fewer bets. In some runs we get negative numbers, which means that the player lost all of their money and went into debt instead of making a profit.
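For completeness, here is a minimal sketch of the game itself under the stated rules. The function name play, the $1 stake, and the 100-player averaging loop are my own choices for illustration; the code in the figures may be organized differently.

```python
import random

def play(bet, num_bets, stake=1):
    """Play num_bets rounds at a fixed stake and return the final balance.
    bet is either 'even' or 'odd'."""
    balance = 0
    for _ in range(num_bets):
        chip = random.randint(1, 100)
        if bet == "even":
            win = chip % 2 == 0 and chip != 10   # 10 counts as a loss for even bets
        else:
            win = chip % 2 == 1 and chip != 11   # 11 counts as a loss for odd bets
        balance += stake if win else -stake
    return balance

# Average final balance over 100 simulated players for various numbers of bets.
for num_bets in (5, 10, 1000, 5000, 10000):
    finals = [play("even", num_bets) for _ in range(100)]
    print(f"bets={num_bets:>6}: average final balance = {sum(finals) / len(finals):+.2f}")
```

With more bets, the 2% edge compounds, and the average final balance drifts further below zero, which is exactly the "house always wins" effect.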

Please keep in mind that these percentages are for our fictitious game, and they can be modified.


Conclusion:

As with any forecasting model, the simulation will only be as good as the estimates we make. It is important to remember that the Monte Carlo Simulation only represents probabilities, not certainty. Nevertheless, it can be a valuable tool when forecasting an unknown future.

📚 Check out our tutorial on neural networks from scratch with Python code and math in detail. 📚


DISCLAIMER: The views expressed in this article are those of the author(s) and do not represent the views of Carnegie Mellon University. These writings do not intend to be final products, yet rather a reflection of current thinking, along with being a catalyst for discussion and improvement.

Published via Towards AI

Citation

For attribution in academic contexts, please cite this work as:

Shukla, et al., “Monte Carlo Simulation: An In-Depth Tutorial with Python”, Towards AI, 2020

BibTex citation:

@article{pratik_iriondo_2020,
 title={Monte Carlo Simulation: An In-Depth Tutorial with Python},
 url={https://towardsai.net/monte-carlo-simulation},
 journal={Towards AI},
 publisher={Towards AI Co.},
 author={Shukla, Pratik and Iriondo, Roberto},
 year={2020},
 month={Aug}
}

References:

[1] Probability Question Quote, 21 Movie, https://www.imdb.com/title/tt0478087/characters/nm0000228#quotes

[2] Georges-Louis Leclerc, Comte de Buffon, Wikipedia, https://en.wikipedia.org/wiki/Georges-Louis_Leclerc,_Comte_de_Buffon

[3] Buffon’s needle problem, Wikipedia, https://en.wikipedia.org/wiki/Buffon%27s_needle_problem

Resources:

GitHub tutorial repository.

Google Colab implementation.
