

Minimizing the Mean Square Error: Bayesian approach: Part 1 (a)

Last Updated on June 4, 2024 by Editorial Team

Author(s): Varun Nakra

Originally published on Towards AI.


This will be a three-part series: Part 1 will illustrate the need for the Bayesian MSE with an example; Part 2 will establish the foundations of the Bayesian MSE; and Part 3 will return to the example from Part 1 for a conclusion and summary. Each part will be subdivided if it grows unwieldy.

1. The MVU estimator could be a myth!

In the article on the Mean Square Error, https://pub.towardsai.net/minimizing-the-mean-square-error-frequentist-approach-0c827dce51a9, we discussed minimizing the mean square error using the frequentist approach and established that minimizing the MSE directly is infeasible in
most practical cases. Therefore, we settle for the MVUE, which is the estimator with the minimum variance subject to an 'unbiasedness' constraint. The question then arises: does the MVU estimator always exist?

Thus, the existence of an MVU estimator for all values of the true parameter is not always guaranteed!

2. 'The sample mean of a Normal distribution is MVUE'

Say the random variable X is normally distributed with mean µ and variance σ².

Consider estimating the population mean µ using the sample mean of n observations x_1, . . . , x_n. The sample mean is the estimator of the population mean and is defined as

x̄ = (x_1 + x_2 + . . . + x_n) / n (Equation 1)

The sample mean is also the MVUE of the population mean because it is unbiased and its variance, σ²/n, is the minimum possible among all unbiased estimators of the population mean (it attains the Cramér-Rao lower bound, so we can't do any better!).

Note that the sample mean is itself a random variable, because different samples would give us different values of the sample mean. Therefore, there is a 'sampling' distribution of the sample mean. What would be its PDF? Think Central Limit Theorem! Since X is normal here, the sample mean is exactly normally distributed, with mean = µ and variance = σ²/n.
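To see this sampling distribution concretely, here is a minimal simulation sketch (the parameter values µ = 2, σ = 3, n = 50 are hypothetical, chosen only for illustration) that repeatedly draws samples and checks that the sample means have mean ≈ µ and variance ≈ σ²/n:

```python
import random
import statistics

random.seed(0)

mu, sigma, n = 2.0, 3.0, 50    # hypothetical true mean, std dev, sample size
n_trials = 20_000              # number of repeated samples

# For each trial, draw a sample of size n and record its sample mean.
sample_means = [
    statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(n_trials)
]

# The sampling distribution of the mean is N(mu, sigma^2 / n).
print(statistics.fmean(sample_means))     # close to mu = 2.0
print(statistics.variance(sample_means))  # close to sigma^2 / n = 0.18
```

The empirical variance of the sample means shrinks as 1/n, which is exactly the σ²/n behaviour claimed above.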

3. But, 'The trimmed mean of a Normal distribution is not MVUE!'

For most practical purposes, the population mean µ will be bounded within a certain range of values, say [−µ0, µ0], rather than being infinite in either direction! But using the sample mean from Equation 1 above can lead to estimates that fall outside the desired interval. Therefore, we will have to improve our estimation by using a 'truncated' or 'trimmed' sample mean, which clamps the sample mean to the interval: it equals −µ0 when the sample mean falls below −µ0, equals µ0 when the sample mean exceeds µ0, and equals the sample mean itself otherwise.

Similar to what we did earlier, we will derive the MSE of the truncated sample mean. Unfortunately, we do not yet know the expected value of the truncated sample mean or its variance; we will derive them from its PDF. The truncated sample mean is a 'mixed' random variable, i.e., it takes discrete values (the boundaries of the population mean) as well as continuous values. Therefore, its PDF will be a combination of 'delta' functions and unit step functions. In order to explain the use of these functions for defining the PDF of the truncated mean, I will have to digress a bit.
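A quick simulation makes this 'mixed' nature visible. With hypothetical values µ = 0.9, σ = 1, n = 4 and bound µ0 = 1 (chosen so the bound is often hit), a noticeable fraction of trimmed means lands exactly on the boundary (a point mass), while the rest spread continuously inside the interval:

```python
import random
import statistics

random.seed(1)

mu, sigma, n, mu0 = 0.9, 1.0, 4, 1.0   # hypothetical parameter values
n_trials = 10_000

def trimmed_mean(xs, bound):
    xbar = statistics.fmean(xs)
    return max(-bound, min(bound, xbar))

estimates = [
    trimmed_mean([random.gauss(mu, sigma) for _ in range(n)], mu0)
    for _ in range(n_trials)
]

# Discrete part: probability mass sitting exactly at the boundary +mu0.
at_upper = sum(e == mu0 for e in estimates) / n_trials
# Continuous part: estimates strictly inside (-mu0, mu0).
interior = sum(-mu0 < e < mu0 for e in estimates) / n_trials
print(at_upper, interior)  # roughly 0.42 and 0.58 for these parameters
```

The lump of probability at +µ0 is what the delta functions will capture, and the spread over (−µ0, µ0) is what the continuous part of the PDF will capture.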

The following is a great tutorial article on these functions and their use in probability: https://www.probabilitycourse.com/chapter4/4_3_2_delta_function.php

I encourage readers to work through that tutorial article and then return to this point for further explanation. I will add a few pointers, referencing the aforementioned link, to help better understand these functions. The relevant definition of the generalized PDF can be found at the bottom of that article.

The following additional points will help in understanding the form of the generalized PDF given in that definition, as well as other concepts mentioned in the article:

  1. A completely discrete random variable experiences 'step jumps' in its CDF. This is because the values of the random variable are discrete; therefore, there is no 'continuous' movement in the CDF from a lower discrete value to the next higher one. Thus, the CDF of a completely discrete random variable is a sum of unit step functions, one per discrete value, each scaled by that value's probability (as defined in the article referenced earlier). Note that since we need to add all the probability masses at the jump points, the corresponding generalized PDF becomes a sum of delta functions with those same weights.
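As an illustration of point 1, here is a toy discrete distribution (hypothetical probabilities, chosen to be exact in binary floating point) whose CDF is a sum of unit step functions; its generalized PDF is the corresponding sum of delta functions with the same weights:

```python
# Hypothetical discrete distribution: P(X=1)=0.25, P(X=2)=0.5, P(X=3)=0.25.
pmf = {1: 0.25, 2: 0.5, 3: 0.25}

def cdf(x):
    # Each value x_k contributes a unit step u(x - x_k) scaled by its
    # probability p_k, so the CDF 'jumps' by p_k as x crosses x_k.
    return sum(p for xk, p in pmf.items() if x >= xk)

print(cdf(0.5))  # 0.0   (before the first jump)
print(cdf(1.0))  # 0.25  (step jump of size 0.25 at x = 1)
print(cdf(2.5))  # 0.75  (0.25 + 0.5 accumulated)
print(cdf(3.0))  # 1.0   (all mass accumulated)
```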

Now that we've explained how delta and step functions are used to define the PDF of a mixed random variable, we will compute the PDF of the trimmed mean in the sequel article, 'Minimizing the Mean Square Error: Bayesian approach: Part 1 (b)'.


Published via Towards AI
