Why Do We Need More Explainable AI?
Last Updated on January 10, 2024 by Editorial Team
Author(s): Abdelkader Rhouati
Originally published on Towards AI.
In the era of AI, where new models emerge every day, even in particularly sensitive areas such as health and education, controlling these models becomes a necessity. The main problem with AI models is that they are designed as black boxes, which makes them impossible to control. Explainability, or Explainable AI, is the set of techniques, principles, and processes introduced to solve this problem and make it possible to build models that are transparent, explainable, interpretable, fair, and verifiable.
AI black boxes refer to AI systems whose inner workings are invisible to the end user. These systems take inputs, do some processing, and return results. But you can't examine the system's code or explain the logic behind those results.
Machine learning systems have three main components: an algorithm (or set of algorithms), training data, and a model. An algorithm is a set of procedures trained on a large collection of examples, called training data, with the objective of identifying patterns in new data. Once a machine learning algorithm has been trained, the result is a machine learning model that is used from then on. Each of the three components of a machine learning system can be hidden and, therefore,…
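To make the three components concrete, here is a minimal sketch in plain Python, using hypothetical data. The "algorithm" is ordinary least squares, the "training data" is a handful of (input, output) pairs, and the resulting "model" is just a slope and an intercept — numbers a human can inspect directly, which is exactly what a black-box model does not allow.

```python
# Training data: pairs of (input, expected output) -- hypothetical values.
training_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

def fit_linear(data):
    """Algorithm: ordinary least squares for y = slope * x + intercept."""
    n = len(data)
    mean_x = sum(x for x, _ in data) / n
    mean_y = sum(y for _, y in data) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
             / sum((x - mean_x) ** 2 for x, _ in data))
    intercept = mean_y - slope * mean_x
    return slope, intercept  # the model: two fully inspectable numbers

slope, intercept = fit_linear(training_data)

def model(x):
    """Model: applies the learned pattern to new inputs."""
    return slope * x + intercept

# Every prediction can be explained: output = slope * x + intercept.
print(f"slope={slope:.2f}, intercept={intercept:.2f}")   # slope=1.94, intercept=0.15
print(f"prediction for x=5: {model(5.0):.2f}")           # 9.85
```

A deep neural network plays the same role as `model` here, but its learned parameters are millions of numbers with no individually readable meaning — hence the need for dedicated explainability techniques.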