10 Most Common ML Terms Explained in a Simple Day-To-Day Language
Last Updated on July 24, 2023 by Editorial Team
Author(s): Cristian
Originally published on Towards AI.
Do you remember the first time you tried to follow a recipe? Maybe it was for a chocolate chip cookie or a spicy salsa. As you scanned through the instructions, you were hit with terms like "fold", "whisk", "sauté", and "temper". If you were a novice in the kitchen, these terms might have seemed as cryptic as a secret language. But once you understood what they meant, they transformed from confusing jargon into useful directions that helped you whip up delicious treats.
This is similar to how machine learning (ML) can seem at first. There are many terms and concepts that might feel like stumbling blocks when you're trying to understand this transformative technology. But don't worry! That's why we're here. Our job is to explain complicated tech terms in simple day-to-day language, so everyone can understand.
In today's post, we're going to decode ten of the most common machine learning terms. We'll do it in plain, everyday language, using metaphors and examples from daily life to make these concepts as easy to understand as baking a batch of cookies!
Let's get started, shall we?
1. Machine Learning: Teaching Computers to Learn
When we talk about Machine Learning, what we mean is the way we teach computers to learn from data, much like how we learn from experience. Imagine learning to ride a bicycle. The more you practice, the better you get at maintaining balance and steering. With each fall, you learn a little more about what not to do, and with each successful ride, you reinforce what to do.
This is precisely the process we emulate in Machine Learning. We're educating the computer to learn from data (the equivalent of practice), to make informed predictions (akin to riding the bicycle), and to progressively improve with each iteration.
In the context of Machine Learning, data can be anything from images and text to numbers, or anything else the computer can process and learn from.
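To make this concrete, here is a toy sketch (not from any real ML library, and with made-up numbers) of a computer "learning" a rule from data: it starts with a guess and nudges it a little after every example, much like each practice ride on the bicycle.

```python
# Toy example: "learning" the rule hidden in some data points.
# The data follows y = 2 * x; the computer starts with a guess
# and improves it a little after every example it sees.

data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, correct output) pairs

w = 0.0              # the computer's current guess for the rule "y = w * x"
learning_rate = 0.01

for _ in range(200):                     # practice many times
    for x, y in data:
        prediction = w * x
        error = prediction - y           # how wrong was the guess?
        w -= learning_rate * error * x   # adjust the guess a little

print(round(w, 2))  # close to 2.0: the rule was learned from data alone
```

Nobody told the program the rule was "multiply by 2"; it worked that out purely from examples, which is the whole idea of Machine Learning.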
2. Supervised Learning: The Guided Learner
Have you ever tried to learn a new skill under the guidance of a coach or mentor? They guide you, correct you, and provide feedback, helping you learn and improve. This is pretty much what Supervised Learning is in the world of Machine Learning.
In Supervised Learning, we have a dataset with both input data and the correct output. It's like having a textbook with both questions and answers. The algorithm learns from this data, understanding the relationship between the input and the output.
Let's take the example of email spam filtering. The system is trained with thousands of emails, each already marked as "spam" or "not spam". The system learns which features (like certain words, email addresses, or formatting) make an email likely to be spam. Once it has learned, it can start predicting whether a new email it has never seen before is spam or not.
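As a rough illustration (with invented emails, not a real spam filter), here is what learning from labeled examples might look like in code: count which words show up under each label, then score a new email against those counts.

```python
# Tiny spam-filter sketch: learn from emails that are already labeled,
# then classify a new, unseen email. (Illustrative data only.)

training_emails = [
    ("win money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda attached", "not spam"),
    ("lunch tomorrow?", "not spam"),
]

# Learning step: count how often each word appears under each label.
word_counts = {"spam": {}, "not spam": {}}
for text, label in training_emails:
    for word in text.split():
        word_counts[label][word] = word_counts[label].get(word, 0) + 1

def predict(text):
    # Prediction step: which label's known words does this email match more?
    scores = {label: sum(counts.get(w, 0) for w in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(predict("claim your free money"))   # -> spam
print(predict("agenda for the meeting"))  # -> not spam
```

The "teacher" here is the labeled training set; the program never decides what spam means on its own, it only learns the patterns behind the answers it was given.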
So, Supervised Learning is like learning with a teacher who provides guidance and feedback, helping the algorithm to learn and make accurate predictions.
3. Unsupervised Learning: The Independent Explorer
Imagine a child playing with a pile of different toys: cars, dolls, blocks, balls. Without anyone telling them, they might start to group these toys based on similarities, like all cars in one place, all dolls in another. This instinctive organization is quite similar to what we call Unsupervised Learning in Machine Learning.
Unlike Supervised Learning, where we have labeled data (questions and answers), Unsupervised Learning works with unlabeled data. The system doesn't know the correct output. Instead, it learns by finding patterns and structures in the input data.
Taking the example of emails again, in Unsupervised Learning, we only have the emails without any spam/not spam labels. The system could, however, group them based on similarities, like emails with similar words or from the same sender. This way, it might end up clustering spam emails together, not because it knew they were spam, but because it found patterns.
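A minimal sketch of that idea, with invented emails and a deliberately simple grouping rule (each email joins whichever existing group it shares the most words with, or starts a new one):

```python
# Unsupervised sketch: group emails by similarity, with no labels at all.

emails = [
    "win free money now",
    "free prize win",
    "meeting agenda attached",
    "attached notes from the meeting",
]

groups = []  # each group is a list of similar emails
for email in emails:
    words = set(email.split())
    best, overlap = None, 0
    for group in groups:
        # How many words does this email share with the group's emails?
        shared = sum(len(words & set(e.split())) for e in group)
        if shared > overlap:
            best, overlap = group, shared
    if best is None:          # nothing similar yet: start a new group
        groups.append([email])
    else:
        best.append(email)

for group in groups:
    print(group)
```

Run it and the prize-and-money emails end up in one group and the meeting emails in another, even though the program was never told which ones were spam; it simply found the pattern.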
So, Unsupervised Learning is like a self-motivated explorer, making sense of new, unfamiliar territories without any guidance or supervision.
4. Reinforcement Learning: The Trial-and-Error Expert
Think back to when you were a child learning to ride a bike. Nobody gave you specific rules; instead, you tried, failed, adjusted, and tried again. You learned to balance, pedal, and steer through trial and error. This is pretty close to how Reinforcement Learning, another type of Machine Learning, works.
Unlike Supervised or Unsupervised Learning, Reinforcement Learning is all about interaction and learning from mistakes. The system, often referred to as an agent, makes decisions, takes actions in an environment, and gets rewards or penalties. Positive rewards reinforce good actions, while penalties discourage bad ones.
Let's take a video game scenario: a virtual player (the agent) navigates a maze (the environment). The goal is to find the exit as fast as possible. Each wrong turn (bad action) results in time penalties (negative rewards), while correct turns (good actions) bring it closer to the exit (positive rewards). Over time, the player learns the best path, not because it was taught, but because it learned from its actions and their consequences.
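Here is a toy version of that maze, shrunk to a five-cell corridor so the whole thing fits in a few lines (the rewards and learning numbers are invented for the demo; the technique is standard tabular Q-learning):

```python
import random

# Toy reinforcement learning: an agent in a 5-cell corridor (cells 0-4)
# learns to walk right to the exit at cell 4, purely by trial and error.

random.seed(0)  # make the demo repeatable

q = {(s, a): 0.0 for s in range(5) for a in ("left", "right")}

for _ in range(500):                      # many practice episodes
    state = 0
    while state != 4:                     # until the exit is reached
        # Mostly pick the best-known action, sometimes explore randomly.
        if random.random() < 0.1:
            action = random.choice(("left", "right"))
        else:
            action = max(("left", "right"), key=lambda a: q[(state, a)])
        next_state = max(0, state - 1) if action == "left" else state + 1
        reward = 10 if next_state == 4 else -1   # exit is good, time costs
        best_next = max(q[(next_state, a)] for a in ("left", "right"))
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next
                                     - q[(state, action)])
        state = next_state

# After training, the learned strategy is "go right" in every cell.
print([max(("left", "right"), key=lambda a: q[(s, a)]) for s in range(4)])
```

No one ever told the agent "go right"; the time penalties and the exit reward shaped its behavior, which is exactly the reward-and-penalty loop described above.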
That's Reinforcement Learning in a nutshell: learning the best strategy through trial and error to achieve the maximum reward!
5. Neural Networks: The Brainy Network
Picture your brain. It's a massive network of neurons connected by synapses. Each neuron receives input signals, processes them, and sends output signals to other neurons. This intricate network is the basis for all our thoughts, decisions, and actions. In Machine Learning, we have something similar called a Neural Network.
A Neural Network is a system of algorithms designed to loosely mimic the human brain. It learns from the data it processes, gradually adjusting its internal connections. It's structured in layers: an input layer to receive data, an output layer to make decisions or predictions, and hidden layers in between to process the data.
Imagine you're trying to recognize a cat in different pictures. The input layer takes in the images, the hidden layers might recognize patterns like pointy ears, whiskers, or a tail, and the output layer decides whether it's a cat or not. The beauty of Neural Networks is that they can learn to identify these patterns on their own!
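To show the layered structure in miniature, here is a toy forward pass through a two-layer network. The weight numbers and "features" are invented for illustration; in a real network they would be learned from data, not written by hand.

```python
import math

# Toy forward pass: input layer -> hidden layer (2 neurons) -> output neuron.
# All weights below are made up for the demo; real networks learn them.

def layer(inputs, weights):
    # Each neuron sums its weighted inputs, then squashes the result
    # into the 0..1 range with a sigmoid function.
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(ws, inputs))))
            for ws in weights]

image_features = [0.9, 0.1, 0.8]   # e.g. pointy ears, collar, whiskers

hidden = layer(image_features, [[2.0, -1.0, 1.5], [-1.0, 2.0, -1.0]])
output = layer(hidden, [[2.5, -2.5]])   # one output neuron: cat or not?

print("cat" if output[0] > 0.5 else "not a cat")  # -> cat
```

Notice the flow: raw features go in, each layer transforms them, and the final neuron turns everything into a single decision, exactly the input-hidden-output structure described above.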
So, a Neural Network is like a virtual brain that can learn, recognize patterns, and make decisions based on the data it's fed.
6. Deep Learning
You might have come across the term "Deep Learning" while exploring the fascinating world of AI. It sounds quite intense, doesn't it? But fear not! Let's break it down.
Think of Deep Learning as a superstar actor who's really, really good at their role because they've practiced their lines a million times. The "script" here is the massive amount of data that the Deep Learning system learns from. It keeps practicing (or "learning") from this data until it gets really good at its job, whether that's recognizing pictures of cats, translating languages, or predicting weather patterns.
Deep Learning is a subset of machine learning that uses something we've already talked about: neural networks. But these are not just any neural networks. They're big, complicated networks with many layers, hence the "deep" in Deep Learning. Each of these layers plays a role in helping the system understand the data better. It's like our superstar actor learning every little detail about their character to give an outstanding performance.
Remember the picture recognition example we used for Neural Networks? In Deep Learning, the network would not just recognize that there's a cat in the picture, but might also recognize what breed the cat is, or whether it's sitting or standing. That's how advanced it can be!
In the end, Deep Learning is just a machine learning method that excels at learning from large amounts of data. It's one of the reasons why AI has been making so many headlines in recent years!
7. Overfitting and Underfitting
When we're learning a new skill, like playing the guitar, we might face two possible challenges. On one hand, we might try to play a song note for note, exactly like the original; this could make it hard for us to adapt if there's a small change, like a slightly different guitar or a different key. On the other hand, we might learn just a few basic chords and play every song using those, making them all sound kind of the same.
In Machine Learning, these two scenarios are called overfitting and underfitting. Overfitting is like trying to play the song note for note: the model learns the training data so well that it doesn't perform well with new, unseen data. Underfitting is like using the same few chords for every song: the model is too simple to capture all the nuances in the data.
The challenge is to find a balance: a model complex enough to learn from the data, but not so complex that it can't adapt to new information. It's like being able to play a song well, but also being able to adapt when something changes.
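To see the difference in numbers, here is a toy comparison on made-up data that roughly follows y = 2x: one model memorizes the training answers (overfitting), one always predicts the average (underfitting), and one uses a simple rule that matches the underlying pattern.

```python
# Overfitting vs underfitting on data that roughly follows y = 2x.

train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]
test = [(5, 10.0), (6, 12.1)]   # new, unseen data

# Overfit model: memorizes training answers exactly, knows nothing else.
memory = dict(train)
overfit = lambda x: memory.get(x, 0.0)

# Underfit model: too simple, always predicts the same average value.
avg = sum(y for _, y in train) / len(train)
underfit = lambda x: avg

# Balanced model: a simple rule that matches the pattern (here, y = 2x).
balanced = lambda x: 2 * x

def error(model, data):
    # Average squared difference between predictions and true answers.
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

for name, model in [("overfit", overfit), ("underfit", underfit),
                    ("balanced", balanced)]:
    print(name, "train error:", round(error(model, train), 2),
          "test error:", round(error(model, test), 2))
```

The memorizer scores a perfect zero on the training data but falls apart on the test data, the averager is mediocre everywhere, and the balanced model does well on both, which is the trade-off the guitar analogy describes.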
8. Feature Extraction: Making Important Things Stand Out
Remember when you were a kid and played the "I spy" game? You had to scan your surroundings and focus on specific details to find the hidden object. That's kind of what Feature Extraction does in Machine Learning. It's the process of pulling out the most informative pieces of the data, the "features", for further analysis and processing.
Think of you and your friend as detectives trying to solve a mystery. There are many clues (data), but not all of them are useful or relevant. You would try to identify the most telling clues (features) that would help you solve the case. That's Feature Extraction in a nutshell!
It's crucial because Machine Learning algorithms can get confused if there's too much irrelevant data. By focusing on the important features, you help the algorithm perform better and make more accurate predictions.
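A tiny sketch of the detective idea, using one deliberately simple rule (invented data and rule, for illustration only): a clue that is identical for every suspect can't help tell them apart, so we drop it and keep the clues that vary.

```python
# Keep only the "telling clues": drop features that never vary,
# since a constant feature can't help distinguish the cases.

# Each row describes a suspect (made-up data).
suspects = [
    {"wears_hat": 1, "height": 180, "has_alibi": 0},
    {"wears_hat": 1, "height": 165, "has_alibi": 1},
    {"wears_hat": 1, "height": 172, "has_alibi": 0},
]

def useful_features(rows):
    # A feature is useful here if it takes more than one distinct value.
    return [name for name in rows[0]
            if len({row[name] for row in rows}) > 1]

print(useful_features(suspects))  # every suspect wears a hat, so it's dropped
```

Real feature extraction uses far more sophisticated criteria, but the goal is the same: hand the algorithm the clues that matter and leave the noise behind.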
9. Label: Name Tags for Supervised Learning
Have you ever been to a party where you had to wear a name tag? That small piece of paper was crucial, wasn't it? It told everyone who you were without you having to introduce yourself every time. Well, that's sort of what Labels do in Supervised Learning!
Supervised Learning, remember, is like a teacher-student scenario. The student (the machine learning model) is learning from the teacher (the dataset). But the teacher doesn't just throw a bunch of information at the student. No, the teacher carefully labels or tags each piece of information, telling the student what it is. Like putting name tags on the guests at a party.
In the context of Machine Learning, a "Label" is the answer or result we want our model to learn to predict. It's the "name tag" for the data. So, if we were building a system to recognize images of cats and dogs, the labels would be "cat" and "dog". By showing the model a bunch of images and their corresponding labels, we teach it to recognize and differentiate between cats and dogs.
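In code, a labeled dataset is nothing more than inputs paired with their name tags (the feature numbers below are invented for illustration):

```python
# A labeled dataset: each input comes with its "name tag" (the label).
# Features here are made up: (pointy-ears score, body-size score).

dataset = [
    ((0.9, 0.2), "cat"),
    ((0.3, 0.8), "dog"),
    ((0.8, 0.3), "cat"),
]

for features, label in dataset:
    print(features, "->", label)
```

Everything a supervised model "knows" about cats and dogs comes from pairs like these.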
10. Algorithm
Let's imagine a recipe. The recipe guides you through the process of making a dish step by step. It tells you what ingredients you need, in what quantity, and the exact steps to prepare the dish.
In the world of machine learning, an algorithm is like that recipe. It's a series of steps that a machine learning model follows in order to learn from data and make predictions or decisions.
For instance, let's think about our earlier example of labeling pictures of cats and dogs. The algorithm is the set of instructions that tells the model how to go about this task. It could say something like: "Look at this picture, analyze its features, compare those features to what you've learned about cats and dogs, and then decide if it's a cat or a dog."
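That "recipe" can be written out as a tiny program. This is one simple algorithm among many (a nearest-neighbor classifier), with invented feature numbers, but the steps map directly onto the sentence above:

```python
# The recipe in code: look at a new picture's features, compare them
# to known examples, and decide. (Feature numbers are invented.)

known = [
    ((0.9, 0.2), "cat"),   # (pointy-ears score, floppy-ears score)
    ((0.8, 0.3), "cat"),
    ((0.2, 0.9), "dog"),
    ((0.3, 0.8), "dog"),
]

def classify(features):
    # Step 1: measure how far the new picture is from each known example.
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Step 2: find the closest known example.
    closest = min(known, key=lambda item: distance(item[0], features))
    # Step 3: give the new picture the closest example's label.
    return closest[1]

print(classify((0.85, 0.25)))  # -> cat
print(classify((0.25, 0.85)))  # -> dog
```

Swap in a different set of steps and you have a different algorithm, just as swapping recipes gives you a different dish.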
Just like there are countless recipes for different dishes, there are many different machine learning algorithms for different tasks: some are good for classifying images, others are better for predicting future trends, and so on.
In the end, choosing the right algorithm for your task is a crucial part of machine learning. It's a bit like picking the right recipe to cook a meal that'll impress your friends!
Conclusion
We've embarked on quite the journey today, haven't we? From bike rides to recipes, spam folders to party name tags, we've looked at the world of Machine Learning through various lenses. Our hope is that these everyday examples have made these complex-sounding terms feel a little less daunting and a lot more accessible.
On our blog, we frequently publish articles that demystify complex tech jargon using straightforward, everyday language. For those interested in reading more posts of this nature, consider keeping up with our newsletter. It's free and we post weekly.
Published via Towards AI