
What is a GPU? Are GPUs Needed for Deep Learning?
Diving into a technical explanation of what a GPU is
Author(s): Buse Yaren Tekin, Roberto Iriondo
In the intelligent era, more and more innovative concepts emerge daily. One of them is undoubtedly the field of “artificial intelligence.” Deep learning, a sub-branch of AI, comes into play when we need to dive deeper into complex problems, and it is used in many areas today. We develop deep learning models to accomplish specific tasks and, in some cases, to surpass humans at repetitive ones. In this article, we will explore how the GPU is used for deep learning, with code examples.
This tutorial’s code is available on GitHub, and its full implementation is also available on Google Colab.
“In the era where artificial intelligence and algorithms make more decisions in our lives and in organizations, the time has come for people to tap into their intuition as an adjunct to today’s technical capabilities. Our inner wisdom can embed empirical data with humanity.”
- Abhishek Ratna [12]
What is a GPU?
A graphics processing unit (GPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device [1].
The graphics processing unit is a computer chip that performs rapid mathematical calculations to render images. It may be a dedicated (discrete) chip on a graphics card or integrated alongside the CPU.
GPUs appear in many settings, from embedded systems to personal computers and workstations. Thanks to their parallel processing architecture, GPUs have advantages over general-purpose processors for image and video processing, and they are becoming more popular and increasingly needed for artificial intelligence (AI).
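Before training a model, it is worth verifying that a GPU is actually visible to your deep learning framework. The snippet below is a minimal sketch, assuming a TensorFlow 2.x environment such as Google Colab; PyTorch users can perform the equivalent check with torch.cuda.is_available().

import tensorflow as tf

# List the GPU devices TensorFlow can see; an empty list means
# computation will fall back to the CPU.
gpus = tf.config.list_physical_devices('GPU')
print(f"GPUs available: {len(gpus)}")
for gpu in gpus:
    print(gpu)

If no GPU is listed on Colab, enabling one under Runtime > Change runtime type and re-running the cell is usually enough.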