How To Set Up and Run CUDA Operations In PyTorch

Last Updated on October 4, 2022 by Editorial Team

Author(s): Muttineni Sai Rohith


Introduction

The advent of deep learning in recent years has created a demand for computing resources and the acceleration of workloads. Many operations involved in deep learning, such as matrix multiplications, tiling of images, and processing chunks of voice samples, can be parallelized for better performance, accelerating the development of machine learning models. Thus, deep learning libraries like TensorFlow and PyTorch provide users with a set of functions or APIs to take advantage of their GPUs. CUDA is one such programming model and computing platform that enables us to perform complex operations faster by parallelizing tasks across GPUs.

This article will discuss what CUDA is, how to set up the CUDA environment, and how to run the various CUDA operations available in PyTorch.

Photo by Lucas Kepner on Unsplash

What is CUDA?

CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming model developed by NVIDIA. Using CUDA, one can maximize the utilization of NVIDIA GPUs, thereby improving computation power and performing operations far faster by parallelizing tasks. PyTorch provides the torch.cuda library to set up and run CUDA operations.

Using PyTorch CUDA, we can create tensors and allocate them to the device. Once allocated, we can perform operations on them, and the results are likewise assigned to the device.
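For example, a minimal sketch (assuming a CUDA-capable GPU is present; the tensor values are illustrative):

import torch

# Create a tensor directly on the GPU
a = torch.ones(3, device='cuda')
# The multiplication runs on the GPU, and the result stays there
b = a * 2
print(b.device)  # cuda:0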

Installation

PyTorch provides a user-friendly selector on its official website, where we can choose our operating system, desired programming language, and other requirements.

Refer to the official PyTorch page, Start Locally | PyTorch, and select the requirements according to your system specifications. PyTorch provides CUDA libraries for the Windows and Linux operating systems. For Windows, make sure to use CUDA 11.6, because CUDA 10.2 and ROCm are no longer supported on Windows. For the Python programming language, we can choose among the conda, pip, and source packages, whereas LibTorch is used for C++ and Java.
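For example, at the time of writing, selecting pip, Python, and CUDA 11.6 produces a command along these lines (copy the exact command from the selector rather than from here, as it changes between releases):

pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116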

Running CUDA operations in PyTorch

Once PyTorch is installed successfully, we can use the torch.cuda interface to run CUDA operations in PyTorch.

To verify that the installation succeeded, check torch.version.cuda as shown below:

# Importing PyTorch
import torch

# Print the CUDA version PyTorch was built with
print("PyTorch CUDA Version is", torch.version.cuda)

If the installation is successful, the above code will show the following output:

# Output
PyTorch CUDA Version is 11.6

Before using CUDA, we have to make sure that it is supported by our system.

Use the torch.cuda.is_available() command as shown below:

# Importing PyTorch
import torch

# Check whether CUDA is supported by this system
print("Whether CUDA is supported by our system:", torch.cuda.is_available())

The above command returns a Boolean value, as below:

# Output
Whether CUDA is supported by our system: True

PyTorch CUDA also provides functions to retrieve the current device ID and, given a device ID, the name of that device, as shown below:

# Importing PyTorch
import torch

# Get the current CUDA device ID and the name of that device
cuda_id = torch.cuda.current_device()
print("CUDA Device ID:", cuda_id)
print("Name of the current CUDA Device:", torch.cuda.get_device_name(cuda_id))

The above code will show the following output:

# Output
CUDA Device ID: 0
Name of the current CUDA Device: NVIDIA GeForce GTX 1650

We can also change the default CUDA device by specifying its ID, as shown below:

# Importing PyTorch
import torch

# Change the default CUDA device
torch.cuda.set_device(1)
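Note that this call fails on a single-GPU machine, where device ID 1 does not exist. A small guard, as a sketch using only the standard torch.cuda API:

# Importing PyTorch
import torch

# Switch the default device only when a second GPU actually exists
if torch.cuda.device_count() > 1:
    torch.cuda.set_device(1)
print("Active CUDA device ID:", torch.cuda.current_device())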

Note: While using CUDA, make sure to develop device-agnostic code, because some systems might not have GPUs and will have to run on CPUs, and vice versa. That can be done by adding the following line to our code:

device = 'cuda' if torch.cuda.is_available() else 'cpu'
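With this variable in place, the same script runs with or without a GPU. A minimal sketch (the tensor shape is illustrative):

import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
# The same line works on machines with or without a GPU
x = torch.randn(3, 3, device=device)
print(x.device)  # 'cuda:0' on a GPU machine, 'cpu' otherwise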

Operating Tensors with CUDA

Generally, a PyTorch tensor is similar to a NumPy array: an n-dimensional array used for numerical computation. The main difference is that a tensor can run on both CPUs and GPUs, whereas a NumPy array is limited to the CPU.

PyTorch CUDA provides the following functions to handle tensors:

· tensor.device: returns the device on which the tensor resides; by default, this is "cpu".

· tensor.to(device_name): returns a new instance of the tensor on the specified device ("cpu" for the CPU, "cuda" for a CUDA-enabled GPU).

· tensor.cpu(): transfers the tensor from the current device to the CPU.

Let's understand the usage of the above functions by creating a tensor and performing some basic operations.

We will create a sample tensor and perform a tensor operation (squaring) on the CPU; then we will transfer the tensor to the GPU, perform the same operation again, and compare performance.

import torch

# Creating a sample tensor
x = torch.randint(1, 1000, (100, 100))

# Checking the device name: will return 'cpu' by default
print("Device Name:", x.device)

# Applying a tensor operation on the CPU
res_cpu = x ** 2

# Transferring the tensor to the GPU
x = x.to(torch.device('cuda'))

# Checking the device name: will return 'cuda:0'
print("Device Name after transferring:", x.device)

# Applying the same tensor operation on the GPU
res_gpu = x ** 2

# Transferring the tensor from the GPU back to the CPU
# (note: .cpu() returns a new tensor; assign it to keep the result)
x = x.cpu()
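To actually compare the performance of the two runs, the operations can be timed. The sketch below is an illustrative addition, not part of the original code; the larger tensor size and the torch.cuda.synchronize() calls (needed because CUDA kernels launch asynchronously) are assumptions made for a fair measurement:

import time
import torch

x_cpu = torch.randint(1, 1000, (5000, 5000))

# Time the squaring operation on the CPU
start = time.time()
res_cpu = x_cpu ** 2
print("CPU time:", time.time() - start)

# Time the same operation on the GPU
x_gpu = x_cpu.to('cuda')
torch.cuda.synchronize()  # wait for the transfer to finish
start = time.time()
res_gpu = x_gpu ** 2
torch.cuda.synchronize()  # wait for the kernel to finish
print("GPU time:", time.time() - start)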

Running Machine Learning models with CUDA

PyTorch provides the following function to transfer a machine learning model to a given device:

· model.to(device_name): moves the model to the specified device and returns it ("cpu" for the CPU, "cuda" for a CUDA-enabled GPU). Unlike tensor.to(), this moves the module's parameters in place.

To demonstrate the above function, we will import the pre-trained ResNet-18 model from torchvision.models:

# Importing PyTorch
import torch
import torchvision.models as models

# Making the code device-agnostic
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Instantiating a pre-trained model
model = models.resnet18(pretrained=True)

# Transferring the model to a CUDA-enabled GPU (or keeping it on the CPU)
model = model.to(device)

Once the model is transferred, we can continue the rest of the machine learning workflow on the CUDA-enabled GPU.
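For example, continuing the snippet above, a forward pass only requires that the input tensor live on the same device as the model (the 1×3×224×224 shape is the standard ResNet-18 input; the random batch is illustrative):

# Run a forward pass on the same device as the model
model.eval()
dummy_input = torch.randn(1, 3, 224, 224, device=device)
with torch.no_grad():
    output = model(dummy_input)
print(output.shape)  # torch.Size([1, 1000])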

Conclusion

After reading this article, one can understand how to install the PyTorch CUDA libraries, run basic PyTorch CUDA commands, and handle tensors and machine learning models with CUDA.

