Running the Eigen GPU Tests

Last Updated on December 30, 2023 by Editorial Team

Author(s): Luiz doleron

Originally published on Towards AI.

Today, GPUs are critical for fast computation, particularly in artificial intelligence applications. Testing your code on the GPU is just as critical, but it is usually set aside because of its intrinsic complexity. To tackle this issue, this story explains how to build and run the Eigen GPU tests, which the Eigen build scripts do not build by default, even though Eigen is part of many modern machine learning applications.

source: Matheus Bertelli

Eigen is one of the best alternatives for implementing linear algebra computations in C++. If you have read one of my previous stories, you know that I use Eigen to code deep learning models without using any machine learning framework. Indeed, Eigen is quite powerful. However, its sparse documentation is sometimes frustrating.

This happens to me all the time:

  • I run into something not documented in Eigen
  • I spend some time digging into the source code and makefiles to figure out how to do the thing
  • I use the thing and think: "I don’t think I will need this again."

And a couple of months later, there we go again, trying to remember how to do that thing one more time.

Building Eigen's tests is one of these things. The Eigen website has a few instructions on how to build the Eigen tests. However, there is nothing about building the tests for the unsupported modules, part of which are not built by default. And if you want to build & run tests for exotic combinations, such as unsupported modules on the GPU, good luck.

Running Eigen on GPUs

We can run our programs much faster by executing the critical, high-demand computations on GPUs. Eigen has an API to perform computations on NVidia GPUs using CUDA. Some yes's and no's:

  • Yes, we can run Eigen on NVidia GPUs
  • No, we cannot run Eigen on non-NVidia GPUs
  • No, we cannot run CUDA without NVidia GPUs

Honestly speaking, Eigen supports HIP (https://github.com/ROCm/HIP), which can theoretically target AMD GPUs. However, I have never used it, and I'm unsure how well Eigen supports HIP. Checking the source code, we can find flags that disable parts of the code that do not run as expected with HIP. Sooner or later, I will give it a try! Running GPU code on AMD is really interesting!

A GPU is a specific type of hardware connected to the computer motherboard, primarily designed to perform fast computations for games and other graphics-intensive software. If you don't have an NVidia GPU, there is no way to run CUDA. Some old NVidia GPUs are no longer supported. Sorry, this is NVidia's game, and they always win.

If you don't have a computer with an NVidia GPU yet, it is time to break the piggy bank!

Credits: Dany Kurnyawan

Make sure you have a not-so-old NVidia GPU card, and keep reading.
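Before going further, it can help to confirm that the system actually sees an NVidia GPU. A minimal sketch using nvidia-smi (the compute_cap query field is only supported by recent drivers; older ones still report the model name via name):

```shell
# check for an NVidia GPU; compute_cap requires a recent driver
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,compute_cap --format=csv
else
  echo "no NVidia driver found"
fi
```

If nvidia-smi is missing or reports no device, CUDA, and therefore the Eigen GPU tests, will not run.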

Building the Eigen GPU tests

If you are reading this, I assume you have already cloned Eigen from its repository on GitLab. If you haven't done so, run the following line:

git clone https://gitlab.com/libeigen/eigen.git

This line will connect to GitLab and download Eigen's sources.

Move to the eigen folder and create a directory called build:

cd eigen
mkdir build

Now, let's edit the file eigen/CMakeLists.txt as follows:

The way to edit the file is your choice. Here, I'm using Nano because I’m on Ubuntu 22.04.

Back to the file, we need to change the line:

set(EIGEN_CUDA_COMPUTE_ARCH 30 CACHE STRING "The CUDA compute architecture(s) to target when compiling CUDA code")

to

set(EIGEN_CUDA_COMPUTE_ARCH 86 CACHE STRING "The CUDA compute architecture(s) to target when compiling CUDA code")

I only replaced the 30 with 86. Or rather, since I have an NVIDIA GeForce RTX 3050 card, I only need to replace 30 with 86. I have no idea what the right value is for you! This number depends entirely on your GPU model. Indeed, you can replace 30 with a list of architectures, such as 70 75 80.
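If you prefer not to open an editor, a sed one-liner can do the same replacement. The sketch below runs against a sample copy of the line, so it is safe to try anywhere; to apply it for real, point sed at eigen/CMakeLists.txt instead of sample.txt:

```shell
# sample copy of the relevant line from eigen/CMakeLists.txt
printf 'set(EIGEN_CUDA_COMPUTE_ARCH 30 CACHE STRING "The CUDA compute architecture(s) to target when compiling CUDA code")\n' > sample.txt

# replace the default arch 30 with 86 (the value for an RTX 3050)
sed -i 's/EIGEN_CUDA_COMPUTE_ARCH 30/EIGEN_CUDA_COMPUTE_ARCH 86/' sample.txt
cat sample.txt
```

On macOS, BSD sed needs an explicit backup suffix: sed -i '' 's/…/…/' sample.txt.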

The following post can help you figure out the proper value: https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/
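As a rough orientation, a hypothetical helper mapping a few NVidia architecture generations to EIGEN_CUDA_COMPUTE_ARCH values could look like the sketch below. The values come from NVidia's CUDA documentation, but data-center parts differ (an A100 is Ampere with compute capability 80, not 86), so double-check your exact card against the post above:

```shell
# hypothetical lookup: NVidia GPU generation -> compute capability value
arch_for() {
  case "$1" in
    Pascal) echo 61 ;;  # e.g. GTX 10-series
    Volta)  echo 70 ;;  # e.g. Tesla V100
    Turing) echo 75 ;;  # e.g. RTX 20-series
    Ampere) echo 86 ;;  # consumer RTX 30-series, such as the RTX 3050
    Ada)    echo 89 ;;  # e.g. RTX 40-series
    *)      echo unknown ;;
  esac
}

arch_for Ampere   # prints 86
```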

Once you have identified your GPU architecture and edited the cmake file accordingly, we can return to the terminal. Move to the build folder and call cmake:

cd build
cmake -DEIGEN_TEST_CUDA=ON ..

cmake produces verbose output. Pay attention to these two lines:

If you don't have the CUDA toolkit on your system yet, check the NVidia website (https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html) for installation instructions.
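A quick way to verify the toolkit installation is to check that nvcc, the CUDA compiler that builds the GPU tests, is on the PATH:

```shell
# check whether the CUDA compiler is available
if command -v nvcc >/dev/null 2>&1; then
  nvcc --version
else
  echo "nvcc not found - install the CUDA toolkit first"
fi
```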

If everything is fine and you get no cmake errors, the program outputs something like this:

Now, it is time to invoke make:

make buildtests

This command builds Eigen's tests, generating output like this:

By default, Eigen does not build the GPU tests, but since we passed EIGEN_TEST_CUDA=ON when calling cmake, the build includes the GPU tests as well. Grab a cup of coffee and watch something on Netflix: building all the tests takes a long time. You can add the -j8 flag to speed things up, but it will still take a while to finish.
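Instead of hard-coding -j8, you can derive the job count from the machine's core count. A small sketch (nproc is part of GNU coreutils; on macOS, sysctl -n hw.ncpu plays the same role):

```shell
# pick the number of parallel make jobs from the CPU core count
jobs=$(nproc)
echo "building with -j$jobs"
# the actual invocation inside eigen/build would be:
# make -j"$jobs" buildtests
```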

And note that we are only building the tests! We are not executing them!

At the end of the process, we will get the following message:

The final message [100%] Built target buildtests means we successfully built the Eigen test battery. Now, let's execute the tests.

Executing the tests

Eigen built the tests and put the executables in the folder eigen/build/test. The simplest way to run them is by calling:

make check

This command checks whether the tests have been built and then calls each one, summarizing the results as follows:
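Under the hood, make check drives CTest, so you can also filter which tests run with ctest's -R regex option; inside eigen/build, something like ctest -R gpu should run only the GPU tests. The sketch below demonstrates the filter on a throwaway two-test project (assuming cmake and ctest are installed); the -N flag just lists the matching tests without running them:

```shell
# throwaway project with one "gpu" test and one "cpu" test
mkdir -p ctest_demo
cat > ctest_demo/CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 3.10)
project(demo NONE)
enable_testing()
add_test(NAME gpu_basic COMMAND true)
add_test(NAME cpu_basic COMMAND true)
EOF

# configure, then list only the tests whose names match "gpu"
(cd ctest_demo && cmake -B build . >/dev/null && cd build && ctest -R gpu -N)
```

Only gpu_basic shows up in the listing; cpu_basic is filtered out by the regex.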

Note the GPU tests are running and hopefully passing. Using nvidia-smi, I can confirm that the tests are actually using the GPU:

After some time, all the tests have finally finished:

Therefore, we can confirm again that the GPU tests have run.

Running single tests

We can build and run a single test if we want. For example, to run only the gpu_basic test, we run the command:

make gpu_basic

And then call the test:

./test/gpu_basic

The test executables are placed in the eigen/build/test folder, while the unsupported module tests are stored in the folder eigen/build/unsupported/test. Thus, to run the test cxx11_tensor_random_gpu, we first need to build it:

make cxx11_tensor_random_gpu

And then invoke it by running:

./unsupported/test/cxx11_tensor_random_gpu

Conclusion

Running Eigen GPU tests requires some tweaking of the Eigen cmake configurations, as shown in this story.


Published via Towards AI
