How to check CUDA memory usage

Published on Aug. 22, 2023, 12:14 p.m.

To check CUDA memory usage in a Python script

To check CUDA memory usage in a Python script, you can use the nvidia-smi command-line tool or the pynvml Python library.

Here’s an example of how to use pynvml to get the total and used memory for a specific GPU device (in this example, we’re getting information for the first device, cuda:0). If you don’t already have the library, it can be installed with pip install pynvml:

import pynvml

pynvml.nvmlInit()

# get handle for the first device
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# get memory information
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
total_memory = info.total / (1024*1024) # convert to MB
used_memory = info.used / (1024*1024) # convert to MB

print("Total memory:", total_memory, "MB")
print("Used memory:", used_memory, "MB")

pynvml.nvmlShutdown()

This will output the total and used memory for the first GPU device in megabytes (MB). You can change the index passed to nvmlDeviceGetHandleByIndex() to get information for other devices.
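If you have several GPUs and want to report all of them at once, a minimal sketch (using the same pynvml setup, with nvmlDeviceGetCount() to find how many devices the driver exposes) could look like this:

import pynvml

pynvml.nvmlInit()

# loop over every GPU the driver reports
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    info = pynvml.nvmlDeviceGetMemoryInfo(handle)
    used_mb = info.used / (1024*1024)
    total_mb = info.total / (1024*1024)
    print(f"GPU {i}: {used_mb:.0f} MB used of {total_mb:.0f} MB")

pynvml.nvmlShutdown()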

nvidia-smi

Alternatively, you can use the nvidia-smi tool from the command line to get the memory usage of all GPU devices by running the following command:

nvidia-smi

This will display a table of information about each GPU device on your system, including its memory usage. You can use the --query-gpu option to selectively display certain statistics, such as memory usage. For example, to display only the used memory, the command would be:

nvidia-smi --query-gpu=memory.used --format=csv

This will output the memory usage of each GPU device in comma-separated values (CSV) format.
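If you want these numbers inside a Python script but would rather not depend on pynvml, one option is to run nvidia-smi with subprocess and parse its CSV output. This is a sketch that assumes nvidia-smi is on your PATH:

import subprocess

# run nvidia-smi and ask for used/total memory as plain CSV (values in MiB)
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=memory.used,memory.total", "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
)

# each output line looks like "1234, 16384" (used, total)
for i, line in enumerate(result.stdout.strip().splitlines()):
    used, total = (int(x) for x in line.split(","))
    print(f"GPU {i}: {used} MiB used of {total} MiB")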

You can also use the watch command to monitor the memory usage in real-time. For example, to monitor memory usage every second, the command would be:

watch -n 1 nvidia-smi

This will display a constantly updating table of GPU device information, including memory usage, every second. Press Ctrl+C to exit the watch command.
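To get a similar real-time view from inside a Python script, a minimal sketch that polls pynvml once per second (roughly what watch -n 1 does for nvidia-smi) might look like this:

import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

try:
    # poll the first GPU once per second; stop with Ctrl+C
    while True:
        info = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"Used memory: {info.used / (1024*1024):.0f} MB")
        time.sleep(1)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()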
