Monitors performance metrics (memory usage, fan speed, pcie bandwidth utilization, temperature, etc.) using the `nvidia-smi` CLI tool.
Warning: this collector does not work when the Netdata Agent is running in a container.
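To give a sense of how metrics can be pulled from the CLI tool, here is a minimal Python sketch (not the collector's actual code) that reads one round of metrics from `nvidia-smi`'s XML report; the element names reflect recent driver versions and may differ on yours:

```python
import subprocess
import xml.etree.ElementTree as ET

# '-q' asks for the full device query, '-x' switches the report to XML.
xml_report = subprocess.check_output(["nvidia-smi", "-q", "-x"], text=True)
root = ET.fromstring(xml_report)

for gpu in root.findall("gpu"):
    print(gpu.findtext("product_name"))
    print("  fan:", gpu.findtext("fan_speed"))                  # e.g. "35 %"
    print("  temp:", gpu.findtext("temperature/gpu_temp"))      # e.g. "47 C"
    print("  mem used:", gpu.findtext("fb_memory_usage/used"))  # e.g. "512 MiB"
    print("  gpu util:", gpu.findtext("utilization/gpu_util"))  # e.g. "7 %"
```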
You must have the `nvidia-smi` tool installed and your NVIDIA GPU(s) must support the tool. Mostly the newer high end models used for AI / ML and Crypto or Pro range support it; read more about nvidia_smi.

You must enable this plugin, as it is disabled by default due to minor performance issues:

```bash
cd /etc/netdata # Replace this path with your Netdata config directory, if different
sudo ./edit-config python.d.conf
```

Remove the '#' before nvidia_smi so it reads: `nvidia_smi: yes`.
On some systems when the GPU is idle the `nvidia-smi` tool unloads and there is added latency again when it is next queried. If you are running GPUs under constant workload this isn't likely to be an issue.
Currently the `nvidia-smi` tool is being queried via the CLI. Updating the plugin to use the NVIDIA C/C++ API directly should resolve this issue. See discussion here: https://github.com/netdata/netdata/pull/4357
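For reference, a rough sketch of what an API-based approach could look like using the `nvidia-ml-py` (pynvml) bindings around NVML; this only illustrates the direction discussed in that pull request, not something the plugin currently does:

```python
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)          # .total/.used/.free in bytes
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)   # .gpu/.memory in percent
        print(name, temp, mem.used, util.gpu)
finally:
    pynvml.nvmlShutdown()
```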
Contributions are welcome.
Make sure the `netdata` user can execute `/usr/bin/nvidia-smi` or wherever your binary is.
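A quick way to check that permission (the user name and binary path below are the defaults mentioned above; adjust them if yours differ):

```python
import subprocess

# Run nvidia-smi as the netdata user and report whether it succeeded.
result = subprocess.run(
    ["sudo", "-u", "netdata", "/usr/bin/nvidia-smi"],
    capture_output=True,
    text=True,
)
print("netdata user can run nvidia-smi" if result.returncode == 0 else result.stderr)
```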
If the `nvidia-smi` process is not killed after a netdata restart, you need to turn off `loop_mode`.
`poll_seconds` is how often, in seconds, the tool is polled, given as an integer.
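As a rough illustration of the difference between these two options (assuming loop mode corresponds to a single long-lived `nvidia-smi` process using its `-l <seconds>` flag, which would also explain the leftover-process note above):

```python
import subprocess

POLL_SECONDS = 1  # what the poll_seconds option controls

# Without loop mode: a fresh nvidia-smi call each time metrics are needed.
one_shot = subprocess.check_output(["nvidia-smi", "-q", "-x"], text=True)

# With loop mode: one long-lived process that re-emits the XML report every
# POLL_SECONDS; the reader keeps consuming its stdout instead of re-spawning it.
poller = subprocess.Popen(
    ["nvidia-smi", "-q", "-x", "-l", str(POLL_SECONDS)],
    stdout=subprocess.PIPE,
    text=True,
)
first_report = poller.stdout.read(4096)  # illustration only; a real reader would parse whole reports
poller.terminate()
```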
It produces the following charts:
- PCI Express Bandwidth Utilization in `KiB/s`
- Fan Speed in `percentage`
- GPU Utilization in `percentage`
- Memory Bandwidth Utilization in `percentage`
- Encoder/Decoder Utilization in `percentage`
- Memory Usage in `MiB`
- Temperature in `celsius`
- Clock Frequencies in `MHz`
- Power Utilization in `Watts`
- Memory Used by Each Process in `MiB`
- Memory Used by Each User in `MiB`
- Number of Users on GPU in `num`
Edit the `python.d/nvidia_smi.conf` configuration file using `edit-config` from the Netdata config directory, which is typically at `/etc/netdata`.
```bash
cd /etc/netdata # Replace this path with your Netdata config directory, if different
sudo ./edit-config python.d/nvidia_smi.conf
```
Sample:
```yaml
loop_mode                 : yes
poll_seconds              : 1
exclude_zero_memory_users : yes
```