Monitor GPU Utilization in Real Time: A Complete Guide (2026)
Source: DigitalOcean
By James Skelton and Vinayak Baranwal

The fastest way to monitor GPU utilization in real time on Linux is to run nvidia-smi --loop=1, which refreshes GPU stats every second, including core utilization, VRAM usage, temperature, and power draw. Monitoring GPU utilization in real time starts with nvidia-smi, then expands to per-process views, container metrics, and alerts for long-running jobs. This guide shows command-level workflows you can run on Ubuntu, GPU Droplets, Docker hosts, and Kubernetes clusters. If you are building or operating deep learning systems, pair this guide with How To Set Up a Deep Learning Environment on Ubuntu and DigitalOcean GPU Droplets.

GPU utilization metrics tell you whether your job is compute-bound, memory-bound, input-bound, or idle between batches. Start by tracking core utilization, memory usage, memory controller load, temperature, and power draw together instead of looking at one metric in isolation.

GPU core utilization is the percentage of time kernels are actively executing on SMs during the sampling window. GPU memory utilization in nvidia-smi usually refers to memory controller activity, while memory usage is allocated VRAM in MiB. Low core utilization with high allocated VRAM often means the model is resident but waiting on data or synchronization. High core utilization with low memory controller activity is more common in compute-heavy kernels.

SM utilization tells you whether CUDA cores are busy, memory bandwidth indicates how hard the memory channels are being driven, and power draw shows electrical load relative to the card's limit. Together, these three explain why two workloads with similar utilization percentages can perform differently. Use power.draw, power.limit, and the utilization metrics from the same sample window when tuning batch size and dataloader workers. If power is capped while utilization is high, clock throttling is the next bottleneck to investigate.
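As a starting point, the metrics above can be pulled in one sample window with --query-gpu so they can be compared against each other. The sketch below assumes NVIDIA drivers are installed; the parsed sample values are made up for demonstration:

```shell
# Sample core utilization, memory-controller activity, allocated VRAM,
# and power in a single query window (requires NVIDIA drivers).
query="utilization.gpu,utilization.memory,memory.used,power.draw,power.limit"

if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu="$query" --format=csv,noheader
fi

# Parsing one CSV sample (illustrative values, not live output):
sample="87 %, 54 %, 10240 MiB, 250.00 W, 300.00 W"
core_util=$(printf '%s' "$sample" | awk -F', ' '{print $1}' | tr -d ' %')
mem_ctrl=$(printf '%s' "$sample" | awk -F', ' '{print $2}' | tr -d ' %')
echo "core=${core_util}% memctrl=${mem_ctrl}%"
```

Reading both fields from the same sample avoids comparing a utilization spike from one moment against a memory reading from another.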
These metrics matter because training throughput is gated by the slowest stage in the pipeline. If GPU cores are idle while CPU or storage is saturated, adding another GPU will not fix throughput. For a practical environment baseline before tuning, follow How To Set Up a Deep Learning Environment on Ubuntu.

Most GPU incidents in ML pipelines come from input bottlenecks or VRAM pressure. Diagnose both at the same time by sampling GPU, CPU, and process-level memory while a real training job is running. If CPU preprocessing is the bottleneck, GPU utilization drops between mini-batches even when VRAM remains allocated. This pattern appears when image decode, augmentation, or tokenization is slower than kernel execution.

Check host pressure with vmstat while your training loop runs, watching r, wa, bi, and us plus sy together:

- r is the number of runnable processes; if it stays above your CPU core count, the CPU is saturated.
- wa is CPU time spent waiting on I/O; sustained values above 10 to 15 during training often mean dataloader workers are blocked on disk reads.
- bi is blocks received from storage; high bi with high wa points to a storage bottleneck rather than compute.
- us + sy is total active CPU time; if it is high while GPU-Util is low, preprocessing is outrunning the GPU.

If wa is high, increase dataloader workers or switch to faster storage. If us + sy is high with low GPU-Util, move transforms to the GPU with a library such as Kornia.

OOM errors happen when requested allocations exceed available VRAM, often due to large batch sizes, long sequence lengths, or concurrent GPU processes. Resolve OOM by lowering memory pressure first, then increasing the workload cautiously. If a stale process is still holding VRAM after a failed run, list active compute processes, verify ownership, terminate the stale PID, then confirm the memory was released. Do not kill unknown PIDs on shared hosts; verify process ownership and job context first.
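The stale-process cleanup described above can be sketched as a short checklist of commands. The PID here is a placeholder, and the kill steps are deliberately left commented out so nothing is terminated without verification:

```shell
# Step 1: list active compute processes and the VRAM each one holds.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
fi

pid=12345  # placeholder: substitute a PID from the listing above

# Step 2: verify who owns the process and what it is actually running.
ps -o user,cmd -p "$pid" 2>/dev/null || echo "PID $pid not found"

# Step 3: terminate gracefully first; escalate only if SIGTERM is ignored.
# kill "$pid"        # uncomment only after verifying ownership
# kill -9 "$pid"     # last resort

# Step 4: re-run the nvidia-smi query above to confirm VRAM was released.
```

On shared hosts, pair the ps check with your job scheduler's records before killing anything.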
nvidia-smi is the fastest built-in tool for real-time GPU telemetry on Linux servers. It ships with the NVIDIA drivers, and its fields are the ones most higher-level integrations build on. Run nvidia-smi with no flags for a full snapshot of GPU and process state, and focus first on GPU-Util, Memory-Usage, Temp, and Pwr:Usage/Cap.

If GPU-Util shows 0% while a job appears to be running, check three common causes. The job may still be in a CPU-bound preprocessing stage and has not submitted work to the GPU yet. The process may have errored and stayed alive but idle. The job may also be running on a different GPU index, so list all devices with nvidia-smi --list-gpus and check each one.

Use loop mode when you need live updates without writing scripts; --loop=1 refreshes once per second. Write sampled output to a file for post-run inspection, and redirect stdout so each sample is timestamped in your log stream. Use --query-gpu with --format=csv when you need parseable output for scripts; this is the preferred pattern for cron jobs and custom exporters.

Per-process monitoring answers which application is consuming GPU time right now. Use nvidia-smi pmon to inspect utilization by PID instead of by device only, and run pmon in loop mode to monitor active compute processes. -s um displays per-process utilization and memory-throughput activity. The columns are:

- gpu: the GPU index the process is running on.
- pid: the process ID.
- type: the workload class, where C is compute, G is graphics, and M is mixed.
- sm: the percentage of time spent executing kernels on streaming multiprocessors.
- mem: the percentage of time the memory interface was active for that process.
- enc and dec: encoder and decoder utilization percentages.
- command: the truncated process name.

Map PIDs to full command lines to identify notebook kernels, training scripts, and inference workers. This is required when multiple Python jobs are running under one user.
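The per-process and logging workflows above can be combined into one sketch. The PID is a placeholder to substitute from the pmon output, and the log file name and interval are arbitrary choices:

```shell
# Timestamped CSV log settings (example values).
logfile="gpu_log.csv"
interval=5   # seconds between CSV samples

if command -v nvidia-smi >/dev/null 2>&1; then
  # Per-process utilization and memory activity: 1-second delay, 3 samples.
  nvidia-smi pmon -s um -d 1 -c 3

  # Map a PID from the pmon output to its owner and full command line.
  # 12345 is a placeholder; substitute a real PID from the pid column.
  ps -p 12345 -o user,cmd

  # Append timestamped CSV samples for post-run inspection.
  nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used \
    --format=csv -l "$interval" >> "$logfile" &
fi
```

The CSV log can later be loaded into a spreadsheet or plotted to see exactly when utilization dropped between batches.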
Use nvtop when you want interactive process control and gpustat when you want compact snapshots in scripts. Both tools complement nvidia-smi rather than replace it. Install nvtop from the Ubuntu repositories, then start it in the terminal; it provides live bars and per-process views similar to htop. Install gpustat with pip, then use watch mode for one-second updates; this is useful in SSH sessions where minimal output matters. Use nvidia-smi for canonical driver-level data and scripted queries, gpustat for low-noise terminal snapshots, and nvtop for interactive process monitoring during active debugging.

Use Glances when you need one terminal dashboard for GPU, CPU, memory, disk, and network at once. Install it with the GPU extra so NVIDIA metrics are available. In the Glances GPU line, util maps to GPU core activity and mem shows allocated versus total VRAM; temp and power indicate thermal and electrical load during the sample window. Use these values together to identify whether workload pressure is compute, memory, or thermal related. Glances is a better choice than nvidia-smi when you want CPU, memory, disk, and GPU in one non-scrolling view during interactive debugging on a single node. If Glances shows no GPU section, verify that the NVIDIA drivers are installed on the host and that the Python environment running Glances can access NVML.

Containerized GPU monitoring requires host runtime support first, then workload-level metric collection. Start with the NVIDIA Container Toolkit for Docker and DCGM Exporter for Kubernetes clusters. Install the NVIDIA Container Toolkit on the host, then run containers with --gpus all; inside the container, nvidia-smi should show host GPU telemetry. Use this after setting up Docker by following How To Install and Use Docker on Ubuntu. The NVIDIA runtime is only active after the Docker daemon restarts. Already-running containers are not affected, but any new container launched after the restart will have GPU access.
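The install and verification steps above can be sketched as follows. Package names assume Ubuntu, and the CUDA image tag is illustrative; pick a tag that matches your driver version:

```shell
# Guard on the driver so nothing runs on a host without NVIDIA hardware.
status="missing-driver"

if command -v nvidia-smi >/dev/null 2>&1; then
  status="ok"
  sudo apt install -y nvtop            # htop-style interactive GPU monitor
  pip install gpustat 'glances[gpu]'   # compact snapshots + unified dashboard
  # gpustat --watch                    # one-second refresh in an SSH session

  # Verify container GPU access (requires NVIDIA Container Toolkit and a
  # Docker daemon restart; the image tag is an example only):
  docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi
else
  echo "nvidia-smi not found: install the NVIDIA drivers first"
fi
```

If the docker run step prints the same table as nvidia-smi on the host, the runtime hook is wired up correctly.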
For full installation details, see the NVIDIA Container Toolkit guide.

Deploy DCGM Exporter as a DaemonSet on GPU nodes to expose Prometheus metrics. This creates scrape targets with per-GPU and per-pod metric labels. To collect GPU metrics in a DOKS cluster, configure Prometheus to scrape the DCGM Exporter DaemonSet, then visualize the data in Grafana or forward it to a hosted monitoring backend. Separate GPU dashboards by node pool and workload labels to avoid mixed-tenancy confusion. Before deployment, review An Introduction to Kubernetes if your team is new to cluster primitives. In a DOKS cluster, use DaemonSet pod IPs or a Kubernetes Service DNS name instead of static node IP targets. For Grafana dashboard import details, see the NVIDIA DCGM Exporter documentation.

Use Datadog when you need long-term retention, tag-based slicing, and alert routing to on-call systems. Install the Agent on each GPU node and enable the NVIDIA integration: install Agent 7 on the GPU host, enable the nvidia_gpu integration, and keep the host drivers and NVML available to the Agent process. The NVML integration is not bundled with Agent 7 by default; install it separately, verify the latest available version before installing, and then configure it through nvml.d/conf.yaml. Define tags at the host and integration level so you can group by cluster, environment, and workload type; this keeps alert routing and dashboard filters usable at scale. Save your configuration as /etc/datadog-agent/conf.d/nvml.d/conf.yaml, then restart the Agent. Create timeseries panels for nvidia.gpu.utilization, nvidia.gpu.memory.used, and nvidia.gpu.temperature, then alert on sustained saturation. A practical first alert is GPU utilization above 95% for 10 minutes on production training nodes. Use How To Monitor Your Infrastructure with Datadog for dashboard and monitor fundamentals.
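A minimal NVML configuration, assuming the integration is already installed, might look like the following; check the integration's own documentation for instance-level options before using it in production:

```yaml
# /etc/datadog-agent/conf.d/nvml.d/conf.yaml
init_config:

instances:
  - {}   # one default instance; NVML is discovered via the host drivers
```

After saving the file, restart the Agent (for example, sudo systemctl restart datadog-agent on systemd hosts) so the new check is picked up.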
To monitor GPU hosts with Zabbix, install the Zabbix agent on each GPU host, import the NVIDIA GPU template, and configure trigger thresholds for utilization and temperature. Zabbix is the right choice when you need self-hosted monitoring with custom alerting and existing enterprise integrations. Import or attach an NVIDIA GPU template in Zabbix, then bind it to hosts that have NVIDIA drivers installed. Template items should poll utilization, memory, temperature, and power. Create triggers for sustained high utilization, high temperature, and unexpected drops to zero utilization during scheduled training windows. Use trigger expressions with time windows to avoid noise from short spikes. {#GPUINDEX} is a low-level discovery macro populated automatically by the template; you do not need to set it manually.

Unified GPU usage monitoring aggregates activity from multiple GPU engines into a single usage view that operators can read quickly. Enable it through the NVIDIA Control Panel first, then verify the registry policy where required by your driver profile. Unified monitoring combines graphics, compute, copy, and video engine activity into one normalized utilization metric, which improves cross-process visibility when mixed workloads run on the same adapter. In the NVIDIA Control Panel, enable the GPU activity monitoring feature and apply the settings system-wide. If your environment uses managed policy, set the registry value used by your NVIDIA driver branch to turn on unified usage reporting. Registry value names for unified usage reporting vary by driver branch and policy tooling; validate the exact key and value against your NVIDIA enterprise driver documentation before changing production systems. After enabling unified monitoring, Task Manager can display GPU engine and aggregate usage per process, and WMI queries can then be used for scripted collection in Windows-based monitoring workflows.
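As an illustration, sustained-load triggers in modern Zabbix expression syntax might look like the examples below. The host name and item key are placeholders and will differ depending on the template you import:

```
# Fires only after utilization stays above 95% for a full 10-minute window,
# avoiding noise from short spikes.
min(/gpu-host-01/gpu.utilization[{#GPUINDEX}],10m)>95

# Fires when utilization sits at 0% for 15 minutes, useful during
# scheduled training windows when the GPU should be busy.
max(/gpu-host-01/gpu.utilization[{#GPUINDEX}],15m)=0
```

Using min() over a window means every sample in that window must exceed the threshold, which is what makes the trigger resistant to transient spikes.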
Pick a tool based on data depth, operational overhead, and alerting needs. Start with CLI tools for diagnostics, then add Datadog, Zabbix, or DCGM pipelines for persistent monitoring. For single-node debugging, start with nvidia-smi and nvtop. For fleet-level visibility across GPU Droplets and Kubernetes nodes, use DCGM Exporter plus your monitoring backend, or deploy Datadog or Zabbix for retention and alerting. If you need a historical record of GPU activity alongside CPU, memory, and disk in a single log, atop captures all of these at configurable intervals and is worth adding to long-running training hosts alongside nvidia-smi.

Q1: What is the fastest way to check GPU utilization in real time on Linux?
Run nvidia-smi --loop=1. It refreshes once per second and shows utilization, memory usage, temperature, and power draw in one view. This is usually the first check during incident triage.

Q2: How do I monitor GPU usage by a specific application or process?
Use nvidia-smi pmon -s um -d 1 to display per-process GPU activity with PIDs. Then map the PID with ps -p <PID> -o cmd to identify the exact application. This is the quickest path to isolating noisy jobs.

Q3: How do I monitor GPU utilization inside a Docker container?
Install the NVIDIA Container Toolkit on the host and run containers with --gpus all. After the container starts, run nvidia-smi inside it to verify access and usage. For persistent metrics, collect with DCGM Exporter.

Q4: What does it mean to enable unified GPU usage monitoring?
It means GPU activity from multiple engines is aggregated into a unified usage signal, which makes process-level and adapter-level utilization easier to interpret in mixed workloads. On Windows, this data is then visible in Task Manager and accessible through counters and WMI-based tooling.

Q5: How do I set up GPU monitoring in Datadog?
Install Datadog Agent 7 on each GPU host, enable the NVIDIA integration, and restart the agent.
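The atop logging mentioned above can be sketched as follows; the log path and interval are example values:

```shell
# Record full-system samples (CPU, memory, disk, and GPU where supported)
# at a fixed interval to a raw log for later replay.
interval=10                        # seconds between samples
logfile="/var/log/atop/train.raw"  # example path

if command -v atop >/dev/null 2>&1; then
  sudo mkdir -p "$(dirname "$logfile")"
  sudo atop -w "$logfile" "$interval" &
  # Replay the recording later with: atop -r /var/log/atop/train.raw
else
  echo "atop not installed; try: sudo apt install atop"
fi
```

Replaying the raw log with atop -r lets you step through the exact intervals where GPU utilization dipped and correlate them with host-side pressure.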
Once enabled, metrics appear in the nvidia.gpu.* namespace for dashboards and monitors. Add tags like environment and role before creating alerts.

Q6: Can I monitor GPU utilization in a Kubernetes cluster?
Yes. Deploy DCGM Exporter as a DaemonSet on GPU nodes and scrape it with Prometheus. This gives real-time GPU metrics across nodes and workloads. Visualize and alert with Grafana and Alertmanager or a hosted backend.

Q7: What is the difference between GPU utilization and GPU memory utilization?
GPU utilization indicates how busy the compute engines are over a sample period. GPU memory utilization indicates memory subsystem activity, while memory used indicates VRAM allocation. A job can allocate large amounts of VRAM and still show low compute utilization when input stalls occur.

Q8: How do I monitor GPU utilization with Zabbix?
Install the Zabbix agent on GPU hosts and attach an NVIDIA GPU template that runs nvidia-smi-based checks. Track utilization, memory, temperature, and power as items, and add trigger thresholds for sustained load and thermal risk.

Real-time GPU utilization monitoring is essential for optimizing deep learning performance, troubleshooting bottlenecks, and achieving efficient resource usage, whether you are running on single nodes, inside containers, or scaling across clustered environments. The right monitoring tool depends on your use case: quick one-off checks, interactive debugging, continuous fleet-wide visibility, or long-term metric retention and alerting. Start with simple tools like nvidia-smi for instant visibility, and progress to dashboarding, custom alerting, and enterprise-grade solutions as your needs grow. With the strategies and tools outlined in this guide, you can proactively monitor, troubleshoot, and maximize the performance of your GPU workloads, ensuring smoother operation for development, training, and deployment pipelines.