External GPU (eGPU) + NVIDIA Drivers on Linux: Solving the Display Manager Initialization Problem
Introduction
Symptoms
Root Cause
Additional Issue: Boot Race Condition
Step 1: Diagnosis
Check Xorg logs from recovery mode or TTY (Ctrl+Alt+F2):
Verify GPU is visible to the system:
Confirm monitor connection to eGPU:
Step 2: The Critical Fix – AllowExternalGpus
Step 3: Kernel Mode Setting (KMS) Configuration
Step 4: Boot Race Condition Fix (for stability)
GPU Wait Script
Hotplug Script (runs after display manager)
Systemd Service: Wait (runs BEFORE display manager)
Systemd Service: Hotplug (runs AFTER display manager)
Display Manager Drop-in (LightDM example)
Enable Services
Step 5: PRIME Configuration (GPU priority)
Step 6: Reboot and Verification
Post-boot verification:
Troubleshooting: If Black Screen Persists
Boot into recovery mode → drop to root shell → check logs:
Common Errors and Solutions
Rollback (if something goes wrong)
Chroot from another system:
Final Configuration Summary
Files that should exist after setup:
Key kernel parameters (in /proc/cmdline):
Technical Explanations
Known Behavior: Boot Delay (30-90 seconds)
Possible optimizations:
Conclusion
TL;DR: If your NVIDIA eGPU works in recovery mode but gives a black screen on normal boot, you're missing one critical Xorg option: AllowExternalGpus. This guide shows how to fix it properly on any X11-based Linux distribution.

Installing NVIDIA drivers on a Linux system with an external GPU (eGPU) connected via Thunderbolt can result in a frustrating black screen instead of your login screen. This issue affects LightDM, SDDM, GDM (X11 session), and other display managers across multiple distributions. This guide documents a complete solution tested on real hardware and explains the root cause that official documentation often omits.

Tested Configuration:

Before diving into the solution, confirm you're experiencing this specific issue:

NVIDIA drivers intentionally disable external GPUs by default as a safety measure, to prevent crashes when the Thunderbolt cable is accidentally disconnected. Without the AllowExternalGpus flag, the X11 server attempts to initialize the NVIDIA GPU, receives a denial, and crashes:

X11 then attempts to fall back to the Intel iGPU (modesetting driver), but if your monitor is connected only to the eGPU, there are no screens available on the Intel outputs, resulting in a black screen.

Why GNOME/Wayland might work without this fix: Wayland bypasses X11 and interacts directly with GPUs via KMS (kernel modesetting). NVIDIA drivers don't block KMS access for eGPUs. Display managers using Wayland (like GDM in Wayland mode) will work, while X11-based sessions (LightDM, SDDM, Cinnamon, MATE) will fail.

Even after adding AllowExternalGpus, you might experience intermittent black screens. This occurs due to timing issues. It is addressed through systemd service synchronization (detailed in Step 4 below).

If you see references to AllowExternalGpus or "no screens found", you're in the right place.

Create or edit the X11 configuration file:

Critical line: Option "AllowExternalGpus" "True" — nothing works without this. Option "AllowEmptyInitialConfiguration" allows X11 to start even if the GPU isn't fully initialized when the display manager launches.

If not already configured during driver installation, and missing from the kernel command line, add it to GRUB. Locate GRUB_CMDLINE_LINUX_DEFAULT and add the parameters:

Create the modprobe configuration:

This step is optional but eliminates rare black screens on some boots. For SDDM, use /etc/systemd/system/sddm.service.d/ instead.

On systems with NVIDIA drivers and nvidia-prime:

From recovery mode or another system (chroot):

Why the problem isn't the display manager: LightDM, SDDM, and GDM are just wrappers that launch X11. They all use the same X server (/usr/bin/Xorg). The root cause lies in NVIDIA driver behavior at the X11 level, not in the display manager itself.

Why GNOME/Wayland worked without the fix: GNOME defaults to Wayland, which interacts with GPUs via KMS (kernel modesetting) directly, bypassing Xorg. NVIDIA drivers don't block KMS access for eGPUs. Therefore, GDM in Wayland mode worked while LightDM/SDDM (X11) didn't.

Why the i915 ACT error is not the cause: the Intel iGPU sees that X11 is attempting to use it as a fallback (after the NVIDIA rejection) and begins initializing Intel DisplayPort outputs, but the monitor isn't connected to Intel → timeout. This is a consequence of X11 failing with NVIDIA, not the root cause.

About Thunderbolt and bolt: if the eGPU isn't authorized in bolt, it won't appear in the system at all. Check with boltctl list. If the status isn't "authorized", run: sudo boltctl enroll --policy auto <uuid>.

On cold boots with an eGPU via Thunderbolt, you may experience a delay before the login screen appears. This is normal and relates to sequential initialization. Total: 30-60 seconds on modern hardware. The systemd services (nvidia-egpu-wait and nvidia-drm-hotplug) minimize this delay but can't eliminate it entirely due to Thunderbolt physics.

The root cause of black screen issues when using an NVIDIA eGPU on Linux isn't the display manager, PRIME configuration, or GRUB parameters. It's a single missing Xorg option: AllowExternalGpus. NVIDIA drivers disable external GPUs by default as a safety measure. Without explicit permission via this flag, X11 initialization fails silently, resulting in a black screen.

This configuration has been tested extensively and works reliably across multiple distributions. If you're building a Linux workstation with an eGPU, this guide can save you hours of troubleshooting.

Questions? Feel free to ask in the comments. I'll be monitoring this thread and happy to help troubleshoot your specific configuration.

Author: Aleksandr Kossarev
Location: Estonia
Date: May 2, 2026
Hardware: GEEKOM GT1 Mega + NVIDIA RTX 5060 Ti (eGPU via Thunderbolt 4)
Project: Arche Iscrin — AI-assisted creative projects

This article is based on real-world troubleshooting and testing. All commands and configurations have been verified on actual hardware. Feel free to share this guide with anyone struggling with eGPU setup on Linux.
(WW) NVIDIA(GPU-0): This device is an external GPU, but external GPUs have not
(WW) NVIDIA(GPU-0): been enabled with AllowExternalGpus. Disabling this device
(EE) NVIDIA(0): Failing initialization of X screen
(EE) no screens found
Fatal server error: no screens found
grep -E "(EE|WW|AllowExternal|no screens|nvidia)" /var/log/Xorg.0.log
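To read the verdict at a glance, a small triage helper can classify a log excerpt. This is a sketch, not part of the guide's scripts: the classify_xorg_log name is mine, and it only looks for the two NVIDIA messages shown above.

```shell
# classify_xorg_log LOGTEXT - rough triage of an Xorg log excerpt.
# Prints "egpu-blocked" when the AllowExternalGpus denial is present,
# "no-screens" when X died without that signature, "ok" otherwise.
classify_xorg_log() {
  if printf '%s\n' "$1" | grep -q 'AllowExternalGpus'; then
    echo "egpu-blocked"
  elif printf '%s\n' "$1" | grep -q 'no screens found'; then
    echo "no-screens"
  else
    echo "ok"
  fi
}

# Example: classify_xorg_log "$(grep -E '(EE|WW)' /var/log/Xorg.0.log)"
```

If it prints "egpu-blocked", the fix in Step 2 applies directly.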
nvidia-smi
# Should show GPU with temperature, memory usage, etc.
lspci | grep -i nvidia
# Should list your GPU
ls /sys/class/drm/
# Look for card0-DP-* or card0-HDMI-* entries
cat /sys/class/drm/card0-DP-1/status
# Should return: connected
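With several outputs it is tedious to cat each status file by hand. A small sketch that prints every connector at once (the list_connectors name and the directory parameter are my own; the sysfs layout is the standard one):

```shell
# list_connectors [DIR] - print "connector: status" for every DRM
# connector under DIR (defaults to /sys/class/drm).
list_connectors() {
  dir="${1:-/sys/class/drm}"
  for st in "$dir"/card*-*/status; do
    [ -e "$st" ] || continue
    name=${st%/status}
    printf '%s: %s\n' "${name##*/}" "$(cat "$st")"
  done
}
```

Exactly one connector on the eGPU's card should report "connected".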
sudo nano /etc/X11/xorg.conf.d/10-nvidia.conf
Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
    Inactive "intel"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    Option "PrimaryGPU" "yes"
    Option "AllowExternalGpus" "True"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration"
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection
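After saving, it is worth verifying that both critical options actually landed in the file. A quick check (the check_xorg_opts helper is mine; the default path is the file created above):

```shell
# check_xorg_opts [FILE] - confirm the two critical Xorg options are
# present in the config file; prints a line per option.
check_xorg_opts() {
  conf="${1:-/etc/X11/xorg.conf.d/10-nvidia.conf}"
  for opt in AllowExternalGpus AllowEmptyInitialConfiguration; do
    if grep -q "$opt" "$conf"; then
      echo "$opt: present"
    else
      echo "$opt: MISSING"
    fi
  done
}
```

Both lines must read "present" before rebooting.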
# Verify modeset is enabled
cat /proc/cmdline | grep nvidia-drm
# Should show: nvidia-drm.modeset=1
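/proc/cmdline only proves the parameter was passed; whether the module actually honored it can be read back from sysfs. A small sketch (the kms_enabled name is mine; the parameter path is the standard one once nvidia-drm is loaded):

```shell
# kms_enabled [FILE] - report whether nvidia-drm kernel modesetting is
# active, by reading the module parameter ("Y" means enabled).
kms_enabled() {
  f="${1:-/sys/module/nvidia_drm/parameters/modeset}"
  if [ "$(cat "$f" 2>/dev/null)" = "Y" ]; then
    echo "KMS: enabled"
  else
    echo "KMS: disabled"
  fi
}
```

If this reports "disabled" despite the cmdline parameter, the initramfs likely still carries an old module configuration (see update-initramfs below).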
sudo nano /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nvidia-drm.modeset=1"
sudo nano /etc/modprobe.d/nvidia-kms.conf
options nvidia-drm modeset=1
options nvidia NVreg_PreserveVideoMemoryAllocations=1
sudo update-grub
sudo update-initramfs -u
sudo nano /usr/local/bin/nvidia-egpu-wait.sh
#!/bin/bash
# Wait for the NVIDIA GPU to appear in /sys/class/drm
TIMEOUT=30
COUNT=0
while [ $COUNT -lt $TIMEOUT ]; do
    if ls /sys/class/drm/ 2>/dev/null | grep -q "^card[0-9]$"; then
        # Verify it's NVIDIA, not just Intel
        for card in /sys/class/drm/card[0-9]; do
            vendor=$(cat "$card/device/vendor" 2>/dev/null)
            if [ "$vendor" = "0x10de" ]; then
                sleep 2  # Additional pause for the Thunderbolt DP tunnel
                exit 0
            fi
        done
    fi
    sleep 1
    COUNT=$((COUNT + 1))
done
exit 0  # Timeout - continue anyway
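The vendor check inside the script can be exercised against a fake sysfs tree before trusting it at boot. A minimal sketch (is_nvidia_card is my own name; 0x10de is NVIDIA's PCI vendor ID, 0x8086 is Intel's):

```shell
# is_nvidia_card CARDDIR - succeed when the DRM card's PCI vendor ID
# is NVIDIA's (0x10de); mirrors the test in nvidia-egpu-wait.sh.
is_nvidia_card() {
  [ "$(cat "$1/device/vendor" 2>/dev/null)" = "0x10de" ]
}

# Example: for card in /sys/class/drm/card[0-9]; do
#   is_nvidia_card "$card" && echo "$card is NVIDIA"
# done
```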
sudo chmod +x /usr/local/bin/nvidia-egpu-wait.sh
sudo nano /usr/local/bin/nvidia-drm-hotplug.sh
#!/bin/bash
sleep 8
udevadm trigger --action=change --subsystem-match=drm
udevadm settle
sudo chmod +x /usr/local/bin/nvidia-drm-hotplug.sh
sudo nano /etc/systemd/system/nvidia-egpu-wait.service
[Unit]
Description=Wait for NVIDIA eGPU initialization
After=bolt.service
Before=display-manager.service
DefaultDependencies=no

[Service]
Type=oneshot
ExecStart=/usr/local/bin/nvidia-egpu-wait.sh
RemainAfterExit=yes
TimeoutSec=35

[Install]
WantedBy=display-manager.service
sudo nano /etc/systemd/system/nvidia-drm-hotplug.service
[Unit]
Description=NVIDIA DRM hotplug trigger after display manager
After=display-manager.service bolt.service
Wants=display-manager.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/nvidia-drm-hotplug.sh
RemainAfterExit=no

[Install]
WantedBy=multi-user.target
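The ordering directives are the whole point of these units, so after editing it helps to re-read just those lines. A throwaway helper (unit_ordering is my own name, not a systemd tool):

```shell
# unit_ordering FILE - print only the dependency/ordering directives of
# a systemd unit file, to eyeball Before=/After= relationships.
unit_ordering() {
  grep -E '^(Before|After|Wants|WantedBy)=' "$1"
}

# Example:
# unit_ordering /etc/systemd/system/nvidia-egpu-wait.service
```

The wait service must show Before=display-manager.service; the hotplug service must show After=display-manager.service.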
sudo mkdir -p /etc/systemd/system/lightdm.service.d/
sudo nano /etc/systemd/system/lightdm.service.d/wait-nvidia-egpu.conf
[Unit]
Wants=nvidia-egpu-wait.service
After=nvidia-egpu-wait.service
sudo systemctl daemon-reload
sudo systemctl enable nvidia-egpu-wait.service
sudo systemctl enable nvidia-drm-hotplug.service
sudo prime-select nvidia
prime-select query
# Should return: nvidia
sudo reboot
# GPU is active and in use
nvidia-smi

# Xorg has no critical errors
grep -E "^(EE|WW)" /var/log/Xorg.0.log

# Services completed successfully
systemctl status nvidia-egpu-wait.service
systemctl status nvidia-drm-hotplug.service

# NVIDIA is managing the display (not the Intel fallback)
xrandr --listproviders
# Should show: NVIDIA-0 as primary provider
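For scripted verification, the provider name can be extracted from the xrandr output. A sketch (primary_provider is my own helper; the "Provider 0: ... name:NVIDIA-0" line format is assumed from typical xrandr builds, so adjust the pattern if yours differs):

```shell
# primary_provider - read `xrandr --listproviders` output on stdin and
# print the name of provider 0 (the primary provider).
primary_provider() {
  sed -n 's/^Provider 0:.*name: *\(.*\)$/\1/p'
}

# Example: xrandr --listproviders | primary_provider
```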
# Main X11 log
grep -E "(EE|WW|AllowExternal|screen)" /var/log/Xorg.0.log

# Boot journal
journalctl -b 0 -p err --no-pager | tail -50

# Service status
systemctl status lightdm nvidia-egpu-wait nvidia-drm-hotplug

# Initialization sequence
journalctl -b 0 --no-pager | grep -E "(nvidia|drm|lightdm|sddm|bolt)" | head -40
# Remove our xorg config - X11 reverts to auto-detection
sudo rm /etc/X11/xorg.conf.d/10-nvidia.conf

# Or temporarily rename it for testing
sudo mv /etc/X11/xorg.conf.d/10-nvidia.conf /etc/X11/xorg.conf.d/10-nvidia.conf.bak
sudo mkdir -p /mnt/target
sudo mount /dev/nvme0n1pX /mnt/target # replace X with your partition
sudo mount --bind /dev /mnt/target/dev
sudo mount --bind /proc /mnt/target/proc
sudo mount --bind /sys /mnt/target/sys
sudo mount --bind /run /mnt/target/run
sudo chroot /mnt/target /bin/bash
# Make changes...
exit
sudo umount /mnt/target/dev /mnt/target/proc /mnt/target/sys /mnt/target/run
sudo umount /mnt/target
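The four bind mounts follow one pattern, so a dry-run generator keeps the step reviewable before handing it to a root shell. This is my own convenience sketch (print_chroot_mounts is not a standard tool):

```shell
# print_chroot_mounts TARGET - emit the bind-mount commands for TARGET;
# review the output first, then pipe it to `sudo sh` to execute.
print_chroot_mounts() {
  for d in dev proc sys run; do
    echo "mount --bind /$d $1/$d"
  done
}

# Example: print_chroot_mounts /mnt/target | sudo sh
```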
/etc/X11/xorg.conf.d/10-nvidia.conf ← primary fix
/etc/modprobe.d/nvidia-kms.conf ← KMS modeset
/etc/default/grub ← nvidia-drm.modeset=1 in cmdline
/usr/local/bin/nvidia-egpu-wait.sh ← wait script
/usr/local/bin/nvidia-drm-hotplug.sh ← hotplug script
/etc/systemd/system/nvidia-egpu-wait.service ← service (Before=DM)
/etc/systemd/system/nvidia-drm-hotplug.service ← service (After=DM)
/etc/systemd/system/lightdm.service.d/wait-nvidia-egpu.conf ← drop-in
nvidia-drm.modeset=1
- Hardware: GEEKOM GT1 Mega (Intel Core Ultra 9 185H with Intel Arc iGPU) + NVIDIA RTX 5060 Ti in Sonnet eGPU Breakaway Box 750ex
- Connection: Thunderbolt 4
- OS: Linux Mint 22.3 MATE (applicable to Ubuntu, Fedora, Arch, and any X11-based distribution)
- Driver: NVIDIA 595 (proprietary)

- ✅ NVIDIA drivers installed successfully
- ✅ nvidia-smi works and shows your GPU
- ✅ GPU visible in lspci output
- ❌ Black screen instead of login screen on normal boot
- ✅ System works normally in recovery mode or without X11
- ⚠️ Possible error in dmesg: i915: failed to get ACT after 3000ms
- ❌ Problem persists across different display managers (LightDM, SDDM, GDM in X11 mode)

- Display manager starts → attempts to launch X11
- nvidia-drm module hasn't completed initialization (~2–3 seconds)
- Thunderbolt DisplayPort tunnel establishes even later

- Thunderbolt authorization (~15 sec)
- NVIDIA driver loading (~20 sec)
- DisplayPort tunnel establishment (~15 sec)
- X11 initialization (~10 sec)

- Configure bolt with an auto-enroll policy
- Use nvidia-smi -pm 1 for early GPU "warm-up"
- Disable unused systemd services

What we learned:

- ✅ External GPUs require explicit enablement in Xorg configuration
- ✅ Display managers (LightDM, SDDM, GDM in X11) all experience the same issue
- ✅ Wayland sessions work because they bypass X11 entirely
- ✅ Boot timing issues can be addressed with systemd service synchronization
- ✅ The i915 ACT error is a red herring — a consequence, not the cause