
NVIDIA

This article covers the proprietary NVIDIA graphics card driver. For the open-source driver, see Nouveau. If you have a laptop with hybrid Intel/NVIDIA graphics, see NVIDIA Optimus instead.


Installation

These instructions are for those using the stock linux or linux-lts packages. For custom kernel setup, skip to the next subsection.

1. If you do not know what graphics card you have, find out by issuing:
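A common way to do this (a sketch; output varies by system) is to list VGA/3D controllers together with the kernel driver currently bound to them:

```shell
# List VGA/3D controllers and the kernel driver in use for each
lspci -k | grep -A 2 -E "(VGA|3D)"
```

The "Kernel driver in use" line shows whether nouveau or nvidia is currently bound to the card.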

2. Determine the necessary driver version for your card by:

  • Visiting NVIDIA’s driver download site and using the dropdown lists.
  • Finding the code name (e.g. NV50, NVC0, etc.) on nouveau wiki’s code names page or nouveau’s GitLab, then looking up the name in NVIDIA’s legacy card list: if your card is not there you can use the latest driver.

3. Install the appropriate driver for your card:

  • For the Maxwell (NV110/GMXXX) series and newer, install the nvidia package (for use with the linux kernel) or the nvidia-lts package (for use with the linux-lts kernel).
    • If these packages do not work, nvidia-betaAUR may have a newer driver version that offers support.
  • Alternatively, for the Turing (NV160/TUXXX) series or newer, the nvidia-open package may be installed for open source kernel modules on the linux kernel (on other kernels, nvidia-open-dkms must be used).
    • This is currently alpha quality on desktop cards, so expect issues.
  • For the Kepler (NVE0/GKXXX) series, install the nvidia-470xx-dkmsAUR package.
  • For the Fermi (NVC0/GF1XX) series, install the nvidia-390xx-dkmsAUR package.
  • For even older cards, have a look at #Unsupported drivers.

4. For 32-bit application support, also install the corresponding lib32 package from the multilib repository (e.g. lib32-nvidia-utils).

5. Reboot. The nvidia package contains a file which blacklists the nouveau module, so rebooting is necessary.
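For reference, the relevant modprobe fragment shipped by the package looks like this (a sketch; the exact file path and contents depend on the package version):

```
blacklist nouveau
```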

Once the driver has been installed, continue to #Xorg configuration or #Wayland.

Unsupported drivers

If you have an older card, NVIDIA no longer actively supports drivers for your card. This means that these drivers do not officially support the current Xorg version. It thus might be easier to use the nouveau driver, which supports the old cards with the current Xorg.

However, NVIDIA’s legacy drivers are still available and might provide better 3D performance/stability.

  • For the Tesla (NV50/G80-90-GT2XX) series, install the nvidia-340xx-dkmsAUR package.
  • For the Curie (NV40/G70) series and older, drivers are no longer packaged for Arch Linux.

Custom kernel

If using a custom kernel, compilation of the NVIDIA kernel modules can be automated with DKMS. Install the nvidia-dkms package (or a specific branch), and the corresponding headers for your kernel.

Ensure your kernel has CONFIG_DRM_SIMPLEDRM=y, and if using CONFIG_DEBUG_INFO_BTF, then this is needed in the PKGBUILD (since kernel 5.16):

The NVIDIA module will be rebuilt after every NVIDIA or kernel update thanks to the DKMS pacman hook.

DRM kernel mode setting

Early loading

For basic functionality, adding the nvidia_drm.modeset=1 kernel parameter should suffice. If you want to ensure it is loaded at the earliest possible point, or are noticing startup issues (such as the nvidia kernel module being loaded after the display manager), you can add nvidia, nvidia_modeset, nvidia_uvm and nvidia_drm to the initramfs.
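A minimal sketch of enabling DRM kernel mode setting, shown as a raw kernel command line parameter (how to append it depends on your bootloader):

```
nvidia_drm.modeset=1
```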

mkinitcpio

If you use the mkinitcpio initramfs, follow mkinitcpio#MODULES to add the modules.
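A sketch of the resulting MODULES array in /etc/mkinitcpio.conf (assuming no other modules were already listed):

```
MODULES=(nvidia nvidia_modeset nvidia_uvm nvidia_drm)
```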

If added to the initramfs, do not forget to run mkinitcpio every time there is an NVIDIA driver update. See #pacman hook to automate these steps.

Booster
pacman hook

To avoid the possibility of forgetting to update initramfs after an NVIDIA driver upgrade, you may want to use a pacman hook:

Make sure the Target package set in this hook is the one you have installed in the steps above (e.g. nvidia, nvidia-dkms, nvidia-lts or nvidia-ck-something).
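A sketch of such a hook, e.g. at /etc/pacman.d/hooks/nvidia.hook, assuming the nvidia package and the linux kernel; adjust both Target lines to match your installed driver and kernel:

```
[Trigger]
Operation=Install
Operation=Upgrade
Operation=Remove
Type=Package
Target=nvidia
Target=linux

[Action]
Description=Update NVIDIA module in initcpio
Depends=mkinitcpio
When=PostTransaction
NeedsTargets
Exec=/bin/sh -c 'while read -r trg; do case $trg in linux*) exit 0; esac; done; /usr/bin/mkinitcpio -P'
```

The Exec guard skips the extra rebuild when the kernel itself is among the targets, since the kernel's own hook already regenerates the initramfs in that case.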


Hardware accelerated video decoding

Accelerated video decoding with VDPAU is supported on GeForce 8 series cards and newer. Accelerated video decoding with NVDEC is supported on Fermi (400 series) cards and newer. See Hardware video acceleration for details.

Hardware accelerated video encoding with NVENC

NVENC requires the nvidia_uvm module and the creation of related device nodes under /dev .

The latest driver package provides a udev rule which creates device nodes automatically, so no further action is required.

If you are using an old driver (e.g. nvidia-340xx-dkmsAUR), you need to create the device nodes manually. Invoking the nvidia-modprobe utility creates them; you can create /etc/udev/rules.d/70-nvidia.rules to run it automatically:
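A sketch of that udev rule as a single line (flag meanings per nvidia-modprobe(1); verify against your driver version):

```
ACTION=="add", DEVPATH=="/bus/pci/drivers/nvidia", RUN+="/usr/bin/nvidia-modprobe -c 0 -u"
```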

Xorg configuration

The proprietary NVIDIA graphics card driver does not need any Xorg server configuration file. You can start X to see if the Xorg server will function correctly without a configuration file. However, it may be required to create a configuration file (prefer /etc/X11/xorg.conf.d/20-nvidia.conf over /etc/X11/xorg.conf ) in order to adjust various settings. This configuration can be generated by the NVIDIA Xorg configuration tool, or it can be created manually. If created manually, it can be a minimal configuration (in the sense that it will only pass the basic options to the Xorg server), or it can include a number of settings that can bypass Xorg’s auto-discovered or pre-configured options.

Automatic configuration

The NVIDIA package includes nvidia-xconfig, an automatic configuration tool that creates an Xorg server configuration file ( xorg.conf ):

This command will auto-detect and create (or edit, if already present) the /etc/X11/xorg.conf configuration according to present hardware.

If there are instances of DRI, ensure they are commented out:

Double check your /etc/X11/xorg.conf to make sure your default depth, horizontal sync, vertical refresh, and resolutions are acceptable.

nvidia-settings

The nvidia-settings tool lets you configure many options using either the CLI or the GUI. Running nvidia-settings without any options launches the GUI; for CLI options, see nvidia-settings(1).

You can run the CLI/GUI as a non-root user and save the settings to ~/.nvidia-settings-rc by using the option Save Current Configuration under the nvidia-settings Configuration tab.

To load the ~/.nvidia-settings-rc for the current user, run nvidia-settings --load-config-only.

See Autostarting to start this command on every boot.

If a configuration change breaks graphical startup, removing ~/.nvidia-settings-rc and/or the Xorg file(s) should recover normal startup.

  • Cinnamon desktop can override changes made through nvidia-settings. You can adjust the Cinnamon startup behavior to prevent that.

Manual configuration

Several tweaks (which cannot be enabled automatically or with nvidia-settings) can be performed by editing your configuration file. The Xorg server will need to be restarted before any changes are applied.

    Minimal configuration

    A basic configuration block in 20-nvidia.conf (or deprecated in xorg.conf ) would look like this:
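A minimal sketch (the identifier name is conventional, not mandatory):

```
Section "Device"
    Identifier "NVIDIA Card"
    VendorName "NVIDIA Corporation"
    Driver     "nvidia"
EndSection
```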

    Disabling the logo on startup

    Add the «NoLogo» option under section Device :
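For example (a sketch showing only the relevant option; merge it into your existing Device section):

```
Section "Device"
    Identifier "NVIDIA Card"
    Driver     "nvidia"
    Option     "NoLogo" "true"
EndSection
```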

    Overriding monitor detection

The «ConnectedMonitor» option under section Device allows overriding monitor detection when the X server starts, which may save a significant amount of time at startup. The available options are: «CRT» for analog connections, «DFP» for digital monitors and «TV» for televisions.

    The following statement forces the NVIDIA driver to bypass startup checks and recognize the monitor as DFP:
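A sketch of the option inside the Device section:

```
Option "ConnectedMonitor" "DFP"
```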

    Enabling brightness control

    This article or section is out of date.

Add to kernel parameters:

    Alternatively, add the following under section Device :
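On the legacy driver branches this section concerns, that option was typically the following (an assumption based on older driver documentation; verify against your driver's README):

```
Option "RegistryDwords" "EnableBrightnessControl=1"
```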

    If brightness control still does not work with this option, try installing nvidia-bl-dkms AUR .

    Enabling SLI

    Taken from the NVIDIA driver’s README Appendix B: This option controls the configuration of SLI rendering in supported configurations. A «supported configuration» is a computer equipped with an SLI-Certified Motherboard and 2 or 3 SLI-Certified GeForce GPUs.

    Find the first GPU’s PCI Bus ID using lspci :

    Add the BusID (3 in the previous example) under section Device :
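For example, if lspci showed the first GPU at 03:00.0, the Device section would carry (the identifier name is an assumption):

```
Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    BusID      "PCI:3:0:0"
EndSection
```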

    Add the desired SLI rendering mode value under section Screen :
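A sketch of the corresponding Screen section (identifiers are assumptions; any valid rendering mode value may be substituted):

```
Section "Screen"
    Identifier "Screen0"
    Device     "Device0"
    Option     "SLI" "Auto"
EndSection
```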

The following table presents the available rendering modes.

Value                      Behavior
0, no, off, false, Single  Use only a single GPU when rendering.
1, yes, on, true, Auto     Enable SLI and allow the driver to automatically select the appropriate rendering mode.
AFR                        Enable SLI and use the alternate frame rendering mode.
SFR                        Enable SLI and use the split frame rendering mode.
AA                         Enable SLI and use SLI antialiasing. Use this in conjunction with full scene antialiasing to improve visual quality.

    Alternatively, you can use the nvidia-xconfig utility to insert these changes into xorg.conf with a single command:

    To verify that SLI mode is enabled from a shell:

If this configuration does not work, you may need to use the PCI Bus ID provided by nvidia-settings, and comment out the PrimaryGPU option in your xorg.d configuration. Using this configuration may also solve any graphical boot issues.

    Multiple monitors

    See Multihead for more general information.

    Using nvidia-settings

    The nvidia-settings tool can configure multiple monitors.

    For CLI configuration, first get the CurrentMetaMode by running:

Save everything after the :: to the end of the attribute (in this case: DPY-1: 2880x1620 @2880x1620 +0+0) and use it to reconfigure your displays with nvidia-settings --assign "CurrentMetaMode=your_meta_mode".
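A sketch of the round trip, with the extraction of the part after :: done in shell (the sample attribute value is an assumption; the nvidia-settings calls require a running X session and are shown as comments):

```shell
# Query the current MetaMode (requires a running X session on the NVIDIA driver):
#   nvidia-settings -q CurrentMetaMode
# Suppose the attribute value returned is:
out='source=nv-control :: DPY-1: 2880x1620 @2880x1620 +0+0'
# Keep everything after the "::" separator:
meta="${out#*:: }"
echo "$meta"    # prints: DPY-1: 2880x1620 @2880x1620 +0+0
# Reassign it, e.g. after editing offsets or resolutions:
#   nvidia-settings --assign "CurrentMetaMode=$meta"
```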

    ConnectedMonitor

    If the driver does not properly detect a second monitor, you can force it to do so with ConnectedMonitor.

    The duplicated device with Screen is how you get X to use two monitors on one card without TwinView . Note that nvidia-settings will strip out any ConnectedMonitor options you have added.

    TwinView

If you want only one big screen instead of two, set the TwinView argument to 1. This option should be used if you desire compositing. TwinView only works on a per-card basis: all participating monitors must be connected to the same card.

    If you have multiple cards that are SLI capable, it is possible to run more than one monitor attached to separate cards (for example: two cards in SLI with one monitor attached to each). The «MetaModes» option in conjunction with SLI Mosaic mode enables this. Below is a configuration which works for the aforementioned example and runs GNOME flawlessly.

    Vertical sync using TwinView

    If you are using TwinView and vertical sync (the «Sync to VBlank» option in nvidia-settings), you will notice that only one screen is being properly synced, unless you have two identical monitors. Although nvidia-settings does offer an option to change which screen is being synced (the «Sync to this display device» option), this does not always work. A solution is to add the following environment variables at startup, for example append in /etc/profile :
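A sketch of those environment variables (DFP-0 is an assumption; substitute your own display identifier as described next):

```shell
export __GL_SYNC_TO_VBLANK=1
export __GL_SYNC_DISPLAY_DEVICE=DFP-0
export VDPAU_NVIDIA_SYNC_DISPLAY_DEVICE=DFP-0
```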

    You can change DFP-0 with your preferred screen ( DFP-0 is the DVI port and CRT-0 is the VGA port). You can find the identifier for your display from nvidia-settings in the «X Server XVideoSettings» section.

    Gaming using TwinView

    In case you want to play fullscreen games when using TwinView, you will notice that games recognize the two screens as being one big screen. While this is technically correct (the virtual X screen really is the size of your screens combined), you probably do not want to play on both screens at the same time.

    To correct this behavior for SDL, try:
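One commonly used workaround is the SDL 1.2 environment variable below (a sketch; it has no effect on SDL2, which selects displays through its own API):

```shell
export SDL_VIDEO_FULLSCREEN_HEAD=1
```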

    For OpenGL, add the appropriate Metamodes to your xorg.conf in section Device and restart X:
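A sketch of such a Metamodes line (the resolutions are placeholders for a dual-monitor setup; the first mode listed is used for fullscreen):

```
Option "Metamodes" "1680x1050,1680x1050; 1280x1024,1280x1024"
```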

    Another method that may either work alone or in conjunction with those mentioned above is starting games in a separate X server.

    Mosaic mode

    Mosaic mode is the only way to use more than 2 monitors across multiple graphics cards with compositing. Your window manager may or may not recognize the distinction between each monitor. Mosaic mode requires a valid SLI configuration. Even if using Base mode without SLI, the GPUs must still be SLI capable/compatible.

    Base Mosaic

Base Mosaic mode works on any set of GeForce 8000 series or higher GPUs. It cannot be enabled from within the nvidia-settings GUI. You must either use the nvidia-xconfig command line program or edit xorg.conf by hand. Metamodes must be specified. The following is an example for four DFPs in a 2x2 configuration, each running at 1920x1024, with two DFPs connected to two cards:

    SLI Mosaic

    If you have an SLI configuration and each GPU is a Quadro FX 5800, Quadro Fermi or newer then you can use SLI Mosaic mode. It can be enabled from within the nvidia-settings GUI or from the command line with:


    Wayland

    See Wayland#Requirements for more information.

    For further configuration options, take a look at the wiki pages or documentation of the respective compositor.

    Regarding XWayland take a look at Wayland#XWayland.


    NVIDIA/yum-packaging-precompiled-kmod


    yum packaging precompiled kmod

    Packaging templates for yum and dnf based Linux distros to build NVIDIA driver precompiled kernel modules.

    For official packages see this table and developer blog post.

    The main branch contains this README and a sample build script. The .spec and genmodules.py files can be found in the appropriate rhel7, rhel8, and fedora branches.


    This repo contains the .spec file used to build the following RPM packages:

    note: XXX is the first . delimited field in the driver version, ex: 440 in 440.33.01
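The branch field can be extracted from a full version string in shell like so (using the example version from the note above):

```shell
version="440.33.01"
branch="${version%%.*}"   # first "."-delimited field
echo "$branch"            # prints: 440
```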

    RHEL8 or Fedora streams: latest and XXX

    note: requires genmodules.py to generate modules.yaml for modularity streams.

    RHEL7 flavor: latest

    RHEL7 flavor: branch-XXX

    These packages can be used in place of their equivalent DKMS packages:

    RHEL8 or Fedora streams: latest-dkms and XXX-dkms

    RHEL7 flavor: latest-dkms

    The latest and latest-dkms streams/flavors always update to the highest versioned driver, while the XXX and XXX-dkms streams/flavors lock driver updates to the specified driver branch.

    note: XXX-dkms is not available for RHEL7

    Clone this git repository:

    Supported branches: rhel7 , rhel8 & fedora

    Download a NVIDIA driver runfile:

CUDA runfiles ( cuda_*_linux.run ) are not compatible.

However, an NVIDIA driver runfile can be extracted intact from a CUDA runfile:

    Install build dependencies

note: these are only needed for building, not installation

    Building with script

    Fetch script from main branch

    Generate tarball from runfile

    Generate X.509 public_key.der and private_key.priv files.

    Example x509-configuration.ini. Replace $USER and $EMAIL values.
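A sketch of generating the key pair with openssl; here -subj/-newkey stand in for the repo's x509-configuration.ini, which would instead be passed via -config (the CN string is an assumption):

```shell
# Generate a self-signed X.509 key pair for signing the kernel modules
openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch \
    -newkey rsa:2048 -subj "/CN=Custom NVIDIA module signing key/" \
    -outform DER -out public_key.der -keyout private_key.priv
```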

    Compilation and Packaging

    note: Fedora users may need to export IGNORE_CC_MISMATCH=1

    Sign RPM package(s) with GPG signing key

    If one does not already exist, generate a GPG key pair

Set $gpgKey to the secret key ID.

    Other NVIDIA driver packages

    RHEL8 or Fedora

    Copy relevant packages from the CUDA repository

    RHEL7

    Copy relevant packages from the CUDA repository

    RHEL8 or Fedora

    RHEL7

    Enable local repo

    Create custom.repo file

    Copy to system path for yum / dnf package manager

    Clean yum / dnf cache

    note: XXX is the first . delimited field in the driver version, ex: 440 in 440.33.01

    • RHEL8 or Fedora streams: latest , XXX , latest-dkms , XXX-dkms

    RHEL8 or Fedora profiles: default , ks , fm , src

The default profile ( default ) installs all of the driver packages for the specified stream using transitive closure

note: the default profile does not need to be specified explicitly

The kickstart profile ( ks ) is used for unattended Anaconda installs of CentOS, Fedora, and RHEL via a kickstart configuration file. This profile does not install the cuda-drivers metapackage, which otherwise would attempt to uninstall any existing NVIDIA driver runfile installation via a %pretrans hook

    note: any package warning is fatal to a kickstart installation

    The NvSwitch profile ( fm ) installs all of the driver packages, as well as Fabric Manager and NCSQ

    note: this is intended for hardware containing NvSwitch such as DGX systems

    The Source profile ( src ) installs only the contents of /usr/src/nvidia-$ which provides nv-p2p.h and other header files used for compiling NVIDIA kernel modules such as GDRCopy and nvidia-fs

    note: this profile is only compatible with precompiled streams ( latest , XXX ); DKMS streams use kmod-nvidia-latest-dkms

note: this profile should be combined with another profile, i.e. default , ks , or fm
