nvfancontrol

About

Nvfancontrol provides dynamic fan control for NVidia graphics cards on Linux and Windows.

Sometimes it is desirable to control the fan speed of the graphics card using a custom response curve instead of the automatic setting built into the card's BIOS. In newer GPUs especially, the fan does not kick in below 60°C or a certain level of GPU utilization. This is a small toy project in Rust that achieves finer control over fan behaviour using either XNVCtrl on Linux or NVAPI on Windows. It is a work in progress so proceed with caution!

The minimum supported driver version is 352.09. For GPUs with multiple independently controllable coolers, nvfancontrol will autodetect the fans and apply the provided response curve to each of them separately.

HowTo

Building

Pre-built binaries for the latest release are provided however if you want to build the project from source read along.

Prerequisites for Linux

You will need:

  • the Rust compiler toolchain, stable >=1.34 or nightly (build)
  • XNVCtrl; static (build only) or dynamic (build and runtime)
  • Xlib (build and runtime)
  • Xext (build and runtime)

Since XNVCtrl supports FreeBSD in addition to Linux, these instructions should also work for FreeBSD without further modification. However, nvfancontrol is completely untested on FreeBSD (bug reports are welcome).

Prerequisites for Windows

You will need:

  • the Rust compiler toolchain, stable >=1.15 or nightly. Be advised that you need the MSVC ABI version of the toolchain, not the GNU one. In order to target the MSVC ABI for Rust you will also need the Visual C++ build tools package or any recent version of Visual Studio (2015+). If you are using rustup (which you should) you will be warned about this (build only)
  • the NVAPI libraries (build only). Depending on which version you are building (x86, x64 or both) place nvapi.lib, nvapi64.lib or both in the root of the repository. As NVAPI is linked statically there are no runtime dependencies apart from the NVidia driver.

For both platforms run cargo build --release. Upon successful compilation the executable can be found in target/release/nvfancontrol. On Linux the build script expects the libraries to be installed in /usr/lib or /usr/local/lib. If your libraries live elsewhere, point to them with the LIBRARY_PATH environment variable (colon-separated paths). By default libXNVCtrl is linked statically. If a static version of libXNVCtrl is not available, or you explicitly want it linked dynamically, add --features=dynamic-xnvctrl to the cargo invocation.

Enable Coolbits (Linux only)

For Linux ensure that Coolbits is enabled in your X11 server settings. To do so create a file named 20-nvidia.conf within /etc/X11/xorg.conf.d/ or /usr/share/X11/xorg.conf.d/ (depending on the distribution) containing the following:

Section "Device"
    Identifier "Device 0"
    Driver     "nvidia"
    VendorName "NVIDIA Corporation"
    BoardName  "IDENTIFIER FOR YOUR GPU"
    Option     "Coolbits" "4"
EndSection

The important bit is the Coolbits option. Valid Coolbits values for dynamic fan control are 4, 5 and 12. A sample configuration file is provided.

Use and configure

To run the program just execute the nvfancontrol binary. Add the -d or --debug argument for more output. To use a custom curve provide a custom configuration file. On Linux create a file named nvfancontrol.conf under the XDG configuration directory (~/.config for per-user or /etc/xdg for system-wide configuration). On Windows create the file in C:\Users\[USERNAME]\AppData\Roaming instead. The configuration file should contain pairs of whitespace-delimited parameters (temperature in degrees Celsius, fan speed in %). For example:

30    20
40    30
50    40
60    50
70    60
80    80

Lines starting with # are ignored. You need at least two pairs of values.
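The parsing rules above (whitespace-delimited pairs, lines starting with # ignored, at least two points) can be sketched as follows. This is an illustrative Python sketch, not nvfancontrol's actual Rust parser; the function name and error message are assumptions:

```python
# Minimal sketch of a parser for the 2-column curve file.
# Function name and error handling are illustrative only.

def parse_curve(text):
    """Parse whitespace-delimited (temperature, fan speed) pairs."""
    points = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # comments and blank lines are ignored
        temp, speed = line.split()
        points.append((float(temp), float(speed)))
    if len(points) < 2:
        raise ValueError("need at least two (temperature, speed) pairs")
    return points

example = """
# temperature  fan speed
30    20
40    30
80    80
"""
print(parse_curve(example))  # [(30.0, 20.0), (40.0, 30.0), (80.0, 80.0)]
```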

Bear in mind that for most GPUs the fan speed can't go below 20% or above 80% under manual control, even if you specify values outside this range. However, since these limits are arbitrary and vary among VGA BIOSes, you can override them using the -l or --limits option. For example, to change the limits to 10% and 90% pass -l 10,90. To disable the limits, effectively enabling the whole range, just pass -l 0. In addition, note that by default the program will not use the custom curve if the fan is already spinning in automatic control. This is the most conservative configuration for GPUs that turn their fans off below a certain temperature threshold. If you want to always use the custom curve pass the additional -f or --force argument. To terminate nvfancontrol send a SIGINT or SIGTERM on Linux, or hit Ctrl-C in the console window on Windows.
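The interplay between the response curve and the limits can be illustrated with a small sketch. How nvfancontrol actually interpolates between curve points is not documented here, so the linear interpolation below is an assumption:

```python
# Illustrative sketch: evaluating a fan curve with linear interpolation
# between points, then clamping to the manual-control limits.
# The interpolation scheme is an assumption, not nvfancontrol's code.

def fan_speed(points, temp, limits=(20, 80)):
    """points: (temperature, speed) pairs sorted by temperature.
    Returns the curve speed clamped to the given limits (%)."""
    lo, hi = limits
    if temp <= points[0][0]:
        speed = points[0][1]
    elif temp >= points[-1][0]:
        speed = points[-1][1]
    else:
        for (t0, s0), (t1, s1) in zip(points, points[1:]):
            if t0 <= temp <= t1:
                speed = s0 + (s1 - s0) * (temp - t0) / (t1 - t0)
                break
    # most VGA BIOSes cap manual control at roughly 20-80%
    return max(lo, min(hi, speed))

curve = [(30, 20), (40, 30), (50, 40), (60, 50), (70, 60), (80, 80)]
print(fan_speed(curve, 45))            # 35.0: halfway between 30% and 40%
print(fan_speed(curve, 25))            # 20: clamped to the lower limit
print(fan_speed(curve, 90, (0, 100)))  # 80: limits disabled via -l 0
```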

Although presently nvfancontrol is limited to a single GPU, you can select which card's fans to control using the -g or --gpu switch. GPUs are indexed from 0. To help with this, the -p or --print-coolers option lists all available GPUs with their respective coolers. On Windows coolers are indexed from 0 for each GPU. On Linux each available cooler on the system is assigned a unique id.

Third party interfacing

nvfancontrol offers two ways to expose its output for integration with third-party programs. With the -j option a JSON representation of the current data is printed to stdout. Since all other messages go to stderr, the data can be consumed by reading newline-delimited output from the program's stdout. If this is not desirable, a built-in TCP server is also provided and can be enabled with the -t option, optionally followed by a port number (the default port is 12125). The server writes the JSON data to the socket and immediately closes the connection. The message is always terminated with a newline character.
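Either channel delivers one newline-terminated JSON message, so a consumer can be sketched as below. The field names in the sample payload ("temp", "speed") are hypothetical, since the actual schema is not documented here; inspect the real output of nvfancontrol -j for the true field names.

```python
import json
import socket

# Sketch of a consumer for nvfancontrol's JSON output. The sample field
# names below are hypothetical; check the program's real output.

def decode_status(line):
    """Parse one newline-terminated JSON message into a dict."""
    return json.loads(line)

def read_tcp_status(host="127.0.0.1", port=12125):
    """Connect to the built-in TCP server (-t option), read the single
    newline-terminated message, and return it decoded. The server closes
    the connection immediately after sending."""
    with socket.create_connection((host, port)) as sock:
        data = b""
        while not data.endswith(b"\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            data += chunk
    return decode_status(data.decode())

# With the -j option the same messages arrive on stdout, one per line:
sample = '{"temp": 55, "speed": 42}\n'  # hypothetical payload
print(decode_status(sample))  # {'temp': 55, 'speed': 42}
```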

Bugs and known issues

Although nvfancontrol should work with most Fermi or newer NVidia cards, it has only been tested with a handful of GPUs, so bugs or unexpected behaviour may surface. In that case please open an issue in the bug tracker including the complete program output (use the --debug option).

RPM reporting for GPUs with multiple fans on Windows is likely incorrect, because the function NVAPI provides, NvAPI_GPU_GetTachReading, is limited to a single fan; there is nothing in the public NVAPI to suggest otherwise. However, fan speed in % should work as expected. In any case multiple-cooler support on Windows is not thoroughly tested so bug reports are always welcome!

As mentioned before, nvfancontrol is limited to a single (but selectable) GPU. The underlying code does support multiple GPUs but exposing this support to the user-facing program will require possibly breaking alterations to the configuration file. It will be added eventually.

License

This project is licensed under the GPLv3 or any later version.
