The Open Fabrics Interfaces (OFI) is a framework focused on exporting fabric communication services to applications.
See the OFI website for more details, including a description and overview of the project, and detailed documentation of the libfabric APIs.
Installing pre-built libfabric packages
On OS X, the latest release of libfabric can be installed via the Homebrew package manager with the following command:
$ brew install libfabric
Libfabric pre-built binaries may be available from other sources, such as Linux distributions.
Building and installing libfabric from source
Distribution tarballs are available from the Github releases tab.
If you are building libfabric from a developer git clone, you must first run the
autogen.sh script. This will invoke the GNU Autotools to bootstrap
libfabric's configuration and build mechanisms. If you are building libfabric
from an official distribution tarball, there is no need to run autogen.sh;
libfabric distribution tarballs are already bootstrapped for you.
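Putting those steps together, a from-clone build might look roughly like this (the upstream GitHub URL and the /opt/libfabric prefix are illustrative, not required):

```shell
# Sketch of a build from a developer git clone.
# Requires autoconf, automake, libtool, and a C toolchain.
git clone https://github.com/ofiwg/libfabric.git
cd libfabric
./autogen.sh                      # bootstrap (git clones only; tarballs skip this)
./configure --prefix=/opt/libfabric
make -j 8
sudo make install
```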
Libfabric currently supports GNU/Linux, FreeBSD, and OS X.
The configure script has many built-in options (see ./configure --help).
Some useful options are:
--prefix=<directory>
By default, make install will place the files in the /usr/local tree. The
--prefix option specifies that libfabric files should be installed into
the tree specified by <directory>. The executables will be located at
<directory>/bin.
--with-valgrind=<directory>
Directory where valgrind is installed. If valgrind is found, then valgrind annotations are enabled. This may incur a performance penalty.
--enable-debug
Enable debug code paths. This enables various extra checks and allows for using the highest verbosity logging output that is normally compiled out in production builds.
--enable-<provider>=[yes|no|auto|dl|<directory>]
--disable-<provider>
This enables or disables the provider named <provider>. Valid options are:
- auto (the default if the --enable-<provider> option is not specified):
The provider will be enabled if all of its requirements are satisfied. If one of the requirements cannot be satisfied, then the provider is disabled.
- yes (the default if the --enable-<provider> option is specified):
The configure script will abort if the provider cannot be enabled (e.g., due to some of its requirements not being available).
- no: Disable the provider. This is synonymous with --disable-<provider>.
- dl: Enable the provider and build it as a loadable library.
- <directory>: Enable the provider and use the installation given in <directory>.
Consider the following example:
$ ./configure --prefix=/opt/libfabric --disable-sockets && make -j 32 && sudo make install
This will tell libfabric to disable the
sockets provider, and install
libfabric in the
/opt/libfabric tree. All other providers will be enabled if
possible and all debug features will be disabled.
$ ./configure --prefix=/opt/libfabric --enable-debug --enable-psm=dl && make -j 32 && sudo make install
This will tell libfabric to enable the
psm provider as a loadable library,
enable all debug code paths, and install libfabric to the
/opt/libfabric tree. All other providers will be enabled if possible.
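In the same spirit, a configure line combining debug code paths with valgrind annotations might look like this (the /usr/local valgrind prefix is a placeholder; adjust it to your system):

```shell
# Placeholder prefixes; substitute the locations on your system.
./configure --prefix=/opt/libfabric \
            --enable-debug \
            --with-valgrind=/usr/local
make -j 8 && sudo make install
```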
The fi_info utility can be used to validate the libfabric and provider
installation, as well as provide details about provider support and available
communication services. See the fi_info(1) man page for details on using the fi_info
utility. fi_info is installed as part of the libfabric package.
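For example, a quick sanity check of an installation might look like this (output varies by system; the tcp provider is used here only as an illustration):

```shell
# List the providers this libfabric build knows about.
fi_info -l

# Show the interfaces and capabilities offered by a single
# provider, e.g. the tcp provider.
fi_info -p tcp
```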
A more comprehensive test package is available via the fabtests package.
The gni provider runs on Cray XC (TM) systems utilizing the user-space
Generic Network Interface (
uGNI), which provides low-level access to
the Aries interconnect. The Aries interconnect is designed for
low-latency one-sided messaging and also includes direct hardware
support for common atomic operations and optimized collectives.
See the fi_gni(7) man page for more details.
- The gni provider requires gcc version 4.9 or higher.
The OPX provider is an updated Libfabric provider for Omni-Path HPC fabrics. The other provider for Omni-Path is PSM2.
The OPX provider began as a fork of the libfabric BGQ provider, with the hardware-specific parts rewritten for the Omni-Path hfi1 fabric interface card. OPX therefore inherits several desirable characteristics of the BGQ provider, and analysis of instruction counts and cache-line footprints of most HPC operations shows OPX to be lighter weight than PSM2 on the host software stack, leading to better overall performance.
See the fi_opx(7) man page for more details. See the Cornelis Customer
Center for support information.
The psm provider runs over the PSM 1.x interface that is currently supported
by the Intel TrueScale Fabric. PSM provides tag-matching message queue
functions that are optimized for MPI implementations. PSM also has limited
Active Message support, which is not officially published but is quite stable
and well documented in the source code (part of the OFED release). The
provider makes use of both the tag-matching message queue functions and the
Active Message functions to support various libfabric data transfer APIs,
including tagged message queue, message queue, RMA, and atomic
operations. In addition, the psm provider can work with the
psm2-compat library, which exposes
a PSM 1.x interface over the Intel Omni-Path Fabric.
See the fi_psm(7) man page for more details.
The psm2 provider runs over the PSM 2.x interface that is supported
by the Intel Omni-Path Fabric. PSM 2.x has all the PSM 1.x features plus a set
of new functions with enhanced capabilities. Since PSM 1.x and PSM 2.x are not
ABI compatible, the
psm2 provider only works with PSM 2.x and doesn't support
Intel TrueScale Fabric.
See the fi_psm2(7) man page for more details.
The psm3 provider provides optimized performance and scalability for most
verbs UD and sockets devices. Additional features and optimizations can be
enabled when running over Intel's E810 Ethernet NICs and/or using Intel's
rendezvous kernel module.
PSM 3.x fully integrates the OFI provider and the underlying PSM3
protocols/implementation and only exports the OFI APIs.
See the fi_psm3(7) man page for more details.
The ofi_rxm provider is a utility provider that supports RDM endpoints emulated
over MSG endpoints of a core provider.
See the fi_rxm(7) man page for more information.
The sockets provider has been deprecated in favor of the tcp, udp, and utility providers, which provide improved performance and stability.
The sockets provider is a general-purpose provider that can be used on any
system that supports TCP sockets. The provider is not intended to provide
performance improvements over regular TCP sockets, but rather to allow
developers to write, test, and debug application code even on platforms
that do not have high-performance fabric hardware. The sockets provider
supports all libfabric provider requirements and interfaces.
See the fi_sockets(7) man page for more details.
The tcp provider is an optimized socket based provider that supports reliable connected endpoints. It is intended to be used directly by apps that need MSG endpoint support, or in conjunction with the rxm provider for apps that need RDM endpoints. The tcp provider targets replacing the sockets provider for applications using standard networking hardware.
See the fi_tcp(7) man page for more details.
The udp provider is a basic provider that can be used on any system that
supports UDP sockets. The provider is not intended to provide performance
improvements over regular UDP sockets, but rather to allow applications and
provider developers to write, test, and debug their code. The udp provider
forms the foundation of a utility provider that enables the implementation of
libfabric features over any hardware.
See the fi_udp(7) man page for more details.
The usnic provider is designed to run over the Cisco VIC (virtualized NIC)
hardware on Cisco UCS servers. It utilizes the Cisco usnic (userspace NIC)
capabilities of the VIC to enable ultra low latency and other offload
capabilities on Ethernet networks.
See the fi_usnic(7) man page for more details.
- The usnic provider depends on library files from either
libnl version 1 (sometimes known as
libnl1) or version 3 (sometimes known as
libnl3). If you are compiling libfabric from source and want to enable usNIC support, you will also need the matching
libnl header files (e.g., if you are building with
libnl version 3, you need both the header and library files from version 3).
--with-libnl=<directory>
If specified, look for libnl support. If it is not found, the
usnic provider will not be built. If
<directory> is specified, then check in the
directory and check for
libnl version 3. If version 3 is not found, then
check for version 1. If no
<directory> argument is specified, then this
option is redundant with --with-usnic.
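As an illustration, enabling usNIC support against a libnl installed under a non-default prefix might look like this (the /opt/libnl path is hypothetical):

```shell
# Hypothetical libnl install prefix; adjust to your system.
./configure --prefix=/opt/libfabric \
            --enable-usnic \
            --with-libnl=/opt/libnl
make -j 8 && sudo make install
```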
The verbs provider enables applications using OFI to be run over any verbs hardware (Infiniband, iWarp, and RoCE). It uses the Linux Verbs API for network transport and translates OFI calls to appropriate verbs API calls. It uses librdmacm for communication management and libibverbs for other control and data transfer operations.
See the fi_verbs(7) man page for more details.
- The verbs provider requires libibverbs (v1.1.8 or newer) and librdmacm (v1.0.16 or newer). If you are compiling libfabric from source and want to enable verbs support, you will also need the matching header files for the above two libraries. If the libraries and header files are not in default paths, specify them in CFLAGS, LDFLAGS and LD_LIBRARY_PATH environment variables.
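As a sketch, pointing configure at libibverbs/librdmacm installed under a non-default prefix (the /opt/rdma path below is hypothetical) could look like:

```shell
# Hypothetical install prefix for libibverbs and librdmacm.
export CFLAGS="-I/opt/rdma/include"
export LDFLAGS="-L/opt/rdma/lib"
export LD_LIBRARY_PATH="/opt/rdma/lib:$LD_LIBRARY_PATH"
./configure --prefix=/opt/libfabric --enable-verbs
make -j 8 && sudo make install
```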
The bgq provider is a native provider that directly utilizes the hardware
interfaces of the Blue Gene/Q system to implement aspects of the libfabric
interface to fully support MPICH3 CH4.
See the fi_bgq(7) man page for more details.
- The bgq provider depends on the system programming interfaces (SPI) and the hardware interfaces (HWI) located in the Blue Gene/Q driver installation. Additionally, the open source Blue Gene/Q system files are required.
--with-bgq-progress=<progress>
If specified, set the progress mode enabled in FABRIC_DIRECT (default is FI_PROGRESS_MANUAL).
--with-bgq-mr=<mr>
If specified, set the memory registration mode (default is FI_MR_BASIC).
The Network Direct provider enables applications using OFI to run over any verbs hardware (InfiniBand, iWarp, and RoCE). It uses the Microsoft Network Direct SPI for network transport and translates OFI calls to the appropriate Network Direct API calls. The Network Direct provider enables OFI-based applications to utilize zero-copy data transfers between applications, kernel-bypass I/O generation, and one-sided data transfer operations on Microsoft Windows. An application can use OFI with the Network Direct provider enabled on Windows to expose the capabilities of networking devices whose hardware vendors have implemented the Network Direct service provider interface (SPI).
See the fi_netdir(7) man page for more details.
- The Network Direct provider requires the Network Direct SPI. If you are compiling libfabric from source and want to enable Network Direct support, you will also need the matching header files for the Network Direct SPI. If the libraries and header files are not in default paths (the default path is the root of the provider directory, i.e., \prov\netdir\NetDirect, where NetDirect contains the header files), specify them in the configuration properties of the VS project.
The shm provider enables applications using OFI to be run over shared memory.
See the fi_shm(7) man page for more details.
- The shared memory provider only works on Linux platforms and makes use of kernel support for 'cross-memory attach' (CMA) data copies for large transfers.
The efa provider enables the use of libfabric-enabled applications on Amazon
EC2 Elastic Fabric Adapter (EFA), a
custom-built OS bypass hardware interface for inter-instance communication on
EC2. See the fi_efa(7) man page for more information.
Even though Windows isn't fully supported yet, it is possible to compile and link your library.
- First, you need the NetDirect provider: the Network Direct SDK/DDK may be obtained as a NuGet package (preferred), or downloaded from:
https://www.microsoft.com/en-us/download/details.aspx?id=36043 (on the page, press the Download button and select NetworkDirect_DDK.zip).
- Extract the header files from the downloaded NetworkDirect_DDK.zip into
<libfabricroot>\prov\netdir\NetDirect\, or add the path to the NetDirect headers to your VS include paths.
- Compiling: libfabric has eight Visual Studio solution configurations:
- 1-2: Debug/Release ICC (restricted support for Intel Compiler XE 15.0 only)
- 3-4: Debug/Release v140 (VS 2015 toolset)
- 5-6: Debug/Release v141 (VS 2017 toolset)
- 7-8: Debug/Release v142 (VS 2019 toolset)
Make sure you choose the correct target fitting your compiler. By default, the library will be compiled to
<libfabricroot>\x64\<yourconfigchoice>.
- Linking your library:
- right-click your project and select Properties
- choose C/C++ > General and add
<libfabricroot>\include to "Additional Include Directories"
- choose Linker > Input and add
<libfabricroot>\x64\<yourconfigchoice>\libfabric.lib to "Additional Dependencies"
- depending on what you are building, you may also need to copy
libfabric.dll into the target folder of your own project
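Equivalently, from a Visual Studio "x64 Native Tools" developer command prompt, a manual compile-and-link sketch might look like this (the C:\libfabric checkout path and the Debug-v142 configuration name are placeholders):

```shell
:: Placeholders: substitute your libfabric checkout path and configuration.
cl /I C:\libfabric\include myapp.c ^
   /link /LIBPATH:C:\libfabric\x64\Debug-v142 libfabric.lib
:: libfabric.dll must be beside the executable (or on PATH) at run time.
copy C:\libfabric\x64\Debug-v142\libfabric.dll .
```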