Test federated plugin using GitHub action. (#10336)
Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>
trivialfis and hcho3 committed May 28, 2024
1 parent 7ae5c97 commit 7354955
Showing 4 changed files with 27 additions and 27 deletions.
3 changes: 2 additions & 1 deletion .github/workflows/main.yml
@@ -156,8 +156,9 @@ jobs:
- name: Build and install XGBoost shared library
run: |
cd build
- cmake .. -DBUILD_STATIC_LIB=OFF -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -GNinja
+ cmake .. -DBUILD_STATIC_LIB=OFF -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -GNinja -DPLUGIN_FEDERATED=ON -DGOOGLE_TEST=ON
ninja -v install
+ ./testxgboost
cd -
- name: Build and run C API demo with shared
run: |
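
For reference, a local equivalent of this CI step could look like the sketch below. This is an editor's sketch, assuming an activated conda environment that provides the build dependencies (so that $CONDA_PREFIX points at it) plus CMake and Ninja on the PATH:

```shell
# Configure the shared library build with the federated plugin and Google Test enabled.
mkdir -p build && cd build
cmake .. -DBUILD_STATIC_LIB=OFF -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -GNinja \
  -DPLUGIN_FEDERATED=ON -DGOOGLE_TEST=ON
# Build and install libxgboost, then run the C++ test suite, which now exercises the federated plugin.
ninja -v install
./testxgboost
cd -
```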
26 changes: 20 additions & 6 deletions doc/build.rst
@@ -134,7 +134,7 @@ From the command line on Linux starting from the XGBoost directory:
.. note:: Specifying compute capability

- To speed up compilation, the compute version specific to your GPU could be passed to cmake as, e.g., ``-DGPU_COMPUTE_VER=50``. A quick explanation and numbers for some architectures can be found `in this page <https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/>`_.
+ To speed up compilation, the compute version specific to your GPU can be passed to CMake as, e.g., ``-DCMAKE_CUDA_ARCHITECTURES=75``. A quick explanation and numbers for some architectures can be found `on this page <https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/>`_.
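
A full configure-and-build invocation with an explicit architecture might look like the following (an editor's sketch; ``75`` corresponds to Turing-class GPUs, i.e. compute capability 7.5, and should be replaced with the value for your card):

.. code-block:: bash

  cmake .. -DUSE_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=75
  make -j4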

.. note:: Faster distributed GPU training with NCCL

@@ -147,6 +147,8 @@ From the command line on Linux starting from the XGBoost directory:
cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON -DNCCL_ROOT=/path/to/nccl2
make -j4
+ Some additional flags are available for NCCL: ``BUILD_WITH_SHARED_NCCL`` enables building XGBoost with NCCL as a shared library, while ``USE_DLOPEN_NCCL`` enables XGBoost to load NCCL at runtime using ``dlopen``.
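
These options are passed at configure time like any other CMake flag. For illustration (an editor's sketch, not part of this diff):

.. code-block:: bash

  # Build against NCCL, but resolve the library at runtime via dlopen.
  cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON -DUSE_DLOPEN_NCCL=ON
  make -j4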

On Windows, run CMake as follows:

.. code-block:: bash
@@ -165,6 +167,17 @@ The above cmake configuration run will create an ``xgboost.sln`` solution file i
To speed up compilation, run multiple jobs in parallel by appending option ``-- /MP``.
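
Assuming the Release build command in the collapsed part of this hunk is ``cmake --build . --config Release`` (an editor's assumption), the parallel build would be invoked as:

.. code-block:: bash

  # Everything after "--" is forwarded to the underlying build tool.
  cmake --build . --config Release -- /MP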

+ Federated Learning
+ ==================
+
+ The federated learning plugin requires ``grpc`` and ``protobuf``. To install gRPC,
+ refer to the `installation guide from the gRPC website
+ <https://grpc.io/docs/languages/cpp/quickstart/>`_. Alternatively, if conda is
+ available, one can use the ``libgrpc`` and ``protobuf`` packages from conda-forge.
+ After obtaining the required dependencies, enable the plugin by passing
+ ``-DPLUGIN_FEDERATED=ON`` to CMake. Note that the federated plugin is only supported
+ on Linux.
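
For example (an editor's sketch; the package and flag names are taken from this commit), the dependencies can come from conda-forge and the plugin is enabled at configure time:

.. code-block:: bash

  # Install gRPC and protobuf from conda-forge (any other gRPC installation also works).
  conda install -c conda-forge libgrpc protobuf

  # Configure and build the shared library with the federated plugin enabled.
  # If CMake cannot locate gRPC, additionally pass -DCMAKE_PREFIX_PATH=<grpc path>.
  mkdir -p build && cd build
  cmake .. -GNinja -DPLUGIN_FEDERATED=ON
  ninja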


.. _build_python:

***********************************
@@ -228,11 +241,12 @@ There are several ways to build and install the package from source:

3. Editable installation

- To further enable rapid development and iteration, we provide an **editable installation**.
- In an editable installation, the installed package is simply a symbolic link to your
- working copy of the XGBoost source code. So every changes you make to your source
- directory will be immediately visible to the Python interpreter. Here is how to
- install XGBoost as editable installation:
+ To further enable rapid development and iteration, we provide an **editable
+ installation**. In an editable installation, the installed package is simply a symbolic
+ link to your working copy of the XGBoost source code. Every change you make to your
+ source directory is immediately visible to the Python interpreter. To install
+ XGBoost as an editable installation, first build the shared library as described
+ above, then install the Python package:

.. code-block:: bash
23 changes: 3 additions & 20 deletions plugin/federated/README.md
@@ -1,33 +1,16 @@
XGBoost Plugin for Federated Learning
=====================================

- This folder contains the plugin for federated learning. Follow these steps to build and test it.
+ This folder contains the plugin for federated learning.

- Install gRPC
- ------------
- Refer to the [installation guide from the gRPC website](https://grpc.io/docs/languages/cpp/quickstart/).
+ See the [build instructions](../../doc/build.rst) for how to build the plugin.

- Build the Plugin
- ----------------
- ```shell
- # Under xgboost source tree.
- mkdir build
- cd build
- cmake .. -GNinja \
- -DPLUGIN_FEDERATED=ON \
- -DUSE_CUDA=ON\
- -DUSE_NCCL=ON
- ninja
- cd ../python-package
- pip install -e .
- ```
- If CMake fails to locate gRPC, you may need to pass `-DCMAKE_PREFIX_PATH=<grpc path>` to CMake.

Test Federated XGBoost
----------------------
```shell
# Under xgboost source tree.
- cd tests/distributed
+ cd tests/distributed/test_federated
# This tests both CPU training (`hist`) and GPU training (`gpu_hist`).
./runtests-federated.sh
```
2 changes: 2 additions & 0 deletions tests/ci_build/conda_env/cpp_test.yml
@@ -8,3 +8,5 @@ dependencies:
- c-compiler
- cxx-compiler
- gtest
+ - protobuf
+ - libgrpc
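
With these additions, the C++ test environment can be created locally in the usual way (an editor's sketch; the environment name comes from the `name:` field of the file, and `<env-name>` is a placeholder):

```shell
# Create and activate the conda environment used for the C++ tests.
conda env create -f tests/ci_build/conda_env/cpp_test.yml
conda activate <env-name>
```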
