Dask-XGBoost

Distributed training with XGBoost and Dask.distributed

This repository enables you to perform distributed training with XGBoost on Dask.array and Dask.dataframe collections.

pip install dask-xgboost

Example

from dask.distributed import Client
client = Client('scheduler-address:8786')  # connect to cluster

import dask.dataframe as dd
df = dd.read_csv('...')  # use dask.dataframe to load and
df_train = ...           # preprocess data
labels_train = ...

import dask_xgboost as dxgb
params = {'objective': 'binary:logistic', ...}  # use normal xgboost params
bst = dxgb.train(client, params, df_train, labels_train)

>>> bst  # Get back normal XGBoost result
<xgboost.core.Booster at ... >

predictions = dxgb.predict(client, bst, data_test)
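
In the lazy Dask style, predict hands back a Dask collection rather than an in-memory result. A minimal sketch of materializing it locally, assuming predictions comes back as a Dask Series (one partition per input partition):

local_predictions = predictions.compute()   # pull results back to the local machine as a pandas Series
print(local_predictions.head())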

How this works

For more information on using Dask.dataframe for preprocessing see the Dask.dataframe documentation.
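
As a concrete illustration, here is a minimal preprocessing sketch with Dask.dataframe; the file pattern and the 'label' target column are hypothetical placeholders:

import dask.dataframe as dd

df = dd.read_csv('data/2017-*.csv')        # hypothetical CSV files, loaded lazily across the cluster
df = df.dropna()                           # drop incomplete rows
labels_train = df['label']                 # hypothetical target column
df_train = df.drop('label', axis=1)        # remaining columns become the features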

Once you have created suitable data and labels you are ready for distributed training with XGBoost. Every Dask worker sets up an XGBoost slave and gives it enough information to find the others. Then the Dask workers hand their in-memory Pandas dataframes to XGBoost (one Dask dataframe is just many Pandas dataframes spread across the memory of many machines). XGBoost handles distributed training on its own, without Dask interference. XGBoost then hands back a single xgboost.Booster result object.
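
Because the result is a plain xgboost.Booster, the ordinary single-machine XGBoost API applies afterwards. A minimal sketch, where the model file name and the local sample taken from data_test are illustrative assumptions:

import xgboost as xgb

bst.save_model('model.xgb')                        # persist with the normal XGBoost API (illustrative file name)
local_sample = data_test.head(1000)                # small pandas sample of the Dask dataframe
local_scores = bst.predict(xgb.DMatrix(local_sample))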

Larger Example

For a more serious example see

History

Conversation during development happened at dmlc/xgboost #2032
