
[WIP] Build custom argsort for GPU quantile sketching. #9194

Open · wants to merge 11 commits into master
Conversation

@trivialfis (Member) commented May 23, 2023

  • Customize cub radix sort.
  • Use argsort for quantile sketching. Merge sort is replaced by radix sort.
  • Support float16 without transformation.

This is to optimize GPU memory usage for GPU input with QuantileDMatrix and RMM. The idea is to use an argsort instead of a value sort for quantile sketching. In XGBoost's GPU-based GK sketching, we need to sort the input data by its feature index and value. During the sorting process, we currently copy out the value and feature index, which costs 8 bytes per element: 4 bytes for the value and another 4 bytes for the feature index. Efficient parallel sorting then requires a double buffer, bringing the total overhead to 16 bytes per element.
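The per-element overhead arithmetic above can be spelled out as a small sketch; the constant names are illustrative, and sizes assume 32-bit floats and indices:

```cpp
#include <cstddef>
#include <cstdint>

// Working-set arithmetic from the description above (illustrative names).
constexpr std::size_t kValueBytes = sizeof(float);          // 4: feature value
constexpr std::size_t kIndexBytes = sizeof(std::uint32_t);  // 4: feature index
// A value sort copies out both fields for every element.
constexpr std::size_t kValueSortPerElem = kValueBytes + kIndexBytes;  // 8
// Efficient parallel radix sort needs a double buffer, doubling the working set.
constexpr std::size_t kValueSortTotal = 2 * kValueSortPerElem;        // 16
// The argsort only materializes a uint32_t index per element, double-buffered.
constexpr std::size_t kArgsortTotal = 2 * sizeof(std::uint32_t);      // 8
```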

This PR introduces an argsort to replace the value sort. By limiting each batch to at most the maximum value of std::uint32_t elements, we can use std::uint32_t as the sorted index type. During sorting, we write only to the index buffer and fetch the data in place without altering it. This way, with a double buffer, the overhead becomes 8 bytes per element (a uint32_t costs 4 bytes), hence halving the peak memory usage inside XGBoost (without accounting for the original input).
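A minimal CPU sketch of the argsort idea follows. The actual PR uses a customized cub radix sort on the GPU; the `Entry` struct and `Argsort` function here are hypothetical stand-ins, showing only that the data stays in place while 32-bit indices are sorted by the (feature index, value) key they point at:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for XGBoost's Entry: a feature index paired with a value.
struct Entry {
  std::uint32_t findex;
  float value;
};

// Sort 32-bit indices by the entry they refer to; the data itself is never moved.
std::vector<std::uint32_t> Argsort(std::vector<Entry> const& data) {
  std::vector<std::uint32_t> idx(data.size());
  for (std::uint32_t i = 0; i < idx.size(); ++i) idx[i] = i;
  std::sort(idx.begin(), idx.end(), [&](std::uint32_t l, std::uint32_t r) {
    auto const& a = data[l];
    auto const& b = data[r];
    return a.findex != b.findex ? a.findex < b.findex : a.value < b.value;
  });
  return idx;
}
```

Only `idx` needs a writable (and, for radix sort, double-buffered) allocation, which is where the memory saving comes from.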

The optimization is only useful when the input is on the GPU (cupy, cudf), QuantileDMatrix is used, and RMM is enabled. For a normal DMatrix, the data needs to be copied anyway unless it's constructed on the GPU, so not much can be optimized there. When RMM is not used, XGBoost can split the data into small batches based on the amount of available memory, although the splitting might negatively affect the sketching result.

An additional benefit is that we can sort through custom iterators like a transform iterator, which may return a type that cub's sort doesn't support and whose materialization could cost arbitrarily large memory. I think this can be useful for other projects as well.

The initial benchmark shows some performance degradation (~30%) with this argsort, even with merge sort replaced by radix sort, likely due to inefficient global memory access: we now fetch data from global memory in a completely random pattern, whereas without the argsort we can almost always fetch contiguous data. For XGBoost, the cost can be justified since we run the sketching only once during training, and it's the bottleneck for memory usage.

I want to eventually upstream the changes to cub, depending on the interest of the developers there. At the moment, the customized radix sort is a drastically slimmed-down version of cub's onesweep sort.

Changes in the cub radix sort:

  • Remove specialization of single tile sorting. It requires an implementation of two-sweep sort, which is more difficult to customize.
  • Add custom digit extractor for composite types. (Entry in XGBoost)
  • Add support for custom key iterators. The original implementation accepts only pointers.
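To illustrate the second item, here is a sketch of what a digit extractor for a composite (feature index, value) key could look like. The names and interface are illustrative, not the actual XGBoost or cub API; the float-to-ordered-integer transform is the standard trick radix sorts use to make IEEE-754 floats sortable as unsigned integers:

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical composite key: a feature index paired with a float value.
struct Entry {
  std::uint32_t findex;
  float value;
};

// Map a float to an unsigned integer whose ordering matches the float's:
// flip all bits of negatives, flip only the sign bit of non-negatives.
inline std::uint32_t FloatToOrdered(float f) {
  std::uint32_t bits;
  std::memcpy(&bits, &f, sizeof(bits));
  return (bits & 0x80000000u) ? ~bits : (bits | 0x80000000u);
}

// Illustrative digit extractor: view the Entry as a 64-bit key with the
// feature index in the high bits, and return the digit at a bit offset.
struct EntryDigitExtractor {
  std::uint32_t bit_start;  // lowest bit of the current digit
  std::uint32_t num_bits;   // digit width, e.g. 8 bits per radix pass

  std::uint32_t Digit(Entry const& e) const {
    std::uint64_t key = (static_cast<std::uint64_t>(e.findex) << 32) |
                        FloatToOrdered(e.value);
    return static_cast<std::uint32_t>(key >> bit_start) & ((1u << num_bits) - 1u);
  }
};
```

Because the sort only ever asks the extractor for digits, the composite key never has to be materialized in a separate buffer.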

Memory usage

summary_ratio.csv

@trivialfis (Member, Author) commented:

A related PR was merged in cub: NVIDIA/cub#671.
