
RMM integration plugin #5873

Merged: 48 commits merged into dmlc:master on Aug 12, 2020

Conversation

@hcho3 (Collaborator) commented Jul 8, 2020

Fixes #5861.

Depends on #5871. Will rebase after #5871 is merged.

Depends on #5966. Will rebase after #5966 is merged.

The C++ tests were initially crashing with an out-of-memory error; the OOM has since been fixed.

@hcho3 force-pushed the add_rmm branch 2 times, most recently from 9d0033e to 601d21b on July 8, 2020 04:52
@hcho3 (Collaborator, Author) commented Jul 8, 2020

[ RUN      ] HistUtil.DeviceSketch
terminate called after throwing an instance of 'rmm::bad_alloc'
  what():  std::bad_alloc: CUDA error at: /opt/python/envs/rmm_test/include/rmm/mr/device/cuda_memory_resource.hpp66: cudaErrorMemoryAllocation out of memory

@trivialfis (Member):

I believe there's a way to do dynamic registration for the malloc implementation, though it might be more involved. I'm quite reluctant to add a compile-time dependency.

@hcho3 (Collaborator, Author) commented Jul 8, 2020

I'm quite reluctant to add a compile-time dependency.

RMM is an optional dependency. Users opt in by compiling XGBoost with -DUSE_RMM=ON.

@RAMitchell (Member):

This makes xgboost use RMM, but do we also need to share a context with external ETL libraries to solve the original problem?

@jrhemstad:

This makes xgboost use RMM, but do we also need to share a context with external ETL libraries to solve the original problem?

RMM already has global state through get/set_default_resource(). So if I do set_default_resource(mr) in libcudf, and my application links against both libcudf and xgboost, then a call to get_default_resource() in xgboost will access the resource I set in libcudf.

https://github.com/rapidsai/rmm#default-resource
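
For illustration, here is a minimal, self-contained C++ sketch of the sharing pattern described above. It is not code from this PR; it assumes the RMM 0.15-era get/set_default_resource() API mentioned in this thread, and the include paths (taken from the backtraces later in this conversation) may differ between RMM versions.

#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/default_memory_resource.hpp>
#include <rmm/mr/device/pool_memory_resource.hpp>

int main() {
  // The ETL library (e.g. libcudf) installs a pool resource as the process-wide default.
  rmm::mr::cuda_memory_resource cuda_mr;
  rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool_mr{&cuda_mr};
  rmm::mr::set_default_resource(&pool_mr);

  // Any other library in the same process (standing in for XGBoost here) that calls
  // get_default_resource() gets back the same pool.
  rmm::mr::device_memory_resource* mr = rmm::mr::get_default_resource();
  void* p = mr->allocate(1 << 20);   // 1 MiB served from the shared pool
  mr->deallocate(p, 1 << 20);
  return 0;
}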

@hcho3 (Collaborator, Author) commented Jul 8, 2020

@jrhemstad For the global state to work, both applications will need to share the same dynamic library librmm.so. Is that correct?

@jrhemstad:

@jrhemstad For the global state to work, both applications will need to share the same dynamic library librmm.so. Is that correct?

RMM is very shortly going to be a header-only library. And because we use function-local statics for the global state, that state will be shared among all dynamically linked libraries that include rmm/mr/default_memory_resource.hpp.

I explicitly tested this behavior here: https://github.com/jrhemstad/link_test
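
A minimal sketch of the function-local-static pattern being described, with illustrative stand-in types rather than RMM's actual implementation:

// Stand-in for rmm::mr::device_memory_resource.
class Resource {};

// Because the static lives inside an inline function defined in a header, every
// dynamically linked library in the process that calls this function sees the
// same single instance, which is what makes the default resource process-wide.
inline Resource*& DefaultResourceSlot() {
  static Resource* slot = nullptr;  // exactly one per process
  return slot;
}

inline void SetDefault(Resource* r) { DefaultResourceSlot() = r; }
inline Resource* GetDefault() { return DefaultResourceSlot(); }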

@hcho3 hcho3 marked this pull request as ready for review July 9, 2020 04:52
@hcho3 changed the title from "[WIP] Add an option to use RMM" to "Add a compile-time option to use RMM" on Jul 9, 2020
@codecov-commenter commented Jul 9, 2020

Codecov Report

Merging #5873 into master will not change coverage.
The diff coverage is 0.00%.

@@           Coverage Diff           @@
##           master    #5873   +/-   ##
=======================================
  Coverage   78.52%   78.52%           
=======================================
  Files          12       12           
  Lines        3013     3013           
=======================================
  Hits         2366     2366           
  Misses        647      647           
Impacted Files Coverage Δ
python-package/xgboost/data.py 58.54% <0.00%> (ø)

@hcho3 (Collaborator, Author) commented Jul 9, 2020

Failures in Span tests:

[ RUN      ] Span.Subspan
../tests/cpp/common/test_span.cc:424: Failure
Death test: s1.subspan(-1, 0)
    Result: died but not with expected error.
  Expected: contains regular expression "\\[xgboost\\] Condition .* failed.\n"
Actual msg:
[  DEATH   ] terminate called after throwing an instance of 'rmm::bad_alloc'
[  DEATH   ]   what():  std::bad_alloc: CUDA error at: /home/phcho/miniconda3/envs/rmm/include/rmm/mr/device/cuda_memory_resource.hpp66: cudaErrorMemoryAllocation out of memory
[  DEATH   ] 
../tests/cpp/common/test_span.cc:425: Failure
Death test: s1.subspan(17, 0)
    Result: died but not with expected error.
  Expected: contains regular expression "\\[xgboost\\] Condition .* failed.\n"
Actual msg:
[  DEATH   ] terminate called after throwing an instance of 'rmm::bad_alloc'
[  DEATH   ]   what():  std::bad_alloc: CUDA error at: /home/phcho/miniconda3/envs/rmm/include/rmm/mr/device/cuda_memory_resource.hpp66: cudaErrorMemoryAllocation out of memory
[  DEATH   ] 
../tests/cpp/common/test_span.cc:428: Failure
Death test: s1.subspan<kOne>()
    Result: died but not with expected error.
  Expected: contains regular expression "\\[xgboost\\] Condition .* failed.\n"
Actual msg:
[  DEATH   ] terminate called after throwing an instance of 'rmm::bad_alloc'
[  DEATH   ]   what():  std::bad_alloc: CUDA error at: /home/phcho/miniconda3/envs/rmm/include/rmm/mr/device/cuda_memory_resource.hpp66: cudaErrorMemoryAllocation out of memory
[  DEATH   ] 
../tests/cpp/common/test_span.cc:429: Failure
Death test: s1.subspan<17>()
    Result: died but not with expected error.
  Expected: contains regular expression "\\[xgboost\\] Condition .* failed.\n"
Actual msg:
[  DEATH   ] terminate called after throwing an instance of 'rmm::bad_alloc'
[  DEATH   ]   what():  std::bad_alloc: CUDA error at: /home/phcho/miniconda3/envs/rmm/include/rmm/mr/device/cuda_memory_resource.hpp66: cudaErrorMemoryAllocation out of memory
[  DEATH   ] 
[  FAILED  ] Span.Subspan (1178 ms)

Similar errors occur in the following tests:

[  FAILED  ] 5 tests, listed below:
[  FAILED  ] Span.FromPtrLen
[  FAILED  ] Span.ElementAccess
[  FAILED  ] Span.FrontBack
[  FAILED  ] Span.FirstLast
[  FAILED  ] Span.Subspan

@trivialfis (Member) commented Jul 9, 2020

That might have nothing to do with Span. CUDA runs asynchronously, while the Span tests force it to sync, so the error surfaces in those tests.

@hcho3 (Collaborator, Author) commented Jul 9, 2020

The Span unit tests perform "death tests," where error conditions in the Span are expected to call std::terminate(). The Google Test framework then restarts the test suite to recover from std::terminate(), and then the subsequent attempt to allocate the memory pool throws OOM.

Thread 1 "testxgboost" hit Breakpoint 1, rmm::mr::cuda_memory_resource::do_allocate (this=0x55555a494020, bytes=25481084928) at /home/phcho/miniconda3/envs/rmm/include/rmm/mr/device/cuda_memory_resource.hpp:66
66          RMM_CUDA_TRY(cudaMalloc(&p, bytes), rmm::bad_alloc);
(gdb) backtrace
#0  rmm::mr::cuda_memory_resource::do_allocate (this=0x55555a494020, bytes=25481084928) at /home/phcho/miniconda3/envs/rmm/include/rmm/mr/device/cuda_memory_resource.hpp:66
#1  0x00005555557e14c7 in rmm::mr::device_memory_resource::allocate (this=0x55555a494020, bytes=25481084928, stream=0x0) at /home/phcho/miniconda3/envs/rmm/include/rmm/mr/device/device_memory_resource.hpp:84
#2  0x0000555555d18431 in rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource>::block_from_upstream (this=0x55555a494040, size=25481084928, stream=0x0) at /home/phcho/miniconda3/envs/rmm/include/rmm/mr/device/pool_memory_resource.hpp:264
#3  0x0000555555d17a9c in rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource>::pool_memory_resource (this=0x55555a494040, upstream_mr=0x55555a494020, initial_pool_size=25481084928, maximum_pool_size=18446744073709551615)
    at /home/phcho/miniconda3/envs/rmm/include/rmm/mr/device/pool_memory_resource.hpp:78
#4  0x0000555555d16f01 in std::make_unique<rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource>, rmm::mr::cuda_memory_resource*> () at /usr/include/c++/7/bits/unique_ptr.h:821
#5  0x0000555555d147df in xgboost::SetUpRMMResource () at ../tests/cpp/helpers.cu:65
#6  0x0000555555c168ec in main (argc=3, argv=0x7fffffffe238) at ../tests/cpp/test_main.cc:12

@jrhemstad @harrism What is the expected behavior of the memory pool handle (pool_mr) when it gets freed due to std::terminate()? Does it not free up GPU memory it had allocated earlier?

@jrhemstad:

@jrhemstad @harrism What is the expected behavior of the memory pool handle (pool_mr) when it gets freed due to std::terminate()? Does it not free up GPU memory it had allocated earlier?

Hm, I've never really thought about it. How are you storing the pool_memory_resource?

Looking at the docs for std::terminate/std::abort:

Destructors of variables with automatic, thread local (since C++11) and static storage durations are not called.

The dtor may not get called, but I'd still think the OS would reclaim the device memory previously held by the process. I'm not sure exactly how the GTest death test works here though, so I could imagine that perhaps the resource isn't properly getting cleaned up.

@hcho3 (Collaborator, Author) commented Jul 9, 2020

How are you storing the pool_memory_resource?

I allocated it on the heap and stored it in a unique_ptr (2bdbc23#diff-8f1ab6127f2df1e731ef40efd2431439R12).

I looked at the Google Test docs, and they say that another process is forked to test a statement that's expected to die:

How It Works
Under the hood, ASSERT_EXIT() spawns a new process and executes the
death test statement in that process. The details of how precisely
that happens depend on the platform and the variable
::testing::GTEST_FLAG(death_test_style) (which is initialized from the
command-line flag --gtest_death_test_style).

On POSIX systems, fork() (or clone() on Linux) is used to spawn the child, after which:

If the variable's value is "fast", the death test statement is immediately executed.
If the variable's value is "threadsafe", the child process re-executes the unit test binary just as it was originally invoked, but with some extra flags to cause just the single death test under consideration to be run.

On Windows, the child is spawned using the CreateProcess() API, and re-executes the binary to cause just the single death test under consideration to be run - much like the threadsafe mode on POSIX.

So clearly, having two identical processes allocating a memory pool will cause OOM. Is it fair to assume that the pool allocator will allocate the whole GPU and then subdivide memory within it?

Now that I think about it, using a LocalCUDACluster might have the same issue, since you'd have multiple XGBoost processes, each allocating GPU memory. So this PR may well break multi-GPU training with Dask. Is RMM used with other multi-GPU Dask applications?

@jrhemstad commented Jul 9, 2020

So clearly, having two identical processes allocating a memory pool will cause OOM. Is it fair to assume that the pool allocator will allocate the whole GPU and then subdivide memory within it?

Yeah, I was just about to post the same docs :). By default, pool_memory_resource allocates ~1/2 of available GPU memory. If it needs to grow beyond that, it allocates another pool again that is ~1/2 of available GPU memory, and continues in this way. So if Google Test is trying to concurrently create two pool_memory_resources for the same GPU, you're almost certainly going to get an OOM.

Now that I think about it, using a LocalCUDACluster might have the same issue, since you'd have multiple XGBoost processes, each allocating GPU memory. So this PR may well break multi-GPU training with Dask. Is RMM used with other multi-GPU Dask applications?

Yes, we use RMM with multi-GPU workflows. Those are always run in a one process per GPU scenario. As such, each process/GPU has its own pool_memory_resource unique to that GPU/process and they do not conflict.
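
For the single-GPU test-harness case discussed above, one way to keep two processes from each grabbing roughly half of the GPU is to give the pool an explicit, small initial size. This is a sketch, assuming the RMM 0.15-era pool_memory_resource constructor visible in the backtrace earlier in this thread; the 256 MiB figure and the helper name are illustrative, not what XGBoost ended up using.

#include <cstddef>
#include <memory>
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/pool_memory_resource.hpp>

using CudaMR = rmm::mr::cuda_memory_resource;
using PoolMR = rmm::mr::pool_memory_resource<CudaMR>;

// Heap-allocate the pool (as described earlier in this thread) with a small
// explicit initial size instead of the default ~1/2 of available GPU memory,
// so that a forked death-test child and its parent can coexist on one device.
std::unique_ptr<PoolMR> MakeSmallPool(CudaMR* cuda_mr) {
  constexpr std::size_t kInitialPoolSize = 256UL << 20;  // 256 MiB, illustrative
  return std::make_unique<PoolMR>(cuda_mr, kInitialPoolSize);
}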

@hcho3 (Collaborator, Author) commented Jul 9, 2020

I see. For now, I will disable the death tests when USE_RMM is enabled. Thanks for pointing out the 1-to-1 correspondence between process and GPU in the multi-GPU scenario. As long as each process gets its own GPU device, the RMM allocator can freely allocate the whole GPU memory.

@hcho3 (Collaborator, Author) commented Jul 9, 2020

I moved all EXPECT_DEATH assertions to separate test suites whose names are suffixed with DeathTest. This convention is recommended by the Google Test docs. Also, we now have an option to disable death tests on the command line.
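
As an illustration of that convention (a sketch, not the actual XGBoost test code; CheckedAt is a hypothetical helper standing in for Span's bound checks): suites whose names end in DeathTest hold the EXPECT_DEATH assertions, Google Test runs them before other tests, and the whole group can be skipped with a standard filter such as --gtest_filter=-*DeathTest*.

#include <cstdio>
#include <cstdlib>
#include <vector>
#include <gtest/gtest.h>

// Hypothetical guarded accessor used only for this illustration; it terminates
// with a diagnostic when the index is out of range.
int CheckedAt(const std::vector<int>& v, std::size_t i) {
  if (i >= v.size()) {
    std::fprintf(stderr, "Condition i < size() failed\n");
    std::abort();
  }
  return v[i];
}

// The "DeathTest" suffix groups the death assertions into their own suite.
TEST(SpanDeathTest, OutOfRangeAccess) {
  std::vector<int> v(16, 0);
  EXPECT_DEATH(CheckedAt(v, 17), "Condition .* failed");
}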

@hcho3 (Collaborator, Author) commented Aug 8, 2020

Given that cuDF does not yet support CUDA 11 (rapidsai/cudf#5369), I unblocked this PR by fixing a unit test that failed when cuDF was missing. The predicate _is_cudf_df() was revised to return False when cuDF is missing.

@trivialfis (Member) left a comment:

Can you please add a demo showing how to enable this with Dask? There are some Dask tutorials in the demo directory. Also, I would like to try it myself before merging. Sorry for the inconvenience.

dh::device_vector<size_t> tree_segments;
dh::device_vector<int> tree_group;
// Need to lazily construct the vectors because GPU id is only known at runtime
std::unique_ptr<dh::device_vector<RegTree::Node>> nodes;
Member:

I thought we were constructing the predictor lazily?

@hcho3 (Collaborator, Author):

The device vectors were being constructed before cudaSetDevice() was called. They need access to the correct CUDA context at the time of construction, so I've delayed their construction.

Member:

This is a bit worrying if it's necessary; I'm not sure we can guarantee this behaviour across XGBoost. Also wouldn't it be easier to place DeviceModel inside a unique pointer?

@hcho3 (Collaborator, Author) commented Aug 11, 2020:

Previously, this was not necessary, since Thrust's device vector builds its memory resource (MR) lazily for each GPU it is used on. On the other hand, if we use the RMM allocator with device vectors, then the correct CUDA context needs to be set (with cudaSetDevice()) prior to the construction of the device vector.

For now this line works, but in the longer term I can design a new device vector class that lazily constructs the device MR.

Also wouldn't it be easier to place DeviceModel inside a unique pointer?

That won't work, because DeviceModel has a separate Init() function, and the correct CUDA context isn't set until we call the Init() function.

void Init(const gbm::GBTreeModel& model, size_t tree_begin, size_t tree_end, int32_t gpu_id) {
dh::safe_cuda(cudaSetDevice(gpu_id));

Member:

Is there an earlier point in the program where we can set the device? E.g. in the learner, as soon as it receives the gpu_id parameter?

Member:

Or you can use HostDeviceVector, which is lazy.

@hcho3 (Collaborator, Author):

Done. I replaced it with HostDeviceVector.
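
For reference, a minimal sketch of the ordering constraint discussed in this thread; plain thrust::device_vector stands in for dh::device_vector, and the only point is the relative order of the two calls:

#include <cuda_runtime.h>
#include <thrust/device_vector.h>

void BuildOnDevice(int gpu_id) {
  cudaSetDevice(gpu_id);                   // select the target device first ...
  thrust::device_vector<int> nodes(1024);  // ... then construct the vector.
  // With an RMM-backed allocator, constructing `nodes` before cudaSetDevice()
  // would bind the allocation to whatever device happened to be current, which
  // is the problem that lazy construction (and HostDeviceVector) avoids.
}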

tests/cpp/common/test_span.cc (review thread resolved)
@hcho3 (Collaborator, Author) commented Aug 9, 2020

@trivialfis I added demos.

@harrism (Contributor) commented Aug 10, 2020

Given that cuDF does not yet support CUDA 11 (rapidsai/cudf#5369), I unblocked this PR by fixing a unit test that failed when cuDF was missing. The predicate _is_cudf_df() was revised to return False when cuDF is missing.

cuDF should build with CUDA 11 just fine. That issue is only still open to track Numba and CuPy PRs for CUDA 11, which have now merged.

@hcho3 (Collaborator, Author) commented Aug 10, 2020

@harrism Is the cudf nightly for CUDA 11 on Conda now? Last time I tried, it led to a runtime error (see above).

@harrism (Contributor) commented Aug 10, 2020

I don't know, I use conda naively. @kkraus14?

@kkraus14:

You can track builds here: https://anaconda.org/rapidsai-nightly/libcudf/files

Looks like CUDA 11 builds aren't enabled yet.

plugin/CMakeLists.txt (review thread resolved)

tests/ci_build/test_python.sh (review thread resolved)
demo/rmm_plugin/rmm_singlegpu.py (review thread resolved)
python-package/xgboost/data.py (review thread resolved)

@@ -478,4 +487,57 @@ std::unique_ptr<GradientBooster> CreateTrainedGBM(
return gbm;
}

#if defined(XGBOOST_USE_RMM) && XGBOOST_USE_RMM == 1

using cuda_mr_t = rmm::mr::cuda_memory_resource;
Member:

Nitpick: type names are defined in CamelCase.

@hcho3 (Collaborator, Author):

Fixed.
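
For example, the alias from the diff above rewritten to the CamelCase convention; the exact names used in the final patch may differ:

#if defined(XGBOOST_USE_RMM) && XGBOOST_USE_RMM == 1
using CUDAMemoryResource = rmm::mr::cuda_memory_resource;
using PoolMemoryResource = rmm::mr::pool_memory_resource<CUDAMemoryResource>;
#endif  // XGBOOST_USE_RMM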

tests/cpp/helpers.cc (review thread resolved)
tests/pytest.ini (outdated)
gtest: Mark a test that requires C++ Google Test executable.
no_rmm_pool_setup: Mark a test to skip the setup_rmm_pool() fixture.
Member:

Please add a note on when it should apply. From what I have seen, it should be applied when:

  • Dask is used.
  • a demo doesn't use RMM.

Correct me if I'm wrong. ;-)

@hcho3 (Collaborator, Author):

Mainly, this mark is to avoid allocating the RMM pool twice. If a Dask cluster sets up an RMM pool, we do not want to set up another pool (that would cause OOM).

Member:

Em ... I need to be extra careful when writing tests ...

@hcho3 (Collaborator, Author):

@trivialfis Can we use a fixture for Dask clusters? Then the RMM fixture can reliably detect whether a Dask cluster is being used.

Member:

Yes, there's documentation in the distributed library's developer guide.

@trivialfis (Member) commented Aug 11, 2020:

@hcho3 See tests/python/test_with_dask.py for an example. I'm not sure about the CUDA cluster, though. Worst case, you can read the code in distributed.utils_test.

@hcho3 (Collaborator, Author) commented Aug 11, 2020:

Done. Now the setup_rmm_pool() fixture will ignore any test that uses the local_cuda_cluster fixture, so that the RMM pool doesn't get duplicated.

@hcho3 (Collaborator, Author):

Actually, I ended up fixing the issue by reducing the pool size. Duplicated pools get resolved eventually as Python's garbage collection removes the old pool. The default size (1/2 GPU memory) is too big. Setting the pool to be < 4GB does the trick.

@@ -0,0 +1,40 @@
import pytest
Member:

@RAMitchell We should export our compilation info to Python in the future, including whether XGBoost has CUDA, NCCL, RMM etc.

plugin/CMakeLists.txt (review thread resolved)
tests/ci_build/Dockerfile.gpu (review threads resolved)
@trivialfis (Member):

Looks good to me in general. I think HostDeviceVector can save you the trouble of defining a unique pointer, and it can copy data to the device asynchronously.

@hcho3 (Collaborator, Author) commented Aug 11, 2020

@RAMitchell @trivialfis I think I addressed all of your comments. Can you take another look?

@hcho3 hcho3 merged commit 9adb812 into dmlc:master Aug 12, 2020
@hcho3 hcho3 deleted the add_rmm branch August 12, 2020 08:26
@hcho3 (Collaborator, Author) commented Aug 12, 2020

Thanks everyone for reviewing!

nyoko added a commit to nyoko/xgboost that referenced this pull request Aug 12, 2020
* [dask] Accept other inputs for prediction. (dmlc#5428)


* Returns a series when input is dataframe.

* Merge assert client.

* [R-package] changed FindLibR to take advantage of CMake cache (dmlc#5427)

* Support pandas SparseArray. (dmlc#5431)

* [R-package] fixed uses of class() (dmlc#5426)

Thank you a lot. Good catch!

* [dask] Fix missing value for scikit-learn interface. (dmlc#5435)

* Ranking metric acceleration on the gpu (dmlc#5398)

* Add link to GPU documentation (dmlc#5437)

* Add Accelerated Failure Time loss for survival analysis task (dmlc#4763)

* [WIP] Add lower and upper bounds on the label for survival analysis

* Update test MetaInfo.SaveLoadBinary to account for extra two fields

* Don't clear qids_ for version 2 of MetaInfo

* Add SetInfo() and GetInfo() method for lower and upper bounds

* changes to aft

* Add parameter class for AFT; use enum's to represent distribution and event type

* Add AFT metric

* changes to neg grad to grad

* changes to binomial loss

* changes to overflow

* changes to eps

* changes to code refactoring

* changes to code refactoring

* changes to code refactoring

* Re-factor survival analysis

* Remove aft namespace

* Move function bodies out of AFTNormal and AFTLogistic, to reduce clutter

* Move function bodies out of AFTLoss, to reduce clutter

* Use smart pointer to store AFTDistribution and AFTLoss

* Rename AFTNoiseDistribution enum to AFTDistributionType for clarity

The enum class was not a distribution itself but a distribution type

* Add AFTDistribution::Create() method for convenience

* changes to extreme distribution

* changes to extreme distribution

* changes to extreme

* changes to extreme distribution

* changes to left censored

* deleted cout

* changes to x,mu and sd and code refactoring

* changes to print

* changes to hessian formula in censored and uncensored

* changes to variable names and pow

* changes to Logistic Pdf

* changes to parameter

* Expose lower and upper bound labels to R package

* Use example weights; normalize log likelihood metric

* changes to CHECK

* changes to logistic hessian to standard formula

* changes to logistic formula

* Comply with coding style guideline

* Revert back Rabit submodule

* Revert dmlc-core submodule

* Comply with coding style guideline (clang-tidy)

* Fix an error in AFTLoss::Gradient()

* Add missing files to amalgamation

* Address @RAMitchell's comment: minimize future change in MetaInfo interface

* Fix lint

* Fix compilation error on 32-bit target, when size_t == bst_uint

* Allocate sufficient memory to hold extra label info

* Use OpenMP to speed up

* Fix compilation on Windows

* Address reviewer's feedback

* Add unit tests for probability distributions

* Make Metric subclass of Configurable

* Address reviewer's feedback: Configure() AFT metric

* Add a dummy test for AFT metric configuration

* Complete AFT configuration test; remove debugging print

* Rename AFT parameters

* Clarify test comment

* Add a dummy test for AFT loss for uncensored case

* Fix a bug in AFT loss for uncensored labels

* Complete unit test for AFT loss metric

* Simplify unit tests for AFT metric

* Add unit test to verify aggregate output from AFT metric

* Use EXPECT_* instead of ASSERT_*, so that we run all unit tests

* Use aft_loss_param when serializing AFTObj

This is to be consistent with AFT metric

* Add unit tests for AFT Objective

* Fix OpenMP bug; clarify semantics for shared variables used in OpenMP loops

* Add comments

* Remove AFT prefix from probability distribution; put probability distribution in separate source file

* Add comments

* Define kPI and kEulerMascheroni in probability_distribution.h

* Add probability_distribution.cc to amalgamation

* Remove unnecessary diff

* Address reviewer's feedback: define variables where they're used

* Eliminate all INFs and NANs from AFT loss and gradient

* Add demo

* Add tutorial

* Fix lint

* Use 'survival:aft' to be consistent with 'survival:cox'

* Move sample data to demo/data

* Add visual demo with 1D toy data

* Add Python tests

Co-authored-by: Philip Cho <chohyu01@cs.washington.edu>

* Force compressed buffer to be 4 bytes aligned. (dmlc#5441)

* Refactor tests with data generator. (dmlc#5439)

* Resolve travis failure. (dmlc#5445)

* Install dependencies by pip.

* Device dmatrix (dmlc#5420)

* Reducing memory consumption for 'hist' method on CPU (dmlc#5334)

* [R-package] fixed inconsistency in R -e calls in FindLibR.cmake (dmlc#5438)

* Thread safe, inplace prediction. (dmlc#5389)

Normal prediction with DMatrix is now thread safe with locks.  Added inplace prediction is lock free thread safe.

When data is on device (cupy, cudf), the returned data is also on device.

* Implementation for numpy, csr, cudf and cupy.

* Implementation for dask.

* Remove sync in simple dmatrix.

* Add support for dlpack, expose python docs for DeviceQuantileDMatrix (dmlc#5465)

* Reduce span check overhead. (dmlc#5464)

* Update dmlc-core. (dmlc#5466)

* Copy dmlc travis script to XGBoost.

* Prevent copying SimpleDMatrix. (dmlc#5453)

* Set default dtor for SimpleDMatrix to initialize default copy ctor, which is
deleted due to unique ptr.

* Remove commented code.
* Remove warning for calling host function (std::max).
* Remove warning for initialization order.
* Remove warning for unused variables.

* Remove silent parameter. (dmlc#5476)

* Enable parameter validation for skl. (dmlc#5477)

* Split up test helpers header. (dmlc#5455)

* Implement host span. (dmlc#5459)

* Accept other gradient types for split entry. (dmlc#5467)

* Implement robust regularization in 'survival:aft' objective (dmlc#5473)

* Robust regularization of AFT gradient and hessian

* Fix AFT doc; expose it to tutorial TOC

* Apply robust regularization to uncensored case too

* Revise unit test slightly

* Fix lint

* Update test_survival.py

* Use GradientPairPrecise

* Remove unused variables

* Fix dump model. (dmlc#5485)

* Small updates to GPU documentation (dmlc#5483)

* Add R code to AFT tutorial [skip ci] (dmlc#5486)

* Upgrade clang-tidy on CI. (dmlc#5469)

* Correct all clang-tidy errors.
* Upgrade clang-tidy to 10 on CI.

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>

* corrected spelling of 'list' (dmlc#5482)

* Edits on tutorial for XGBoost job on Kubernetes (dmlc#5487)

* add reference to gpu external memory (dmlc#5490)

* Fix out-of-bound array access in WQSummary::SetPrune() (dmlc#5493)

* [jvm-packages]add feature size for LabelPoint and DataBatch (dmlc#5303)

* fix type error

* Validate number of features.

* resolve comments

* add feature size for LabelPoint and DataBatch

* pass the feature size to native

* move feature size validating tests into a separate suite

* resolve comments

Co-authored-by: fis <jm.yuan@outlook.com>

* Use ellpack for prediction only when sparsepage doesn't exist. (dmlc#5504)

* Fix checking booster. (dmlc#5505)

* Use `get_params()` instead of `getattr` intrinsic.

* Requires setting leaf stat when expanding tree. (dmlc#5501)

* Fix GPU Hist feature importance.

* Remove distcol updater. (dmlc#5507)

Closes dmlc#5498.

* Unify max nodes. (dmlc#5497)

* Fix github merge. (dmlc#5509)

* Update doc for parameter validation. (dmlc#5508)

* Update doc for parameter validation.

* Fix github rebase.

* Serialise booster after training to reset state (dmlc#5484)

* Serialise booster after training to reset state

* Prevent process_type being set on load

* Check for correct updater sequence

* Remove makefiles. (dmlc#5513)

* [R] R raw serialization. (dmlc#5123)

* Add bindings for serialization.
* Change `xgb.save.raw' into full serialization instead of simple model.
* Add `xgb.load.raw' for unserialization.
* Run devtools.

* [CI] Use devtoolset-6 because devtoolset-4 is EOL and no longer available (dmlc#5506)

* Use devtoolset-6.

* [CI] Use devtoolset-6 because devtoolset-4 is EOL and no longer available

* CUDA 9.0 doesn't work with devtoolset-6; use devtoolset-4 for GPU build only

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>

* fix typo "customized" (dmlc#5515)

* Ensure that configured dmlc/build_config.h is picked up by Rabit and XGBoost (dmlc#5514)

* Ensure that configured header (build_config.h) from dmlc-core is picked up by Rabit and XGBoost

* Check which Rabit target is being used

* Use CMake 3.13 in all Jenkins tests

* Upgrade CMake in Travis CI

* Install CMake using Kitware installer

* Remove existing CMake (3.12.4)

* Update Python doc. [skip ci] (dmlc#5517)

* Update doc for copying booster. [skip ci]

The issue is resolved in  dmlc#5312 .

* Add version for new APIs. [skip ci]

* Add Neptune and Optuna to list of examples (dmlc#5528)

* [jvm-packages] [CI] Create a Maven repository to host SNAPSHOT JARs (dmlc#5533)

* Write binary header. (dmlc#5532)

* Purge device_helpers.cuh (dmlc#5534)

* Simplifications with caching_device_vector

* Purge device helpers

* [dask] dask cudf inplace prediction. (dmlc#5512)

* Add inplace prediction for dask-cudf.

* Remove Dockerfile.release, since it's not used anywhere

* Use Conda exclusively in CUDF and GPU containers

* Improve cupy memory copying.

* Add skip marks to tests.

* Add mgpu-cudf category on the CI to run all distributed tests.

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>

* [CI] Use Ubuntu 18.04 LTS in JVM CI, because 19.04 is EOL (dmlc#5537)

* [jvm-packages] [CI] Publish XGBoost4J JARs with Scala 2.11 and 2.12 (dmlc#5539)

* Fix CLI model IO. (dmlc#5535)


* Add test for comparing Python and CLI training result.

* Fix uninitialized value bug in xgboost callback (dmlc#5463)


Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Use thrust functions instead of custom functions (dmlc#5544)

* Optimizations for RNG in InitData kernel (dmlc#5522)

* optimizations for subsampling in InitData

* optimizations for subsampling in InitData

Co-authored-by: SHVETS, KIRILL <kirill.shvets@intel.com>

* Add missing aft parameters. [skip ci] (dmlc#5553)

* Don't use uint for threads. (dmlc#5542)

* Fix skl nan tag. (dmlc#5538)

* Assert matching length of evaluation inputs. (dmlc#5540)

* Fix r interaction constraints (dmlc#5543)

* Unify the parsing code.

* Cleanup.

* Fix slice and get info. (dmlc#5552)

* gpu_hist performance fixes (dmlc#5558)

* Remove unnecessary cuda API calls

* Fix histogram memory growth

* Use non-synchronising scan (dmlc#5560)

* Fix non-openmp build. (dmlc#5566)


* Add test to Jenkins.
* Fix threading utils tests.
* Require thread library.

* Don't set seed on CLI interface. (dmlc#5563)

* [jvm-packages] XGBoost Spark should deal with NaN when parsing evaluation output (dmlc#5546)

* Group aware GPU sketching. (dmlc#5551)

* Group aware GPU weighted sketching.

* Distribute group weights to each data point.
* Relax the test.
* Validate input meta info.
* Fix metainfo copy ctor.

* Fix configuration I load model. (dmlc#5562)

* [Breaking] Set output margin to True for custom objective. (dmlc#5564)

* Set output margin to True for custom objective in Python and R.

* Add a demo for writing multi-class custom objective function.

* Run tests on selected demos.

* For histograms, opting into maximum shared memory available per block. (dmlc#5491)

* Use cudaDeviceGetAttribute instead of cudaGetDeviceProperties (dmlc#5570)

* Restore attributes in complete. (dmlc#5573)

* Enable parameter validation for R. (dmlc#5569)

* Enable parameter validation for R.

* Add test.

* Update document. (dmlc#5572)

* Port R compatibility patches from 1.0.0 release branch (dmlc#5577)

* Don't use memset to set struct when compiling for R

* Support 32-bit Solaris target for R package

* [CI] Use Vault repository to re-gain access to devtoolset-4 (dmlc#5589)

* [CI] Use Vault repository to re-gain access to devtoolset-4

* Use manylinux2010 tag

* Update Dockerfile.jvm

* Fix rename_whl.py

* Upgrade Pip, to handle manylinux2010 tag

* Update insert_vcomp140.py

* Update test_python.sh

* Avoid rabit calls in learner configuration (dmlc#5581)

* Hide C++ symbols in libxgboost.so when building Python wheel (dmlc#5590)

* Hide C++ symbols in libxgboost.so when building Python wheel

* Update Jenkinsfile

* Add test

* Upgrade rabit

* Add setup.py option.

Co-authored-by: fis <jm.yuan@outlook.com>

* Set device in device dmatrix. (dmlc#5596)

* Fix compilation on Mac OSX High Sierra (10.13) (dmlc#5597)

* Fix compilation on Mac OSX High Sierra

* [CI] Build Mac OSX binary wheel using Travis CI

* [CI] Grant public read access to Mac OSX wheels (dmlc#5602)

* [R] Address warnings to comply with CRAN submission policy (dmlc#5600)

* [R] Address warnings to comply with CRAN submission policy

* Include <xgboost/logging.h>

* Instruct Mac users to install libomp (dmlc#5606)

* Clarify meaning of `training` parameter in XGBoosterPredict() (dmlc#5604)

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>

* Better message when no GPU is found. (dmlc#5594)

* Refactor the CLI. (dmlc#5574)


* Enable parameter validation.
* Enable JSON.
* Catch `dmlc::Error`.
* Show help message.

* Move dask tutorial closer other distributed tutorials (dmlc#5613)

* Refactor gpu_hist split evaluation (dmlc#5610)

* Refactor

* Rewrite evaluate splits

* Add more tests

* Fix build on big endian CPUs (dmlc#5617)

* Fix build on big endian CPUs

* Clang-tidy

* Remove dead code. (dmlc#5635)

* Move device dmatrix construction code into ellpack. (dmlc#5623)

* Enhance nvtx support. (dmlc#5636)

* Support 64bit seed. (dmlc#5643)

* Resolve vector<bool>::iterator crash (dmlc#5642)

* Reduce device synchronisation (dmlc#5631)

* Reduce device synchronisation

* Initialise pinned memory

* Upgrade to CUDA 10.0 (dmlc#5649) (dmlc#5652)

Co-authored-by: fis <jm.yuan@outlook.com>

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* skip missing lookup if nothing is missing in CPU hist partition kernel. (dmlc#5644)

* [xgboost] skip missing lookup if nothing is missing

* Update Python demos with tests. (dmlc#5651)

* Remove GPU memory usage demo.
* Add tests for demos.
* Remove `silent`.
* Remove shebang as it's not portable.

* Add JSON schema to model dump. (dmlc#5660)

* Pseudo-huber loss metric added (dmlc#5647)


- Add pseudo huber loss objective.
- Add pseudo huber loss metric.

Co-authored-by: Reetz <s02reetz@iavgroup.local>

* [JVM Packages] Catch dmlc error by ref. (dmlc#5678)

* Remove silent from R demos. (dmlc#5675)

* Remove silent from R demos.

* Vignettes.

* add pointers to the gpu external memory paper (dmlc#5684)

* Distributed optimizations for 'hist' method with CPUs (dmlc#5557)


Co-authored-by: SHVETS, KIRILL <kirill.shvets@intel.com>

* Document more objective parameters in R package (dmlc#5682)

* C++14 for xgboost (dmlc#5664)

* Implement Python data handler. (dmlc#5689)


* Define data handlers for DMatrix.
* Throw ValueError in scikit learn interface.

* [R-package] Reduce duplication in configure.ac (dmlc#5693)


* updated configure

* Remove redundant sketching. (dmlc#5700)

* [R] Fix duplicated libomp.dylib error on Mac OSX (dmlc#5701)

* Fix IsDense. (dmlc#5702)

* Let XGBoostError inherit ValueError. (dmlc#5696)

* Define _CRT_SECURE_NO_WARNINGS to remove unneeded warnings in MSVC (dmlc#5434)

* Changed build.rst (binary wheels are supported for macOS also) (dmlc#5711)

* [CI] Remove CUDA 9.0 from Windows CI. (dmlc#5674)

* Remove CUDA 9.0 on Windows CI.

* Require cuda10 tag, to differentiate

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Require CUDA 10.0+ in CMake build (dmlc#5718)

* Require Python 3.6+; drop Python 3.5 from CI (dmlc#5715)

* [dask] Return GPU Series when input is from cuDF. (dmlc#5710)


* Refactor predict function.

* [Doc] Fix typos in AFT tutorial (dmlc#5716)

* gpu_hist performance tweaks (dmlc#5707)

* Remove device vectors

* Remove allreduce synchronize

* Remove double buffer

* Allow pass fmap to importance plot (dmlc#5719)

Co-authored-by: Peter Jung <peter.jung@heureka.cz>
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>

* Fix release degradation (dmlc#5720)

* fix release degradation, related to 5666

* less resizes

Co-authored-by: SHVETS, KIRILL <kirill.shvets@intel.com>

* Fix loading old model. (dmlc#5724)


* Add test.

* Bump version to 1.2.0 snapshot in master (dmlc#5733)

* Add swift package reference (dmlc#5728)

Co-authored-by: Peter Jung <peter.jung@heureka.cz>

* Don't use mask in array interface. (dmlc#5730)

* Bump version in header. (dmlc#5742)

* [CI] Remove CUDA 9.0 from CI (dmlc#5745)

* Add pkgconfig to cmake (dmlc#5744)

* Add pkgconfig to cmake

* Move xgboost.pc.in to cmake/

Co-authored-by: Peter Jung <peter.jung@heureka.cz>
Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>

* Expose device sketching in header. (dmlc#5747)

* Add Python binding for rabit ops. (dmlc#5743)

* Add float32 histogram (dmlc#5624)

* new single_precision_histogram param was added.

Co-authored-by: SHVETS, KIRILL <kirill.shvets@intel.com>
Co-authored-by: fis <jm.yuan@outlook.com>

* Reorder includes. (dmlc#5749)

* Reorder includes.

* R.

* Remove `max.depth` in R gblinear example. (dmlc#5753)

* Speed up python test (dmlc#5752)

* Speed up tests

* Prevent DeviceQuantileDMatrix initialisation with numpy

* Use joblib.memory

* Use RandomState

* Add helper for generating batches of data. (dmlc#5756)

* Add helper for generating batches of data.

* VC keyword clash.

* Another clash.

* Remove column major specialization. (dmlc#5755)


Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>

* Document addition of new committer @SmirnovEgorRu (dmlc#5762)

* Add release note for 1.1.0 in NEWS.md (dmlc#5763)

* Add release note for 1.1.0 in NEWS.md

* Address reviewer's feedback

* Revert "Reorder includes. (dmlc#5749)" (dmlc#5771)

This reverts commit d3a0efb.

* [python-package] remove unused imports (dmlc#5776)

* Added conda environment file for building docs (dmlc#5773)

* [R] replace uses of T and F with TRUE and FALSE (dmlc#5778)

* [R-package] replace uses of T and F with TRUE and FALSE

* enable linting

* Remove skip

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Implement weighted sketching for adapter. (dmlc#5760)


* Bounded memory tests.
* Fixed memory estimation.

* Avoid including `c_api.h` in header files. (dmlc#5782)

* Implement `Empty` method for host device vector. (dmlc#5781)

* Fix accessing nullptr.

* Bump com.esotericsoftware to 4.0.2 (dmlc#5690)

Co-authored-by: Antti Saukko <antti.saukko@verizonmedia.com>

* [DOC] Mention dask blog post in doc. [skip ci] (dmlc#5789)

* [R] Remove dependency on gendef for Visual Studio builds (fixes dmlc#5608) (dmlc#5764)

* [R-package] Remove dependency on gendef for Visual Studio builds (fixes dmlc#5608)

* clarify docs

* removed debugging print statement

* Make R CMake install more robust

* Fix doc format; add ToC

* Update build.rst

* Fix AppVeyor

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>

* Add new skl model attribute for number of features (dmlc#5780)

* Fix exception causes all over the codebase (dmlc#5787)

* Use hypothesis (dmlc#5759)

* Use hypothesis

* Allow int64 array interface for groups

* Add packages to Windows CI

* Add to travis

* Make sure device index is set correctly

* Fix dask-cudf test

* appveyor

* Accept string for ArrayInterface constructor.

* Revert "Accept string for ArrayInterface constructor."

This reverts commit e8ecafb.

* Implement fast number serialization routines. (dmlc#5772)

* Implement ryu algorithm.
* Implement integer printing.
* Full coverage roundtrip test.

* Add cupy to Windows CI (dmlc#5797)

* Add cupy to Windows CI

* Update Jenkinsfile-win64

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Update Jenkinsfile-win64

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Update tests/python-gpu/test_gpu_prediction.py

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Add an option to run brute-force test for JSON round-trip (dmlc#5804)

* Add an option to run brute-force test for JSON round-trip

* Apply reviewer's feedback

* Remove unneeded objects

* Parallel run.

* Max.

* Use signed 64-bit loop var, to support MSVC

* Add exhaustive test to CI

* Run JSON test in Win build worker

* Revert "Run JSON test in Win build worker"

This reverts commit c97b2c7.

* Revert "Add exhaustive test to CI"

This reverts commit c149c2c.

Co-authored-by: fis <jm.yuan@outlook.com>

* [CI] Fix cuDF install; merge 'gpu' and 'cudf' test suite (dmlc#5814)

* Implement extend method for meta info. (dmlc#5800)

* Implement extend for host device vector.

* Update rabit. (dmlc#5680)

* Update document for model dump. (dmlc#5818)

* Clarify the relationship between dump and save.
* Mention the schema.

* [Doc] Fix rendering of Markdown docs, e.g. R doc (dmlc#5821)

* Remove unweighted GK quantile. (dmlc#5816)

* Rename Ant Financial to Ant Group (dmlc#5827)

* Accept string for ArrayInterface constructor. (dmlc#5799)

* Implement a DMatrix Proxy. (dmlc#5803)

* Relax test for shotgun. (dmlc#5835)

* Relax linear test. (dmlc#5849)

* Increased error in coordinate is mostly due to floating point error.
* Shotgun uses Hogwild!, which is non-deterministic and can have even greater
floating point error.

* Implement iterative DMatrix. (dmlc#5837)

* Ensure that LoadSequentialFile() actually read the whole file (dmlc#5831)

* Add c-api-demo to .gitignore (dmlc#5855)

* Use dmlc stream when URI protocol is not local file. (dmlc#5857)

* Move feature names and types of DMatrix from Python to C++. (dmlc#5858)


* Add thread local return entry for DMatrix.
* Save feature name and feature type in binary file.

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Split Features into Groups to Compute Histograms in Shared Memory (dmlc#5795)

* Implement GK sketching on GPU. (dmlc#5846)

* Implement GK sketching on GPU.
* Strong tests on quantile building.
* Handle sparse dataset by binary searching the column index.
* Hypothesis test on dask.

* Accept iterator in device dmatrix. (dmlc#5783)


* Remove Device DMatrix.

* Remove print. (dmlc#5867)

* fix device sketch with weights in external memory mode (dmlc#5870)

* [Doc] Document that CUDA 10.0 is required [skip ci] (dmlc#5872)

* [CI] Simplify CMake build with modern CMake techniques (dmlc#5871)

* [CI] Simplify CMake build

* Make sure that plugins can be built

* [CI] Install lz4 on Mac

* Add new parameter singlePrecisionHistogram to xgboost4j-spark (dmlc#5811)

Expose the existing 'singlePrecisionHistogram' param to the Spark layer.

* Upgrade Rabit (dmlc#5876)

* [jvm-packages] update spark dependency to 3.0.0 (dmlc#5836)

* Cleanup on device sketch. (dmlc#5874)

* Remove old functions.

* Merge weighted and un-weighted into a common interface.

* [CI] Enforce daily budget in Jenkins CI (dmlc#5884)

* [CI] Throttle Jenkins CI

* Don't use Jenkins master instance

* Add XGBoosterGetNumFeature (dmlc#5856)

- add GetNumFeature to Learner
- add XGBoosterGetNumFeature to C API
- update c-api-demo accordingly

* Fix NDK Build. (dmlc#5886)

* Explicit cast for slice.

* [CI] Reduce load on Windows CI pipeline (dmlc#5892)

* Fix R package build with CMake 3.13 (dmlc#5895)

* Fix R package build with CMake 3.13

* Require OpenMP for xgboost-r target

* Simplify the data backends. (dmlc#5893)

* [CI] update spark version to 3.0.0 (dmlc#5890)

* [CI] update spark version to 3.0.0

* Update Dockerfile.jvm_cross

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Fix sketch size calculation. (dmlc#5898)

* Dask device dmatrix (dmlc#5901)


* Fix softprob with empty dmatrix.

* GPU implementation of AFT survival objective and metric (dmlc#5714)

* Add interval accuracy

* De-virtualize AFT functions

* Lint

* Refactor AFT metric using GPU-CPU reducer

* Fix R build

* Fix build on Windows

* Fix copyright header

* Clang-tidy

* Fix crashing demo

* Fix typos in comment; explain GPU ID

* Remove unnecessary #include

* Add C++ test for interval accuracy

* Fix a bug in accuracy metric: use log pred

* Refactor AFT objective using GPU-CPU Transform

* Lint

* Fix lint

* Use Ninja to speed up build

* Use time, not /usr/bin/time

* Add cpu_build worker class, with concurrency = 1

* Use concurrency = 1 only for CUDA build

* concurrency = 1 for clang-tidy

* Address reviewer's feedback

* Update link to AFT paper

* Fix Windows 2016 build. (dmlc#5902)

* Further improvements and savings in Jenkins pipeline (dmlc#5904)

* Publish artifacts only on the master and release branches

* Build CUDA only for Compute Capability 7.5 when building PRs

* Run all Windows jobs in a single worker image

* Build nightly XGBoost4J SNAPSHOT JARs with Scala 2.12 only

* Show skipped Python tests on Windows

* Make Graphviz optional for Python tests

* Add back C++ tests

* Unstash xgboost_cpp_tests

* Fix label to CUDA 10.1

* Install cuPy for CUDA 10.1

* Install jsonschema

* Address reviewer's feedback

* Support building XGBoost with CUDA 11 (dmlc#5808)

* Change serialization test.
* Add CUDA 11 tests on Linux CI.

Co-authored-by: Philip Hyunsu Cho <chohyu01@cs.washington.edu>

* Add Github Action for R. (dmlc#5911)

* Fix lintr errors.

* Fix typo in CI. [skip ci] (dmlc#5919)

* [Doc] Document new objectives and metrics available on GPUs (dmlc#5909)

* Fix mingw build with R. (dmlc#5918)

* Add option to enable all compiler warnings in GCC/Clang (dmlc#5897)

* Add option to enable all compiler warnings in GCC/Clang

* Fix -Wall for CUDA sources

* Make -Wall private req for xgboost-r

* Setup github action. (dmlc#5917)

* Remove R and JVM from appveyor. (dmlc#5922)

* Fix r early stop with custom objective. (dmlc#5923)

* Specify `ntreelimit`.

* Add explicit template specialization for portability (dmlc#5921)

* Add explicit template specializations

* Adding Specialization for FileAdapterBatch

* Cache dependencies on Github Action. (dmlc#5928)

* Use `cudaOccupancyMaxPotentialBlockSize` to calculate the block size. (dmlc#5926)

* [BLOCKING] Handle empty rows in data iterators correctly (dmlc#5929)

* [jvm-packages] Handle empty rows in data iterators correctly

* Fix clang-tidy error

* last empty row

* Add comments [skip ci]

Co-authored-by: Nan Zhu <nanzhu@uber.com>

* [CI] Make Python model compatibility test runnable locally (dmlc#5941)

* [BLOCKING] Remove to_string. (dmlc#5934)

* [R] Add a compatibility layer to load Booster object from an old RDS file (dmlc#5940)

* [R] Add a compatibility layer to load Booster from an old RDS
* Modify QuantileHistMaker::LoadConfig() to be backward compatible with 1.1.x
* Add a big warning about compatibility in QuantileHistMaker::LoadConfig()
* Add testing suite
* Discourage use of saveRDS() in CRAN doc

* [R] Enable weighted learning to rank (dmlc#5945)

* [R] enable weighted learning to rank

* Add R unit test for ranking

* Fix lint

* [BLOCKING] [jvm-packages] add gpu_hist and enable gpu scheduling (dmlc#5171)

* [jvm-packages] add gpu_hist tree method

* change updater hist to grow_quantile_histmaker

* add gpu scheduling

* pass correct parameters to xgboost library

* remove debug info

* add use.cuda for pom

* add CI for gpu_hist for jvm

* add gpu unit tests

* use gpu node to build jvm

* use nvidia-docker

* Add CLI interface to create_jni.py using argparse

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>

* [CI] Improve R linter script (dmlc#5944)

* [CI] Move lint to a separate script

* [CI] Improved lintr launcher

* Add lintr as a separate action

* Add custom parsing logic to print out logs

* Fix lintr issues in demos

* Run R demos

* Fix CRAN checks

* Install XGBoost into R env before running lintr

* Install devtools (needed to run demos)

* Fix prediction heuristic (dmlc#5955)


* Relax check for prediction.
* Relax test in spark test.
* Add tests in C++.

* [Breaking] Fix custom metric for multi output. (dmlc#5954)


* Set output margin to true for custom metric.  This fixes only R and Python.

* Disable feature validation on sklearn predict prob. (dmlc#5953)


* Fix issue when scikit learn interface receives transformed inputs.

* [CI] Fix broken Docker container 'cpu' (dmlc#5956)

* Fix evaluate root split. (dmlc#5948)

* [Dask] Asyncio support. (dmlc#5862)

* Thread-safe prediction by making the prediction cache thread-local. (dmlc#5853)

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>

* Force colored output for ninja build. (dmlc#5959)

* Update XGBoost + Dask overview documentation (dmlc#5961)

* Add imports to code snippet

* Better writing.

* Add CMake flag to log C API invocations, to aid debugging (dmlc#5925)

* Add CMake flag to log C API invocations, to aid debugging

* Remove unnecessary parentheses

* [CI] Assign larger /dev/shm to NCCL (dmlc#5966)

* [CI] Assign larger /dev/shm to NCCL

* Use 10.2 artifact to run multi-GPU Python tests

* Add CUDA 10.0 -> 11.0 cross-version test; remove CUDA 10.0 target

* Add missing Pytest marks to AsyncIO unit test (dmlc#5968)

* [R] Provide better guidance for persisting XGBoost model (dmlc#5964)

* [R] Provide better guidance for persisting XGBoost model

* Update saving_model.rst

* Add a paragraph about xgb.serialize()

* [jvm-packages] Fix wrong method name `setAllowZeroForMissingValue`. (dmlc#5740)

* Allow non-zero for missing value when training.

* Fix wrong method names.

* Add a unit test

* Move the getter/setter unit test to MissingValueHandlingSuite

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>

* Export DaskDeviceQuantileDMatrix in doc. [skip ci] (dmlc#5975)

* Fix sklearn doc. (dmlc#5980)

* Update Python custom objective demo. (dmlc#5981)

* Update JSON schema. (dmlc#5982)


* Update JSON schema for pseudo huber.
* Update JSON model schema.

* Fix missing data warning. (dmlc#5969)

* Fix data warning.

* Add numpy/scipy test.

* Enforce tree order in JSON. (dmlc#5974)


* Make JSON model IO more future proof by using tree id in model loading.

* Fix dask predict shape infer. (dmlc#5989)

* [R] fix uses of 1:length(x) and other small things (dmlc#5992)

* Fix typo in tracker logging (dmlc#5994)

* Introducing DPC++-based plugin (predictor, objective function) supporting oneAPI programming model (dmlc#5825)

* Added plugin with DPC++-based predictor and objective function

* Update CMakeLists.txt

* Update regression_obj_oneapi.cc

* Added README.md for OneAPI plugin

* Added OneAPI predictor support to gbtree

* Update README.md

* Merged kernels in gradient computation. Enabled multiple loss functions with DPC++ backend

* Aligned plugin CMake files with latest master changes. Fixed whitespace typos

* Removed debug output

* [CI] Make oneapi_plugin a CMake target

* Added tests for OneAPI plugin for predictor and obj. functions

* Temporarily switched to default selector for device dispacthing in OneAPI plugin to enable execution in environments without gpus

* Updated readme file.

* Fixed USM usage in predictor

* Removed workaround with explicit templated names for DPC++ kernels

* Fixed warnings in plugin tests

* Fix CMake build of gtest

Co-authored-by: Hyunsu Cho <chohyu01@cs.washington.edu>

* Remove skmaker. (dmlc#5971)

* Rabit update. (dmlc#5978)

* Remove parameter on JVM Packages.

* Move warning about empty dataset. (dmlc#5998)

* [Breaking] Fix .predict() method and add .predict_proba() in xgboost.dask.DaskXGBClassifier (dmlc#5986)

* Unify CPU hist sketching (dmlc#5880)

* Fix nightly build doc. [skip ci] (dmlc#6004)

* Fix nightly build doc. [skip ci]

* Fix title too short. [skip ci]

* RMM integration plugin (dmlc#5873)

* [CI] Add RMM as an optional dependency

* Replace caching allocator with pool allocator from RMM

* Revert "Replace caching allocator with pool allocator from RMM"

This reverts commit e15845d.

* Use rmm::mr::get_default_resource()

* Try setting default resource (doesn't work yet)

* Allocate pool_mr in the heap

* Prevent leaking pool_mr handle

* Separate EXPECT_DEATH() in separate test suite suffixed DeathTest

* Turn off death tests for RMM

* Address reviewer's feedback

* Prevent leaking of cuda_mr

* Fix Jenkinsfile syntax

* Remove unnecessary function in Jenkinsfile

* [CI] Install NCCL into RMM container

* Run Python tests

* Try building with RMM, CUDA 10.0

* Do not use RMM for CUDA 10.0 target

* Actually test for test_rmm flag

* Fix TestPythonGPU

* Use CNMeM allocator, since pool allocator doesn't yet support multiGPU

* Use 10.0 container to build RMM-enabled XGBoost

* Revert "Use 10.0 container to build RMM-enabled XGBoost"

This reverts commit 789021f.

* Fix Jenkinsfile

* [CI] Assign larger /dev/shm to NCCL

* Use 10.2 artifact to run multi-GPU Python tests

* Add CUDA 10.0 -> 11.0 cross-version test; remove CUDA 10.0 target

* Rename Conda env rmm_test -> gpu_test

* Use env var to opt into CNMeM pool for C++ tests

* Use identical CUDA version for RMM builds and tests

* Use Pytest fixtures to enable RMM pool in Python tests

* Move RMM to plugin/CMakeLists.txt; use PLUGIN_RMM

* Use per-device MR; use command arg in gtest

* Set CMake prefix path to use Conda env

* Use 0.15 nightly version of RMM

* Remove unnecessary header

* Fix a unit test when cudf is missing

* Add RMM demos

* Remove print()

* Use HostDeviceVector in GPU predictor

* Simplify pytest setup; use LocalCUDACluster fixture

* Address reviewers' commments

Co-authored-by: Hyunsu Cho <chohyu01@cs.wasshington.edu>

Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>
Co-authored-by: James Lamb <jaylamb20@gmail.com>
Co-authored-by: sriramch <33358417+sriramch@users.noreply.github.com>
Co-authored-by: Rory Mitchell <r.a.mitchell.nz@gmail.com>
Co-authored-by: Avinash Barnwal <avinashbarnwal123@gmail.com>
Co-authored-by: Philip Cho <chohyu01@cs.washington.edu>
Co-authored-by: ShvetsKS <33296480+ShvetsKS@users.noreply.github.com>
Co-authored-by: Paul Kaefer <2408155+paulkaefer@users.noreply.github.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: Rong Ou <rong.ou@gmail.com>
Co-authored-by: Zhang Zhang <zhang.zhang@intel.com>
Co-authored-by: Bobby Wang <wbo4958@gmail.com>
Co-authored-by: Liang-Chi Hsieh <viirya@gmail.com>
Co-authored-by: Nicolas Scozzaro <nscozzaro@gmail.com>
Co-authored-by: Kamil A. Kaczmarek <kamil.kaczmarek@neptune.ai>
Co-authored-by: Melissa Kohl <mjkohl32@gmail.com>
Co-authored-by: SHVETS, KIRILL <kirill.shvets@intel.com>
Co-authored-by: Liang-Chi Hsieh <liangchi@uber.com>
Co-authored-by: Andy Adinets <aadinets@nvidia.com>
Co-authored-by: Jason E. Aten, Ph.D <j.e.aten@gmail.com>
Co-authored-by: Oleksandr Kuvshynov <661042+okuvshynov@users.noreply.github.com>
Co-authored-by: LionOrCatThatIsTheQuestion <44895499+LionOrCatThatIsTheQuestion@users.noreply.github.com>
Co-authored-by: Reetz <s02reetz@iavgroup.local>
Co-authored-by: Lorenz Walthert <lorenz.walthert@icloud.com>
Co-authored-by: Dmitry Mottl <dmitry.mottl@gmail.com>
Co-authored-by: Peter Jung <peter@jung.ninja>
Co-authored-by: Peter Jung <peter.jung@heureka.cz>
Co-authored-by: Elliot Hershberg <eahershberg@gmail.com>
Co-authored-by: anttisaukko <antti.saukko@gmail.com>
Co-authored-by: Antti Saukko <antti.saukko@verizonmedia.com>
Co-authored-by: Alex <wozn0001@e.ntu.edu.sg>
Co-authored-by: Ram Rachum <ram@rachum.com>
Co-authored-by: Alexander Gugel <alexander.gugel@gmail.com>
Co-authored-by: Nan Zhu <nanzhu@uber.com>
Co-authored-by: boxdot <d@zerovolt.org>
Co-authored-by: James Bourbeau <jrbourbeau@users.noreply.github.com>
Co-authored-by: Shaochen Shi <shishaochen_ha@sina.com>
Co-authored-by: Anthony D'Amato <anthony.damato@hotmail.fr>
Co-authored-by: Vladislav Epifanov <vepifanov92@gmail.com>
Co-authored-by: jameskrach <69264125+jameskrach@users.noreply.github.com>
Co-authored-by: Hyunsu Cho <chohyu01@cs.wasshington.edu>
@gnaggnoyil:

Is it intended behavior to add the RMM header directory directly to the include path? RMM exports a CMake target that specifies how to use it, and most of the time it's not as simple as including the RMM header directory directly.


Successfully merging this pull request may close these issues.

[Feature proposal] Support RAPIDS Memory Manager (RMM)