v1.3.3 (000292c)

@trivialfis trivialfis released this Jan 20, 2021

  • Fix regression on best_ntree_limit. (#6616)

@trivialfis trivialfis released this Jan 13, 2021 · 3 commits to release_1.3.0 since this release

  • Fix compatibility with newer scikit-learn. (#6555)
  • Fix wrong best_ntree_limit in multi-class. (#6569)
  • Ensure that Rabit can be compiled on Solaris (#6578)
  • Fix best_ntree_limit for linear and dart. (#6579)
  • Remove duplicated DMatrix creation in scikit-learn interface. (#6592)
  • Fix evals_result in XGBRanker. (#6594)
v1.3.1 (a78d0d4)

@trivialfis trivialfis released this Dec 22, 2020 · 9 commits to release_1.3.0 since this release

  • Enable loading model from <1.0.0 trained with objective='binary:logitraw' (#6517)
  • Fix handling of print period in EvaluationMonitor (#6499)
  • Fix a bug in metric configuration after loading model. (#6504)
  • Fix save_best early stopping option (#6523)
  • Remove cupy.array_equal, since it's not compatible with cuPy 7.8 (#6528)

You can verify the downloaded source code xgboost.tar.gz by running the following in a Unix shell:

echo "fd51e844dd0291fd9e7129407be85aaeeda2309381a6e3fc104938b27fb09279 *xgboost.tar.gz" | shasum -a 256 --check

@hcho3 hcho3 released this Dec 9, 2020 · 122 commits to master since this release

XGBoost4J-Spark: Exceptions should cancel jobs gracefully instead of killing SparkContext (#6019).

  • By default, exceptions in XGBoost4J-Spark cause the whole SparkContext to shut down, necessitating a restart of the Spark cluster. This behavior is often a major inconvenience.
  • Starting from 1.3.0 release, XGBoost adds a new parameter killSparkContextOnWorkerFailure to optionally prevent killing SparkContext. If this parameter is set, exceptions will gracefully cancel training jobs instead of killing SparkContext.

GPUTreeSHAP: GPU acceleration of the TreeSHAP algorithm (#6038, #6064, #6087, #6099, #6163, #6281, #6332)

  • SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain predictions of machine learning models. It computes feature importance scores for individual examples, establishing how each feature influences a particular prediction. TreeSHAP is an optimized SHAP algorithm specifically designed for decision tree ensembles.
  • Starting with 1.3.0 release, it is now possible to leverage CUDA-capable GPUs to accelerate the TreeSHAP algorithm. Check out the demo notebook.
  • The CUDA implementation of the TreeSHAP algorithm is hosted at rapidsai/GPUTreeSHAP. XGBoost imports it as a Git submodule.

New style Python callback API (#6199, #6270, #6320, #6348, #6376, #6399, #6441)

  • The XGBoost Python package now offers a re-designed callback API. The new callback API lets you design various extensions of training in idiomatic Python. In addition, the new callback API allows you to use early stopping with the native Dask API (xgboost.dask). Check out the tutorial and the demo.

Enable the use of DeviceQuantileDMatrix / DaskDeviceQuantileDMatrix with large data (#6201, #6229, #6234).

  • DeviceQuantileDMatrix can achieve memory saving by avoiding extra copies of the training data, and the saving is bigger for large data. Unfortunately, large data with more than 2^31 elements was triggering integer overflow bugs in CUB and Thrust. Tracking issue: #6228.
  • This release contains a series of work-arounds to allow the use of DeviceQuantileDMatrix with large data:
    • Loop over copy_if (#6201)
    • Loop over thrust::reduce (#6229)
    • Implement the inclusive scan algorithm in-house, to handle large offsets (#6234)

Support slicing of tree models (#6302)

  • Accessing the best iteration of a model after the application of early stopping used to be error-prone, requiring the user to manually pass the ntree_limit argument to the predict() function.
  • Now we provide a simple interface to slice tree models by specifying a range of boosting rounds. The tree ensemble can be split into multiple sub-ensembles via the slicing interface. Check out an example.
  • In addition, the early stopping callback now supports save_best option. When enabled, XGBoost will save (persist) the model at the best boosting round and discard the trees that were fit subsequent to the best round.

Weighted subsampling of features (columns) (#5962)

  • It is now possible to sample features (columns) via weighted subsampling, in which features with higher weights are more likely to be selected in the sample. Weighted subsampling allows you to encode domain knowledge by emphasizing a particular set of features in the choice of tree splits. In addition, you can prevent particular features from being used in any splits, by assigning them zero weights.
  • Check out the demo.

Improved integration with Dask

  • Support reverse-proxy environment such as Google Kubernetes Engine (#6343, #6475)
  • An XGBoost training job will no longer use all available workers. Instead, it will only use the workers that contain input data (#6343).
  • The new callback API works well with the Dask training API.
  • The predict() and fit() function of DaskXGBClassifier and DaskXGBRegressor now accept a base margin (#6155).
  • Support more meta data in the Dask API (#6130, #6132, #6333).
  • Allow passing extra keyword arguments as kwargs in predict() (#6117)
  • Fix typo in dask interface: sample_weights -> sample_weight (#6240)
  • Allow empty data matrix in AFT survival, as Dask may produce empty partitions (#6379)
  • Speed up prediction by overlapping prediction jobs in all workers (#6412)

Experimental support for direct splits with categorical features (#6028, #6128, #6137, #6140, #6164, #6165, #6166, #6179, #6194, #6219)

  • Currently, XGBoost requires users to one-hot-encode categorical variables. This has adverse performance implications, as the creation of many dummy variables results in higher memory consumption and may require fitting deeper trees to achieve equivalent model accuracy.
  • The 1.3.0 release of XGBoost contains experimental support for direct handling of categorical variables in test nodes. Each test node will have a condition of the form feature_value \in match_set, where the match_set on the right-hand side contains one or more matching categories. The matching categories in match_set represent the condition for traversing to the right child node. Currently, XGBoost will only generate categorical splits with a single matching category ("one-vs-rest split"). In a future release, we plan to remove this restriction and produce splits with multiple matching categories in match_set.
  • The categorical split requires the use of JSON model serialization. The legacy binary serialization method cannot be used to save (persist) models with categorical splits.
  • Note. This feature is currently highly experimental. Use it at your own risk. See the detailed list of limitations at #5949.

Experimental plugin for RAPIDS Memory Manager (#5873, #6131, #6146, #6150, #6182)

  • RAPIDS Memory Manager library (rapidsai/rmm) provides a collection of efficient memory allocators for NVIDIA GPUs. It is now possible to use XGBoost with memory allocators provided by RMM, by enabling the RMM integration plugin. With this plugin, XGBoost is now able to share a common GPU memory pool with other applications using RMM, such as the RAPIDS data science packages.
  • See the demo for a working example, as well as directions for building XGBoost with the RMM plugin.
  • The plugin will soon be considered non-experimental, once #6297 is resolved.

Experimental plugin for oneAPI programming model (#5825)

  • oneAPI is a programming interface developed by Intel, aimed at providing one programming model for many types of hardware, such as CPUs, GPUs, FPGAs, and other hardware accelerators.
  • XGBoost now includes an experimental plugin for using oneAPI for the predictor and objective functions. The plugin is hosted in the directory plugin/updater_oneapi.
  • Roadmap: #5442

Pickling the XGBoost model will now trigger JSON serialization (#6027)

  • The pickle will now contain the JSON string representation of the XGBoost model, as well as related configuration.

Performance improvements

  • Various performance improvement on multi-core CPUs
    • Optimize DMatrix build time by up to 3.7x. (#5877)
    • CPU predict performance improvement, by up to 3.6x. (#6127)
    • Optimize CPU sketch allreduce for sparse data (#6009)
    • Thread-local memory allocation for BuildHist, leading to a speedup of up to 1.7x. (#6358)
    • Disable hyperthreading for DMatrix creation (#6386). This speeds up DMatrix creation by up to 2x.
    • Simple fix for static schedule in predict (#6357)
  • Unify thread configuration, to make it easy to utilize all CPU cores (#6186)
  • [jvm-packages] Clean the way deterministic partitioning is computed (#6033)
  • Speed up JSON serialization by implementing an intrusive pointer class (#6129). It leads to 1.5x-2x performance boost.

API additions

  • [R] Add SHAP summary plot using ggplot2 (#5882)
  • Modin DataFrame can now be used as input (#6055)
  • [jvm-packages] Add getNumFeature method (#6075)
  • Add MAPE metric (#6119)
  • Implement GPU predict leaf. (#6187)
  • Enable cuDF/cuPy inputs in XGBClassifier (#6269)
  • Document tree method for feature weights. (#6312)
  • Add fail_on_invalid_gpu_id parameter, which will cause XGBoost to terminate upon seeing an invalid value of gpu_id (#6342)

Breaking: the default evaluation metric for classification is changed to logloss / mlogloss (#6183)

  • The default metric used to be accuracy, and it is not statistically consistent to perform early stopping with the accuracy metric when we are really optimizing the log loss for the binary:logistic objective.
  • For statistical consistency, the default metric for classification has been changed to logloss. Users may choose to preserve the old behavior by explicitly specifying eval_metric.

Breaking: skmaker is now removed (#5971)

  • The skmaker updater has not been documented nor tested.

Breaking: the JSON model format no longer stores the leaf child count (#6094).

  • The leaf child count field has been deprecated and is not used anywhere in the XGBoost codebase.

Breaking: XGBoost now requires MacOS 10.14 (Mojave) and later.

  • Homebrew has dropped support for MacOS 10.13 (High Sierra), so we are not able to install the OpenMP runtime (libomp) from Homebrew on MacOS 10.13. Please use MacOS 10.14 (Mojave) or later.

Deprecation notices

  • The use of LabelEncoder in XGBClassifier is now deprecated and will be removed in the next minor release (#6269). The deprecation is necessary to support multiple types of inputs, such as cuDF data frames or cuPy arrays.
  • The use of certain positional arguments in the Python interface is deprecated (#6365). Users will see deprecation warnings when using positional arguments for certain function parameters. New code should use keyword arguments as much as possible. We have not yet decided when we will fully require the use of keyword arguments.

Bug-fixes

  • On big-endian arch, swap the byte order in the binary serializer to enable loading models that were produced by a little-endian machine (#5813).
  • [jvm-packages] Fix deterministic partitioning with dataset containing Double.NaN (#5996)
  • Limit tree depth for GPU hist to 31 to prevent integer overflow (#6045)
  • [jvm-packages] Set maxBins to 256 to align with the default value in the C++ code (#6066)
  • [R] Fix CRAN check (#6077)
  • Add back support for scipy.sparse.coo_matrix (#6162)
  • Handle duplicated values in sketching. (#6178)
  • Catch all standard exceptions in C API. (#6220)
  • Fix linear GPU input (#6255)
  • Fix inplace prediction interval. (#6259)
  • [R] allow xgb.plot.importance() calls to fill a grid (#6294)
  • Lazy import dask libraries. (#6309)
  • Deterministic data partitioning for external memory (#6317)
  • Avoid resetting seed for every configuration. (#6349)
  • Fix label errors in graph visualization (#6369)
  • [jvm-packages] fix potential unit test suites aborted issue due to race condition (#6373)
  • [R] Fix warnings from R check --as-cran (#6374)
  • [R] Fix a crash that occurs with noLD R (#6378)
  • [R] Do not convert continuous labels to factors (#6380)
  • [R] remove uses of exists() (#6387)
  • Propagate parameters to the underlying Booster handle from XGBClassifier.set_param / XGBRegressor.set_param. (#6416)
  • [R] Fix R package installation via CMake (#6423)
  • Enforce row-major order in cuPy array (#6459)
  • Fix filtering callable objects in the parameters passed to the scikit-learn API. (#6466)

Maintenance: Testing, continuous integration, build system

  • [CI] Improve JVM test in GitHub Actions (#5930)
  • Refactor plotting test so that it can run independently (#6040)
  • [CI] Cancel builds on subsequent pushes (#6011)
  • Fix Dask Pytest fixture (#6024)
  • [CI] Migrate linters to GitHub Actions (#6035)
  • [CI] Remove win2016 JVM test from GitHub Actions (#6042)
  • Fix CMake build with BUILD_STATIC_LIB option (#6090)
  • Don't link imported target in CMake (#6093)
  • Work around a compiler bug in MacOS AppleClang 11 (#6103)
  • [CI] Fix CTest by running it in a correct directory (#6104)
  • [R] Check warnings explicitly for model compatibility tests (#6114)
  • [jvm-packages] add xgboost4j-gpu/xgboost4j-spark-gpu module to facilitate release (#6136)
  • [CI] Time GPU tests. (#6141)
  • [R] remove warning in configure.ac (#6152)
  • [CI] Upgrade cuDF and RMM to 0.16 nightlies; upgrade to Ubuntu 18.04 (#6157)
  • [CI] Test C API demo (#6159)
  • Option for generating device debug info. (#6168)
  • Update .gitignore (#6175, #6193, #6346)
  • Hide C++ symbols from dmlc-core (#6188)
  • [CI] Added arm64 job in Travis-CI (#6200)
  • [CI] Fix Docker build for CUDA 11 (#6202)
  • [CI] Move non-OpenMP gtest to GitHub Actions (#6210)
  • [jvm-packages] Fix up build for xgboost4j-gpu, xgboost4j-spark-gpu (#6216)
  • Add more tests for categorical data support (#6219)
  • [dask] Test for data initialization. (#6226)
  • Bump junit from 4.11 to 4.13.1 in /jvm-packages/xgboost4j (#6230)
  • Bump junit from 4.11 to 4.13.1 in /jvm-packages/xgboost4j-gpu (#6233)
  • [CI] Reduce testing load with RMM (#6249)
  • [CI] Build a Python wheel for aarch64 platform (#6253)
  • [CI] Time the CPU tests on Jenkins. (#6257)
  • [CI] Skip Dask tests on ARM. (#6267)
  • Fix a typo in is_arm() in testing.py (#6271)
  • [CI] replace egrep with grep -E (#6287)
  • Support unity build. (#6295)
  • [CI] Mark flaky tests as XFAIL (#6299)
  • [CI] Use separate Docker cache for each CUDA version (#6305)
  • Added USE_NCCL_LIB_PATH option to enable user to set NCCL_LIBRARY during build (#6310)
  • Fix flaky data initialization test. (#6318)
  • Add a badge for GitHub Actions (#6321)
  • Optional find_package for sanitizers. (#6329)
  • Use pytest conventions consistently in Python tests (#6337)
  • Fix missing space in warning message (#6340)
  • Update custom_metric_obj.rst (#6367)
  • [CI] Run R check with --as-cran flag on GitHub Actions (#6371)
  • [CI] Remove R check from Jenkins (#6372)
  • Mark GPU external memory test as XFAIL. (#6381)
  • [CI] Add noLD R test (#6382)
  • Fix MPI build. (#6403)
  • [CI] Upgrade to MacOS Mojave image (#6406)
  • Fix flaky sparse page dmatrix test. (#6417)
  • [CI] Upgrade cuDF and RMM to 0.17 nightlies (#6434)
  • [CI] Fix CentOS 6 Docker images (#6467)
  • [CI] Vendor libgomp in the manylinux Python wheel (#6461)
  • [CI] Hot fix for libgomp vendoring (#6482)

Maintenance: Clean up and merge the Rabit submodule (#6023, #6095, #6096, #6105, #6110, #6262, #6275, #6290)

  • The Rabit submodule is now maintained as part of the XGBoost codebase.
  • Tests for Rabit are now part of the test suites of XGBoost.
  • Rabit can now be built on the Windows platform.
  • We made various code re-formatting for the C++ code with clang-tidy.
  • Public headers of XGBoost no longer depend on Rabit headers.
  • Unused CMake targets for Rabit were removed.
  • Single-point model recovery has been dropped and removed from Rabit, simplifying the Rabit code greatly. The single-point model recovery feature has not been adequately maintained over the years.
  • We removed the parts of Rabit that were not useful for XGBoost.

Maintenance: Refactor code for legibility and maintainability

  • Unify CPU hist sketching (#5880)
  • [R] fix uses of 1:length(x) and other small things (#5992)
  • Unify evaluation functions. (#6037)
  • Make binary bin search reusable. (#6058)
  • Unify set index data. (#6062)
  • [R] Remove stringi dependency (#6109)
  • Merge extract cuts into QuantileContainer. (#6125)
  • Reduce C++ compiler warnings (#6197, #6198, #6213, #6286, #6325)
  • Cleanup Python code. (#6223)
  • Small cleanup to evaluator. (#6400)

Usability Improvements, Documentation

  • [jvm-packages] add example to handle missing value other than 0 (#5677)
  • Add DMatrix usage examples to the C API demo (#5854)
  • List DaskDeviceQuantileDMatrix in the doc. (#5975)
  • Update Python custom objective demo. (#5981)
  • Update the JSON model schema to document more objective functions. (#5982)
  • [Python] Fix warning when missing field is not used. (#5969)
  • Fix typo in tracker logging (#5994)
  • Move a warning about empty dataset, so that it's shown for all objectives and metrics (#5998)
  • Fix the instructions for installing the nightly build. (#6004)
  • [Doc] Add dtreeviz as a showcase example of integration with 3rd-party software (#6013)
  • [jvm-packages] [doc] Update install doc for JVM packages (#6051)
  • Fix typo in xgboost.callback.early_stop docstring (#6071)
  • Add cache suffix to the files used in the external memory demo. (#6088)
  • [Doc] Document the parameter kill_spark_context_on_worker_failure (#6097)
  • Fix link to the demo for custom objectives (#6100)
  • Update Dask doc. (#6108)
  • Validate weights are positive values. (#6115)
  • Document the updated CMake version requirement. (#6123)
  • Add demo for DaskDeviceQuantileDMatrix. (#6156)
  • Cosmetic fixes in faq.rst (#6161)
  • Fix error message. (#6176)
  • [Doc] Add list of winning solutions in data science competitions using XGBoost (#6177)
  • Fix a comment in demo to use correct reference (#6190)
  • Update the list of winning solutions using XGBoost (#6192)
  • Consistent style for build status badge (#6203)
  • [Doc] Add info on GPU compiler (#6204)
  • Update the list of winning solutions (#6222, #6254)
  • Add link to XGBoost's Twitter handle (#6244)
  • Fix minor typos in XGBClassifier methods' docstrings (#6247)
  • Add sponsors link to FUNDING.yml (#6252)
  • Group CLI demo into subdirectory. (#6258)
  • Reduce warning messages from gbtree. (#6273)
  • Create a tutorial for using the C API in a C/C++ application (#6285)
  • Update plugin instructions for CMake build (#6289)
  • [doc] make Dask distributed example copy-pastable (#6345)
  • [Python] Add option to use libxgboost.so from the system path (#6362)
  • Fixed few grammatical mistakes in doc (#6393)
  • Fix broken link in CLI doc (#6396)
  • Improve documentation for the Dask API (#6413)
  • Revise misleading exception information: no such param of allow_non_zero_missing (#6418)
  • Fix CLI ranking demo. (#6439)
  • Fix broken links. (#6455)

Acknowledgement

Contributors: Nan Zhu (@CodingCat), @FelixYBW, Jack Dunn (@JackDunnNZ), Jean Lescut-Muller (@JeanLescut), Boris Feld (@Lothiraldan), Nikhil Choudhary (@Nikhil1O1), Rory Mitchell (@RAMitchell), @ShvetsKS, Anthony D'Amato (@Totoketchup), @Wittty-Panda, neko (@akiyamaneko), Alexander Gugel (@alexanderGugel), @dependabot[bot], DIVYA CHAUHAN (@divya661), Daniel Steinberg (@dstein64), Akira Funahashi (@funasoul), Philip Hyunsu Cho (@hcho3), Tong He (@hetong007), Hristo Iliev (@hiliev), Honza Sterba (@honzasterba), @hzy001, Igor Moura (@igormp), @jameskrach, James Lamb (@jameslamb), Naveed Ahmed Saleem Janvekar (@janvekarnaveed), Kyle Nicholson (@kylejn27), lacrosse91 (@lacrosse91), Christian Lorentzen (@lorentzenchr), Manikya Bardhan (@manikyabard), @nabokovas, John Quitto-Graham (@nvidia-johnq), @odidev, Qi Zhang (@qzhang90), Sergio Gavilán (@sgavil), Tanuja Kirthi Doddapaneni (@tanuja3), Cuong Duong (@tcuongd), Yuan Tang (@terrytangyuan), Jiaming Yuan (@trivialfis), vcarpani (@vcarpani), Vladislav Epifanov (@vepifanov), Vitalie Spinu (@vspinu), Bobby Wang (@wbo4958), Zeno Gantner (@zenogantner), zhang_jf (@zuston)

Reviewers: Nan Zhu (@CodingCat), John Zedlewski (@JohnZed), Rory Mitchell (@RAMitchell), @ShvetsKS, Egor Smirnov (@SmirnovEgorRu), Anthony D'Amato (@Totoketchup), @Wittty-Panda, Alexander Gugel (@alexanderGugel), Codecov Comments Bot (@codecov-commenter), Codecov (@codecov-io), DIVYA CHAUHAN (@divya661), Devin Robison (@drobison00), Geoffrey Blake (@geoffreyblake), Mark Harris (@harrism), Philip Hyunsu Cho (@hcho3), Honza Sterba (@honzasterba), Igor Moura (@igormp), @jakirkham, @jameskrach, James Lamb (@jameslamb), Janakarajan Natarajan (@janaknat), Jake Hemstad (@jrhemstad), Keith Kraus (@kkraus14), Kyle Nicholson (@kylejn27), Christian Lorentzen (@lorentzenchr), Michael Mayer (@mayer79), Nikolay Petrov (@napetrov), @odidev, PSEUDOTENSOR / Jonathan McKinney (@pseudotensor), Qi Zhang (@qzhang90), Sergio Gavilán (@sgavil), Scott Lundberg (@slundberg), Cuong Duong (@tcuongd), Yuan Tang (@terrytangyuan), Jiaming Yuan (@trivialfis), vcarpani (@vcarpani), Vladislav Epifanov (@vepifanov), Vincent Nijs (@vnijs), Vitalie Spinu (@vspinu), Bobby Wang (@wbo4958), William Hicks (@wphicks)

Pre-release

@hcho3 hcho3 released this Nov 23, 2020 · 122 commits to master since this release

#6422

R package: xgboost_1.3.0.1.tar.gz

v1.2.1 (bcb15a9)

@hcho3 hcho3 released this Oct 14, 2020 · 912 commits to master since this release

This patch release applies the following patches to 1.2.0 release:

  • Hide C++ symbols from dmlc-core (#6188)

@hcho3 hcho3 released this Aug 23, 2020 · 912 commits to master since this release

XGBoost4J-Spark now supports the GPU algorithm (#5171)

  • Now XGBoost4J-Spark is able to leverage NVIDIA GPU hardware to speed up training.
  • There is on-going work for accelerating the rest of the data pipeline with NVIDIA GPUs (#5950, #5972).

XGBoost now supports CUDA 11 (#5808)

  • It is now possible to build XGBoost with CUDA 11. Note that we do not yet distribute pre-built binaries built with CUDA 11; all current distributions use CUDA 10.0.

Better guidance for persisting XGBoost models in an R environment (#5940, #5964)

  • Users are strongly encouraged to use xgb.save() and xgb.save.raw() instead of saveRDS(). This is so that the persisted models can be accessed with future releases of XGBoost.
  • The previous release (1.1.0) had problems loading models that were saved with saveRDS(). This release adds a compatibility layer to restore access to the old RDS files. Note that this is meant to be a temporary measure; users are advised to stop using saveRDS() and migrate to xgb.save() and xgb.save.raw().

New objectives and metrics

  • The pseudo-Huber loss reg:pseudohubererror is added (#5647). The corresponding metric is mphe. Right now, the slope is hard-coded to 1.
  • The Accelerated Failure Time objective for survival analysis (survival:aft) is now accelerated on GPUs (#5714, #5716). The survival metrics aft-nloglik and interval-regression-accuracy are also accelerated on GPUs.

Improved integration with scikit-learn

  • Added n_features_in_ attribute to the scikit-learn interface to store the number of features used (#5780). This is useful for integrating with some scikit-learn features such as StackingClassifier. See this link for more details.
  • XGBoostError now inherits ValueError, which conforms to scikit-learn's exception requirement (#5696).

Improved integration with Dask

  • The XGBoost Dask API now exposes an asynchronous interface (#5862). See the document for details.
  • Zero-copy ingestion of GPU arrays via DaskDeviceQuantileDMatrix (#5623, #5799, #5800, #5803, #5837, #5874, #5901): Previously, the Dask interface had to make 2 data copies: one for concatenating the Dask partition/block into a single block and another for internal representation. To save memory, we introduce DaskDeviceQuantileDMatrix. As long as Dask partitions are resident in the GPU memory, DaskDeviceQuantileDMatrix is able to ingest them directly without making copies. This matrix type wraps DeviceQuantileDMatrix.
  • The prediction function now returns GPU Series type if the input is from Dask-cuDF (#5710). This is to preserve the input data type.

Robust handling of external data types (#5689, #5893)

  • As we support more and more external data types, the handling logic has proliferated all over the code base and become hard to keep track of. It also became unclear how missing values and threads are handled. We refactored the Python package code to collect all data handling logic in a central location, and now we have an explicit list of all supported data types.

Improvements in GPU-side data matrix (DeviceQuantileDMatrix)

  • The GPU-side data matrix now implements its own quantile sketching logic, so that data don't have to be transported back to the main memory (#5700, #5747, #5760, #5846, #5870, #5898). The GK sketching algorithm is also now better documented.
    • Now we can load extremely sparse datasets such as the URL dataset, although performance is still sub-optimal.
  • The GPU-side data matrix now exposes an iterative interface (#5783), so that users are able to construct a matrix from a data iterator. See the Python demo.

New language binding: Swift (#5728)

Robust model serialization with JSON (#5772, #5804, #5831, #5857, #5934)

  • We continue efforts from the 1.0.0 release to adopt JSON as the format to save and load models robustly.
  • JSON model IO is significantly faster and produces smaller model files.
  • Round-trip reproducibility is guaranteed, via the introduction of an efficient float-to-string conversion algorithm known as the Ryū algorithm. The conversion is locale-independent, producing consistent numeric representation regardless of the locale setting of the user's machine.
  • We fixed an issue in loading large JSON files to memory.
  • It is now possible to load a JSON file from a remote source such as S3.

Performance improvements

  • CPU hist tree method optimization
    • Skip missing lookup in hist row partitioning if data is dense. (#5644)
    • Specialize training procedures for CPU hist tree method on distributed environment. (#5557)
    • Add single-precision histogram for CPU hist. Previously, the gradient histogram for CPU hist was hard-coded to be 64-bit; now users can specify the single_precision_histogram parameter to use a 32-bit histogram instead, for faster training performance. (#5624, #5811)
  • GPU hist tree method optimization
    • Removed some unnecessary synchronizations and better memory allocation pattern. (#5707)
    • Optimize GPU hist for wide datasets. Previously, for wide datasets the atomic operations were performed on global memory; now they can run on shared memory for faster histogram building. There is a known small regression on GeForce cards with dense data. (#5795, #5926, #5948, #5631)

API additions

  • Support passing fmap to importance plot (#5719). Now importance plot can show actual names of features instead of default ones.
  • Support 64bit seed. (#5643)
  • A new C API XGBoosterGetNumFeature is added for getting number of features in booster (#5856).
  • Feature names and feature types are now stored in C++ core and saved in binary DMatrix (#5858).

Breaking: The predict() method of DaskXGBClassifier now produces class predictions (#5986). Use predict_proba() to obtain probability predictions.

  • Previously, DaskXGBClassifier.predict() produced probability predictions. This is inconsistent with the behavior of other scikit-learn classifiers, where predict() returns class predictions. We make a breaking change in the 1.2.0 release so that DaskXGBClassifier.predict() now correctly produces class predictions and thus behaves like other scikit-learn classifiers. Furthermore, we introduce the predict_proba() method for obtaining probability predictions, again to be in line with other scikit-learn classifiers.

Breaking: Custom evaluation metric now receives raw prediction (#5954)

  • Previously, the custom evaluation metric received a transformed prediction result when used with a classifier. Now the custom metric will receive a raw (untransformed) prediction and will need to transform the prediction itself. See demo/guide-python/custom_softmax.py for an example.
  • This change is to make the custom metric behave consistently with the custom objective, which already receives raw prediction (#5564).
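Concretely, a custom metric must now apply the link transform itself. A numpy-only sketch of the two common transforms, for binary:logistic (sigmoid) and multi:softprob (softmax); the metric would score the transformed probabilities:

```python
import numpy as np

def sigmoid(margin):
    """binary:logistic transform: raw margin -> probability."""
    return 1.0 / (1.0 + np.exp(-margin))

def softmax(margins):
    """multi:softprob transform: raw per-class margins (one row per example)
    -> class probabilities."""
    shifted = margins - margins.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(shifted)
    return e / e.sum(axis=1, keepdims=True)

# Inside a custom metric, transform the raw prediction before scoring:
raw = np.array([[2.0, 0.5, -1.0]])
probs = softmax(raw)  # each row sums to 1
```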

Breaking: XGBoost4J-Spark now requires Spark 3.0 and Scala 2.12 (#5836, #5890)

  • Starting with version 3.0, Spark can manage GPU resources and allocate them among executors.
  • Spark 3.0 dropped support for Scala 2.11 and now only supports Scala 2.12. Thus, XGBoost4J-Spark also only supports Scala 2.12.

Breaking: XGBoost Python package now requires Python 3.6 and later (#5715)

  • Python 3.6 has many useful features such as f-strings.

Breaking: XGBoost now adopts the C++14 standard (#5664)

  • Make sure to use a sufficiently modern C++ compiler that supports C++14, such as Visual Studio 2017, GCC 5.0+, and Clang 3.4+.

Bug-fixes

  • Fix a data race in the prediction function (#5853). As a byproduct, the prediction function now uses a thread-local data store and is thread-safe.
  • Restore capability to run prediction when the test input has fewer features than the training data (#5955). This capability is necessary to support predicting with LIBSVM inputs. The previous release (1.1) had broken this capability, so we restore it in this version with better tests.
  • Fix OpenMP build with CMake for R package, to support CMake 3.13 (#5895).
  • Fix Windows 2016 build (#5902, #5918).
  • Fix edge cases in scikit-learn interface with Pandas input by disabling feature validation. (#5953)
  • [R] Enable weighted learning to rank (#5945)
  • [R] Fix early stopping with custom objective (#5923)
  • Fix NDK Build (#5886)
  • Add missing explicit template specializations for greater portability (#5921)
  • Handle empty rows in data iterators correctly (#5929). This bug affects file loader and JVM data frames.
  • Fix IsDense (#5702)
  • [jvm-packages] Fix wrong method name setAllowZeroForMissingValue (#5740)
  • Fix shape inference for Dask predict (#5989)

Usability Improvements, Documentation

  • [Doc] Document that CUDA 10.0 is required (#5872)
  • Refactored command line interface (CLI). The CLI is now able to handle user errors and output basic documentation. (#5574)
  • Better error handling in Python: use raise from syntax to preserve full stacktrace (#5787).
  • The JSON model dump now has a formal schema (#5660, #5818). The benefit is to prevent dump_model() function from breaking. See this document to understand the difference between saving and dumping models.
  • Add a reference to the GPU external memory paper (#5684)
  • Document more objective parameters in the R package (#5682)
  • Document the existence of pre-built binary wheels for MacOS (#5711)
  • Remove max.depth in the R gblinear example. (#5753)
  • Added conda environment file for building docs (#5773)
  • Mention dask blog post in the doc, which introduces using Dask with GPU and some internal workings. (#5789)
  • Fix rendering of Markdown docs (#5821)
  • Document new objectives and metrics available on GPUs (#5909)
  • Better message when no GPU is found. (#5594)
  • Remove the use of silent parameter from R demos. (#5675)
  • Don't use masked array in array interface. (#5730)
  • Update affiliation of @terrytangyuan: Ant Financial -> Ant Group (#5827)
  • Move the Dask tutorial closer to the other distributed tutorials (#5613)
  • Update XGBoost + Dask overview documentation (#5961)
  • Show n_estimators in the docstring of the scikit-learn interface (#6041)
  • Fix a typo in a docstring of the scikit-learn interface (#5980)
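
The raise ... from improvement (#5787) chains the original exception onto the new one, so users see both tracebacks instead of only the wrapper. A minimal sketch of the idiom (the XGBoostError name here is a stand-in class, not the library's real exception type):

```python
import json

class XGBoostError(Exception):
    """Stand-in for a library-specific error type."""

def load_config(text):
    try:
        return json.loads(text)
    except ValueError as err:
        # 'from err' records the original ValueError as __cause__,
        # so the full stack trace of the root failure is preserved.
        raise XGBoostError("invalid config") from err
```

Without `from err`, the original parsing error would be reduced to an implicit "during handling of the above exception" note; with it, the root cause is an explicit, inspectable `__cause__`.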

Maintenance: testing, continuous integration, build system

  • [CI] Remove CUDA 9.0 from CI (#5674, #5745)
  • Require CUDA 10.0+ in CMake build (#5718)
  • [R] Remove dependency on gendef for Visual Studio builds (fixes #5608) (#5764). This enables building XGBoost with GPU support with R 4.x.
  • [R-package] Reduce duplication in configure.ac (#5693)
  • Bump com.esotericsoftware to 4.0.2 (#5690)
  • Migrate some tests from AppVeyor to GitHub Actions to speed up the tests. (#5911, #5917, #5919, #5922, #5928)
  • Reduce cost of the Jenkins CI server (#5884, #5904, #5892). We now enforce a daily budget via an automated monitor, and we dramatically reduced the workload for the Windows platform, since cloud VM costs are much higher for Windows.
  • [R] Set up automated R linter (#5944)
  • [R] replace uses of T and F with TRUE and FALSE (#5778)
  • Update Docker container 'CPU' (#5956)
  • Simplify CMake build with modern CMake techniques (#5871)
  • Use hypothesis package for testing (#5759, #5835, #5849).
  • Define _CRT_SECURE_NO_WARNINGS to remove unneeded warnings in MSVC (#5434)
  • Run all Python demos in CI, to ensure that they don't break (#5651)
  • Enhance NVTX support (#5636). We can now use a unified timer between CPU and GPU, and CMake finds NVTX automatically.
  • Speed up Python tests. (#5752)
  • Add helper for generating batches of data. (#5756)
  • Add c-api-demo to .gitignore (#5855)
  • Add option to enable all compiler warnings in GCC/Clang (#5897)
  • Make Python model compatibility test runnable locally (#5941)
  • Add cupy to Windows CI (#5797)
  • [CI] Fix cuDF install; merge 'gpu' and 'cudf' test suite (#5814)
  • Update rabit submodule (#5680, #5876)
  • Force colored output for Ninja build. (#5959)
  • [CI] Assign larger /dev/shm to NCCL (#5966)
  • Add missing Pytest marks to AsyncIO unit test (#5968)
  • [CI] Use latest cuDF and dask-cudf (#6048)
  • Add CMake flag to log C API invocations, to aid debugging (#5925)
  • Fix a unit test on CLI, to handle RC versions (#6050)
  • [CI] Use mgpu machine to run gpu hist unit tests (#6050)
  • [CI] Build GPU-enabled JAR artifact and deploy to xgboost-maven-repo (#6050)

Maintenance: Refactor code for legibility and maintainability

  • Remove dead code in DMatrix initialization. (#5635)
  • Catch dmlc error by ref. (#5678)
  • Refactor the gpu_hist split evaluation in preparation for batched nodes enumeration. (#5610)
  • Remove column major specialization. (#5755)
  • Remove unused imports in Python (#5776)
  • Avoid including c_api.h in header files. (#5782)
  • Remove unweighted GK quantile, which is unused. (#5816)
  • Add Python binding for rabit ops. (#5743)
  • Implement Empty method for host device vector. (#5781)
  • Remove print (#5867)
  • Enforce tree order in JSON (#5974)

Acknowledgement

Contributors: Nan Zhu (@CodingCat), @LionOrCatThatIsTheQuestion, Dmitry Mottl (@Mottl), Rory Mitchell (@RAMitchell), @ShvetsKS, Alex Wozniakowski (@a-wozniakowski), Alexander Gugel (@alexanderGugel), @anttisaukko, @boxdot, Andy Adinets (@canonizer), Ram Rachum (@cool-RR), Elliot Hershberg (@elliothershberg), Jason E. Aten, Ph.D. (@glycerine), Philip Hyunsu Cho (@hcho3), @jameskrach, James Lamb (@jameslamb), James Bourbeau (@jrbourbeau), Peter Jung (@kongzii), Lorenz Walthert (@lorenzwalthert), Oleksandr Kuvshynov (@okuvshynov), Rong Ou (@rongou), Shaochen Shi (@shishaochen), Yuan Tang (@terrytangyuan), Jiaming Yuan (@trivialfis), Bobby Wang (@wbo4958), Zhang Zhang (@zhangzhang10)

Reviewers: Nan Zhu (@CodingCat), @LionOrCatThatIsTheQuestion, Hao Yang (@QuantHao), Rory Mitchell (@RAMitchell), @ShvetsKS, Egor Smirnov (@SmirnovEgorRu), Alex Wozniakowski (@a-wozniakowski), Amit Kumar (@aktech), Avinash Barnwal (@avinashbarnwal), @boxdot, Andy Adinets (@canonizer), Chandra Shekhar Reddy (@chandrureddy), Ram Rachum (@cool-RR), Cristiano Goncalves (@cristianogoncalves), Elliot Hershberg (@elliothershberg), Jason E. Aten, Ph.D. (@glycerine), Philip Hyunsu Cho (@hcho3), Tong He (@hetong007), James Lamb (@jameslamb), James Bourbeau (@jrbourbeau), Lee Drake (@leedrake5), DougM (@mengdong), Oleksandr Kuvshynov (@okuvshynov), Rong Ou (@rongou), Shaochen Shi (@shishaochen), Xu Xiao (@sperlingxx), Yuan Tang (@terrytangyuan), Theodore Vasiloudis (@thvasilo), Jiaming Yuan (@trivialfis), Bobby Wang (@wbo4958), Zhang Zhang (@zhangzhang10)

Pre-release

@hcho3 hcho3 released this Aug 12, 2020 · 912 commits to master since this release

#5970

R package: xgboost_1.2.0.1.tar.gz (Manual: xgboost_1.2.0.1-manual.pdf)

Pre-release

@hcho3 hcho3 released this Aug 2, 2020 · 912 commits to master since this release

#5970

R package: xgboost_1.2.0.1.tar.gz (Manual: xgboost_1.2.0.1-manual.pdf)

  • v1.1.1
  • 34408a7

@hcho3 hcho3 released this Jun 7, 2020 · 1063 commits to master since this release

This patch release applies the following patches to 1.1.0 release:

  • CPU performance improvement in the PyPI wheels (#5720)
  • Fix loading old model. (#5724)
  • Install pkg-config file (#5744)