[c++] Initial Work for Pairwise Ranking #6182

Open
wants to merge 88 commits into base: master
Changes from 1 commit
Commits
88 commits
9ae3476
initial work for pairwise ranking (dataset part)
shiyu1994 Nov 8, 2023
2314099
remove unrelated changes
shiyu1994 Nov 8, 2023
06ddf68
Merge branch 'master' into pairwise-ranking-dev
shiyu1994 Nov 8, 2023
42e91e2
Merge branch 'master' into pairwise-ranking-dev
shiyu1994 Nov 23, 2023
a8379d4
Merge branch 'master' into pairwise-ranking-dev
shiyu1994 Dec 1, 2023
da5f02d
first version of pairwise ranking bin
shiyu1994 Dec 5, 2023
9d0afd9
Merge branch 'pairwise-ranking-dev' of https://github.com/Microsoft/L…
shiyu1994 Dec 5, 2023
0cb436d
templates for bins in pairwise ranking dataset
shiyu1994 Dec 5, 2023
fc9b381
Merge branch 'master' into pairwise-ranking-dev
shiyu1994 Dec 5, 2023
6fbc674
fix lint issues and compilation errors
shiyu1994 Dec 6, 2023
6082913
Merge branch 'pairwise-ranking-dev' of https://github.com/Microsoft/L…
shiyu1994 Dec 6, 2023
9e16dc3
add methods for pairwise bin
shiyu1994 Dec 6, 2023
6154bde
instantiate templates
shiyu1994 Dec 6, 2023
3a646eb
remove unrelated files
shiyu1994 Dec 6, 2023
9e77ab9
add return values for unimplemented methods
shiyu1994 Dec 7, 2023
eba4560
add new files and windows/LightGBM.vcxproj and windows/LightGBM.vcxpr…
shiyu1994 Dec 7, 2023
f1d2281
Merge branch 'master' into pairwise-ranking-dev
shiyu1994 Dec 7, 2023
873d7ad
create pairwise dataset
shiyu1994 Dec 7, 2023
3838b9b
Merge branch 'pairwise-ranking-dev' of https://github.com/Microsoft/L…
shiyu1994 Dec 7, 2023
986a979
set num_data_ of pairwise dataset
shiyu1994 Dec 7, 2023
c40965a
skip query with no paired items
shiyu1994 Dec 15, 2023
97d34d7
store original query information
shiyu1994 Jan 31, 2024
1e57e27
copy position information for pairwise dataset
shiyu1994 Jan 31, 2024
1699c06
rename to pointwise members
shiyu1994 Feb 1, 2024
d5b6f0a
adding initial support for pairwise gradients and NDCG eval with pair…
metpavel Feb 9, 2024
2ee1199
fix score offsets
metpavel Feb 9, 2024
fe10a2c
Merge branch 'master' into pairwise-ranking-dev
shiyu1994 Feb 19, 2024
0aaf090
skip copy for weights and label if none
shiyu1994 Feb 19, 2024
8714bfb
fix pairwise dataset bugs
shiyu1994 Feb 29, 2024
250996b
Merge branch 'master' into pairwise-ranking-dev
shiyu1994 Feb 29, 2024
38b2f3e
fix validation set with pairwise lambda rank
shiyu1994 Feb 29, 2024
09fff25
Merge branch 'pairwise-ranking-dev' of https://github.com/Microsoft/L…
shiyu1994 Feb 29, 2024
ba3c815
fix pairwise ranking objective initialization
shiyu1994 Feb 29, 2024
d9b537d
keep the original query boundaries and add pairwise query boundaries
shiyu1994 Feb 29, 2024
362baf8
allow empty queries in pairwise query boundaries
shiyu1994 Mar 1, 2024
06597ac
fix query boundaries
shiyu1994 Mar 1, 2024
18e3a1b
clean up
shiyu1994 Mar 1, 2024
43b8582
various fixes
metpavel Mar 1, 2024
ad4e89f
construct all pairs for validation set
shiyu1994 Mar 1, 2024
dc17309
Merge branch 'pairwise-ranking-dev' of https://github.com/microsoft/L…
metpavel Mar 1, 2024
1ad78b2
fix for validation set
shiyu1994 Mar 1, 2024
9cd3b93
fix validation pairs
shiyu1994 Mar 1, 2024
f9d9c07
fatal error when no query boundary is provided
shiyu1994 Mar 1, 2024
97e0a81
Merge branch 'master' into pairwise-ranking-dev
shiyu1994 Mar 1, 2024
746bc82
add differential features
shiyu1994 Mar 8, 2024
f9ab075
add differential features
shiyu1994 Mar 20, 2024
7aa170b
bug fixing and efficiency improvement
metpavel Mar 25, 2024
abdb716
add feature group for differential features
shiyu1994 Mar 27, 2024
3cdfd83
refactor template initializations with macro
shiyu1994 Mar 28, 2024
3703495
tree learning with differential features
shiyu1994 Mar 28, 2024
8f55a93
avoid copy sampled values
shiyu1994 Mar 28, 2024
8c3e7be
fix sampled indices
shiyu1994 Apr 2, 2024
5aa2d17
push data into differential features
shiyu1994 Apr 11, 2024
1c319b8
fix differential feature bugs
shiyu1994 Apr 17, 2024
d8eb68b
clean up debug code
shiyu1994 Apr 17, 2024
b088236
fix validation set with differential features
shiyu1994 Apr 18, 2024
2d09897
support row-wise histogram construction with pairwise ranking
shiyu1994 Jun 15, 2024
406d0c1
fix row wise in pairwise ranking
shiyu1994 Jun 20, 2024
6c65d1f
save for debug
shiyu1994 Jun 20, 2024
7738915
update code for debug
shiyu1994 Jun 28, 2024
d6c16df
save changes
shiyu1994 Jul 4, 2024
0d572d7
save changes for debug
shiyu1994 Jul 8, 2024
1f59f85
save changes
shiyu1994 Aug 21, 2024
0618bb2
add bagging by query for lambdarank
shiyu1994 Aug 27, 2024
185bdf6
Merge branch 'master' into bagging/bagging-by-query-for-lambdarank
shiyu1994 Aug 27, 2024
38fa4c2
fix pre-commit
shiyu1994 Aug 27, 2024
2fce147
Merge branch 'bagging/bagging-by-query-for-lambdarank' of https://git…
shiyu1994 Aug 27, 2024
1f7f967
Merge branch 'master' into bagging/bagging-by-query-for-lambdarank
shiyu1994 Aug 29, 2024
9e2a322
fix bagging by query with cuda
shiyu1994 Aug 29, 2024
666c51e
fix bagging by query test case
shiyu1994 Aug 30, 2024
9e2c338
fix bagging by query test case
shiyu1994 Aug 30, 2024
3abbc11
fix bagging by query test case
shiyu1994 Aug 30, 2024
13fa0a3
add #include <vector>
shiyu1994 Aug 30, 2024
b8427b0
merge bagging by query
shiyu1994 Sep 4, 2024
0258f07
update CMakeLists.txt
shiyu1994 Sep 4, 2024
90a95fa
fix bagging by query with pairwise lambdarank
shiyu1994 Sep 20, 2024
306af04
Merge branch 'master' into pairwise-ranking-dev
shiyu1994 Sep 20, 2024
b69913d
fix compilation error C3200 with visual studio
shiyu1994 Oct 10, 2024
6dba1cf
clean up main.cpp
shiyu1994 Oct 11, 2024
3b2e29d
Exposing configuration parameters for pairwise ranking
metpavel Oct 18, 2024
f1c32d3
fix bugs and pass by reference for SigmoidCache&
shiyu1994 Nov 8, 2024
51693e2
add pairing approach
shiyu1994 Nov 8, 2024
5071842
add at_least_one_relevant
shiyu1994 Nov 8, 2024
598764b
fix num bin for row wise in pairwise ranking
shiyu1994 Nov 21, 2024
f7deab4
save for debug
shiyu1994 Dec 17, 2024
0d1b310
update doc
shiyu1994 Dec 18, 2024
8f9ab26
add random_k pairing mode
shiyu1994 Feb 18, 2025
d797122
clean up code
shiyu1994 Feb 18, 2025
fix num bin for row wise in pairwise ranking
shiyu1994 committed Nov 21, 2024
commit 598764b5ca06f0e1f31b1188c17e721a0129552c
4 changes: 4 additions & 0 deletions include/LightGBM/dataset.h
@@ -1155,6 +1155,10 @@ class Dataset {
const data_size_t* train_query_boundaries_;
/*! \brief stored number of queries from training dataset, for creating differential features in pairwise lambdarank */
data_size_t train_num_queries_;
/*! \brief stored number of differential features used in training dataset, for creating differential features in pairwise lambdarank */
data_size_t num_used_differential_features_;
/*! \brief stored number of differential feature groups used in training dataset, for creating differential features in pairwise lambdarank */
data_size_t num_used_differential_groups_;
};

} // namespace LightGBM
6 changes: 6 additions & 0 deletions src/boosting/gbdt.cpp
@@ -490,7 +490,9 @@ bool GBDT::EvalAndCheckEarlyStopping() {
void GBDT::UpdateScore(const Tree* tree, const int cur_tree_id) {
Common::FunctionTimer fun_timer("GBDT::UpdateScore", global_timer);
// update training score
Log::Warning("before update score 0");
if (!data_sample_strategy_->is_use_subset()) {
Log::Warning("before update score 1");
train_score_updater_->AddScore(tree_learner_.get(), tree, cur_tree_id);

const data_size_t bag_data_cnt = data_sample_strategy_->bag_data_cnt();
@@ -506,16 +508,20 @@ void GBDT::UpdateScore(const Tree* tree, const int cur_tree_id) {
}
#endif // USE_CUDA
}
Log::Warning("before update score 2");

} else {
Log::Warning("before update score 3");
train_score_updater_->AddScore(tree, cur_tree_id);
}


Log::Warning("before update score 4");
// update validation score
for (auto& score_updater : valid_score_updater_) {
score_updater->AddScore(tree, cur_tree_id);
}
Log::Warning("before update score 5");
}

#ifdef USE_CUDA
13 changes: 11 additions & 2 deletions src/io/dataset.cpp
@@ -641,7 +641,9 @@ MultiValBin* Dataset::GetMultiBinFromAllFeatures(const std::vector<uint32_t>& of
// }
// }

const int num_original_features = static_cast<int>(most_freq_bins.size()) / 2;
Log::Warning("most_freq_bins.size() = %d, num_groups_ = %d, num_used_differential_features_ = %d, num_used_differential_groups_ = %d, ncol = %d", static_cast<int>(most_freq_bins.size()), num_groups_, num_used_differential_features_, num_used_differential_groups_, ncol);

const int num_original_features = (static_cast<int>(most_freq_bins.size()) - num_used_differential_groups_) / 2;
std::vector<uint32_t> original_most_freq_bins;
std::vector<uint32_t> original_offsets;
for (int i = 0; i < num_original_features; ++i) {
@@ -661,7 +663,7 @@ MultiValBin* Dataset::GetMultiBinFromAllFeatures(const std::vector<uint32_t>& of
fout.close();
const data_size_t num_original_data = metadata_.query_boundaries()[metadata_.num_queries()];
ret.reset(MultiValBin::CreateMultiValBin(
num_original_data, original_offsets.back(), num_original_features,
num_original_data, offsets.back(), num_original_features,
1.0 - sum_dense_ratio, original_offsets, use_pairwise_ranking, metadata_.paired_ranking_item_global_index_map()));
PushDataToMultiValBin(num_original_data, original_most_freq_bins, original_offsets, &iters, ret.get());
} else {
@@ -1025,6 +1027,10 @@ void Dataset::CreatePairWiseRankingData(const Dataset* dataset, const bool is_va
group_feature_cnt_[i] = dataset->group_feature_cnt_[original_group_index];
}

Log::Warning("cur_feature_index = %d", cur_feature_index);

num_used_differential_features_ = 0;
num_used_differential_groups_ = static_cast<int>(diff_feature_groups.size());
if (config.use_differential_feature_in_pairwise_ranking) {
for (size_t i = 0; i < diff_feature_groups.size(); ++i) {
const std::vector<int>& features_in_group = diff_feature_groups[i];
@@ -1045,6 +1051,7 @@ void Dataset::CreatePairWiseRankingData(const Dataset* dataset, const bool is_va
used_feature_map_[diff_feature_index + dataset->num_total_features_ * 2] = cur_feature_index;
++cur_feature_index;
++num_features_in_group;
++num_used_differential_features_;
const int ori_feature_index = dataset->InnerFeatureIndex(diff_original_feature_index[diff_feature_index]);
ori_bin_mappers.emplace_back(new BinMapper(*dataset->FeatureBinMapper(ori_feature_index)));
ori_bin_mappers_for_diff.emplace_back(new BinMapper(*dataset->FeatureBinMapper(ori_feature_index)));
@@ -1080,6 +1087,8 @@ void Dataset::CreatePairWiseRankingData(const Dataset* dataset, const bool is_va
num_groups_ += static_cast<int>(diff_feature_groups.size());
}

Log::Warning("cur_feature_index = %d", cur_feature_index);

feature_groups_.shrink_to_fit();

feature_names_.clear();
3 changes: 2 additions & 1 deletion src/io/multi_val_pairwise_lambdarank_bin.hpp
@@ -14,7 +14,8 @@ template <typename BIN_TYPE, template<typename> class MULTI_VAL_BIN_TYPE>
class MultiValPairwiseLambdarankBin : public MULTI_VAL_BIN_TYPE<BIN_TYPE> {
public:
MultiValPairwiseLambdarankBin(data_size_t num_data, int num_bin, int num_feature, const std::vector<uint32_t>& offsets): MULTI_VAL_BIN_TYPE<BIN_TYPE>(num_data, num_bin, num_feature, offsets) {
this->num_bin_ = num_bin * 2;
this->num_bin_ = num_bin;
Log::Warning("num_bin = %d", num_bin);
}
protected:
const std::pair<data_size_t, data_size_t>* paired_ranking_item_global_index_map_;
4 changes: 4 additions & 0 deletions src/treelearner/col_sampler.hpp
@@ -89,6 +89,7 @@ class ColSampler {
}

std::vector<int8_t> GetByNode(const Tree* tree, int leaf) {
// Log::Warning("GetByNode step 0");
// get interaction constraints for current branch
std::unordered_set<int> allowed_features;
if (!interaction_constraints_.empty()) {
@@ -110,6 +111,7 @@ class ColSampler {
}
}

// Log::Warning("GetByNode step 1");
std::vector<int8_t> ret(train_data_->num_features(), 0);
if (fraction_bynode_ >= 1.0f) {
if (interaction_constraints_.empty()) {
@@ -124,6 +126,7 @@ class ColSampler {
return ret;
}
}
// Log::Warning("GetByNode step 2");
if (need_reset_bytree_) {
auto used_feature_cnt = GetCnt(used_feature_indices_.size(), fraction_bynode_);
std::vector<int>* allowed_used_feature_indices;
@@ -175,6 +178,7 @@ class ColSampler {
ret[inner_feature_index] = 1;
}
}
// Log::Warning("GetByNode step 3");
return ret;
}

27 changes: 25 additions & 2 deletions src/treelearner/serial_tree_learner.cpp
@@ -68,7 +68,7 @@ void SerialTreeLearner::Init(const Dataset* train_data, bool is_constant_hessian

GetShareStates(train_data_, is_constant_hessian, true);
histogram_pool_.DynamicChangeSize(train_data_,
share_state_->num_hist_total_bin(),
share_state_->num_hist_total_bin() * 2,
share_state_->feature_hist_offsets(),
config_, max_cache_size, config_->num_leaves);
Log::Info("Number of data points in the train set: %d, number of used features: %d", num_data_, num_features_);
@@ -320,6 +320,8 @@ void SerialTreeLearner::BeforeTrain() {
}
}

// Log::Warning("smaller_leaf_splits_->leaf_index() = %d before train", smaller_leaf_splits_->leaf_index());

larger_leaf_splits_->Init();

if (cegb_ != nullptr) {
@@ -391,8 +393,12 @@ void SerialTreeLearner::FindBestSplits(const Tree* tree, const std::set<int>* fo
}
bool use_subtract = parent_leaf_histogram_array_ != nullptr;

// Log::Warning("before ConstructHistograms");
ConstructHistograms(is_feature_used, use_subtract);
// Log::Warning("after ConstructHistograms");
// Log::Warning("before FindBestSplitsFromHistograms");
FindBestSplitsFromHistograms(is_feature_used, use_subtract, tree);
// Log::Warning("after FindBestSplitsFromHistograms");
}

void SerialTreeLearner::ConstructHistograms(
@@ -466,21 +472,28 @@ void SerialTreeLearner::ConstructHistograms(

void SerialTreeLearner::FindBestSplitsFromHistograms(
const std::vector<int8_t>& is_feature_used, bool use_subtract, const Tree* tree) {
// Log::Warning("FindBestSplitsFromHistograms step 0");
Common::FunctionTimer fun_timer(
"SerialTreeLearner::FindBestSplitsFromHistograms", global_timer);
// Log::Warning("FindBestSplitsFromHistograms step 0.1");
std::vector<SplitInfo> smaller_best(share_state_->num_threads);
std::vector<SplitInfo> larger_best(share_state_->num_threads);
// Log::Warning("smaller_leaf_splits_->leaf_index() = %d", smaller_leaf_splits_->leaf_index());
std::vector<int8_t> smaller_node_used_features = col_sampler_.GetByNode(tree, smaller_leaf_splits_->leaf_index());
std::vector<int8_t> larger_node_used_features;
// Log::Warning("FindBestSplitsFromHistograms step 0.2");
double smaller_leaf_parent_output = GetParentOutput(tree, smaller_leaf_splits_.get());
double larger_leaf_parent_output = 0;
// Log::Warning("FindBestSplitsFromHistograms step 0.3");
if (larger_leaf_splits_ != nullptr && larger_leaf_splits_->leaf_index() >= 0) {
larger_leaf_parent_output = GetParentOutput(tree, larger_leaf_splits_.get());
}
if (larger_leaf_splits_->leaf_index() >= 0) {
larger_node_used_features = col_sampler_.GetByNode(tree, larger_leaf_splits_->leaf_index());
}

// Log::Warning("FindBestSplitsFromHistograms step 1");

if (use_subtract && config_->use_quantized_grad) {
const int parent_index = std::min(smaller_leaf_splits_->leaf_index(), larger_leaf_splits_->leaf_index());
const uint8_t parent_hist_bits = gradient_discretizer_->GetHistBitsInNode<false>(parent_index);
@@ -500,15 +513,18 @@ void SerialTreeLearner::FindBestSplitsFromHistograms(
}
}

// Log::Warning("FindBestSplitsFromHistograms step 2");

OMP_INIT_EX();
// find splits
#pragma omp parallel for schedule(static) num_threads(share_state_->num_threads)
// #pragma omp parallel for schedule(static) num_threads(share_state_->num_threads)
for (int feature_index = 0; feature_index < num_features_; ++feature_index) {
OMP_LOOP_EX_BEGIN();
if (!is_feature_used[feature_index]) {
continue;
}
const int tid = omp_get_thread_num();
// Log::Warning("FindBestSplitsFromHistograms step 2.1");
if (config_->use_quantized_grad) {
const uint8_t hist_bits_bin = gradient_discretizer_->GetHistBitsInLeaf<false>(smaller_leaf_splits_->leaf_index());
const int64_t int_sum_gradient_and_hessian = smaller_leaf_splits_->int_sum_gradients_and_hessians();
@@ -529,6 +545,7 @@ void SerialTreeLearner::FindBestSplitsFromHistograms(
}
int real_fidx = train_data_->RealFeatureIndex(feature_index);

// Log::Warning("FindBestSplitsFromHistograms step 2.2");
ComputeBestSplitForFeature(smaller_leaf_histogram_array_, feature_index,
real_fidx,
smaller_node_used_features[feature_index],
@@ -542,6 +559,7 @@ void SerialTreeLearner::FindBestSplitsFromHistograms(
continue;
}

// Log::Warning("FindBestSplitsFromHistograms step 2.3");
if (use_subtract) {
if (config_->use_quantized_grad) {
const int parent_index = std::min(smaller_leaf_splits_->leaf_index(), larger_leaf_splits_->leaf_index());
@@ -589,6 +607,7 @@ void SerialTreeLearner::FindBestSplitsFromHistograms(
}
}

// Log::Warning("FindBestSplitsFromHistograms step 2.4");
ComputeBestSplitForFeature(larger_leaf_histogram_array_, feature_index,
real_fidx,
larger_node_used_features[feature_index],
@@ -599,6 +618,10 @@ void SerialTreeLearner::FindBestSplitsFromHistograms(
OMP_LOOP_EX_END();
}
OMP_THROW_EX();


// Log::Warning("FindBestSplitsFromHistograms step 3");

auto smaller_best_idx = ArrayArgs<SplitInfo>::ArgMax(smaller_best);
int leaf = smaller_leaf_splits_->leaf_index();
best_split_per_leaf_[leaf] = smaller_best[smaller_best_idx];