[c++] enhance error handling for forced splits file loading #6832

Open · wants to merge 3 commits into base: master · Changes from 1 commit
41 changes: 30 additions & 11 deletions src/boosting/gbdt.cpp
@@ -83,10 +83,19 @@ void GBDT::Init(const Config* config, const Dataset* train_data, const Objective
   // load forced_splits file
   if (!config->forcedsplits_filename.empty()) {
     std::ifstream forced_splits_file(config->forcedsplits_filename.c_str());
-    std::stringstream buffer;
-    buffer << forced_splits_file.rdbuf();
-    std::string err;
-    forced_splits_json_ = Json::parse(buffer.str(), &err);
+    if (!forced_splits_file.good()) {
+      Log::Warning("Forced splits file '%s' does not exist. Forced splits will be ignored.",
+                   config->forcedsplits_filename.c_str());
Collaborator (commenting on lines +86 to +88):
I think this should be a fatal error at training time... if I'm training a model and expecting specific splits to be used, I'd prefer a big loud error to a training run wasting time and compute resources only to produce a model that accidentally does not look like what I'd wanted.

HOWEVER... I think GBDT::Init() and/or GBDT::ResetConfig() will also be called when you load a model at scoring time, and at scoring time we wouldn't want to get a fatal error because of a missing or malformed file which is only supposed to affect training.

I'm not certain how to resolve that. Can you please investigate that and propose something?

It would probably be helpful to add tests for these different conditions. You can write them in Python. Or if you don't have the time / interest, I can push some tests here and then you could work on making them pass?

So to be clear, the behavior I want to see is:

  • training time:
    • forcedsplits_filename file does not exist or is not readable --> ERROR
    • forcedsplits_filename is not valid JSON --> ERROR
  • prediction / scoring time:
    • forcedsplits_filename file does not exist or is not readable --> no log output, no errors
    • forcedsplits_filename is not valid JSON --> no log output, no errors

Author (@KYash03, Feb 18, 2025):
We could add a flag to the GBDT class to indicate the current mode.

This is what I was thinking:

bool is_training_ = false;

// Turn the flag on at the start of training, and off at the end.
void GBDT::Train() {
  is_training_ = true;
  // ... regular training code ...
  is_training_ = false;
}

// In Init() and ResetConfig(), handle the file as follows:
if (is_training_) {
  // Stop with an error if anything is wrong.
} else {
  // Simply continue if there are issues.
}

Regarding the tests, I'd be happy to write them!
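The flag toggled in `Train()` above can be made robust against early returns and exceptions with a small RAII guard. A minimal sketch under stated assumptions: `TrainingScope` and `FakeBooster` are illustrative names, not LightGBM code.

```cpp
#include <cassert>

// Sketch only: an RAII guard that sets the proposed is_training_ flag on
// entry and clears it on scope exit, even if training throws.
class TrainingScope {
 public:
  explicit TrainingScope(bool* flag) : flag_(flag) { *flag_ = true; }
  ~TrainingScope() { *flag_ = false; }
 private:
  bool* flag_;
};

// Stand-in for the booster; not the real GBDT class.
struct FakeBooster {
  bool is_training_ = false;
  bool saw_training_mode_ = false;
  void Train() {
    TrainingScope scope(&is_training_);
    // ... boosting iterations would run here; any file check could consult
    // is_training_ once at this entry point rather than per iteration ...
    saw_training_mode_ = is_training_;
  }
};
```

The guard keeps the flag-clearing logic in one place, which matters if training can exit through multiple paths.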

Collaborator:
Thanks very much. It is not that simple.

For example, there are many workflows where training and prediction are done in the same process, using the same Booster. So a single property is_training_ is not going to work.

There are also multiple APIs for training.

void GBDT::Train(int snapshot_freq, const std::string& model_output_path) {

bool GBDT::TrainOneIter(const score_t* gradients, const score_t* hessians) {

And we'd also want to be careful to not introduce this type of checking on every boosting round, as that would hurt performance.

Maybe @shiyu1994 could help us figure out where to put a check like this.

Also referencing this related PR to help: #5653
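One way to keep any check off the per-round hot path, in the spirit of the discussion above, is to read the file exactly once at a training entry point, with a flag deciding whether failure is fatal. A standard-library sketch; `LoadForcedSplitsOrEmpty` and its behavior are assumptions for illustration, not LightGBM's actual API.

```cpp
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <string>

// Hypothetical helper: read the forced-splits file into a string once.
// Training time (fatal_on_error == true): a missing/unreadable file throws.
// Scoring time (fatal_on_error == false): silently ignored, no log output,
// matching the behavior requested in the review thread.
std::string LoadForcedSplitsOrEmpty(const std::string& filename,
                                    bool fatal_on_error) {
  std::ifstream file(filename.c_str());
  if (!file.good()) {
    if (fatal_on_error) {
      throw std::runtime_error("Forced splits file '" + filename +
                               "' does not exist or is not readable.");
    }
    return "";  // scoring time: ignore silently
  }
  std::stringstream buffer;
  buffer << file.rdbuf();
  return buffer.str();  // caller would still JSON-parse and validate this
}
```

The same pattern would extend to the JSON-validity case: parse the returned string once, and make a non-empty parse error fatal only when `fatal_on_error` is set.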

+    } else {
+      std::stringstream buffer;
+      buffer << forced_splits_file.rdbuf();
+      std::string err;
+      forced_splits_json_ = Json::parse(buffer.str(), &err);
+      if (!err.empty()) {
+        Log::Fatal("Failed to parse forced splits file '%s': %s",
+                   config->forcedsplits_filename.c_str(), err.c_str());
+      }
+    }
   }

objective_function_ = objective_function;
@@ -823,13 +832,23 @@ void GBDT::ResetConfig(const Config* config) {
   if (config_.get() != nullptr && config_->forcedsplits_filename != new_config->forcedsplits_filename) {
     // load forced_splits file
     if (!new_config->forcedsplits_filename.empty()) {
-      std::ifstream forced_splits_file(
-          new_config->forcedsplits_filename.c_str());
-      std::stringstream buffer;
-      buffer << forced_splits_file.rdbuf();
-      std::string err;
-      forced_splits_json_ = Json::parse(buffer.str(), &err);
-      tree_learner_->SetForcedSplit(&forced_splits_json_);
+      std::ifstream forced_splits_file(new_config->forcedsplits_filename.c_str());
+      if (!forced_splits_file.good()) {
+        Log::Warning("Forced splits file '%s' does not exist. Forced splits will be ignored.",
+                     new_config->forcedsplits_filename.c_str());
+        forced_splits_json_ = Json();
+        tree_learner_->SetForcedSplit(nullptr);
+      } else {
+        std::stringstream buffer;
+        buffer << forced_splits_file.rdbuf();
+        std::string err;
+        forced_splits_json_ = Json::parse(buffer.str(), &err);
+        if (!err.empty()) {
+          Log::Fatal("Failed to parse forced splits file '%s': %s",
+                     new_config->forcedsplits_filename.c_str(), err.c_str());
+        }
+        tree_learner_->SetForcedSplit(&forced_splits_json_);
+      }
     } else {
       forced_splits_json_ = Json();
       tree_learner_->SetForcedSplit(nullptr);