fix behavior for default objective and metric #4660

Merged
merged 1 commit into from Oct 13, 2021
4 changes: 2 additions & 2 deletions R-package/tests/testthat/test_lgb.Booster.R
@@ -415,10 +415,10 @@ test_that("Creating a Booster from a Dataset with an existing predictor should w
   expect_true(lgb.is.Booster(bst))
   expect_equal(bst$current_iter(), nrounds)
   expect_equal(bst$eval_train()[[1L]][["value"]], 0.1115352)
   expect_true(lgb.is.Booster(bst_from_ds))
-  expect_equal(bst_from_ds$current_iter(), nrounds)
-  expect_equal(bst_from_ds$eval_train()[[1L]][["value"]], 5.65704892)
   dumped_model <- jsonlite::fromJSON(bst$dump_model())
+  expect_identical(bst_from_ds$eval_train(), list())
+  expect_equal(bst_from_ds$current_iter(), nrounds)
Collaborator

I understand why the eval_train() part of this test changed, but how is this current_iter() change related to the rest of the PR?

I think the line

expect_equal(bst_from_ds$current_iter(), nrounds)

is testing that a Booster created from a Dataset with a Predictor is already considered "fitted", and that the current_iter() attribute reflects the specific training that was done to produce that Predictor.

I think just testing that bst_from_ds is a Booster isn't enough to test that the merging here worked:

private$init_predictor <- train_set$.__enclos_env__$private$predictor
# Check if predictor is existing
if (!is.null(private$init_predictor)) {
  # Merge booster
  .Call(
    LGBM_BoosterMerge_R
    , handle
    , private$init_predictor$.__enclos_env__$private$handle
  )
}
# Check current iteration
private$is_predicted_cur_iter <- c(private$is_predicted_cur_iter, FALSE)
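
(Illustrative aside, not from the PR: the Python API expresses the same contract through init_model. In this hypothetical sketch with toy data, a Booster continued from an existing model reports the prior rounds in current_iteration(), which is the Python counterpart of the current_iter() assertion discussed above.)

import numpy as np
import lightgbm as lgb

# Hypothetical toy regression data, just to make the sketch runnable.
X = np.random.rand(1000, 5)
y = np.random.rand(1000)
# Keep raw data so the Dataset can accept a predictor for continued training.
train_set = lgb.Dataset(X, label=y, free_raw_data=False)

bst = lgb.train({'verbose': -1}, train_set, num_boost_round=3)
bst_continued = lgb.train({'verbose': -1}, train_set, num_boost_round=2, init_model=bst)

# The continued Booster counts the rounds that produced the initial model,
# analogous to current_iter() reflecting the Predictor's training in R.
assert bst.current_iteration() == 3
assert bst_continued.current_iteration() == 5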

Collaborator

Is line 421 a duplicate (compared with line 419)?

Collaborator (Author)

That line expect_equal(bst_from_ds$current_iter(), nrounds) is still in the test:

expect_equal(bst_from_ds$current_iter(), nrounds)

As @shiyu1994 correctly noticed, that line was duplicated, so I removed one of the duplicates.

Collaborator

Ohhhh, I see! My mistake, it wasn't obvious to me. Thank you.

 })
 
 test_that("Booster$eval() should work on a Dataset stored in a binary file", {
9 changes: 3 additions & 6 deletions src/io/config.cpp
@@ -85,18 +85,15 @@ void GetObjectiveType(const std::unordered_map<std::string, std::string>& params
   }
 }
 
-void GetMetricType(const std::unordered_map<std::string, std::string>& params, std::vector<std::string>* metric) {
+void GetMetricType(const std::unordered_map<std::string, std::string>& params, const std::string& objective, std::vector<std::string>* metric) {
Collaborator

As a general rule, I like to avoid inserting new arguments into the middle of a function signature. In my experience, it leads to fewer mistakes when updating the places that call that function.

I think this is especially important in C/C++ code, where all arguments are provided positionally.

Would you consider adding objective at the end of this signature? Or is there a specific reason that objective needs to come before metric?

Collaborator (Author)

Yeah, there was a semantic reason. In all functions of this kind (GetXXXType()) in the config, the "returned" value is the last argument. It is common practice in C++ to place out-parameters at the very end of the parameter list. Also, semantically the call can be read as "based on params and objective, set metric". Appending objective to the end of the parameter list would be more confusing than beneficial, I believe. Given that this is an internal API, I think there is no problem with inserting objective in the middle and keeping the semantic order of the parameters.

Collaborator

OK, makes sense to me. Thanks very much for the thorough explanation.

   std::string value;
   if (Config::GetString(params, "metric", &value)) {
     std::transform(value.begin(), value.end(), value.begin(), Common::tolower);
     ParseMetrics(value, metric);
   }
   // add names of objective function if not providing metric
   if (metric->empty() && value.size() == 0) {
-    if (Config::GetString(params, "objective", &value)) {
-      std::transform(value.begin(), value.end(), value.begin(), Common::tolower);
-      ParseMetrics(value, metric);
-    }
+    ParseMetrics(objective, metric);
   }
 }

@@ -208,8 +205,8 @@ void Config::Set(const std::unordered_map<std::string, std::string>& params) {

   GetTaskType(params, &task);
   GetBoostingType(params, &boosting);
-  GetMetricType(params, &metric);
   GetObjectiveType(params, &objective);
+  GetMetricType(params, objective, &metric);
   GetDeviceType(params, &device_type);
   if (device_type == std::string("cuda")) {
     LGBM_config_::current_device = lgbm_device_cuda;
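
(Illustrative aside, not from the PR: the net effect of this file's change, mirrored in hypothetical Python. The helpers get_metric_type() and parse_metrics() below are stand-ins for the C++ GetMetricType() and ParseMetrics(); the point is that the fallback now uses the already-resolved objective, so the default objective and objective aliases yield the right default metric.)

def parse_metrics(value, metric):
    # Stand-in for ParseMetrics(): split a comma-separated string into names.
    metric.extend(m for m in value.split(',') if m)

def get_metric_type(params, objective, metric):
    # Inputs first, out-parameter last: "based on params and objective, set metric".
    value = params.get('metric', '').lower()
    if value:
        parse_metrics(value, metric)
    # Fall back to the resolved objective, not the raw "objective" entry,
    # which may be absent (default objective) or an alias.
    if not metric and not value:
        parse_metrics(objective, metric)

metric = []
get_metric_type({}, 'regression', metric)
print(metric)  # ['regression'], which LightGBM reports under its l2 alias
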
21 changes: 21 additions & 0 deletions tests/python_package_test/test_engine.py
@@ -2029,6 +2029,27 @@ def test_multiple_feval_cv():
     assert 'decreasing_metric-stdv' in cv_results
 
 
+def test_default_objective_and_metric():
+    X, y = load_breast_cancer(return_X_y=True)
+    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
+    train_dataset = lgb.Dataset(data=X_train, label=y_train)
+    validation_dataset = lgb.Dataset(data=X_test, label=y_test, reference=train_dataset)
+    evals_result = {}
+    params = {'verbose': -1}
+    lgb.train(
+        params=params,
+        train_set=train_dataset,
+        valid_sets=validation_dataset,
+        num_boost_round=5,
+        callbacks=[lgb.record_evaluation(evals_result)]
+    )
+
+    assert 'valid_0' in evals_result
+    assert len(evals_result['valid_0']) == 1
+    assert 'l2' in evals_result['valid_0']
+    assert len(evals_result['valid_0']['l2']) == 5
+
+
 @pytest.mark.skipif(psutil.virtual_memory().available / 1024 / 1024 / 1024 < 3, reason='not enough RAM')
 def test_model_size():
     X, y = load_boston(return_X_y=True)
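
(Illustrative aside, not from the PR: with this fix, the default metric tracks whatever objective is actually in effect. Assuming the same data helpers as the test above, a sketch of the expected behavior with an explicit binary objective:)

import lightgbm as lgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
train_dataset = lgb.Dataset(X_train, label=y_train)
validation_dataset = lgb.Dataset(X_test, label=y_test, reference=train_dataset)

evals_result = {}
# No metric is given, so the default should follow the objective:
# binary -> binary log loss, rather than the regression default l2.
lgb.train(
    params={'objective': 'binary', 'verbose': -1},
    train_set=train_dataset,
    valid_sets=validation_dataset,
    num_boost_round=5,
    callbacks=[lgb.record_evaluation(evals_result)]
)
assert 'binary_logloss' in evals_result['valid_0']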