[python-package] fix retrain on sequence dataset #6414

Open · wants to merge 6 commits into base: master

Changes from 1 commit
move seqence test to test_basic
eromoe committed Jul 15, 2024
commit b3bcf3725dc00284502f256d714303aafa91e0fd
34 changes: 34 additions & 0 deletions tests/python_package_test/test_basic.py
@@ -217,6 +217,40 @@ def test_sequence_get_data(num_seq):
    np.testing.assert_array_equal(subset_data.get_data(), X[sorted(used_indices)])


def test_retrain_list_of_sequence():
    X, y = load_breast_cancer(return_X_y=True)
    seqs = _create_sequence_from_ndarray(X, 2, 100)

    seq_ds = lgb.Dataset(seqs, label=y, free_raw_data=False)

Collaborator

Why was free_raw_data=False necessary here? If it wasn't, please remove it.

Author

If free_raw_data=True, model2 cannot get the data; as far as I remember, it would raise an exception.
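
For context, a minimal sketch (not part of this PR) of the free_raw_data behavior being discussed, using a plain ndarray rather than a Sequence: with the default free_raw_data=True the raw data is dropped once the Dataset has been constructed, so a later get_data() call fails, while free_raw_data=False keeps it available.

```python
import numpy as np
import lightgbm as lgb

X = np.random.rand(100, 5)
y = np.random.rand(100)
params = {"objective": "regression", "verbose": -1}

# Default free_raw_data=True: the raw data reference is released once the
# Dataset has been constructed (which happens when training starts).
ds = lgb.Dataset(X, label=y)
lgb.train(params, ds, num_boost_round=2)
try:
    ds.get_data()
except Exception as err:  # LightGBM raises an error here: raw data was freed
    print(err)

# free_raw_data=False keeps the raw data, so it can still be retrieved
# (and re-used, e.g. for continued training) after construction.
ds2 = lgb.Dataset(X, label=y, free_raw_data=False)
lgb.train(params, ds2, num_boost_round=2)
print(ds2.get_data().shape)  # works: (100, 5)
```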


    params = {
        "objective": "binary",
        "num_boost_round": 20,
        "min_data": 10,
        "num_leaves": 10,
        "verbose": -1,
    }

    model1 = lgb.train(
        params,
        seq_ds,
        keep_training_booster=True,

Collaborator

Suggested change
-        keep_training_booster=True,
Using keep_training_booster=True here works if the initial model (to be passed to init_model) is going to be used for continued training later, in memory, in the same process, but is that the situation that led to #6413?

I expect it will be more common to instead want to continue training with a model loaded from a file + a Sequence object in memory.

Could you please modify this test to not use keep_training_booster=True, or explain why it's necessary?

Author (@eromoe), Jul 19, 2024

Because I have a rolling time-series training project.
The dataset is too large to load into memory. To use memory efficiently, I read 12 months of data and build a model, make predictions for one month into the future, then update the data source (rolling forward one month) and retrain the model.
Retraining only uses the most recent 12 months of data, which means the old model with its old weights is only updated by recent data.

model = None
for idx, (train_idx, test_idx) in enumerate(scroll_train_test(dates_partition, train_size=TRAIN_LOAD_STEP, test_size=TEST_LOAD_STEP, align_idx=train_end_idx)):
    
    train_partitions = dates_partition[train_idx]
    test_partitions = dates_partition[test_idx]

    train_df = read_partitioned_df(train_partitions, pre_train_partitions, train_df)
    test_df = read_partitioned_df(test_partitions, pre_train_partitions, test_df)
    ....
        model = lgb.train(
            params,
            train_data,
            init_model=model,
            num_boost_round=num_boost_round,
            keep_training_booster=True,
        )

Since it is in the loop, there is no need to dump the model to a file; I just reuse it.

Collaborator

Thank you for explaining that. Very interesting use of Sequence!

But the fact that you want to use this functionality in one specific way (with the model held in memory the entire time) does not mean that that's the only pattern that should be tested.

It's very common to use LightGBM's training continuation functionality starting from a model file... for example, to update an existing model once a month based on newly-arrived data. It's important that all LightGBM training-continuation codepaths support that pattern.

Anyway, like I mentioned in #6414 (comment), I can push testing changes here. Once you see the diff of the changes I push, I'd be happy to answer any questions you have.
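
As a reference for the pattern described here, a minimal sketch (reusing the `params` and `seq_ds` objects from the test above; the file name is illustrative) of continuing training from a model saved to disk rather than from an in-memory Booster:

```python
# First run: train, then persist the model instead of keeping the Booster.
model1 = lgb.train(params, seq_ds)
model1.save_model("model.txt")

# Later run (possibly another process): continue training from the file,
# again using the Sequence-backed Dataset.
model2 = lgb.train(params, seq_ds, init_model="model.txt")
assert model2.num_trees() == 40  # 20 rounds from the file + 20 continued rounds
```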

    )

    assert model1.current_iteration() == 20
    assert model1.num_trees() == 20

    model2 = lgb.train(
        params,
        seq_ds,
        init_model=model1,
    )

    assert model2.current_iteration() == 20
    assert model2.num_trees() == 20
Comment on lines +253 to +254

Collaborator

Suggested change
-    assert model2.current_iteration() == 20
-    assert model2.num_trees() == 20
+    assert model2.current_iteration() == 40
+    assert model2.num_trees() == 40

These don't look correct. Performing training once with "num_boost_round": 20, then continuing training with "num_boost_round": 20 again, should result in a model with 40 boosting rounds.
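
A small self-contained check of that bookkeeping (a sketch with synthetic data, independent of the Sequence machinery): continued training appends its rounds to the trees of init_model.

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = rng.integers(0, 2, size=200)
params = {"objective": "binary", "min_data": 10, "num_leaves": 10, "verbose": -1}

ds = lgb.Dataset(X, label=y, free_raw_data=False)
first = lgb.train(params, ds, num_boost_round=20)
second = lgb.train(params, ds, num_boost_round=20, init_model=first)

assert first.num_trees() == 20
assert second.num_trees() == 40  # 20 initial + 20 continued
```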


    assert seq_ds.get_data() == seqs

def test_chunked_dataset():
    X_train, X_test, y_train, y_test = train_test_split(
        *load_breast_cancer(return_X_y=True), test_size=0.1, random_state=2
55 changes: 0 additions & 55 deletions tests/python_package_test/test_sequence.py

This file was deleted.