[BUG] Fix inference tests for new NNs #1087

Closed
1 task done
Mr-Geekman opened this issue Jan 30, 2023 · 3 comments

Mr-Geekman commented Jan 30, 2023

🐛 Bug Report

We have failing tests in tests/test_models/test_inference:

  • test_forecast_in_sample_full_no_target_failed_assertion_error;
  • test_forecast_out_sample_suffix_failed_assertion_error.

We want to figure out why they are failing and try to fix them.
If we find that they cannot be easily fixed, add a link to this issue near the tests and explain the problem inside the issue.

Expected behavior

The tests pass.

In test 1 we check that during in-sample prediction on the full train dataset our predictions are not None.

In test 2 we check that out-of-sample predictions made with a gap are equal to the suffix of predictions made without a gap (see the sketch below).
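
For reference, a minimal sketch of the two assertions; `fit_and_predict_in_sample` and `fit_and_predict_future` are hypothetical stand-ins for the real test machinery:

```python
import numpy as np

def check_in_sample_full(model, ts):
    # Test 1: in-sample predictions over the whole train set must be defined.
    preds = fit_and_predict_in_sample(model, ts)  # hypothetical helper
    assert not np.isnan(preds).any()

def check_out_sample_suffix(model, ts, horizon=5, gap=2):
    # Test 2: predictions for steps [gap, horizon) made with a gap must
    # equal the suffix of predictions made for the full future [0, horizon).
    full = fit_and_predict_future(model, ts, start=0, end=horizon)  # hypothetical helper
    suffix = fit_and_predict_future(model, ts, start=gap, end=horizon)
    np.testing.assert_allclose(full[gap:], suffix)
```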

How To Reproduce

Run the tests.

Environment

No response

Additional context

No response

Checklist

  • Bug appears at the latest library version
@Mr-Geekman Mr-Geekman added the bug Something isn't working label Jan 30, 2023
@Mr-Geekman Mr-Geekman added this to the Inference 2.1 milestone Jan 30, 2023
@Mr-Geekman Mr-Geekman changed the title [BUG] Fix tests for new NNs [BUG] Fix inference tests for new NNs Jan 30, 2023
@Mr-Geekman Mr-Geekman self-assigned this Feb 9, 2023

Mr-Geekman commented Feb 9, 2023

About test_forecast_in_sample_full_no_target_failed_assertion_error:

  1. MLP uses lag features, which are equal to NaN at the beginning. As I understand it, these NaNs are propagated through the network and result in NaN answers. I'm not sure what we should do in this case; we should probably raise an exception if there are NaNs in the features, like we do for sklearn models.
  2. RNN depends on the context, and there is no context at the beginning of the series. It can't even create ts.to_torch_dataset (it is generated empty). I think we should somehow check the constraints and raise an understandable error in this case. I propose raising an exception like "Given context isn't big enough" in DeepBaseModel.forecast if we see that test_dataset is empty (see the sketch after this list).
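
A minimal sketch of that proposal, assuming a sized torch dataset and the to_torch_dataset interface mentioned above; the real DeepBaseModel.forecast internals may differ:

```python
class DeepBaseModel:
    def forecast(self, ts, prediction_size):
        # Built the same way as in the real model; `...` elides the arguments.
        test_dataset = ts.to_torch_dataset(...)
        # An empty dataset means the series is shorter than the required
        # context, so there is nothing to run inference on.
        if len(test_dataset) == 0:
            raise ValueError("Given context isn't big enough")
        ...  # continue with the usual inference loop
```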

Mr-Geekman commented Feb 10, 2023

About test_forecast_out_sample_suffix_failed_assertion_error:

  1. I figured out how to fix MLP; I will make a PR.
  2. For RNN this fix doesn't work. As I understand it, this is connected to the autoregressive nature of RNN. We use different encoder sequences for predicting the full future and the future suffix, so we can't really expect the results to be the same. The encoder sequence for the full future takes the last encoder_length values of the train set, but the encoder sequence for the future suffix takes the future prefix and fewer elements of the train set (see the toy example after this list). We should probably change the test for this particular model or change the test entirely.
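
A toy illustration (not etna code) of the encoder-window mismatch, with encoder_length=3 and a gap of 2:

```python
import math

train = [1.0, 2.0, 3.0, 4.0, 5.0]
future_prefix = [math.nan, math.nan]  # gap of 2: future points with unknown target
encoder_length = 3

# Full-future forecast: the encoder sees the last encoder_length train values.
encoder_full = train[-encoder_length:]                      # [3.0, 4.0, 5.0]

# Suffix forecast: the encoder sees the future prefix and fewer train
# values, so the autoregressive model is conditioned on different input.
encoder_suffix = (train + future_prefix)[-encoder_length:]  # [5.0, nan, nan]

print(encoder_full, encoder_suffix)  # the two windows clearly differ
```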

Mr-Geekman commented

Closed by #1108.
