Bugfixes #1153

Merged: kasyanovse merged 10 commits into master from 1151-bugfix on Sep 3, 2023
Conversation

kasyanovse (Collaborator) commented Aug 22, 2023

  1. Fix bug with zero TS (#1148) by fixing the denominator in CGRU, and add a test for the new code (see the sketch after this list)
  2. Fix "LGBMRegressor uses all threads even though n_jobs=1" (#1151) by setting n_jobs=1 for some operations
  3. Add an initial assumption with AR (initial assumptions for ts, #1074) and enable AR ("All AR/ARIMA are disabled", #1137)
  4. Check and add a test in accordance with "Multi ts in multimodal case" (#739)
  5. Fix the integration test test_result_changing
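
A minimal sketch (not from this PR) of the zero time series case behind item 1; it reuses the InputData/Pipeline API from the snippets below and assumes 'cgru' is the registered operation name for the fixed implementation:

import numpy as np

from fedot.core.data.data import InputData
from fedot.core.pipelines.node import PipelineNode
from fedot.core.pipelines.pipeline import Pipeline
from fedot.core.repository.dataset_types import DataTypesEnum
from fedot.core.repository.tasks import Task, TaskTypesEnum, TsForecastingParams

# an all-zero series reproduces the zero-TS case from issue #1148
ts = np.zeros(300)

data = InputData(idx=np.arange(len(ts)),
                 features=ts,
                 target=ts,
                 data_type=DataTypesEnum.ts,
                 task=Task(TaskTypesEnum.ts_forecasting,
                           TsForecastingParams(forecast_length=2)))

# with the fixed denominator, fit and predict should complete without NaNs
pipeline = Pipeline(PipelineNode('cgru'))
pipeline.fit(data)
pipeline.predict(data)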

kasyanovse linked an issue Aug 22, 2023 that may be closed by this pull request
kasyanovse changed the title from "Bugfix #1151" to "Bugfix #1151, #1148" on Aug 22, 2023
kasyanovse linked an issue Aug 22, 2023 that may be closed by this pull request
nicl-nno mentioned this pull request Aug 22, 2023
kasyanovse linked an issue Aug 23, 2023 that may be closed by this pull request
kasyanovse changed the title from "Bugfix #1151, #1148" to "Bugfixes" on Aug 23, 2023
aim-pep8-bot commented Aug 23, 2023

Hello @kasyanovse! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2023-08-30 13:19:15 UTC

kasyanovse linked an issue Aug 23, 2023 that may be closed by this pull request
kasyanovse (Collaborator, Author) commented:

Code for testing single-core use in the boosting models:

import numpy as np

from fedot.core.data.data import InputData
from fedot.core.pipelines.node import PipelineNode
from fedot.core.pipelines.pipeline import Pipeline
from fedot.core.repository.dataset_types import DataTypesEnum
from fedot.core.repository.tasks import Task, TaskTypesEnum

length = 1000
width = 10

data_class = InputData(idx=np.arange(length),
                       features=np.random.rand(length, width),
                       target=(np.random.rand(length) > 0.5).astype(int),
                       task=Task(TaskTypesEnum.classification),
                       data_type=DataTypesEnum.table)

data_reg = InputData(idx=np.arange(length),
                     features=np.random.rand(length, width),
                     target=np.random.rand(length),
                     task=Task(TaskTypesEnum.regression),
                     data_type=DataTypesEnum.table)

# fit each boosting model on the matching task; with n_jobs=1 set, each fit should stay on a single core
for data, models in zip([data_class, data_reg], [['xgboost', 'catboost', 'lgbm'], ['catboostreg', 'lgbmreg']]):
    for model in models:
        pipeline = Pipeline(PipelineNode(model))
        pipeline.fit(data)
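
While these fits run, CPU load can be checked externally (for example with htop); if n_jobs=1 is respected, utilisation should stay close to a single core.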

kasyanovse (Collaborator, Author) commented:
Speed test for AR and ARIMA (for IPython):

import numpy as np

from fedot.core.data.data import InputData
from fedot.core.optimisers.objective import PipelineObjectiveEvaluate, MetricsObjective
from fedot.core.optimisers.objective.data_source_splitter import DataSourceSplitter
from fedot.core.pipelines.node import PipelineNode
from fedot.core.pipelines.pipeline import Pipeline
from fedot.core.repository.dataset_types import DataTypesEnum
from fedot.core.repository.quality_metrics_repository import RegressionMetricsEnum
from fedot.core.repository.tasks import Task, TaskTypesEnum, TsForecastingParams

length = 1000
data = np.random.rand(length)

data = InputData(idx=np.arange(len(data)),
                 features=data,
                 target=data,
                 data_type=DataTypesEnum.ts,
                 task=Task(TaskTypesEnum.ts_forecasting,
                           TsForecastingParams(forecast_length=2)))

# build the train/validation split and the RMSE objective used to evaluate the pipelines
data_splitter = DataSourceSplitter()
data_split = data_splitter.build(data)
objective_eval = PipelineObjectiveEvaluate(MetricsObjective(RegressionMetricsEnum.RMSE), data_split,
                                           validation_blocks=data_splitter.validation_blocks)

# time a plain fit and a full objective evaluation for ARIMA, then for AR
pipeline = Pipeline(PipelineNode('arima'))
%timeit pipeline.fit(data)
%timeit objective_eval.evaluate(pipeline)

pipeline = Pipeline(PipelineNode('ar'))
%timeit pipeline.fit(data)
%timeit objective_eval.evaluate(pipeline)

codecov bot commented Aug 24, 2023

Codecov Report

Merging #1153 (edef65d) into master (97f31db) will increase coverage by 0.02%.
Report is 1 commit behind head on master.
The diff coverage is 50.00%.

@@            Coverage Diff             @@
##           master    #1153      +/-   ##
==========================================
+ Coverage   78.31%   78.34%   +0.02%     
==========================================
  Files         130      130              
  Lines        9349     9354       +5     
==========================================
+ Hits         7322     7328       +6     
+ Misses       2027     2026       -1     
Files changed                                            Coverage Δ
...edot/api/api_utils/assumptions/task_assumptions.py    77.19% <ø> (ø)
..._implementations/models/ts_implementations/cgru.py    26.00% <0.00%> (ø)
fedot/core/pipelines/pipeline.py                         94.70% <ø> (+1.06%) ⬆️
fedot/core/pipelines/pipeline_node_factory.py            97.43% <66.66%> (-2.57%) ⬇️

... and 3 files with indirect coverage changes

valer1435 (Collaborator) left a comment:

GLM is worth checking too.
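
A hedged follow-up to the reviewer's point: the speed test above can be extended with one more node, assuming 'glm' is the registered operation name for the generalized linear time-series model and reusing data and objective_eval from that snippet.

# reuses `data` and `objective_eval` from the AR/ARIMA speed test above
pipeline = Pipeline(PipelineNode('glm'))
%timeit pipeline.fit(data)
%timeit objective_eval.evaluate(pipeline)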

valer1435 self-requested a review on September 3, 2023, 06:36
kasyanovse merged commit 5da1447 into master on Sep 3, 2023
5 of 6 checks passed
kasyanovse deleted the 1151-bugfix branch on September 3, 2023, 17:20
Labels: none yet · Projects: none yet · 3 participants