Default pipeline & Other - Running into same issue as issue #31 #33

Open
bappctl opened this issue Apr 22, 2021 · 17 comments
Labels: bug (Something isn't working)

bappctl commented Apr 22, 2021

@lhenry15

  • Running run_pipeline.py in examples with the default_pipeline.json (corresponding to load_default_pipeline()) gives the error below.

This is the same issue as in #31 and #21 (comment):

Not all provided hyper-parameters for the data preparation pipeline 79ce71bd-db96-494b-a455-14f2e2ac5040 were used: ['method', 'number_of_folds', 'randomSeed', 'shuffle', 'stratified']
{'error': "[StepFailedError('Step 6 for pipeline "
"384bbfab-4f6d-4001-9f90-684ea5681f5d failed.',)]",
'method_called': 'evaluate',
'pipeline': '<d3m.metadata.pipeline.Pipeline object at 0x7f30a2d624a8>',
'status': 'ERRORED'}

  • But if I create a new pipeline (for example, using build_LODA_pipline.py in examples) and substitute it at line 18 of run_pipeline.py, it works fine.

(screenshot attached)

  • If I run test.sh, it fails with the same error.

  • With telemanom:

Running buildTelemanom.py and using the resulting pipeline in run_pipeline.py gives the error below.
(screenshot attached)

What might be the issue in all these cases?
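For reference, the pipeline path can presumably also be overridden from the command line via the --pipeline_path argument that run_pipeline.py exposes, instead of editing line 18; a sketch, assuming a pipeline description saved at ../primitive_tests/pipeline.yml (the path used later in this thread):

~/tods/examples$ python run_pipeline.py --pipeline_path ../primitive_tests/pipeline.yml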

bappctl changed the title from "Running into same issue as issue #31" to "Default pipeline - Running into same issue as issue #31" on Apr 22, 2021
bappctl changed the title from "Default pipeline - Running into same issue as issue #31" to "Default pipeline & Other - Running into same issue as issue #31" on Apr 22, 2021

bappctl commented Apr 23, 2021

Any pointers on how to overcome this?

lhenry15 (Member) commented:
From my side, when I run test.sh, no error is raised from Telemanom. Could you provide some more details on buildTelemanom? Thanks!


bappctl commented Apr 23, 2021

@lhenry15

Steps to reproduce (master branch)

  • Run build_Telemanom.py:
~/tods$ cd primitive_tests/
~/tods/primitive_tests$ python build_Telemanom.py

This generates the attached YAML file (extension changed to .txt for upload):
pipeline.txt

  • In the tods/examples folder, edit run_pipeline.py (line 18) to point to the generated pipeline.yml:
parser.add_argument('--pipeline_path', default=os.path.join(this_path, '../primitive_tests/pipeline.yml'),
                    help='Input the path of the pre-built pipeline description')
  • Running run_pipeline.py throws:

~/tods/examples$ python run_pipeline.py
Not all provided hyper-parameters for the data preparation pipeline 79ce71bd-db96-494b-a455-14f2e2ac5040 were used: ['method', 'number_of_folds', 'randomSeed', 'shuffle', 'stratified']
17/17 [==============================] - 1s 80ms/step - loss: 26304992.0000 - val_loss: 45412020.0000
/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/sklearn/utils/validation.py:933: FutureWarning: Passing attributes to check_is_fitted is deprecated and will be removed in 0.23. The attributes argument is ignored.
  "argument is ignored.", FutureWarning)
/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/sklearn/utils/validation.py:933: FutureWarning: Passing attributes to check_is_fitted is deprecated and will be removed in 0.23. The attributes argument is ignored.
  "argument is ignored.", FutureWarning)
{'error': "[None, StepFailedError('Step 0 for pipeline "
          "f596cd77-25f8-4d4c-a350-bb30ab1e58f6 failed.',)]",
 'method_called': 'evaluate',
 'pipeline': '<d3m.metadata.pipeline.Pipeline object at 0x7f78d68d8e10>',
 'status': 'ERRORED'}
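As a quick sanity check before pointing run_pipeline.py at the generated file, the YAML can first be loaded with the d3m pipeline API; a minimal sketch, assuming the d3m.metadata.pipeline module and the pipeline.yml path from the steps above:

from d3m.metadata.pipeline import Pipeline

# Load the YAML produced by build_Telemanom.py and confirm it parses into a
# d3m Pipeline object before wiring it into run_pipeline.py.
with open('../primitive_tests/pipeline.yml') as f:
    pipeline = Pipeline.from_yaml(f)

print('pipeline id:', pipeline.id)
print('number of steps:', len(pipeline.steps))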


bappctl commented Apr 23, 2021

This is with respect to running test.sh in the tods directory (a different scenario from the one above).

CONDA ENV:
Python 3.6
tods 0.0.2

ERROR:

~/tods$ python --version
Python 3.6.13 :: Anaconda, Inc.

~/tods$ ./test.sh
build_ABOD_pipline.py
\t#Pipeline Building Errors: 0
\t#Pipeline Running Errors: 0
build_AutoEncoder.py
\t#Pipeline Building Errors: 0
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 6)                 42        
_________________________________________________________________
dropout (Dropout)            (None, 6)                 0         
_________________________________________________________________
dense_1 (Dense)              (None, 6)                 42        
_________________________________________________________________
dropout_1 (Dropout)          (None, 6)                 0         
_________________________________________________________________
dense_2 (Dense)              (None, 4)                 28        
_________________________________________________________________
dropout_2 (Dropout)          (None, 4)                 0         
_________________________________________________________________
dense_3 (Dense)              (None, 2)                 10        
_________________________________________________________________
dropout_3 (Dropout)          (None, 2)                 0         
_________________________________________________________________
dense_4 (Dense)              (None, 4)                 12        
_________________________________________________________________
dropout_4 (Dropout)          (None, 4)                 0         
_________________________________________________________________
dense_5 (Dense)              (None, 6)                 30        
=================================================================
Total params: 164
Trainable params: 164
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/100
36/36 [==============================] - 0s 5ms/step - loss: 2.0314 - val_loss: 1.3963
Epoch 2/100
36/36 [==============================] - 0s 1ms/step - loss: 1.8544 - val_loss: 1.3098
Epoch 3/100
36/36 [==============================] - 0s 1ms/step - loss: 1.7309 - val_loss: 1.2483
Epoch 4/100
36/36 [==============================] - 0s 1ms/step - loss: 1.6944 - val_loss: 1.1992
Epoch 5/100
36/36 [==============================] - 0s 1ms/step - loss: 1.6314 - val_loss: 1.1596
Epoch 6/100
36/36 [==============================] - 0s 1ms/step - loss: 1.6697 - val_loss: 1.1264
Epoch 7/100
36/36 [==============================] - 0s 1ms/step - loss: 1.5473 - val_loss: 1.0981
Epoch 8/100
36/36 [==============================] - 0s 1ms/step - loss: 1.5053 - val_loss: 1.0730
Epoch 9/100
36/36 [==============================] - 0s 1ms/step - loss: 1.4922 - val_loss: 1.0510
Epoch 10/100
36/36 [==============================] - 0s 1ms/step - loss: 1.4610 - val_loss: 1.0313
Epoch 11/100
36/36 [==============================] - 0s 1ms/step - loss: 1.4751 - val_loss: 1.0135
Epoch 12/100
36/36 [==============================] - 0s 1ms/step - loss: 1.4318 - val_loss: 0.9976
Epoch 13/100
36/36 [==============================] - 0s 1ms/step - loss: 1.3999 - val_loss: 0.9832
Epoch 14/100
36/36 [==============================] - 0s 1ms/step - loss: 1.3951 - val_loss: 0.9700
Epoch 15/100
36/36 [==============================] - 0s 1ms/step - loss: 1.3719 - val_loss: 0.9579
Epoch 16/100
36/36 [==============================] - 0s 1ms/step - loss: 1.3637 - val_loss: 0.9466
Epoch 17/100
36/36 [==============================] - 0s 1ms/step - loss: 1.3465 - val_loss: 0.9363
Epoch 18/100
36/36 [==============================] - 0s 1ms/step - loss: 1.3362 - val_loss: 0.9266
Epoch 19/100
36/36 [==============================] - 0s 1ms/step - loss: 1.3221 - val_loss: 0.9176
Epoch 20/100
36/36 [==============================] - 0s 1ms/step - loss: 1.3104 - val_loss: 0.9091
Epoch 21/100
36/36 [==============================] - 0s 1ms/step - loss: 1.3042 - val_loss: 0.9011
Epoch 22/100
36/36 [==============================] - 0s 1ms/step - loss: 1.2998 - val_loss: 0.8936
Epoch 23/100
36/36 [==============================] - 0s 1ms/step - loss: 1.2817 - val_loss: 0.8864
Epoch 24/100
36/36 [==============================] - 0s 1ms/step - loss: 1.2761 - val_loss: 0.8797
Epoch 25/100
36/36 [==============================] - 0s 1ms/step - loss: 1.2720 - val_loss: 0.8732
Epoch 26/100
36/36 [==============================] - 0s 1ms/step - loss: 1.2568 - val_loss: 0.8671
Epoch 27/100
36/36 [==============================] - 0s 1ms/step - loss: 1.2886 - val_loss: 0.8613
Epoch 28/100
36/36 [==============================] - 0s 1ms/step - loss: 1.2444 - val_loss: 0.8557
Epoch 29/100
36/36 [==============================] - 0s 1ms/step - loss: 1.2360 - val_loss: 0.8505
Epoch 30/100
36/36 [==============================] - 0s 1ms/step - loss: 1.2290 - val_loss: 0.8454
Epoch 31/100
36/36 [==============================] - 0s 1ms/step - loss: 1.2251 - val_loss: 0.8406
Epoch 32/100
36/36 [==============================] - 0s 1ms/step - loss: 1.2349 - val_loss: 0.8360
Epoch 33/100
36/36 [==============================] - 0s 1ms/step - loss: 1.2134 - val_loss: 0.8316
Epoch 34/100
36/36 [==============================] - 0s 1ms/step - loss: 1.2045 - val_loss: 0.8273
Epoch 35/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1992 - val_loss: 0.8233
Epoch 36/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1990 - val_loss: 0.8194
Epoch 37/100
36/36 [==============================] - 0s 1ms/step - loss: 1.2138 - val_loss: 0.8157
Epoch 38/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1842 - val_loss: 0.8121
Epoch 39/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1808 - val_loss: 0.8087
Epoch 40/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1774 - val_loss: 0.8054
Epoch 41/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1728 - val_loss: 0.8022
Epoch 42/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1679 - val_loss: 0.7992
Epoch 43/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1656 - val_loss: 0.7963
Epoch 44/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1604 - val_loss: 0.7935
Epoch 45/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1547 - val_loss: 0.7907
Epoch 46/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1618 - val_loss: 0.7882
Epoch 47/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1509 - val_loss: 0.7857
Epoch 48/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1598 - val_loss: 0.7833
Epoch 49/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1424 - val_loss: 0.7809
Epoch 50/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1398 - val_loss: 0.7787
Epoch 51/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1380 - val_loss: 0.7766
Epoch 52/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1320 - val_loss: 0.7745
Epoch 53/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1310 - val_loss: 0.7725
Epoch 54/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1377 - val_loss: 0.7705
Epoch 55/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1248 - val_loss: 0.7686
Epoch 56/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1301 - val_loss: 0.7668
Epoch 57/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1202 - val_loss: 0.7651
Epoch 58/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1162 - val_loss: 0.7634
Epoch 59/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1166 - val_loss: 0.7618
Epoch 60/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1140 - val_loss: 0.7602
Epoch 61/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1116 - val_loss: 0.7587
Epoch 62/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1091 - val_loss: 0.7573
Epoch 63/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1071 - val_loss: 0.7559
Epoch 64/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1052 - val_loss: 0.7545
Epoch 65/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1013 - val_loss: 0.7532
Epoch 66/100
36/36 [==============================] - 0s 1ms/step - loss: 1.1019 - val_loss: 0.7519
Epoch 67/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0995 - val_loss: 0.7507
Epoch 68/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0964 - val_loss: 0.7494
Epoch 69/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0952 - val_loss: 0.7483
Epoch 70/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0947 - val_loss: 0.7472
Epoch 71/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0917 - val_loss: 0.7461
Epoch 72/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0884 - val_loss: 0.7451
Epoch 73/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0906 - val_loss: 0.7441
Epoch 74/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0960 - val_loss: 0.7431
Epoch 75/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0870 - val_loss: 0.7421
Epoch 76/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0847 - val_loss: 0.7412
Epoch 77/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0846 - val_loss: 0.7403
Epoch 78/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0825 - val_loss: 0.7394
Epoch 79/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0803 - val_loss: 0.7385
Epoch 80/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0801 - val_loss: 0.7377
Epoch 81/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0758 - val_loss: 0.7369
Epoch 82/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0740 - val_loss: 0.7361
Epoch 83/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0764 - val_loss: 0.7354
Epoch 84/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0776 - val_loss: 0.7346
Epoch 85/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0742 - val_loss: 0.7339
Epoch 86/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0754 - val_loss: 0.7332
Epoch 87/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0726 - val_loss: 0.7325
Epoch 88/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0725 - val_loss: 0.7319
Epoch 89/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0698 - val_loss: 0.7312
Epoch 90/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0698 - val_loss: 0.7306
Epoch 91/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0681 - val_loss: 0.7300
Epoch 92/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0645 - val_loss: 0.7295
Epoch 93/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0671 - val_loss: 0.7289
Epoch 94/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0618 - val_loss: 0.7283
Epoch 95/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0658 - val_loss: 0.7278
Epoch 96/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0629 - val_loss: 0.7273
Epoch 97/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0627 - val_loss: 0.7268
Epoch 98/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0625 - val_loss: 0.7263
Epoch 99/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0618 - val_loss: 0.7258
Epoch 100/100
36/36 [==============================] - 0s 1ms/step - loss: 1.0610 - val_loss: 0.7253
\t#Pipeline Running Errors: 0
build_AutoRegODetect_pipeline.py
\t#Pipeline Building Errors: 1
Traceback (most recent call last):
  File "primitive_tests/build_AutoRegODetect_pipeline.py", line 46, in <module>
    primitive_4 = index.get_primitive('d3m.primitives.tods.detection_algorithm.AutoRegODetector')
  File "/home/tods/src/d3m/d3m/index.py", line 117, in get_primitive
    return getattr(module, name)
  File "/home/tods/src/d3m/d3m/namespace.py", line 109, in __getattr__
    primitive = entry_point.resolve()  # type: ignore
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2456, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/home/tods/tods/detection_algorithm/AutoRegODetect.py", line 39, in <module>
    from .core.MultiAutoRegOD import MultiAutoRegOD
  File "/home/tods/tods/detection_algorithm/core/MultiAutoRegOD.py", line 10, in <module>
    from combo.models.score_comb import average, maximization, median, aom, moa
ModuleNotFoundError: No module named 'combo'


bappctl commented Apr 23, 2021

Switched to the dev branch of tods and created an example_pipeline.py using /tods/primitive_tests/detection_algorithms/Telemanom_pipeline.py.

Before running Telemanom_pipeline.py, I uncommented step_3.add_hyperparameter(name='use_columns', argument_type=ArgumentType.VALUE, data=(2,3,4,5,6)):

# Step 3: telemanom
step_3 = PrimitiveStep(primitive=index.get_primitive('d3m.primitives.tods.detection_algorithm.telemanom'))
step_3.add_hyperparameter(name='use_semantic_types', argument_type=ArgumentType.VALUE, data=True)
step_3.add_hyperparameter(name='use_columns', argument_type=ArgumentType.VALUE, data=(2,3,4,5,6))  # the line that was uncommented
step_3.add_argument(name='inputs', argument_type=ArgumentType.CONTAINER, data_reference='steps.2.produce')
step_3.add_output('produce')
pipeline_description.add_step(step_3)
  • Pointing run_pipeline.py in the axolotl_interface directory to the newly generated pipeline fails with the error below.
  • Also tried copying run_pipeline.py from the master branch and pointing it to the newly generated pipeline; it fails with the same error.
~/tods/examples$ python run_pipeline.py 
/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.preprocessing.data module is  deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.preprocessing. Anything that cannot be imported from sklearn.preprocessing is now part of the private API.
  warnings.warn(message, FutureWarning)
/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.decomposition.truncated_svd module is  deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.decomposition. Anything that cannot be imported from sklearn.decomposition is now part of the private API.
  warnings.warn(message, FutureWarning)
d3m.primitives.tods.detection_algorithm.LSTMODetector: Primitive is not providing a description through its docstring.
Not all provided hyper-parameters for the data preparation pipeline 79ce71bd-db96-494b-a455-14f2e2ac5040 were used: ['method', 'number_of_folds', 'randomSeed', 'shuffle', 'stratified']
{'error': "[StepFailedError('Step 3 for pipeline "
          "e80ea54d-103d-4ebe-b4ca-2612013beda2 failed.',)]",
 'method_called': 'evaluate',
 'pipeline': '<d3m.metadata.pipeline.Pipeline object at 0x7f84175d3be0>',
 'status': 'ERRORED'}

If I comment out step_3.add_hyperparameter(name='use_columns', argument_type=ArgumentType.VALUE, data=(2,3,4,5,6)) and try again, the status is COMPLETED. But I think it should also work with that line uncommented, which is not happening.

How can I resolve this?


bappctl commented Apr 25, 2021

Any pointers on this issue? It fails when I uncomment step_3.add_hyperparameter(name='use_columns', argument_type=ArgumentType.VALUE, data=(2,3,4,5,6)).

Thanks

lhenry15 (Member) commented:
run_pipeline.py calls a function called evaluate_pipeline (in tods/utils.py), which imports axolotl to execute the pipeline and evaluate its performance at the same time (see the attached screenshot). In order to make a pipeline run with this evaluate_pipeline API, a primitive called "construct predictions" is needed to evaluate the performance; take a look at this example. Another way to run the pipeline is to run it directly with the d3m engine (see the example). We are developing a flexible interface for axolotl to run pipelines without evaluation; it will be released with the next version.
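As a rough illustration of the "construct predictions" step mentioned above, the tail of a pipeline built with PrimitiveStep might look like the sketch below. It assumes the same imports and variables as the pipeline-building script earlier in this thread (PrimitiveStep, ArgumentType, index, pipeline_description); the primitive path (the common d3m construct_predictions primitive) and the step indices/data references are assumptions that depend on how the rest of the pipeline is laid out, not the exact fix:

# Hypothetical final step so that evaluate_pipeline can score the output.
# The primitive path and the step/data references are assumptions.
step_4 = PrimitiveStep(primitive=index.get_primitive('d3m.primitives.data_transformation.construct_predictions.Common'))
step_4.add_argument(name='inputs', argument_type=ArgumentType.CONTAINER, data_reference='steps.3.produce')
# 'reference' should point at the dataframe that still carries the d3mIndex column,
# typically the output of the dataset_to_dataframe step.
step_4.add_argument(name='reference', argument_type=ArgumentType.CONTAINER, data_reference='steps.0.produce')
step_4.add_output('produce')
pipeline_description.add_step(step_4)

pipeline_description.add_output(name='output predictions', data_reference='steps.4.produce')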


bappctl commented Apr 25, 2021

Also, I noticed this comment. Does it have something to do with this?

(screenshot attached)


bappctl commented Apr 25, 2021

Tried running telemanom directly with the d3m engine, as suggested.

example_pipeline_1.txt (.json extension renamed for upload)

#!/bin/bash
python3 -m d3m runtime fit-produce -p example_pipeline_1.json -r yahoo_sub_5/TRAIN/problem_TRAIN/problemDoc.json -i yahoo_sub_5/TRAIN/dataset_TRAIN/datasetDoc.json -t yahoo_sub_5/TEST/dataset_TEST/datasetDoc.json -o results.csv 2> tmp.txt
error=$(cat tmp.txt | grep 'Error' | wc -l) 
echo "\t#Pipeline Running Errors:" $error
if [ "$error" -gt "0" ]
then
    cat tmp.txt

fi
echo $file >> tested_file.txt

It fails as below:


WARNING:d3m.metadata.pipeline_run:'worker_id' was generated using a random number because the MAC address could not be determined.
WARNING:d3m.metadata.pipeline_run:Configuration environment variable not set: D3MCPU
WARNING:d3m.metadata.pipeline_run:Configuration environment variable not set: D3MRAM
WARNING:d3m.metadata.pipeline_run:Docker image environment variable not set: D3M_BASE_IMAGE_NAME
WARNING:d3m.metadata.pipeline_run:Docker image environment variable not set: D3M_BASE_IMAGE_DIGEST
WARNING:d3m.metadata.pipeline_run:Docker image environment variable not set: D3M_IMAGE_NAME
WARNING:d3m.metadata.pipeline_run:Docker image environment variable not set: D3M_IMAGE_DIGEST
/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.preprocessing.data module is  deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.preprocessing. Anything that cannot be imported from sklearn.preprocessing is now part of the private API.
  warnings.warn(message, FutureWarning)
/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.decomposition.truncated_svd module is  deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.decomposition. Anything that cannot be imported from sklearn.decomposition is now part of the private API.
  warnings.warn(message, FutureWarning)
WARNING:d3m.metadata.base:d3m.primitives.tods.detection_algorithm.LSTMODetector: Primitive is not providing a description through its docstring.
INFO:numba.cuda.cudadrv.driver:init
Traceback (most recent call last):
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/runtime.py", line 1008, in _do_run_step
    self._run_step(step)
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/runtime.py", line 998, in _run_step
    self._run_primitive(step)
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/runtime.py", line 873, in _run_primitive
    multi_call_result = self._call_primitive_method(primitive.fit_multi_produce, fit_multi_produce_arguments)
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/runtime.py", line 974, in _call_primitive_method
    raise error
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/runtime.py", line 970, in _call_primitive_method
    result = method(**arguments)
  File "/home/tods/tods/common/TODSBasePrimitives.py", line 155, in fit_multi_produce
    return self._fit_multi_produce(produce_methods=produce_methods, timeout=timeout, iterations=iterations, inputs=inputs)
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/primitive_interfaces/base.py", line 559, in _fit_multi_produce
    fit_result = self.fit(timeout=timeout, iterations=iterations)
  File "/home/tods/tods/detection_algorithm/Telemanom.py", line 261, in fit
    return super().fit()
  File "/home/tods/tods/common/TODSBasePrimitives.py", line 120, in fit
    outputs = self._fit()
  File "/home/tods/tods/detection_algorithm/UODBasePrimitive.py", line 256, in _fit
    self._training_inputs, self._training_indices = self._get_columns_to_fit(self._inputs, self.hyperparams)
  File "/home/tods/tods/detection_algorithm/UODBasePrimitive.py", line 510, in _get_columns_to_fit
    can_use_column=can_produce_column)
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/base/utils.py", line 41, in get_columns_to_use
    if can_use_column(column_index):
  File "/home/tods/tods/detection_algorithm/UODBasePrimitive.py", line 505, in can_produce_column
    return cls._can_produce_column(inputs_metadata, column_index, hyperparams)
  File "/home/tods/tods/detection_algorithm/UODBasePrimitive.py", line 535, in _can_produce_column
    if not issubclass(column_metadata['structural_type'], accepted_structural_types):
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/frozendict/__init__.py", line 29, in __getitem__
    return self._dict[key]
KeyError: 'structural_type'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/software/anaconda3/envs/tods/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/software/anaconda3/envs/tods/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/__main__.py", line 6, in <module>
    cli.main(sys.argv)
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/cli.py", line 1172, in main
    handler(arguments, parser)
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/cli.py", line 1057, in handler
    problem_resolver=problem_resolver,
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/cli.py", line 539, in runtime_handler
    problem_resolver=problem_resolver,
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/runtime.py", line 2369, in fit_produce_handler
    fit_result.check_success()
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/runtime.py", line 67, in check_success
    raise self.error
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/runtime.py", line 1039, in _run
    self._do_run()
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/runtime.py", line 1025, in _do_run
    self._do_run_step(step)
  File "/home/software/anaconda3/envs/tods/lib/python3.6/site-packages/d3m/runtime.py", line 1017, in _do_run_step
    ) from error
d3m.exceptions.StepFailedError: Step 3 for pipeline c935bcf3-6c28-4391-a7b7-5e68eab65bc7 failed.
INFO:numba.cuda.cudadrv.driver:add pending dealloc: module_unload ? bytes
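The KeyError: 'structural_type' in the traceback comes from looking up column metadata for the columns selected by use_columns. As a rough way to see what that column metadata looks like, the raw CSV can be wrapped in a d3m container DataFrame with generated metadata; a minimal sketch, assuming pandas, the d3m container API, and the yahoo_sub_5.csv path used elsewhere in this thread (this inspects the raw data, not the exact dataframe that reaches step 3 of the pipeline):

import pandas as pd
from d3m.container import DataFrame as d3m_DataFrame

# Wrap the raw CSV (hypothetical path) and generate d3m metadata for it.
raw = pd.read_csv('datasets/anomaly/raw_data/yahoo_sub_5.csv')
df = d3m_DataFrame(raw, generate_metadata=True)

# For each column, report whether its metadata carries a 'structural_type' entry,
# which is the key the failing step tries to read.
for i in range(df.shape[1]):
    col_meta = df.metadata.query_column(i)
    print(i, dict(col_meta).get('name'), 'structural_type' in col_meta)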



bappctl commented Apr 25, 2021

If I remove the use_columns hyperparameter below from the pipeline, at least it's not erroring out as above (though I'm not sure whether it's actually doing the job):

,
      "hyperparams": {
        "use_semantic_types": {
          "type": "VALUE",
          "data": true
        },
        "use_columns": {
          "type": "VALUE",
          "data": [
            2,
            3,
            4,
            5,
            6
          ]
        }
      }
OUTPUT
15/15 [==============================] - 1s 84ms/step - loss: 23433782.0000 - val_loss: 13002350.0000
\t#Pipeline Running Errors: 0

I believe it has something to do with the following (screenshot attached).

lhenry15 added the bug label on Apr 25, 2021

bappctl commented May 11, 2021

Is this taken care of in the next release? When is the next release planned? Thanks.

lhenry15 (Member) commented:
We are on it now; the fix will be uploaded once the problem is solved. The new release is planned for the middle of June, with some IPython notebook examples, a scikit-learn programming interface, and new primitives. You can already access it in the dev branch. It will be merged into the master branch with a new Readme once the checks and other recent developments are done.


bappctl commented May 14, 2021

Thanks for the info. Kindly update here after this issue is fixed.


bappctl commented Jul 30, 2021

@lhenry15
Is this fixed?

hwy893747147 (Collaborator) commented:
@bappctl
We have been debugging this Telemanom bug; we will let you know in this thread as soon as it gets fixed. Thanks.


jjjzy commented Sep 30, 2021

@bappctl
When you are using the hyperparameter step_3.add_hyperparameter(name='use_columns', argument_type=ArgumentType.VALUE, data=(2,3,4,5,6)), can you make sure that columns 2, 3, 4, 5, and 6 actually exist in the data? I suspect the problem is that your data doesn't contain column 6.

If you want, you can use the telemanom in the jiazhen_yu branch; if problems still occur, please reply or comment.

That branch is not official; for official use, please refer to the dev branch.
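A quick way to verify the column indices is to count the columns in the CSV and compare them against the values passed to use_columns; a minimal sketch, assuming pandas and the yahoo_sub_5.csv path mentioned elsewhere in this thread:

import pandas as pd

# Load the raw data (hypothetical path) and check that every index passed to
# use_columns actually exists.
df = pd.read_csv('datasets/anomaly/raw_data/yahoo_sub_5.csv')
use_columns = (2, 3, 4, 5, 6)

print('number of columns:', df.shape[1])
print('out-of-range indices:', [i for i in use_columns if i >= df.shape[1]])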


bappctl commented Nov 17, 2021


Hi @jjjzy
I used the telemanom_pipeline_cols.json pipeline (attached), which was created using tods/primitive_tests/detection_algorithm/Telemanom_pipeline.py, and changed the --pipeline_path in tods/examples/axolotl_interface/run_pipeline.py to point to it before running.

This time I am not getting any error, but I am also not sure whether it returns a proper result; the pipeline result's scores always come back as None. I am using datasets/anomaly/raw_data/yahoo_sub_5.csv for the data. Please check.
Attachments: console log, telemanom_pipeline_cols.txt, run_pipeline_py.txt
