[2023-12-27 16:15:06] INFO (numexpr.utils/MainThread) Note: detected 128 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2023-12-27 16:15:06] INFO (numexpr.utils/MainThread) Note: NumExpr detected 128 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
[2023-12-27 16:15:06] INFO (numexpr.utils/MainThread) NumExpr defaulting to 8 threads.
[2023-12-27 16:15:06] INFO (nni.tuner.random/MainThread) Using random seed 220808582
[2023-12-27 16:15:06] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher started
[2023-12-27 16:15:06] INFO (nni.runtime.msg_dispatcher/Thread-1 (command_queue_worker)) Initial search space: {'n_steps': {'_type': 'choice', '_value': [60]}, 'n_features': {'_type': 'choice', '_value': [7]}, 'patience': {'_type': 'choice', '_value': [10]}, 'epochs': {'_type': 'choice', '_value': [200]}, 'rnn_hidden_size': {'_type': 'choice', '_value': [16, 32, 64, 128, 256, 512]}, 'lr': {'_type': 'loguniform', '_value': [0.0001, 0.01]}}
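For reference, a minimal sketch of what this search space means: `choice` picks one of the listed values, and `loguniform` samples uniformly in log space between the two bounds. This is an illustrative re-implementation with the standard library, not NNI's own sampler, and the seed is only echoed from the tuner log above for flavor:

```python
import math
import random

# Two entries copied from the search space in the dispatcher log above.
search_space = {
    'rnn_hidden_size': {'_type': 'choice', '_value': [16, 32, 64, 128, 256, 512]},
    'lr': {'_type': 'loguniform', '_value': [0.0001, 0.01]},
}

def sample(spec, rng):
    """Draw one value according to the NNI search-space semantics."""
    if spec['_type'] == 'choice':
        return rng.choice(spec['_value'])
    if spec['_type'] == 'loguniform':
        low, high = spec['_value']
        # Uniform in log space, then exponentiate back.
        return math.exp(rng.uniform(math.log(low), math.log(high)))
    raise ValueError(f"unsupported type: {spec['_type']}")

rng = random.Random(220808582)  # seed taken from the tuner log, for illustration only
params = {name: sample(spec, rng) for name, spec in search_space.items()}
```

Note that any `lr` between 0.0001 and 0.01 is a legal draw, which matters for the mismatch described below: neither of the two conflicting values can be ruled out by the search-space bounds alone.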
nnictl stdout and stderr:
2023-12-27 16:16:44 [INFO]: Have set the random seed as 2204 for numpy and pytorch.
2023-12-27 16:16:44 [INFO]: The tunner assigns a new group of params: {'n_steps': 60, 'n_features': 7, 'patience': 10, 'epochs': 200, 'rnn_hidden_size': 256, 'lr': 0.0054442307300676335}
2023-12-27 16:16:45 [INFO]: No given device, using default device: cuda
2023-12-27 16:16:45 [WARNING]: ‼️ saving_path not given. Model files and tensorboard file will not be saved.
2023-12-27 16:16:48 [INFO]: MRNN initialized with the given hyperparameters, the number of trainable parameters: 401,619
2023-12-27 16:16:48 [INFO]: Option lazy_load is set as False, hence loading all data from file...
2023-12-27 16:16:52 [INFO]: Epoch 001 - training loss: 1.3847, validating loss: 1.3214
How to reproduce it?:
Note that in nnimanager.log, the lr of trial XsB6F is 0.0008698020401037771, and this is also the value displayed on the local web UI. However, in the nnictl stdout log, the lr actually received by the model is 0.0054442307300676335, so the two do not match. This is not an isolated case: for some trials the hyperparameters reported by nnimanager differ from the values the trial actually receives, while for other trials they match.
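The mismatch can be confirmed mechanically by extracting the lr from each log line and comparing. A small sketch using the exact values quoted in this report (the stdout line is abbreviated here):

```python
import re

# Log excerpts copied from this report for trial XsB6F.
manager_line = "lr of trial XsB6F is 0.0008698020401037771"
stdout_line = ("The tunner assigns a new group of params: "
               "{'lr': 0.0054442307300676335}")

# Pull the floating-point lr out of each line.
lr_manager = float(re.search(r"is ([0-9.e-]+)", manager_line).group(1))
lr_stdout = float(re.search(r"'lr': ([0-9.e-]+)", stdout_line).group(1))

# The two sources disagree, which is the bug being reported.
print(lr_manager, lr_stdout, lr_manager == lr_stdout)
```

Running the same comparison across all trials would separate the matching trials from the mismatched ones.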