Add NN options, simplify code, minor fixes #189
Conversation
arcutright commented on Mar 16, 2021
- Fix terminal colors not rendering under Windows cmd.exe
- Fix uncommon issue where 'date' column would be named 'Date', breaking fbprophet predictions. Didn't bother investigating, just lowercased the column names.
- Add documentation for GPU acceleration in TensorFlow for GamestonkTerminal (under pred)
- Refactor out common code from NN definitions (made the file less than half as long)
- Print exception traces for NNs
- Add a few new flags to NN (documented)
- Add some missing loss functions to NNs (there are more, but I chose some common ones)
- Add documentation links to --help menu for NNs within pred
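In stock TensorFlow, XLA settings like the ones this PR adds flags for are commonly controlled through the `TF_XLA_FLAGS` environment variable, which must be set before TensorFlow is imported. A minimal sketch (the exact flag names the PR exposes may differ):

```python
import os

# Sketch, not the PR's code: enable XLA auto-clustering / JIT compilation.
# This must happen before `import tensorflow` for the flag to take effect.
os.environ["TF_XLA_FLAGS"] = "--tf_xla_auto_jit=2"
print(os.environ["TF_XLA_FLAGS"])  # → --tf_xla_auto_jit=2
```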
'date' was sometimes 'Date'
- add more loss models
- add batch_size for model training
- add documentation links in '-h' for NN models
- add flags for XLA settings
Amazing contribution here!! This is looking really nice!
Can you remove the line that fixed the "Date" - "date" since this should now have been taken care of? Once you do that I'll merge this!
And when I checked out your branch and used NN, although everything worked smoothly, I got a message:
WARNING:tensorflow:6 out of the last 6 calls to <function Model.make_predict_function.<locals>.predict_function at 0x14fe37048> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
Is this expected?
Great work again, really happy about this!
```
@@ -95,6 +95,9 @@ def fbprophet(l_args, s_ticker, df_stock):
    df_stock = df_stock.sort_index(ascending=True)
    df_stock.reset_index(level=0, inplace=True)
    df_stock.columns = map(
        str.lower, df_stock.columns
    )
```
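In isolation, the normalization in that hunk behaves like this (hypothetical data; only the column names matter):

```python
import pandas as pd

# Sketch of the fix above: the DatetimeIndex name varies ("Date" vs "date"),
# so after moving it into a column, lowercase every column name so fbprophet
# always finds a 'date' column.
df = pd.DataFrame(
    {"Close": [1.0, 2.0]},
    index=pd.Index(["2021-03-15", "2021-03-16"], name="Date"),
)
df = df.sort_index(ascending=True)
df.reset_index(level=0, inplace=True)
df.columns = map(str.lower, df.columns)
print(list(df.columns))  # → ['date', 'close']
```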
I think https://github.com/DidierRLopes/GamestonkTerminal/pull/155 fixed this, so it should no longer be necessary.
```
    default=None,
    help="The end date (format YYYY-MM-DD) to select - Backtesting",
)
# ----------------------------------------------------------------------------------------------------
```
This shouldn't be needed anymore
```
print("")

finally:
```
Didn't know about this keyword! This is cool!
```
### CUDA 11.1, cuDNN 8.0.5 - not tested

### CUDA 11.2.1, cuDNN 8.1 - does not work
```
Haha -- try this. Tried on CUDA 11.2.
I actually tried this but I was still running into out-of-memory issues fairly often despite having far more VRAM than needed for the models I was using. May have been due to cuDNN 8.1, may have been easy to resolve, I didn't fight it for long.
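For reference, one common mitigation for out-of-memory errors of this kind (not necessarily what was suggested in the hidden snippet) is enabling memory growth, so TensorFlow allocates VRAM on demand instead of reserving the whole card at startup:

```python
import tensorflow as tf

# Sketch, assuming TF 2.x: must run before any op touches the GPU.
# With memory growth enabled, TensorFlow grows its allocation as needed
# rather than grabbing nearly all VRAM up front, which can otherwise
# trigger OOM errors alongside other GPU processes.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```

On a machine without a GPU the loop simply does nothing, so this is safe to run unconditionally.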
Hmmm... Here is my info for the system:
I used this version to install on an Arch Linux system. The command is `sudo pacman -S cuda cudnn python-tensorflow-opt-cuda`.

The output of `nvidia-smi` gives:

```
Tue Mar 16 19:33:17 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.56       Driver Version: 460.56       CUDA Version: 11.2     |
```

The output of `nvcc --version` gives:

```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Jan_28_19:32:09_PST_2021
Cuda compilation tools, release 11.2, V11.2.142
Build cuda_11.2.r11.2/compiler.29558016_0
```

And finally `cat /usr/include/cudnn_version.h` gives:

```
#define CUDNN_MAJOR 8
#define CUDNN_MINOR 1
#define CUDNN_PATCHLEVEL 0
```
So I think CUDA 11.2.1 and cuDNN 8.1 are tested and proven to work.
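A quick way to cross-check from the TensorFlow side (assuming TF >= 2.3, where `tf.sysconfig.get_build_info` is available) is to compare the versions TensorFlow was built against with what `nvcc` and `cudnn_version.h` report, and confirm the GPU is visible:

```python
import tensorflow as tf

# Sketch: print the CUDA/cuDNN versions this TensorFlow build expects,
# and list the GPUs TensorFlow can actually see.
info = tf.sysconfig.get_build_info()
print(info.get("cuda_version"), info.get("cudnn_version"))
print(tf.config.list_physical_devices("GPU"))
```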
Thanks, changes have been made.
So this issue has to do with the input shape of `x` used in the calls to `model.predict(x)` (see snippet below). It could be fixed, but I don't think it's worthwhile for an application like this, since GST isn't likely to make many predictions at a time. It's just a warning that repeated calls to `predict` like this would be inefficient. See tensorflow/tensorflow#34025 for more details.

```python
# Prediction
yhat = model.predict(
    stock_train_data[-ns_parser.n_inputs :].reshape(1, ns_parser.n_inputs, 1),  # this input shape
    verbose=0,
)
```
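For what it's worth, one common way around the retracing warning (a sketch, not this PR's code) is to call the model directly on a fixed-shape input instead of going through `model.predict`, so a single traced function is reused across iterations:

```python
import numpy as np
import tensorflow as tf

# Sketch with a hypothetical tiny model: repeated direct calls on inputs of
# a constant shape reuse one traced graph, avoiding the retracing warning
# that repeated model.predict() calls can trigger.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4, 1))])
x = np.zeros((1, 4, 1), dtype="float32")
for _ in range(10):
    yhat = model(x, training=False)  # returns a tensor of shape (1, 4, 1)
```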
Got you. Is there a way to suppress this warning?