repository directory #20
Comments
Hi, did you solve this problem? Could we have a discussion by email?
Hi Jimeng, the path specified by
Appreciate it, George. It was fixed when I specified the absolute path. Thanks!
Sorry to bother you, George. I have one more question: could you give us some intuition about why you used only the Transformer encoder in your work? I did not find an explanation for this in the paper. Thanks!
No problem! I give some motivation behind using only the Transformer encoder in Section 3.1 of the KDD proceedings version of the paper: The main rationale behind this choice is that the decoder is first and foremost a component for generative tasks. The encoder builds a latent representation of the input, and the decoder, while looking at this latent input representation, learns to generate a statistically likely continuation of what it has already generated (which, during training, is what we are supplying as "ground truth"). If we were only interested in e.g. time-series forecasting, especially with a fluid future prediction horizon, then an encoder-decoder architecture might have been a good (or possibly, an even more suitable) choice.
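To make the distinction concrete, here is a minimal, hypothetical sketch of an encoder-only setup in PyTorch. The feature dimension, model size, and reconstruction head are illustrative assumptions, not the actual code or settings of this repository:

```python
import torch
import torch.nn as nn

class EncoderOnlyTSModel(nn.Module):
    """Minimal encoder-only Transformer for multivariate time series.

    Hypothetical sketch: feat_dim, d_model and the linear output head are
    illustrative, not the repository's actual configuration.
    """
    def __init__(self, feat_dim=7, d_model=64, n_heads=8, n_layers=3):
        super().__init__()
        self.project = nn.Linear(feat_dim, d_model)  # per-timestep input projection
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Task head: e.g. reconstruct masked values during pre-training,
        # or be swapped for a pooled regression/classification head.
        self.head = nn.Linear(d_model, feat_dim)

    def forward(self, x, padding_mask=None):
        # x: (batch, seq_len, feat_dim). The encoder returns one latent
        # vector per time step; no autoregressive decoding is involved.
        z = self.encoder(self.project(x), src_key_padding_mask=padding_mask)
        return self.head(z)

model = EncoderOnlyTSModel()
out = model(torch.randn(4, 100, 7))  # -> (4, 100, 7)
```

The point is that the latent representation per time step is the product of interest here; a decoder would only add machinery for generating a continuation, which these tasks do not need.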
Hi George, thanks. Did you mean that, in order to keep the model's generality, you chose the encoder as the pre-training model? (Referring to your comment: "A decoder architecture is unsuitable, or at least redundant when dealing with such tasks; for example, it needs a whole sequence as an input, and there is no good...")
Clear explanation! It makes sense. @gzerveas
Hi George, thanks for your open-source code. It is very clear and well organized.
But I am new to using shell scripts, so could you please give a directory tree of the entire repository? That would be very helpful for understanding its architecture. I am confused about where I should put the downloaded data and where I should create the experiments folder. Currently, I am trying the following tree:
- datasets
- models
- regression
- utils
- main.py
- optimizers.py
- options.py
- running.py
After cd mvts_transformer, I run
python src/main.py --output_dir experiments --comment "regression from Scratch" --name FloodModeling1_fromScratch_Regression --records_file Regression_records.xls --data_dir Datasets/Regression/FloodModeling1/ --data_class tsra --pattern TRAIN --val_pattern TEST --epochs 100 --lr 0.001 --optimizer RAdam --pos_encoding learnable --task regression
but it shows: No files found using: Datasets/Regression/FloodModeling1/*.
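I suspect the loader simply globs the given data_dir relative to the current working directory, which would explain why specifying an absolute path (as mentioned above) fixes it. A rough sketch of what I imagine the lookup does; the function name and details are my guess, not the repository's actual code:

```python
import glob
import os

def find_data_files(data_dir, pattern):
    # Guess at the lookup: glob everything under data_dir (relative to the
    # current working directory unless data_dir is absolute), then keep the
    # files whose names contain the requested pattern (e.g. 'TRAIN').
    paths = glob.glob(os.path.join(data_dir, '*'))
    matched = [p for p in paths if pattern in os.path.basename(p)]
    if not paths:
        raise Exception("No files found using: {}".format(os.path.join(data_dir, '*')))
    return matched

# Using an absolute path avoids any dependence on where the script is launched from:
# find_data_files('/home/me/mvts_transformer/Datasets/Regression/FloodModeling1', 'TRAIN')
```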