Looking back #3

Open
R-N opened this issue Oct 12, 2023 · 0 comments
R-N commented Oct 12, 2023

Looking back, there were several issues stemming from my inexperience:

  • Starting with a big model. It's better to start small rather than lumping everything in from the get-go, and first find out what works and what doesn't.
  • Optimizing a very large hyperparameter space. Optuna will eventually find good parameters, but how long will that take? The larger the space, the longer the wait.
  • Placing a high maximum on a parameter that significantly increases training time.
  • Using an RNN. An RNN must process the sequence step by step, so the computation can't be parallelized across time steps, which makes it more expensive. A transformer avoids that, but requires far more parameters.
  • Sticking with SIRD even though it clearly produced visibly worse patterns.
  • Not checking whether the MSSE loss produces good gradients. All I cared about was that it made the loss scale invariant.
  • Only scaling the data in preprocessing. The data was bad, and more could have been done. A neural network can theoretically learn any pattern given enough capacity and data, but the data was scarce and a large model was infeasible, so the pattern was hard to learn. I probably should have log-transformed the data as well.
  • Dimension explosion. The one-hot-encoded holidays added a huge number of dimensions and dominated the input. I should have projected them through a linear embedding layer, much like a transformer does.
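To illustrate the search-space point above: with plain random search (a simplification of what a hyperparameter optimizer's sampler does), the expected number of trials needed to land in a fixed-size good region grows in proportion to the size of the space. A minimal stdlib-only sketch, with all sizes hypothetical:

```python
import random

def expected_trials(space_size, good_width=1.0, n_runs=2000, seed=0):
    # Random search over [0, space_size): count how many draws it takes,
    # on average, for a sample to land in a fixed good region of width
    # good_width. The hit probability is good_width / space_size, so the
    # expected count is space_size / good_width (geometric distribution).
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        trials = 0
        while True:
            trials += 1
            if rng.uniform(0, space_size) < good_width:
                break
        total += trials
    return total / n_runs

# Doubling the search space roughly doubles the expected wait.
assert expected_trials(20) > 1.5 * expected_trials(10)
```

Real samplers are smarter than uniform random search, but the same pressure applies: every extra dimension or widened range dilutes the good region and stretches the search.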
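On the MSSE point: a quick finite-difference check would have revealed early whether the loss produces sane gradients. A stdlib-only sketch, using a hypothetical scale-invariant squared error as a stand-in for the actual MSSE:

```python
def msse(pred, target):
    # Hypothetical scale-invariant squared error: each residual is
    # divided by the target magnitude before squaring.
    return sum(((p - t) / t) ** 2 for p, t in zip(pred, target)) / len(pred)

def msse_grad(pred, target):
    # Analytic gradient w.r.t. each prediction: 2 (p - t) / t^2 / n.
    n = len(pred)
    return [2 * (p - t) / (t ** 2) / n for p, t in zip(pred, target)]

def finite_diff_grad(f, pred, eps=1e-6):
    # Central-difference estimate of the gradient, one coordinate at a time.
    grads = []
    for i in range(len(pred)):
        hi, lo = pred[:], pred[:]
        hi[i] += eps
        lo[i] -= eps
        grads.append((f(hi) - f(lo)) / (2 * eps))
    return grads

pred = [10.0, 200.0, 3000.0]
target = [12.0, 180.0, 3300.0]
analytic = msse_grad(pred, target)
numeric = finite_diff_grad(lambda p: msse(p, target), pred)
assert max(abs(a - b) for a, b in zip(analytic, numeric)) < 1e-6
```

The same pattern works against any autograd framework: compare the framework's gradient with the central-difference estimate before trusting a custom loss.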
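On the log-transform idea: `math.log1p` compresses counts that span orders of magnitude while handling zeros safely, and `math.expm1` inverts it exactly, so predictions can be mapped back to the original scale. The sample values are hypothetical:

```python
import math

# Hypothetical daily counts spanning several orders of magnitude.
raw = [0, 3, 12, 150, 4800]

# log1p(x) = log(1 + x) is defined at zero, unlike a plain log transform.
scaled = [math.log1p(x) for x in raw]

# expm1(y) = exp(y) - 1 is the exact inverse, recovering the raw scale.
restored = [math.expm1(y) for y in scaled]
assert all(abs(a - b) < 1e-9 for a, b in zip(raw, restored))
```

After the transform the values occupy a much narrower range, which tends to make scaling and gradient behavior better conditioned.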
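On the dimension-explosion point: multiplying a one-hot vector by a learned H x d weight matrix is exactly an embedding lookup, so a single linear layer would have shrunk the holiday features to a few dimensions instead of letting them dominate the input. A plain-Python sketch with hypothetical sizes:

```python
import random

H, d = 120, 8  # hypothetical: 120 one-hot holiday flags embedded into 8 dims
random.seed(0)
W = [[random.gauss(0, 0.1) for _ in range(d)] for _ in range(H)]

def embed(one_hot):
    # Multiply the one-hot vector by the H x d weight matrix.
    # Because only one entry is 1, this just selects that row of W.
    return [sum(x * W[i][j] for i, x in enumerate(one_hot)) for j in range(d)]

one_hot = [0.0] * H
one_hot[42] = 1.0
assert embed(one_hot) == W[42]  # the matmul reduces to a row lookup
```

In a real model the lookup form (an embedding layer) is used directly since the multiplication by zeros is wasted work; the point is that the network then sees d dense features instead of H sparse ones.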