
Should I run training first? #1

Closed
zezhishao opened this issue Mar 23, 2019 · 6 comments

Comments


zezhishao commented Mar 23, 2019

I am a beginner in trajectory data mining. I noticed that in the README the embedding step comes after training, and I am a little confused. Since the embedding layer is part of the encoder-decoder model, shouldn't I train that layer first and then use it in the entire encoder-decoder model?
I really cannot figure this out :( Can you help me?


boathit (Owner) commented Mar 25, 2019

Hi @zezhishao, the "embedding" in the README refers to the process of encoding a trajectory into its vector representation; it is not the embedding layer described in the paper. Sorry for the confusion, I had better change the term to "encoding".
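
For concreteness, here is a toy sketch of that distinction, using a generic PyTorch GRU encoder rather than the actual t2vec model (all names and sizes below are purely illustrative):

```python
import torch
import torch.nn as nn

# Stand-ins for a trained model; the real t2vec architecture differs.
hidden_size, vocab_size = 64, 100
embedding = nn.Embedding(vocab_size, hidden_size)  # the embedding *layer*, trained jointly
encoder = nn.GRU(hidden_size, hidden_size)

# "Embedding" in the README sense: run the trained encoder over a
# trajectory of cell tokens and keep the final hidden state as its vector.
trajectory = torch.randint(0, vocab_size, (20, 1))  # (seq_len, batch)
_, h_n = encoder(embedding(trajectory))             # h_n: (1, 1, hidden_size)
vector = h_n.squeeze()                              # the trajectory's representation
```

So training comes first; the encoding/"embedding" step simply reuses the trained encoder.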

zezhishao (Author)

@boathit Thanks for your reply!

zezhishao (Author) commented Apr 12, 2019

Hi~ @boathit I have some questions again~
The Python program cannot run correctly: torch.nn in PyTorch 0.1.2 does not have the class KLDivLoss, and in the latest version of PyTorch, KLDivLoss does not subclass _WeightedLoss (it subclasses _Loss directly); see:

_Loss class
_WeightedLoss
latest KLDivLoss

So, in train.py, I cannot get past this line:

criterion = nn.KLDivLoss(weight, size_average=False)
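
For reference, a minimal workaround sketch under a recent PyTorch (1.0+), where KLDivLoss no longer accepts a weight argument, so the per-class weights have to be applied by hand; the sizes are made up:

```python
import torch
import torch.nn as nn

vocab_size = 100                   # illustrative
weight = torch.ones(vocab_size)    # per-class weights, assumed to play the role of `weight` above

# Recent PyTorch: no `weight` argument, so compute the elementwise
# losses and weight them manually before summing.
criterion = nn.KLDivLoss(reduction='none')

log_probs = torch.log_softmax(torch.randn(8, vocab_size), dim=-1)
target = torch.softmax(torch.randn(8, vocab_size), dim=-1)
loss = (criterion(log_probs, target) * weight).sum()
```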

And I have another question about a comment in the batch_loss function in train.py. The comment there reads:

## (seq_len, generator_batch, hidden_size) => (seq_len*generator_batch, hidden_size)

After zip(output.split(generator_batch), target.split(generator_batch)), shouldn't o and t have shape (generator_batch, args.batch_size, hidden_size)?


boathit (Owner) commented Apr 12, 2019

You are correct: the size of o should be (generator_batch, batch_size, hidden_size). But PyTorch 0.1.12 does have KLDivLoss; please double-check your PyTorch version. By the way, I plan to update the code to PyTorch 1.0+ within the next week.
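
To make the shape bookkeeping concrete, a small sketch (sizes made up) of what .split() does along the first dimension:

```python
import torch

seq_len, batch_size, hidden_size = 12, 4, 8
generator_batch = 3

output = torch.randn(seq_len, batch_size, hidden_size)

# split() chunks along dim 0, so each piece o has shape
# (generator_batch, batch_size, hidden_size), as noted above.
for o in output.split(generator_batch):
    assert o.shape == (generator_batch, batch_size, hidden_size)
```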


boathit (Owner) commented Apr 12, 2019

I made a typo in the requirements: it should be PyTorch 0.1.12, not 0.1.2. Sorry about that.

zezhishao (Author)

> You are correct: the size of o should be (generator_batch, batch_size, hidden_size). But PyTorch 0.1.12 does have KLDivLoss; please double-check your PyTorch version. By the way, I plan to update the code to PyTorch 1.0+ within the next week.

That's cool!
