Cannot find SWEM-hier #2
Comments
Sure, I will merge the hierarchical pooling encoder into the model.py file soon.
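While the merged code is pending, here is a minimal NumPy sketch of hierarchical pooling as the paper describes it (average-pool word vectors over local windows, then max-pool over the window averages). The function name and the default window size of 5 are illustrative choices, not taken from this repo:

```python
import numpy as np

def swem_hier(embeddings, window=5):
    """Hierarchical pooling sketch (SWEM-hier): average-pool over
    local windows of `window` consecutive words, then take the
    element-wise max across all window averages.

    embeddings: (seq_len, emb_dim) array of word vectors.
    window:     local window size n.
    Returns a single (emb_dim,) sentence vector.
    """
    seq_len, _ = embeddings.shape
    if seq_len <= window:
        # Sequence no longer than one window: plain averaging.
        return embeddings.mean(axis=0)
    # Mean of every window of consecutive words (stride 1)...
    window_means = np.stack([
        embeddings[i:i + window].mean(axis=0)
        for i in range(seq_len - window + 1)
    ])
    # ...then element-wise max across the window means.
    return window_means.max(axis=0)
```

For example, `swem_hier(np.random.randn(20, 300))` returns a single 300-dim sentence vector.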
+1 Very interested to see it :)
Is there any progress on this issue?
Any progress? Thanks @dinghanshen
Still looking forward to this. Thanks @dinghanshen
Still looking forward to this. Thanks @dinghanshen
Please refer to the [link]. It's part of a bigger project called [link].
Still looking forward to this. Thanks @dinghanshen
I read through the paper, but I didn't find which w2v embeddings the other models (such as LSTM and CNN) use. It is amazing that SWEM-ave can achieve better results than LSTM or CNN on some tasks, which frankly I don't believe! I have done a lot of NLP tasks, and I know that simply averaging the word embeddings of a text usually performs very poorly. I also don't think the comparisons with the other models are fair: they don't even use the same pretrained w2v, so maybe it is just that the GloVe embeddings you used are better than the embeddings the other models used.
Hi,
The author gave me the SWEM-hier embedding code, but I have not re-run it, and I am also confused about why such a simple operation can achieve such good performance. However, our group recently finished some experiments, and simple operations can indeed achieve comparable performance. If you don't believe this result, you can ignore this paper, or you can re-run it to check whether you are right or not.
Best Regards,
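For concreteness, the "simple operation" being discussed, SWEM-ave, amounts to the following (a minimal NumPy sketch; the optional padding mask is my own illustrative addition, not necessarily how this repo handles padding):

```python
import numpy as np

def swem_ave(embeddings, mask=None):
    """Average pooling over word embeddings (SWEM-ave).

    embeddings: (seq_len, emb_dim) array of word vectors.
    mask:       optional (seq_len,) 0/1 array marking real
                (non-padding) tokens.
    Returns a single (emb_dim,) sentence vector.
    """
    if mask is None:
        return embeddings.mean(axis=0)
    mask = mask.astype(embeddings.dtype)
    # Sum the real-token vectors and divide by how many there are.
    return (embeddings * mask[:, None]).sum(axis=0) / max(mask.sum(), 1.0)
```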
Hi, could you share the code with me? Thanks.
Still looking forward to this. Thanks.
Thank you for your email.
Hi, I can't seem to find the hierarchical encoder that the paper mentions. Very interested to see it :)