About sent_splits.mat #14
Comments
@NanAlbert Hello, I have the same problem. Have you solved it yet?
As far as I know, there are no CNN-RNN sentence embeddings for the AwA2, SUN, and APY datasets.
Hello, were you able to reproduce the paper's results on the AwA1 and AwA2 datasets? Using the parameters the authors provided, I can only reach h = 67.5 on AwA1, while the paper reports h = 69.1.
Closed #14 as completed.
Same question. How should such semantic datasets be generated? I want to build my own dataset.
As I mentioned before, there are no CNN-RNN sentence embeddings available for the AwA2, SUN, and APY datasets. However, you could consider using a language model such as BERT to generate semantic vectors for your own dataset.
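The suggestion above can be sketched as follows. This is a minimal illustration of the overall shape, not the authors' pipeline: `embed_sentence` is a hypothetical placeholder for a real sentence encoder (e.g. BERT's pooled output from Hugging Face Transformers), and the per-class averaging mirrors the common practice of aggregating several textual descriptions into one class-level semantic vector, analogous to what `sent_splits.mat` stores.

```python
import hashlib
import numpy as np

def embed_sentence(sentence, dim=768):
    """Placeholder for a real sentence encoder (e.g. BERT mean-pooled
    token embeddings). Here: a deterministic toy embedding seeded from
    a hash of the text, so the sketch runs without model downloads."""
    seed = int(hashlib.md5(sentence.encode("utf-8")).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(dim).astype(np.float32)

def class_embeddings(class_descriptions, dim=768):
    """Build one semantic vector per class by averaging the embeddings
    of that class's textual descriptions. Returns (class_names, matrix)
    with matrix shape (num_classes, dim)."""
    classes = sorted(class_descriptions)
    mat = np.stack([
        np.mean([embed_sentence(s, dim) for s in class_descriptions[c]], axis=0)
        for c in classes
    ])
    return classes, mat

# Toy descriptions; real ones would come from your own annotations.
descriptions = {
    "zebra": ["a striped horse-like animal", "black and white stripes"],
    "whale": ["a very large marine mammal"],
}
names, S = class_embeddings(descriptions, dim=768)
print(names, S.shape)
```

To match the original setup you would swap the placeholder for an actual encoder and save the result with `scipy.io.savemat`, keeping the split indices (train/val/test classes) consistent with the dataset's standard splits.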
Hello, I would like to know how to generate the file sent_splits.mat, or where to obtain it for other datasets, e.g., AwA2, SUN, and APY.
Thank you so much!