Hello Langboat, thanks for sharing this great work.

Regarding the automatically generated marketing copy in the paper: given an input title and keywords, the models are required to generate a corresponding descriptive passage. What is the input of the model? Is it of the form `[CLS] title [SEP] [keyword1, keyword2, keyword3, keyword4] [SEP] [kg11, kg12, kg13] [kg21, kg22, kg23]`?
Hi @Nipi64310, the marketing copywriting demo is based on the T5 architecture; [CLS] and [SEP] are special tokens in BERT, which do not exist in T5. You should follow Google's T5 practice of converting every task into a Seq2Seq (text-to-text) form.

The model we have open sourced shares the same architecture as T5 1.1 and does not include any downstream tasks. So if you want to build a demo similar to ours, you need to prepare the following data:
the title and body of the marketing copy
the keywords mentioned in the main text
the knowledge graph of the marketing domain
Using the above data, construct the training text pairs. The pairs can take various forms, and we are still exploring which one works best. Here is an example:
Input:
"title|keyword1, keyword2, keyword3|<entityA, relationX, entityB>, <entityC, relationY, entityD>"
Output:
"body of the text"