Thanks for sharing. It's really helpful, and I'm looking forward to taking it to the next level with the help of BERT and the SQuAD dataset. I'm only interested in the get-similar-utterances function here.
Reason: suppose the query was "What is machine learning?"

Results with the default degree_of_aug=0.1 were ['what is machine learn', 'what is machine thinking', 'what is mortar learning', 'what is tank learning', 'what is rifle learning', 'what is typewriter learning', 'what is machine discovering', 'what is engine learning', 'what is engine learning', 'what is factory learning']

Results with degree_of_aug=0.8 were ['what lies typewriter thinking', 'what can mortar learn', 'whatever has machine learned', 'what consists engine noticing', 'what consists engine noticing', 'what was machines hearing', 'what has rifle learned', 'how is typewriter thinking', 'what becomes rocket learns', 'what appears canister realising']

The expected examples should have been something like ['Can you tell me about machine learning', 'Explain machine learning', 'How do you define machine learning', 'Give a definition of machine learning']
I am not sure if this can be achieved, but we should give it a try. The BERT model is trained on a large Wikipedia corpus, and there are QnA-based datasets like SQuAD1 and SQuAD2. Can we use their features to improve the results? I have worked with the BERT model on the SQuAD2 dataset and the results are great. Would you try to implement this in this package, or let me know about its training process so that I can give it a try?
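One lightweight way to get closer to the expected output, even before any BERT fine-tuning, would be to filter the augmented candidates by semantic similarity to the original query and discard the off-topic ones ('what is tank learning', etc.). The sketch below is hypothetical and not part of this package: the names `embed`, `cosine`, and `filter_utterances` are my own, and the bag-of-words `embed` is only a self-contained stand-in for real BERT sentence embeddings (e.g. from sentence-transformers). It just illustrates the filtering step:

```python
import math
import string
from collections import Counter

def embed(text):
    # Toy bag-of-words vector. A real implementation would return a
    # BERT sentence embedding here; this stand-in keeps the sketch
    # runnable without any model download.
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return Counter(cleaned.split())

def cosine(a, b):
    # Standard cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def filter_utterances(query, candidates, threshold=0.5):
    # Keep only candidates semantically close to the query.
    q = embed(query)
    return [c for c in candidates if cosine(q, embed(c)) >= threshold]

candidates = [
    "what is machine learn",
    "what lies typewriter thinking",
    "what is mortar learning",
]
print(filter_utterances("What is machine learning?", candidates))
# → ['what is machine learn', 'what is mortar learning']
```

With real BERT embeddings, 'what is mortar learning' would also score low against the query and be dropped, which is exactly where the SQuAD-style features could help.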
Thanks again for sharing.