Poor evaluation results on a dataset #107
Hi @anatoly-khomenko
For your task, an asymmetric structure could be helpful: you add one (or more) dense layers to only one part of the network. Then, even if query A and document B are identical, B gets a different sentence embedding, because one input is handled as the query and the other as the document.
With a contextualized model like BERT, 'cat' in the document and 'cat' in the search query get different vector representations, making them more challenging to match. Non-contextualized word embeddings like GloVe are easier to use in this case, as 'cat' is always mapped to the same point in vector space.
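To make that contrast concrete, here is a small illustrative sketch (not from the thread) comparing BERT's contextual vectors for 'cat' in two different contexts; the model name and example sentences are just assumptions for the demo:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased')

def cat_vector(sentence):
    # Return BERT's contextual vector for the token 'cat' in this sentence.
    inputs = tokenizer(sentence, return_tensors='pt')
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    cat_id = tokenizer.convert_tokens_to_ids('cat')
    position = (inputs['input_ids'][0] == cat_id).nonzero()[0].item()
    return hidden[position]

v_query = cat_vector('cat')                       # short search query
v_doc = cat_vector('adopt a friendly cat today')  # document-style context
# Similarity is below 1.0: the surrounding context shifts the vector,
# whereas a static GloVe lookup would return the identical vector twice.
print(torch.cosine_similarity(v_query, v_doc, dim=0))
```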
Hello @nreimers, Thank you for the prompt answer! I have replaced STSDataReader with my own implementation, so the score was between 0 and 1. Could you point me to an example of modifying the model as you recommend? Shall I create a separate model as in https://github.com/UKPLab/sentence-transformers/tree/master/sentence_transformers/models and make SentenceTransformer use it? Thank you!
Hi @anatoly-khomenko What you can do is create a new layer derived from the models.Dense module (let's call it AsymmetricDense). Your architecture will look like this: transformer → pooling → AsymmetricDense. In AsymmetricDense's forward method, you add a special routine that depends on a flag on the input, as in the sketch below.
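A minimal sketch of such a layer, assuming the models.Dense interface of sentence-transformers (forward receives a features dict and updates 'sentence_embedding'); the 'input_type' key is a convention invented for this example, not part of the library:

```python
from sentence_transformers import SentenceTransformer, models

class AsymmetricDense(models.Dense):
    """Dense layer that is applied only to 'document' inputs."""
    def forward(self, features):
        if features.get('input_type') == 'document':
            features = super().forward(features)  # extra transform for documents
        return features  # queries pass through unchanged

word_embedding_model = models.Transformer('bert-base-uncased')
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
asym_dense = AsymmetricDense(
    in_features=pooling_model.get_sentence_embedding_dimension(),
    out_features=pooling_model.get_sentence_embedding_dimension(),
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model, asym_dense])
```

Note that threading the input_type flag from the raw examples into the batched features may require adapting the tokenize/collate step, depending on the library version.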
Then you need a special reader. For your queries, you set feature['input_type'] to 'query'; for your documents (your titles), you set feature['input_type'] to 'document'. The dense layer will then only be applied to input texts with input_type == 'document'. A sketch of that tagging follows.
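A hedged sketch of the tagging (the helper name and the 'input_type' key are inventions for illustration; depending on the library version, you may need to extend the batching/collate code so the tag survives into the forward pass):

```python
# Hypothetical helper: tag the tokenized features of a (query, title) pair
# so AsymmetricDense can tell the two sides apart.
def tag_features(query_features, title_features):
    query_features['input_type'] = 'query'     # left untouched by the dense layer
    title_features['input_type'] = 'document'  # receives the extra dense transform
    return query_features, title_features
```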
Hello @nreimers, Thank you for the detailed comment! I will try to implement that and see what happens. On a side note, I have filtered my dataset so that searches are longer than 20 characters, and I also used another field as the score; this field is either 0 or 1 in most cases. After training for 16 epochs with batch size 64 on the filtered dataset, I am still getting low correlations on the test set, and their value began to decrease over time. The distribution of the score in this case is around 60/30. Probably even with the asymmetric model I would not get good embeddings, due to some other dataset properties that I do not understand for now. What are the other important properties of the dataset (besides symmetry) that might make the model perform poorly? Thank you!
Hi @anatoly-khomenko
Hello @nreimers,
Thank you for the amazingly simple-to-use code!
I'm trying to fine-tune the 'bert-base-nli-mean-tokens' model to match user searches to job titles.
My training dataset consists of 934,791 sentence pairs with a score for each pair, so I use the STS benchmark fine-tuning example (https://github.com/UKPLab/sentence-transformers/blob/master/examples/training_stsbenchmark_continue_training.py).
I train using the parameters from the example (4 epochs with batch size 16); a sketch of that setup follows.
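For reference, that setup has roughly this shape in the sentence-transformers API (the linked example used dataset readers at the time; the pairs and scores below are invented placeholders):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('bert-base-nli-mean-tokens')

# Invented examples: (search phrase, job title) pairs with a score in [0, 1].
train_examples = [
    InputExample(texts=['java developer', 'Senior Java Engineer'], label=1.0),
    InputExample(texts=['java developer', 'Barista'], label=0.0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=4, warmup_steps=100)
```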
The evaluation results I'm getting after training are poor, which I believe means that the model has not learned useful embeddings.
My dataset pairs each search phrase with several job titles, each with a score, and the distribution of the score column is fairly even, so I would consider this a balanced dataset.
What would you recommend as the next steps to improve the results?
Any other advice would be helpful.
Thank you!