
Customize knowledge db #6

Open
ii35322 opened this issue Oct 20, 2022 · 3 comments



ii35322 commented Oct 20, 2022

Hello, thanks for the valuable repo. I already tried to run this code and it worked very well! It looks like the db can be downloaded through Hugging Face. Can we build our own customized knowledge database without downloading it from Hugging Face? Thanks!


Ag2S1 (Contributor) commented Oct 21, 2022

Of course. I have added simple documentation to the index-server folder; please refer to it and give it a try.
https://github.com/Langboat/mengzi-retrieval-lm/blob/main/index-server/README.md


ii35322 (Author) commented Nov 2, 2022

Hello, thank you for your reply! Following your steps above, I can run the experiment with a customized database. Now I want to evaluate the model with retrieval. I saw there is a "generate.py" file, which can generate text with the model, but I have two questions:

  1. If my input length is less than 64, can I still generate text with retrieval (e.g., by padding the sentence to 64 tokens)?
  2. There is a '--retrieval' flag I can pass, but I don't know what "retrieval list" to supply there. For example, if I set the input text to "Client progress notes are written by staff of a company about a specified client. It includes a client's achievements, status and any other details about a client. Client progress note is aimed at reflecting", what retrieval list should I set? Thanks again if you have time to take a look.

bling0830 (Contributor) commented

  1. If the input length is less than 64, you can pad the input so that its length is greater than 64; the number of input tokens needs to be at least 65.

  2. We expose the retrieval parameter so that the user can supply a custom retrieval, but since the similarity between the user-defined retrieval and the input text cannot be verified, there is no guarantee that the model can make good use of the user-defined retrieval.

    If the input text is "Client progress notes are written by staff of a company about a specified client. It includes a client's achievements, status and any other details about a client. Client progress note is aimed at reflecting", there will be 40 tokens after tokenization.

    The first step is to pad on the left side of the input text so that the number of input tokens is greater than 64, which activates retrieval.

    The retrieval list is a two-dimensional list: the outer length equals the number of chunks, and the inner length equals the number of neighbors. If a retrieval passage in the list has fewer than 128 tokens, it is padded during tokenization; if it has more than 128 tokens, it is truncated.

    Therefore, for this input text there will be 1 chunk after padding, and the neighbor count of the current model is 2, so the retrieval list should be set to something like
    [['--------','--------']]
    (see the sketch below for a concrete illustration).
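
Below is a minimal Python sketch of the two steps above: left-padding the input past 64 tokens and building a retrieval list of shape [number of chunks][number of neighbors]. The "gpt2" tokenizer and the placeholder neighbor passages are assumptions for illustration only; this is not the repository's actual generate.py code.

```python
# Minimal sketch of the padding and retrieval-list format described above.
# Assumption: "gpt2" is only a stand-in tokenizer, and the neighbor strings
# are placeholders; substitute the model's real tokenizer and passages.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = ("Client progress notes are written by staff of a company about a "
        "specified client. It includes a client's achievements, status and "
        "any other details about a client. Client progress note is aimed at reflecting")

input_ids = tokenizer(text)["input_ids"]           # roughly 40 tokens for this sentence
MIN_LEN = 65                                       # retrieval needs at least 65 input tokens
pad_id = tokenizer.pad_token_id or tokenizer.eos_token_id
padded_ids = [pad_id] * max(0, MIN_LEN - len(input_ids)) + input_ids  # pad on the LEFT

# Retrieval list: outer length = number of chunks, inner length = number of neighbors.
# Here there is 1 chunk after padding and the model uses 2 neighbors:
retrieval = [
    ["placeholder neighbor passage 1 for chunk 0",
     "placeholder neighbor passage 2 for chunk 0"],
]
# Each neighbor passage is tokenized to 128 tokens: shorter ones are padded,
# longer ones are truncated.
print(len(padded_ids), len(retrieval), len(retrieval[0]))  # e.g. 65 1 2
```

The point is only to show the shapes involved: at least 65 input tokens after left-padding, and one inner list of 2 neighbor strings per 64-token chunk.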
