
MAX_CONTEXTS tied to preprocessing? #39

Closed
hsellik opened this issue Mar 16, 2020 · 5 comments

Comments


hsellik commented Mar 16, 2020

Hi @urialon ,

I am trying to do some hyper-parameter tuning, but it seems it is a bit trickier than I thought.

  1. If I change MAX_CONTEXTS in config.py, do I also have to preprocess the data with the same MAX_CONTEXTS value?

  2. Does this also apply to WORD_VOCAB_SIZE and PATH_VOCAB_SIZE, since I see a correspondence between the preprocessing and training configurations:
    WORD_VOCAB_SIZE == config.MAX_TOKEN_VOCAB_SIZE
    PATH_VOCAB_SIZE == config.MAX_PATH_VOCAB_SIZE


urialon commented Mar 16, 2020

Hi @hsellik ,

  1. As far as I remember, no, you don't need to re-preprocess the data yourself, as long as the number of contexts you wish to use is lower than or equal to the number of contexts the data was preprocessed with. The number of contexts in the data (i.e., the value it was preprocessed with) is saved in the data dictionary and then loaded here. If I remember correctly, the data was preprocessed with 1000 contexts. Thus, you don't need to re-preprocess the data; you can just change MAX_CONTEXTS here, and the reader will automatically sample MAX_CONTEXTS out of the total 1000 that were saved with the data (see the sketch below).

  2. You don't need to re-preprocess the data in this case either. The data was saved with some really large vocabulary sizes (to check the actual numbers, put a breakpoint here and check the len of the loaded dictionaries). If you reduce the vocab sizes here, the code will automatically take only the most frequent values.

E.g., if the data was preprocessed with a vocabulary of 1M and you set the vocab size to 1K, the code will load all 1M values, sort them by frequency in descending order, and keep only the top 1K.

If you wish to use larger values for contexts or vocabularies than the values the data was preprocessed with, then yes, in that case you will have to re-preprocess the data. However, I don't think that using more than 1000 contexts or larger vocabularies will help.
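To illustrate both points, here is a rough, self-contained sketch of sampling MAX_CONTEXTS contexts per example and keeping only the most frequent vocabulary entries. This is not the actual code2seq reader code; all names and numbers below are made up for the example.

```python
import random
from collections import Counter

# Made-up values for illustration; the real numbers come from the dictionary
# file that was saved at preprocessing time.
PREPROCESSED_MAX_CONTEXTS = 1000   # contexts stored per example at preprocessing
MAX_CONTEXTS = 200                 # the smaller value set in config.py

def sample_contexts(contexts, max_contexts):
    """Keep at most max_contexts contexts per example, sampled uniformly."""
    if len(contexts) <= max_contexts:
        return contexts
    return random.sample(contexts, max_contexts)

def truncate_vocab(counts, max_vocab_size):
    """Keep only the max_vocab_size most frequent vocabulary entries."""
    return [word for word, _ in Counter(counts).most_common(max_vocab_size)]

# Toy usage
contexts = [f"ctx_{i}" for i in range(PREPROCESSED_MAX_CONTEXTS)]
print(len(sample_contexts(contexts, MAX_CONTEXTS)))   # 200

word_counts = {"get": 500, "set": 300, "value": 120, "tmp": 3}
print(truncate_vocab(word_counts, 3))                 # ['get', 'set', 'value']
```

The real reader does this per example while loading, which is why no re-preprocessing is needed as long as the configured values are not larger than the ones used at preprocessing.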

I hope this helps; let me know if anything was unclear or if you have any other questions.


hsellik commented Mar 16, 2020

Okay, that makes perfect sense. Does the same apply to the code2vec project?


urialon commented Mar 16, 2020

In code2vec:
Regarding vocabularies - yes, you can reduce the sizes.
Regarding num_contexts - I don't think that you can change that.


hsellik commented Mar 16, 2020

Okay, nice and clear. Thank you for the quick answers! :)

hsellik closed this as completed Mar 16, 2020
@Guardian99

When running with a debugger, I get the message "Expect 1001 fields but have 2456 in record 0".
How do I deal with this error in general, if I have to process code files that are approximately 2500-3000 lines long?
