
NeuralNetwork -- for fine-tuning it is impossible to pick a default embedding/vocabulary. #359

Closed
nicolay-r opened this issue Jul 6, 2022 · 0 comments


nicolay-r commented Jul 6, 2022

We need to provide a check there for the case when the target path is None.
The same applies to the vocabulary.
We also need to perform refactoring, at least to make the logic easier to understand.
The resolution logic should be declared once and shared by both the vocabulary and the embedding.

def __get_term_embedding_source(self):
    """ It is possible to load a predefined embedding from another experiment
        using the related filepath provided by model_io.
    """
    model_io = self._exp_ctx.ModelIO
    if model_io is None:
        return self.__get_default_embedding_filepath()
    assert(isinstance(model_io, NeuralNetworkModelIO))
    if self.__model_is_pretrained_state_provided(model_io):
        return model_io.get_model_embedding_filepath()
    return self.__get_default_embedding_filepath()
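One way to keep the logic declared a single time for both the vocabulary and the embedding, as proposed above, is a shared resolver that also guards against a None target path. This is only a sketch; the function and parameter names below are illustrative and not part of the actual AREkit API.

```python
def resolve_source_filepath(model_io, get_pretrained_filepath, get_default_filepath):
    """Resolve a source filepath (embedding or vocabulary).

    Falls back to the default filepath when no model_io is provided,
    or when the pretrained filepath turns out to be None (the missing
    check mentioned in this issue).
    """
    if model_io is None:
        return get_default_filepath()
    pretrained = get_pretrained_filepath(model_io)
    # Guard: the target path may be None even when model_io exists.
    return pretrained if pretrained is not None else get_default_filepath()
```

With such a helper, both `__get_term_embedding_source` and its vocabulary counterpart would reduce to a single call, each passing its own pretrained-filepath getter and default-filepath fallback.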

@nicolay-r nicolay-r added bug Something isn't working enhancement New feature or request labels Jul 6, 2022
@nicolay-r nicolay-r self-assigned this Jul 6, 2022
nicolay-r added a commit that referenced this issue Jul 6, 2022