
Fix behavior when encountering a bad embedding (#2721)
When encountering a bad embedding, InvokeAI would prompt the user about reconfiguring models. The root cause was that the embedding load error was never handled; it now is.
JPPhoto committed Feb 19, 2023
1 parent b9ecf93 commit d3c1b74
Showing 1 changed file with 5 additions and 1 deletion.
ldm/modules/textual_inversion_manager.py
@@ -79,7 +79,10 @@ def load_textual_inversion(self, ckpt_path: Union[str,Path], defer_injecting_tok

         embedding_info = self._parse_embedding(str(ckpt_path))

-        if (
+        if embedding_info is None:
+            # We've already put out an error message about the bad embedding in _parse_embedding, so just return.
+            return
+        elif (
             self.text_encoder.get_input_embeddings().weight.data[0].shape[0]
             != embedding_info["embedding"].shape[0]
         ):
@@ -287,6 +290,7 @@ def _parse_embedding(self, embedding_file: str):
             return self._parse_embedding_bin(embedding_file)
         else:
             print(f">> Not a recognized embedding file: {embedding_file}")
+            return None

     def _parse_embedding_pt(self, embedding_file):
         embedding_ckpt = torch.load(embedding_file, map_location="cpu")
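The fix follows a common pattern: the parser returns an explicit None on unrecognized input (after logging the error itself), and the caller checks for None and bails out early instead of crashing on the missing value. A minimal self-contained sketch of that pattern, with simplified stand-in names rather than InvokeAI's actual classes:

```python
from typing import Optional


def parse_embedding(path: str) -> Optional[dict]:
    """Return embedding info, or None if the file is not a recognized format."""
    if path.endswith(".pt") or path.endswith(".bin"):
        # Stand-in for the real .pt/.bin parsing logic.
        return {"name": path, "embedding": [0.0] * 768}
    # Report the problem here, once, so callers don't have to.
    print(f">> Not a recognized embedding file: {path}")
    return None


def load_textual_inversion(path: str) -> bool:
    info = parse_embedding(path)
    if info is None:
        # Error already reported by parse_embedding, so just return.
        return False
    # ... proceed with shape checks and token injection ...
    return True
```

Without the explicit `return None` and the caller-side check, a bad file would fall through and the caller would index into a missing `embedding_info`, raising an unrelated error further downstream, which is what triggered the misleading reconfiguration prompt.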
