from_pretrained should just work for DecisionGPT2LMHeadModel #2
Labels: good first issue
Currently, to create a model that accepts the scalar reward, you need to do this if you want to use pretrained gpt2:
rather than just:
We should allow users to just pass "gpt2", and under the hood we'll detect that it's a pretrained gpt2 model.
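A minimal sketch of the call pattern we want, assuming a Hugging Face-style `from_pretrained` classmethod. The stand-in class below is illustrative only (the real `DecisionGPT2LMHeadModel` lives in this repo); the `loaded_from` attribute is a hypothetical placeholder for the detection logic.

```python
class DecisionGPT2LMHeadModel:  # stand-in, not the real class
    def __init__(self, source):
        self.loaded_from = source

    @classmethod
    def from_pretrained(cls, name_or_path):
        # Under the hood, this is where we would detect whether
        # name_or_path points at a saved DecisionGPT2LMHeadModel
        # checkpoint or a plain pretrained gpt2 model.
        return cls(name_or_path)


# What users should be able to write:
model = DecisionGPT2LMHeadModel.from_pretrained("gpt2")
assert model.loaded_from == "gpt2"
```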
Case 1 - It's a DecisionGPT2LMHeadModel that got saved. We should retain the original behavior and load all weights.
Case 2 -
Its a gpt2 pretrained model that doesn't have our
embed_return
layer.We'll log that we detected that, and we'll randomly initialize our
self.embed_return
layer.Write tests for this behavior :)
Case 3 - Neither of these scenarios. We should throw an exception here. Write tests for this behavior :)
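The three-way dispatch above could be sketched like this, operating on plain state dicts instead of real checkpoints. The key names (`embed_return.weight`, `transformer.wte.weight`) and the helper name `classify_checkpoint` are assumptions for illustration, not the actual checkpoint layout.

```python
# Hypothetical sketch of the three cases; key names are assumptions.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("decision_gpt2")


def classify_checkpoint(state_dict):
    """Decide which loading path a checkpoint should take."""
    has_return_head = any(k.startswith("embed_return.") for k in state_dict)
    has_gpt2_body = any(k.startswith("transformer.") for k in state_dict)
    if has_return_head and has_gpt2_body:
        # Case 1: a saved DecisionGPT2LMHeadModel; load all weights.
        return "decision_gpt2"
    if has_gpt2_body:
        # Case 2: plain gpt2; reuse the body, init embed_return randomly.
        log.info("Detected a plain gpt2 checkpoint; "
                 "embed_return will be randomly initialized.")
        return "gpt2"
    # Case 3: neither scenario; refuse to load.
    raise ValueError("Checkpoint is neither a DecisionGPT2LMHeadModel "
                     "nor a plain pretrained gpt2 model")


# Case 1: a previously saved DecisionGPT2LMHeadModel
assert classify_checkpoint(
    {"transformer.wte.weight": 0, "embed_return.weight": 0}
) == "decision_gpt2"

# Case 2: a vanilla pretrained gpt2 checkpoint
assert classify_checkpoint({"transformer.wte.weight": 0}) == "gpt2"

# Case 3: anything else raises
try:
    classify_checkpoint({"unrelated.weight": 0})
    assert False, "expected ValueError"
except ValueError:
    pass
```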