This repository has been archived by the owner on Nov 3, 2023. It is now read-only.

[Reranker] Set decoding method #4473

Merged
merged 4 commits into from Apr 5, 2022

Conversation

jxmsML
Contributor

@jxmsML jxmsML commented Apr 1, 2022

Patch description

  1. Add set_decoding_method for flexibility (see the first sketch after this list).
    For the SeeKeR models, in order to use --inference-strategies one has to set the decoding method via
    agent_clone.opt['inference'] = XXXX for each agent_clone in self.dialogue_agent_clones,
    rather than via self.opt['drm_inference'] = XXXX, since self.opt['drm_inference'] is only read during initialization.
  2. Add batch_reply as a parameter of get_observations_for_reranker for flexibility (see the second sketch after this list):
    In the SeeKeR case, the knowledge sentence can only be accessed through batch_reply[i].knowledge_response. Passing batch_reply into get_observations_for_reranker makes it possible to use the context plus the knowledge sentence as observation['full_text'].
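A minimal, self-contained sketch of item 1, assuming a hypothetical RerankerSketch class: only the names quoted above (dialogue_agent_clones, opt['inference'], opt['drm_inference']) come from this PR; everything else is illustrative rather than the actual implementation.

```python
# Hypothetical sketch of item 1 -- not the code added by this PR.
from typing import Any, Dict, List


class DialogueAgentClone:
    """Stand-in for a dialogue agent clone; only its opt dict matters here."""

    def __init__(self, opt: Dict[str, Any]):
        self.opt = opt


class RerankerSketch:
    def __init__(self, opt: Dict[str, Any], clones: List[DialogueAgentClone]):
        # opt['drm_inference'] is only consulted while the clones are being
        # built, so changing it afterwards does not affect decoding.
        self.opt = opt
        self.dialogue_agent_clones = clones

    def set_decoding_method(self, inference: str) -> None:
        # Write the decoding strategy directly onto each clone's opt, which
        # is what the clones actually read at generation time.
        for agent_clone in self.dialogue_agent_clones:
            agent_clone.opt['inference'] = inference


# Usage: switch every clone to beam search after initialization.
reranker = RerankerSketch(
    opt={'drm_inference': 'greedy'},
    clones=[DialogueAgentClone({'inference': 'greedy'}) for _ in range(2)],
)
reranker.set_decoding_method('beam')
```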
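A similarly hypothetical sketch of item 2, showing why threading batch_reply through get_observations_for_reranker helps: the knowledge sentence in batch_reply[i].knowledge_response can then be folded into observation['full_text']. The plain-dict messages and the function body are assumptions, not the PR's actual code.

```python
# Hypothetical sketch of item 2 -- messages are modeled as plain dicts.
from typing import Any, Dict, List


def get_observations_for_reranker(
    contexts: List[str], batch_reply: List[Dict[str, Any]]
) -> List[Dict[str, Any]]:
    observations = []
    for context, reply in zip(contexts, batch_reply):
        # In the SeeKeR case the generated knowledge sentence is only
        # available on the batch reply, so it is appended to the context.
        knowledge = reply.get('knowledge_response', '')
        full_text = f"{context}\n{knowledge}" if knowledge else context
        observations.append({'full_text': full_text})
    return observations


# Usage with a single-example batch.
obs = get_observations_for_reranker(
    contexts=['Who wrote Dune?'],
    batch_reply=[
        {
            'text': 'It was written by Frank Herbert.',
            'knowledge_response': 'Dune is a 1965 novel by Frank Herbert.',
        }
    ],
)
print(obs[0]['full_text'])
```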

Testing steps

Other information

@jxmsML jxmsML requested review from klshuster and removed request for klshuster April 1, 2022 17:35
@jxmsML jxmsML changed the title [Reranker] Set decoding method [WIP Reranker] Set decoding method Apr 1, 2022
@jxmsML jxmsML changed the title [WIP Reranker] Set decoding method [Reranker] Set decoding method Apr 4, 2022
@jxmsML jxmsML requested a review from klshuster April 4, 2022 15:21
@jxmsML jxmsML merged commit 2e7f917 into main Apr 5, 2022
@jxmsML jxmsML deleted the reranker_seeker branch April 5, 2022 02:38