Replies: 2 comments
-
Hi @baravit! We are internally testing a technique that uses your suggestion as a mechanism for generating training data. We'll consider adding it to the NeMo Guardrails toolkit as well. Meanwhile, if you craft a few-shot prompt that shows how two conversations are converted from "raw text" to canonical forms (you'll have to do these manually) and ask the model to convert a third conversation, you will most likely get good results (text-davinci-003 or gpt-3.5-turbo should work well).
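The few-shot approach described above can be sketched as plain prompt assembly: two manually annotated conversations followed by the new one to convert. The canonical forms and helper names below are illustrative, not part of the NeMo Guardrails API.

```python
# Sketch of the few-shot prompt: two worked examples of converting raw
# conversations to canonical forms, then the new conversation to convert.
# Canonical forms shown here are illustrative examples.

EXAMPLES = """\
Conversation:
user: "Hello!"
bot: "Hi! How can I help you today?"
Canonical forms:
user express greeting
bot express greeting

Conversation:
user: "Can you recommend a good book?"
bot: "Sure, I'd suggest starting with a classic."
Canonical forms:
user ask for recommendation
bot provide recommendation
"""

def build_fewshot_prompt(conversation: str) -> str:
    """Assemble the prompt: task instruction, two worked examples, new input."""
    return (
        "Convert each conversation below into canonical forms, "
        "one per message.\n\n"
        + EXAMPLES
        + "\nConversation:\n"
        + conversation.strip()
        + "\nCanonical forms:\n"
    )

prompt = build_fewshot_prompt('user: "What is the weather like today?"')
# `prompt` would then be sent to e.g. text-davinci-003 (completion API) or
# gpt-3.5-turbo (chat API); the model's reply should contain the canonical
# forms for the new conversation.
print(prompt)
```

Ending the prompt with `Canonical forms:` nudges the model to continue directly with the converted forms rather than restating the conversation.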
-
Will give it a try.
-
Hi everyone!
First of all, this tool is AMAZING. It reduced the time it took us to build guardrails around our LLMs from a few weeks to a couple of days!
I was wondering whether there is any intent to enable rerunning a chat conversation and extracting canonical forms from it (for both the user and the bot)?
That is, sending an array of messages and, instead of getting the next message, just generating the context object of the conversation (i.e., the canonical forms of all the messages).
That would be a major step toward being able to dynamically generate the needed NeMo config files, and even to personalize them with just a few chat examples.
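To make the request concrete, here is a hypothetical sketch of the interface I have in mind. The function name, message shape, and the toy keyword rules (standing in for the actual LLM-based extraction) are all illustrative, not part of the NeMo Guardrails API:

```python
# Hypothetical sketch: map an existing transcript to canonical forms
# instead of generating the next message. The keyword rules below are a
# toy stand-in for the real LLM-based canonical-form extraction.
from typing import Dict, List

# Toy rule table; in practice the model would infer the form.
_RULES = [
    ("hello", "express greeting"),
    ("hi", "express greeting"),
    ("help", "ask for help"),
]

def _canonical_form(role: str, text: str) -> str:
    lowered = text.lower()
    for keyword, form in _RULES:
        if keyword in lowered:
            return f"{role} {form}"
    return f"{role} express statement"  # fallback form

def extract_canonical_forms(messages: List[Dict[str, str]]) -> List[str]:
    """Return one canonical form per message, for both user and bot turns."""
    return [_canonical_form(m["role"], m["content"]) for m in messages]

transcript = [
    {"role": "user", "content": "Hello there!"},
    {"role": "bot", "content": "Hi! How can I help?"},
]
print(extract_canonical_forms(transcript))
# → ['user express greeting', 'bot express greeting']
```

A list of canonical forms like this would be exactly the material needed to bootstrap a config from a handful of chat examples.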
Thank you again!