Using a trained CRAFT model to predict the outcome of a single prompt-reply conversation #61
Hey Alex, Thanks for raising this issue! It turns out there is a bit of a bug in the Forecaster's behavior regarding the final comment of a conversation (which is why you're seeing odd behavior in your example, as your input comment is the final comment of its conversation). Please bear with us as we get that patched up - we should have a hotfix deployed by the end of the week!
@akoen A hotfix for this issue has been published! If you go ahead and update your installation of ConvoKit, you should now get the results you expect. IMPORTANT: note that as part of this update, the arguments […]. Here's a quick demonstration I did using the code snippet you provided above, showing that the prediction changes (as expected) when changing from a positive reply to a rude reply.
This is about the absolute best possible reply I could have received to my comment. Thanks @jpwchang, you're the man.
@all-contributors please add @akoen for bug
I've put up a pull request to add @akoen! 🎉
Hi, I'm Alex, a first-year student at the University of British Columbia trying to wrap my head around conversational analysis.
What I'm about to ask is way out of my depth, and I totally understand if you don't have the time or the energy to respond.
I'm trying to create a script to forecast whether or not a conversation will derail based on a user-entered response to a prompt from the conversations-gone-awry corpus. To do so, I'm trying to train a CRAFT forecaster on the CGA dataset to then predict the outcome of the 'conversation' that I create.
However, when I train my model and run it on the conversation, I get the same prediction probability regardless of the response:
Here is my best effort:
I really appreciate your time. I have taken this on as part of my paper for an English course, so this is way out of my league, but what you've made is really cool and I'd be overjoyed if I got this to work.