Integrate fact checking action #24
Conversation
GitGuardian warning is a false positive
@alxarno I will need your help with review
Just for testing inside the PR. Revert before merge.
Fact-check failed due to errors
The action is triggered when the PR is marked as ready for review. So to demo the fact-checking script, I copied one of the posts and modified the following statement to be false:
The fact-checking isn't exhaustive: it checks a few statements chosen by the model, not all of them. In this case it ignored the falsified statement. Note: revert the test data before merging.
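For context, the ready-for-review trigger described in this comment could be declared roughly like this in a workflow file (a hypothetical sketch; the workflow name and script path are assumptions, not the repository's actual configuration):

```yaml
# Hypothetical workflow sketch, not this repo's actual file.
name: fact-check
on:
  pull_request:
    types: [ready_for_review]

jobs:
  fact-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The fact-checking script itself is an assumed entry point.
      - run: python fact_check.py
```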
Fact-check failed due to errors
The script is failing because of:
@danisztls do you need help with this?
Yes, I need to add a payment method to the account.
Fix a security vulnerability flagged by Dependabot in the testing repo.
@danisztls done
@marina-chibizova This is the action output. Is this what is expected?
Yes, let's merge this one. You can also add some caching so we don't process each commit every time, something like https://github.com/probot/metadata. But let's leave it for the next PR.
This reverts commit b532bf4.
A nitpick but I recommend squashing instead of merging, the result will be the same but the Git log will make more sense. 😄
It doesn't process each commit every time; it processes the files modified by the PR when it's marked as ready for review. I would cache the workflow metadata in the repo itself instead of in the PR as a comment, like probot/metadata does. Currently there's a kind of bug where the first run extracts and verifies a few statements and subsequent runs verify a bit more. Last time the action was exceptionally triggered twice due to the CODEOWNERS change, and in the second run it verified an additional statement which was identified as false, but that still only covers about half of the content. What's happening is that the model is timing out, as it normally does. I should debug and fix this, probably by chunking the prompts into smaller bits so it doesn't time out.
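The prompt-chunking fix mentioned above could be sketched like this (a hypothetical helper; the character budget and function name are assumptions, not the repo's actual code):

```python
def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split text into chunks no longer than max_chars,
    preferring paragraph boundaries so each prompt stays small
    enough that the model is unlikely to time out."""
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        candidate = f"{current}\n\n{para}" if current else para
        if len(candidate) <= max_chars:
            # Paragraph still fits in the current chunk.
            current = candidate
        else:
            if current:
                chunks.append(current)
            # A single oversized paragraph is split hard.
            while len(para) > max_chars:
                chunks.append(para[:max_chars])
                para = para[max_chars:]
            current = para
    if current:
        chunks.append(current)
    return chunks
```

Each chunk would then be fact-checked in its own model call, so a timeout loses only one chunk instead of the whole post.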
Fix #14