🎯 Focusing

Pinned
- llm-misinformation/llm-misinformation: The dataset and code for the ICLR 2024 paper "Can LLM-Generated Misinformation Be Detected?"
- llm-misinformation/llm-misinformation-survey: Paper list for the survey "Combating Misinformation in the Age of LLMs: Opportunities and Challenges" and the initiative "LLMs Meet Misinformation", accepted by AI Magazine 2024
- ClinicalBench: Code for the paper "ClinicalBench: Can LLMs Beat Traditional ML Models in Clinical Prediction?"
- llm-editing/editing-attack: Code and dataset for the paper "Can Editing LLMs Inject Harm?"
- llm-editing/HalluEditBench: "Can Knowledge Editing Really Correct Hallucinations?"
- camel-ai/agent-trust: The code for "Can Large Language Model Agents Simulate Human Trust Behaviors?"