Official implementation of the LoT paper: "Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic"
Updated Mar 13, 2024 - Python
ThinkBench is an LLM benchmarking tool focused on evaluating the effectiveness of chain-of-thought (CoT) prompting for answering multiple-choice questions.
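A minimal sketch of what zero-shot CoT prompting for a multiple-choice question can look like. The trigger phrase "Let's think step by step." is the standard zero-shot CoT cue (Kojima et al., 2022); the function name and formatting are illustrative assumptions, not code from ThinkBench or the LoT repository.

```python
def build_cot_prompt(question: str, choices: list[str]) -> str:
    """Format a multiple-choice question with a zero-shot CoT trigger.

    Hypothetical helper: lettered options plus the standard
    "Let's think step by step." cue appended to the answer slot.
    """
    lettered = "\n".join(
        f"({chr(ord('A') + i)}) {choice}" for i, choice in enumerate(choices)
    )
    return f"Q: {question}\n{lettered}\nA: Let's think step by step."


prompt = build_cot_prompt(
    "Which planet is closest to the Sun?",
    ["Venus", "Mercury", "Earth"],
)
print(prompt)
```

The prompt would then be sent to an LLM, and the generated reasoning parsed for the chosen option letter.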