Recent advancements in large language models have showcased their remarkable generalizability across various domains. However, their reasoning abilities still have significant room for improvement, especially when confronted with scenarios requiring multi-step reasoning. Although large language models possess extensive knowledge, their behavior, particularly in terms of reasoning, often fails to effectively utilize this knowledge to establish a coherent thinking paradigm. Generative language models sometimes show hallucinations as their reasoning procedures are unconstrained by logical principles. Aiming to improve the zero-shot chain-of-thought reasoning ability of large language models, we propose Logical Chain-of-Thought (LogiCoT), a neurosymbolic framework that leverages principles from symbolic logic to verify and revise the reasoning processes accordingly. Experimental evaluations conducted on language tasks in diverse domains, including arithmetic, commonsense, symbolic, causal inference, and social problems, demonstrate the efficacy of the reasoning paradigm enhanced by logic.
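The abstract describes a verify-and-revise loop over chain-of-thought steps. As an illustration only, here is a minimal sketch of such a loop; the function names (`verify_and_revise`, `toy_verify`, `toy_revise`) and the stand-in verifier/reviser are hypothetical and do not come from the paper, which would use LLM- and logic-based checks instead.

```python
# Hypothetical sketch of a verify-then-revise loop over reasoning steps.
# In LogiCoT-style pipelines the verify/revise callables would be backed by
# an LLM guided by symbolic-logic principles; here they are toy stand-ins.
from typing import Callable, List

def verify_and_revise(
    steps: List[str],
    verify: Callable[[List[str], str], bool],
    revise: Callable[[List[str], str], str],
) -> List[str]:
    """Check each reasoning step against the steps accepted so far;
    replace any step the verifier rejects with a revised version."""
    accepted: List[str] = []
    for step in steps:
        if not verify(accepted, step):
            step = revise(accepted, step)  # attempt a logic-guided rewrite
        accepted.append(step)
    return accepted

# Toy stand-ins (illustrative only, not from the paper):
def toy_verify(premises: List[str], step: str) -> bool:
    # Reject a conclusion drawn with no premises established yet.
    return "therefore" not in step or len(premises) > 0

def toy_revise(premises: List[str], step: str) -> str:
    return step.replace("therefore", "so")
```

This is only a control-flow sketch: the substance of the method lies in how the verifier and reviser apply logical principles, which the abstract does not detail.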
AkihikoWatanabe changed the title to "Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic, Xufeng Zhao+, N/A, arXiv'23" on Oct 9, 2023.