The causal capabilities of large language models (LLMs) are a matter of significant debate, with critical implications for the use of LLMs in societally impactful domains such as medicine, science, law, and policy. We further our understanding of LLMs and their causal implications, considering the distinctions between different types of causal reasoning tasks, as well as the entangled threats of construct and measurement validity. LLM-based methods establish new state-of-the-art accuracies on multiple causal benchmarks. Algorithms based on GPT-3.5 and 4 outperform existing algorithms on a pairwise causal discovery task (97%, 13 points gain), a counterfactual reasoning task (92%, 20 points gain), and actual causality (86% accuracy in determining necessary and sufficient causes in vignettes). At the same time, LLMs exhibit unpredictable failure modes, and we provide some techniques to interpret their robustness.

Crucially, LLMs perform these causal tasks while relying on sources of knowledge and methods distinct from and complementary to non-LLM-based approaches. Specifically, LLMs bring capabilities so far understood to be restricted to humans, such as using collected knowledge to generate causal graphs or identifying background causal context from natural language. We envision LLMs being used alongside existing causal methods, as a proxy for human domain knowledge and to reduce human effort in setting up a causal analysis, one of the biggest impediments to the widespread adoption of causal methods. We also see existing causal methods as promising tools for LLMs to formalize, validate, and communicate their reasoning, especially in high-stakes scenarios.

In capturing common sense and domain knowledge about causal mechanisms and supporting translation between natural language and formal methods, LLMs open new frontiers for advancing the research, practice, and adoption of causality.
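For concreteness, the pairwise causal discovery setting mentioned above can be approached by prompting an LLM to pick a causal direction between two variables and parsing its reply. The sketch below uses hypothetical prompt wording and helper names (not taken from the paper); a real pipeline would send the prompt to an actual LLM API rather than a stubbed answer.

```python
def build_pairwise_prompt(var_a: str, var_b: str) -> str:
    """Format a pairwise causal-direction query for an LLM."""
    return (
        "Which cause-and-effect relationship is more likely?\n"
        f"A. {var_a} causes {var_b}\n"
        f"B. {var_b} causes {var_a}\n"
        "Answer with A or B."
    )


def parse_direction(answer: str, var_a: str, var_b: str):
    """Map the model's 'A'/'B' reply back to an ordered (cause, effect) pair."""
    choice = answer.strip().upper()[:1]
    if choice == "A":
        return (var_a, var_b)
    if choice == "B":
        return (var_b, var_a)
    return None  # unparseable reply


# Example with a stubbed model reply; in practice the reply would come
# from a model such as GPT-3.5 or GPT-4.
prompt = build_pairwise_prompt("altitude", "air temperature")
direction = parse_direction("A", "altitude", "air temperature")
print(direction)  # ('altitude', 'air temperature')
```

Restricting the model to a constrained multiple-choice answer keeps the reply machine-parseable, which matters when scoring accuracy over a benchmark of many variable pairs.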
Causal Reasoning and Large Language Models: Opening a New Frontier for Causality, Emre Kıcıman+, N/A, arXiv'23
May 4, 2023
URL
Affiliations
Abstract
Translation (by gpt-3.5-turbo)
Crucially, LLMs perform causal tasks while relying on sources of knowledge and methods distinct from non-LLM-based approaches. Specifically, LLMs possess capabilities so far restricted to humans, such as using collected knowledge to generate causal graphs or identifying background causal relationships from natural language. We envision LLMs being used alongside existing causal methods, serving as a proxy for human domain knowledge when setting up a causal analysis and reducing the human effort that is one of the biggest obstacles to the widespread adoption of causal methods. We also see existing causal methods as promising tools for LLMs to formalize, validate, and communicate their reasoning, especially in high-stakes scenarios.
By capturing common-sense and domain knowledge about causal mechanisms and supporting translation between natural language and formal methods, LLMs open a new frontier for the research, practice, and adoption of causality.
Summary (by gpt-3.5-turbo)