C4AI/llm_component_relation_classification

Computational argumentation is a promising enabler of valuable evidence-based discussions on topics from politics to health. This work presents an assessment of large language models on component relation classification, a key task in computational argumentation. We investigate context enrichment techniques, such as retrieving external information from knowledge graphs, and add both internal context (with sentiment analysis) and task context (with the few-shot technique). We combined these techniques using prompt engineering and chain of thought and applied them to three benchmark datasets. Experimental results demonstrate that our context enrichment strategies significantly improve the baseline F1-scores on all datasets. In particular, task context through few-shot examples proved to be the most effective method of contextual enrichment, surpassing the other methods in performance. This research provides insights into the effective use of large language models for complex natural language understanding, while noting that there is no definitive technique for prompt engineering in component relation classification.

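As an illustration of the few-shot (task context) and sentiment-based (internal context) enrichment strategies described above, the sketch below assembles a classification prompt for a single pair of argument components. The example arguments, the three-label scheme, and the `build_prompt` helper are illustrative assumptions for this sketch, not code or data taken from this repository.

```python
# Minimal sketch of few-shot prompt construction for component relation
# classification (support / attack / no relation). The demonstration pairs
# and sentiment annotation below are placeholders, not the paper's datasets.

FEW_SHOT_EXAMPLES = [
    {
        "source": "School uniforms reduce peer pressure over clothing.",
        "target": "Schools should require uniforms.",
        "relation": "support",
    },
    {
        "source": "Uniforms restrict students' freedom of expression.",
        "target": "Schools should require uniforms.",
        "relation": "attack",
    },
]


def build_prompt(source: str, target: str, sentiment: str | None = None) -> str:
    """Assemble a few-shot, chain-of-thought style prompt for one component pair."""
    lines = [
        "Classify the relation between two argument components as "
        "'support', 'attack', or 'no relation'. Think step by step before answering.",
        "",
    ]
    # Task context: a few labeled demonstrations.
    for ex in FEW_SHOT_EXAMPLES:
        lines += [
            f"Source: {ex['source']}",
            f"Target: {ex['target']}",
            f"Relation: {ex['relation']}",
            "",
        ]
    # The pair to classify.
    lines += [f"Source: {source}", f"Target: {target}"]
    if sentiment is not None:
        # Internal context: sentiment of the source component, if available.
        lines.append(f"Source sentiment: {sentiment}")
    lines.append("Relation:")
    return "\n".join(lines)


if __name__ == "__main__":
    print(build_prompt(
        "Uniforms are expensive for low-income families.",
        "Schools should require uniforms.",
        sentiment="negative",
    ))
```

The resulting string would then be sent to the chosen large language model; retrieval from a knowledge graph could be added as a further context block before the pair to classify.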
About

Presents an assessment of component relation classification using LLMs
