Computational argumentation is a promising enabler of valuable evidence-based discussions on topics from politics to health. This work presents an assessment of large language models on component relation classification, a key task in computational argumentation. We investigate context enrichment techniques, such as retrieving external information from knowledge graphs, and add both internal context (with sentiment analysis) and task context (with the few-shot technique). We combined those techniques using prompt engineering and chain of thought and applied them to three benchmark datasets. Experimental results demonstrate that our context enrichment strategies significantly improve the baseline F1-scores on all datasets. Specifically, task context through few-shot prompting proved to be the most effective method for contextual enrichment, surpassing the other methods in performance. This research provides insights into the effective use of large language models for complex natural language understanding, while noting that there is no definitive technique for prompt engineering in component relation classification.
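As a rough illustration of the enrichment techniques named in the abstract, the sketch below builds a few-shot prompt (task context) for component relation classification and annotates the input pair with a sentiment label (internal context). All example components, labels, and the toy sentiment function are hypothetical, not taken from the paper or its datasets; a real pipeline would use a proper sentiment analyzer and examples drawn from the benchmark.

```python
# Hypothetical sketch: few-shot prompt construction for component relation
# classification. Example arguments and labels are illustrative only.

FEW_SHOT_EXAMPLES = [
    {
        "arg1": "Smoking should be banned in public parks.",
        "arg2": "Second-hand smoke harms bystanders.",
        "relation": "support",
    },
    {
        "arg1": "Smoking should be banned in public parks.",
        "arg2": "Adults are free to make their own health choices.",
        "relation": "attack",
    },
]

def naive_sentiment(text: str) -> str:
    """Toy stand-in for a real sentiment analyzer (internal context)."""
    negative_cues = ("harm", "ban", "against", "never")
    return "negative" if any(c in text.lower() for c in negative_cues) else "positive"

def build_prompt(arg1: str, arg2: str) -> str:
    """Assemble a few-shot prompt ending at the slot the LLM should fill."""
    lines = [
        "Classify the relation between two argument components "
        "as 'support' or 'attack'.\n"
    ]
    for ex in FEW_SHOT_EXAMPLES:  # task context: labeled demonstrations
        lines.append(f"Component 1: {ex['arg1']}")
        lines.append(f"Component 2: {ex['arg2']}")
        lines.append(f"Relation: {ex['relation']}\n")
    # internal context: attach sentiment labels to the query pair
    lines.append(f"Component 1: {arg1} (sentiment: {naive_sentiment(arg1)})")
    lines.append(f"Component 2: {arg2} (sentiment: {naive_sentiment(arg2)})")
    lines.append("Relation:")
    return "\n".join(lines)

prompt = build_prompt(
    "Vaccines should be mandatory for school entry.",
    "Mandates reduce outbreaks of preventable diseases.",
)
print(prompt)
```

The resulting string would then be sent to an LLM, whose completion after the final "Relation:" line is taken as the predicted class.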
C4AI/llm_component_relation_classification
About
Presents an assessment of component relation classification using LLMs.