
Large Language Model Reasoning Evaluation Framework

Life is chaotic and this is moving slower than expected. For ideas or thoughts, let me know.

Project Overview

Assess and interpret the reasoning capabilities of large language models.

Core Research Objectives

Reasoning Assessment Dimensions

  • Logical Reasoning: evaluation of deductive and inductive reasoning (a scoring sketch follows this list)
  • Causal Inference: mapping reasoning pathways and decision-making processes
  • Semantic Understanding: analyzing the depth and nuance of contextual comprehension across varied text prompts
  • Multimodal Testing / Input-Structure Variation: TBD
  • ...: TBD
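
A minimal sketch of how a logical-reasoning item might be scored, assuming a generic query_model() wrapper around whichever LLM is under test; the two items and the exact-match rule are illustrative placeholders, not the framework's benchmark:

# Logical-reasoning evaluation sketch.
# Assumptions: query_model(prompt) wraps the LLM API under test (OpenAI, HF pipeline, local model, ...);
# the two example items are illustrative, not the framework's actual test set.

from dataclasses import dataclass

@dataclass
class ReasoningItem:
    prompt: str    # question posed to the model
    expected: str  # gold answer used for matching
    kind: str      # "deductive" or "inductive"

ITEMS = [
    ReasoningItem(
        prompt="All squares are rectangles. Shape S is a square. Is S a rectangle? Answer yes or no.",
        expected="yes",
        kind="deductive",
    ),
    ReasoningItem(
        prompt="The sequence is 2, 4, 8, 16. What is the next number?",
        expected="32",
        kind="inductive",
    ),
]

def query_model(prompt: str) -> str:
    """Placeholder for the model call under test."""
    raise NotImplementedError

def evaluate(items=ITEMS) -> dict:
    """Score each item by substring match on the gold answer, aggregated per reasoning kind."""
    totals, hits = {}, {}
    for item in items:
        answer = query_model(item.prompt).strip().lower()
        totals[item.kind] = totals.get(item.kind, 0) + 1
        hits[item.kind] = hits.get(item.kind, 0) + int(item.expected in answer)
    return {kind: hits[kind] / totals[kind] for kind in totals}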

Methodological Approach

Interpretability Techniques

  • Mechanistic Interpretability

    • Neuron-level analysis / attention-mechanism visualization / ... (an attention-extraction sketch follows this list)
  • Explainability Frameworks

    • Gradient-based attribution mapping, LIME, SHAP (TODO: cite)
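
A sketch of attention-weight extraction with Hugging Face Transformers, assuming gpt2 as a stand-in model; which layers and heads the framework will actually visualize is still open:

# Attention-extraction sketch (mechanistic interpretability / attention visualization).
# Assumption: gpt2 is only a stand-in; any model accepting output_attentions=True behaves similarly.

import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "gpt2"  # illustrative choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_attentions=True)
model.eval()

text = "If all squares are rectangles, then a square is a rectangle."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each shaped (batch, num_heads, seq_len, seq_len).
last_layer = outputs.attentions[-1][0]  # drop the batch dimension
head_mean = last_layer.mean(dim=0)      # average attention over heads

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(len(outputs.attentions), "layers,", last_layer.shape[0], "heads per layer")
print("Token most attended to by the final token:", tokens[head_mean[-1].argmax().item()])

The averaged matrix (or any individual head) can then be plotted as a heatmap over the token axes.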

Evaluation Metrics

Reasoning Performance Indicators

Metric | Description | Measurement Approach
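
The table above is still empty; as a placeholder, here is a sketch of one indicator that would plausibly fit its columns, final-answer exact match, assuming answers can be extracted from free-form model output (the extraction rule below is a naive illustration):

# Hypothetical indicator sketch: the extraction rule is a naive placeholder
# until the metric table above is filled in.

import re
from typing import Iterable

def extract_answer(text: str) -> str:
    """Take the last number in the output, or else the last word, lowercased."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", text)
    return numbers[-1] if numbers else text.strip().split()[-1].lower()

def exact_match_rate(predictions: Iterable[str], references: Iterable[str]) -> float:
    """Fraction of predictions whose extracted answer equals the reference answer."""
    pairs = list(zip(predictions, references))
    hits = sum(extract_answer(p) == r.strip().lower() for p, r in pairs)
    return hits / len(pairs) if pairs else 0.0

print(exact_match_rate(["The next number is 32.", "Answer: no"], ["32", "yes"]))  # -> 0.5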

Future Directions

  • Advanced multi-modal reasoning evaluations
  • Cross-model comparative studies
  • Development of novel interpretability techniques

Getting Started

git clone https://github.com/CorpaciLC/LLM-reasoning.git
cd LLM-reasoning
pip install -r requirements.txt
python setup.py develop
