CLIFT: A Benchmark for Contextual Learning across Inference, Format, and Transfer

Hugging Face — CLIFT

CLIFT Benchmark Overview (figure)

Install

Install uv and run:

uv sync

Core generation works without CLRS. The insertion_sort and binary_search tasks use the optional clrs sampler; install it with:

uv sync --extra clrs

Development tools (pytest, ruff):

uv sync --group dev --extra clrs

Dataset

The prebuilt evaluation matrix is in data/:

  • data/clift.jsonl — one JSON object per line. Each row is a full instance.
  • data/manifest.json — generation parameters, expected line count, and a SHA-256 of the canonical JSONL payload.

This snapshot uses 10 instances per (task, format, application, difficulty) cell, master seed 42, and all tasks in clift.common.TASKS. It contains 5160 records.
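Since each line of data/clift.jsonl is a standalone JSON object, the file can be consumed with nothing but the standard library. The sketch below is a minimal example of that pattern; the sample rows and the field names (task, format, application, difficulty) are assumptions inferred from the cell description above, not a confirmed schema.

```python
import json
from collections import Counter
from io import StringIO

# Hypothetical sample rows standing in for data/clift.jsonl,
# which holds one JSON object per line. Field names are assumptions.
sample = StringIO(
    '{"task": "insertion_sort", "format": "text", "application": "a", "difficulty": "easy"}\n'
    '{"task": "binary_search", "format": "text", "application": "a", "difficulty": "easy"}\n'
)

# Parse one instance per non-empty line.
rows = [json.loads(line) for line in sample if line.strip()]

# Count instances per (task, format, application, difficulty) cell.
cells = Counter(
    (r["task"], r["format"], r["application"], r["difficulty"]) for r in rows
)
print(len(rows), len(cells))
```

Swapping `sample` for `open("data/clift.jsonl")` applies the same loop to the real snapshot.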

Regenerating your own dataset

  1. Install with the CLRS extra so the full task list can be built:

    uv sync --extra clrs
  2. Generate and export:

    from clift.common import export_jsonl
    from clift.data import generate_clift_dataset
    
    instances = generate_clift_dataset(n_instances_per_cell=10, seed=42)
    export_jsonl(instances, "data/clift.jsonl")
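Since data/manifest.json records an expected line count and a SHA-256 of the canonical JSONL payload, a regenerated file can be checked against it. The sketch below shows that check on hypothetical instances; the exact serialization used by export_jsonl (key ordering, whitespace) is an assumption, so match the real manifest's conventions when comparing digests.

```python
import hashlib
import json
import tempfile
from pathlib import Path

# Hypothetical instances; generate_clift_dataset would return the real ones.
instances = [
    {"task": "binary_search", "difficulty": "easy"},
    {"task": "insertion_sort", "difficulty": "hard"},
]

# Write one JSON object per line, mimicking the JSONL export.
# sort_keys is an assumption about the canonical form.
path = Path(tempfile.mkdtemp()) / "clift.jsonl"
with path.open("w") as f:
    for inst in instances:
        f.write(json.dumps(inst, sort_keys=True) + "\n")

# Hash the raw payload and count lines, as a manifest check would.
payload = path.read_bytes()
digest = hashlib.sha256(payload).hexdigest()
n_lines = payload.count(b"\n")
print(n_lines, digest[:8])
```

Comparing `n_lines` and `digest` against the fields in data/manifest.json confirms the regeneration reproduced the published snapshot.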
