stanley-fork
Popular repositories
- spacebarchat-client Public Forked from spacebarchat/client
Open source, themeable and extendable Discord-compatible native Spacebar client
TypeScript (1 star)
- llama3_interpretability_sae Public Forked from PaulPauls/llama3_interpretability_sae
A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and fully reproducible.
Python (1 star)
- cpp-static-todo Public Forked from aurelienrb/cpp-static-todo
C++ TODO / FIXME macros that expire (fail to compile) after a specific date
C++
- ibus-teni Public Forked from rinleit/ibus-teni
A Vietnamese input method for Linux running on IBus
Go
- mender-artifact Public Forked from mendersoftware/mender-artifact
Library for managing Mender artifact files
Go
- mender-cli Public Forked from mendersoftware/mender-cli
A general-purpose CLI for the Mender backend
Go
Repositories
- zForth Public Forked from zevv/zForth
zForth: tiny, embeddable, flexible, compact Forth scripting language for embedded systems
- hf-datasets Public Forked from huggingface/datasets
🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
- awesome-cpp Public Forked from fffaraz/awesome-cpp
A curated list of awesome C++ (or C) frameworks, libraries, resources, and shiny things. Inspired by awesome-... stuff.
- AdGuardHome Public Forked from AdguardTeam/AdGuardHome
Network-wide ads & trackers blocking DNS server
- khoj Public Forked from khoj-ai/khoj
Your AI second brain. Self-hostable. Get answers from the web or your docs. Build custom agents, schedule automations, do deep research. Turn any online or local LLM into your personal, autonomous AI (gpt, claude, gemini, llama, qwen, mistral). Get started - free.
- dspy Public Forked from stanfordnlp/dspy
DSPy: The framework for programming—not prompting—language models
- distributed-llama Public Forked from b4rtaz/distributed-llama
Run LLMs on weak devices or make powerful devices even more powerful by distributing the workload and dividing the RAM usage.
People
This organization has no public members.