The first open-source, exascale-ready foundation model designed not just to chat, but to act, build, and control.
An advanced open-source AI model β built for programming, reasoning, and security. Made with β€οΈ by the AlphaExaAI team.
ExaMind is our first publicly released model β an advanced conversational AI built on the Qwen2 architecture with 7.62 billion parameters (~8B). It was fine-tuned by the AlphaExaAI team with a focus on:
- π₯οΈ Advanced Programming β Code generation, debugging, architecture design
- π§© Complex Problem Solving β Multi-step logical reasoning
- π Security-First Design β 92% prompt injection resistance rate
- π Multilingual β Supports all major world languages
- β‘ CPU Deployable β No GPU required
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("AlphaExaAI/ExaMind")
tokenizer = AutoTokenizer.from_pretrained("AlphaExaAI/ExaMind")
messages = [{"role": "user", "content": "Explain how to secure a REST API."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
outputs = model.generate(inputs, max_new_tokens=512, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))π Full documentation, benchmarks, and usage guide: huggingface.co/AlphaExaAI/ExaMind
| Benchmark | Score |
|---|---|
| MMLU β World Religions (0-shot) | 94.8% |
| MMLU β Overall (5-shot) | 72.1% |
| HumanEval pass@1 | 79.3% |
| MBPP pass@1 | 71.8% |
| GSM8K (8-shot CoT) | 82.4% |
| ARC-Challenge (25-shot) | 68.4% |
| HellaSwag (10-shot) | 78.9% |
| Prompt Injection Resistance | 92% |
The era of "chatbots" is over. AlphaExaAI is built for the era of Agentic Intelligence.
While models like GPT-5 or Gemini 3 focus on conversation, AlphaExaAI is engineered for execution. It is designed to be the "brain" behind autonomous agents, capable of complex reasoning, software engineering, system control, and multimodal creation.
ExaMind supports up to 128K tokens with RoPE scaling. Ingest entire codebases, documentation, or datasets in a single prompt.
Using advanced LoRA and efficient fine-tuning, ExaMind achieves remarkable performance while consuming a fraction of the compute resources.
ExaMind is the foundation. Coming soon:
- ExaMind-Code: Specialized coding variant
- ExaMind-Vision: Multimodal capabilities
- ExaMind V3: Extended context, improved reasoning
AlphaExaAI models are trained on real-world operational data, giving them practical intelligence that academic models lack.
- ExaMind V1 β Initial research release
- ExaMind β π LIVE NOW on Hugging Face
- ExaMind V2-GGUF β Quantized versions for efficient CPU inference
- ExaMind V3 β Extended 128K context, enhanced reasoning
- ExaMind-Code β Specialized coding model
- ExaMind-Vision β Multimodal capabilities
- AlphaExaAI 250B β The ultimate open-source frontier model
We believe in open science. That's why we open-source not just the models, but the data used to train them.
The official training dataset for ExaMind, covering 17 domains and 35 languages.
| Stats | Details |
|---|---|
| Samples | 569,193 |
| Size | 404 MB |
| Focus | Debugging (17%), Math (16%), Coding (14%), Agents (9%) |
| Languages | 35+ (English, Arabic, Chinese, etc.) |
AlphaExaAI is a community-driven project. We believe the best AI is built together.
| Who | How You Can Help |
|---|---|
| π§βπ» Developers | Code contributions, bug fixes, tooling |
| π¬ Research Teams | Benchmarking, evaluation, novel training methods |
| π« Universities | Academic research, student projects, compute partnerships |
| π’ Organizations | Resource sponsorship, infrastructure support |
| π Community Members | Documentation, translations, tutorials, feedback |
- Fork this repository
- Create a feature branch (
git checkout -b feature/amazing-feature) - Commit your changes (
git commit -m 'Add amazing feature') - Push to the branch (
git push origin feature/amazing-feature) - Open a Pull Request
See CONTRIBUTING.md for detailed guidelines.
We actively welcome academic collaborations! If you're a university or research lab interested in:
- π¬ Using ExaMind for research
- π₯οΈ Contributing compute resources
- π Co-authoring papers
- π Student projects
Please open a collaboration issue or email us at h.hleli@tuta.io.
Building open-source AI requires significant resources. If you believe in our mission, consider supporting us:
| Platform | Link |
|---|---|
| GitHub Sponsors | Sponsor @hleliofficiel |
| Buy Me a Coffee | Coming soon |
| Bitcoin (BTC) | Coming soon |
| Ethereum (ETH) | Coming soon |
We especially welcome compute resource donations:
- GPU/TPU cloud credits (AWS, GCP, Azure, Lambda Labs)
- Access to HPC clusters for training
- Storage and bandwidth for model distribution
Every contribution, no matter how small, helps advance open-source AI. β€οΈ
A heartfelt thank you to everyone who made ExaMind possible:
| Contributor | Role |
|---|---|
| @hleliofficiel | Lead Researcher & Founder |
| @imec-idlab | Resource Contributor & Infrastructure Partner |
| @HuggingFace | Model hosting & open-source AI infrastructure |
| @DarekHub | Data Contributor |
| Kaitlyn Truby | Researcher |
| AlphaExaAI Community | Testing, feedback, and support |
| Qwen Team | Base model architecture |
This project was built with love, late nights, and an obsession with pushing the boundaries of open-source AI. β€οΈ
| Resource | URL |
|---|---|
| π€ Hugging Face Model | AlphaExaAI/ExaMind |
| π€ Organization | huggingface.co/AlphaExaAI |
| π» GitHub | github.com/hleliofficiel/AlphaExaAI |
| π§ Email | h.hleli@tuta.io |
Apache License 2.0 β Free for commercial use, modification, and distribution.
- β Commercial use
- β Modification & distribution
- β Patent grant
- β Private use
See LICENSE for full text.
Built with β€οΈ by AlphaExaAI Team β 2026
Advancing open-source AI, one model at a time.