This SLM catalog covers open-weight models at roughly the 7B-parameter scale (plus smaller variants), highlighting permissive licenses, active communities, and low incident rates. Data draws on prior incident and literature reports; commercial use is often allowed but may require attribution; most models are accessible via Hugging Face.
| Model | Model Card Summary | License | Literature (Key Dates) | Incidents (Key Dates) | Commercial Restrictions | Attribution | Community Activity | Accessibility |
|---|---|---|---|---|---|---|---|---|
| Llama 3 (Meta, Apr 2024) | 8B base/instruct; multilingual; strong coding/math. Key: Safety-tuned; pairs with Llama Guard. | Llama 3 Community License (permissive, research+commercial) | "Llama 3 Herd" paper (Jul 2024) [11]; Lakera risk report (May 2025) [12] | Jailbreak (Apr 2024) [13]; CVE-2024-50050 RCE (Sep-Oct 2024) [14] | None major; accept terms | "Powered by Llama" optional | High: 100M+ HF downloads; Ollama integration | HF: meta-llama/Llama-3-8B; Docs |
| Llama 4 (Meta, Apr 2025) | Scout/Maverick (MoE, multimodal); 10M-token context (Scout). Key: Frontier-level scale. | Llama 4 Community License | "Llama 4 Challenges" (Apr 2025) [15]; Multimodal AI paper (Apr 2025) [16] | Jailbreak 67% ASR Scout (Jul 2025) [17]; CVE-2025-53002 (Jun 2025) [18] | Separate license required above 700M MAU | Recommended badge | Very High: Rapid adoption post-launch | HF: meta-llama/Llama-4-Scout; Site |
| Code Llama (Meta, Aug 2023) | 7/13/34/70B code-specialized from Llama 2. Key: Fill-in-the-middle infilling. | Same as Llama 2 Community | PatchLM fixes paper (2025) [19]; Vuln detection (Sep 2025) [20] | Framework RCE (Jan 2025) [21]; no model-specific incidents | None | "Built with Code Llama" | High: GitHub coding leader | HF: codellama/CodeLlama-7b; Docs |
| Mistral 7B (Mistral AI, Sep 2023) | Efficient dense model; sliding-window attention; v0.3 adds extended vocab and function calling. Key: Tops open LLM Arena. | Apache 2.0 (most permissive) | Mistral frontier news (Sep 2023) [22] | CSAM gen in Pixtral (Jul 2025) [23]; policy update (Dec 2024) [24] | None | None required | Very High: Ollama default; 50M+ downloads | HF: mistralai/Mistral-7B-v0.3; Docs |
| MPT-7B (MosaicML/Databricks, May 2023) | Trained on 1T tokens; 65k context via ALiBi (StoryWriter variant). Key: Early long-context. | Apache 2.0 | Databricks blog (Nov 2025 update) [25]; CyberLLM safety (Aug 2024) [26] | None specific; OWASP general (2023+) [27] | None | None | Medium: Legacy but GGUF active | HF: mosaicml/mpt-7b; Blog |
| Gemma-7B (Google, Feb 2024) | Gemini-derived; safety classifiers. Key: Lightweight rival to 70B. | Gemma License (commercial ok) | Tech report (Mar 2024) [28]; Cyber incidents (Apr 2025) [29] | Gemini Trifecta indirect (Oct 2025) [30]; no direct | High-stakes review advised | "Made with Gemma" | High: Google ecosystem push | HF: google/gemma-7b [6]; Docs |
| Phi-3 Mini (Microsoft, Apr 2024) | 3.8B SLM; synthetic data tuned. Key: Edge-first safety. | MIT (highly permissive) | Tech report (Apr 2024) [31]; Azure safety (Jan 2025) [32] | None reported; strong RLHF [33] | Production safeguards urged | None | High: Azure/ONNX optimized | HF: microsoft/Phi-3-mini; Docs |
| StableLM 7B (Stability AI, Apr 2023) | Early open chat-tuned model. Key: Community fine-tunes. | Stability AI License (CC-BY-SA-like) | Safety Index evals (Winter 2025) [34] | None specific; GenAI patterns [35] | Tuned variants non-commercial (CC BY-NC-SA); verify per checkpoint | Attribution required | Medium: Zephyr variant active | HF: stabilityai/stablelm-7b; Docs |
| Qwen 7B (Alibaba, Aug 2023+) | Multilingual dense model; strong math/coding. Key: Chinese-English balance. | Apache 2.0 | Qwen2 tech report (Jun 2024) | Low jailbreak resistance (2024 evals) | None | Optional | High: Asia dev focus | HF: Qwen/Qwen-7B; Site |
| StarCoder/Base (BigCode, May 2023) | 15B code (use 7B equiv); 80+ langs. Key: Permissive code data. | BigCode OpenRAIL-M (restrict harmful) | StarCoder paper (Oct 2023) | Code vuln gen risks (2024) | No illegal code use | Cite BigCode | High: Coding hub | HF: bigcode/starcoder; Docs |
| CodeGen-2 (Salesforce, 2022-2023) | Multi-turn code generation; 7B/16B. Key: Early infill. | Salesforce License (research focus) | CodeGen paper (Mar 2022); CodeGen2 (May 2023) | General code risks | Commercial limited | Attribution | Low-Medium: Legacy | HF: Salesforce/codegen-7B |
| CodeParrot (HF, Mar 2022) | GitHub-trained code. Key: Repo-level. | Apache 2.0 | CodeParrot paper (2022) | Vuln insertion (2023 studies) | None | None | Low: Superseded | HF: codeparrot/codeparrot |
| RWKV 7B (RWKV, 2023 variants) | Attention-free RNN with transformer-level quality; unbounded context. Key: GPU-efficient. | Apache 2.0 | RWKV v4 paper (2023); v6 (2024) | Few; stable RNN arch | None | Cite RWKV | Medium: RNN niche | HF: RWKV/rwkv-7b; Docs |
| Falcon-7B (TII, May 2023) | RefinedWeb trained. Key: Early SOTA open. | Apache 2.0 | Falcon paper (May 2023) | Early jailbreaks (2023) | None | None | Medium: Chat variants | HF: tiiuae/falcon-7b; Site |
| RedPajama-7B (Together, Mar 2023) | Open LLaMA reproduction; 1T tokens. Key: Open data recipe. | Apache 2.0 | RedPajama paper (Apr 2023) | Data poisoning risks | None | Cite dataset | Medium: Base for fine-tunes | HF: togethercomputer/RedPajama |
- Safest for Commercial: Mistral / Phi-3 / Qwen (Apache 2.0 or MIT; no usage restrictions).
- Highest Incidents: Llama series (framework CVEs, jailbreaks); code models risk vuln gen.
- Most Active Communities: Llama/Mistral/Gemma (millions of HF downloads; Ollama defaults).
- Edge-Friendly: Phi-3/Gemma (quantized, <5GB RAM).
- Download Tip: All primary weights are on Hugging Face; use `ollama pull` for local runs (e.g., `ollama pull mistral:7b`). Check licenses before redistributing derivatives. [3]
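As a quick sanity check on the license column, here is a minimal sketch of filtering the catalog for models whose licenses carry no usage restrictions. The `CATALOG` rows are a hypothetical partial transcription of the table above, and the permissive set is an assumption of this sketch; always read the actual license text before shipping.

```python
# Minimal sketch: filter the catalog for commercially unrestricted licenses.
# CATALOG is a partial, illustrative transcription of the table above.
PERMISSIVE = {"Apache-2.0", "MIT"}  # no field-of-use restrictions

CATALOG = [
    {"model": "Llama 3 8B",  "license": "Llama 3 Community"},
    {"model": "Mistral 7B",  "license": "Apache-2.0"},
    {"model": "Phi-3 Mini",  "license": "MIT"},
    {"model": "Qwen 7B",     "license": "Apache-2.0"},
    {"model": "StableLM 7B", "license": "CC-BY-SA-like"},
]

def commercially_safe(catalog):
    """Return model names whose license is in the permissive set."""
    return [row["model"] for row in catalog if row["license"] in PERMISSIVE]

print(commercially_safe(CATALOG))
# -> ['Mistral 7B', 'Phi-3 Mini', 'Qwen 7B']
```

Community licenses (Llama, Gemma) do permit commercial use but attach acceptable-use terms and, for Llama, a user-scale threshold, so they are deliberately left out of the strictly permissive set here.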