internlm
Here are 16 public repositories matching this topic...
InternLM fine-tuning
Updated Apr 22, 2024 - Python
Speed benchmarking a 7B LLM on different gcloud VMs (using llama.cpp); a minimal timing sketch follows below.
Updated Jul 23, 2024 - Python
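For context on what such a benchmark measures, here is a minimal tokens-per-second timing sketch using the llama-cpp-python bindings rather than the repository's own scripts; the model path, context size, and thread count are placeholders, not values taken from the project.

import time
from llama_cpp import Llama  # Python bindings for llama.cpp

# Placeholder model path and settings; adjust for the VM under test.
llm = Llama(model_path="./models/7b-model.gguf", n_ctx=2048, n_threads=8)

start = time.perf_counter()
result = llm("Explain what a GGUF file is.", max_tokens=128)
elapsed = time.perf_counter() - start

generated = result["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.2f}s -> {generated / elapsed:.1f} tok/s")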
🐋 MindChat (漫谈): a mental-health large language model for chatting through life's journey and facing its hardships with a smile.
Updated Sep 13, 2024 - Python
OpenAI-style API for open large language models, letting you use them just like ChatGPT (a minimal client sketch follows below). Supports LLaMA, LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, Xverse, SqlCoder, CodeLLaMA, ChatGLM, ChatGLM2, ChatGLM3, etc. A unified backend interface for open-source large models.
Updated Sep 26, 2024 - Python
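To illustrate what an OpenAI-compatible backend makes possible, here is a minimal sketch using the official openai Python client pointed at a locally served model; the base URL, API key, and model name are assumptions, not values taken from the project.

from openai import OpenAI

# Point the standard OpenAI client at the local server (placeholder URL and key).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="internlm-chat-7b",  # example name; use whatever the server exposes
    messages=[{"role": "user", "content": "Briefly introduce InternLM."}],
)
print(response.choices[0].message.content)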
Firefly: a training toolkit for large models, supporting Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models.
Updated Oct 24, 2024 - Python
An efficient, flexible, and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
Updated Nov 8, 2024 - Python
LMDeploy is a toolkit for compressing, deploying, and serving LLMs; a minimal usage sketch follows below.
Updated Nov 18, 2024 - Python
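As a rough sketch of running inference through LMDeploy's Python pipeline API; the model identifier is an example, and argument names may vary between versions.

from lmdeploy import pipeline

# Build an inference pipeline for an example InternLM chat model
# (downloaded from the Hugging Face Hub by default).
pipe = pipeline("internlm/internlm2-chat-7b")

# Run a batch of prompts; each item in the result is a generated response.
responses = pipe(["Hi, please introduce yourself.", "What is LMDeploy used for?"])
print(responses)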
InternEvo is an open-source, lightweight training framework that aims to support model pre-training without extensive dependencies.
Updated Nov 18, 2024 - Python