Jan
An open source alternative to OpenAI that runs on your own computer or server
Repositories
Showing 10 of 35 repositories
- cortex.llamacpp Public
cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server at runtime (see the runtime-loading sketch after the repository list).
- cortex.tensorrt-llm Public Forked from NVIDIA/TensorRT-LLM
cortex.tensorrt-llm is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA's TensorRT-LLM for GPU-accelerated inference on NVIDIA GPUs.
- winget-pkgs Public
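Both cortex.llamacpp and cortex.tensorrt-llm follow the same pattern: the host server opens the engine as a shared library at runtime and resolves its entry points. Below is a minimal sketch of that pattern on Linux using dlopen/dlsym; the library path `./libengine.so` and the `create_engine` symbol are hypothetical placeholders for illustration, not the actual cortex interface.

```cpp
// Hypothetical sketch: a server loading an inference engine shared library
// at runtime and resolving a factory symbol. Names are illustrative only.
#include <dlfcn.h>
#include <cstdio>

int main() {
  // Open the engine's shared object at runtime (path is hypothetical).
  void* handle = dlopen("./libengine.so", RTLD_NOW | RTLD_LOCAL);
  if (!handle) {
    std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
    return 1;
  }

  // Resolve a hypothetical factory function exported by the engine.
  using CreateEngineFn = void* (*)();
  auto create_engine =
      reinterpret_cast<CreateEngineFn>(dlsym(handle, "create_engine"));
  if (!create_engine) {
    std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
    dlclose(handle);
    return 1;
  }

  // The server would now hold this engine instance and route requests to it.
  void* engine = create_engine();
  std::printf("engine loaded at %p\n", engine);

  dlclose(handle);
  return 0;
}
```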