Releases: smallcloudai/refact

server/v1.11.2

17 Jun 07:38
  • Parse the thinking part of responses from third-party models (Qwen and DeepSeek reasoning styles)
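
The release notes do not show the parser itself, but DeepSeek- and Qwen-style reasoning models wrap their chain of thought in <think>...</think> delimiters ahead of the final answer. A minimal sketch of splitting the two parts (the function name is illustrative, not the server's actual code):

```python
import re

# A minimal sketch, not the server's actual parser: DeepSeek- and
# Qwen-style reasoning models emit their chain of thought inside
# <think>...</think> before the final answer.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_thinking(text: str) -> tuple[str, str]:
    """Return (thinking, answer); thinking is empty if no block is present."""
    match = THINK_RE.search(text)
    if match is None:
        return "", text
    thinking = match.group(1).strip()
    answer = (text[:match.start()] + text[match.end():]).strip()
    return thinking, answer

thinking, answer = split_thinking("<think>2 + 2 = 4</think>The answer is 4.")
print(thinking)  # 2 + 2 = 4
print(answer)    # The answer is 4.
```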

server/v1.11.0

09 May 06:06
  • Configurable headers for third-party models (see the sketch after this list)
  • Dropped support for the refact, starcoder2 and deepseek-coder models
  • [experimental] Refact proxy: a lightweight version of the server that only proxies third-party models
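
The configuration format is not shown in the notes, but since third-party chat models are routed through litellm, custom headers can be forwarded per request; a minimal sketch, assuming litellm's extra_headers pass-through (model name and header are placeholders):

```python
import litellm

# A minimal sketch, assuming litellm's extra_headers pass-through;
# the model name and header below are placeholders, not Refact's
# actual configuration keys.
response = litellm.completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
    extra_headers={"X-Org-Token": "replace-me"},
)
print(response.choices[0].message.content)
```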

server/v1.10.0

17 Apr 09:50
  • Configurable Third-Party APIs: set up any chat model supported by litellm (available from JetBrains 6.3.2, VSCode 6.3.5); see the sketch after this list
  • Model deprecation: starcoder2*, deepseek-coder*, refact
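
In practice, "any chat model supported by litellm" means the provider is selected purely by model name through litellm's unified completion() call; a rough sketch for illustration (the model name and API key are placeholders):

```python
import os
import litellm

# A rough sketch of the litellm interface this feature builds on:
# one completion() call, provider chosen by model name. The model
# name and key below are placeholders for illustration.
os.environ["ANTHROPIC_API_KEY"] = "replace-me"

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "Write a haiku about code review."}],
)
print(response.choices[0].message.content)
```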

server/v1.9.3

17 Apr 09:47

Features:

  • claude-3-7-sonnet and o3-mini models
  • Support for reasoning capabilities

v1.9.1

06 Feb 18:00

Features:

  • CPU embeddings model thenlper/gte-base/cpu (see the sketch after this list)
  • Manual weight upload for offline mode
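
How the server loads the embeddings model is not described here, but thenlper/gte-base is a standard sentence-embedding model from the Hugging Face hub; running it on CPU with sentence-transformers looks roughly like this (a sketch for illustration only):

```python
from sentence_transformers import SentenceTransformer

# A minimal sketch of running the gte-base embedding model on CPU;
# it illustrates the underlying model only, not the server's own
# loading code.
model = SentenceTransformer("thenlper/gte-base", device="cpu")
vectors = model.encode(
    ["def add(a, b): return a + b"],
    normalize_embeddings=True,
)
print(vectors.shape)  # (1, 768): gte-base produces 768-dimensional embeddings
```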

v1.9.0

04 Feb 12:15
7acd76d

Features:

  • New third-party providers: Gemini, Grok (xAI), OpenAI o1
  • Offline mode: the server now works with preloaded models without a connection to Hugging Face
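
The manual-upload workflow itself is server-specific, but the general offline mechanism is the standard Hugging Face one: weights must already be in the local cache and network lookups are disabled. A rough sketch using the usual environment variables (the model name is a placeholder):

```python
import os

# A rough sketch of the standard Hugging Face offline mechanism this
# feature relies on; the server's own implementation is not shown in
# the release notes, and the model name is a placeholder.
os.environ["HF_HUB_OFFLINE"] = "1"        # never contact the Hugging Face hub
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # load from the local cache only

from transformers import AutoTokenizer

# Succeeds only if the weights were downloaded or manually uploaded
# into the local cache beforehand.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-1.5B")
print(tokenizer("hello")["input_ids"])
```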

Minor fixes:

  • Fixed caps reload on model list change
  • Removed deprecated models

v1.8.0

03 Dec 15:03
1b094ba

Refact.ai Self-hosted:

  • CUDA and cuDNN Version Update: 11.8.0 -> 12.4.1
  • New models: llama3.1, llama3.2 and qwen2.5/coder families.
  • New provider support: 3rd-party APIs for Groq and Cerebras.
  • Support for multiline code completion models.

v1.7.0

20 Sep 17:05
5411936

Refact.ai Self-hosted:

  • New models: the latest OpenAI models are now available in Docker.
  • Tool usage support for 3rd-party models: turn on 3rd-party APIs to use the latest features of Refact (see the sketch after this list).
  • Removed deprecated models.
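
Tool usage here follows the OpenAI-style function-calling schema that litellm passes through to third-party providers; a minimal sketch (the tool definition and model name are illustrative placeholders):

```python
import litellm

# A minimal sketch of OpenAI-style tool calling passed through litellm;
# the tool definition and model name are illustrative placeholders, not
# Refact's built-in tools.
tools = [{
    "type": "function",
    "function": {
        "name": "lookup_definition",
        "description": "Find where a symbol is defined in the workspace",
        "parameters": {
            "type": "object",
            "properties": {"symbol": {"type": "string"}},
            "required": ["symbol"],
        },
    },
}]

response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Where is parse_config defined?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```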

v1.6.4

05 Jul 11:44

Refact.ai Self-hosted:

  • Claude-3.5 Sonnet Support: New model from Anthropic is now available in Docker.

Refact.ai Enterprise:

  • Llama3 vLLM Support: Added vLLM version of Llama-3-8B-Instruct for better performance.
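
How the Enterprise server embeds vLLM is not detailed in the notes, but the underlying library interface for this model looks roughly like the following sketch (sampling settings are placeholders):

```python
from vllm import LLM, SamplingParams

# A minimal sketch of the vLLM interface for this model; how the
# Enterprise server embeds vLLM is not described in the release notes,
# and the sampling settings are placeholders.
llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")
params = SamplingParams(temperature=0.2, max_tokens=128)
outputs = llm.generate(["Explain what a B-tree is in one sentence."], params)
print(outputs[0].outputs[0].text)
```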

v1.6.3

20 Jun 16:44

Refact.ai Self-hosted:

  • Llama3 8k Context: llama3 models now support 8k context.
  • Credentials Management: We added information about tokens and keys.
  • Deprecated Models: The models starcoder, wizardlm, and llama2 are deprecated and will be removed in the next release.

Refact.ai Enterprise:

  • Refact Model 4k Context: the refact model now supports a 4k context.