
Marketplace Search Results · topic:shell org:codacy

27.6k results

Some qualifiers in your query (topic, org) are not supported when searching the marketplace. Try searching for repositories instead.

A 398B parameters (94B active) multilingual model, offering a 256K long context window, function calling, structured output, and grounded generation.
  • By AI21 Labs
  • 262k input
  • 4k output

Excels at image reasoning on high-resolution images for visual understanding applications.
  • By Meta
  • 128k input
  • 4k output

DeepSeek-R1 excels at reasoning tasks such as language, scientific reasoning, and coding, using a step-by-step training process.
  • By DeepSeek
  • 128k input
  • 4k output

A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
  • By DeepSeek
  • 128k input
  • 4k output

JAIS 30b Chat is an auto-regressive bilingual LLM for Arabic & English with state-of-the-art capabilities in Arabic.
  • By Core42
  • 8k input
  • 4k output

A 52B parameters (12B active) multilingual model, offering a 256K long context window, function calling, structured output, and grounded generation.
  • By AI21 Labs
  • 262k input
  • 4k output

Codestral 25.01 by Mistral AI is designed for code generation, supports 80+ programming languages, and is optimized for tasks like code completion and fill-in-the-middle.
  • By Mistral AI
  • 256k input
  • 4k output

Command R is a scalable generative model targeting RAG and Tool Use to enable production-scale AI for enterprise.
  • By Cohere
  • 131k input
  • 4k output

Command R+ is a state-of-the-art RAG-optimized model designed to tackle enterprise-grade workloads.
  • By Cohere
  • 131k input
  • 4k output
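The input/output figures above are per-model context-window budgets (maximum input tokens and maximum output tokens). A minimal sketch of how a client might check a prompt against such limits before sending a request; the model keys and the 4-characters-per-token estimate are illustrative assumptions, not real identifiers or a real tokenizer:

```python
# Rough client-side context-window check for models like those listed above.
# The per-model limits come from the listing; the model keys are hypothetical
# placeholders, and the 4-chars-per-token estimate is a crude heuristic.

MODEL_LIMITS = {
    # hypothetical key: (max input tokens, max output tokens)
    "ai21-large-example": (262_000, 4_000),   # AI21 Labs entry above
    "deepseek-r1": (128_000, 4_000),          # DeepSeek entry above
    "codestral-25.01": (256_000, 4_000),      # Mistral AI entry above
    "command-r-plus": (131_000, 4_000),       # Cohere entry above
}

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_context(model: str, prompt: str, reserved_output: int = 1_000) -> bool:
    """Check whether a prompt (plus a reserved output budget) fits the model's window."""
    max_input, max_output = MODEL_LIMITS[model]
    if reserved_output > max_output:
        return False
    return estimate_tokens(prompt) <= max_input

print(fits_context("command-r-plus", "Summarise this report." * 10))
```

A real client would use the provider's tokenizer and the exact published limits rather than this heuristic, but the shape of the check is the same: input tokens plus reserved output tokens must stay within the advertised window.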