







Search results
The text-embedding-3 series models are the latest and most capable embedding models from OpenAI.
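Embedding models return numeric vectors, and a common downstream step is ranking texts by cosine similarity between their vectors. A minimal sketch (the toy 4-dimensional vectors below are stand-ins for illustration, not real model output; actual text-embedding-3 vectors have far more dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embedding output.
v1 = [0.1, 0.3, 0.5, 0.1]
v2 = [0.1, 0.3, 0.5, 0.1]
v3 = [0.9, -0.2, 0.0, 0.4]

print(cosine_similarity(v1, v2))  # identical vectors, so ≈ 1.0
print(cosine_similarity(v1, v3))  # dissimilar vectors score lower
```

Higher scores mean more semantically similar texts, which is the basis for search and retrieval over embeddings.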
OpenAI o3-mini
o3-mini includes the o1 features with significant cost-efficiencies for scenarios requiring high performance.
Same Phi-3-medium model, but with a larger context size for RAG or few-shot prompting.
OpenAI GPT-4o mini
An affordable, efficient AI solution for diverse text and image tasks.
OpenAI o1
Focused on advanced reasoning and solving complex problems, including math and science tasks. Ideal for applications that require deep contextual understanding and agentic workflows.
OpenAI o1-mini
Smaller, faster, and 80% cheaper than o1-preview; performs well at code generation and small-context operations.
OpenAI o1-preview
Focused on advanced reasoning and solving complex problems, including math and science tasks. Ideal for applications that require deep contextual understanding and agentic workflows.
OpenAI GPT-4o
OpenAI's most advanced multimodal model in the gpt-4o family. Can handle both text and image inputs.
A 7B-parameter model that delivers better quality than Phi-3-mini, with a focus on high-quality, reasoning-dense data.
A refresh of the Phi-3-mini model.
A new mixture-of-experts model.
A refresh of the Phi-3-vision model.
Phi-4
Phi-4 14B, a highly capable model for low-latency scenarios.
A 14B-parameter model that delivers better quality than Phi-3-mini, with a focus on high-quality, reasoning-dense data.
Same Phi-3-mini model, but with a larger context size for RAG or few-shot prompting.
The tiniest member of the Phi-3 family, optimized for both quality and low latency.
Same Phi-3-small model, but with a larger context size for RAG or few-shot prompting.
Phi-4-mini-instruct
A 3.8B-parameter small language model that outperforms larger models in reasoning, math, coding, and function calling.
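Function calling means the model emits a structured call (typically JSON naming a tool and its arguments) that the application then executes. A minimal dispatch sketch; the tool name, argument schema, and JSON shape here are hypothetical illustrations, not a specific API:

```python
import json

# Hypothetical tool registry: maps tool names the model may call to
# Python callables. "get_weather" is an illustrative stub, not a real API.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def dispatch(model_output: str) -> str:
    """Parse a JSON function-call emitted by the model and run the tool."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

result = dispatch('{"name": "get_weather", "arguments": {"city": "Paris"}}')
print(result)  # → Sunny in Paris
```

In a real loop, the tool's return value is sent back to the model so it can compose a final answer.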
The first small multimodal model with three modality inputs (text, audio, and image), excelling in quality and efficiency.