v0.1.8
New Models
- CodeBooga: A high-performing code instruct model created by merging two existing code models.
- Dolphin 2.2 Mistral: An instruct-tuned model based on Mistral. Version 2.2 is fine-tuned for improved conversation and empathy.
- MistralLite: A fine-tuned model based on Mistral with enhanced capabilities for processing long contexts.
- Yarn Mistral: An extension of Mistral that supports a context window of up to 128K tokens.
- Yarn Llama 2: An extension of Llama 2 that supports a context window of up to 128K tokens.
What's Changed
- Ollama will now honour large context sizes on models such as `codellama` and `mistrallite`
- Fixed issue where repeated characters would be output on long contexts
- `ollama push` is now much faster. 7B models will push at up to ~100MB/s and large models (70B+) at up to 1GB/s if network speeds permit
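To take advantage of the large-context support mentioned above, one way is to set the context size in a Modelfile. A minimal sketch (the model name and the 16384 value are illustrative choices, not recommended defaults):

```
# Build on a long-context base model
FROM mistrallite

# Request a 16K-token context window (illustrative value)
PARAMETER num_ctx 16384
```

Such a Modelfile can then be built into a local model with `ollama create <name> -f Modelfile`.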
New Contributors
- @dloss made their first contribution in #948
- @noahgitsham made their first contribution in #983
Full Changelog: v0.1.7...v0.1.8