Testing the Ollama AI on Spring Boot, with Kotlin and Swagger configured.

Adrianogba/Spring-Ollama


By default Ollama only uses your CPU for processing. If you have an NVIDIA or AMD GPU and want to use it as well, follow the steps at: https://hub.docker.com/r/ollama/ollama
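Assuming Docker is installed, the commands below are a minimal sketch of starting the Ollama container as described on the Docker Hub page linked above; the GPU variant additionally assumes the NVIDIA Container Toolkit is set up.

```shell
# Start Ollama on CPU only; the API will listen on localhost:11434
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# With an NVIDIA GPU (requires the NVIDIA Container Toolkit)
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```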

To download a model, run this in the terminal (downloading the model may take a while):

curl http://localhost:11434/api/pull -d '{"name": "llama2"}'

Or

curl http://localhost:11434/api/pull -d '{"name": "mistral"}'

The full list of available models is at: https://ollama.com/library
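Since the project uses Kotlin, the same pull request can also be issued from code. The sketch below uses the JDK's built-in `HttpClient` and hand-built JSON; it is a minimal illustration under those assumptions, not taken from this repository's actual service classes.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Build the JSON body expected by Ollama's /api/pull endpoint.
// Model names ("llama2", "mistral", ...) come from https://ollama.com/library
fun pullBody(model: String): String = """{"name": "$model"}"""

// POST the pull request; assumes Ollama is listening on localhost:11434.
// Returns the HTTP status code of the response.
fun pullModel(model: String): Int {
    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:11434/api/pull"))
        .POST(HttpRequest.BodyPublishers.ofString(pullBody(model)))
        .build()
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    return response.statusCode()
}
```

The same `HttpClient` pattern works for the other Ollama endpoints by changing the path and body.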
