Update ollama-curated.yaml #411
Conversation
Hey @HavenDV , Thanks for opening the PR, I appreciate it (and great to see cross-collaboration between other LangChain ports!). Your changes look good (apart from the comment about the streaming, which is a limitation on our side). I've reviewed the Ollama Go client and aligned some more fields that we were missing. It should be fully up-to-date now 🙂
@davidmigloz I found another problem: it seems the `name` parameter needs to be replaced with `digest` and declared as `in: path`. The current entry is meaningless because `{digest}` is never defined:

```yaml
/blobs/{digest}:
  head:
    operationId: checkBlob
    tags:
      - Models
    summary: Check to see if a blob exists on the Ollama server which is useful when creating models.
    parameters:
      - in: query
        name: name
        schema:
          type: string
        required: true
        description: the SHA256 digest of the blob
        example: sha256:c8edda1f17edd2f1b60253b773d837bda7b9d249a61245931a4d7c9a8d350250
    responses:
      '200':
        description: Blob exists on the server
      '404':
        description: Blob was not found
  post:
    operationId: createBlob
    tags:
      - Models
    summary: Create a blob from a file. Returns the server file path.
    parameters:
      - in: query
        name: name
        schema:
          type: string
        required: true
        description: the SHA256 digest of the blob
        example: sha256:c8edda1f17edd2f1b60253b773d837bda7b9d249a61245931a4d7c9a8d350250
    requestBody:
      content:
        application/octet-stream:
          schema:
            type: string
            format: binary
    responses:
      '201':
        description: Blob was successfully created
```
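For reference, the fix described in the comment above would move the parameter into the path. A minimal sketch of what the corrected `head` entry could look like (the `post` entry would change the same way; this is an illustration of the suggestion, not the final wording of the spec — note that OpenAPI requires path parameters to have `required: true` and a `name` matching the `{digest}` template):

```yaml
/blobs/{digest}:
  head:
    operationId: checkBlob
    tags:
      - Models
    summary: Check to see if a blob exists on the Ollama server which is useful when creating models.
    parameters:
      - in: path           # was: in: query
        name: digest       # was: name: name — must match {digest} in the path template
        schema:
          type: string
        required: true     # always required for path parameters
        description: the SHA256 digest of the blob
        example: sha256:c8edda1f17edd2f1b60253b773d837bda7b9d249a61245931a4d7c9a8d350250
    responses:
      '200':
        description: Blob exists on the server
      '404':
        description: Blob was not found
```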
Hello

While working on https://github.com/tryAGI/LangChain (hello from the .NET version) and the .NET SDK (https://github.com/tryAGI/Ollama), I found some inaccuracies in the current specification (compared against other implementations), and I'd like to offer you an updated version.