Communicating with GPT4All with C# through HTTP POST JSON #2398
stradiotto started this conversation in Bindings
Hi
I'm trying to communicate from Unity C# with GPT4All through HTTP POST with a JSON body.
I started GPT4All, then downloaded and chose the LLM (Llama 3).
In GPT4All I enabled the API server.
I started a first dialogue in the GPT4All app, and the bot answers my questions.
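Enabling the API server starts a local HTTP server on port 4891 (the default, and the port used in my code below). As a quick sanity check, and assuming the server exposes the usual OpenAI-style GET /v1/models endpoint (an assumption, not something from my logs), a standalone C# snippet like this should list the models it serves:
using System;
using System.IO;
using System.Net;
// Hypothetical sanity check, not part of my Unity project: prints whatever the
// GPT4All API server returns for GET /v1/models, if that endpoint is available.
class CheckGPT4AllServer
{
    static void Main()
    {
        HttpWebRequest request =
            (HttpWebRequest)WebRequest.Create("http://localhost:4891/v1/models");
        request.Method = "GET";
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}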
In Unity 2023, I wrote the following code for a component (note that I'm using TotalJSON, which converts class instances to and from JSON):
5.1) In LLMJSONAux.cs
// One chat message in the OpenAI-style request/response format.
public class Message
{
    public string role;
    public string content;
}

// Request body for POST /v1/chat/completions.
public class HttpJSONQuery
{
    public string model = "";
    public Message[] messages;
    public float temperature = .7f;
    public int max_tokens = -1;
    public bool stream = false;
}

// One completion choice returned by the server.
public class Choice
{
    public int index;
    public Message message;
    public string finish_reason;
}

// Token accounting returned by the server.
public class Usage
{
    public int prompt_tokens;
    public int completion_tokens;
    public int total_tokens;
}

// Response body of POST /v1/chat/completions.
public class HttpJSONResponse
{
    public string id;
    //public string _object; // "object" is a C# keyword, so the JSON "object" field is left out here
    public int created;
    public string model;
    public Choice[] choices;
    public Usage usage;
}
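To show how these classes line up with the request and response JSON, here is a minimal sketch. It uses Unity's built-in JsonUtility purely for illustration (my component uses TotalJSON instead), it assumes [System.Serializable] has been added to the classes above (JsonUtility only serializes public fields of serializable types), and the helper class and method names are hypothetical:
using UnityEngine;
// Hypothetical helper, not part of my project: builds the request body and pulls
// the first reply out of a response, using JsonUtility as a stand-in for TotalJSON.
public static class LLMJSONAuxExample
{
    public static string BuildRequestJson()
    {
        HttpJSONQuery query = new HttpJSONQuery
        {
            model = "Meta-Llama-3-8B-Instruct.Q4_0.gguf",
            messages = new Message[]
            {
                new Message { role = "system", content = "You are a devilish person." },
                new Message { role = "user",   content = "Tell me another joke" }
            },
            temperature = 0.7f,
            max_tokens = -1,
            stream = false
        };
        // Requires HttpJSONQuery and Message to be marked [System.Serializable].
        return JsonUtility.ToJson(query);
    }

    public static string ExtractFirstReply(string responseJson)
    {
        HttpJSONResponse response = JsonUtility.FromJson<HttpJSONResponse>(responseJson);
        if (response == null || response.choices == null || response.choices.Length == 0)
            return "";
        return response.choices[0].message.content;
    }
}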
5.2) In GPT4AllHTTPPOST.cs
using Leguar.TotalJSON;
using System.IO;
using System.Net;
using UnityEngine;
// https://www.xoborg.com/blog/configuracion-de-gpt4all-y-localai
public class GPT4AllHTTPPOST : MonoBehaviour
{
    void Start()
    {
        testGPT4All();
    }

    void testGPT4All()
    {
        HttpWebRequest httpWebRequest =
            (HttpWebRequest)WebRequest.Create("http://localhost:4891/v1/chat/completions");
        // ... rest of the method omitted here: it builds the JSON body, posts it,
        // and prints the request and response shown in the Console logs below
    }
}
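For completeness, here is one way the fragment above could be fleshed out into a full request/response round trip. This is only a sketch, not my actual component: the class name is hypothetical, the JSON body is hard-coded, and the synchronous HttpWebRequest calls block the Unity main thread (a coroutine or UnityWebRequest would be preferable in a real project). The endpoint, model name, and payload fields are the ones shown in my logs below.
using System.IO;
using System.Net;
using System.Text;
using UnityEngine;
// Hypothetical variant of GPT4AllHTTPPOST: posts a hard-coded, non-streaming
// chat-completions request to the GPT4All API server and prints the raw response.
public class GPT4AllHTTPPOSTSketch : MonoBehaviour
{
    void Start()
    {
        // Non-streaming request so the whole response can be read in one go.
        string jsonBody =
            "{\"model\":\"Meta-Llama-3-8B-Instruct.Q4_0.gguf\"," +
            "\"messages\":[" +
            "{\"role\":\"system\",\"content\":\"You are a devilish person.\"}," +
            "{\"role\":\"user\",\"content\":\"Tell me another joke\"}]," +
            "\"temperature\":0.7,\"max_tokens\":-1,\"stream\":false}";

        HttpWebRequest request =
            (HttpWebRequest)WebRequest.Create("http://localhost:4891/v1/chat/completions");
        request.Method = "POST";
        request.ContentType = "application/json";

        // Write the JSON body to the request stream.
        byte[] bodyBytes = Encoding.UTF8.GetBytes(jsonBody);
        request.ContentLength = bodyBytes.Length;
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(bodyBytes, 0, bodyBytes.Length);
        }

        // Read and print the server's full JSON response.
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            print(reader.ReadToEnd());
        }
    }
}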
I put the GPT4AllHTTPPOST component on a game object and ran the Unity application I just made.
The Console logs are these:
7.1) The JSON request that was generated and sent:
{
    "model": "Meta-Llama-3-8B-Instruct.Q4_0.gguf",
    "messages": [
        {
            "role": "system",
            "content": "You are a devilish person."
        },
        {
            "role": "user",
            "content": "Tell me another joke"
        }
    ],
    "temperature": 0.7,
    "max_tokens": -1,
    "stream": true
}
UnityEngine.MonoBehaviour:print (object)
GPT4AllHTTPPOST:testGPT4All () (at Assets/_GAME/__Prototypes/LLM Studio/GPT4AllHTTPPOST.cs:49)
GPT4AllHTTPPOST:Start () (at Assets/_GAME/__Prototypes/LLM Studio/GPT4AllHTTPPOST.cs:67)
7.2) The GPT4All JSON response:
{"choices":[{"finish_reason":"stop","index":0,"message":{"content":"","role":"assistant"},"references":[]}],"created":1717310181,"id":"foobarbaz","model":"Llama 3 Instruct","object":"text_completion","usage":{"completion_tokens":0,"prompt_tokens":25,"total_tokens":25}}
UnityEngine.MonoBehaviour:print (object)
GPT4AllHTTPPOST:testGPT4All () (at Assets/_GAME/__Prototypes/LLM Studio/GPT4AllHTTPPOST.cs:61)
GPT4AllHTTPPOST:Start () (at Assets/_GAME/__Prototypes/LLM Studio/GPT4AllHTTPPOST.cs:67)
I send the same request to LM Studio, and LM Studio answers this Unity prototype correctly, responding to my query ("content": "Tell me another joke").
So, what part of my JSON request am I missing here for GPT4All?
PS: I tried to find a Nomic AI e-mail address with no success, but from their web page I found their Discord server and this GitHub discussion list. I also searched for similar discussions and found nothing about this under GPT4All bindings. If such a discussion already exists, my search was incomplete, and I apologize for repeating the same request here.
Thanks in advance.