
Commit

Merge pull request #1 from lg810312/dev
Ready to release an improved alpha version with some new features
lg810312 committed May 7, 2023
2 parents 8eb2713 + 9fe2f58 commit 8f8dcf7
Showing 6 changed files with 222 additions and 36 deletions.
75 changes: 57 additions & 18 deletions OpenAIUI/Pages/Chat.razor
Original file line number Diff line number Diff line change
@@ -13,11 +13,23 @@

<h1>Chat</h1>

<ul style="width: 100%; max-height: 70vh; overflow-y: scroll">
<div>
Advanced Settings
<div style="cursor: pointer; display: inline-block" onclick="toggleAdvancedSettings(this)">&#11206;</div>
<div id="advancedSettings" data-display="0" style="display: none">
<div id="speechSynthesis">
<select></select>
</div>
</div>
</div>

<ul style="width: 100%; max-height: 70vh; padding-left: 0; overflow-y: scroll">
@foreach (var ChatEntry in ChatEntries)
{
<li style="width: 100%;">
<div class="chat-role chat-@(ChatEntry.Role)">@(ChatEntry.Role)</div>
<li style="width: 100%; margin-bottom: 0.5rem; display: flex">
<div class="chat-role chat-@(ChatEntry.Role)">
<img src="@($"https://api.multiavatar.com/{(ChatEntry.Role == "user" ? string.Empty : "ChatGPT")}{UserUUID}.png")" width="48" />
</div>
<div class="chat-content chat-@(ChatEntry.Role)">
<div id="@(Guid.NewGuid().ToString("N"))" class="markdown-raw">@(ChatEntry.Content)</div>
<div class="markdown-container"></div>
@@ -30,22 +42,28 @@

<div style="width: 100%">
<div style="color: red; display: @(ErrorShow ? "block": "none")">@ErrorMessage</div>
<EditForm Model="ChatInput">
<InputTextArea style="width: 80%" @bind-Value="ChatInput" @oninput="SetButton" />
<div style="width: 20%">
<button class="btn btn-primary" style="margin-bottom: 2rem" title="Send" disabled="@ChatDisabled" @onclick="async () => await AddChatEntry()">&#9992;</button>
<button class="btn btn-outline-secondary" style="margin-bottom: 2rem" title="Go to Top" onclick="document.querySelector('#markdown-is-rendering').parentNode.querySelector('ul').scrollTop = 0">↑</button>
<button class="btn btn-secondary" style="margin-bottom: 2rem" disabled="@ChatDisabled" @onclick="() => ClearChatEntries()">Clear</button>
<EditForm Model="ChatInput" style="display: flex">
<InputTextArea style="flex: 1 1 80%;" id="ChatInput" @bind-Value="ChatInput" @oninput="SetButton" />
<div style="width: 20%; margin-left: 0.5rem; flex: 1 1 20%">
<button class="btn btn-primary" style="margin-right: 0.5rem; margin-bottom: 0.5rem;" title="Send" disabled="@ChatDisabled" @onclick="async () => await AddChatEntry()">&#9992;</button>
<button class="btn btn-primary" style="margin-right: 0.5rem; margin-bottom: 0.5rem;" title="Voice Input" hidden="@(!SupportSpeechRecognition)" disabled="@(ChatIsRequesting || ChatIsRendering == 1)" onclick="startRecognition('ChatInput')">&#127908;</button>
<button class="btn btn-outline-secondary" style="margin-right: 0.5rem; margin-bottom: 0.5rem;" title="Go to Top" onclick="document.querySelector('#markdown-is-rendering').parentNode.querySelector('ul').scrollTop = 0">&#128285;</button>
<button class="btn btn-secondary" style="margin-right: 0.5rem; margin-bottom: 0.5rem;" disabled="@(ChatIsRequesting || ChatIsRendering == 1)" @onclick="() => ClearChatEntries()">&#128465;</button>
</div>
</EditForm>
</div>

@code {
private bool ErrorShow = false;
private string UserUUID = Guid.NewGuid().ToString("N");

private bool SupportSpeechRecognition { get; set; }
private bool SupportSpeechSynthesis { get; set; }

private bool ErrorShow;
private string ErrorMessage = string.Empty;

private bool ChatDisabled = true;
private bool ChatIsRequesting = false;
private bool ChatIsRequesting;
private int ChatIsRendering = 0;

private string ChatInput = string.Empty;
@@ -54,12 +72,14 @@

private const int ChatContextLength = 10;

protected override async Task OnInitializedAsync()
protected override async Task OnAfterRenderAsync(bool firstRender)
{
var ChatReference = DotNetObjectReference.Create(this);
_ = JSRuntime.InvokeVoidAsync("GLOBAL.SetDotnetReference", ChatReference);

await base.OnInitializedAsync();
if (firstRender)
{
var ChatReference = DotNetObjectReference.Create(this);
_ = JSRuntime.InvokeVoidAsync("GLOBAL.SetDotnetReference", ChatReference);
}
await base.OnAfterRenderAsync(firstRender);
}

private void SetButton(ChangeEventArgs e)
@@ -88,7 +108,7 @@
{
Model = OpenAIConfig.Value.APIType is APIType.Azure ? "gpt-35-turbo" : "gpt-3.5-turbo",
Messages = ChatEntries.Skip(ChatEntries.Count <= ChatContextLength ? 0 : ChatEntries.Count - ChatContextLength)
.Select(c => new ChatGPTMessage(c.Role, c.Content)).ToArray()
.Select(c => new ChatGPTMessage(c.Role, c.Content)).ToArray()
});

ChatEntry = (Completion.Choices[0].Message.Role, Completion.Choices[0].Message.Content);
@@ -105,7 +125,8 @@
}

ChatIsRequesting = false;
ChatIsRendering = 1;
if (!ErrorShow) ChatIsRendering = 1;
ChatDisabled = ChatIsRequesting || ChatIsRendering == 1 || ChatInput.Length == 0;
StateHasChanged();
}

@@ -115,6 +136,8 @@
ErrorMessage = string.Empty;

ChatEntries.Clear();

UserUUID = Guid.NewGuid().ToString("N");
}

[JSInvokable(nameof(UpdateChatRenderStatus))]
@@ -123,4 +146,20 @@
ChatIsRendering = ChatRenderStatus;
StateHasChanged();
}

[JSInvokable(nameof(RefreshStatus))]
public void RefreshStatus(string VoiceInput)
{
ChatInput = VoiceInput;
ChatDisabled = ChatIsRequesting || ChatIsRendering == 1 || (ChatInput).Length == 0;
StateHasChanged();
}

[JSInvokable(nameof(CheckAccessibility))]
public void CheckAccessibility(bool hasSpeechRecognition, bool hasSpeechSynthesis)
{
SupportSpeechRecognition = hasSpeechRecognition;
SupportSpeechSynthesis = hasSpeechSynthesis;
StateHasChanged();
}
}
2 changes: 1 addition & 1 deletion OpenAIUI/Pages/Index.razor
@@ -7,5 +7,5 @@
Welcome to ChatGPT

<p style="margin-top: 2rem">
Please click menu item on the left. Feel free to try!
Please click a menu item on the left/top. Feel free to try!
</p>
144 changes: 137 additions & 7 deletions OpenAIUI/Pages/_Host.cshtml
@@ -11,21 +11,25 @@
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.7.0/highlight.min.js" integrity="sha512-bgHRAiTjGrzHzLyKOnpFvaEpGzJet3z4tZnXGjpsCcqOnAH6VGUx9frc5bcIhKTVLEiCO6vEhNAgx5jtLUYrfA==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.7.0/styles/base16/material.min.css" integrity="sha512-FKzMeNkm8zqCguwqHyTYskFTD4L7WW5znImGuc+fYTIJGRpUWszuJLGh9Bq8smPaPzN0LtqagnRgihN53PL04A==" crossorigin="anonymous" referrerpolicy="no-referrer" />
<script language="javascript">
var GLOBAL = {};
GLOBAL.DotNetReference = null;
const GLOBAL = {}
GLOBAL.DotNetReference = null
GLOBAL.SetDotnetReference = function (pDotNetReference) {
GLOBAL.DotNetReference = pDotNetReference;
};
GLOBAL.DotNetReference = pDotNetReference
}
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition
const SpeechSynthesis = window.SpeechSynthesis || window.speechSynthesis
document.addEventListener("DOMContentLoaded", (event) => {
setTimeout(checkAccessibility, 2000)
document.addEventListener('DOMContentLoaded', (event) => {
setTimeout(renderMarkdownTimer, 3000)
})
let currentCharIndex = 0;
function afterRenderCompletion() {
if (GLOBAL.DotNetReference != null)
GLOBAL.DotNetReference.invokeMethodAsync('UpdateChatRenderStatus', 0)
GLOBAL.DotNetReference.invokeMethodAsync("UpdateChatRenderStatus", 0)
setTimeout(renderMarkdownTimer, 3000)
}
@@ -42,7 +46,7 @@
const currentText = markdownRaw.textContent.slice(0, currentCharIndex)
markdownContainer.innerHTML = marked.parse(currentText + "")
currentCharIndex++;
currentCharIndex++
if (currentCharIndex <= markdownRaw.textContent.length) {
const delayMax = 50
@@ -55,6 +59,13 @@
const markdownParent = markdownRaw.parentElement
markdownParent.innerHTML = markdownContainer.innerHTML.replace(//, "")
if (SpeechSynthesis) {
let speakElement = document.createElement("div")
markdownParent.previousElementSibling.appendChild(speakElement)
speakElement = markdownParent.previousElementSibling.lastElementChild
speakElement.outerHTML = '<div style="text-align: center; cursor: pointer" onclick="speak(this)">&#128362;</div>'
}
hljs.highlightAll()
afterRenderCompletion()
@@ -68,5 +79,124 @@
currentCharIndex = 0
renderNextChar()
}
function checkAccessibility() {
if (GLOBAL.DotNetReference != null)
GLOBAL.DotNetReference.invokeMethodAsync("CheckAccessibility", SpeechRecognition != null, SpeechSynthesis != null)
if (!SpeechSynthesis)
document.querySelector("#speechSynthesis").style = "display:none"
else {
voices = SpeechSynthesis.getVoices().sort(function (a, b) {
const aname = a.name.toUpperCase()
const bname = b.name.toUpperCase()
if (aname < bname)
return -1
else if (aname == bname)
return 0
else
return +1
})
const voiceSelect = document.querySelector("#speechSynthesis > select")
voiceSelect.innerHTML = ""
let options = [];
for (let i = 0; i < voices.length; i++) {
const option = document.createElement("option")
option.textContent = `${voices[i].name} (${voices[i].lang})`
if (voices[i].default) {
option.textContent += " -- DEFAULT"
option.selected = true
}
option.setAttribute("data-lang", voices[i].lang)
option.setAttribute("data-name", voices[i].name)
options.push(option)
}
options.sort((a, b) => {
const aAttr = a.dataset.lang
const bAttr = b.dataset.lang
if (aAttr < bAttr)
return -1
else if (aAttr > bAttr)
return 1
else
return 0
})
options.forEach((option) => voiceSelect.appendChild(option))
}
}
function startRecognition(voiceInputId) {
const recognition = new SpeechRecognition()
recognition.lang = navigator.language || navigator.userLanguage
recognition.interimResults = true;
recognition.maxAlternatives = 1;
recognition.onresult = function (event) {
const result = event.results[event.results.length - 1][0].transcript
document.querySelector("#" + voiceInputId).value = result
if (GLOBAL.DotNetReference != null)
GLOBAL.DotNetReference.invokeMethodAsync("RefreshStatus", result)
}
recognition.onnomatch = function (event) {
console.log("Cannot recognize any from voice input")
}
recognition.onerror = function (event) {
console.log("Error occurred in recognition: " + event.error)
}
recognition.start();
}
let voices = [];
function toggleAdvancedSettings(sender) {
const advancedSettings = document.querySelector("#advancedSettings")
advancedSettings.dataset.display = advancedSettings.dataset.display == 1 ? 0 : 1
advancedSettings.style = advancedSettings.dataset.display == 1 ? "" : "display: none"
sender.innerHTML = advancedSettings.dataset.display == 1 ? "&#9650;" : "&#9660;"
}
function speak(sender) {
if (!SpeechSynthesis) return
if (SpeechSynthesis.speaking) {
console.error("speechSynthesis.speaking")
return
}
const voiceSelect = document.querySelector("#speechSynthesis > select")
// get text content except code nodes
let textToSpeechNode = sender.parentElement.nextElementSibling.cloneNode(true)
const codeNodes = textToSpeechNode.querySelectorAll("code")
for (let i = 0; i < codeNodes.length; i++)
textToSpeechNode.removeChild(codeNodes[i].parentElement)
const textToSpeech = textToSpeechNode.textContent
textToSpeechNode = null
const utterThis = new SpeechSynthesisUtterance(textToSpeech)
utterThis.onend = function (event) {
console.log("SpeechSynthesisUtterance.onend")
};
utterThis.onerror = function (event) {
console.error("SpeechSynthesisUtterance.onerror")
};
const selectedOption = voiceSelect.selectedOptions[0].dataset.name
for (let i = 0; i < voices.length; i++)
if (voices[i].name === selectedOption) {
utterThis.voice = voices[i]
break
}
SpeechSynthesis.speak(utterThis);
}
</script>
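Stripped of its DOM wiring, the voice-picker logic in `checkAccessibility` above reduces to three passes: sort voices by case-insensitive name, build a label per voice (marking the browser default), then reorder the list by language code. The following sketch is illustrative only and not part of the commit; the helper name `buildVoiceLabels` is ours, and it assumes the standard `SpeechSynthesisVoice` shape (`name`, `lang`, `default`):

```javascript
// Illustrative sketch (not part of the commit): the option-building
// logic of checkAccessibility as a pure, testable function.
function buildVoiceLabels(voices) {
    // First pass: order voices alphabetically by case-insensitive name,
    // mirroring the getVoices().sort(...) comparator above.
    const byName = [...voices].sort((a, b) => {
        const aname = a.name.toUpperCase()
        const bname = b.name.toUpperCase()
        return aname < bname ? -1 : aname > bname ? 1 : 0
    })
    // Second pass: one entry per voice, flagging the browser default.
    const options = byName.map(v => ({
        lang: v.lang,
        label: `${v.name} (${v.lang})` + (v.default ? " -- DEFAULT" : ""),
        selected: Boolean(v.default),
    }))
    // Final pass: regroup by language code, as the options.sort(...) above does.
    options.sort((a, b) => (a.lang < b.lang ? -1 : a.lang > b.lang ? 1 : 0))
    return options
}
```

Because `Array.prototype.sort` is stable, voices that share a language stay name-sorted after the second sort, which is exactly what the two-pass sorting in the script relies on.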

2 changes: 1 addition & 1 deletion OpenAIUI/Shared/MainLayout.razor
@@ -9,7 +9,7 @@

<main>
<div class="top-row px-4">
<a href="javascript;" target="_blank">About</a>
<a href="javascript:void(0)">About</a>
</div>

<article class="content px-4">
14 changes: 8 additions & 6 deletions OpenAIUI/wwwroot/css/site.css
@@ -64,21 +64,23 @@ a, .btn-link {
}

.chat-role {
flex: 1 1 10%;
padding-right: 1rem;
flex: 1 1 5%;
padding: 0.5rem;
}

.chat-content {
flex: 1 1 90%;
padding-left: 1rem;
flex: 1 1 95%;
padding: 0.5rem;
}

.chat-user {
background-color: #eee;
background-color: #f3f3f3;
border: 1px solid #ddd;
}

.chat-assistant {
background-color: #ccc;
background-color: #e0e0e0;
border: 1px solid #ccc;
}

.markdown-raw {
21 changes: 18 additions & 3 deletions README.md
@@ -1,11 +1,13 @@
# HopLind
OpenAI SDK and WebAPI for OpenAI and Azure OpenAI Service

The SDK is developed using .net and supports both OpenAI and Azure OpenAI Service.
The SDK has been created using .NET and supports both OpenAI and Azure OpenAI Service. At this time, the SDK specifically targets GPT and includes wrapped APIs for embedding and completion. Because GPT-3.5 is more powerful and easier to use than older completion models, only GPT-3.5 is currently supported; GPT-4 support is on the roadmap. These APIs are designed to allow effortless switching between the two SaaS providers without any code changes.

The WebAPI and Blazor are both based on the SDK and call the OpenAI API at the backend depending on the API configuration in appsettings.json.
The WebAPI and Blazor Server projects are both based on the SDK and call the OpenAI API at the backend depending on the API configuration in appsettings.json.

***API configuration example:***

**OpenAI**
```json
"OpenAI": {
"APIType": "openai",
@@ -15,4 +17,17 @@ The WebAPI and Blazor are both based on the b
}
```

The SDK, WebAPI, and Blazor projects are currently under development and in alpha testing. As a result, there may be breaking changes.
**Azure**
```json
"OpenAI": {
"APIType": "azure",
"APIBase": "https://YOUR-SERVICE.openai.azure.com/",
"APIKey": "YOUR API KEY",
"APIVersion": "2023-03-15-preview"
}
```
The Azure OpenAI API does not support specifying a model at request time; a model must be deployed before use. To switch seamlessly between OpenAI and Azure, each deployment name should be the same as its model name.
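The naming rule above can be illustrated with a small hypothetical helper (not the SDK's actual code) that derives the chat-completion endpoint from the same configuration keys, using the publicly documented endpoint shapes of the two services:

```javascript
// Hypothetical illustration: derive the chat-completion endpoint from an
// appsettings-style config. Azure routes requests by deployment name, so
// keeping deployment name == model name lets one `model` string drive
// both providers unchanged.
function buildChatUrl(config, model) {
    if (config.APIType === "azure") {
        // Azure: model appears in the URL path as the deployment name,
        // and the api-version query parameter is mandatory.
        return `${config.APIBase.replace(/\/$/, "")}/openai/deployments/${model}` +
               `/chat/completions?api-version=${config.APIVersion}`
    }
    // Plain OpenAI: one shared endpoint; the model goes in the request body.
    return "https://api.openai.com/v1/chat/completions"
}
```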

You may deploy either WebAPI or Blazor Server depending on your requirements. You can also use them as templates or utilize the SDK to build applications from scratch.

**Note:** The SDK, WebAPI, and Blazor projects are currently under development and in alpha testing. As a result, there may be breaking changes.
