
[Enhancement] - Using different prompts to signify which model to choose is strange. #3

Closed
IIvexII opened this issue Mar 24, 2023 · 3 comments


IIvexII commented Mar 24, 2023

You could use a single prefix and let the bot automatically choose which model best suits the input text.

Input

!bot Generate an image of a black cat with light green eyes

Note: the bot should choose DALL-E automatically and generate an image.

Output

Input

!bot what is HTTP?

Note: the bot should choose ChatGPT automatically.

Output

HTTP stands for Hypertext Transfer Protocol, which is a protocol used to transfer data over the internet. It is a standard application layer protocol that defines how data is transmitted between web servers and web browsers.

@Zain-ul-din Zain-ul-din added enhancement New feature or request good first issue Good for newcomers labels Mar 24, 2023
Zain-ul-din (Owner) commented Mar 24, 2023

You could create a custom model to achieve this functionality.

Creating a Custom Model

To create a custom model, add a new entry in models.Custom. The modelName field specifies the model's name, and the prefix field specifies the prefix the model responds to. The enable field determines whether the model is enabled, and the context field specifies the context the model uses to generate responses. The context can be a string of text, a file path, or a URL.

  • modelName: a string naming your custom model
  • prefix: the prefix a message must start with to get a reply from your custom model
  • enable: a boolean that enables or disables your custom model
  • context: the context of your custom model, given as one of the following:
    • "your_context": the context text itself
    • "path to file (.md,.txt)": the path to a file containing the context
    • "url": the URL of a website containing the context
{
    modelName: "your_model_name",
    prefix: "!your_prefix", 
    enable: true, 
    context: "your_context" | "path to file (.md,.txt)" | "url",
}
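To make the prefix field concrete, here is a hypothetical sketch (the names `models` and `resolveModel` are assumptions, not this repo's actual code) of how a dispatcher could pick the model whose prefix a message starts with:

```javascript
// Hypothetical model registry; only the fields relevant to routing are shown.
const models = [
  { modelName: 'DALL-E', prefix: '!dalle', enable: true },
  { modelName: 'ChatGPT', prefix: '!chatgpt', enable: true },
];

// Return the enabled model whose prefix the message starts with, or null.
function resolveModel(message) {
  return (
    models.find((m) => m.enable && message.startsWith(m.prefix + ' ')) || null
  );
}

console.log(resolveModel('!dalle a black cat').modelName); // "DALL-E"
```

The real dispatcher may differ, but any prefix-based scheme reduces to a lookup like this.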

Test your model

  • run the server with yarn dev
  • type a message starting with !your_prefix.

Demo

  • Name your model; let's call it bot.
  • Add the prefix !bot.
  • Create a bot.md file in the static folder.
  • Add the context path: context: "./static/whatsapp-ai-bot.md"
  • Set enable to true.
  • Add the following context:
Hey GPT, if the provided question seems to be asking for an image, return the question with the prefix !dalle; otherwise return the question with the prefix !chatgpt.

Examples:

question: 
   Generate an image of a black cat with light green eyes
   you should return =>   !dalle Generate an image of a black cat with light green eyes
 
question: 
  what is HTTP?
  you should return => !chatgpt what is HTTP?
  
Example End.
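Since the custom model's reply is just text that should begin with !dalle or !chatgpt, the bot still has to parse that reply and forward it. A minimal sketch, assuming a helper named routeReply (hypothetical, not from this repo):

```javascript
// Parse the custom model's reply, which per the context above should start
// with !dalle or !chatgpt, and split it into a target prefix and the question.
function routeReply(reply) {
  const match = reply.trim().match(/^(!dalle|!chatgpt)\s+([\s\S]+)/);
  if (!match) return null; // reply did not follow the expected format
  return { prefix: match[1], question: match[2] };
}

console.log(routeReply('!dalle Generate an image of a black cat'));
// { prefix: '!dalle', question: 'Generate an image of a black cat' }
```

Returning null on a malformed reply matters here: the LLM is not guaranteed to follow the instructed format every time.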
  

Results

(screenshots: the image prompt routed to DALL-E, the HTTP question routed to ChatGPT)

Note: although this approach will work, it adds overhead. See the diagram below for how the process flows.

(diagram: custom-model request flow)

See more about how the custom model works under the hood


Apart from that, achieving this kind of functionality would require an NLP model on the server side, which may slow down responses.
Note: none of the models used in this bot are free, so any mistake may cost money.
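One cheap alternative, sketched here as an assumption rather than anything this repo ships: classify locally with a keyword heuristic, so routing costs no extra API call and no money. A trained NLP model would be more robust than this keyword list.

```javascript
// Hypothetical local classifier: if the question mentions image-like words,
// route it to DALL-E, otherwise to ChatGPT. Runs entirely on the server,
// with no extra model round trip.
const IMAGE_HINTS = ['image', 'draw', 'picture', 'photo', 'illustration'];

function pickModel(question) {
  const q = question.toLowerCase();
  return IMAGE_HINTS.some((hint) => q.includes(hint)) ? '!dalle' : '!chatgpt';
}

console.log(pickModel('Generate an image of a black cat')); // "!dalle"
console.log(pickModel('what is HTTP?')); // "!chatgpt"
```

The trade-off is accuracy: a heuristic like this will misroute phrasings it has never seen, which is exactly the gap an NLP model would close.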

In the future, I may use a custom TensorFlow NLP model.

Another option: Google is releasing a new AI model, Bard, which can respond to messages with a latency of seconds.

Bard Overview

@Zain-ul-din Zain-ul-din added question Further information is requested and removed good first issue Good for newcomers labels Mar 24, 2023
@Zain-ul-din Zain-ul-din self-assigned this Mar 24, 2023
@Zain-ul-din Zain-ul-din added documentation Improvements or additions to documentation good first issue Good for newcomers and removed enhancement New feature or request labels Mar 24, 2023
@ZeanArd
Copy link

ZeanArd commented Jul 1, 2024

error /root/WhatsApp-Ai-bot/node_modules/whatsapp-web.js/node_modules/puppeteer: Command failed.
Exit code: 1
Command: node install.js
Arguments:
Directory: /root/WhatsApp-Ai-bot/node_modules/whatsapp-web.js/node_modules/puppeteer
Output:
The chromium binary is not available for arm64.
If you are on Ubuntu, you can install with:

sudo apt install chromium

sudo apt install chromium-browser

/root/WhatsApp-Ai-bot/node_modules/whatsapp-web.js/node_modules/puppeteer/lib/cjs/puppeteer/node/BrowserFetcher.js:119
throw new Error();
^

Error
at /root/WhatsApp-Ai-bot/node_modules/whatsapp-web.js/node_modules/puppeteer/lib/cjs/puppeteer/node/BrowserFetcher.js:119:27
at FSReqCallback.oncomplete (node:fs:198:21)

Node.js v22.3.

How do I solve this?
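For reference, one common workaround for the missing arm64 Chromium (untested here; option names depend on your puppeteer and whatsapp-web.js versions) is to install Chromium via apt as the error message suggests, set PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true before installing dependencies, and point the client at the system binary:

```javascript
// Assumes: `sudo apt install chromium-browser` succeeded, and
// PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true was exported before `yarn install`
// so puppeteer skips downloading its bundled (x64-only) Chromium.
const { Client } = require('whatsapp-web.js');

const client = new Client({
  puppeteer: {
    // Use the system Chromium instead of puppeteer's bundled binary.
    executablePath: '/usr/bin/chromium-browser', // or /usr/bin/chromium
  },
});
```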

Zain-ul-din (Owner) commented

Moved this discussion here.
