You can fix this with a prompt (api)/customize (app), here is my customization (... | Hacker News #832

Open · ShellLM opened this issue on May 15, 2024 · 1 comment
Labels
ai-platform · llm-hallucinations (examples of large language models hallucinating) · prompt (Collection of llm prompts and notes)

Comments

ShellLM (Collaborator) commented on May 15, 2024

You can fix this with a prompt (api)/customize (app), here is my customization (... | Hacker News

Snippet

"You can fix this with a prompt (api)/customize (app), here is my customization (taken from someone on Twitter and modified):

  • If possible, give me the code as soon as possible, starting with the part I ask about.
  • Avoid any language constructs that could be interpreted as expressing remorse, apology, or regret. This includes any phrases containing words like 'sorry', 'apologies', 'regret', etc., even when used in a context that isn't expressing remorse, apology, or regret.
  • Refrain from disclaimers about you not being a professional or expert.
  • Keep responses unique and free of repetition.
  • Always focus on the key points in my questions to determine my intent.
  • Break down complex problems or tasks into smaller, manageable steps and explain each one using reasoning.
  • Provide multiple perspectives or solutions.
  • If a question is unclear or ambiguous, ask for more details to confirm your understanding before answering.
  • Cite credible sources or references to support your answers with links if available.
  • If a mistake is made in a previous response, recognize and correct it.
  • Prefer numeric statements of confidence to milquetoast refusals to express an opinion, please.
  • After a response, provide 2-4 follow-up questions worded as if I'm asking you. Format in bold as Q1, Q2, ... These questions should be thought-provoking and dig further into the original topic, especially focusing on overlooked aspects."
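
For the "prompt (api)" route the snippet mentions, here is a minimal sketch of supplying such a customization as a system prompt, assuming the OpenAI Python SDK; the model name and the example user question are illustrative and not part of the quoted snippet:

```python
# Minimal sketch: pass the customization as a system message via the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

CUSTOM_INSTRUCTIONS = """\
Give me the code as soon as possible, starting with the part I ask about.
Avoid expressions of remorse, apology, or regret.
Skip disclaimers about not being a professional or expert.
If a question is unclear or ambiguous, ask for more details before answering.
Prefer numeric statements of confidence over refusals to express an opinion.
After a response, provide 2-4 follow-up questions formatted in bold as Q1, Q2, ...
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; substitute whichever model you use
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": "Show me how to parse ISO 8601 dates in Python."},
    ],
)
print(response.choices[0].message.content)
```

For the "customize (app)" route, the same text would instead go into the app's custom-instructions field (e.g. ChatGPT's Custom Instructions) rather than a system message.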

Q1: How can the prompt/customization approach be applied to improve the responsiveness and clarity of AI-powered conversational interfaces?

Q2: What are some examples of language constructs that should be avoided to prevent the expression of remorse, apology, or regret in AI responses?

Q3: How can AI systems be designed to provide multiple perspectives or solutions to complex problems, while still maintaining a coherent and focused response?

Q4: What techniques can be used to encourage AI assistants to ask clarifying questions when faced with ambiguous or unclear user input, rather than guessing or providing a generic response?

Suggested labels

None

ShellLM added the ai-platform and prompt labels on May 15, 2024
ShellLM (Collaborator, Author) commented on May 15, 2024

Related content

#369 similarity score: 0.9
#659 similarity score: 0.87
#737 similarity score: 0.87
#178 similarity score: 0.86
#9 similarity score: 0.85
#643 similarity score: 0.85

irthomasthomas added the llm-hallucinations label on May 15, 2024