
Some questions I had: #22

Closed
Devansh-kajve opened this issue Oct 23, 2023 · 9 comments
Labels
stale The issue is stale and will be closed soon

Comments

@Devansh-kajve

  • Why am I getting this error even though the generation is working as it should?

[Screenshot of the error attached]

  • For the text adventure module, how can I control the 'do' or 'say' options? There seems to be no such parameter; there's only prompt.

  • Is there an example of managing context size that I can look at? I wanted to see how to delete context line by line. Currently I have my prompt as just plain text that I append to, as you suggested; should I make it something like an array of sentences instead, so that I can remove lines by index?

  • How does the Memory context that you provide in the NovelAI web app work? Is it any different?

@Aedial
Owner

Aedial commented Oct 23, 2023

So.

  1. That error comes from asyncio. You might be doing something wrong somewhere, or it could be normal if it only appears when the program exits (a known asyncio issue).

  2. Do and Say are shortcuts to '>You ...' and '>You say "..."'. I don't remember the exact behavior, but you can see it easily in the context window when using the website (right panel, Advanced, Current Context).

  3. Yes, you can have something like an array for Memory, one for Story, and one for AN. Then remove the first lines of Story until Memory + Story + AN fits in the context budget (max context size minus generation size, minus 20 if generate_until_sentence is enabled). That's the easiest way to do context building.

  4. On the website? The website handles the context building as a whole before sending it to the API (the API knows nothing about the story outside of the context it receives), and its complexity is unfathomable. Most of that complexity comes from Lorebook handling: what to cut, when, and how. I have tried to replicate this behavior (still WIP), but it's a daunting task and will never be full-featured.
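The trimming loop from point 3 can be sketched in Python. This is a minimal illustration, not library code: `count_tokens` here is a crude word-count stand-in, and in practice you would use the tokenizer that matches your model.

```python
# Minimal sketch of the context-building loop from point 3.
# NOTE: count_tokens is a crude stand-in; use a real tokenizer in practice.
def count_tokens(text: str) -> int:
    return len(text.split())

def build_context(memory, story, an, max_context, gen_size, until_sentence=True):
    """Drop the oldest Story lines until Memory + Story + AN fits the budget."""
    budget = max_context - gen_size - (20 if until_sentence else 0)
    story = list(story)  # oldest lines first; copy so we can pop safely
    while story and count_tokens("\n".join(memory + story + an)) > budget:
        story.pop(0)  # discard the oldest Story line
    return "\n".join(memory + story + an)
```

The function names and signature are hypothetical; only the budget formula (max context size minus generation size, minus 20 when generate_until_sentence is on) comes from the answer above.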

@Devansh-kajve
Author

Thanks for answering. I'm not getting good responses from the text adventure module, so I probably won't end up using it.


This issue is stale because it has been open for 30 days with no activity. It will be closed if no activity happens within the next 7 days.

@github-actions github-actions bot added the stale The issue is stale and will be closed soon label Nov 25, 2023
@Fensterbank
Contributor

Thank you for your response on this. I thought I could revive this thread with some follow-up.

So it is true, then, that the API itself doesn't know anything about Memory, Author's Note, etc. It's just the web UI placing these text snippets at specific positions in the prompt, which is one big string.

try to remove the first lines of Story until Memory + Story + AN fits in context (max context size - generation size - 20 if generate_until_sentence is enabled)

Right now I'm using api.high_level.generate, where I pass the prompt string. I saw that the method also accepts a token array.
So a good approach (in my case, since I don't have an Author's Note) would be:

  • Tokenize the Memory as memory[int]
  • Tokenize the Story as story[int]
  • Remove items from the start of story[int] so that len(memory[int]) + len(story[int]) <= 8192, given my 8192-token limit.
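The steps above could be sketched as follows. `trim_to_limit` is a hypothetical helper name; the tokenization itself is out of scope here (the novelai-api package ships a tokenizer, but the exact call is not shown), so this assumes you already have the Memory and Story as lists of token ids.

```python
# Sketch of the token-array trimming described in the list above.
def trim_to_limit(memory_tokens: list, story_tokens: list, limit: int = 8192) -> list:
    """Drop the oldest story tokens so the combined length fits the limit."""
    overflow = len(memory_tokens) + len(story_tokens) - limit
    if overflow > 0:
        story_tokens = story_tokens[overflow:]  # cut from the start (oldest)
    return memory_tokens + story_tokens
```

One caveat with this granularity: cutting individual tokens can slice mid-word or mid-sentence, which is why coarser trimming (by paragraph) is often cleaner.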

Is that approach correct?

@Aedial
Owner

Aedial commented Nov 26, 2023

Yes. I would even recommend trimming the story by paragraph rather than by sentence, word, or token, getting rid of the oldest first.
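Paragraph-level trimming might look like the following sketch, assuming paragraphs are separated by blank lines and `count_tokens` is whatever token counter you use (the helper name is illustrative):

```python
# One way to trim by whole paragraphs, oldest first, as suggested above.
def trim_story_by_paragraph(story: str, memory: str, budget: int, count_tokens) -> str:
    """Drop leading paragraphs until memory + story fits the token budget."""
    paragraphs = story.split("\n\n")
    while len(paragraphs) > 1 and \
            count_tokens(memory) + count_tokens("\n\n".join(paragraphs)) > budget:
        paragraphs.pop(0)  # remove the oldest paragraph first
    return "\n\n".join(paragraphs)
```

Keeping at least one paragraph (the `len(paragraphs) > 1` guard) avoids sending an empty story when the budget is very tight.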

@Aedial Aedial removed the stale The issue is stale and will be closed soon label Nov 26, 2023
@Devansh-kajve
Author

Devansh-kajve commented Dec 1, 2023

I think I can use that approach. I'm currently just storing the string in a .txt file and reading it from there before using it as the prompt.

Do you know the specific place where the web UI inserts the Memory string? I want to place the [Title: ; Tags: ; Genre ;] line that you usually put in Memory without hampering the generation.

Also, is there a way to make the responses follow a specific structure? For example, if I want to generate an item, can I have it generated as "Name: '', class: '', description: ''"? Or a specific line like "Health: +20"? I want to read these strings and update the database in my game (inventory, stats, etc.) accordingly.

@Aedial
Owner

Aedial commented Dec 3, 2023

Do you know the specific place where the web UI inserts the Memory string? I want to place the [Title: ; Tags: ; Genre ;] line that you usually put in Memory without hampering the generation.

Memory goes at the very top, the AN 3 lines from the bottom, and Lorebook entries depend on their insertion order and position.
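As a rough sketch of that placement (this mirrors the description above, not the web UI's actual code; `assemble_context` is an illustrative name):

```python
# Memory at the very top, Author's Note inserted 3 lines from the bottom.
def assemble_context(memory: str, story_lines: list, authors_note: str) -> str:
    lines = list(story_lines)  # copy so the caller's list is untouched
    insert_at = max(len(lines) - 3, 0)  # 3 lines from the bottom
    lines.insert(insert_at, authors_note)
    return "\n".join([memory] + lines)
```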

Also, is there a way to make the responses follow a specific structure? For example, if I want to generate an item, can I have it generated as "Name: '', class: '', description: ''"? Or a specific line like "Health: +20"? I want to read these strings and update the database in my game (inventory, stats, etc.) accordingly.

You could use a "generator": a few-shot example of what you want (there are examples in the official prompts). However, NAI is not very good at math or at keeping track of state such as stats, so I would recommend handling that externally.
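A hedged sketch of the "generator" idea: a few-shot prompt that shows the model the exact item format, plus external parsing of the completion. The field names and the `parse_item` helper are illustrative, not taken from any official prompt.

```python
# Few-shot prompt establishing the format; the model continues after "Name:".
FEW_SHOT = """\
Name: Rusty Sword, class: Weapon, description: A worn blade that has seen better days.
Name: Healing Salve, class: Consumable, description: Restores a small amount of health.
Name:"""

def parse_item(line: str) -> dict:
    """Parse one generated line back into fields to update game state externally."""
    name, rest = line.split(", class: ", 1)
    cls, desc = rest.split(", description: ", 1)
    return {"name": name.strip(), "class": cls.strip(), "description": desc.strip()}
```

The parsing (and any stat updates like "Health: +20") stays in your own code, which matches the advice above to track state externally rather than trusting the model with it.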


github-actions bot commented Jan 2, 2024

This issue is stale because it has been open for 30 days with no activity. It will be closed if no activity happens within the next 7 days.

@github-actions github-actions bot added the stale The issue is stale and will be closed soon label Jan 2, 2024

This issue was closed because it has been stale for 7 days with no activity. Reopen it if relevant, or open a new issue; further discussion on closed issues might not be seen.
