
AI generated content in markdown is messy #58

Closed
jt6677 opened this issue Jun 28, 2023 · 11 comments

Comments

@jt6677

jt6677 commented Jun 28, 2023

OpenAI's response does not play nicely with the new markdown feature. My guess is you need to force the response to be plain text.

@steven-tey
Owner

Yeahhh I noticed that too – might need to coerce that to plaintext for now :meow_disappointed:

@jt6677
Author

jt6677 commented Jun 28, 2023

That would have issues too. Previously, the GPT response would have nice formatting, but any editing of the response would cause the text to lose all the line breaks. I could not find a solution to that problem at the moment.

@steven-tey
Owner

Removed markdown formatting from the prompt for now until we figure out a solution for this: 97b0c49

@jt6677
Author

jt6677 commented Jun 28, 2023

I will try to tackle this problem on Thursday. My hacky way to preserve line breaks and formatting previously was to put the text in a blockquote in `onFinish`:

    onFinish: (_prompt, completion) => {
      if (!editor) return
      // The completion was streamed in ending at the current selection,
      // so it spans the last `completion.length` characters before it
      const from = editor.state.selection.from - completion.length
      const to = editor.state.selection.from
      const text = editor.state.doc.textBetween(from, to)
      // Replace the raw streamed text with a single blockquoted paragraph
      editor
        .chain()
        .deleteRange({ from, to })
        .insertContent({ type: 'paragraph', content: [{ type: 'text', text }] })
        .setBlockquote()
        .run()
    },

@steven-tey
Owner

@jt6677 yeah that's smart! The root of the issue is that streamed responses don't play well with the way they're converted into markdown by tiptap-markdown – the maintainer of the lib, @aguingand, confirmed that as well!

Setting everything with setContent when the response finishes streaming would work, but it becomes a blocking response and defeats the purpose of streaming: #7 (reply in thread)

@jt6677
Author

jt6677 commented Jun 29, 2023

@steven-tey I came close to a solution.

// temporarily escape markdown syntax as the stream comes in
useEffect(() => {
  const diff = completion.slice(prev.current.length);
  prev.current = completion;

  // Escape markdown control characters (`-` must sit at the end of the
  // character class so it is a literal dash, not an accidental range)
  const escapedDiff = diff.replace(/([\\#*_{}[\]()`~>+.!|-])/g, '\\$1');

  // Replace the content in the editor with the escaped markdown
  editor?.commands.setContent(escapedDiff, false);
}, [isLoading, editor, completion]);

// parse it again onFinish

onFinish: (_prompt, completion) => {
  if (!editor) return
  const from = editor.state.selection.from - completion.length
  const to = editor.state.selection.from

  // Replace the current content in the editor with the completion,
  // parsed as Markdown
  editor.commands.setContent(completion, false);
},

The problem is that setContent works but insertContent does not, and setContent replaces everything. Maybe someone with better knowledge of tiptap would know what to do here.
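The escaping step from the snippet above can be pulled out into small pure helpers. This is only a sketch – `escapeMarkdown` and `diffChunk` are hypothetical names, not part of tiptap or this repo:

```typescript
// Escapes Markdown control characters so a streamed chunk renders as
// literal text while the stream is still in flight. `-` sits at the end
// of the character class so it is a literal dash, not a range.
const MARKDOWN_SPECIALS = /([\\`*_{}[\]()#+.!|>~-])/g;

export function escapeMarkdown(text: string): string {
  return text.replace(MARKDOWN_SPECIALS, "\\$1");
}

// Computes the newly streamed chunk, mirroring `completion.slice(prev.current.length)`.
export function diffChunk(prev: string, completion: string): string {
  return completion.slice(prev.length);
}
```

Keeping these pure makes the escaping behavior easy to unit-test separately from the editor.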

@peterokwara

@jt6677 I ended up doing the same thing.
I stream the content, then set it once the streaming finishes. It will work for now.
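A minimal sketch of that strategy, with a stand-in `EditorLike` interface (real code would call tiptap's `editor.commands.setContent`; `streamThenSet` is a hypothetical helper, not an API from this repo):

```typescript
// Stand-in for the tiptap editor: only the call shape needed here.
interface EditorLike {
  setContent(content: string, emitUpdate: boolean): void;
}

// Show the raw accumulated text while streaming, then set the full
// completion once at the end so it is parsed as Markdown in one pass.
export function streamThenSet(editor: EditorLike, chunks: string[]): string {
  let buffer = "";
  for (const chunk of chunks) {
    buffer += chunk;
    editor.setContent(buffer, false); // raw text during the stream
  }
  editor.setContent(buffer, true); // final set: parse as Markdown
  return buffer;
}
```

The trade-off is the one noted earlier in the thread: the reader only sees properly formatted Markdown after the stream completes.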

@jt6677
Author

jt6677 commented Jul 7, 2023

Yeah, tiptap-markdown does not play nicely with streamed responses. You either need to escape all the markdown syntax while streaming and then set it again, or just not use markdown at all. I deleted the markdown plugin because the markdown was just too unpredictable.

comolove added a commit to comolove/novel that referenced this issue Nov 18, 2023
@mixasite

mixasite commented Dec 4, 2023

@steven-tey, really appreciate your work. From Vercel's Platforms Starter Kit to this awesome UI editor!

Formatting gets broken as the generated markdown text streams into the editor. The solution is to replace the whole content iteratively instead of just inserting the diff.

so instead of

  const prev = useRef("");

  // Insert chunks of the generated text
  useEffect(() => {
    const diff = completion.slice(prev.current.length);
    prev.current = completion;
    editor?.commands.insertContent(diff);
  }, [isLoading, editor, completion]);

remove the last appended completion and append the whole new completion again, letting the editor fix the formatting:

const prev = useRef("");
useEffect(() => {
  // reset prev when `complete` is called again
  if (prev.current.length > completion.length) {
    prev.current = "";
  }
  editor?.chain()
    .deleteRange({
      from: editor.state.selection.from - prev.current.length,
      to: editor.state.selection.from,
    })
    .insertContent(completion)
    .run();
  // remember what was inserted so the next pass can delete it
  prev.current = completion;
}, [editor, completion]);

@haydenbleasel
Contributor

@andrewdoro This issue is likely resolved by the new AI implementation, since it's not streamed into the editor directly.

@andrewdoro
Collaborator

@haydenbleasel Yep, thanks for the tag here. The new AI implementation works with Markdown. We no longer have a working example for the old `++` implementation where we streamed directly into the editor (if someone wants to work on that, the current AI code might help).

I also recommend the new approach. This is what Notion also uses.
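That approach can be illustrated with a tiny accumulator that never touches the editor until streaming completes (a sketch only; `CompletionBuffer` is a hypothetical name, not part of novel's API):

```typescript
// Accumulates streamed chunks outside the editor. The editor only ever
// sees the finished Markdown string, so partial syntax (an unclosed
// `**`, a half-typed heading) is never parsed mid-stream.
export class CompletionBuffer {
  private parts: string[] = [];

  push(chunk: string): void {
    this.parts.push(chunk);
  }

  // Call once the stream ends; hand the result to the editor as Markdown.
  finish(): string {
    return this.parts.join("");
  }
}
```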
