fix: Speed up stripping of markdown #2097
Conversation
This is a great find – any idea if it's certain keywords, and if so, which? Edit: Seems like it's this – stiang/remove-markdown#35 – so a large number of spaces in a searched document would trigger it.
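The pathology described above is catastrophic regex backtracking (ReDoS). A minimal sketch of the failure mode, using an illustrative nested-quantifier pattern rather than the library's actual regex:

```javascript
// Sketch of the kind of ReDoS-prone pattern behind the remove-markdown
// issue (stiang/remove-markdown#35): a nested quantifier over whitespace.
// This is an illustrative pattern, not the library's actual regex.
const vulnerable = /^(\s+)*$/;

// On a run of spaces followed by a non-space, the match must fail, and a
// backtracking engine explores exponentially many ways to split the spaces.
const input = " ".repeat(18) + "x"; // kept short so this demo terminates
console.log(vulnerable.test(input)); // false, but cost roughly doubles per extra space

// A linear-time equivalent avoids the nested quantifier entirely:
const safe = /^\s*$/;
console.log(safe.test(input)); // false, in linear time
```

This is why a document containing many consecutive spaces could stall the server: matching time grows exponentially with the length of the space run.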
Ooh, I thought it was just because of big documents. Updating your fork would be a better fix!
I'll pull in the fix from the other repo that hasn't been merged 🙄 – you're right, I think less churn would be good here, along with being able to retain the
And I guess the HTML that needs to be whitelisted is for emphasizing the matching terms?
That's right – pg returns html tags for that, lol
Got it! Closing this then. Thank you!
… many space characters. See #2097 and https://snyk.io/vuln/SNYK-JS-REMOVEMARKDOWN-73635
By the way, I suppose this was also the issue you were seeing with timeouts when searching from Slack on the cloud-hosted version.
Yep! Most probably.
We were encountering huge CPU spikes that would cause our Outline server to stall for an hour when our wiki users searched for certain keywords. After some digging, I identified the cause of the high CPU usage: the search endpoint calls `removeMarkdown()` to render search-result context. My fix was to replace the package with remark and the strip-markdown plugin. Note that the plugin doesn't have an option to disable stripping of HTML and has some quirks. Not really sure what HTML is being whitelisted here.
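For context on the whitelisting question: Postgres's full-text highlighting (`ts_headline`) wraps matching terms in `<b>…</b>` by default, so any stripping step has to let those tags through. A minimal, hypothetical sketch of such a whitelist – an illustration, not Outline's actual implementation:

```javascript
// Hypothetical helper: strip all HTML tags except a small whitelist,
// so Postgres ts_headline's <b>…</b> highlight markers survive.
// This is an illustration, not Outline's actual code.
const ALLOWED = new Set(["b", "/b"]);

function stripDisallowedTags(html) {
  return html.replace(/<(\/?[a-zA-Z][^>]*)>/g, (match, tag) => {
    // Take the tag name (ignoring any attributes) and check the whitelist.
    const name = tag.toLowerCase().replace(/\s.*$/, "");
    return ALLOWED.has(name) ? match : "";
  });
}

console.log(stripDisallowedTags("<div><b>search</b> term</div>"));
// → "<b>search</b> term"
```

The design point is simply that the highlight tags carry information the search UI needs, which is why a stripper with no "keep HTML" option (like strip-markdown at the time) was a problem here.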