Alter handling of huge text files #3097
Issue #3090 asks whether matches might be excerpted in results from the search API to avoid a performance-killing situation such as returning a line that is a gigabyte in length. There is the open #2732 to convert `SearchEngine` to use the modern Lucene unified highlighter. With that PR's new `HitFormatter`, it would be fairly straightforward to refactor to use the same excerpting as applied by `LineHighlight` for UI search.

Huge text files present additional problems, however, for OpenGrok.
The Lucene `uhighlight` API makes it ultimately impossible to avoid loading full, indexed source content into memory. While in some places the API permits content to be represented as `CharSequence`, which would allow (with a bit of work) lazily loading source content into memory, the final formatting via Lucene `PassageFormatter` is done with a method, `format(Passage[] passages, String content)`, where a `String` is demanded.
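To make the constraint concrete, here is a minimal sketch (mine, not from the PR) of a `PassageFormatter` subclass against the Lucene `uhighlight` API; note that the entire source content arrives as a single `String`:

```java
import org.apache.lucene.search.uhighlight.Passage;
import org.apache.lucene.search.uhighlight.PassageFormatter;

// A minimal sketch, not OpenGrok code: even a trivial PassageFormatter
// subclass is handed the whole content as one String, so a multi-gigabyte
// file cannot be streamed or lazily loaded at this step.
public class ExcerptFormatter extends PassageFormatter {
    @Override
    public Object format(Passage[] passages, String content) {
        StringBuilder sb = new StringBuilder();
        for (Passage p : passages) {
            // Excerpt only the matched passage from the full content.
            sb.append(content, p.getStartOffset(), p.getEndOffset()).append('\n');
        }
        return sb.toString();
    }
}
```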
As well, keep in mind that Lucene postings have an offset datatype of `int`, so content past an offset of 2,147,483,647 cannot be indexed for OpenGrok to present context, since OpenGrok chooses to store postings-with-offsets so that later context presentation does not re-analyze files. (Currently OpenGrok does not limit the number of characters read, which results in issues like #2560. The latest JFlex 1.8.x has revised its `yychar` to be a `long`, but Lucene would still have an `int` limit for offsets.)
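For illustration, a hypothetical sketch of configuring a Lucene field to store postings-with-offsets (the field name and helper are mine, not OpenGrok's actual analyzer code); the offsets Lucene records here are Java `int`s, which is where the 2,147,483,647 ceiling comes from:

```java
import java.io.Reader;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexOptions;

// Hypothetical helper illustrating postings-with-offsets; OpenGrok's real
// field setup differs. Offsets in postings are Java ints, so no token
// starting past offset Integer.MAX_VALUE can be recorded.
final class OffsetsField {
    static Field create(String name, Reader contentReader) {
        FieldType type = new FieldType(TextField.TYPE_NOT_STORED);
        type.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS);
        type.freeze();
        return new Field(name, contentReader, type);
    }
}
```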
For huge text files, then, I can think of a few possible choices:
1. Truncate the indexed content of a huge file at the `int` offset limit and present context by excerpting via the `PassageFormatter`. This means however that some content from very large files would be missing from the index. (Currently all content from >2GB files is missing from the index.)

   or

2. Index the full content of huge files for searching, but do not store postings-with-offsets for them and do not present context for their matches.

   or

3. Split huge files into pieces (each fitting within `int` and likely fitting within say `short` to make the pieces very manageable), and fully index the pieces, and allow presenting context for each piece separately. (A rough sketch of such splitting follows the list.)
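A rough sketch of the third option, with assumptions of my own (piece size and field names are not from this issue): read a huge file in bounded pieces, index each piece as its own Lucene document, and store the piece's base offset so per-piece context can be mapped back to a position in the original file:

```java
import java.io.IOException;
import java.io.Reader;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;

// Hypothetical splitter, not OpenGrok code. The piece size is well under
// Integer.MAX_VALUE (32K chars here, a comfortably short-sized count).
// A real implementation would also want to split at token boundaries.
final class PieceIndexer {
    static final int PIECE_CHARS = 32_768;

    static void indexPieces(IndexWriter writer, String path, Reader in)
            throws IOException {
        char[] buf = new char[PIECE_CHARS];
        long base = 0; // offset of the current piece within the whole file
        int n;
        while ((n = readFully(in, buf)) > 0) {
            Document doc = new Document();
            doc.add(new StoredField("path", path));
            doc.add(new StoredField("pieceBase", base));
            doc.add(new TextField("full", new String(buf, 0, n), Field.Store.NO));
            writer.addDocument(doc);
            base += n;
        }
    }

    // Fill buf as far as possible; returns chars read, 0 at EOF.
    private static int readFully(Reader in, char[] buf) throws IOException {
        int off = 0;
        while (off < buf.length) {
            int r = in.read(buf, off, buf.length - off);
            if (r < 0) break;
            off += r;
        }
        return off;
    }
}
```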
I generally think the second option might be satisfactory. Is there truly much utility to excerpting from a 1GB JSON file? What does "context" mean within such a file? I don't expect realizing that option would be too difficult. I suppose it could be done by reclassifying huge `Genre.PLAIN` files as `Genre.DATA`; but still using the plain-text analyzer and, where applicable, a language-specific symbol tokenizer; and also avoiding XREF generation (by virtue of being `Genre.DATA`).
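A minimal sketch of that reclassification, assuming the `Genre` enum from OpenGrok's analyzer framework and a size threshold of my own choosing (the issue proposes no specific number):

```java
import org.opengrok.indexer.analysis.AbstractAnalyzer.Genre; // import path assumed

// Hypothetical helper, not actual OpenGrok code: demote huge plain-text
// files to DATA so they remain searchable (plain-text analysis and, where
// applicable, symbol tokenization) but get no XREF and no context.
final class HugeFileGenre {
    // Assumed threshold; the issue does not name one.
    static final long HUGE_BYTES = 1L << 30; // 1 GiB

    static Genre classify(Genre detected, long fileBytes) {
        if (detected == Genre.PLAIN && fileBytes > HUGE_BYTES) {
            return Genre.DATA;
        }
        return detected;
    }
}
```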
Comments:

> I vote for second option too; such huge files are no good for humans anyway (humans would filter them anyhow), so why bother with them? No gain, very narrow use case; they should generally just get out of the way (exactly as stated in #1646 (comment)).

> OK, I'm glad to get agreement.