RawContent isn't raw anymore #2601
I am currently working on this. Have already reproduced the bug.
The problem is that `Page.rawContent` no longer contains the raw content of the file. A comment states that this is done to "save memory", which makes no sense, since the raw content is clearly still needed. `rawContent` and `rawContentCopy` should probably be named differently. Better yet, `rawContentCopy` should be eliminated altogether, because it serves no real purpose; one could simply stop reassigning `rawContent`. At the very least it should not live on the struct, as it is only used in `preparePagesForRender` and the methods it calls. I am going to do this refactoring of `rawContentCopy`, but what do you think I should do with `rawContent`?
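To make the proposal concrete, here is a rough sketch of the refactor I have in mind. The struct and method names are modeled on Hugo's code around 0.17, but the details are illustrative, not the actual implementation:

```go
package hugolib

// Illustrative sketch only; names are modeled on Hugo ~0.17,
// not the actual implementation.

type Site struct {
	Pages []*Page
}

type Page struct {
	rawContent []byte // the source file's content, never mutated after reading
	// rawContentCopy is gone from the struct; the mutable working
	// copy becomes a local variable where it is actually used.
}

// RawContent returns the untouched source, which is what users expect.
func (p *Page) RawContent() string {
	return string(p.rawContent)
}

func (s *Site) preparePagesForRender() {
	for _, p := range s.Pages {
		// Work on a local copy so p.rawContent stays raw.
		workContent := make([]byte, len(p.rawContent))
		copy(workContent, p.rawContent)
		// ... shortcode handling etc. would mutate workContent only ...
		_ = workContent
	}
}
```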
I would be surprised if […]. I think we should let this one sit for a little bit. One option is to deprecate the […].
I have made a pull request that removes `Page.rawContentCopy`: #2634. It was only used in one method and in functions that method calls, which really should be more obviously part of that method.
I don't think the memory savings from dropping the raw content are worth it. The content is written by hand, after all, so you would need a great many writers just to keep pace with the growth of available RAM.
What do you mean? As to being "worth it", do you have any data to back your claim?
As to numbers: https://github.com/bep/hugo-benchmark can be useful. |
I meant that the raw content is not automatically generated and is therefore fairly limited in size. I have no data on whether raw content is needed, other than that one person has hit this issue. Leaving the feature out may be a good idea IMO, but not as a memory optimization; fewer features is a desirable goal in itself.

Why not optimize memory? A book is about a megabyte of text, so for the raw content to take up a gigabyte a site would need roughly a thousand books' worth. Sites with a lot of text are usually about one book long: Fallen London, for example, has stated that it is 1.5M words long, which is only a few megabytes. The only kind of site I can think of where the memory use would even be significant, though still not problematic, is that of a large news producer, and with that much content you would want so much traffic that I doubt anyone would care about a gigabyte of memory.
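For concreteness, here is the back-of-envelope arithmetic behind those numbers (the ~6 bytes per word, including spaces, is my own assumption for English text):

```go
package main

import "fmt"

func main() {
	const megabyte = 1 << 20
	const gigabyte = 1 << 30

	// At ~1 MB of plain text per book, books needed to fill a gigabyte:
	fmt.Println(gigabyte / megabyte) // 1024

	// Fallen London: ~1.5M words at an assumed ~6 bytes/word.
	fmt.Printf("%.0f MB\n", 1.5e6*6/float64(megabyte)) // ~9 MB
}
```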
If I may add my 2 cents: this "RawContent-not-being-raw-anymore" issue breaks integration with (at least) reveal.js (see https://discuss.gohugo.io/t/integration-with-reveal-js-to-present-slides/3047), which I find highly annoying ;-)
An alternative approach: allow the use of […]
Yes, that I can like. I will first do a quick check to see what performance impact creating a copy of the content has.
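For anyone curious, a quick check like that could be as simple as a micro-benchmark along these lines (a sketch, not Hugo's actual benchmark suite; the 50 KB page size is an assumption):

```go
package hugolib

import "testing"

// BenchmarkRawContentCopy measures the raw cost of duplicating a
// page's content, isolated from the rest of the render pipeline.
func BenchmarkRawContentCopy(b *testing.B) {
	src := make([]byte, 50*1024) // assumed typical page size
	b.SetBytes(int64(len(src)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		dst := make([]byte, len(src))
		copy(dst, src)
	}
}
```

Running it with `go test -bench RawContentCopy` would report ns/op and MB/s for the copy alone.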
This was a regression introduced in Hugo 0.17. Fixes #2601
https://discuss.gohugo.io/t/rawcontent-not-working-in-0-17/4343