Write demeshening description. Fixes #2629. #3293
Conversation
This should fix #2629. |
Thank you for doing this, Matt! Here are a couple of questions I had while reading it.
Co-authored-by: Clément Robert <cr52@protonmail.com>
Thanks for your prompt response! |
It doesn't currently log that, no. I don't think it's out of the question to do so, but I still think it's going to be unnecessary. A completely uncompressed set of coarse bitarrays would take up:
Looking at the data in our sample collection, it seems to me that the largest index we have is about 111M. Anyway, I think this is something that could be discussed, but right now I only have these heuristics to go on -- and I don't know that we want to include them if they're going to potentially be weirdly wrong or something. |
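For scale, here's a back-of-envelope sketch of the kind of estimate involved. The function name, the 64^3 grid, and the file count are all illustrative assumptions, not yt's actual defaults or API -- the point is only that an uncompressed coarse index is one bit per cell per data file:

```python
# Hypothetical back-of-envelope estimate (names and defaults are illustrative,
# not yt's actual API): an uncompressed coarse bitarray index stores one bit
# per cell of a dims**3 grid for each data file in the dataset.

def coarse_bitarray_bytes(n_files, dims=64):
    """Bytes used by uncompressed coarse bitarrays for n_files data files,
    each masking a dims**3 cell domain at one bit per cell."""
    bits_per_file = dims ** 3
    return n_files * bits_per_file // 8

# e.g. 512 data files at 64^3 cells each -> 16 MiB of raw bitmask
print(coarse_bitarray_bytes(512))  # 16777216
```

In practice the compressed (EWAH) representation should be far smaller than this worst case, which is why the heuristics above are hard to state precisely.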
If it's possible to inflate disk usage by 17%, however unlikely, I think it warrants a log entry, albeit conditional. I've worked on clusters where exceeding my space quota by even 1MB would block any writing, so I don't think I'd be very happy if I had to go through yt's documentation to understand how that happened, most importantly because I would probably not think of yt as a suspect as I search for the reason why I keep going over my quota. |
I think it's out of scope for this particular PR, but we may want to add another blocker issue.
|
I agree! |
This is really terrific work. The writing is clean and easy to read but also extremely informative, and it includes practical information on how to speed things up for your own datasets. Thanks for writing this!
I included a couple of comments. The only other additional comment is something that could be included in this PR, or that we could work out in another PR: how does yt now treat smoothed particle datasets for the bulk of its operations? Previously, we deposited particles into octrees and then generated things like slices and projections from that gridded data, but now we're flying free without the mesh. I think a base-level description of how yt conducts the line integrals that traverse these smoothing kernels is warranted. But as I said, it doesn't have to be you who writes it, and it doesn't have to be in this PR, and I'm certainly open to discussion on this point. Thanks for writing this PR, @matthewturk.
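To illustrate the kind of line integral being discussed -- this is a sketch of the math only, not yt's actual implementation (which lives in compiled pixelization routines) -- here is the integral of a standard 3D cubic-spline SPH kernel along a line of sight passing at impact parameter `b` from a particle, the basic per-particle operation behind a projection of SPH data:

```python
import math

def cubic_spline(r, h):
    """Standard M4 cubic-spline kernel in 3D, with compact support at r = 2h."""
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)  # 3D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def kernel_line_integral(b, h, n=1000):
    """Midpoint-rule integral of the kernel along a ray at impact parameter b."""
    smax_sq = (2.0 * h) ** 2 - b * b
    if smax_sq <= 0.0:
        return 0.0  # the ray misses the kernel's compact support entirely
    smax = math.sqrt(smax_sq)
    ds = 2.0 * smax / n
    total = 0.0
    for i in range(n):
        s = -smax + (i + 0.5) * ds  # path-length coordinate along the ray
        total += cubic_spline(math.sqrt(b * b + s * s), h) * ds
    return total
```

As a sanity check, integrating `2 * pi * b * kernel_line_integral(b, h)` over all impact parameters recovers the kernel's unit normalization.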
@chummels once you've taken a look, let me know which areas need more explanation, and if you feel it's OK, please go ahead and click "resolve." And if they need more work, I'll get right on it! :) |
Thanks for the additional notes -- they helped clear up some of my misunderstanding. Very thorough job here. My suggestions should not hold things up; I just noted them because you switched from "refined index" to "fine index" about halfway through. Great writeup!
note @matthewturk , in order for #3298 to affect this PR, you'll need to rebase or merge from master |
Co-authored-by: Cameron Hummels <chummels@gmail.com>
I think the merge is done and this should be ready to go! Thanks for the review, folks. |
Hi @matthewturk -- Maybe not part of this PR, but one other thing that may be relevant from the perspective of the user is how slices and projections of SPH data have changed due to the demeshening, and why these are more accurate. I am not an expert on this stuff, otherwise I would be happy to write it myself. Maybe a couple of paragraphs would suffice? |
Good idea. What if I linked to https://matthewturk.github.io/yt4-gallery/ ? I'm somewhat hesitant to check all those images in here.
|
How about we just show one (the first one) and link to that page for the rest? |
This adds narrative docs about demeshening and how it works from the perspective of someone using it. It includes links to the yt 4 paper description, and I've also added cross-references within the docs where appropriate.