Optimize HM autocorrelation power spectra #891
Conversation
Just raising a question - do we want profiles with different FFTLog precision parameters to be treated as equivalent? Is there a chance this could induce some kind of numerical difference from what we are expecting to get, at a level we would care about?

Right, so I added an FFTLog precision parameters check just in case. It probably wouldn't make any difference in the calculation of the power spectra unless the precision parameters were wildly different, but better to be on the safe side.
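A minimal sketch of what such a precision check might look like. `FakeProfile`, `same_fftlog_precision`, and the parameter names and default values below are illustrative stand-ins, not the actual CCL `HaloProfile` API:

```python
# Illustrative sketch of an FFTLog precision-parameter check between
# two profiles. FakeProfile and the parameter names/values below are
# stand-ins, not CCL's actual HaloProfile implementation.

class FakeProfile:
    def __init__(self, **fftlog_pars):
        # Typical FFTLog knobs: padding and sampling density
        # (values here are illustrative defaults).
        self.precision_fftlog = {"padding_lo_fftlog": 0.1,
                                 "padding_hi_fftlog": 10.0,
                                 "n_per_decade": 100}
        self.precision_fftlog.update(fftlog_pars)


def same_fftlog_precision(p1, p2):
    # Dict equality compares all keys and values, so any differing
    # precision parameter makes the profiles non-equivalent.
    return p1.precision_fftlog == p2.precision_fftlog
```

With identical defaults the check passes; changing any knob (e.g. `n_per_decade`) makes it fail, which is the "safe side" behaviour discussed above.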
Update
One preliminary question before anything else: is this ready to be looked at again? @nikfilippas @damonge
@c-d-leonard yes, this is ready for review.
OK, some comments below.

My main potential worry is whether, further down the line, we start implementing profiles that include complex parameters (e.g. objects, just like we currently have concentration-mass relations) and we forget to update the `__eq__` method. Thoughts?
OK, so the issue is that profiles may be complicated objects with complicated attributes, so comparing them may not be straightforward, and it's hard to predict what future profiles will contain and how to compare them. I would suggest the following approach:

a) You define an `__eq__` method in the base profile class that simply does the following:
```python
def __eq__(self, prof2):
    # You can compare types instead if you prefer
    if self.name == prof2.name:
        return self.__eq_same(prof2)
    else:
        return False
```
b) Define a `__eq_same` method in each profile subclass that compares itself with another profile of the same subclass. In many cases this will just be a matter of comparing the `__dict__`s and FFTLog params of both objects, so you can define a default `__eq_same` method in the base class that does that, and then only overload that method in specific profiles that are more complicated (e.g. NFW or HOD, which also contain a cM relation).

This way, when we create new profiles in the future, if they are complicated, we just need to create a custom `__eq_same` method for that specific profile.
Thoughts?
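The two-level scheme above could be sketched roughly like this. All class and attribute names are illustrative, not CCL's actual API; note that the sketch uses a single leading underscore (`_eq_same`) rather than the double underscore from the comment, because a double underscore is name-mangled per class in Python and could not be overloaded cleanly in subclasses:

```python
# Sketch of the proposed two-level comparison. Names are illustrative.

class HaloProfileBase:
    def __eq__(self, other):
        # Profiles of different types are never equal.
        if type(self) is not type(other):
            return False
        return self._eq_same(other)

    def _eq_same(self, other):
        # Default check: compare all attributes (FFTLog precision
        # parameters would live in __dict__ too).
        return self.__dict__ == other.__dict__


class FakeCM:
    """Stand-in for a concentration-mass relation object."""
    def __init__(self, name):
        self.name = name


class SimpleProfile(HaloProfileBase):
    """A simple profile: the default __dict__ comparison suffices."""
    def __init__(self, amplitude):
        self.amplitude = amplitude


class ProfileWithCM(HaloProfileBase):
    """A 'complicated' profile holding a cM relation, so it
    overloads _eq_same instead of using the __dict__ default."""
    def __init__(self, amplitude, c_m):
        self.amplitude = amplitude
        self.c_m = c_m

    def _eq_same(self, other):
        # Compare the cM relation by name, not by object identity.
        return (self.amplitude == other.amplitude
                and self.c_m.name == other.c_m.name)
```

The `type(self) is not type(other)` guard plays the role of the `name` comparison in the comment above: only profiles of the same subclass ever reach the subclass-specific check.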
I think that even if we go down this route, the check will still not be 100% failsafe.

Of course, it doesn't really matter which concentration prescription you use to convert between mass definitions (as long as it's something sensible), but checking the
My point is that if we define an
OK, there are two separate issues:
a) Likewise, as we create new profiles, it will be annoying to have to repeat the same lines of code for the same checks in every new profile. Because this all happens in a for-loop, we can't cut the loop in two pieces and have the numerical checks happening at the base and the difficult checks ad hoc. So this issue boils down to:
b) Using the current implementation, EDIT: Actually the extra pars for Ishiyama are a boolean and a boolean, but the fact that we can also compare strings adds to my argument about the redundancy of explicit checks for each profile.
I don't think that's true. You would create a default
@damonge have a look at this new implementation - it's simpler than the one we had before and I think it addresses all of your concerns. Basically I implemented an

Also look at the one comment that is still open - I made those changes as well.
A first preliminary review. I won't look at the `halo_model` code until the cclobject branch has been merged.

BTW, don't you need to rebase this to master?
It was done automatically when I merged CCLObject to master.
More comments. Will move to tests after this.

(Also, tests seem to be failing now.)
@nikfilippas tests should be passing now, and I'd be happy to merge as is, but since I've made modifications, you should take a look. Just look at my last 3 commits.
@damonge there is no point in adding all those TODO items for later. Let's just implement an
I do not want to leave anything that could affect performance negatively on master. It's best if you get rid of the TODOs in the
Update: This now relies on `CCLObject` (PR934), so leaving it as draft until PR934 is reviewed. In particular, it makes use of the `__eq__` framework built there.

Addresses #890.

Checklist of optimizations:

- `I_x_2`
- `halomod_power_spectrum`
- `halomod_trispectrum_1h`
- `halomod_Tk3D_SSC`
- `Profile2pt.fourier_2pt`

In short, calculating power spectra and trispectra for autocorrelations can be 30% faster if we do some basic checks on the passed profiles. With this PR, the functions in the checklist above will check whether the passed profiles are equivalent (through a newly-implemented `__eq__` method in `HaloProfile`) and save time by not calculating unnecessary integrals if they are.
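The saving described above can be sketched as follows. All names (`fourier_integral`, `power_spectrum_sketch`, `Prof`) are illustrative stand-ins for the halo-model machinery, not the PR's actual code; the call counter just makes the skipped integral visible:

```python
# Sketch of the optimization: when both profiles compare equal (an
# autocorrelation), the second Fourier integral is skipped and the
# first is reused. Names are illustrative, not CCL's actual API.
import numpy as np

CALLS = {"n": 0}  # tracks how many integrals we actually compute


def fourier_integral(prof, k):
    """Stand-in for an expensive profile Fourier transform."""
    CALLS["n"] += 1
    return np.exp(-0.5 * prof.scale * k**2)


class Prof:
    def __init__(self, scale):
        self.scale = scale

    def __eq__(self, other):
        # Same spirit as the HaloProfile __eq__ discussed above.
        return type(self) is type(other) and self.__dict__ == other.__dict__


def power_spectrum_sketch(prof1, prof2, k):
    i1 = fourier_integral(prof1, k)
    if prof1 == prof2:
        i2 = i1  # autocorrelation: reuse the first integral
    else:
        i2 = fourier_integral(prof2, k)
    return i1 * i2
```

For an autocorrelation only one integral is computed instead of two, which is where the quoted ~30% speed-up in the full power-spectrum and trispectrum calculations comes from.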