Questions about Crowded fields #72
This limit doesn't seem to be intrinsic to the SExtractor code: the command-line version of SExtractor is perfectly happy to detect a large number of faint sources (although it takes a while to run). @kbarbary any thoughts?
Is it a configurable parameter in Source Extractor? It could have changed since I forked the SEP codebase. Without looking, I'm guessing it could be made configurable in SEP, either with a global (like …
It doesn't appear to be user-configurable. It seems to work out dynamically how much it needs somehow; I haven't looked deeply enough into the code to have a full understanding.
Fixes #72 Expose sub-object deblending limit as parameter
I have been running into issues extracting photometry in crowded fields near the bulge, and I had a few questions about whether SEP can be tuned to perform better in these fields.
One issue is that the maximum number of objects per deblending level is hard-coded to 1024:
#define NSONMAX 1024 /* max. number per level */
Is that limit required, or could it be parameterized like set_pixextract()?
One way to keep SEP from crashing is to raise the detection threshold considerably. That works for a basic astrometric extraction, but I was hoping to go deeper in the image, and setting the detection threshold that high in these fields can cause strange issues when the seeing/focus of the image is poor.
Deblending in general is quite a challenge in these crowded fields, so it would be nice to set the deblend threshold quite low, but that hits the crash very quickly.
Memory is not an issue on the machine we are using and we don't mind if the processing takes significant time.
Any suggestions or ideas are welcome.