Conversation

@taranu (Contributor) commented Nov 24, 2025:

No description provided.

@taranu taranu force-pushed the tickets/DM-52667 branch 2 times, most recently from cf17855 to 4db2ecf Compare November 26, 2025 01:38
@ctslater (Member) left a comment:

Looks good, a few suggestions on code structure but I also understand that it's crunch time, so I'm not going to be mad if you don't think they're worth changing.

self.model_column_flux, self.model_column_flux_err,
)
)
return flux_psf, fluxerr_psf, flux_model, fluxerr_model
@ctslater (Member): This four-tuple is a good candidate for making a tiny class, so you can pass these as a single unit.

@taranu (Contributor, author): Hmm, it could be a dataclass, and I used to be more enthusiastic about using those in such cases, but I'd prefer to leave this be for now.

flux_psf, fluxerr_psf, flux_model, fluxerr_model = (
np.zeros(n_obj, dtype=float) for _ in range(4))
for band_single in bands:
flux_psf_b, fluxerr_psf_b, flux_model_b, fluxerr_model_b = self._get_fluxes(
@ctslater (Member): If _get_fluxes returned a class, you could have a free function that took a list of instances of those classes and returned the sum. I think that would keep the main flow of the computation cleaner.
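The two suggestions above (a tiny class for the four arrays, plus a free function that sums a list of them) could be sketched roughly like this; the class and function names here are hypothetical, not taken from the actual code:

```python
from dataclasses import dataclass

import numpy as np


# Hypothetical name: a small container for the four per-band arrays
# that _get_fluxes currently returns as a four-tuple.
@dataclass
class BandFluxes:
    flux_psf: np.ndarray
    fluxerr_psf: np.ndarray
    flux_model: np.ndarray
    fluxerr_model: np.ndarray


def sum_band_fluxes(fluxes_per_band):
    """Sum fluxes linearly and errors in quadrature over a list of BandFluxes."""
    return BandFluxes(
        flux_psf=sum(f.flux_psf for f in fluxes_per_band),
        fluxerr_psf=np.sqrt(sum(f.fluxerr_psf**2 for f in fluxes_per_band)),
        flux_model=sum(f.flux_model for f in fluxes_per_band),
        fluxerr_model=np.sqrt(sum(f.fluxerr_model**2 for f in fluxes_per_band)),
    )
```

This keeps the per-band loop in the caller down to one call per band plus one summing call at the end, at the cost of an extra class.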

flux_model[good] += flux_model_b[good]
fluxerr_psf[good] += fluxerr_psf_b[good]**2
fluxerr_model[good] += fluxerr_model_b[good]**2
fluxerr_psf = np.sqrt(fluxerr_psf)
@ctslater (Member): Reusing the same variable is confusing; before this line I think you should be accumulating fluxerr_psf_squared and fluxerr_model_squared.
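A minimal standalone sketch of the renaming suggested here, on made-up data: accumulate the squared errors under a _squared name, and only take the square root once at the end.

```python
import numpy as np

# Made-up per-band errors for two bands (values chosen so the quadrature
# sum is easy to check: sqrt(3**2 + 4**2) = 5).
fluxerr_psf_per_band = [np.full(4, 3.0), np.full(4, 4.0)]

fluxerr_psf_squared = np.zeros(4)
for fluxerr_psf_b in fluxerr_psf_per_band:
    fluxerr_psf_squared += fluxerr_psf_b**2

# Convert from a squared error to an error only at the very end,
# so each variable's name matches what it holds at every point.
fluxerr_psf = np.sqrt(fluxerr_psf_squared)
```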

bands_is_multi = [(band, False) for band in self.bands] + [
(band, True) for band in self.bands_combined.keys()]
output = {}
for band, is_multi in bands_is_multi:
@ctslater (Member): Another option is

band_mappings_to_process = {band: [band] for band in self.bands}
band_mappings_to_process.update({k: v.split(",") for k, v in self.bands_combined.items()})
for output_band, input_bands in band_mappings_to_process.items():

Up to you whether that's cleaner.
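As a self-contained illustration of the suggested mapping (assuming, as the snippet does, that bands_combined maps an output band name to a comma-separated string of input bands; the band names here are made up):

```python
bands = ["g", "r", "i"]
bands_combined = {"gri": "g,r,i"}  # assumed comma-separated format

# Single-band outputs map to themselves; combined outputs map to their inputs,
# so one dict drives both cases in a single loop.
band_mappings_to_process = {band: [band] for band in bands}
band_mappings_to_process.update(
    {k: v.split(",") for k, v in bands_combined.items()}
)

for output_band, input_bands in band_mappings_to_process.items():
    print(output_band, input_bands)
```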

0.5*(table[size_column.format(axis='x')]**2 + table[size_column.format(axis='y')]**2)
)
small = size_model < self.max_reff_compact
n_obj = len(small)
@ctslater (Member): len(table) would be clearer, since it's not really related to small.

# Attempt to correct any flux-independent systematic offset
# Might need to be a function of S/N
if np.sum(good == True) > 10: # noqa: E712
flux_ratio *= 1./np.nanmedian(flux_ratio[good])
@ctslater (Member): I'd be very hesitant about anything that's not a per-object calculation, since it's difficult for a user to recreate. Unless this makes a significant improvement, I'd leave out this correction.

@taranu (Contributor, author) commented Dec 3, 2025:

I made it optional and turned off by default. It did make a difference in COSMOS so I will need to investigate further.

Per-tract isn't necessarily the ideal scale to apply this sort of correction, but it's the largest that can be done at this stage, and I don't think dropping to a smaller scale like patches would be of any benefit.
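A standalone sketch of the optional median-ratio correction being discussed, on synthetic data (the 10-object threshold mirrors the snippet above; everything else here is made up):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic flux ratios with a ~10% flux-independent systematic offset.
flux_ratio = rng.normal(1.1, 0.05, size=100)
good = np.isfinite(flux_ratio)

# Only apply the correction when enough objects are well measured,
# so the median is a meaningful estimate of the offset.
if np.sum(good) > 10:
    flux_ratio = flux_ratio / np.nanmedian(flux_ratio[good])
```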

class ModelExtendednessColumnAction(ExtendednessColumnActionBase):
fluxerr_coefficent = pexConfig.Field[float](
doc="The coefficient to multiply the flux error by when adding to the model flux.",
default=1.0,
@ctslater (Member): My read from the plots was that you had settled on 0.5 for this instead of 1.0. My vote is still zero, but I could live with 0.5. I think it's OK to merge the code and backport a tweak to this value later if you want to get a third opinion etc.

@taranu (Contributor, author): I set it to 0.5, partly because it's slightly better and partly because I think it's worth discussing in the paper.
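Purely as an illustration of how such a coefficient could enter an extendedness cut (the form of the criterion below is an assumption for the sketch, not the actual ModelExtendednessColumnAction logic, and all values are made up):

```python
import numpy as np

fluxerr_coefficient = 0.5  # the config value settled on above

# Made-up fluxes: the first object is clearly extended (model flux well
# above PSF flux); the second is consistent with a point source.
flux_psf = np.array([100.0, 100.0])
flux_model = np.array([130.0, 101.0])
fluxerr_model = np.array([10.0, 10.0])

# Assumed form of the cut: flag a source as extended when the model-minus-PSF
# flux difference exceeds fluxerr_coefficient times the flux error, so a
# larger coefficient makes the classifier more conservative.
extended = (flux_model - flux_psf) > fluxerr_coefficient * fluxerr_model
```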

@taranu taranu force-pushed the tickets/DM-52667 branch 2 times, most recently from 059fcc1 to c2f3b22 Compare December 3, 2025 07:31
@taranu taranu merged commit 7b1d1ea into main Dec 3, 2025
4 checks passed
@taranu taranu deleted the tickets/DM-52667 branch December 3, 2025 20:07