Conversation

Local tests working up to revamped SD imaging stage. Passing.

@thomaswilliamsastro this is ready for a first-pass review. I'm changing the behaviour from our usual loop to process all products per target, and (under the hood) this changes the ordering of operations a bit. Some parts are kludgy but work. Most of the clean-up reflects metadata handling improvements in the ALMA pipeline or the CASA task (e.g., …).

To-dos:

- … mask for baseline masking

Force-pushed from a0d932f to b435919.

I have another full test run going locally after merging in #286 and #280. Otherwise this is ready for review. There are some follow-up steps that can be addressed later after more discussion:

low-sky left a comment:

Looks great; I don't see any showstoppers, but added some low-utility comments.
```python
width=str(chan_dv_kms)+'km/s',
start=str(start_vel)+'km/s',
veltype="radio",
outframe='LSRK',
```
These will be nice to override in the future.
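To make the override suggestion concrete, here is a minimal sketch of one way these hard-coded spectral-axis settings could be made overridable. The function and dictionary names are hypothetical illustrations, not the PR's actual API.

```python
# Hypothetical sketch: expose the hard-coded spectral gridding choices as
# defaults that a caller can override (names are illustrative only).
DEFAULT_SPECTRAL_KWARGS = {
    "veltype": "radio",
    "outframe": "LSRK",
}

def build_spectral_kwargs(chan_dv_kms, start_vel, overrides=None):
    """Merge user overrides into the default spectral-axis settings."""
    kwargs = dict(DEFAULT_SPECTRAL_KWARGS)
    kwargs["width"] = f"{chan_dv_kms}km/s"
    kwargs["start"] = f"{start_vel}km/s"
    if overrides:
        kwargs.update(overrides)
    return kwargs
```

A call like `build_spectral_kwargs(2.5, -100.0, overrides={"outframe": "BARY"})` would then feed straight into the imaging task call while keeping today's behaviour as the default.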

```python
logger.info(f"Using these ASDM files: {EBsnames}")
```

```python
if len(EBsnames) == 0:
```
Shouldn't this come before the renaming step?

Good point. Moved it up.
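For reference, the reordering agreed above amounts to bailing out on an empty EB list before any renaming side effects happen. This is only a sketch; the function and argument names are hypothetical, not the pipeline's actual API.

```python
import logging

# Sketch of the ordering suggested in review: check for an empty EB list
# *before* doing any renaming (names here are hypothetical).
def stage_asdm_files(EBsnames, rename_func, logger=None):
    logger = logger or logging.getLogger(__name__)
    if len(EBsnames) == 0:
        logger.warning("No ASDM files found; skipping this target.")
        return []
    logger.info(f"Using these ASDM files: {EBsnames}")
    return [rename_func(name) for name in EBsnames]
```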

```python
vel_line_mask = product_dict[this_product]['vel_line_mask']

# Convert velocity range to frequency range
freq_line_mask = (vel_line_mask * u.km / u.s).to(u.Hz, u.doppler_optical(freq_rest * u.MHz)).value
```
`u.doppler_radio` for consistency with the hardwired convention choice?

I figured this would be minor, but agreed we should stick to the same velocity convention throughout.

Added most of the suggestions; thanks @low-sky!

@thomaswilliamsastro I have one more complete test run ongoing; the storage is slow, so it's taking longer than it should. I would like it to finish to make sure I caught all the renaming of the SD products. After that and your review, I'm ready to merge this.

@e-koch I should have some time to run this at the end of the week. Just let me know when your tests are done and I'll check that it does what I'd expect on my end.

@thomaswilliamsastro alright, my test made it through. Ready for your testing!

I'll check on the larger spectral range. I'm probably passing the padded range for the baseline fitting.

@e-koch made some updates above. Only one problem left on my end!

@thomaswilliamsastro alright, I think I've fixed things re: #277. I have a test run going for ngc7793_1 and _2. The pipeline is now going to re-run the whole calibration separately for each part, despite their being observed together. I'm 99% sure the legacy pipeline was doing the same thing. It's not the most efficient approach, but this is likely to remain a corner case, as few nearby-galaxy ALMA observations have multiple small mosaics in a single EB. If that changes, we can figure out how to reconcile this via the …

Exactly, I did the dumb thing. Running this case now.

A few more notes from corner cases:

The line-wing padding for the baseline fitting is currently set to 200 km/s beyond the source velocity range. This can likely be smaller, and the current value may hit issues where the masked region covers a whole SPW. We haven't hit an issue with this yet, so I suggest not changing it in this PR, for consistency with the legacy TP pipeline.
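The padding logic described above can be sketched as follows, including a guard for the whole-SPW failure mode. All names are hypothetical and velocities are in km/s; this is not the pipeline's actual implementation.

```python
# Sketch of the line-wing padding for baseline masking: widen the source
# velocity range by a fixed pad (200 km/s currently) and flag the case where
# the padded mask would swallow the entire SPW (names are illustrative).
def padded_line_mask(vel_lo, vel_hi, spw_vel_lo, spw_vel_hi, pad_kms=200.0):
    mask_lo = vel_lo - pad_kms
    mask_hi = vel_hi + pad_kms
    covers_whole_spw = (mask_lo <= spw_vel_lo) and (mask_hi >= spw_vel_hi)
    return (mask_lo, mask_hi), covers_whole_spw
```

When `covers_whole_spw` comes back true, no channels are left for the baseline fit, which is the scenario worth logging loudly if the pad is ever tuned down.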

First, I'm not 100% sure on the data model being used here, so this might be way off. Having faffed around with this for a while, I'm more convinced that we want to separate the SD calibration step from the gridding step. The pipeline is capable of doing all the calibration for all the SPWs at once, which can put a per-EB calibrated stack of spectra into a directory tree. This can serve as the analogue of the interferometric MSs, and we can then pick the stack of MSs with products to do the gridding into a single cube. Then, I think we will want to throw all the spectra bundles into single gridding operations for the whole cube on a per-line basis. This also makes it simpler to incorporate archival SD data, provided it can stumble through the pipeline. It does blow up the bookkeeping a bit (potentially arguing for another keyfile analogous to `ms_file_key`), but it gives good flexibility and preserves our data philosophy.
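To make the bookkeeping suggestion concrete, here is one hypothetical shape such a keyfile could take, mapping per-EB calibrated spectra bundles to the line products that get gridded together into one cube. Every name, key, and identifier below is made up for illustration; nothing here reflects the pipeline's actual schema.

```python
# Hypothetical layout for an SD analogue of ms_file_key: per-EB calibrated
# spectra bundles, grouped by target and config, tagged with the products
# they contribute to (all names below are invented for illustration).
sd_file_key = {
    "ngc7793": {
        "tp_1": {"ebs": ["eb_a", "eb_b"], "products": ["co21", "cont"]},
        "tp_2": {"ebs": ["eb_c"], "products": ["co21"]},
    },
}

def spectra_for_product(key, target, product):
    """Collect every per-EB bundle contributing to one gridded cube."""
    bundles = []
    for config in key.get(target, {}).values():
        if product in config["products"]:
            bundles.extend(config["ebs"])
    return bundles
```

A single gridding call per line would then consume `spectra_for_product(sd_file_key, "ngc7793", "co21")`, which is also the natural hook for dropping archival SD bundles into the same list.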

This is a good point, and we should do this. @low-sky, do you have an opinion on merging the current PR now as "updated and working with the current data model" versus including the data model changes together here? I think either could work, as it's already a significant change but isn't breaking the API.

I think we should probably pull the trigger on this merge and then do a refactor. This is clearly an important increment, and I'm not sure the best practice for multi-MS imaging is stable enough yet that the next steps will be fast.

Updating the pipeline to use the default ALMA SD pipeline with custom selection for baseline fitting.