As confirmed with @markccchiang, the spatial range of an image cube is currently derived from the bottom-left (BL) and top-right (TR) pixels alone. This works well when there is no sky rotation, which is mostly true for radio image cubes. However, image cubes from optical observations may have a non-zero sky rotation, and in that case the BL and TR pixels alone give an inaccurate spatial range.
To get a better estimate of the spatial range, in either RA/Dec or lon/lat coordinates, we should first convert all four corners of the image to world coordinates, then take the minimum and maximum of the four RA (or lon) values to obtain the RA range. The same applies to Dec (or lat).
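A minimal sketch of the four-corner approach, using a simplified linear WCS (reference pixel, scale, and rotation only, ignoring projection curvature; the function names and parameters here are hypothetical, not the actual code in question):

```python
import math

def pixel_to_world(x, y, crpix, crval, cdelt, rot_deg):
    """Toy linear WCS: rotate the pixel offset, then scale and shift.

    Ignores spherical projection effects; for illustration only.
    """
    dx, dy = x - crpix[0], y - crpix[1]
    c = math.cos(math.radians(rot_deg))
    s = math.sin(math.radians(rot_deg))
    ra = crval[0] + cdelt[0] * (c * dx - s * dy)
    dec = crval[1] + cdelt[1] * (s * dx + c * dy)
    return ra, dec

def spatial_range(nx, ny, crpix, crval, cdelt, rot_deg):
    """Derive the RA/Dec range from all four image corners, not just BL/TR."""
    corners = [(0, 0), (nx - 1, 0), (0, ny - 1), (nx - 1, ny - 1)]
    world = [pixel_to_world(x, y, crpix, crval, cdelt, rot_deg)
             for x, y in corners]
    ras = [w[0] for w in world]
    decs = [w[1] for w in world]
    return (min(ras), max(ras)), (min(decs), max(decs))
```

With a 45° rotation, the BL and TR corners land at the same RA, so a BL/TR-only range collapses to zero width in RA, while the four-corner range correctly covers the left and right corners as well.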