
NIRSpec calwebb_spec2 flat_field step producing wrong results #2179

Closed
stscijgbot opened this issue Jun 21, 2018 · 51 comments

@stscijgbot (Collaborator)

Issue JP-335 was created by Maria Pena-Guerrero:

The step runs to completion, but it is producing strange results. The medians for all modes are off (we would like them to be of the order of 1e-7); the mode least affected is IFU. I am attaching a plot for an IFU slice, the slit plots for Fixed Slit (FS), and the plots for MOS. The results for FS and MOS are not correct, i.e. the resulting comparison arrays do not find the spectrum and their values are NaN. For the FS data, the algorithm produces correct results for slit S200A1; however, it finds very few pixels with signal for slits S200A2 and S1600A1, and no pixels at all for slit S400A1.

I have placed files for FS, IFU, and MOS data and results at:

/grp/jwst/wit4/nirspec/penaguerrero/flat_field_problems

There you will find the PDF plots that our code produces. Our code re-creates the flat_field algorithm and then compares its result with the pipeline's flat, which is why we expect only machine-precision differences (~1e-7) between the two. The *_calc.fits files contain the flat calculated by our code, and the *_comp.fits files contain the comparison between the pipeline's flat (*_intflat.fits) and the calculated one.

@stscijgbot (Collaborator Author)

Comment by Philip Hodge: I looked at the fixed-slit data, and I don't understand your calculated flat fields.  For example, I displayed (using ds9) the first extension (slit name S200A1) in gain_scale_assign_wcs_extract_2d_intflat.fits and the first extension in gain_scale_assign_wcs_extract_NRS1_flat_calc.fits.  If I understand correctly, the latter is the flat field that you computed from the reference files.  The _intflat.fits file has values that range from 0.008766 to 1.823.  The _calc.fits file (the first extension) has values that are almost all 1 except where the data quality array (the second extension) in gain_scale_assign_wcs_extract_2d_intflat.fits is greater than zero.  Some of the reference file data are all ones (in particular the fflat file jwst_nirspec_fflat_0014.fits), but others have values that are very different from one.  I don't think it makes sense that your computed reference file is almost exactly one except where pixels are flagged as bad.

I have attached cutouts of the ds9 display of the flat field computed by the flat_field step (FS_intflat_1.png) and the flat field computed by your code (FS_calc_1.png).

[Attachments: FS_intflat_1.png, FS_calc_1.png]

@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: Yes, we know there is a problem with our code, but we have not been able to determine what the problem is. You can see our code at https://github.com/spacetelescope/nirspec_pipe_testing_tool/blob/master/calwebb_spec2_pytests/auxiliary_code/flattest_fs.py

We started having these problems after Nadia fixed an indexing issue in assign_wcs; before that, our code was producing reasonable results. Do you think this problem could be linked to an indexing issue as well?

@stscijgbot (Collaborator Author)

Comment by Philip Hodge: I found a couple of problems. This one probably didn't hurt anything:

line 195:  extract2d_wcs_file = step_input_filename.replace("_extract2d_flat_field.fits", ".fits")

The file name is "gain_scale_assign_wcs_extract_2d_flat_field.fits", i.e. with an underscore between "extract" and "2d". As written, the .replace() method will not find the search string, so it will return step_input_filename unchanged. But you didn't want the "gain_scale_assign_wcs.fits" file anyway (which is what you would have gotten if the search string had included the underscore); you want "gain_scale_assign_wcs_extract_2d.fits". The reason you don't want assign_wcs.fits is that it isn't a MultiSlitModel, and even if you open it as such, it will have only one "slit", the full image.

The other problem is serious:

line 378:  pipeflat = fits.getdata(flatfile, ext+(ext-1)*2)

That assumes there are three extensions for each slit in flatfile, but there are now four: SCI, DQ, ERR, WAVELENGTH. That would have worked for the first slit but failed for the others. I think there's still another problem, but I don't have any ideas.
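For illustration, here is a minimal sketch (mine, not code from the script) of per-slit extension indexing under the four-extension layout just described, assuming the primary HDU is extension 0 and each slit contributes SCI, DQ, ERR, WAVELENGTH in that order; addressing extensions by EXTNAME/EXTVER avoids the arithmetic entirely:

from astropy.io import fits

EXT_PER_SLIT = 4  # SCI, DQ, ERR, WAVELENGTH per slit

def slit_sci_ext(slit_num):
    # SCI extension of 1-based slit n: the primary HDU is 0, so SCI is at 1 + (n - 1) * 4.
    return 1 + (slit_num - 1) * EXT_PER_SLIT

# pipeflat = fits.getdata(flatfile, slit_sci_ext(ext))
# or, independent of the extension count per slit:
# pipeflat = fits.getdata(flatfile, extname="SCI", extver=ext)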

@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: Great catches! Thank you! This seems to have greatly improved the results for the FS data; however, the problem persists for the MOS data.

@stscijgbot (Collaborator Author)

Comment by Philip Hodge: OK, good!

I forgot to comment on your earlier question about the indexing issue.  There might still be an indexing issue (e.g. off by one pixel), but if so, I think it would show up as mostly just an offset.  It shouldn't result in the kind of difference you see in the MOS data, for example.

@stscijgbot (Collaborator Author)

Comment by Philip Hodge: Do you have a directory containing the output flat-field files using the latest version of your software?  I'd like to take a look to see whether I can understand the differences.  Thanks.

@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: Sure. I placed the new results for FS at:

/grp/jwst/wit4/nirspec/penaguerrero/flat_field_problems/FS/results_with_codefix

I did not get different results for MOS, even with the modifications. Here is the code that we use for the MOS data:

https://github.com/spacetelescope/nirspec_pipe_testing_tool/blob/master/calwebb_spec2_pytests/auxiliary_code/flattest_mos.py

Thank you.

@stscijgbot (Collaborator Author)

Comment by Philip Hodge: I can't get to that directory.  Either the nirspec/ subdirectory of /grp/jwst/wit4/ no longer exists, or perhaps I no longer have permission to see it.

@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: It is fixed. Sorry, I have been having some problems with my default permissions in central store. 

@stscijgbot (Collaborator Author)

Comment by Philip Hodge: In flattest_mos.py, I don't understand why function reverse_cols is used.  Why would you need to interchange the first and last columns?

In the "get the subwindow origin" section (starting around line 274), it doesn't look to me as if you should be adding model.meta.subarray.xstart and ystart.  Or if you do, you should subtract 1 from those values to convert to zero indexing.

When you call wcstools.grid_from_bounding_box at around line 268, and when you use the shape of the wavelength array returned by slit.meta.wcs together with subwindow offsets, you make the assumption that the bounding box is the same as the shape of the slit. I think it is in this case, but that won't always be correct. It would be better to use the shape of the slit, i.e. slit.data.shape, both for getting the wavelength array and also for looping over pixels in the slit data.

At around lines 331 and 332, I don't understand t=np.where(wave == jwav) and pind = [t[0][0]+py0, t[1][0]+px0].  There can be many pixels in the wavelength array that have the same wavelength jwav, and the particular one at index j in the flattened array won't necessarily be the first in the tuple t.  Rather than flattening the wavelength array (wave) and looping over the 1-D array, you could just loop over both axes of the 2-D array.
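A minimal sketch of that suggestion (mine; wave, py0, and px0 stand in for the script's variables, with placeholder values so the snippet runs on its own):

import numpy as np

# Stand-ins for the script's variables (hypothetical values for illustration):
wave = np.full((16, 32), np.nan)
wave[5:10, 4:28] = np.linspace(1.0, 2.0, 120).reshape(5, 24)
py0, px0 = 900, 1200  # subwindow origin within the full frame

# Loop over both axes of the 2-D wavelength array instead of flattening it;
# each pixel's indices are then known directly, with no np.where lookup.
ny, nx = wave.shape  # in the real script, use slit.data.shape
for j in range(ny):
    for k in range(nx):
        jwav = wave[j, k]
        if np.isnan(jwav):
            continue  # no wavelength assigned to this pixel
        pind = (j + py0, k + px0)  # position in the full-frame reference files
        # ... look up the flat-field components at pind and wavelength jwav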

@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: I changed the loop to go through the 2-D array instead of flattening. Results improved a little but are still very wrong. Here is the new version of the code:

https://github.com/spacetelescope/nirspec_pipe_testing_tool/blob/master/calwebb_spec2_pytests/auxiliary_code/flattest_mos.py

@stscijgbot (Collaborator Author)

Comment by Philip Hodge: I found a few problems.  Here's a brief description of each.

The function reverse_cols does not always work as expected.  Here is an example for data from an actual s-flat file, nirspec_MOS_sflat_G235H_OPAQUE_FLAT2_nrs1_f_01.01.fits; sci is the data from the first extension:

>>> sci.shape
(39, 2048, 2048)
>>> sci = reverse_cols(sci)
>>> sci.shape
(39, 39, 2048)

It should certainly not change the shape.  I don't see why this function is being called anyway.

For NRS2 data, converting to DMS orientation involves not just transposing the last two axes but also rotating the image array by 180 degrees.  Instead of the following:

sfim = sfim[::-1]

you should do this:

sfim = sfim[:, ::-1, ::-1]

This looks like a typo:

if dfimdq[:, pind[0]][pind[1]] != 0:

Instead, that line should be:

if dfimdq[pind[0], pind[1]] != 0:

@stscijgbot (Collaborator Author)

Comment by Philip Hodge: I'd like to make a suggestion regarding testing.  If you have the IDL version of this code, you can modify a copy of both the IDL version and your Python version to print out some values within a small, rectangular region.  In the Python version this is line 428:

flatcor[k, j] = dff * dfs * sff * sfs * fff * ffs

I would suggest printing out all six of those values on the right hand side.  If the IDL and Python versions differ for one or more of those values, that will point out where you should look for typos in the translation from IDL to Python.
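As a sketch (mine; the window bounds are hypothetical, and the other variables come from the surrounding loop), these are the lines one might insert just above line 428:

# Hypothetical debug window; print the six factors for a few pixels only.
J0, J1, K0, K1 = 100, 104, 200, 204

if J0 <= j < J1 and K0 <= k < K1:
    print(f"j={j} k={k} dff={dff:.6g} dfs={dfs:.6g} sff={sff:.6g} "
          f"sfs={sfs:.6g} fff={fff:.6g} ffs={ffs:.6g}")
flatcor[k, j] = dff * dfs * sff * sfs * fff * ffs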

@stscicrawford modified the milestones: 0.10.0, 0.11.0 (Jul 31, 2018)
@stscijgbot (Collaborator Author)

Comment by Philip Hodge: I'm sorry, I gave you some misleading advice last month. I wrote that for rotating an array you should use sfim[:, ::-1, ::-1], but that will only work for a 3-D array, and some of the arrays are 2-D. You can use the following syntax, which rotates the last two axes by 180 degrees and works for arrays of two or more dimensions:

sfim = sfim[..., ::-1, ::-1]

This comment applies to DQ arrays as well.
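A quick check (my illustration, not from the thread) that the Ellipsis slice matches a 180-degree rotation of the trailing axes for both 2-D and 3-D arrays:

import numpy as np

def rotate_180(arr):
    # Rotate the last two axes by 180 degrees; works for 2-D and higher.
    return arr[..., ::-1, ::-1]

a2 = np.arange(6).reshape(2, 3)
assert np.array_equal(rotate_180(a2), np.rot90(a2, 2))

a3 = np.arange(24).reshape(4, 2, 3)
assert np.array_equal(rotate_180(a3), np.rot90(a3, 2, axes=(-2, -1)))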

Do you have any NRS2 data in your test suite?  I didn't see any in the directories you mentioned above.

@stscijgbot (Collaborator Author)

Comment by Philip Hodge: A month ago you were still seeing significant differences for MOS.  Have you changed anything since then that could improve the results?  I ran a test using a different science file from the one you used, and I'm getting pretty good agreement between your code and the pipeline flat_field step, usually better than 0.1%.  The file I used has two slits, and the MSA configuration file has multiple open shutters for each slit.  I did find one small problem, but fixing it didn't change the results in this case.  Starting around line 251 in flattest_mos.py, you have a section to find the quadrant, column number, and row number for the shutter with background = "N".  The search for the matching row gave isrc as the index.  However, you used im instead of isrc as the index in these three lines:

quad = slitlet_info.field("SHUTTER_QUADRANT")[im]
row = slitlet_info.field("SHUTTER_ROW")[im]
col = slitlet_info.field("SHUTTER_COLUMN")[im]

There's a simpler solution, though.  Instead of reading the MSA configuration file, the info is available as attributes of the slit, i.e. you can use:

quad = slit.quadrant
row = slit.xcen
col = slit.ycen
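A minimal sketch of that simpler approach (the file name is a placeholder; quadrant, xcen, and ycen are the slit attributes named above):

from jwst import datamodels

# Take the shutter info from each slit's attributes instead of re-reading
# the MSA configuration file.
with datamodels.MultiSlitModel("gain_scale_assign_wcs_extract_2d.fits") as model:
    for slit in model.slits:
        quad = slit.quadrant  # MSA quadrant
        row = slit.xcen       # shutter row
        col = slit.ycen       # shutter column
        print(slit.name, quad, row, col)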

@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: Hi Phil,

Thank you for checking our code so carefully. I made the changes you suggested, and I am now running a test with a different MOS file. The medians now vary between about 1e-5 and 1e-7, but the standard deviations vary between about 1e-2 and 1e-4, and the means between about 1e-3 and 1e-6. I placed some of the plots and the files at:

/grp/jwst/wit4/nirspec/penaguerrero/flat_field_problems/MOS/results_with_codefix

We made changes in the Fixed Slit code, and it now saves the products correctly (it was a simple bug fix). I placed the new products and plots for this at:

/grp/jwst/wit4/nirspec/penaguerrero/flat_field_problems/FS/results_with_codefix

@philhodge (Contributor)

There's one more thing that I should mention. This only affects NRS2 data, so if you haven't run a test with data from that detector, you might not have noticed it.
For the detector flat, you rotate the DQ image dfimdq, but you don't rotate the SCI image dfim. The latter should also be rotated using, e.g.:

dfim = dfim[..., ::-1, ::-1]

For the s-flat, you rotate the SCI image but not the DQ. The DQ image should also be rotated, e.g.:

sfimdq = sfimdq[..., ::-1, ::-1]
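One way to avoid this class of mismatch (my sketch, not code from the thread) is to rotate the SCI and DQ planes in a single call so they cannot drift apart:

import numpy as np

def to_dms(*arrays):
    # Rotate the last two axes of each array by 180 degrees so that the
    # SCI and DQ planes of an NRS2 reference file stay aligned.
    return tuple(a[..., ::-1, ::-1] for a in arrays)

# Illustrative stand-ins for the reference-file SCI and DQ data:
dfim = np.ones((2048, 2048), dtype=np.float32)
dfimdq = np.zeros((2048, 2048), dtype=np.uint32)
dfim, dfimdq = to_dms(dfim, dfimdq)  # D-flat SCI and DQ together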

@philhodge (Contributor)

I'm checking flattest_ifu.py, and I got an error at this line:

ffv = fits.getdata(ffile, "SCI")#1)

because the extension name is "IFU", not "SCI". Otherwise, it looks good!

One thing I did just for my convenience was to write the calculated flat field to a full-frame array. It's easier for me to compare with the output of the pipeline flat_field step, and it's easy to check that the alignment is correct. I created a 2048 x 2048 array and populated it as follows:

calc_flat[pind[0], pind[1]] = flatcor[j]

just after the line that assigns the value to flatcor[j]. I'm not suggesting that you should do that, but if you want to visually compare the two, e.g. by blinking in ds9, it makes it a lot easier.
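A sketch of that convenience (the file name, fill value, and dtype are my choices), writing the on-the-fly flat into a full-frame image for blinking against the pipeline's _intflat product:

import numpy as np
from astropy.io import fits

# Full-frame canvas; pixels never assigned stay at NaN, so gaps are obvious in ds9.
calc_flat = np.full((2048, 2048), np.nan, dtype=np.float32)

# Inside the per-pixel loop, right after flatcor[j] is assigned:
# calc_flat[pind[0], pind[1]] = flatcor[j]

fits.writeto("calc_flat_fullframe.fits", calc_flat, overwrite=True)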

@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: Thanks Phil! 

I placed the new MOS files in central store:

/grp/jwst/wit4/nirspec/penaguerrero/flat_field_problems/MOS/results_with_codefix

Thanks

@stscijgbot (Collaborator Author)

Comment by Philip Hodge: I'm looking into this. There are some significant differences, i.e. of several percent. In one case (the fifth image set) almost all of the output slit was flagged with NO_FLAT_FIELD or UNRELIABLE_FLAT (or both), and the differences (except near the left edge) were only in the small regions that had DQ = 0. This could take some time to track down.

@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: Hi Phil, sorry for the extra email. This was an error on my part since I thought I was in another ticket. 

Thanks.

@stscijgbot (Collaborator Author)

Comment by Philip Hodge: The flat_field step in the post-Build 7.2 version of the JWST pipeline code includes changes that I think will result in good agreement with Maria's code for testing this step for NIRSpec data. In particular, the flat field will be set to 1 for any pixel that is flagged with DQ value UNDEFINED_FLAT or NO_FLAT_FIELD. The latter flag is set for any pixel for which the wavelength is out of range for any of the three components of the flat field, for either the image or the fast variation table.

The GitHub issue for these changes is #2677, and the PR is #2775.
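A minimal sketch of that behavior (assuming the jwst dqflags interface; flat_sci and flat_dq are placeholder names):

import numpy as np
from jwst.datamodels import dqflags

# Placeholder arrays standing in for the on-the-fly flat and its DQ plane:
flat_sci = np.random.uniform(0.9, 1.1, (2048, 2048)).astype(np.float32)
flat_dq = np.zeros((2048, 2048), dtype=np.uint32)

# Set the flat to 1 wherever NO_FLAT_FIELD is set, so those pixels pass
# through the flat-field division unchanged.
bad = (flat_dq & dqflags.pixel["NO_FLAT_FIELD"]) != 0
flat_sci[bad] = 1.0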

@stscijgbot (Collaborator Author)

Comment by James Muzerolle: Excellent results for the IFU exposure, G140H, NRS1: comparison mean ~ 10^-7, median ~ 10^-8, stdev ~ 10^-5 (see the attachment "final_output_caldet1_NRS1_flat_field_NRS1_00_IFUflatcomp_histogram.pdf" for a histogram of the comparison differences).

@stscijgbot (Collaborator Author)

Comment by James Muzerolle: The validation test is failing for IFU data on the NRS2 detector - residuals are typically of order 10^-2.  This may be related to a possible pixel indexing error we found in the assign_wcs processing for NRS2 (see https://jira.stsci.edu/browse/JP-478).

@stscijgbot (Collaborator Author)

Comment by Philip Hodge: Could I see the on-the-fly flat field computed by both the flat_field step and the validation code, for NRS2 IFU data?  What code was used for the validation test?  Could I see that code as well?  Thanks.

@stscijgbot (Collaborator Author)

Comment by Philip Hodge: I should explain why I'm asking about the code. In Maria's flattest_*.py files, for NRS2 data the conversion of the IDT's reference files from detector orientation to DMS orientation is incomplete. For the D-flat, the SCI extension needs to be rotated by 180 degrees. For the S-flat, the DQ extension needs to be rotated by 180 degrees. If flattest_ifu.py is being used for the test, and if these rotations have not been added to the code, large differences for NRS2 data are to be expected.

@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: Hi Phil, yes, the scripts used were my flattest_*.py files. I am correcting this by applying each of these lines twice:

dfim = np.rot90(dfim)

sfimdq = np.rot90(sfimdq)

I will rerun the test with these changes and see if the problem is fixed.
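One caution worth noting here (my addition): np.rot90 rotates the first two axes by default, which is not what you want for a (planes, ny, nx) reference-file cube; a 180-degree rotation of the trailing axes can be done in one call instead:

import numpy as np

a = np.arange(24).reshape(2, 3, 4)

# Two 90-degree turns in the last two axes, equivalent to a[..., ::-1, ::-1]:
r1 = np.rot90(a, 2, axes=(-2, -1))
r2 = a[..., ::-1, ::-1]
assert np.array_equal(r1, r2)

# np.rot90(a) with its default axes=(0, 1) would instead rotate the plane
# index against the image rows, changing the cube's shape for non-square axes.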

@stscijgbot (Collaborator Author)

Comment by James Muzerolle: For MOS data, we are finding large discontinuities between the central and edge pixels of the 2-D traces, introduced by the flat fielding (see the attached screenshot). I think this is because reference-file pixels with the flag UNRELIABLE_FLAT are not being included in the correction. Because of the way the S-flat is created in this case, many pixels (those affected by the bar shadow) have correction factors that were approximated by smoothing across the directly measured, unshadowed pixels. Even though those correction values may be more uncertain, they should be included; otherwise a significant fraction of pixels in MOS 2-D spectra have no correction applied, and these large discontinuities appear (there are also cases where no pixels in the trace are corrected).

@stscicrawford modified the milestones: Build 7.3, Build 7.4 (Sep 20, 2019)
@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: Though the original issue was resolved, there are still problems that need to be addressed on this ticket. I closed it by mistake and am now reopening it.

@stscijgbot (Collaborator Author)

Comment by Alicia Canipe: Based on Maria's comment, I'm changing the resolution for this ticket.

@stscicrawford modified the milestones: Build 7.4, Build 7.5 (Dec 4, 2019)
@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: I re-tested the flat field with NIRSpec ALLSLITS data, and the results are still strange. The code we used for testing is located at: https://github.com/spacetelescope/nirspec_pipe_testing_tool/blob/master/calwebb_spec2_pytests/auxiliary_code/flattest_fs.py

I am attaching plots of our results, as well as the FITS files used.

@stscijgbot (Collaborator Author)

Comment by Nadia Dencheva: [~pena] Is this still an issue or was everything fixed in JP-1071, JP-1663 and JP-1691?

@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: [~dencheva] No, this is no longer an issue; it is indeed fixed in version 1.0.0.

It turns out that for some reason my pipeline version was not updating, so I was still testing with the old version. Sorry about this. Closing this ticket.

@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: [~dencheva] I am continuing the test for the MOS flat field and I am running into a discrepancy. The file that I tested with before (the same one that [~morrison] used) now does not run in spec2, so I cannot verify that the result I obtained before (which was good, a median of the order of 10^-7) still holds. This data is for configuration G140M_LINE1; you can find the data that does not run in spec2 at: /grp/jwst/wit4/nirspec/penaguerrero/mos_data_crashing_spec2

Additionally, I have run other MOS data and I am finding very large discrepancies with the NIRSpec testing script (median of the order of 10^-4). I am not sure if something was changed in the pipeline regarding the flat field.

The other MOS data can be found here:

/grp/jwst/wit4/nirspec/penaguerrero/mos_g235h_f170lp

and

/grp/jwst/wit4/nirspec/penaguerrero/mos_g395m_290lp

@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: Update: [~dencheva] [~morrison] [~muzerol]

I had forgotten to run the wavecorr step before the flat_field step. Now that I have added this step and made a couple of changes to the MOS data header keywords, both of the MOS files run through spec2. Nonetheless, there still seem to be issues. I also tested other data. All data sets were passing in build 7.6.

I believe the issue is with the s_flat reference file for MOS, and perhaps the reference file is also an issue for the high-resolution IFU data. The issue with the BOTS data seems to be similar to what was happening during build 7.5.

Data configuration       Detector NRS1         Detector NRS2
BOTS_G235H_F170LP        Failed                Passed
BOTS_PRISM_CLEAR         Failed                N/A
FS_G395H_F290LP          Passed                Passed
ALLSLITS_G140H_F100LP    Passed                Passed
MOS_G140M_F100LP         Passed                Passed
MOS_G395M_F290LP         Failed                N/A (no open slits in detector NRS2)
IFU_G395H_F290LP         Failed (all slits)    Failed (only slit 29)
IFU_G140H_F100LP         Failed (all slits)    Failed (only slit 29)
IFU_G140M_F100LP         Passed                N/A

@stscijgbot (Collaborator Author)

Comment by Howard Bushouse: Working through the list of items reported recently, one by one:

The crash reported above for the dataset contained in the directory /grp/jwst/wit4/nirspec/penaguerrero/mos_data_crashing_spec2/ seems to be the same problem with the dither_point_index values in the MSA metadata file discussed separately in JP-1953. If the dither_point_index values in the MSA file are set to 1 (as they should be), the pipeline runs as expected.

@stscijgbot (Collaborator Author)

Comment by Howard Bushouse: In the table listed in the comment from 15/Apr/21 12:08 PM, does "Failed" mean that the processing actually crashed, or that the comparison against the NIRSpec team pipeline failed (i.e. was above the desired threshold)?

@stscijgbot (Collaborator Author)

Comment by Howard Bushouse: Also, where can we find the data files? Under the /grp/jwst/wit4/nirspec/penaguerrero/ directory I see a few subdirectories that correspond to items listed above (e.g. ALLSLITS_G140H_F100LP and IFU_G140M_F100LP), but I don't see many of the others. In some of the mos* subdirectories I only see products from a few steps, such as extract_2d, wavecorr, and flat_field. There may have been some changes upstream in the calwebb_spec2 pipeline that the flat-field step (and others) depend on, hence it's probably safer to always start over from a rate product and process up through the flat-field step.

@stscijgbot (Collaborator Author)

Comment by Howard Bushouse: Finally, are any of these test datasets based on LAMP exposures? B7.7 included a lot of changes to calspec2 processing to handle lamp exposures in a special way; different from regular science exposures. Is it possible that the B7.7 pipeline is picking up on the fact that these are lamp data and hence applying different processing than before?

@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: [~bushouse], [~muzerol] Please note that all of the Pass/Fail entries in the above table are the result of the NIRSpec pipeline testing tool (NPTT) flat field comparison test.

Yes, all of these are CV3 data sets, so they are all in a different internal lamp state. We changed the FILTER keyword from OPAQUE to the corresponding science filter in order to be able to use the data to test the pipeline.

What changed in the processing of lamp data?

I am working on making all these data available to you in central store. I will let you know when it's all there.

@stscijgbot (Collaborator Author)

Comment by James Muzerolle: [~pena] for the IFU and MOS cases that fail, what is the value of EXP_TYPE?  If they are NRS_IFU or NRS_MSASPEC, and FILTER is not OPAQUE, then the data should have been processed as if they were on-sky science exposures.  If EXP_TYPE is NRS_LAMP, then the lamp processing rules apply, in which case different sets of the 3 flat components are applied, depending on the type of lamp (and we should change that to the relevant mode to avoid this).

The BOTS data are observations of an external point-like source taken in CV3 using the BOTS template, so lamp processing is not an issue in that case.

@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: [~muzerol], you are correct. None of these data have EXP_TYPE=NRS_LAMP; they are all set to their corresponding mode, so the lamp processing should not have affected the runs.

@stscijgbot (Collaborator Author)

Comment by Misty Cracraft: [~pena] [~muzerol]  So what is the current status of this ticket? Are all the keywords set correctly for the pipeline tests, and if so, are they passing or still failing? Do we think any remaining issues are pipeline or data related?

@stscijgbot (Collaborator Author)

Comment by Maria Pena-Guerrero: Closing this ticket since it was split into tickets JP-2224, JP-2225, and JP-2226 for FS and BOTS, MOS, and IFU, respectively.
