Both retcor.peakgroups and retcor.obiwarp perform 1) the correction of the retention times per spectrum and 2) the adjustment of the retention times reported for the identified features (peaks) in the @peaks slot.
Step 2) is performed in both methods with the code below, with n being the number of samples, rtcor the raw retention times that are being corrected, and rtdevsmo the difference between the raw and the corrected retention times, i.e. the difference by which the raw retention times have to be corrected.
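The code block itself did not survive the copy here; as a rough sketch (not the verbatim xcms code, and using a hypothetical helper adjust_peak_rts), the per-sample peak adjustment could look roughly like this:

```r
## Rough sketch only, not the verbatim xcms code. Assumes `peaks` is the
## @peaks matrix with columns "rt", "rtmin", "rtmax" and "sample", and that
## rtcor[[i]] / rtdevsmo[[i]] hold the per-spectrum raw retention times and
## the deviations for sample i.
adjust_peak_rts <- function(peaks, rtcor, rtdevsmo, n) {
    for (i in seq_len(n)) {
        idx <- which(peaks[, "sample"] == i)
        for (col in c("rt", "rtmin", "rtmax")) {
            ## interpolate the deviation at each peak's raw retention time
            ## and subtract it to obtain the corrected retention time
            dev <- approx(x = rtcor[[i]], y = rtdevsmo[[i]],
                          xout = peaks[idx, col], rule = 2)$y
            peaks[idx, col] <- peaks[idx, col] - dev
        }
    }
    peaks
}
```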
Now, retcor.obiwarp does something unexpected and incomprehensible to me: the .Call to obiwarp returns the adjusted retention times, which are stored in object@rt$corrected, while rtcor still contains the raw retention times at this stage. So rtdevsmo is again expected to be the difference between the raw and the adjusted retention times, but why should we round this difference here?
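Concretely, the step in question boils down to something like the following (a sketch using the variable names from the issue, with si indexing the sample; the digits argument shown is an assumption for illustration, not copied from the source):

```r
## Sketch of the questioned step in retcor.obiwarp; the rounding precision
## (digits = 2) is an assumption for illustration, not taken from the issue.
rtdevsmo[[si]] <- round(rtcor[[si]] - object@rt$corrected[[si]], digits = 2)
```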
This is rather strange to me, because that way we are correcting the peak retention times afterwards with something that does not exactly represent (mathematically speaking) the difference.
Now, I can live with or without the rounding because it will not cause a large difference in the results - but I would like to have it either in both or in none of the methods to be consistent.
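A small numeric example of why the effect is minor (values purely illustrative):

```r
## Purely illustrative numbers: the rounded deviation differs from the exact
## one by at most half of the rounding unit, so corrected peak retention
## times shift only marginally.
raw <- 123.4567          # raw peak retention time (seconds)
adj <- 122.9321          # adjusted retention time for the same spectrum
dev <- raw - adj         # exact deviation: 0.5246
raw - dev                # exact correction:   122.9321
raw - round(dev, 2)      # rounded correction: 122.9367 (off by ~0.0046 s)
```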
For simplicity I would opt to drop the round in retcor.obiwarp. @sneumann, what do you think?
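For reference, the proposed change would simply drop the rounding, so that peaks and spectra are corrected with exactly the same deviations (sketch with the issue's variable names):

```r
## Proposed form: use the exact deviation, no rounding.
rtdevsmo[[si]] <- rtcor[[si]] - object@rt$corrected[[si]]
```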
The really puzzling and disturbing thing to me is that the old retcor.obiwarp method uses the (unrounded) adjusted rt as the corrected rts for the spectra, but adjusts the rt of the identified peaks using rounded adjusted rt values. I would rather adjust the rt of the identified peaks using the unrounded adjusted rt as well. @sneumann, any comments or objections?