@olliecheng First instance of the filter selecting the wrong rank. See figures below: the first uses the default setting; the second shifts the upper limit to 12000.
At the moment, the default lower bound is the 0.95 quantile (code). Admittedly, I just eyeballed this value because it looked 'about right' on my sample datasets, but it seems to be a bit too strict at times.
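A minimal sketch of the quantile-based cutoff described above, assuming `counts` is an array of per-barcode read counts (the function name and `divisor` parameter are hypothetical; the divisor anticipates the BLAZE-style relaxation discussed below):

```python
import numpy as np

def quantile_threshold(counts, q=0.95, divisor=1):
    """Pick a count threshold at the q-th quantile of per-barcode
    read counts, optionally relaxed by a divisor (e.g. 20, or 200
    for a hypothetical 'high sensitivity' mode)."""
    counts = np.asarray(counts, dtype=float)
    return np.quantile(counts, q) / divisor
```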
A couple of alternative ideas:

- BLAZE uses the value of the 0.95 quantile divided by 20.
- BLAZE also includes a 'high sensitivity' setting, which uses the 0.95 quantile divided by 200. It might be good to include such a flag as an easy fix for those stubborn datasets.
- A possibly more reliable approach could look for this region, which tends to correspond to bad reads that should be discarded. These regions show up as hundreds/thousands of reads which all have the same, low count (usually something like <1).
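The last idea, detecting the flat region of identical low counts in the descending rank plot, could be sketched as follows (the function name and `min_run` parameter are assumptions, not anything from the codebase):

```python
import numpy as np

def plateau_start(sorted_counts, min_run=100):
    """Given counts sorted in descending rank order, return the first
    rank at which at least `min_run` consecutive barcodes share the
    exact same count -- the plateau pattern associated here with bad
    reads -- or None if no such run exists."""
    counts = np.asarray(sorted_counts)
    run_start, run_len = 0, 1
    for i in range(1, len(counts)):
        if counts[i] == counts[i - 1]:
            run_len += 1
            if run_len >= min_run:
                return run_start
        else:
            # a new value begins a new run at index i
            run_start, run_len = i, 1
    return None
```

Everything below the returned rank could then be excluded before fitting the knee, which would make the cutoff insensitive to where the upper limit is placed.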