Empty reflection blocks break refinement #1417
Comments
Hi @kmdalton, refinement is supposed to do something sensible when there are too few reflections. In fact, I can see where the failure occurs. I think your change is probably reasonable, but I would like to experiment with a test case to be sure of what happens. Do you have one to hand that you can share?
I don't have a convenient test case on hand right now, but I can certainly produce one for you. I'll be happy to upload something early next week. The example I have in mind is the polychromatic beam situation; I think that is the easier one to work up into a useful test case. You're right, this is no doubt a corner case that affects a very limited number of users. I have a habit of finding those lately...
Explicitly skip experiments with no reflections when setting up scan-varying refinement. Fixes #1417
- ``dials.integrate``: fix integrator=3d_threaded crash if njobs > 1 (#1410)
- ``dials.integrate``: check for and show error message if shoebox data is missing (#1421)
- ``dials.refine``: avoid crash for experiments with zero reflections if the ``auto_reduction.action=remove`` option was active (#1417)
- ``dials.merge``: improve help message by adding usage examples (#1413)
- ``dials.refine``: more helpful error message when too few reflections (#1431)
This issue may seem academic, but it does have consequences for users (okay maybe just this user).
The problem is that in certain complicated experiment models, refinement fails with the following traceback.
This is triggered by line 310 of dials/algorithms/refinement/parameterisation/scan_varying_prediction_parameters.py (commit d9e9629).
In my case, it resulted from having an experiment that had no strong spots. You might decide that this means I have posited an unreasonable or invalid experiment model. However, I'd argue that valid experiment models trigger this ``RuntimeError`` more often than you'd expect. In fact, I've now hit this same error while working on two separate projects.

The issue can arise when you have multiple "Experiments" which share the same global parameters. In my case, I had a single crystal, beam, gonio, scan, and detector model with multiple image sets, leading to a separate experiment for each image set. I would still expect dials to refine the model if one of the weaker sets has all its reflections rejected as outliers.
I've also run into this issue while working on polychromatic data, wherein we have many beam objects with different wavelengths. In this model there are shared crystal, goniometer, and detector objects. Importantly, I fixed the beam directions so that all the refinement parameters were truly global. By chance, one wavelength can lose all its spots during outlier rejection and trip this ``RuntimeError``.

In my local copy of dials/algorithms/refinement/parameterisation/scan_varying_prediction_parameters.py, I've bypassed the error by adding the following control flow. I am not submitting this as a pull request, because this is pretty deep inside dials.refine and outside my comfort zone. I don't know if my workaround is an acceptable solution. What do you all think? I hope I explained that clearly. Let me know if you are mystified by my comment, and I'll try to be more explicit.
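The code snippet for the workaround did not survive in this copy of the issue. The idea described above (skip any experiment whose reflections were all rejected, instead of raising) can be sketched in plain Python as follows. This is an illustration only, not the actual DIALS patch: the function name and the use of a list of per-reflection experiment indices standing in for the ``id`` column of a DIALS reflection table are assumptions for the example.

```python
def experiments_with_reflections(n_experiments, reflection_ids):
    """Yield indices of experiments that still have at least one reflection.

    ``reflection_ids`` stands in for the ``id`` column of a reflection
    table: one experiment index per reflection. (Hypothetical helper for
    illustration; not part of DIALS.)
    """
    for iexp in range(n_experiments):
        count = sum(1 for rid in reflection_ids if rid == iexp)
        if count == 0:
            # All of this experiment's reflections were rejected
            # (e.g. as outliers); skip it rather than raising.
            continue
        yield iexp


# Experiment 1 has lost all its reflections to outlier rejection,
# so only experiments 0 and 2 are parameterised.
ids = [0, 0, 2, 2, 2]
print(list(experiments_with_reflections(3, ids)))  # [0, 2]
```

With a guard like this, the scan-varying setup loop simply passes over the empty experiment instead of hitting the ``RuntimeError`` at the line referenced above.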