
Empty reflection blocks break refinement #1417

Closed · kmdalton opened this issue Sep 19, 2020 · 2 comments · Fixed by #1419

@kmdalton (Contributor) commented:
This issue may seem academic, but it does have consequences for users (okay maybe just this user).

The problem is that in certain complicated experiment models, refinement fails with the following traceback.

Traceback (most recent call last):
  File "/home/kmdalton/opt/dials-v3-0-4/build/../modules/dials/command_line/refine.py", line 481, in <module>
    run()
  File "/home/kmdalton/opt/dials-v3-0-4/build/../modules/dials/command_line/refine.py", line 396, in run
    experiments, reflections, params
  File "/home/kmdalton/opt/dials-v3-0-4/build/../modules/dials/command_line/refine.py", line 284, in run_dials_refine
    refiner, reflections, history = run_macrocycle(params, reflections, experiments)
  File "/home/kmdalton/opt/dials-v3-0-4/build/../modules/dials/command_line/refine.py", line 187, in run_macrocycle
    params, reflections, experiments
  File "/home/kmdalton/opt/dials-v3-0-4/modules/dials/algorithms/refinement/refiner.py", line 276, in from_parameters_data_experiments
    return cls._build_components(params, reflections, experiments)
  File "/home/kmdalton/opt/dials-v3-0-4/modules/dials/algorithms/refinement/refiner.py", line 401, in _build_components
    autoreduce()
  File "/home/kmdalton/opt/dials-v3-0-4/modules/dials/algorithms/refinement/parameterisation/autoreduce.py", line 260, in __call__
    self.check_and_remove()
  File "/home/kmdalton/opt/dials-v3-0-4/modules/dials/algorithms/refinement/parameterisation/autoreduce.py", line 202, in check_and_remove
    self.pred_param.compose(obs)
  File "/home/kmdalton/opt/dials-v3-0-4/modules/dials/algorithms/refinement/parameterisation/scan_varying_prediction_parameters.py", line 310, in compose
    for block in range(flex.min(blocks), flex.max(blocks) + 1):
RuntimeError: Please report this error to dials-support@lists.sourceforge.net: min() argument is an empty array

The error is triggered by this line:

for block in range(flex.min(blocks), flex.max(blocks) + 1):
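
For context, flex.min and flex.max raise exactly this RuntimeError whenever they are handed an empty array, so the failure is easy to reproduce in isolation. A minimal sketch (assuming only the standard DIALS flex import):

from dials.array_family import flex

# An empty block array, as produced when an experiment retains no reflections
blocks = flex.size_t()

# This raises RuntimeError: min() argument is an empty array,
# matching the traceback above
for block in range(flex.min(blocks), flex.max(blocks) + 1):
    pass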

In my case, it resulted from having an experiment that had no strong spots. You might decide that this means I have posited an unreasonable or invalid experiment model. However, I'd argue that valid experiment models trigger this RuntimeError more often than you'd expect. In fact, I've now hit this same error while working on two separate projects.

The issue can arise when you have multiple "Experiments" that share the same global parameters. In my case, I had a single crystal, beam, goniometer, scan, and detector model with multiple image sets, leading to a separate experiment for each image set. I would still expect DIALS to refine the model even if one of the weaker sets has all its reflections rejected as outliers.

I've also run into this issue while working on polychromatic data, where we have many beam objects with different wavelengths but shared crystal, goniometer, and detector objects. Importantly, I fixed the beam directions so that all the refinement parameters were truly global. By chance, one wavelength can lose all of its spots during outlier rejection and trip this RuntimeError.
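
To make the failure mode concrete, here is a minimal sketch (with hypothetical table contents, not data from either project) of how selecting reflections per experiment after outlier rejection can leave one experiment with nothing to compose:

from dials.array_family import flex

# Hypothetical post-outlier-rejection table: experiment 1 has lost all
# of its reflections, while experiments 0 and 2 still have some
reflections = flex.reflection_table()
reflections["id"] = flex.int([0, 0, 2, 2, 2])

for iexp in range(3):
    subset = reflections.select(reflections["id"] == iexp)
    # For iexp == 1 the subset is empty, so any per-experiment block
    # array derived from it is empty too, which is exactly the state
    # that flex.min/flex.max cannot handle in compose()
    print(iexp, len(subset))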

In my local copy of dials/algorithms/refinement/parameterisation/scan_varying_prediction_parameters.py, I've bypassed the error by adding the following control flow:

Original (around line 310):

# reset current frame cache for scan-varying parameterisations
self._current_frame = {}

# get state and derivatives for each block
for block in range(flex.min(blocks), flex.max(blocks) + 1):
    ...

Modified:

# reset current frame cache for scan-varying parameterisations
self._current_frame = {}

if len(blocks) > 0:
    block_ids = range(flex.min(blocks), flex.max(blocks) + 1)
else:
    block_ids = []

# get state and derivatives for each block
for block in block_ids:
    ...
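
For what it's worth, an equivalent guard could also live one level up, skipping any experiment whose reflections have all been rejected before compose() is ever reached. A rough sketch of that idea (illustrative names only, not actual DIALS code):

# Illustrative sketch, not an actual patch: skip experiments whose
# reflection selection is empty before composing scan-varying
# parameterisations for them
for iexp, experiment in enumerate(experiments):
    sel = reflections["id"] == iexp
    if sel.count(True) == 0:
        continue  # no reflections survived outlier rejection
    # ... proceed with per-block state and derivative composition ...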

I am not submitting this as a pull request because this is pretty deep inside dials.refine and outside my comfort zone. I don't know whether my workaround is an acceptable solution. What do you all think? I hope I explained that clearly. Let me know if you are mystified by my comment, and I'll try to be more explicit.

@dagewa (Member) commented Sep 19, 2020:

Hi @kmdalton, refinement is supposed to do something sensible when there are too few reflections. In fact, I can see that the failure occurs in the check_and_remove function in autoreduce.py, which is exactly the place where the sensible things are supposed to happen. This is apparently a corner case I didn't think of.

I think your change is probably reasonable, but I would like to experiment with a test case to be sure of what happens. Do you have one to hand that you can share?

@kmdalton (Contributor, Author) commented:

I don't have a convenient test case on hand right now, but I can certainly produce one for you. I'll be happy to upload something early next week. The example I have in mind is the polychromatic beam situation; I think that is the easier one to work up into a useful test case.

You're right, this is no doubt a corner case that affects a very limited number of users. I have a habit of finding those lately...

dagewa added a commit that referenced this issue Sep 22, 2020
Explicitly skip experiments with no reflections when setting up scan-varying refinement.

Fixes #1417
ndevenish pushed a commit that referenced this issue Sep 28, 2020
Explicitly skip experiments with no reflections when setting up scan-varying refinement.

Fixes #1417
ndevenish mentioned this issue Sep 28, 2020
ndevenish added a commit that referenced this issue Sep 28, 2020
- ``dials.integrate``: fix integrator=3d_threaded crash if njobs > 1 (#1410)
- ``dials.integrate``: Check for and show error message if shoebox data is missing (#1421)
- ``dials.refine``: Avoid crash for experiments with zero reflections if the `auto_reduction.action=remove` option was active (#1417)
- ``dials.merge``: improve help message by adding usage examples (#1413)
- ``dials.refine``: More helpful error message when too few reflections (#1431)