
Some broken benchmarks #180

Open
kosack opened this issue Mar 15, 2022 · 1 comment
Labels
benchmarks wrong behaviour The code works but produces clearly wrong results
Milestone

Comments

@kosack
Contributor

kosack commented Mar 15, 2022

Describe the problem

A few benchmarks are broken and need updates to fix.

To Reproduce

TRAINING/benchmarks_DL1_image_intensity_resolution

ImportError: cannot import name 'CTAMARS_radii' from 'protopipe.pipeline.utils' (/Users/kkosack/Projects/CTA/Working/protopipe/protopipe/pipeline/utils.py)

TRAINING/benchmarks_R0_R1_waveforms_pre-calibration

  • `[use_seaborn] = string_to_boolean([use_seaborn])` raises `NameError: name 'string_to_boolean' is not defined`
  • all paths are hardcoded to /Users/mperesano (they should presumably be read from the config file instead)
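For context, a minimal sketch of what this helper presumably does (the behaviour is an assumption; the real `protopipe.pipeline.utils` version may differ) — it maps booleans that papermill injects as text back to real `bool` values:

```python
def string_to_boolean(values):
    """Hypothetical helper: convert 'True'/'False' strings (e.g. notebook
    parameters injected as text) to booleans, leaving real bools untouched."""
    mapping = {"true": True, "false": False}
    return [v if isinstance(v, bool) else mapping[str(v).strip().lower()]
            for v in values]
```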

TRAINING/benchmarks_DL2_to_energy-estimation

Exception encountered at "In [12]":
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/var/folders/dv/rg9cnf0d3qg6n5khk1d7yjyxqg1x5k/T/ipykernel_25276/2934236983.py in <module>
    16                'label': 'E [{:.2f},{:.2f}] TeV'.format(true_energy_bin_edges[jdx], true_energy_bin_edges[jdx+1]),
    17                'ms': 6}
---> 18         plot_profile(ax, data=data_sel,
    19                      xcol='impact_dist', ycol='hillas_intensity',
    20                      n_xbin=xbins, x_range=xrange, logx=True, **opt)

ValueError: 'i' is not a valid value for color
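One plausible cause (an assumption, not verified against the notebook): `plot_profile` forwards `**opt` to matplotlib, and a stray single-character string such as `'i'` ends up in the color keyword. Matplotlib only accepts the eight single-letter codes `bgrcmykw`, so a small guard like this (names hypothetical) would catch the bad value before it reaches the plotting call:

```python
# The only single-letter color codes matplotlib accepts.
VALID_SINGLE_LETTER_COLORS = set("bgrcmykw")

def sanitize_color(color, fallback="k"):
    """Return `color` unchanged when valid; replace an invalid one-letter
    code (like 'i') with `fallback` instead of crashing mid-plot."""
    if (isinstance(color, str) and len(color) == 1
            and color not in VALID_SINGLE_LETTER_COLORS):
        return fallback
    return color
```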

TRAINING/benchmarks_DL2_to_classification

/var/folders/dv/rg9cnf0d3qg6n5khk1d7yjyxqg1x5k/T/ipykernel_25420/1308642254.py in <module>
      1 # Read configuration file for particle classification model
      2 model_configuration_path = Path(analyses_directory) / analysis_name / "configs" / model_configuration_filename
----> 3 model_configuration = load_config(model_configuration_path)
      4
      5 # Read feature list from model configutation file

NameError: name 'load_config' is not defined

This looks like a missing import.

MODELS/benchmarks_MODELS_classification

Crashes, although the model itself seems to have been built fine.

---------------------------------------------------------------------------
Exception encountered at "In [24]":
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/var/folders/dv/rg9cnf0d3qg6n5khk1d7yjyxqg1x5k/T/ipykernel_25648/404696727.py in <module>
     29         )
     30
---> 31         PrecisionRecallDisplay.from_estimator(diagnostic[camera].model,
     32                                     selected_test_data[features].to_numpy(),
     33                                     selected_test_data[cfg["Method"]["target_name"]],

ValueError: Found array with 0 sample(s) (shape=(0, 8)) while a minimum of 1 is required.
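The crash itself looks like an empty per-camera test selection rather than a model problem: scikit-learn refuses a `(0, n_features)` array. A guard along these lines (function name hypothetical) would skip the plot for empty selections instead of crashing:

```python
import numpy as np


def plot_if_nonempty(plot_fn, model, X, y):
    """Call the plotting helper only when the selection has at least one
    row; scikit-learn raises ValueError on a (0, n_features) array."""
    X = np.asarray(X)
    if X.shape[0] == 0:
        return None  # nothing selected for this camera: skip the plot
    return plot_fn(model, X, y)
```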

Setup

protopipe dev version (0.4.0.post2.dev401+g1f392ea.d20220315)

@kosack kosack added the wrong behaviour The code works but produces clearly wrong results label Mar 15, 2022
@HealthyPear HealthyPear added this to the v0.5.0 milestone Mar 25, 2022
@HealthyPear HealthyPear added this to Needs triage in Bugs and wrong behaviours via automation Mar 25, 2022
@HealthyPear HealthyPear moved this from Needs triage to High priority in Bugs and wrong behaviours Mar 25, 2022
@HealthyPear
Member

Also #173 needs to be added to this list

@HealthyPear HealthyPear modified the milestones: v0.5.0, v0.5.1 Apr 26, 2022