OSError : [Errno 22] Invalid argument #19

Open
hanzigs opened this issue Sep 5, 2022 · 4 comments

hanzigs commented Sep 5, 2022

Hi,
Could I get some help here, please?
I am using NatureInspiredSearchCV as follows:

    grid = NatureInspiredSearchCV(model,
                                  cv=3,
                                  param_grid=model_parameters_space,
                                  verbose=0,
                                  algorithm='hba',
                                  population_size=50,
                                  max_n_gen=100,
                                  max_stagnating_gen=20,
                                  runs=5,
                                  scoring='accuracy',
                                #   n_jobs=-1,
                                  random_state=42)

If I comment out n_jobs, it works fine.
If I set n_jobs, I get the error shown further below.
It looks like n_jobs is not working, but I am not sure.
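
For reference, here is a minimal self-contained sketch of the same pattern (the classifier, data, and parameter grid below are placeholders, not my actual pipeline). Since worker processes are spawned on Windows, the fit call is kept under an if __name__ == '__main__': guard:

    # Placeholder repro: the classifier, data, and grid are illustrative only.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn_nature_inspired_algorithms.model_selection import NatureInspiredSearchCV

    param_grid = {
        'n_estimators': [50, 100, 200],
        'max_depth': [3, 5, None],
    }

    if __name__ == '__main__':  # needed on Windows, where workers are spawned
        X, y = make_classification(n_samples=500, n_features=20, random_state=42)
        grid = NatureInspiredSearchCV(RandomForestClassifier(),
                                      cv=3,
                                      param_grid=param_grid,
                                      verbose=0,
                                      algorithm='hba',
                                      population_size=50,
                                      max_n_gen=100,
                                      max_stagnating_gen=20,
                                      runs=5,
                                      scoring='accuracy',
                                      n_jobs=-1,
                                      random_state=42)
        grid.fit(X, y)
        print(grid.best_params_)

The full traceback when n_jobs is set: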

"""Exception occured: OSError : [Errno 22] Invalid argument (  File "C:\Python\Lib\site-packages\joblib\externals\loky\backend\resource_tracker.py", line 209, in _send
    nbytes = os.write(self._fd, msg)
  File "C:\Python\Lib\site-packages\joblib\externals\loky\backend\resource_tracker.py", line 182, in _check_alive
    self._send('PROBE', '', '')
  File "C:\Python\Lib\site-packages\joblib\externals\loky\backend\resource_tracker.py", line 102, in ensure_running
    if self._check_alive():
  File "C:\Python\Lib\site-packages\joblib\externals\loky\backend\spawn.py", line 86, in get_preparation_data
    _resource_tracker.ensure_running()
  File "C:\Python\Lib\site-packages\joblib\externals\loky\backend\popen_loky_win32.py", line 54, in __init__
    prep_data = spawn.get_preparation_data(
  File "C:\Python\Lib\site-packages\joblib\externals\loky\backend\process.py", line 39, in _Popen
    return Popen(process_obj)
  File "C:\Python\Lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Python\Lib\site-packages\joblib\externals\loky\process_executor.py", line 1087, in _adjust_process_count
    p.start()
  File "C:\Python\Lib\site-packages\joblib\externals\loky\process_executor.py", line 1096, in _ensure_executor_running
    self._adjust_process_count()
  File "C:\Python\Lib\site-packages\joblib\externals\loky\process_executor.py", line 1122, in submit
    self._ensure_executor_running()
  File "C:\Python\Lib\site-packages\joblib\externals\loky\reusable_executor.py", line 177, in submit
    return super(_ReusablePoolExecutor, self).submit(
  File "C:\Python\Lib\site-packages\joblib\_parallel_backends.py", line 531, in apply_async
    future = self._workers.submit(SafeFunction(func))
  File "C:\Python\Lib\site-packages\joblib\parallel.py", line 777, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)
  File "C:\Python\Lib\site-packages\joblib\parallel.py", line 859, in dispatch_one_batch
    self._dispatch(tasks)
  File "C:\Python\Lib\site-packages\joblib\parallel.py", line 1041, in __call__
    if self.dispatch_one_batch(iterator):
  File "C:\Python\Lib\site-packages\sklearn\model_selection\_search.py", line 795, in evaluate_candidates
    out = parallel(delayed(_fit_and_score)(clone(base_estimator),
  File "C:\Python\Lib\site-packages\sklearn_nature_inspired_algorithms\model_selection\_parameter_search.py", line 38, in _evaluate
    cv_results = self.evaluate_candidates([params])
  File "C:\Python\Lib\site-packages\niapy\problems\problem.py", line 57, in evaluate
    return self._evaluate(x)
  File "C:\Python\Lib\site-packages\niapy\task.py", line 144, in eval
    x_f = self.problem.evaluate(x) * self.optimization_type.value
  File "C:\Python\Lib\site-packages\sklearn_nature_inspired_algorithms\model_selection\_stagnation_stopping_task.py", line 40, in eval
    x_f = super().eval(A)
  File "C:\Python\Lib\site-packages\numpy\lib\shape_base.py", line 379, in apply_along_axis
    res = asanyarray(func1d(inarr_view[ind0], *args, **kwargs))
  File "<__array_function__ internals>", line 5, in apply_along_axis
  File "C:\Python\Lib\site-packages\niapy\algorithms\algorithm.py", line 38, in default_numpy_init
    fpop = np.apply_along_axis(task.eval, 1, pop)
  File "C:\Python\Lib\site-packages\niapy\algorithms\algorithm.py", line 258, in init_population
    pop, fpop = self.initialization_function(task=task, population_size=self.population_size, rng=self.rng,
  File "C:\Python\Lib\site-packages\niapy\algorithms\basic\ba.py", line 135, in init_population
    population, fitness, d = super().init_population(task)
  File "C:\Python\Lib\site-packages\niapy\algorithms\algorithm.py", line 308, in iteration_generator
    pop, fpop, params = self.init_population(task)
  File "C:\Python\Lib\site-packages\niapy\algorithms\algorithm.py", line 333, in run_task
    xb, fxb = next(algo)
  File "C:\Python\Lib\site-packages\niapy\algorithms\algorithm.py", line 353, in run
    r = self.run_task(task)
  File "C:\Python\Lib\site-packages\niapy\algorithms\algorithm.py", line 357, in run
    raise e
  File "C:\Python\Lib\site-packages\sklearn_nature_inspired_algorithms\model_selection\nature_inspired_search_cv.py", line 43, in _run_search
    self.__algorithm.run(task=task)
  File "C:\Python\Lib\site-packages\sklearn\model_selection\_search.py", line 841, in fit
    self._run_search(evaluate_candidates)
  File "C:\Python\Lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
    return f(*args, **kwargs)
  File "C:\src\scripts\AutoMLUtils.py", line 775, in feHPTuning
    lgbmgrid_result = lgbmgrid.fit(X_train,
  File "C:\src\scripts\AutoMLUtils.py", line 865, in feHyperParameterSelection
    febest_hyperparameters = feHPTuning(X_train,
  File "C:\src\scripts\AutoMLUtils.py", line 1050, in featureEnggData
    FE_HParams = feHyperParameterSelection(X_train_stomek,
  File "C:\src\scripts\AutoMLTrainer.py", line 537, in train
    to_drop, FE_HParams, balancer_algo, balancer = featureEnggData(cleaned_df,
===
   at Python.Runtime.PyObject.Invoke(PyTuple args, PyDict kw)
   at Python.Runtime.PyObject.InvokeMethod(String name, PyTuple args, PyDict kw)
   at Python.Runtime.PyObject.TryInvokeMember(InvokeMemberBinder binder, Object[] args, Object& result)
   at CallSite.Target(Closure , CallSite , Object , String , Object )
   at System.Dynamic.UpdateDelegates.UpdateAndExecute3[T0,T1,T2,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2)
   at Intuition.AutoML.Trainer.Run(Guid tenantId, TrainingRunParameter parameter) in C:\src\src\Intuition.AutoML\Implementation\Trainer.cs:line 109)
timzatko (Owner) commented:

Hi, can you share a sample Jupyter notebook for debugging?


hanzigs commented Sep 26, 2022

Thanks for the reply.
In a pure Python environment it works.
We run Python in a Docker environment with a .NET wrapper; all the other algorithms (LightGBM, XGBoost, TensorFlow/Keras) work there with n_jobs, but this library throws the OSError. Is there any difference in the Python multiprocessing spawn configuration of this library compared to LightGBM, XGBoost, or TensorFlow?
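
One possible workaround sketch (not verified in our environment): if loky's process spawning is what breaks inside the .NET-hosted interpreter, joblib can be forced onto its thread-based backend around the fit call. This assumes the underlying estimator (e.g. LightGBM) releases the GIL, so threads still give a speedup:

    from joblib import parallel_backend

    # Run scikit-learn's parallelism on threads instead of spawned processes,
    # which avoids loky's resource tracker entirely.
    with parallel_backend('threading', n_jobs=-1):
        lgbmgrid_result = lgbmgrid.fit(X_train, y_train)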


hanzigs commented Nov 14, 2022

Hi @timzatko,
One question: in NatureInspiredSearchCV I am using cv=3 and runs=5.
Should I rely only on the optimization runs, or is cv=3 still required?
Thanks

timzatko (Owner) commented:

Hi, regarding the first issue with n_jobs, I was not able to reproduce it.

Cross-validation is not required, but it is encouraged in order to get the best model for each individual.
The number of optimization runs affects the number of populations generated.
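
Roughly, the two parameters map like this (an annotated version of the configuration from your question, reflecting the explanation above):

    grid = NatureInspiredSearchCV(model,
                                  param_grid=model_parameters_space,
                                  cv=3,    # each candidate is scored with 3-fold cross-validation
                                  runs=5,  # five independent optimization runs, i.e. more populations are generated
                                  algorithm='hba',
                                  scoring='accuracy',
                                  random_state=42)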
