
How to use AMBNAS algorithm? #63

Closed
anuragverma77 opened this issue Apr 14, 2021 · 2 comments

anuragverma77 commented Apr 14, 2021

Hi @Deathn0t

For regevo we can use deephyper nas regevo --problem nas_problems...................................................
For random we can use deephyper nas random --problem nas_problems...................................................

What is the command for using AMBNAS? Nothing is mentioned here. Is it still under development?

Is it something like deephyper nas ambs --problem nas_problems................................................... ?

When I tried deephyper nas regevo --evaluator subprocess --problem nas_problems_train_final.polynome2.problem.Problem --max-evals 1, it worked, but when I tried deephyper nas ambs --evaluator subprocess --problem nas_problems_train_final.polynome2.problem.Problem --max-evals 1, I got the following error:

2021-04-15 05:07:27.656897: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/gurobi/linux64/lib/:/opt/gurobi/linux64/lib
2021-04-15 05:07:27.656939: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
 ************************************************************************
   Maximizing the return value of function: deephyper.nas.run.alpha.run
 ************************************************************************
train_X shape: (29327, 34)
train_y shape: (29327, 2)
valid_X shape: (12569, 34)
valid_y shape: (12569, 2)
2021-04-15 05:07:30.397783: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/gurobi/linux64/lib/:/opt/gurobi/linux64/lib
2021-04-15 05:07:30.397824: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
train_X shape: (29327, 34)
train_y shape: (29327, 2)
valid_X shape: (12569, 34)
valid_y shape: (12569, 2)
Uncaught exception <class 'AssertionError'>: Number of possible operations is: 2, but index given is: 13 (index starts from 0)!
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/deephyper/evaluator/runner.py", line 36, in <module>
    retval = func(d)
  File "/usr/local/lib/python3.6/dist-packages/deephyper/nas/run/alpha.py", line 65, in run
    search_space = setup_search_space(config, input_shape, output_shape, seed=seed)
  File "/usr/local/lib/python3.6/dist-packages/deephyper/nas/run/util.py", line 134, in setup_search_space
    search_space.set_ops(arch_seq)
  File "/usr/local/lib/python3.6/dist-packages/deephyper/nas/space/keras_search_space.py", line 112, in set_ops
    node.set_op(op_i)
  File "/usr/local/lib/python3.6/dist-packages/deephyper/nas/space/node.py", line 111, in set_op
    self.get_op(index).init(self)
  File "/usr/local/lib/python3.6/dist-packages/deephyper/nas/space/node.py", line 122, in get_op
    ), f"Number of possible operations is: {len(self._ops)}, but index given is: {index} (index starts from 0)!"
AssertionError: Number of possible operations is: 2, but index given is: 13 (index starts from 0)!
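
(For context, the traceback shows the assertion fires in node.get_op when the architecture sequence proposes an operation index beyond a node's operation list. A minimal standalone sketch of that check, with hypothetical operation names:)

def get_op(ops, index):
    # Mirrors the bounds check in deephyper/nas/space/node.py from the traceback above.
    assert index < len(ops), (
        f"Number of possible operations is: {len(ops)}, "
        f"but index given is: {index} (index starts from 0)!"
    )
    return ops[index]

# With only 2 possible operations, an arch_seq entry of 13 reproduces the error:
get_op(["op_a", "op_b"], 13)  # raises AssertionError
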
Deathn0t (Member) commented

Hello @anuragverma77,

For the error, can you tell me which deephyper version you are using?

>>> import deephyper
>>> deephyper.__version__
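
(Equivalently, from the shell — this is standard pip, not deephyper-specific:)

$ pip show deephyper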

Sorry for the lack of documentation on this part; I will try to add more soon.
When you wonder what arguments the deephyper command line accepts, I encourage you to use the --help argument, for example:

$ deephyper nas --help
usage: deephyper nas [-h] {ambs,random,regevo,agebo,ambsmixed,regevomixed} ...

positional arguments:
  {ambs,random,regevo,agebo,ambsmixed,regevomixed}

optional arguments:
  -h, --help            show this help message and exit

This gives you a clear list of acceptable keywords for the search algorithms.

Also, if you want more information about a specific algorithm, use the same trick after adding the search argument, such as:

$ deephyper nas ambs --help
usage: deephyper nas ambs [-h] [--problem PROBLEM] [--backend BACKEND]
                          [--max-evals MAX_EVALS]
                          [--eval-timeout-minutes EVAL_TIMEOUT_MINUTES]
                          [--ray-address RAY_ADDRESS]
                          [--ray-password RAY_PASSWORD]
                          [--num-cpus-per-task NUM_CPUS_PER_TASK]
                          [--num-gpus-per-task NUM_GPUS_PER_TASK]
                          [--seed SEED] [--cache-key {uuid,to_dict}]
                          [--num-ranks-per-node NUM_RANKS_PER_NODE]
                          [--num-evals-per-node NUM_EVALS_PER_NODE]
                          [--num-nodes-per-eval NUM_NODES_PER_EVAL]
                          [--num-threads-per-rank NUM_THREADS_PER_RANK]
                          [--num-threads-per-node NUM_THREADS_PER_NODE]
                          [--num-workers NUM_WORKERS] [--log-dir LOG_DIR]
                          [--run RUN] [--evaluator EVALUATOR]
                          [--surrogate-model {RF,ET,GBRT,DUMMY,GP}]
                          [--liar-strategy {cl_min,cl_mean,cl_max}]
                          [--acq-func {LCB,EI,PI,gp_hedge}] [--kappa KAPPA]
                          [--xi XI] [--n-jobs N_JOBS]

optional arguments:
  -h, --help            show this help message and exit
  --problem PROBLEM     Module path to the Problem instance you want to use
                        for the search.
  --backend BACKEND     Keras backend module name
  --max-evals MAX_EVALS
                        maximum number of evaluations
  --eval-timeout-minutes EVAL_TIMEOUT_MINUTES
                        Kill evals that take longer than this
  --ray-address RAY_ADDRESS
                        This parameter is mandatory when using evaluator==ray.
                        It references the "IP:PORT" redis address for the Ray
                        driver to connect to the Ray head.
  --ray-password RAY_PASSWORD
  --num-cpus-per-task NUM_CPUS_PER_TASK
  --num-gpus-per-task NUM_GPUS_PER_TASK
  --seed SEED           Random seed used.
  --cache-key {uuid,to_dict}
                        Cache policy.
  --num-ranks-per-node NUM_RANKS_PER_NODE
                        Number of ranks per nodes for each evaluation. Only
                        valid if evaluator==balsam and balsam job-mode is
                        'mpi'.
  --num-evals-per-node NUM_EVALS_PER_NODE
                        Number of evaluations performed on each node. Only
                        valid if evaluator==balsam and balsam job-mode is
                        'serial'.
  --num-nodes-per-eval NUM_NODES_PER_EVAL
                        Number of nodes used for each evaluation. This
                        parameter is useful when using data-parallelism or
                        model-parallelism with evaluator==balsam and balsam
                        job-mode is 'mpi'.
  --num-threads-per-rank NUM_THREADS_PER_RANK
                        Number of threads per MPI rank. Only valid if
                        evaluator==balsam and balsam job-mode is 'mpi'.
  --num-threads-per-node NUM_THREADS_PER_NODE
                        Number of threads per node. Only valid if
                        evaluator==balsam and balsam job-mode is 'mpi'.
  --num-workers NUM_WORKERS
                        Number of parallel workers for the search. By default,
                        it is computed automatically depending on the chosen
                        evaluator. If fixed, then the default number of
                        workers is overridden by this value.
  --log-dir LOG_DIR     Path of the directory where to store information about
                        the run.
  --run RUN             Defaults to 'deephyper.nas.run.alpha.run'.
  --evaluator EVALUATOR
                        Defaults to 'ray'.
  --surrogate-model {RF,ET,GBRT,DUMMY,GP}
                        Type of surrogate model (learner).
  --liar-strategy {cl_min,cl_mean,cl_max}
                        Constant liar strategy
  --acq-func {LCB,EI,PI,gp_hedge}
                        Acquisition function type
  --kappa KAPPA         Controls how much of the variance in the predicted
                        values should be taken into account. If set to be very
                        high, then we are favouring exploration over
                        exploitation and vice versa. Used when the acquisition
                        is "LCB".
  --xi XI               Controls how much improvement one wants over the
                        previous best values. If set to be very high, then we
                        are favouring exploration over exploitation and vice
                        versa. Used when the acquisition is "EI", "PI".
  --n-jobs N_JOBS       number of cores to use for the 'surrogate model'
                        (learner), if n_jobs=-1 then it will use all cores
                        available.
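
(Putting it together, the ambs invocation mirrors the one you already used for regevo; the problem path below is the one from your message:)

$ deephyper nas ambs --evaluator subprocess \
    --problem nas_problems_train_final.polynome2.problem.Problem \
    --max-evals 1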


anuragverma77 commented Apr 16, 2021

Hi @Deathn0t

Thanks. Please don't be sorry. You are helping people and that's more than enough.
I am using the latest version, 0.2.4.

Deathn0t closed this as completed May 3, 2021