Should I run setup again? #10
I don't think regenerating the dataset is necessary.
Thank you for the suggestion. In the end, however, I couldn't get the code to run due to incompatibility issues between nvidia-tensorflow 1.15 and Keras. I have to use nvidia-tensorflow because RTX 30 series GPUs do not support CUDA 10, while prebuilt tensorflow 1.x does not support CUDA 11. I'm wondering if you are currently porting the code to PyTorch?
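As a reference point for the Keras incompatibility mentioned above: it is usually a version-pinning issue. Keras 2.3.1 was the last standalone Keras release that runs on the TF 1.x backend, and h5py 3.x is known to break HDF5 weight loading under TF 1.15. A minimal pin file (the filename is illustrative; verify the versions against your own environment) might look like:

```
# requirements-tf1.txt (illustrative)
keras==2.3.1   # last standalone Keras release with TF 1.x backend support
h5py<3.0       # h5py>=3 breaks HDF5 weight loading in TF 1.15
```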
Yes, that aspect is also something I am struggling with. My lab cluster is also transitioning to CUDA 11, so I am attempting to rewrite part of the code, but development is slow.
You could try to build TF 1.15 for CUDA 11.
I just learned that NVIDIA (not Google) provides a backward-compatible build of tensorflow 1.15 that works on CUDA 11.
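NVIDIA's build is distributed on PyPI via its own index (`pip install nvidia-pyindex`, then `pip install nvidia-tensorflow[horovod]`), but the wheels are Linux-only and were published only for specific Python minor versions (3.6 and 3.8, to my knowledge — worth verifying against NVIDIA's release notes). A small sanity-check sketch under those assumptions:

```python
# Sketch: sanity-check the environment before attempting
# `pip install nvidia-pyindex && pip install nvidia-tensorflow[horovod]`.
# Assumption (verify against NVIDIA's release notes): wheels are Linux-only
# and were published for Python 3.6 and 3.8.
import sys
import platform

SUPPORTED_PY = {(3, 6), (3, 8)}  # assumed wheel targets

def nvidia_tf_wheel_available(system=None, version_info=None):
    """Return True if a nvidia-tensorflow wheel plausibly exists for this env."""
    system = system or platform.system()
    version_info = version_info or sys.version_info
    return system == "Linux" and (version_info[0], version_info[1]) in SUPPORTED_PY

if __name__ == "__main__":
    if nvidia_tf_wheel_available():
        print("OK: try `pip install nvidia-pyindex` "
              "then `pip install nvidia-tensorflow[horovod]`")
    else:
        print("No prebuilt nvidia-tensorflow wheel for this platform/Python; "
              "consider NVIDIA's TF container (nvcr.io/nvidia/tensorflow) instead.")
```

If your interpreter version does not match, the NGC container route avoids the problem entirely, since CUDA and TF 1.15 ship together inside the image.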
Thank you so much for your help, Asaiさん. I too discovered the nvidia-tensorflow build. Now I am using the 4.1.3 version from the release source code. The problem I run into now is an error when executing the training:

```
AttributeError: 'NoneType' object has no attribute 'summary'
```

I am also considering using the trained weights directly if training cannot be done, but I need some guidance.
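For what it's worth, that `AttributeError` usually means a model-construction or model-loading function returned `None` instead of a Keras model (often a silent TF/Keras version mismatch), and the crash only surfaces later at `model.summary()`. A generic guard along these lines makes the failure point obvious — `build_model`/`describe` are hypothetical names, not Latplan's actual API:

```python
# Sketch: fail early with a clear message when a model factory returns None,
# instead of crashing later on `model.summary()`.
# `build_model` / `describe` are hypothetical names for illustration.

def require_model(model, origin="build_model"):
    """Raise a descriptive error if model construction silently failed."""
    if model is None:
        raise RuntimeError(
            f"{origin} returned None -- likely a TF/Keras version mismatch; "
            "check that your Keras version matches your tensorflow 1.x install.")
    return model

def describe(build_model):
    model = require_model(build_model(), origin=build_model.__name__)
    model.summary()  # safe: model is guaranteed non-None here
    return model
```

The same guard can wrap a weight-loading path: load the architecture first, check it is not `None`, then call `load_weights` on it.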
setup-dataset also downloads unrelated npz files that are not used in the IJCAI paper (but are used in other papers). Sorry for the confusion; this entire repository is a kind of "lab environment" that sets up everything I use for all of my papers. The failed downloads for photorealistic-blocksworld are not used, so no worries. Instead, all datasets needed for reproducing the IJCAI paper are rendered locally using a script included in this repo.
Since you already have the trained weights, running this script is not necessary. All results, including the CSV dump and the PDDL domain file, are included in the archive.
Here is what is happening: if you want to regenerate the reconstructions etc., note that the hyperparameter search is completely parallelized at the process level. So, if you have an 8-core, 8-GPU machine, just run 8 processes in parallel.
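Process-level parallelism of this kind is typically done by pinning each worker to one GPU via `CUDA_VISIBLE_DEVICES`. A minimal sketch of the launch pattern — `./train_all.sh` stands in for whatever per-process entry point you use, and the sketch only *plans* the launches (a dry run); swap the `print` for `subprocess.Popen(cmd, env=env)` to actually start them:

```python
# Sketch: one search process per GPU, each seeing exactly one device through
# CUDA_VISIBLE_DEVICES. Dry run only -- replace `print` with
# subprocess.Popen(cmd, env=env) to launch for real.
import os

def plan_launches(n_gpus, cmd=("./train_all.sh",)):
    """Build one (env, cmd) pair per GPU, pinning one device per process."""
    launches = []
    for gpu in range(n_gpus):
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
        launches.append((env, list(cmd)))
    return launches

if __name__ == "__main__":
    for env, cmd in plan_launches(8):
        print(f"CUDA_VISIBLE_DEVICES={env['CUDA_VISIBLE_DEVICES']}", *cmd)
```

Each process then sees a single GPU as device 0, so no per-process code changes are needed.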
If you do want to train the model, you may also want to prune some hyperparameters by looking at the results of the previous runs.
My immediate goal is to use the Cube-Space AE to encode some MNIST 8-puzzle images. Then, to better understand Latplan, I am planning to train the network and get my hands dirty with the implementation. But this is off-topic for this issue; maybe I should open a new one. Thank you for all your help.
I have git cloned the repository and run `./setup.py install` and `./setup-dataset.sh`, but then I realized `train_all.sh` was not present. Later I found it in the 4.1.3 release. Do I need to set up once again in the 4.1.3 directory, or just copy `train_all.sh`, `train_others.sh`, and the other script files? Thank you.