Trying to use for metric depth - ZoeDepth sanity check complains #122
Comments
Try using timm == 0.6.7
Hello, name: zoe
Cheers. P.S.: I don't know if this is relevant or not, but I also see the message "xFormers not available" as this runs.
I hard-coded a few parameters to make it easier to run in the debugger, and I'm further along. No idea why they were not input properly. ZoeDepth seems better initialized now. I'll follow up on this bug later, but for now I will keep it hardcoded. ZoeDepth( I now get as far as this: ERROR api_key not configured (no-tty). call wandb.login(key=[your_api_key]) .
Try installing xformers: pip install xformers
Thanks. I will add it to the environment.yml file; I'm afraid of breaking my Conda build by proceeding otherwise.
To fix your error, "ERROR api_key not configured (no-tty). call wandb.login(key=[your_api_key]).": go to wandb.ai and create an account, then run the command "wandb login" with the ID you created, before training.
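As an alternative to the interactive "wandb login" step (which fails in no-tty environments), here is a minimal sketch using wandb's documented WANDB_API_KEY environment variable; the key string below is a placeholder, not a real key:

```python
import os

# Assumption: setting WANDB_API_KEY before wandb initializes avoids the
# interactive (no-tty) login prompt. Replace the placeholder with the key
# shown on your wandb.ai account page.
os.environ["WANDB_API_KEY"] = "your_api_key_here"
```

This has to run (or the variable has to be exported in the shell) before the training script imports and initializes wandb.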
Thanks for the info. I'll work on it now. ...mpr/mpr/DepthAnything/metric_depth/train_mono.py", line 173, in — whoops, found something else; could be related: Failed to load image Python extension: libtorch_cuda_cu...
I think there are some limitations in the current environment.yml file. I noticed that pytorch 1.13.1 does not overwrite an existing pytorch library on conda (say, 2.2.1) if the latter is already present. As a result, the environment loads incompatible torchvision and torchaudio libraries. I'm experimenting with competing options for this environment file, and I'll make it available to the community if there is interest.
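For reference, a hedged sketch of how explicit pins might look in an environment.yml, assuming pytorch 1.13.1 is the target (torchvision 0.14.1 and torchaudio 0.13.1 are the releases paired with it); this is a hypothetical fragment, not the repo's actual file:

```yaml
# Hypothetical pinning sketch -- not the repository's environment.yml
name: depth_anything_metric
channels:
  - pytorch
  - nvidia
  - defaults
dependencies:
  - python=3.9
  - pytorch=1.13.1
  - torchvision=0.14.1   # release paired with pytorch 1.13.1
  - torchaudio=0.13.1    # release paired with pytorch 1.13.1
  - pytorch-cuda=11.7    # assumption: a CUDA 11.7 build is wanted
```

Pinning all three torch packages together is what prevents conda from silently keeping a newer, incompatible torchvision/torchaudio pair.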
Send me messages through instagram. My id is dung26032000 |
I'm going to use this post to track some of the other problems that I have encountered so far. I've been getting a socket error as well. This added option seems to address it: --master_port=(some number), e.g. --master_port=25678.
Screenshot your problem, then send it to me.
Hi Dung. I've actually solved that one, but I'm using this thread as a document to track what I am seeing, in case anyone in the community wants to reuse the code, and also for my colleagues who will build on metric Depth Anything. The one thing I am dealing with now is that the code imposes a path for finding the training data, and I have to ascertain where that is and override those instructions. I'll let you know if I need to interact with you. Cheers, Michel
My last obstacle is making sure that I train on the files of my choosing. This is established in DepthAnything/metric_depth/zoedepth/utils/config.py.
The filenames_file is particularly relevant, as are data_path and gt_path. I need to overwrite that filenames_file with something else. The structure of that txt file is like this:
2011_09_26/2011_09_26_drive_0051_sync/image_02/data/0000000093.png 2011_09_26_drive_0051_sync/proj_depth/groundtruth/image_02/0000000093.png 721.5377
2011_09_30/2011_09_30_drive_0028_sync/image_02/data/0000002714.png 2011_09_30_drive_0028_sync/proj_depth/groundtruth/image_02/0000002714.png 707.0912
2011_09_26/2011_09_26_drive_0061_sync/image_02/data/0000000045.png 2011_09_26_drive_0061_sync/proj_depth/groundtruth/image_02/0000000045.png 721.5377
The first file is presumably the raw image, and the second is obviously the ground truth, but there is also a third entry, a number of some kind. Any idea what that represents?
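A minimal sketch of parsing one line of that KITTI-style filenames_file. The interpretation of the third column as the camera focal length in pixels is an assumption (721.5377 is KITTI's well-known calibrated focal length), and the function name is mine, not from the repo:

```python
# Hypothetical helper, not part of the DepthAnything code base.
def parse_split_line(line):
    """Split one filenames_file line into (image path, gt path, number).

    Assumption: the trailing float is the camera focal length in pixels
    (721.5377 matches KITTI's calibration), but that is inferred, not
    confirmed by the repo.
    """
    image_path, gt_path, value = line.split()
    return image_path, gt_path, float(value)

sample = (
    "2011_09_26/2011_09_26_drive_0051_sync/image_02/data/0000000093.png "
    "2011_09_26_drive_0051_sync/proj_depth/groundtruth/image_02/0000000093.png "
    "721.5377"
)
img, gt, focal = parse_split_line(sample)
```

Writing a custom filenames_file then amounts to emitting one such whitespace-separated triple per training sample.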
I am progressing... I managed to format my data file as ZoeDepth expects to see it, but CUDA complains about memory:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.03 GiB (GPU 0; 15.70 GiB total capacity; 11.81 GiB already allocated; 167.81 MiB free; 12.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I will look into reducing my training set...
For the community... Edit: the GPU hummed for a while, but eventually the training process still ran out of memory, even with arguably the lowest setting, bs=1. Following up with export 'PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512'
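The same allocator setting can also be applied from inside the training script, as a sketch; the caveat (a fact about PyTorch's allocator) is that it must be set before CUDA is initialized:

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be set before torch initializes CUDA,
# i.e. before the first CUDA tensor is created -- safest is before
# "import torch" runs at all.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
```

Setting it in the shell with export, as above, achieves the same thing as long as the training process inherits the environment.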
Hi Dung, I have met some problems with model training; can you help me have a look? I can send my problem to you through Instagram.
Send me now |
Thanks, I've just sent the following to you on Instagram
#125 (comment) is my current problem; I've also sent it to you on Instagram
I am seeing PYTORCH_CUDA_ALLOC_CONF memory errors. |
Is it possible to use gradient accumulation to deal with this? Can I invoke train_mono with extra parameters like --gradient_accumulation_steps and expect it to address my memory error? |
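Gradient accumulation can in principle trade memory for wall-clock time here, but to be clear: --gradient_accumulation_steps is not a known train_mono.py flag, so the training loop would need to be edited. A framework-free numeric sketch of the pattern (all names and values are illustrative):

```python
# Minimal numeric sketch of gradient accumulation: gradients from several
# micro-batches are averaged before a single parameter update, emulating a
# larger batch without holding it in memory at once.
# Model: y = w * x, loss = (w*x - t)^2, plain gradient descent.

w = 0.0
lr = 0.1
accum_steps = 2  # effective batch = micro-batch size * accum_steps

# four micro-batches of (input, target) pairs
micro_batches = [[(1.0, 2.0)], [(2.0, 4.0)], [(1.0, 2.0)], [(2.0, 4.0)]]

grad = 0.0
for step, batch in enumerate(micro_batches):
    for x, t in batch:
        # d/dw (w*x - t)^2 = 2*(w*x - t)*x, pre-divided so the
        # accumulated gradient is an average over accum_steps batches
        grad += 2 * (w * x - t) * x / accum_steps
    if (step + 1) % accum_steps == 0:
        w -= lr * grad   # one "optimizer step" per accum_steps batches
        grad = 0.0       # reset, as optimizer.zero_grad() would
```

In a PyTorch loop the same shape applies: divide each micro-batch loss by accum_steps before backward(), and call optimizer.step() and zero_grad() only every accum_steps iterations. Note that accumulation reduces activation memory per step but does not shrink the model or optimizer state, so it may not be enough on its own if bs=1 already overflows.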
Hello,
I am trying to use ZoeDepth for metric depth. When I try to use the ZoeDepth sanity check, it complains of...
"xformers not available", as well as:
RuntimeError: Error(s) in loading state_dict for ZoeDepth:
Missing key(s) in state_dict: "core.core.pretrained.cls_token", and various other missing keys.
What am I missing, to fully enable the transformer code?
Best wishes,
Michel