Too many levels of symlinks error when loading model in AWS Lambda function #1177

Open
suneelmatham opened this issue Dec 3, 2021 · 6 comments · May be fixed by #2476
@suneelmatham

Hi, I have been trying to containerize a Speaker Recognition model in an AWS Lambda function, which has a read-only Linux file system with only the '/tmp' folder available for writes. Following issues #1001 and #1155, I downloaded the model files and copied them into '/tmp' during the build, then loaded the model as below:
diarizer = SpeakerRecognition.from_hparams(source='/tmp', savedir='/tmp', overrides={"pretrained_path": '/tmp'})

I am getting a "Too many levels of symlinks" error:
(screenshot: sb-lambda-err)

Please let me know how I can fix it. Thanks
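For context, the OS-level error reported above (errno `ELOOP`, "Too many levels of symbolic links") happens whenever path resolution follows a symlink chain that never reaches a real file, most simply a symlink that points at itself. This is a minimal hypothetical illustration, not SpeechBrain code:

```python
import errno
import os
import tempfile

# A symlink whose target is its own path can never be resolved:
# every attempt to open it raises OSError with errno ELOOP.
with tempfile.TemporaryDirectory() as tmp:
    link = os.path.join(tmp, "model.ckpt")
    os.symlink(link, link)  # link target is the link itself
    try:
        open(link)
    except OSError as e:
        assert e.errno == errno.ELOOP
        print("Got:", os.strerror(errno.ELOOP))
```

If a file in '/tmp' ends up symlinked into itself (or into another symlink that loops back), any attempt by the model loader to read it will fail with exactly this error.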

@mravanelli
Collaborator

@Gastron, any idea?

@Gastron
Collaborator

Gastron commented Dec 7, 2021

"Too many levels of symlinks" results from some kind of broken, probably circular, symlink. If you do the same thing on your own machine, do you get the same error? Also, double-check that your build actually copies the model files and not just the symlinks (HuggingFace Hub downloads into its cache, and SpeechBrain creates a symlink in savedir).
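The copy-vs-symlink check above can be automated. Below is a hedged sketch (the helper name `audit_model_dir` is illustrative, not part of SpeechBrain) that lists entries in a model directory that are symlinks rather than real files, and flags the ones whose targets cannot be resolved:

```python
import os
from pathlib import Path

def audit_model_dir(path):
    """Return (name, target, resolvable) for every symlink in `path`.

    `resolvable` is False for broken or circular links, because
    Path.exists() follows the link and fails to reach a real file.
    """
    suspicious = []
    for entry in sorted(Path(path).iterdir()):
        if entry.is_symlink():
            target = os.readlink(entry)
            suspicious.append((entry.name, target, entry.exists()))
    return suspicious
```

Running this on the '/tmp' directory inside the Lambda image (e.g. from the handler, before calling `from_hparams`) would show whether the "model files" copied at build time are actually symlinks left over from the HuggingFace cache.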

@suneelmatham
Author

Yes, the model loads fine in my local Docker container and on my machine too.
I downloaded the model files from HuggingFace with git-lfs and verified that they weren't symlinks. While testing locally, I noticed that a label_encoder.ckpt file is created during loading, symlinked to label_encoder.txt. Could this be causing the issue in Lambda?

@Gastron
Collaborator

Gastron commented Dec 7, 2021

It could be that. Also, are you using the paths argument to Pretrainer? Perhaps you could simplify by naming the checkpoint files appropriately in /tmp; that mapping may be what creates additional symlinks, such as this label encoder file.

Other than that, if it works locally but not on AWS Lambda, I suggest going over the loading process locally once more: perhaps it needs some other location (outside /tmp) after all? Or is there some other difference between the Lambda instance and your local setup? Can you replicate an environment where only /tmp is writable?

@anautsch anautsch added this to To do in CI/CD via automation Apr 21, 2022
@anautsch anautsch moved this from To do to VAD & speech enhancement in CI/CD Apr 21, 2022
@anautsch anautsch moved this from VAD & speech enhancement to Performance & housekeeping in CI/CD Apr 21, 2022
@Adel-Moumen
Collaborator

Hello,

Any news on this issue, please?

Thanks.
Adel

@asumagic
Collaborator

asumagic commented Apr 8, 2024

I noticed this issue before, but the cause didn't occur to me until now: source and savedir cannot be the same, which likely creates a symlink pointing into itself. We should add a check of some sort to prevent this, and reduce the use of symlinks in general (#2476).
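The check suggested above could look roughly like this. This is a hedged sketch of the idea, not the actual SpeechBrain implementation (the function name `check_pretrained_dirs` is hypothetical): if source and savedir resolve to the same directory, linking a fetched file into savedir would create a symlink whose target is its own path, producing exactly the ELOOP error reported here.

```python
from pathlib import Path

def check_pretrained_dirs(source, savedir):
    """Refuse to link a directory into itself.

    Resolving both paths first catches cases where the two arguments
    are spelled differently but name the same directory.
    """
    if Path(source).resolve() == Path(savedir).resolve():
        raise ValueError(
            "source and savedir must differ: symlinking a fetched file "
            "back into the directory it came from creates a link that "
            "points at itself (ELOOP, 'too many levels of symlinks')."
        )
```

In the original report, `source='/tmp'` and `savedir='/tmp'` were identical, so a guard like this would have failed fast with a clear message instead of an OS-level symlink error at load time.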

@asumagic asumagic linked a pull request Jun 7, 2024 that will close this issue