Hi! I'm using pretrained CLAP models for downstream tasks. I'm wondering whether the preprocessing in your wrapper code here is necessary.
This function seems to ensure the audio is repeated or truncated to the predefined length of 7 or 5 seconds. However, I found that the HTSAT and CNN models can take variable-length input and still output a 1024-dimensional embedding even without this preprocessing.
Could you tell me whether this step is important when I use the model for inference?
Thanks!
CLAP performs better with 7-second recordings, which is why we preprocess the audio. You can experiment with different ways of adjusting clip length and with different strategies for handling longer recordings. The length constraint in CLAP comes from HTSAT.
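For reference, the repeat-or-truncate preprocessing discussed above can be sketched roughly as follows. This is a minimal illustration, not the repository's actual wrapper code; the function name and the 44.1 kHz sample rate are assumptions for the example.

```python
import numpy as np

def pad_or_truncate(audio: np.ndarray, target_seconds: float = 7.0,
                    sample_rate: int = 44100) -> np.ndarray:
    """Repeat short clips and truncate long ones to a fixed length.

    Sketch only: the real wrapper may use a different sample rate
    and padding strategy.
    """
    target_len = int(target_seconds * sample_rate)
    if len(audio) < target_len:
        # Tile the clip enough times to cover the target length.
        repeats = int(np.ceil(target_len / len(audio)))
        audio = np.tile(audio, repeats)
    # Truncate to exactly the target length.
    return audio[:target_len]

# Example: a 3-second clip becomes exactly 7 seconds.
clip = np.random.randn(3 * 44100).astype(np.float32)
fixed = pad_or_truncate(clip)
print(len(fixed) / 44100)  # 7.0
```

Whatever strategy you choose (tiling, zero-padding, or chunking long files and averaging embeddings), feeding the model the clip length it was trained on tends to give more reliable embeddings.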