According to the dataset description on Zenodo, I expected the timestamp for each audio clip to be included in the label.csv file. However, that file only contains the clip index within a particular song. Did you segment the whole song into equal-length clips? If so, what is the duration of each clip? Otherwise, how can I determine the timestamps?
For the timestamp information, please download the latest version of EMOPIA (2.2). It contains a file called timestamps.json, which records all the timestamps in dict format for easier use; the notebook scripts/load_timestamp.ipynb shows an example of the format. You can also find the timestamps of each file under tagging_lists/, and use scripts/timestamp2clip.py to cut audio clips from the timestamps.
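If you prefer to slice the audio yourself rather than use scripts/timestamp2clip.py, the core step is just converting a (start, end) timestamp pair into a sample index range. A minimal sketch, assuming each timestamps.json entry maps a clip ID to start/end times in seconds (the exact key names and the example entry below are hypothetical; see scripts/load_timestamp.ipynb for the real format):

```python
import json

def clip_samples(start_sec, end_sec, sample_rate):
    """Convert a (start, end) timestamp in seconds to a sample index range."""
    return int(start_sec * sample_rate), int(end_sec * sample_rate)

# Stand-in for json.load(open("timestamps.json")); the entry is made up.
timestamps = json.loads('{"Q1_example_0": {"start": 12.5, "end": 41.0}}')

entry = timestamps["Q1_example_0"]
start, end = clip_samples(entry["start"], entry["end"], sample_rate=44100)
# `start:end` is the slice to take from the decoded full-song waveform.
print(start, end)
```

The sample range can then be applied to whatever array your audio loader returns (e.g. `waveform[start:end]`).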
We don't segment the whole song into equal-length clips. We choose segments with a specific emotion and try to place each endpoint at the end of a musical phrase. As a result, a song may yield one or more segments.
A reminder: if you want to use the data for full-song analysis, note that a few of the source videos are actually YouTube playlists. If a song contains many segments (for example, 10+), it may be a playlist.
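The playlist check above can be automated by counting segments per song ID. A minimal sketch, assuming clip names follow a "&lt;quadrant&gt;_&lt;song_id&gt;_&lt;clip_index&gt;" pattern (the sample clip list and the exact naming convention are assumptions; adapt the parsing to the actual file names in the dataset):

```python
from collections import Counter

def segment_counts(clip_names):
    """Count clips per song ID from names like 'Q1_abc123_0'.

    Splits off the leading quadrant tag and the trailing clip index,
    so song IDs containing underscores are handled correctly.
    """
    counts = Counter()
    for name in clip_names:
        _prefix, rest = name.split("_", 1)
        song_id, _idx = rest.rsplit("_", 1)
        counts[song_id] += 1
    return counts

clips = ["Q1_abc123_0", "Q1_abc123_1", "Q4_xyz789_0"]
counts = segment_counts(clips)
# Songs with unusually many segments (e.g. 10+) may be YouTube playlists.
suspects = [song for song, n in counts.items() if n >= 10]
print(counts, suspects)
```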