Old videos don't download #183
Thanks for the issue. Yes, that video should be matched: it does have a 240p version, and the cut-off was lowered to 240p in #162. Most likely something has changed in the metadata that requires a tweak to the matching code. |
Noticed the same issue. The log does have additional info, though: "has no published date set, marking to be skipped" on old videos. Maybe some older videos have a different metadata format that TubeSync does not understand. |
@azukaar can you paste a link to a video that has that error? If it is legitimately missing a publish date in the metadata that would cause this issue. |
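To illustrate the idea of older videos exposing their dates differently, here is a small, tolerant date-extraction sketch. It is not TubeSync's actual parsing code; the fallback keys (`release_timestamp`, `timestamp`) are common yt-dlp info-dict fields used here as assumptions about what an older video might carry instead of `upload_date`:

```python
from datetime import datetime, timezone

def extract_upload_date(metadata: dict):
    """Return an aware datetime for the upload date, or None if absent.

    Tries the usual 'upload_date' (a YYYYMMDD string) first, then falls
    back to epoch-based fields some entries expose instead.
    """
    raw = metadata.get("upload_date")
    if raw:
        return datetime.strptime(raw, "%Y%m%d").replace(tzinfo=timezone.utc)
    for key in ("release_timestamp", "timestamp"):
        epoch = metadata.get(key)
        if epoch is not None:
            return datetime.fromtimestamp(epoch, tz=timezone.utc)
    return None  # caller decides whether to skip now or retry later
```

A `None` return is exactly the situation that produces the "has no published date set" log line, so the interesting question is whether these videos genuinely lack every date field or whether the first metadata fetch simply failed.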
Hello! Weirdly enough, when I went back to TubeSync to get such videos, all the videos marked as skipped by the system seem to have been downloaded anyway, despite that message in the log ("has no published date set, marking to be skipped"). |
That video does have an upload date. What I can only assume happened is that the first attempt to get the video's metadata failed, so the video was marked to be skipped. At a later time, when the metadata task was retried and succeeded, it was unskipped and the video was downloaded. These sorts of issues are relatively common and are one of the reasons tasks are retried quite a few times before failing permanently. |
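The retry-then-unskip behaviour described above can be sketched in plain Python. This is an illustration of the mechanism, not TubeSync's actual task code; the dict stands in for a Media row and `MAX_ATTEMPTS` is an assumed retry limit:

```python
MAX_ATTEMPTS = 5  # illustrative; the real retry count lives in the task queue

def refresh_metadata(media: dict, fetch) -> bool:
    """One pass of a (simplified) metadata task.

    `fetch` is any callable returning a metadata dict or raising on
    failure. Returns True if the task should be retried later.
    """
    try:
        meta = fetch()
    except Exception:
        media["skip"] = True  # no usable metadata yet: mark skipped
        media["attempts"] = media.get("attempts", 0) + 1
        return media["attempts"] < MAX_ATTEMPTS
    media["metadata"] = meta
    if meta.get("upload_date"):
        media["skip"] = False  # retry succeeded: unskip for download
    return False  # done, no more retries needed
```

On the first (failing) call the item is marked skipped and queued for retry; a later successful call stores the metadata and clears the skip flag, matching the behaviour observed in the thread.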
Seeing the same issue here today for multiple videos on a playlist. Playlist: https://www.youtube.com/playlist?list=PLJtPH8t_isfVo8JMCfpm50ZoGWK9XASW6 Examples from the log:
I also have a playlist I created myself, where I add videos I find interesting and want downloaded, and I'm seeing the same issue there too. The playlist currently has only one media item and a similar error in the logs about the media being skipped. I deleted the installation and re-created it, and I'm still seeing the same problem. Running the latest version at the time of writing (0.13.2) in a Docker container. |
This error means that the |
The metadata is downloaded OK; that happened after the message shown in my original post, which I thought was strange, as why would the metadata be downloaded if the video was being skipped by the system? I tried resetting the tasks, but the videos still don't download.
If I manually skip and then unskip a video, it starts to download. Q1: Is there a way to do this for all the skipped media, such as a command line argument? (It would take a very long time to do this manually for each video.) Q2: Is there a way to add a source and specify to download the media even if there is no "upload_date" in the metadata? Thanks for the prompt reply and the great software! |
You can use the Currently the |
Unfortunately it still didn't download the videos, it downloaded the metadata again, but skipped the videos.
I worked around the issue by running the following query on the database I'm using (MySQL): UPDATE … I reset the tasks and now they are downloading. It would be nice if there were an easier method to perform this type of action, such as an override button for all media in a "skipped" state. |
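The raw-SQL workaround above could likely also be done through the ORM. Assuming the Media model has a boolean `skip` field (which the workaround implies), the Django shell equivalent would be roughly `Media.objects.filter(skip=True).update(skip=False)`. A self-contained sketch of the same bulk-unskip idea over plain dicts:

```python
def bulk_unskip(media_rows):
    """Clear the skip flag on every row that has it set.

    Mirrors a SQL workaround along the lines of
    UPDATE ... SET skip = false WHERE skip = true (table and column
    names are assumptions; check your actual schema first).
    Returns the number of rows changed.
    """
    changed = 0
    for row in media_rows:
        if row.get("skip"):
            row["skip"] = False
            changed += 1
    return changed
```

As in the workaround, the download tasks would still need to be reset afterwards so the now-unskipped items get picked up.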
@hunterpankey the actual date parsing of the string into a datetime object is here: https://github.com/meeb/tubesync/blob/main/tubesync/sync/models.py#L1088
$ docker exec -ti tubesync python3 /app/manage.py shell

Then in the Python shell enter the following test code (my example is using UTC as a test timezone; your timezone should be whatever yours is set to):

Python 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from datetime import datetime
>>> from django.utils import timezone
>>> example_upload_date = '20230611'
>>> dt = datetime.strptime(example_upload_date, '%Y%m%d')
>>> timezone.make_aware(dt)
datetime.datetime(2023, 6, 11, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='UTC')) |
Same results, basically. It's parsing OK, and make_aware() looks like it's doing things properly. In my fumbling, I tried to log the date when it outputs the message, but I don't know enough about Docker, Python, or Django to get it to restart properly, and I inadvertently reset all my sources, which took a couple of hours to clear all the tasks.
|
Well, that's certainly weird. Once your tasks have all caught up, find the UUID of the media item that says it has no upload date (browse to it on the web interface and you can nick the UUID from the URL, like http://tubesynchost:4848/media/21882d36-7cfe-4ac2-8d86-537125417349), then try this in a shell instead (obviously replacing with your media item's UUID):

$ docker exec -ti tubesync python3 /app/manage.py shell

Then:

Python 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from sync.models import Media
>>> from django.utils import timezone
>>> media_item = Media.objects.get(uuid='21882d36-7cfe-4ac2-8d86-537125417349')
>>> media_item.upload_date
datetime.datetime(2023, 11, 27, 0, 0)
>>> timezone.make_aware(media_item.upload_date)
datetime.datetime(2023, 11, 27, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='UTC'))

If that works I'm going to be running low on ideas, also as to why skipping then unskipping works if the |
Oh, I can't get any media to download normally, 0%. The only things that ever download are the ones I manually skip and then unskip, so I don't think the issue is with the metadata. Anyway, I'll give this a shot and see what happens with a specific UUID. Thanks for the help! |
This really doesn't seem like a complicated issue, so if we can get it debugged with your install, then a patch to fix it should be quite trivial. |
It's so weird! Everything looks like it should. I was expecting to see an empty date in here, but it looks normal! (I included the YouTube ID in case you wanted to see the metadata for yourself.)
|
And that video when being downloaded errors with upload_date not set? |
Hm, that's not quite what the task does actually; can you try this as well:

Python 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from sync.models import Media
>>> from common.utils import json_serial
>>> media_item = Media.objects.get(uuid='b2625b71-2964-41f6-8049-eb93d158a910')
>>> metadata = media_item.index_metadata()
>>> metadata
{... lots of spam ... }
>>> media_item.metadata = json.dumps(metadata, default=json_serial)
>>> media_item.upload_date
datetime.datetime(2023, 11, 27, 0, 0)
>>> timezone.make_aware(media_item.upload_date)
datetime.datetime(2023, 11, 27, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='UTC')) |
Looks like I don't have "json" in scope for the "json.dumps()" line. Where can I import that from? |
OK, got it together with an extra
|
@hunterpankey this turned out to be a race condition with the handling of metadata. It should be fixed in the latest release. |
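For readers curious what "a race condition with the handling of metadata" can look like in general, here is a generic check-then-act illustration, not the actual TubeSync fix. The skip decision is made from a stale snapshot of the media row while another task writes the metadata in between:

```python
def mark_skip_from_snapshot(snapshot: dict, row: dict) -> None:
    """Racy: decides from data read earlier, then writes."""
    if not snapshot.get("metadata"):
        row["skip"] = True

def mark_skip_fresh(row: dict) -> None:
    """Safer: inspects the current state immediately before acting.
    In Django terms this is roughly refresh_from_db() inside
    transaction.atomic()."""
    if not row.get("metadata"):
        row["skip"] = True

# An interleaving that produces the bad skip:
row = {"skip": False, "metadata": None}
snapshot = dict(row)                           # task A reads the row
row["metadata"] = {"upload_date": "20090101"}  # task B fills metadata
mark_skip_from_snapshot(snapshot, row)
assert row["skip"] is True   # wrong: skipped despite having metadata

# Reading fresh state avoids it:
row2 = {"skip": False, "metadata": {"upload_date": "20090101"}}
mark_skip_fresh(row2)
assert row2["skip"] is False
```

This also fits the symptom in the thread: manually skipping and unskipping forces a fresh evaluation of the row, which is why it unblocked downloads.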
Good solve! I'll pull the latest and confirm. |
Yep, looking good. Thanks for the quick turnaround! |
Using the "Get next best resolution or codec" option doesn't retrieve videos that are very old (from 2009, with 240p as the best available resolution).
Example channel ID: UCDsElQQt_gCZ9LgnW-7v-cQ
Example video: https://www.youtube.com/watch?v=QCUCTLk05cA
The reason provided: Media cannot be downloaded because it has no formats which match the source requirements.
Available formats: Media has no indexed available formats
Desired format: 1080p (video:VP9, audio:OPUS) 60FPS HDR
I'm not sure if one of the flags (60FPS, HDR, OPUS, VP9, 1080p) is causing it to ignore the video that IS available, or if it's just because the video doesn't provide the expected information.
It could also be because it is both the next best resolution AND codec?
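For context, a "next best resolution" fallback can be sketched as follows. This is an assumption about how such matching might work, not TubeSync's actual matcher; the dicts loosely follow yt-dlp-style format entries:

```python
# Walk down from the desired height until any available format fits.
RESOLUTIONS = [2160, 1440, 1080, 720, 480, 360, 240, 144]

def pick_format(formats, desired_height=1080):
    """Return the best available format at or below desired_height.

    Codec/FPS/HDR flags are ignored here; a stricter first pass would
    filter on those before falling back to resolution alone.
    """
    start = RESOLUTIONS.index(desired_height)
    for height in RESOLUTIONS[start:]:
        for f in formats:
            if f.get("height") == height:
                return f
    return None

# A 2009-era video whose only format is 240p still matches:
old_video = [{"format_id": "5", "height": 240, "vcodec": "h264"}]
assert pick_format(old_video, 1080)["height"] == 240
```

Under this model the 240p video should match, which points the suspicion at the other part of the error, "Media has no indexed available formats": if the format list was never indexed at all, no fallback logic can ever fire.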