
[FEATURE] Youtube channels exclude shorts from showing #106

Open
bigsk1 opened this issue May 24, 2024 · 10 comments
Labels
help wanted Extra attention is needed

Comments

@bigsk1

bigsk1 commented May 24, 2024

Not sure if there is a way to not see shorts from youtube channels and only actual videos.

@helloteemo

Maybe Shorts aren't included in the YouTube RSS feed.

@bigsk1
Author

bigsk1 commented May 24, 2024

An idea: extract the ytp-time-duration value from the video page's HTML.

Add this function:

package feed

import (
	"fmt"
	"io"
	"net/http"
	"regexp"
	"strconv"
	"time"
)

// fetchVideoDurationFromHTML fetches the video duration by parsing the HTML of the video page.
func fetchVideoDurationFromHTML(videoURL string) (time.Duration, error) {
	resp, err := http.Get(videoURL)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return 0, err
	}

	// Regular expression to find the duration in the HTML.
	// Handles both MM:SS and H:MM:SS formats.
	re := regexp.MustCompile(`class="ytp-time-duration">(?:(\d+):)?(\d+):(\d+)</span>`)
	matches := re.FindStringSubmatch(string(body))
	if len(matches) < 4 {
		return 0, fmt.Errorf("duration not found")
	}

	// Parse hours (optional), minutes and seconds
	var hours int
	if matches[1] != "" {
		hours, err = strconv.Atoi(matches[1])
		if err != nil {
			return 0, err
		}
	}
	minutes, err := strconv.Atoi(matches[2])
	if err != nil {
		return 0, err
	}
	seconds, err := strconv.Atoi(matches[3])
	if err != nil {
		return 0, err
	}

	duration := time.Duration(hours)*time.Hour + time.Duration(minutes)*time.Minute + time.Duration(seconds)*time.Second
	return duration, nil
}

Update FetchYoutubeChannelUploads

package feed

import (
	"fmt"
	"log/slog"
	"net/http"
	"net/url"
	"strings"
	"time"
)

type youtubeFeedResponseXml struct {
	Channel     string `xml:"title"`
	ChannelLink struct {
		Href string `xml:"href,attr"`
	} `xml:"link"`
	Videos []struct {
		Title     string `xml:"title"`
		Published string `xml:"published"`
		Link      struct {
			Href string `xml:"href,attr"`
		} `xml:"link"`

		Group struct {
			Thumbnail struct {
				Url string `xml:"url,attr"`
			} `xml:"http://search.yahoo.com/mrss/ thumbnail"`
		} `xml:"http://search.yahoo.com/mrss/ group"`
	} `xml:"entry"`
}

func parseYoutubeFeedTime(t string) time.Time {
	parsedTime, err := time.Parse("2006-01-02T15:04:05-07:00", t)

	if err != nil {
		return time.Now()
	}

	return parsedTime
}

func FetchYoutubeChannelUploads(channelIds []string, videoUrlTemplate string) (Videos, error) {
	requests := make([]*http.Request, 0, len(channelIds))

	for i := range channelIds {
		request, _ := http.NewRequest("GET", "https://www.youtube.com/feeds/videos.xml?channel_id="+channelIds[i], nil)
		requests = append(requests, request)
	}

	job := newJob(decodeXmlFromRequestTask[youtubeFeedResponseXml](defaultClient), requests).withWorkers(30)

	responses, errs, err := workerPoolDo(job)

	if err != nil {
		return nil, fmt.Errorf("%w: %v", ErrNoContent, err)
	}

	videos := make(Videos, 0, len(channelIds)*15)

	var failed int

	for i := range responses {
		if errs[i] != nil {
			failed++
			slog.Error("Failed to fetch youtube feed", "channel", channelIds[i], "error", errs[i])
			continue
		}

		response := responses[i]

		for j := range response.Videos {
			video := &response.Videos[j]

			videoURL := video.Link.Href

			// Fetch video duration; skip the video if the page can't be parsed
			duration, err := fetchVideoDurationFromHTML(videoURL)
			if err != nil {
				continue
			}

			// Skip Shorts based on duration and title
			if duration <= 60*time.Second || strings.Contains(video.Title, "#shorts") {
				continue
			}

			if videoUrlTemplate != "" {
				parsedUrl, err := url.Parse(videoURL)
				if err == nil {
					videoURL = strings.ReplaceAll(videoUrlTemplate, "{VIDEO-ID}", parsedUrl.Query().Get("v"))
				} else {
					videoURL = "#"
				}
			}

			videos = append(videos, Video{
				ThumbnailUrl: video.Group.Thumbnail.Url,
				Title:        video.Title,
				Url:          videoURL,
				Author:       response.Channel,
				AuthorUrl:    response.ChannelLink.Href + "/videos",
				TimePosted:   parseYoutubeFeedTime(video.Published),
			})
		}
	}

	if len(videos) == 0 {
		return nil, ErrNoContent
	}

	videos.SortByNewest()

	if failed > 0 {
		return videos, fmt.Errorf("%w: missing videos from %d channels", ErrPartialContent, failed)
	}

	return videos, nil
}

Added a function fetchVideoDurationFromHTML to fetch video duration by parsing the HTML content of the video page.
Updated FetchYoutubeChannelUploads to filter out videos based on their duration and the existing title check.

@helloteemo

@bigsk1 How do we deal with the delays? The fetchVideoDurationFromHTML function requires an additional network request per video, plus CPU time for regex parsing of the HTML (850KB+).

@helloteemo

Perhaps leaving it to Chrome for processing would be a better choice.

please refer to: iframe_api

@bigsk1
Author

bigsk1 commented May 24, 2024

Perhaps leaving it to Chrome for processing would be a better choice.

please refer to: iframe_api

Yes, it's a lot to fetch and parse all of that HTML. I also found out while scraping that ytp-time-duration isn't actually present in the served HTML.

What about this approach? We'd need to check the size of the requests.

It fetches the video duration by making an HTTP request to the video page and extracting it from the embedded ytInitialPlayerResponse JSON object.

This approach filters out YouTube Shorts by checking each video's duration against the metadata embedded in the HTML:

package feed

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
	"regexp"
	"strconv"
	"strings"
	"time"
	"log/slog"
)

type youtubeFeedResponseXml struct {
	Channel     string `xml:"title"`
	ChannelLink struct {
		Href string `xml:"href,attr"`
	} `xml:"link"`
	Videos []struct {
		Title     string `xml:"title"`
		Published string `xml:"published"`
		Link      struct {
			Href string `xml:"href,attr"`
		} `xml:"link"`

		Group struct {
			Thumbnail struct {
				Url string `xml:"url,attr"`
			} `xml:"http://search.yahoo.com/mrss/ thumbnail"`
		} `xml:"http://search.yahoo.com/mrss/ group"`
	} `xml:"entry"`
}

func parseYoutubeFeedTime(t string) time.Time {
	parsedTime, err := time.Parse("2006-01-02T15:04:05-07:00", t)

	if err != nil {
		return time.Now()
	}

	return parsedTime
}

// FetchVideoDuration fetches the duration of a YouTube video using the embedded metadata
func FetchVideoDuration(videoID string) (time.Duration, error) {
	resp, err := http.Get("https://www.youtube.com/watch?v=" + videoID)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return 0, err
	}

	// Extract the lengthSeconds value from the embedded ytInitialPlayerResponse JSON
	re := regexp.MustCompile(`"lengthSeconds":"(\d+)"`)
	matches := re.FindStringSubmatch(string(body))
	if len(matches) < 2 {
		return 0, fmt.Errorf("duration not found")
	}

	seconds, err := strconv.Atoi(matches[1])
	if err != nil {
		return 0, err
	}

	return time.Duration(seconds) * time.Second, nil
}

func FetchYoutubeChannelUploads(channelIds []string, videoUrlTemplate string) (Videos, error) {
	requests := make([]*http.Request, 0, len(channelIds))

	for i := range channelIds {
		request, _ := http.NewRequest("GET", "https://www.youtube.com/feeds/videos.xml?channel_id="+channelIds[i], nil)
		requests = append(requests, request)
	}

	job := newJob(decodeXmlFromRequestTask[youtubeFeedResponseXml](defaultClient), requests).withWorkers(30)

	responses, errs, err := workerPoolDo(job)

	if err != nil {
		return nil, fmt.Errorf("%w: %v", ErrNoContent, err)
	}

	videos := make(Videos, 0, len(channelIds)*15)

	var failed int

	for i := range responses {
		if errs[i] != nil {
			failed++
			slog.Error("Failed to fetch youtube feed", "channel", channelIds[i], "error", errs[i])
			continue
		}

		response := responses[i]

		for j := range response.Videos {
			video := &response.Videos[j]

			// Extract video ID from URL
			parsedUrl, err := url.Parse(video.Link.Href)
			if err != nil {
				slog.Error("Failed to parse video URL", "url", video.Link.Href, "error", err)
				continue
			}
			videoID := parsedUrl.Query().Get("v")
			if videoID == "" {
				slog.Error("Failed to extract video ID from URL", "url", video.Link.Href)
				continue
			}

			// Fetch video duration
			duration, err := FetchVideoDuration(videoID)
			if err != nil {
				slog.Error("Failed to fetch video duration", "videoID", videoID, "error", err)
				continue
			}

			// Skip shorts based on duration
			if duration <= 60*time.Second {
				continue
			}

			var videoUrl string

			if videoUrlTemplate == "" {
				videoUrl = video.Link.Href
			} else {
				videoUrl = strings.ReplaceAll(videoUrlTemplate, "{VIDEO-ID}", videoID)
			}

			videos = append(videos, Video{
				ThumbnailUrl: video.Group.Thumbnail.Url,
				Title:        video.Title,
				Url:          videoUrl,
				Author:       response.Channel,
				AuthorUrl:    response.ChannelLink.Href + "/videos",
				TimePosted:   parseYoutubeFeedTime(video.Published),
			})
		}
	}

	if len(videos) == 0 {
		return nil, ErrNoContent
	}

	videos.SortByNewest()

	if failed > 0 {
		return videos, fmt.Errorf("%w: missing videos from %d channels", ErrPartialContent, failed)
	}

	return videos, nil
}

@bigsk1
Author

bigsk1 commented May 24, 2024

I also have an idea for a Cloudflare Worker to get the video length and do the heavy lifting:

Cloudflare Worker: the Worker fetches the HTML of a YouTube video page, extracts the duration, and returns it.
The Glance application sends a request to the Worker for each video ID to get the duration, then filters out short videos.

The Cloudflare Worker script will fetch the HTML content of the YouTube video page and extract the duration from the ytInitialPlayerResponse object.

Worker Script

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const url = new URL(request.url)
  const videoId = url.searchParams.get('videoId')
  if (!videoId) {
    return new Response('Missing videoId parameter', { status: 400 })
  }

  try {
    const response = await fetch(`https://www.youtube.com/watch?v=${videoId}`)
    const html = await response.text()
    
    // Extract video duration from HTML
    const durationMatch = html.match(/"lengthSeconds":"(\d+)"/)
    if (!durationMatch || durationMatch.length < 2) {
      return new Response('Duration not found', { status: 404 })
    }
    
    const durationSeconds = parseInt(durationMatch[1], 10)
    return new Response(JSON.stringify({ duration: durationSeconds }), {
      headers: { 'Content-Type': 'application/json' }
    })
  } catch (error) {
    return new Response('Error fetching video duration', { status: 500 })
  }
}

youtube.go

package feed

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strings"
	"time"
	"log/slog"
)

type youtubeFeedResponseXml struct {
	Channel     string `xml:"title"`
	ChannelLink struct {
		Href string `xml:"href,attr"`
	} `xml:"link"`
	Videos []struct {
		Title     string `xml:"title"`
		Published string `xml:"published"`
		Link      struct {
			Href string `xml:"href,attr"`
		} `xml:"link"`

		Group struct {
			Thumbnail struct {
				Url string `xml:"url,attr"`
			} `xml:"http://search.yahoo.com/mrss/ thumbnail"`
		} `xml:"http://search.yahoo.com/mrss/ group"`
	} `xml:"entry"`
}

func parseYoutubeFeedTime(t string) time.Time {
	parsedTime, err := time.Parse("2006-01-02T15:04:05-07:00", t)
	if err != nil {
		return time.Now()
	}
	return parsedTime
}

// FetchVideoDuration fetches the duration of a YouTube video using the Cloudflare Worker
func FetchVideoDuration(videoID string) (time.Duration, error) {
	workerURL := fmt.Sprintf("https://YOUR_WORKER_SUBDOMAIN.workers.dev?videoId=%s", videoID)
	resp, err := http.Get(workerURL)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	var result struct {
		Duration int `json:"duration"`
	}

	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return 0, err
	}

	return time.Duration(result.Duration) * time.Second, nil
}

func FetchYoutubeChannelUploads(channelIds []string, videoUrlTemplate string) (Videos, error) {
	requests := make([]*http.Request, 0, len(channelIds))

	for i := range channelIds {
		request, _ := http.NewRequest("GET", "https://www.youtube.com/feeds/videos.xml?channel_id="+channelIds[i], nil)
		requests = append(requests, request)
	}

	job := newJob(decodeXmlFromRequestTask[youtubeFeedResponseXml](defaultClient), requests).withWorkers(30)

	responses, errs, err := workerPoolDo(job)

	if err != nil {
		return nil, fmt.Errorf("%w: %v", ErrNoContent, err)
	}

	videos := make(Videos, 0, len(channelIds)*15)

	var failed int

	for i := range responses {
		if errs[i] != nil {
			failed++
			slog.Error("Failed to fetch youtube feed", "channel", channelIds[i], "error", errs[i])
			continue
		}

		response := responses[i]

		for j := range response.Videos {
			video := &response.Videos[j]

			// Extract video ID from URL
			parsedUrl, err := url.Parse(video.Link.Href)
			if err != nil {
				slog.Error("Failed to parse video URL", "url", video.Link.Href, "error", err)
				continue
			}
			videoID := parsedUrl.Query().Get("v")
			if videoID == "" {
				slog.Error("Failed to extract video ID from URL", "url", video.Link.Href)
				continue
			}

			// Fetch video duration
			duration, err := FetchVideoDuration(videoID)
			if err != nil {
				slog.Error("Failed to fetch video duration", "videoID", videoID, "error", err)
				continue
			}

			// Skip shorts based on duration
			if duration <= 60*time.Second {
				continue
			}

			var videoUrl string

			if videoUrlTemplate == "" {
				videoUrl = video.Link.Href
			} else {
				videoUrl = strings.ReplaceAll(videoUrlTemplate, "{VIDEO-ID}", videoID)
			}

			videos = append(videos, Video{
				ThumbnailUrl: video.Group.Thumbnail.Url,
				Title:        video.Title,
				Url:          videoUrl,
				Author:       response.Channel,
				AuthorUrl:    response.ChannelLink.Href + "/videos",
				TimePosted:   parseYoutubeFeedTime(video.Published),
			})
		}
	}

	if len(videos) == 0 {
		return nil, ErrNoContent
	}

	videos.SortByNewest()

	if failed > 0 {
		return videos, fmt.Errorf("%w: missing videos from %d channels", ErrPartialContent, failed)
	}

	return videos, nil
}

@svilenmarkov
Member

Hey,

This is something that I've been annoyed by as well ever since I added the videos widget. I don't know of a reasonable way to solve this problem that doesn't involve using YouTube's API.

Having to make an extra request for every single video is extremely inefficient and would either result in timeouts, slowed page loads or hitting rate limits. I have 37 channels added to one of my videos widgets, at 15 videos per feed that's 555 extra requests. Those requests would prevent the entire page from loading until they're done. I'm sure there's plenty of people with widgets that have more than 37 channels in them which would exacerbate the issue even further.

Maybe someone can chime in with a different approach to tackling this problem.

@svilenmarkov svilenmarkov added the help wanted Extra attention is needed label May 25, 2024
@bigsk1
Author

bigsk1 commented May 26, 2024

I wonder if public or local Invidious instances could be used as an alternative source of videos, with a way to directly filter by video type.

@itsdoublearon

I'd also like to add my support for a way to block Shorts, please.

@Subtle-Vipier

I'm not sure whether making HEAD requests instead of GET requests would make one request per video acceptable (probably not, given the rate-limit concern), but if so: a HEAD request to https://www.youtube.com/shorts/<VIDEO_ID> returns a 200 for a Short and a 303 for a regular video. That could help discriminate videos without fetching the full body for each one.
