allow range request for every content type, move lock only when metad… #495

Closed
wants to merge 1 commit
24 changes: 19 additions & 5 deletions server/handlers.go
@@ -686,9 +686,6 @@ func (s *Server) unlock(token, filename string) {
 }
 
 func (s *Server) checkMetadata(ctx context.Context, token, filename string, increaseDownload bool) (metadata, error) {
-	s.lock(token, filename)
-	defer s.unlock(token, filename)
-
 	var metadata metadata
 
 	r, _, err := s.storage.Get(ctx, token, fmt.Sprintf("%s.metadata", filename))
@@ -705,7 +702,23 @@ func (s *Server) checkMetadata(ctx context.Context, token, filename string, increaseDownload bool) (metadata, error) {
 	} else if !metadata.MaxDate.IsZero() && time.Now().After(metadata.MaxDate) {
 		return metadata, errors.New("maxDate expired")
 	} else if metadata.MaxDownloads != -1 && increaseDownload {
-		// todo(nl5887): mutex?
+		s.lock(token, filename)
+		defer s.unlock(token, filename)
+
+		r2, _, err := s.storage.Get(ctx, token, fmt.Sprintf("%s.metadata", filename))
+		defer CloseCheck(r2.Close)
+
+		if err != nil {
+			return metadata, err
+		}
+
+		if err := json.NewDecoder(r2).Decode(&metadata); err != nil {
+			return metadata, err
+		}
+
+		if metadata.Downloads >= metadata.MaxDownloads {
+			return metadata, errors.New("maxDownloads expired")
+		}
 
 		// update number of downloads
 		metadata.Downloads++
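This hunk narrows the critical section: the first metadata read is now lock-free, and the lock is only taken when a download counter actually has to be incremented, with the metadata re-read under the lock so a concurrent increment between the two reads is not lost. A minimal, self-contained sketch of that re-read-under-lock pattern, using a local file and a plain sync.Mutex in place of transfer.sh's per-file lock and storage backend (all names here are illustrative, not the real handlers.go code):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
	"sync"
)

// metadata mirrors only the fields this diff touches; the real struct has more.
type metadata struct {
	Downloads    int
	MaxDownloads int
}

var mu sync.Mutex // stands in for s.lock/s.unlock (keyed per token/filename in transfer.sh)

// readMeta stands in for s.storage.Get plus the JSON decode; reads tolerate
// concurrency, so the fast path below takes no lock.
func readMeta(path string) (metadata, error) {
	var m metadata
	b, err := os.ReadFile(path)
	if err != nil {
		return m, err
	}
	return m, json.Unmarshal(b, &m)
}

func checkMetadata(path string, increaseDownload bool) (metadata, error) {
	// Fast path: lock-free read, enough for pure checks (e.g. HEAD requests).
	m, err := readMeta(path)
	if err != nil {
		return m, err
	}
	if !increaseDownload || m.MaxDownloads == -1 {
		return m, nil
	}

	// Slow path: take the lock and re-read, so a counter bump by a concurrent
	// download between the two reads cannot be lost.
	mu.Lock()
	defer mu.Unlock()

	if m, err = readMeta(path); err != nil {
		return m, err
	}
	if m.Downloads >= m.MaxDownloads {
		return m, errors.New("maxDownloads expired")
	}
	m.Downloads++
	b, _ := json.Marshal(m)
	return m, os.WriteFile(path, b, 0o644)
}

func main() {
	_ = os.WriteFile("demo.metadata", []byte(`{"Downloads":0,"MaxDownloads":2}`), 0o644)
	for i := 0; i < 3; i++ {
		m, err := checkMetadata("demo.metadata", true)
		fmt.Println(m, err) // the third call fails with "maxDownloads expired"
	}
}
```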
@@ -994,6 +1007,7 @@ func (s *Server) headHandler(w http.ResponseWriter, r *http.Request) {
 
 	remainingDownloads, remainingDays := metadata.remainingLimitHeaderValues()
 
+	w.Header().Set("Accept-Ranges", "bytes")
 	w.Header().Set("Content-Type", contentType)
 	w.Header().Set("Content-Length", strconv.FormatUint(contentLength, 10))
 	w.Header().Set("Connection", "close")
@@ -1051,7 +1065,7 @@ func (s *Server) getHandler(w http.ResponseWriter, r *http.Request) {
 		reader = ioutil.NopCloser(bluemonday.UGCPolicy().SanitizeReader(reader))
 	}
 
-	if w.Header().Get("Range") != "" || strings.HasPrefix(metadata.ContentType, "video") || strings.HasPrefix(metadata.ContentType, "audio") {
+	if r.Header.Get("Range") != "" {
Collaborator:
This creates a very large number of unnecessary downloads of the entire file, and thus a lot of load on the backend storage (remote storage gets downloaded in full, and local storage gets copied entirely into a temp file) in L1078.
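For reference, the costly path being described looks roughly like this (a sketch with assumed names, not the literal code at L1078): the whole object is drained from storage into a temp file just so http.ServeContent gets the io.ReadSeeker it needs, no matter how few bytes the client asked for.

```go
package sketch

import (
	"io"
	"net/http"
	"os"
	"time"
)

// serveRange answers a Range request by first copying the *entire* object
// out of the backend. For a 1 KiB range of a 2 GiB file, 2 GiB still moves.
func serveRange(w http.ResponseWriter, r *http.Request, reader io.Reader, name string, modTime time.Time) {
	tmp, err := os.CreateTemp("", "range-") // transfer.sh uses its own temp path
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer os.Remove(tmp.Name())
	defer tmp.Close()

	// Full copy regardless of the requested range: this is the backend load
	// (and local disk churn) the comment above is about.
	if _, err := io.Copy(tmp, reader); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	if _, err := tmp.Seek(0, io.SeekStart); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// ServeContent needs an io.ReadSeeker to emit 206 Partial Content.
	http.ServeContent(w, r, name, modTime, tmp)
}
```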

Collaborator (author):
it just creates them for every type of file content instead of only for video and audio files.
there was a bug (w vs r) in the original implementation that prevented range requests from being fulfilled on non audio/video files.

but yes, we should try to exploit range download directly from the storage implementation.
we can change the storage.Get interface to receive the range and pass it down to the concrete storage "sdk", in order to minimise what's actually downloaded.

good catch anyway: mine was a quick hack and I didn't think about the performance implications
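What that storage.Get change could look like, as a sketch only: transfer.sh's Storage interface does not take a range argument in this PR, and every name below beyond Get itself is an assumption.

```go
package storage

import (
	"context"
	"fmt"
	"io"
)

// Range is a hypothetical byte range lifted from the client's Range header;
// a nil *Range means "the whole object".
type Range struct {
	Start, End uint64 // inclusive offsets, as in "bytes=Start-End"
}

// Header renders the range for backends that are driven over HTTP
// (S3, Google Drive, ...), so the truncation happens on their side.
func (r *Range) Header() string {
	return fmt.Sprintf("bytes=%d-%d", r.Start, r.End)
}

// Storage sketches the amended interface: the range is pushed down into the
// concrete backend so only the requested bytes leave the storage layer.
type Storage interface {
	Get(ctx context.Context, token, filename string, rng *Range) (reader io.ReadCloser, contentLength uint64, err error)
}
```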

 		file, err := ioutil.TempFile(s.tempPath, "range-")
 		defer s.cleanTmpFile(file)
 