
HTTP upload - Akamai #939

Closed
Canta opened this issue Apr 28, 2021 · 12 comments
Labels: type: bug (Something isn't working correctly), status: archived (Archived and locked; will not be updated)

Comments

@Canta
Contributor
Canta commented Apr 28, 2021

System info

Operating System: Ubuntu 18.04.5 LTS (dockerized)
Shaka Packager Version: c1f64e5-release

Issue and steps to reproduce the problem

I'm struggling to push (actually PUT) a live stream to Akamai HTTP endpoints, based on these instructions.
It says there that the new HTTP PUT upload functionality was "Tested with ingesting encrypted DASH segments against Akamai MSL4 EPs". Yet several things are odd:

  • I see no example anywhere of how to push to Akamai using Shaka Packager.
  • I was able to push unencrypted HLS using just ffmpeg, but so far I can't with Shaka Packager. I have some logs regarding this, but I'd like to clear up some other things first.
  • While trying to send unencrypted HLS to Akamai with Shaka Packager, increasing the log level for the http_file module shows strange behavior, like calls to the Size() function when it's not available, or wrongly interpreted playlist URLs. I'm also confused about which parameters should get the put+ prefix.
  • This other link from Akamai says "Note: HDS and DASH live streams published from non-qualified encoders will be rejected", and Shaka Packager is not listed there. ffmpeg IS there, but only qualified for HLS. I haven't tried ffmpeg+DASH yet, but the message is worrying.
  • I believe one could just spoof a user agent or headers to look like another encoder, but I see no option in Shaka Packager to do such a thing.
  • I can see here, in @termoose's fork, that there actually WAS a string parameter for the packager's HTTP user agent.

So, with that in mind, before creating more specific tickets or running a million extra tests, I'd like to ask you for some clarification.
Can anybody confirm the packager's current ability to push HLS and/or MPEG-DASH to Akamai? A command-line example would be nice, in order to reproduce it.

Packager Command:

If you execute this, it works fine:

ffmpeg -y 
  -f lavfi -re -i "testsrc=size=320x240:r=30" 
  -f lavfi -re -i "anullsrc" 
  -pix_fmt yuv420p 
  -c:v h264 -preset ultrafast -b:v 100k -r 30 -flags +cgop -g 90 
  -c:a aac -b:a 64k -ac 2 -bsf:a aac_adtstoasc 
  -f hls 'http://[redacted].akamaientrypoint.net/[redacted]/teststream/master.m3u8'

This other combination of commands doesn't:

ffmpeg -y 
  -f lavfi -re -i "testsrc=size=320x240:r=30" 
  -f lavfi -re -i "anullsrc" 
  -pix_fmt yuv420p 
  -c:v h264 -preset ultrafast -b:v 100k -r 30 -flags +cgop -g 90 
  -c:a aac -b:a 64k -ac 2 -bsf:a aac_adtstoasc 
  -f mpegts 'udp://127.0.0.1:20011?pkt_size=1316&buffer_size=100000000'
packager 
'in=udp://127.0.0.1:20011?timeout=8000000&buffer_size=10000000,stream_selector=0,segment_template=put+http://[redacted].akamaientrypoint.net/[redacted]/teststream/720/720_$Time$.ts,playlist_name=720/720.m3u8' 
'in=udp://127.0.0.1:20011?timeout=8000000&buffer_size=10000000,stream_selector=1,segment_template=put+http://[redacted].akamaientrypoint.net/[redacted]/teststream/audio/audio_$Time$.ts,playlist_name=audio/audio.m3u8' 
--io_block_size 65536 
--io_cache_size 1000000 
--hls_master_playlist_output "http://[redacted].akamaientrypoint.net/[redacted]/teststream/master.m3u8" 
--hls_playlist_type LIVE 
--default_language=eng 
--dump_stream_info 
--vmodule=http_file=10

What is the expected result?

Shaka Packager should push to Akamai.

What happens instead?

Different combinations of URLs/settings (like adding or omitting the put+ prefix in the hls_master_playlist_output, segment_template, or playlist_name parameters) end up in different situations, but none of them work.

@termoose
Contributor

Hi. I can't vouch for what was merged into the master branch, as I haven't tested it myself yet. I'm running my own branch, which supports setting the user agent. You definitely need to set a user agent that isn't rejected by Akamai's ingest servers. I'm also not using put+http in the URL, but that might be required with how it works on the master branch.
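
For reference, the override itself comes down to one libcurl option. This is only a sketch of the idea (the user_agent_ member and its placement inside HttpFile::SetupRequest() are assumptions based on the patches later in this thread):

// Inside HttpFile::SetupRequest(), next to the other curl_easy_setopt calls:
if (!user_agent_.empty()) {
  // CURLOPT_USERAGENT sets the User-Agent header sent with every request.
  curl_easy_setopt(curl, CURLOPT_USERAGENT, user_agent_.c_str());
}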

@Canta
Contributor Author

Canta commented Apr 30, 2021

@termoose
Thank you, the status is definitely clearer to me now.

I would like to use this ticket to confirm, and maybe even fix, HTTP upload to Akamai using Google's master branch.
Would you please share a command line that works for your branch? You can ignore the user-agent part, as that's just a detail we all understand now. And, of course, you can redact the private parts, as I did in the ticket examples. What I'd like is an example of exactly how to set hls_master_playlist_output, segment_template, playlist_name, and so on. An MPEG-DASH example would also be fine. The point is: with working commands for your branch, I could debug the master branch by comparing against yours, and that way easily find which differences affect what. Without an example, I'm blind to details that could come from many places in the code; an example narrows down the debugging points.

As a side note, regarding Akamai rejecting user agents: I believe HLS should not be rejected by user-agent validation. Akamai's docs say that's the case only for HDS and MPEG-DASH. So maybe we don't even need to change the user agent for HLS. In any case, I'll run some tests and post the results here.

Thanks.

@Canta
Contributor Author

Canta commented Apr 30, 2021

I have been debugging this issue, and have some news.

Please take a look at this diff:

diff --git a/packager/file/http_file.cc b/packager/file/http_file.cc
index 8ba7b47..3f08fbb 100644
--- a/packager/file/http_file.cc
+++ b/packager/file/http_file.cc
@@ -218,7 +218,8 @@ Status HttpFile::CloseWithStatus() {
   const Status result = status_;
   LOG_IF(ERROR, !result.ok()) << "HttpFile request failed: " << result;
   delete this;
-  return result;
+  //return result;
+  return Status::OK;
 }
 
 bool HttpFile::Close() {

With that modification, Akamai push works.

The problem is as follows:

  1. The http_file module is implemented as a class derived from the more abstract File. This way, different functions (like "write the manifest/playlist" or "write the chunk/segment") treat HttpFile as a regular file.
  2. Yet HTTP differs from a local file in several ways: some operations are simply not available. This is actually explicit in http_file's code.
  3. The problem is that callers treating HttpFile as a File descendant correctly invoke those operations, HttpFile correctly returns errors because the operations are unavailable, and the error-checking routines then correctly abort the process because of those errors.

The underlying issue is, I believe, the wrong assumption that returning errors is the correct behaviour for this particular File descendant when a base operation is unavailable. That delegates the responsibility of error handling to caller routines, which is usually fine, yet packager's code is full of calls from third-party code (like Chromium). Precisely, I believe the MPD and HLS manifest/playlist writers are calling ImportantFileWriter::WriteFileAtomically, which will most likely never handle packager-specific errors.

I think it's OK to return errors when calling those functions, but I don't believe this should force error checks in so many scattered places: that looks like an unwanted side effect to me. So I believe we should consider a less aggressive implementation of the File interface for http_file. If we want to honor strict design guidelines, perhaps we could consider a second class with another name (which I frankly can't think of right now), or perhaps some configuration options for the HttpFile class.

For clarification, these are the "functionalities generating errors" I'm talking about (see the sketch after the logs):

[0429/202108:WARNING:file.cc(291)] Writing to http://[redacted].akamaientrypoint.net/[redacted]/teststream/720/720.m3u8 is not guaranteed to be atomic.
(...)
[0429/202108:ERROR:http_file.cc(220)] HttpFile request failed: 7 (HTTP_FAILURE): Failed writing received data to disk/application
[0429/202108:ERROR:file.cc(269)] Failed to close file 'http://[redacted].akamaientrypoint.net/[redacted]/teststream/720/720.m3u8', possibly file permission issue or running out of disk space.
[0429/202108:ERROR:media_playlist.cc(489)] Failed to write playlist to: http://[redacted].akamaientrypoint.net/[redacted]/teststream/720/720.m3u8
[0429/202108:ERROR:simple_hls_notifier.cc(261)] Failed to write playlist http://[redacted].akamaientrypoint.net/[redacted]/teststream/720/720.m3u8
[0429/202108:WARNING:hls_notify_muxer_listener.cc(243)] Failed to add new segment.



[0430/171228:VERBOSE2:http_file.cc(192)] Opening http://[redacted].akamaientrypoint.net/[redacted]/teststream/720/720_init.m4s
[0430/171228:ERROR:http_file.cc(239)] HttpFile does not support Size().
(...)
[0430/171234:ERROR:http_file.cc(219)] HttpFile request failed: 7 (HTTP_FAILURE): Failed writing received data to disk/application
[0430/171234:ERROR:demuxer.cc(356)] Failed to process sample 3 5 (FILE_FAILURE): Cannot close file http://[redacted].akamaientrypoint.net/[redacted]/teststream/720/720_12345.m4s, possibly file permission issue or running out of disk space.
(...)
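
The Size() case, for instance, reduces to a pattern like the following sketch (signature and return value are assumptions, reconstructed from the http_file.cc:239 log line above): the File interface expects a meaningful size, HTTP has none, so HttpFile signals failure, and generic callers treat that failure as fatal.

// Sketch: HttpFile cannot know the size of a remote resource.
int64_t HttpFile::Size() {
  LOG(ERROR) << "HttpFile does not support Size().";
  return -1;  // Callers treating HttpFile as a plain File see this as an error.
}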

So, how do you people think we should handle this?
@kqyang

@kqyang
Collaborator

kqyang commented May 3, 2021

@Canta Thanks for looking into it. Can you share the command you use?

It could be a bug in the handling of the returned data in HttpFile.

Can you try the patch below to see if it fixes the issue?

diff --git i/packager/file/http_file.cc w/packager/file/http_file.cc
index 8ba7b471..268a6a64 100644
--- i/packager/file/http_file.cc
+++ w/packager/file/http_file.cc
@@ -45,8 +45,14 @@ constexpr const int kMinLogLevelForCurlDebugFunction = 2;
 
 size_t CurlWriteCallback(char* buffer, size_t size, size_t nmemb, void* user) {
   IoCache* cache = reinterpret_cast<IoCache*>(user);
-  size_t length = cache->Write(buffer, size * nmemb);
-  VLOG(3) << "CurlWriteCallback length=" << length;
+  size_t length = size * nmemb;
+  if (cache) {
+    length = cache->Write(buffer, length);
+    VLOG(3) << "CurlWriteCallback length=" << length;
+  } else {
+    // For the case of HTTP Put, the returned data may not be consumed. Return
+    // the size of the data to avoid curl errors.
+  }
   return length;
 }
 
@@ -284,7 +290,8 @@ void HttpFile::SetupRequest() {
   curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);
   curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
   curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, &CurlWriteCallback);
-  curl_easy_setopt(curl, CURLOPT_WRITEDATA, &download_cache_);
+  curl_easy_setopt(curl, CURLOPT_WRITEDATA,
+                   method_ == HttpMethod::kPut ? nullptr : &download_cache_);
   if (method_ != HttpMethod::kGet) {
     curl_easy_setopt(curl, CURLOPT_READFUNCTION, &CurlReadCallback);
     curl_easy_setopt(curl, CURLOPT_READDATA, &upload_cache_);

You'll still see errors like the one below, but they can be ignored.

[0430/171228:ERROR:http_file.cc(239)] HttpFile does not support Size().

@kqyang kqyang added type: bug Something isn't working correctly and removed needs triage labels May 3, 2021
@shaka-bot shaka-bot added this to the v2.5 milestone May 3, 2021
@Canta
Contributor Author

Canta commented May 3, 2021

Thanks @kqyang :)
Several notes:

  1. Your patch works ok. I'll most likely make a PR later today. That fixes the HTTP upload bug.

  2. Here are my commands. They're mostly the same as in the ticket description, but they clarify the put+ question (no prefix, anywhere):

ffmpeg -loglevel error  \
-f lavfi -re -i "testsrc=size=1280x720:r=30" \
-f lavfi -re -i "anullsrc" \
-map 0:v:0 -c:v h264 -preset ultrafast -r 30 -b:v 3072000  \
-map 1:a:0 -c:a aac -ac 2 -b:a 64k \
-f mpegts udp://127.0.0.1:1234?pkt_size=1316


out/Debug/packager \
'in=udp://127.0.0.1:1234?buffer_size=10000000,stream_selector=0,segment_template=http://[redacted].akamaientrypoint.net/[redacted]/teststream/720/720_$Time$.m4s,init_segment=http://[redacted].akamaientrypoint.net/[redacted]/teststream/720/720_init.m4s,playlist_name=720/720.m3u8,drm_label=SD,bandwidth=3072000' \
--io_block_size 65536 \
--io_cache_size 1000000 \
--hls_master_playlist_output "http://[redacted].akamaientrypoint.net/[redacted]/teststream/master.m3u8" \
--hls_playlist_type LIVE \
--dump_stream_info \
--vmodule=http_file=2,file=3

  3. Thing is, this ticket is about "HTTP upload to Akamai", and while fixing the bug fixes HLS, there's still the MPEG-DASH issue. If you try to push MPEG-DASH using Shaka Packager, you get "bad request" from the server. This is intended, as Akamai has its own encoder homologation guidelines: link1, link2.
     I believe it's correct to handle that other detail in this ticket, not just the HTTP bug. With that in mind, there are two possible ways to HTTP-upload MPEG-DASH to Akamai: changing Shaka Packager's user agent to something Akamai will accept, or registering Shaka Packager in Akamai's list of qualified encoders. The latter seems to be the proper way to go (link2 mentions guidelines and a mail address for the procedure), but I believe that should be done by some formal Shaka Packager representative. I don't know whether the procedure is free of charge (it should be, IMO), nor what it involves. OTOH, we've lost the ability to change Shaka Packager's user agent via command-line parameter, I suppose for a reason. Perhaps that reason could be reconsidered for this use case (a sketch of such a flag follows this list).

  4. Shouldn't the "size" error be a warning instead?
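
Regarding point 3, a user-agent flag could be as small as the following sketch (gflags-style, as used elsewhere in the packager; the wiring from the flag into HttpFile is an assumption):

#include <gflags/gflags.h>

// Hypothetical flag; empty means keep the built-in default user agent.
DEFINE_string(user_agent, "",
              "Override the HTTP User-Agent string sent by the packager.");

// Sketch of the wiring inside HttpFile's setup:
//   if (!FLAGS_user_agent.empty())
//     user_agent_ = FLAGS_user_agent;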

@kqyang
Collaborator

kqyang commented May 3, 2021

@Canta Thanks for the quick response!

Your patch works ok. I'll most likely make a PR later today. That fixes the HTTP upload bug.

Great. Thanks!

If you try to push MPEG-DASH using Shaka Packager, you get "bad request" from the server. This is intended, as Akamai has its own encoder homologation guidelines: link1, link2.

Have you tried updating the hard-coded user agent to see if that fixes the issue? If not, can you try?

OTOH, we've lost the ability to change Shaka Packager's user agent via command-line parameter, I suppose for a reason. Perhaps that reason could be reconsidered for this use case.

Certainly. Would you like to create another PR to bring it back?

Shouldn't the "size" error be a warning instead?

Good point. I think it should just be a VLOG, i.e.

Change

  LOG(ERROR) << "HttpFile does not support Size().";

to

  VLOG(1) << "HttpFile does not support Size().";

You are welcome to submit another PR to update it.

Thanks!

Canta pushed a commit to Canta/shaka-packager that referenced this issue May 4, 2021
…ject#939 :

   * Fixing a bug regarding "content length" and "file size" that was impeding normal HTTP operations. See issue comment 4.
   * Adding back the previously lost ability to change HTTP User Agent from command line parameter (mostly taken from termoose's code).
   * Changing the "HttpFile does not support Size()" message to VLOG(1) loglevel.
@Canta
Contributor Author

Canta commented May 4, 2021

@kqyang

Couldn't do it yesterday, but here's the PR with the following modifications:

  • Fix for the HTTP size bug.
  • Reimplementation of the user-agent override via command-line argument.
  • Changed the log level for the "size not implemented" message.

Of course, I've already tested the changes, without any new issues.

Have you tried updating the hard-coded user agent to see if that fixes the issue? If not, can you try?

I did, but I don't actually have any qualified user agent available, so I still don't have a successful MPEG-DASH test.

I've started talks with Akamai support about how to handle this, as I don't think it's sustainable to just "lie" about the user agent: changing it is fine for testing and other use cases, but the issue here is Akamai's policies, not any Shaka Packager technical problem. Shaka does what it does just fine; its user agent should not be its qualification criterion.

BTW, I don't want to be picky, but this ticket is good-enough context for this other detail: I believe Shaka Packager's default HTTP user agent should change.
Right now it's shaka-packager-http-fetch/1.0. Yet we're not doing any "fetch" with HttpFile in these use cases. I believe that "fetch" should change to something along the lines of "requests", "requester", "module", or similar. Because "we're fetching a file through HTTP with this class" was exactly the mindset behind the bug that was blocking HTTP uploads when I created this ticket; the current user agent is based on the wrong mental model for the class.
I haven't changed the default user agent yet because I wanted your opinion first.

@kqyang
Collaborator

kqyang commented May 4, 2021

@Canta Thanks for the PR! Appreciated.

I believe that "fetch" should change to something in the lines of "requests", "requester", "module", or something like that. Because "we're doing a fetch of a file through HTTP with this class" was exactly the idea behind the bug that was blocking HTTP uploads

Good catch!

How about changing it to ShakaPackager/<version>?

#include "packager/version/version.h"

...
user_agent_ = "ShakaPackager/" + GetPackagerVersion();

The file.gyp needs to be updated to include an additional dependency: '../version/version.gyp:version'. A sketch follows.
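
Roughly, the file.gyp change would look like this (surrounding target structure abbreviated, shown only as a sketch):

# In packager/file/file.gyp, inside the 'file' target:
'dependencies': [
  # ... existing dependencies ...
  '../version/version.gyp:version',  # provides GetPackagerVersion()
],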

@kqyang kqyang closed this as completed in 37b16b4 May 5, 2021
@kqyang kqyang reopened this May 5, 2021
Canta pushed a commit to Canta/shaka-packager that referenced this issue May 6, 2021
@Canta
Contributor Author

Canta commented May 6, 2021

@kqyang
Please take a look at that last PR. I had some trouble building (I wasn't familiar with the gyp files) and ran into type-conversion issues, but that commit is tested and working.

Canta pushed a commit to Canta/shaka-packager that referenced this issue May 7, 2021
@kqyang kqyang closed this as completed in 8515a66 May 7, 2021
@Canta
Contributor Author

Canta commented May 12, 2021

@kqyang
I'm still dealing with details of Akamai HTTP upload for MPEG-DASH. I would like to ask you for some guidance, so we can decide whether this calls for further modifications.

These are Akamai's guidelines for MPEG-DASH:
https://learn.akamai.com/en-us/webhelp/media-services-live/media-services-live-encoder-compatibility-testing-and-qualification-guide-v4.0/GUID-986421C4-F48B-441C-B986-E8E264F2178E.html

Here's the thing I would like to ask you about:
https://learn.akamai.com/en-us/webhelp/media-services-live/media-services-live-encoder-compatibility-testing-and-qualification-guide-v4.0/GUID-40277CC9-50D5-4D5F-B27D-2936B035269F.html

I've done some tests, and Shaka pushes the chunks before the manifest. That's fine by me, and actually makes sense, but it seems to go against Akamai's guidelines. So I would like to add a command-line flag along the lines of --generate_premature_manifest, so Shaka can publish a manifest sooner than the first chunks. But I'm not sure about this, and some insight would be welcome. Here are my main doubts:

  1. Do you think a parameter like that is OK for Shaka Packager?

  2. Where's the right place for this logic in packager's code? I'm unsure among app/mpd_generator.cc, mpd/base/mpd_builder.cc, and some other places.

Thanks.

@Canta
Contributor Author

Canta commented May 12, 2021

Another thing regarding Akamai's qualification procedure. Take a look at this link:
https://learn.akamai.com/en-us/webhelp/media-services-live/media-services-live-encoder-compatibility-testing-and-qualification-guide-v4.0/GUID-5728DEC8-5912-4191-85A4-87BAF5F91C14.html

I was thinking of starting the qualification process myself. Yet I don't think that's appropriate, as the guidelines ask for business formalities I just can't provide: I'm not a Google or Shaka representative. Presenting my contact details as the formal business representative for this project would be... well... frankly, kind of scammy.

Is there any chance that some Shaka Packager representative starts that process? I could act as a technical contact for Akamai, yet I don't feel comfortable telling them something like "yeah, I'm the Shaka Packager representative": I'm just not that person.

@kqyang
Collaborator

kqyang commented May 16, 2021

it seems to go against Akamai's guidelines

Interesting. Is it a hard requirement? Does it allow the manifest to be updated later?

So I would like to add a command-line flag along the lines of --generate_premature_manifest, so Shaka can publish a manifest sooner than the first chunks.

A proper place to do it is in SimpleMpdNotifier::NotifyNewContainer (https://github.com/google/shaka-packager/blob/master/packager/mpd/base/simple_mpd_notifier.cc#L71). WriteMpdToFile can be called there to write the DASH MPD.
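
A minimal sketch of that idea (the flag name is hypothetical, and member names and locking are simplified; the real method does more work):

// In packager/mpd/base/simple_mpd_notifier.cc:
bool SimpleMpdNotifier::NotifyNewContainer(const MediaInfo& media_info,
                                           uint32_t* container_id) {
  // ... existing logic registering the AdaptationSet/Representation ...

  if (FLAGS_generate_premature_manifest) {  // hypothetical flag
    // Flush an initial MPD right away, so the manifest reaches the ingest
    // endpoint before the first media segment does.
    return WriteMpdToFile(output_path_, mpd_builder_.get());
  }
  return true;
}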

Another thing regarding Akamai's qualification procedure.

I'm not sure of the best way to approach this. Shaka Packager is an open-source project, not a product from Google.

I'll check with Google open source team / Google legal on how we can approach it.

For now, feel free to tell Akamai that you or your company is using the open-source Shaka Packager, and ask whether it can be allowed on Akamai's system.

nvincen pushed a commit to nvincen/shaka-packager that referenced this issue Jun 8, 2021
- Do not write the HTTP PUT response to the cache, which could overflow the cache buffer since the response is never consumed.
- VLOG(1) instead of LOG(ERROR) on HttpFile::Size() as it can be called during normal code execution.
- Add a command line flag `--user_agent` to allow users to specify their custom user agent string.

Fixes shaka-project#939.
nvincen pushed a commit to nvincen/shaka-packager that referenced this issue Jun 8, 2021
@shaka-bot shaka-bot added the status: archived Archived and locked; will not be updated label Jul 6, 2021
@shaka-project shaka-project locked and limited conversation to collaborators Jul 6, 2021