Quickstart fails, valid config/config.yaml is required (and other mattdsteele questions) #52

Closed
mattdsteele opened this issue Jul 12, 2020 · 23 comments


@mattdsteele
Contributor

Ran through the Dockerized setup in the quickstart and was able to build the container successfully, but when starting it I hit this error:

root@ubuntu-s-1vcpu-1gb-sfo2-01:~/owncast# docker run -p 8080:8080 -p 1935:1935 -it owncast
INFO[2020-07-12T19:20:21Z] Owncast v0.0.0-localdev (unknown)
FATA[2020-07-12T19:20:21Z] ERROR: valid config/config.yaml is required.  Copy config-example.yaml to config.yaml and edit

I copied the sample config into config/config.yaml before building the container:

root@ubuntu-s-1vcpu-1gb-sfo2-01:~/owncast# ls -l config
total 16
-rw-r--r-- 1 root root 4941 Jul 12 19:10 config.go
-rw-r--r-- 1 root root 1441 Jul 12 19:18 config.yaml
-rw-r--r-- 1 root root  842 Jul 12 19:10 configUtils.go

The only line I changed was updating streamingKey to a different value. I also tried copying config-example.yaml directly, and that also failed.

I'll try the non-Docker approach next, but would definitely like to have this running in a container as an option!

For context, this is a stock DigitalOcean VPS running Ubuntu 20.04

@mattdsteele
Contributor Author

Another note: while building the project, it failed because my machine didn't have gcc:

...snip
go: downloading github.com/jackpal/go-nat-pmp v1.0.2
go: downloading golang.org/x/text v0.3.2
go: downloading github.com/jmespath/go-jmespath v0.3.0
# command-line-arguments
/usr/local/go/pkg/tool/linux_amd64/link: running gcc failed: exec: "gcc": executable file not found in $PATH

Probably worth mentioning that dependency in the quickstart.
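Since Owncast builds with cgo enabled, the Go linker shells out to a C toolchain. A pre-flight check along these lines (a sketch, assuming an Ubuntu host as in this thread) would surface the missing dependency before the build starts:

```shell
# cgo-enabled Go builds invoke a C compiler/linker, so check for one first.
if command -v gcc >/dev/null 2>&1; then
  echo "gcc found at $(command -v gcc)"
else
  echo "gcc missing; on Ubuntu install it with: sudo apt-get install build-essential"
fi
```

On a stock Ubuntu 20.04 VPS like the one above, `build-essential` pulls in gcc plus the related headers and linker bits.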

@mattdsteele
Contributor Author

After building from source and running, I'm getting the same error as when running Dockerized:

root@ubuntu-s-1vcpu-1gb-sfo2-01:~/owncast# go run main.go
INFO[2020-07-12T19:40:54Z] Owncast v0.0.0-localdev (unknown)
FATA[2020-07-12T19:40:54Z] ERROR: valid config/config.yaml is required.  Copy config-example.yaml to config.yaml and edit
exit status 1

Which makes me think the issue is in my config file; I'm just not sure what's causing it.

The full file I'm testing in `config/config.yaml`:

root@ubuntu-s-1vcpu-1gb-sfo2-01:~/owncast# cat config/config.yaml
publicHLSPath: webroot/hls
privateHLSPath: hls
ffmpegPath: /usr/bin/ffmpeg
webServerPort: 8080

instanceDetails:
  name: Owncast
  title: Owncast Demo Server
  summary: "This is brief summary of whom you are or what your stream is. You can read more about it at owncast.online.  You can edit this description in your config file."
  extraUserInfoFileName: "/static/content.md"

  logo:
    small: /img/logo128.png
    large: /img/logo128.png

  tags:
    - music
    - software
    - animal crossing

  # See documentation for full list of supported social links.  All optional.
  socialHandles:
    - platform: twitter
      url: http://twitter.com/owncast
    - platform: instagram
      url: http://instagram.biz/owncast
    - platform: facebook
      url: http://facebook.gov/owncast
    - platform: tiktok
      url: http://tiktok.cn/owncast
    - platform: soundcloud
      url: http://soundcloud.com/owncast

videoSettings:
  chunkLengthInSeconds: 4
  streamingKey: abc123
  offlineContent: static/offline.m4v # Is displayed when a stream ends

  streamQualities:
    # Transcode the video to a lower bitrate and resize
    - medium:
    videoBitrate: 800
    encoderPreset: superfast

files:
  maxNumberInPlaylist: 30

ipfs:
  enabled: false
  gateway: https://ipfs.io

s3:
  enabled: false
  endpoint: https://s3.us-west-2.amazonaws.com
  accessKey: ABC12342069
  secret: lolomgqwtf49583949
  region: us-west-2
  bucket: myvideo

@mattdsteele mattdsteele changed the title Docker quickstart fails, valid config/config.yaml is required Quickstart fails, valid config/config.yaml is required Jul 12, 2020
@gabek
Member

gabek commented Jul 12, 2020

You're the first to use the Dockerized setup for anything 🥇

I'll have to update the documentation and code; we recently changed the config to live in the root of the project instead of config/config.yaml, sorry about that! Hopefully just moving your config there will be good to go. Let me know!
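For anyone else landing here, the fix described above amounts to a single move. A minimal sketch (run in a scratch directory so the paths are self-contained; in a real owncast checkout the only command needed is the `mv`):

```shell
# The config now lives in the project root instead of config/config.yaml.
workdir=$(mktemp -d)              # scratch stand-in for an owncast checkout
mkdir -p "$workdir/config"
echo "webServerPort: 8080" > "$workdir/config/config.yaml"   # stand-in config
cd "$workdir"
mv config/config.yaml config.yaml   # the actual fix: move the file up one level
ls -l config.yaml
```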

gabek added a commit that referenced this issue Jul 12, 2020
@mattdsteele
Contributor Author

Success!

[screenshot]

I'll close this out and maybe submit a few PRs for what I mentioned above. We can then use this ticket for any other questions I run into. Thanks!

@mattdsteele
Contributor Author

General question as I'm provisioning hardware; about how much disk is required to serve locally? I'd prefer not to have to use cloud storage just to keep things simple, and I've got Cloudflare in front so bandwidth isn't as much of an issue.

Say I set two stream qualities, with bitrates at 1200 and 600. Out of the box my VPS has 17GB available; is that enough for a few hours of streaming?

Also, do the files in hls/ get purged automatically? I was looking at some from yesterday, but checked again and they were gone 😮

@mattdsteele mattdsteele changed the title Quickstart fails, valid config/config.yaml is required Quickstart fails, valid config/config.yaml is required (and other mattdsteele questions) Jul 14, 2020
@mattdsteele
Contributor Author

mattdsteele commented Jul 14, 2020

From Cloudflare's perspective, I believe the only files I need to exclude from the CDN and disable caching for are /hls/**/*.m3u8? I'll be embedding the stream on a custom page and won't be including stats or chat.

@gabek
Member

gabek commented Jul 14, 2020

So the idea is to keep files around only as long as they're useful. They get cleaned up at two different times:

  1. When the service starts up, it heavy-handedly wipes the working directories of HLS files; since there's no reference to those files in any playlists anymore, it just gets rid of them.
  2. When a single segment of video drops off a playlist because it's too far in the past, and is no longer referenced in the stream, the file gets removed from the local filesystem.

Because of this, the number of files actually on disk at any given time is quite low. The downside comes if you're looking to keep a complete back buffer of video rather than just focusing on live. You can adjust the number of segments that are "in play" by changing maxNumberInPlaylist in the config file: https://github.com/gabek/owncast/blob/master/doc/config-example-full.yaml#L66. In theory you could crank that up and give your viewers maxNumberInPlaylist * chunkLengthInSeconds seconds of video to seek backwards through, but that hasn't really been something we've tested, since we've been focusing on the live edge and keeping the resource footprint low.
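To put rough numbers on the disk question above, here's a back-of-envelope sketch using the values mentioned in this thread (1200 + 600 kbps qualities, 4-second chunks, maxNumberInPlaylist of 30; all assumptions, not measurements):

```shell
# Steady-state HLS disk window: (total bitrate / 8) * chunk length * playlist size
total_kbps=$((1200 + 600))   # sum of the two stream quality bitrates
chunk_seconds=4              # chunkLengthInSeconds
max_in_playlist=30           # maxNumberInPlaylist
window_bytes=$(( total_kbps * 1000 / 8 * chunk_seconds * max_in_playlist ))
echo "steady-state on-disk window: $(( window_bytes / 1000000 )) MB"
```

So with segments being purged as they age out of the playlists, the on-disk footprint stays in the tens of megabytes, far under 17GB, however long the stream runs.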

As for the cache, that should be correct. The correct "do not cache" cache-control headers are also set on m3u8 files so hopefully the CDN is respecting those, as well.

I'd love to hear about your experience embedding the video into another page, since it's not something we've done. I'd like to take your implementation tips and throw them into a document to use as a guide for other people. The catch is having a player that supports HLS. Our web UI uses VideoJS, but hls.js is an option too.

@mattdsteele
Contributor Author

Got it! I presume since I'm running in Docker it'll also get wiped when I rebuild the container. I'll probably pull that off into a volume just so it persists.

I'll be focused on live for this use case as well; after my event we may host an edited replay on the site, but that'll probably require some additional tweaks anyhow.

So far embedding has worked great! I'm using hls.js with their quickstart instructions, without any issues. Main gotcha was using native HLS on mobile Safari, which I should have caught. I'll write something up as I learn more.

@mattdsteele
Contributor Author

Just making a note; while testing streaming today I noticed Cloudflare was recording essentially 0% cache hits; nearly all requests for data files were being sent to the owncast instance.

Cloudflare's got a good post describing the problem with caching live video, as well as a solution they provide, but only for their paid Stream Delivery product.

Not sure what to do with this other than maybe note the limitations with CDNing live video in the README? I'll probably end up using S3 or the like, after making sure the egress charges are advantageous...

@gabek
Member

gabek commented Jul 15, 2020

Thanks for the update! That article is a good reference.

Like any other caching scenario, there has to be enough demand to make it effective, so I'm not surprised. Not to mention the variability of whether a file cached on a region A edge server is available yet on a region B edge server. You could try something like CloudFront to have a little more control over things, but I'm not sure it would be much better without sufficient demand.

Depending on what you think the demand for your stream will be it might be completely fine to let your instance be the origin for the video segments, but object storage does allow for some peace of mind.

@graywolf336
Contributor

I would be curious to see if KeyCDN works well. I've thought about trying this myself, but have yet to do it. https://www.keycdn.com/blog/hls-streaming

@mattdsteele
Contributor Author

@graywolf336 I set up KeyCDN and it seems to work OK, though I haven't found a good way to load test to see cache hits.

My naive approach has been to connect a few browsers and watch the bandwidth charts on my VPS. Any better approaches? I've seen one post using JMeter but want to see if there's any better tooling: https://www.ubik-ingenierie.com/blog/easy-and-realistic-load-testing-of-http-live-streaming-hls-with-apache-jmeter/

@mattdsteele
Contributor Author

mattdsteele commented Jul 19, 2020

One other note: I fetched the latest changes and attempted to rebuild the Docker container, and at least on my $5 DigitalOcean VPS it's now unable to build; I think it's running out of memory:

//snip
go: downloading github.com/huin/goupnp v1.0.0
go: downloading github.com/jackpal/go-nat-pmp v1.0.2
go: downloading github.com/cheekybits/genny v1.0.0
go: downloading github.com/francoispqt/gojay v1.2.13
go: downloading golang.org/x/text v0.3.2
go: downloading github.com/marten-seemann/qtls v0.9.1
go: downloading github.com/jmespath/go-jmespath v0.3.0
/usr/local/go/pkg/tool/linux_amd64/link: signal: killed
The command '/bin/sh -c CGO_ENABLED=1 GOOS=linux go build -a -installsuffix cgo -ldflags '-extldflags "-static"' -o owncast .' returned a non-zero code: 1

I'm using it outside a container and it builds fine, so no worries. But it's probably worth noting if anyone else tries to build the container on a 1GB instance.

Update - This actually fails anytime you try to build the binary, not just in Docker!

root@ubuntu-s-1vcpu-1gb-sfo2-01:~/owncast# go build main.go
/usr/local/go/pkg/tool/linux_amd64/link: signal: killed

I can still run it with go run main.go, but something about linking the actual binary appears to take a lot of memory?
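A pre-check along these lines (a sketch; the 2048MB threshold is an assumption, not a measured minimum) would at least warn before the linker gets OOM-killed on a small VPS; adding a swapfile is the usual workaround on a 1GB instance:

```shell
# Warn before building on low-memory Linux hosts, since the Go linker
# was OOM-killed ("signal: killed") on a 1GB VPS in this thread.
mem_mb=$(awk '/^MemTotal/ {print int($2 / 1024)}' /proc/meminfo)
if [ "$mem_mb" -lt 2048 ]; then
  echo "warning: ${mem_mb} MB RAM; 'go build' may be OOM-killed. Consider adding swap."
else
  echo "ok: ${mem_mb} MB RAM"
fi
```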

@gabek
Member

gabek commented Jul 19, 2020

Very odd. I'm not seeing any issues building it locally, and I've added a GitHub Action that builds the image and binary to try to catch issues like this; that seems to be working OK. So resources make sense as the cause, but it's strange that you can't build it yet you can run it.

Thanks for letting me know! I'll look into it and see what can be done. I'm assuming it has to do with the requirement of cgo now, but that shouldn't be the end of the world.

@mattdsteele
Contributor Author

Did some testing on my instance tonight; we watched a movie over it with a few folks (stats show it maxed at 5 viewers), all in the same city. Unfortunately it doesn't look like KeyCDN did a great job limiting the requests to the origin server:

[screenshot]

I pretty quickly saw a maxing out of bandwidth, and the feed got choppy for everyone:

[screenshot]

I reconfigured down to a single low-quality stream for the rest of the movie and it got through, but I'm wondering if I just happened to hit the limit for what local serving can support on my little VPS.

So I'm thinking I'll be trying the S3 approach next. DigitalOcean has an S3-compatible storage offering, so I'll give that a whirl and let you know how it goes.

Kind of a bummer that KeyCDN didn't do what it says on the tin, but I think switching storage providers will be easier than trying to debug the CDN 🤷

@gabek
Member

gabek commented Jul 26, 2020

I think the S3 approach is the way to go; it's less of a wildcard. You know for sure the files will be served from there.

I haven't tried the DigitalOcean object storage offering, so I'd love to hear how it works for you. I wrote some documentation on other providers if it's at all helpful: https://github.com/gabek/owncast/blob/master/doc/S3.md

The recommendation I can make is to find out how to expire files so you don't have a bunch of old segments sitting around taking up space.

@mattdsteele
Contributor Author

@gabek This isn't directly an Owncast question, but I noticed in this post you've used video passthrough for one of the streams.

What broadcaster and settings did you use for that? I've never been able to get a browser to connect with that stream. I'm using vanilla OBS and have tried both hardware (QSV) and software (x264) encoding.

Owncast settings:

    - full:
      videoPassthrough: true
      audioPassthrough: true

@gabek
Member

gabek commented Aug 1, 2020

I don't suggest you use videoPassthrough. I was defaulting to it and using it heavily myself without any issues, but while troubleshooting Restream I came to realize that not all broadcasts are created equal, and passthrough isn't always going to work. So I stopped using it myself and stopped putting it in the documentation and the example configs.

However, when I was using it, I was using both vanilla OBS and ffmpeg as broadcasting clients on a regular basis, with some iOS apps sprinkled in as well, and they all worked. But it's possible that the changes that went in to support Restream (new RTMP pipeline) somehow no longer allows for passthrough to work.

It's a bummer that there are so many variables that passthrough can't reliably work, since it makes a huge difference in CPU load, but as it stands it seems like you can't really trust a broadcast without forcing it through the transcoder first.

@mattdsteele
Contributor Author

Thanks for the info! Mostly I was hoping for a "compute-free" way to get an additional stream; I have no issues using the two transcoded ones I've configured.

@mattdsteele
Contributor Author

mattdsteele commented Aug 3, 2020

Did a final round of testing for my event, and things are looking good! We streamed for about an hour at three different bitrates, and it was mostly solid.

There was one instance of buffering I saw as a stream viewer, but it resumed 6 seconds later. In the owncast stdout I saw this message:

time="2020-08-03T02:05:50Z" level=error msg="failed to save the file to the chunk storage. open hls/2/stream-1596420192.ts: no such file or directory"

I'm serving the video files via S3-compatible storage, so my guess is the upload to the bucket failed? I didn't catch any other logs.

Anyhow, we're on schedule for a production use next weekend. Tune in to https://steele-codr.wedding/ at 3:30pm Central time on 8/8 for the livestream!

@gabek
Member

gabek commented Aug 3, 2020

Thanks for the update! Yeah, that's what that message means: either that segment never finished writing to disk fast enough to be uploaded, or there was just some blip in the actual upload. It does retry if the upload fails, so you would see that same error multiple times if it kept failing. However, it's possible that by the time it did upload, your client had already tried to pull it, so it ultimately resulted in that buffering. If you end up seeing it more often, I suggest moving to a faster encoder preset in your config (to the detriment of visual quality, however). But hopefully you won't have to do that.

I'm looking forward to taking part in your event virtually! This is all so super cool!

@mattdsteele
Contributor Author

A few follow-ups; the event went great! Analytics showed 47 folks used it at the max; the majority were on phones/tablets, so I'm glad we tested there :)

  1. One thing I need to investigate: before the event, we were using the server's web UI for testing, but it was failing to load any of the .ts files from S3 due to CORS errors. So far I've just got a photo of the error: https://steele.blue/static/47539cb8095f550ae189ae93060546c3/d2602/owncast-1.jpg

     I'm not sure what caused this; I believe the CORS headers were all set up with * settings. It wasn't catastrophic; my website was able to load the same files from a different domain, and it worked fine. I'll try to reproduce and open an issue.

  2. I wrote a couple of blog posts about my experiences, let me know if you've got any changes/suggestions:
  3. I'm also hoping to talk about the experience more at the next Barcamp (looks like it'll be virtual on 9/19) - if you're interested in working together on a talk, let me know!
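For anyone chasing the same CORS symptom: if the bucket rules turn out to be the culprit, an AWS-style S3 CORS configuration that lets any origin GET the segments looks like the sketch below. Whether DigitalOcean Spaces accepts this exact XML is an assumption; it follows the standard S3 CORS schema.

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3600</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```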

@gabek
Member

gabek commented Aug 17, 2020

Thanks for the follow-up!

I actually dropped by your Owncast instance the day of your wedding to check it out, saw the CORS errors, and thought "oh, he must have specifically locked down his CORS policy to only allow access to the video from his personal website". I guess that's not the case. I'm confused how "*" was working for your personal site but not others; there must be some explanation. Same (type of) player, same source of video.

Your blog posts are awesome, it's so cool that it's been helpful enough to you for you to want to tell others and share some real use cases. I really appreciate it! Your wedding was the first real event anybody has used Owncast for, and it's a big one. It's so cool to share it.

I'd be super into working together on a talk. Let me know what direction you were thinking of going in as far as content. Feel free to email me: gabek@real-ity.com.

gabek added a commit that referenced this issue Apr 26, 2022
fix: set thumbnail image to fixed size and fix label color