Commit

Merge pull request #22 from games-on-whales/docs
Documentation
ABeltramo committed Jun 29, 2021
2 parents 00793be + e1d93ac commit 98e5080
Showing 16 changed files with 304 additions and 130 deletions.
136 changes: 6 additions & 130 deletions README.md
@@ -1,8 +1,10 @@
# GOW - Games on Whales [![Discord](https://img.shields.io/discord/856434175455133727.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2)](https://discord.gg/kRGUDHNHt2)

Stream games (and GUI) over Docker with HW acceleration and low latency
Stream games (and GUI) running on Docker with HW acceleration and low latency!

![Screenshot of GOW running](screen/GOW-running.jpg)
Read more on our [documentation](docs/README.md)

![Screenshot of GOW running](docs/img/GOW-running.jpg)

## Quickstart

@@ -13,134 +15,8 @@ docker-compose pull
docker-compose up
```

Connect over Moonlight by manually adding the IP address of the PC running the Docker containers. To validate the PIN you can use the Sunshine web interface at `https://<IP>:47990/` (username: `sunshine`; the password is auto-generated on startup, check the Docker logs) or directly call `curl <IP>:47989/pin/<PIN>`.

From Moonlight, open the `Desktop` app; from there you should be able to see your X11 apps running!


## RetroArch first time configuration

> Using the keyboard you can move with the arrow keys and go back to the previous menu by pressing Backspace.

From the **Main Menu** > **Online Updater** select:
- Update Core Info Files
- Update assets

Press `F` to toggle fullscreen.
Wait, are you seriously running code from the internet?

It's dangerous out there! Make sure to check out the [documentation](docs/README.md) first!

## GPU HW acceleration

> **TESTING**: the following is still under development

### Nvidia GPUs with `nouveau` drivers

Make sure that the host isn't using the proprietary drivers but the open source `nouveau` drivers:
```console
$ sudo lshw -class video | grep driver=
configuration: driver=nouveau latency=0
```

Double check that the GPU card is correctly listed under `/dev/dri/`:
```console
$ ls -la /dev/dri/
total 0
drwxr-xr-x 3 root root 100 Jun 20 09:47 .
drwxr-xr-x 17 root root 3100 Jun 20 10:33 ..
drwxr-xr-x 2 root root 80 Jun 20 09:47 by-path
crw-rw---- 1 root video 226, 0 Jun 20 09:47 card0
crw-rw---- 1 root render 226, 128 Jun 20 09:47 renderD128
```

### Nvidia GPUs with proprietary drivers

You can see if your host is using the proprietary driver using `lshw`:
```console
$ lshw -class video | grep -i driver
configuration: driver=nvidia latency=0
```

In order to make use of your GPU inside docker containers, you'll need to set up the [NVIDIA Container Toolkit](https://github.com/NVIDIA/nvidia-docker).

Once that's done, before running the containers you should add the following environment variables to the docker-compose file:

```yaml
environment:
NVIDIA_VISIBLE_DEVICES: GPU-[uuid] # Replace [uuid] (see the instructions)
NVIDIA_DRIVER_CAPABILITIES: utility,graphics,video,display
```

To get the correct UUID for your GPU, use the `nvidia-container-cli` command:
```console
$ sudo nvidia-container-cli --load-kmods info
NVRM version: [version]
CUDA version: 11.3

Device Index: 0
Device Minor: 0
Model: NVIDIA GeForce [model]
Brand: GeForce
GPU UUID: GPU-[uuid]
Bus Location: 00000000:0a:00.0
Architecture: 7.5
```

#### Xorg drivers

Although the NVIDIA Container Toolkit automatically provides most of the drivers needed to use the GPU inside a container, Xorg is _not_ officially supported. This means that the runtime will not automatically map in the specific drivers needed by Xorg.

There are two libraries needed by Xorg: `nvidia_drv.so` and `libglxserver_nvidia.so.[version]`. It is preferred to map these into the container as bind volumes from the host, because this guarantees that the version will exactly match between the container and the host. Locate the two modules and add a section like this to the `xorg` service in your `docker-compose.yml`:
```yaml
volumes:
- /path/to/nvidia_drv.so:/nvidia/xorg/nvidia_drv.so:ro
- /path/to/libglxserver_nvidia.so.[version]:/nvidia/xorg/libglxserver_nvidia.so:ro
```

Be sure to replace `[version]` with the correct version number from the `nvidia-container-cli` command above.

Some common locations for `nvidia_drv.so` include:
* `/usr/lib64/xorg/modules/drivers/` (Unraid)
* `/usr/lib/x86_64-linux-gnu/nvidia/xorg/` (Ubuntu 20.04)

Some common locations for `libglxserver_nvidia.so.[version]` include:
* `/usr/lib64/xorg/modules/extensions/` (Unraid)
* `/usr/lib/x86_64-linux-gnu/nvidia/xorg/` (Ubuntu 20.04)

If you don't want to do this, or if you can't find the driver on your host for some reason, the container will attempt to install the correct version for you automatically. However, there is no guarantee that it will be able to find a version that exactly matches the driver version on your host.

If for some reason you want to skip the entire process and just assume the driver is already installed, you can do that too:
```yaml
environment:
SKIP_NVIDIA_DRIVER_CHECK: 1
```

## Troubleshooting

You can access RetroArch logs at `~/retroarch/retroarch.log`.

### Error: Could not create Sunshine Mouse: No such file or directory

Make sure that `/dev/uinput` is present on the host and that it has the correct permissions:

```console
$ ls -la /dev/uinput
crw-rw---- 1 $USER input 10, 223 Jun 9 08:57 /dev/uinput # Check that $USER is not root but your current user
```

Try following this: https://github.com/chrippa/ds4drv/issues/93#issuecomment-265300511
(On Debian I had to modify `/etc/modules-load.d/modules.conf`; adding `/etc/modules-load.d/uinput.conf` didn't trigger anything for me.)

Non-permanent fix:
```console
sudo chmod 0660 /dev/uinput
```
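
To make the fix permanent, a udev rule can set the permissions at every boot. This is a sketch, assuming your distribution uses udev and that the `input` group from the `ls` output above is the one you want; the rule file name is arbitrary:

```console
$ echo 'KERNEL=="uinput", GROUP="input", MODE="0660"' | sudo tee /etc/udev/rules.d/60-uinput.rules
$ sudo udevadm control --reload-rules && sudo udevadm trigger
```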

## Docker build

You can either build the docker image or use the pre-built one available at [DockerHub](https://hub.docker.com/u/gameonwhales).

To build it locally run:

```console
docker-compose build
```
33 changes: 33 additions & 0 deletions docs/README.md
@@ -0,0 +1,33 @@
# Documentation

If something is missing here, feel free to reach out to us on [Discord](https://discord.gg/kRGUDHNHt2) or open a [discussion](https://github.com/games-on-whales/gow/discussions/new) here on GitHub.

## What is Games on Whales?

- [Overview](overview.md)
- [Components overview](components-overview.md)
- [Roadmap](roadmap.md)

## Running GOW

- GNU/Linux
- I have a headless server (CLI only)
- [Debian/Ubuntu](debian-instructions.md)
- [Unraid](https://github.com/games-on-whales/unraid-plugin)
- [I already have a Desktop environment](desktop-instructions.md)
- [Windows](https://github.com/games-on-whales/gow/issues/13)
- OSX

## HW Acceleration

- [Nvidia GPU](nvidia.md)
- [iGPU](https://github.com/games-on-whales/gow/issues/21)

## Troubleshooting

- [Common errors](troubleshooting.md)

## Contributing

- [How can I help?](contributing.md)
- [Building manually](docker-build.md)
57 changes: 57 additions & 0 deletions docs/components-overview.md
@@ -0,0 +1,57 @@
# Components Overview

Make sure to read the [overview](overview.md) section first to get a grasp of the idea behind GOW.

<p align="center">
<img width="500" src="img/gow-diagram.svg">
</p>

GOW is a composition of Docker containers that enable users to stream graphical applications to Moonlight clients.

We wrap each individual piece of software, together with its dependencies, into a single Docker image, and we use [`docker-compose`](https://docs.docker.com/compose/) to manage the composition.

## Sunshine

[Sunshine](https://github.com/loki-47-6F-64/sunshine) is the heart of this system: it's the streaming host and it's in charge of:
- Encoding the graphical environment (`Xorg`) and audio (`PulseAudio`) into a video that will be streamed to [Moonlight](https://moonlight-stream.org/) clients
- This process can be HW accelerated using [`VAAPI`](https://en.wikipedia.org/wiki/Video_Acceleration_API) on compatible HW
- Translating remote inputs into local input devices (i.e. keyboard, mouse, joypad)
- This is achieved by using the [`uinput`](https://www.kernel.org/doc/html/v4.12/input/uinput.html) kernel module

### uinput

`uinput` makes it possible to emulate virtual input devices. It's required by Sunshine, and it's the one and only hard requirement on the host machine's kernel.

Most Linux distributions already ship it; you'll find it in place on Ubuntu or Debian, for example.

If it's not there already, it might be difficult to compile from scratch since it's part of the Linux kernel. We try to add support for platforms that don't ship it; for example, [we are working on a plugin for Unraid](https://github.com/games-on-whales/unraid-plugin).

If you have issues with inputs (mouse, joypad) while streaming using GOW, it's very likely that something is wrong with `uinput`.
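
Since `uinput` problems are a common source of input issues, a quick sanity check on the host can save time. A minimal sketch (the `modprobe` hint in the fallback message is an assumption about your setup):

```shell
# Check whether the uinput device node exists; takes an optional
# path override so the check itself can be exercised anywhere.
check_uinput() {
  if [ -e "${1:-/dev/uinput}" ]; then
    echo "uinput is available"
  else
    echo "uinput is missing; try: sudo modprobe uinput"
  fi
}

check_uinput
```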


## Xorg + PulseAudio

These two components are in charge of *Display* and *Audio*, respectively.

- If your OS comes with a [desktop environment](https://en.wikipedia.org/wiki/Desktop_environment) already, you can use that instead of running it over Docker.
- If you are running a [headless](https://en.wikipedia.org/wiki/Headless_computer) system you'll need them in order to run graphical applications; you can use our Docker images for that.

While PulseAudio runs just fine without a real sound device, Xorg can (and should) be HW accelerated using a GPU. That's the main reason why we chose Xorg over [`Xvfb`](https://en.wikipedia.org/wiki/Xvfb): while running the full Xorg server is more complicated, the benefits of HW acceleration are too big to be dismissed.

## GUIs

Graphical applications run easily on top of Xorg and PulseAudio; that's how most desktop environments work!

<p align="center">
<img width="300" src="img/gui-overview.svg">
</p>

Sharing [sockets](https://en.wikipedia.org/wiki/Unix_domain_socket) between containers is the mechanism that enables proper isolation. Instead of having one big Docker image that installs and runs all of this software together, we can decouple the components and share only a communication channel.

This means that it's very simple to build a Docker container for any given GUI application, and that same container will work both on **GOW** and on a normal *Desktop Environment*, giving users a high degree of freedom in how to use them.
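
As a sketch of this mechanism (the service names, image names, and display number below are illustrative assumptions, not GOW's actual compose file), two services can share the X11 socket directory through a named volume:

```yaml
services:
  xorg:
    image: example/xorg          # hypothetical image name
    volumes:
      - xorg-socket:/tmp/.X11-unix
  gui-app:
    image: example/gui-app       # hypothetical image name
    environment:
      DISPLAY: :99               # must match the display Xorg runs on
    volumes:
      - xorg-socket:/tmp/.X11-unix

volumes:
  xorg-socket:
```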

## GPU

A GPU is not required to run any of this, but it's highly recommended.

Sharing a GPU across Docker containers is possible, and it's generally done by sharing the [DRM devices (`/dev/dri/cardX`)](https://en.wikipedia.org/wiki/Direct_Rendering_Manager). As always there are exceptions, and we have specific instructions for [Nvidia cards](nvidia.md).
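
For cards that follow the standard DRM route, sharing the devices is a couple of lines in `docker-compose.yml`. A minimal sketch (device names depend on your host; check `ls -la /dev/dri/`):

```yaml
services:
  xorg:
    devices:
      - /dev/dri/card0
      - /dev/dri/renderD128
```
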
11 changes: 11 additions & 0 deletions docs/contributing.md
@@ -0,0 +1,11 @@
# How can I help?

:tada: First off, thanks for taking the time to contribute! :tada:

There are many ways to contribute to this project!

Just by running it and reporting issues (or even letting us know everything works out of the box) you are helping us!

You don't have to code or understand all the little details of how everything works; helping us maintain and expand the documentation is another great way to help. Asking questions in the public [discussion board](https://github.com/games-on-whales/gow/discussions) is another great way to share knowledge with others.

Feel free to join our [Discord server](https://discord.gg/kRGUDHNHt2) if you have any questions, need any help, or would like to join us on our journey!
21 changes: 21 additions & 0 deletions docs/debian-instructions.md
@@ -0,0 +1,21 @@
# Debian/Ubuntu instructions

Make sure to check out the [Overview](overview.md) first.

## Quickstart

```console
git clone https://github.com/games-on-whales/gow.git
cd gow
docker-compose pull
docker-compose up
```

Connect over Moonlight by manually adding the IP address of the PC running the Docker containers. To validate the PIN you can use the Sunshine web interface at `https://<IP>:47990/` (username: `sunshine`; the password is auto-generated on startup, check the Docker logs) or directly call `curl <IP>:47989/pin/<PIN>`.

From Moonlight, open the `Desktop` app; from there you should be able to see your X11 apps running!

## Next steps

- Check out how to [configure RetroArch](retroarch-first-start.md)
- Head [back to the Documentation](README.md) to configure and use your GPU (if you have one)
3 changes: 3 additions & 0 deletions docs/desktop-instructions.md
@@ -0,0 +1,3 @@
# Desktop instructions

There's not much content in here yet, huh?
9 changes: 9 additions & 0 deletions docs/docker-build.md
@@ -0,0 +1,9 @@
# Docker build

You can either build the docker image or use the pre-built one available at [DockerHub](https://hub.docker.com/u/gameonwhales).

To build it locally run:

```console
docker-compose build
```
File renamed without changes
3 changes: 3 additions & 0 deletions docs/img/gow-diagram.svg
Binary file added docs/img/gow-logo.png
3 changes: 3 additions & 0 deletions docs/img/gui-overview.svg
84 changes: 84 additions & 0 deletions docs/nvidia.md
@@ -0,0 +1,84 @@
# GPU HW acceleration

> **TESTING**: the following is still under development

## Nvidia GPUs with `nouveau` drivers

Make sure that the host isn't using the proprietary drivers but the open source `nouveau` drivers:
```console
$ sudo lshw -class video | grep driver=
configuration: driver=nouveau latency=0
```

Double check that the GPU card is correctly listed under `/dev/dri/`:
```console
$ ls -la /dev/dri/
total 0
drwxr-xr-x 3 root root 100 Jun 20 09:47 .
drwxr-xr-x 17 root root 3100 Jun 20 10:33 ..
drwxr-xr-x 2 root root 80 Jun 20 09:47 by-path
crw-rw---- 1 root video 226, 0 Jun 20 09:47 card0
crw-rw---- 1 root render 226, 128 Jun 20 09:47 renderD128
```

## Nvidia GPUs with proprietary drivers

You can see if your host is using the proprietary driver using `lshw`:
```console
$ lshw -class video | grep -i driver
configuration: driver=nvidia latency=0
```

In order to make use of your GPU inside docker containers, you'll need to set up the [NVIDIA Container Toolkit](https://github.com/NVIDIA/nvidia-docker).

Once that's done, before running the containers you should add the following environment variables to the docker-compose file:

```yaml
environment:
NVIDIA_VISIBLE_DEVICES: GPU-[uuid] # Replace [uuid] (see the instructions)
NVIDIA_DRIVER_CAPABILITIES: utility,graphics,video,display
```

To get the correct UUID for your GPU, use the `nvidia-container-cli` command:
```console
$ sudo nvidia-container-cli --load-kmods info
NVRM version: [version]
CUDA version: 11.3

Device Index: 0
Device Minor: 0
Model: NVIDIA GeForce [model]
Brand: GeForce
GPU UUID: GPU-[uuid]
Bus Location: 00000000:0a:00.0
Architecture: 7.5
```
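
To plug the UUID into `NVIDIA_VISIBLE_DEVICES` without copying it by hand, the token can be extracted from that output. A small sketch (it simply matches the `GPU-<uuid>` pattern, so it assumes the output format shown above):

```shell
# Extract the first GPU-<uuid> token from `nvidia-container-cli info` output.
gpu_uuid() {
  grep -oE 'GPU-[0-9a-fA-F-]{36}' | head -n 1
}

# On a real host you would pipe the actual command into it:
#   sudo nvidia-container-cli --load-kmods info | gpu_uuid
```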

### Xorg drivers

Although the NVIDIA Container Toolkit automatically provides most of the drivers needed to use the GPU inside a container, Xorg is _not_ officially supported. This means that the runtime will not automatically map in the specific drivers needed by Xorg.

There are two libraries needed by Xorg: `nvidia_drv.so` and `libglxserver_nvidia.so.[version]`. It is preferred to map these into the container as bind volumes from the host, because this guarantees that the version will exactly match between the container and the host. Locate the two modules and add a section like this to the `xorg` service in your `docker-compose.yml`:
```yaml
volumes:
- /path/to/nvidia_drv.so:/nvidia/xorg/nvidia_drv.so:ro
- /path/to/libglxserver_nvidia.so.[version]:/nvidia/xorg/libglxserver_nvidia.so:ro
```

Be sure to replace `[version]` with the correct version number from the `nvidia-container-cli` command above.

Some common locations for `nvidia_drv.so` include:
* `/usr/lib64/xorg/modules/drivers/` (Unraid)
* `/usr/lib/x86_64-linux-gnu/nvidia/xorg/` (Ubuntu 20.04)

Some common locations for `libglxserver_nvidia.so.[version]` include:
* `/usr/lib64/xorg/modules/extensions/` (Unraid)
* `/usr/lib/x86_64-linux-gnu/nvidia/xorg/` (Ubuntu 20.04)
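
If your distribution isn't listed, a quick search over the usual library prefixes can locate both modules. A sketch (the example search roots are the common locations above; add your distro's paths as needed):

```shell
# Search the given directories for the two modules Xorg needs.
find_nvidia_xorg_libs() {
  for root in "$@"; do
    [ -d "$root" ] || continue
    find "$root" \( -name 'nvidia_drv.so' -o -name 'libglxserver_nvidia.so.*' \)
  done
}

# Example invocation with the common locations listed above:
find_nvidia_xorg_libs /usr/lib64/xorg /usr/lib/x86_64-linux-gnu/nvidia
```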

If you don't want to do this, or if you can't find the driver on your host for some reason, the container will attempt to install the correct version for you automatically. However, there is no guarantee that it will be able to find a version that exactly matches the driver version on your host.

If for some reason you want to skip the entire process and just assume the driver is already installed, you can do that too:
```yaml
environment:
SKIP_NVIDIA_DRIVER_CHECK: 1
```
