I/O Timeout on routes to container services #144

Closed
dmlogv opened this issue Jul 15, 2022 · 1 comment


dmlogv commented Jul 15, 2022

Hi there! I'm trying to prettify my OpenMediaVault installation (Debian-based) with ReProxy, to tidy up unhandy service port bindings, e.g.:

      omv.nas.local -> nas.local:8080
portainer.nas.local -> nas.local:9000
    gitea.nas.local -> docker/gitea:3000
 miniflux.nas.local -> docker/miniflux:8080
     plex.nas.local -> docker/plex:32400

and so on.

Static routes work fine, but I haven't been able to get the routes into containers working for a couple of days.

ReProxy logs:

2022/07/15 13:14:56.403 [DEBUG] {provider/docker.go:331 provider.(*Docker).listContainers} running container added, {ID:797366b7450b6e29cd5984e9470723f14daec9a6ab13af0014269dd7dfb93124 Name:miniflux State:running Labels:map[com.docker.compose.config-hash:b6334a769b54411ef9d8e64c11a75ea5bea2ecd7f11f97270424db20146cee6d com.docker.compose.container-number:1 com.docker.compose.oneoff:False com.docker.compose.project:miniflux com.docker.compose.project.config_files:/data/compose/2/docker-compose.yml com.docker.compose.project.working_dir:/data/compose/2 com.docker.compose.service:miniflux com.docker.compose.version:1.27.4 org.opencontainers.image.description:Miniflux is a minimalist and opinionated feed reader org.opencontainers.image.documentation:https://miniflux.app/docs/ org.opencontainers.image.licenses:Apache-2.0 org.opencontainers.image.source:https://github.com/miniflux/v2 org.opencontainers.image.title:Miniflux org.opencontainers.image.url:https://miniflux.app org.opencontainers.image.vendor:Frédéric Guillot reproxy.dest:/@1 reproxy.port:8080 reproxy.route:^/(.*) reproxy.server:rss.nas.local] TS:2022-07-14 22:28:34 +0300 MSK IP:172.20.0.2 Ports:[8080 8080]}
2022/07/15 13:14:56.403 [DEBUG] {provider/docker.go:331 provider.(*Docker).listContainers} running container added, {ID:8316c095dc2aea5729c99aa5f6c51fdd4b74f82aebb55d75815fd3fbc931a0fc Name:miniflux_db State:running Labels:map[com.docker.compose.config-hash:4b5cdb16019646e2ddefcd53ba6edbc596b468564f350a448623af584db11d9f com.docker.compose.container-number:1 com.docker.compose.oneoff:False com.docker.compose.project:miniflux com.docker.compose.project.config_files:/data/compose/2/docker-compose.yml com.docker.compose.project.working_dir:/data/compose/2 com.docker.compose.service:db com.docker.compose.version:1.27.4 reproxy.exclude:true] TS:2022-07-14 11:43:36 +0300 MSK IP:172.20.0.3 Ports:[5432]}
2022/07/15 13:14:56.403 [DEBUG] {provider/docker.go:331 provider.(*Docker).listContainers} running container added, {ID:fbdba0d84b9b21780d23c11a95acd873bbfb8735e993843afeeaf121bada106a Name:gitea State:running Labels:map[com.docker.compose.config-hash:113816a58caf946e83ea9ef581502d965aed87328c4e94d5d4ee1493f7a63731 com.docker.compose.container-number:1 com.docker.compose.oneoff:False com.docker.compose.project:gitea com.docker.compose.project.config_files:/data/compose/5/docker-compose.yml com.docker.compose.project.working_dir:/data/compose/5 com.docker.compose.service:server com.docker.compose.version:1.27.4 maintainer:maintainers@gitea.io org.opencontainers.image.created:2022-06-22T00:17:02Z org.opencontainers.image.revision:710a1419fa58ac8b8d8d981eaf969db4be34dc69 org.opencontainers.image.source:https://github.com/go-gitea/gitea.git org.opencontainers.image.url:https://github.com/go-gitea/gitea reproxy.dest:/@1 reproxy.port:3000 reproxy.route:^/(.*) reproxy.server:git.nas.local] TS:2022-07-14 11:25:29 +0300 MSK IP:172.21.0.2 Ports:[22 22 3000 3000]}
2022/07/15 13:14:56.404 [DEBUG] {provider/docker.go:318 provider.(*Docker).listContainers} skip container plex, no ip on defined networks
2022/07/15 13:14:56.404 [DEBUG] {provider/docker.go:331 provider.(*Docker).listContainers} running container added, {ID:cda94259debecc37c1bf0878d1f3b58081302a594b9daab9f860a666389f0080 Name:portainer State:running Labels:map[com.docker.desktop.extension.api.version:>= 0.2.2 com.docker.desktop.extension.icon:https://portainer-io-assets.sfo2.cdn.digitaloceanspaces.com/logos/portainer.png com.docker.extension.additional-urls:[{"title":"Website","url":"https://www.portainer.io?utm_campaign=DockerCon&utm_source=DockerDesktop"},{"title":"Documentation","url":"https://docs.portainer.io"},{"title":"Support","url":"https://join.slack.com/t/portainer/shared_invite/zt-txh3ljab-52QHTyjCqbe5RibC2lcjKA"}] com.docker.extension.detailed-description:<p data-renderer-start-pos="226">Portainer&rsquo;s Docker Desktop extension gives you access to all of Portainer&rsquo;s rich management functionality within your docker desktop experience.</p><h2 data-renderer-start-pos="374">With Portainer you can:</h2><ul><li>See all your running containers</li><li>Easily view all of your container logs</li><li>Console into containers</li><li>Easily deploy your code into containers using a simple form</li><li>Turn your YAML into custom templates for easy reuse</li></ul><h2 data-renderer-start-pos="660">About Portainer&nbsp;</h2><p data-renderer-start-pos="680">Portainer is the worlds&rsquo; most popular universal container management platform with more than 650,000 active monthly users. Portainer can be used to manage Docker Standalone, Kubernetes, Docker Swarm and Nomad environments through a single common interface. It includes a simple GitOps automation engine and a Kube API.&nbsp;</p><p data-renderer-start-pos="1006">Portainer Business Edition is our fully supported commercial grade product for business-wide use. It includes all the functionality that businesses need to manage containers at scale. 
Visit <a class="sc-jKJlTe dPfAtb" href="http://portainer.io/" title="http://Portainer.io" data-renderer-mark="true">Portainer.io</a> to learn more about Portainer Business and <a class="sc-jKJlTe dPfAtb" href="http://portainer.io/take5?utm_campaign=DockerCon&amp;utm_source=Docker%20Desktop" title="http://portainer.io/take5?utm_campaign=DockerCon&amp;utm_source=Docker%20Desktop" data-renderer-mark="true">get 5 free nodes.</a></p> com.docker.extension.publisher-url:https://www.portainer.io com.docker.extension.screenshots:[{"alt": "screenshot one", "url": "https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-1.png"},{"alt": "screenshot two", "url": "https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-2.png"},{"alt": "screenshot three", "url": "https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-3.png"},{"alt": "screenshot four", "url": "https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-4.png"},{"alt": "screenshot five", "url": "https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-5.png"},{"alt": "screenshot six", "url": "https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-6.png"},{"alt": "screenshot seven", "url": "https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-7.png"},{"alt": "screenshot eight", "url": "https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-8.png"},{"alt": "screenshot nine", "url": "https://portainer-io-assets.sfo2.digitaloceanspaces.com/screenshots/docker-extension-9.png"}] org.opencontainers.image.description:Docker container management made simple, with the world’s most popular GUI-based container management platform. org.opencontainers.image.title:Portainer org.opencontainers.image.vendor:Portainer.io] TS:2022-05-17 15:13:05 +0300 MSK IP:172.17.0.2 Ports:[8000 8000 9000 9000 9443]}
2022/07/15 13:14:56.404 [DEBUG] {provider/docker.go:336 provider.(*Docker).listContainers} completed list


// Looks fine
2022/07/15 13:14:56.405 [INFO]  {discovery/discovery.go:140 discovery.(*Service).Run} proxy  static: omv.nas.local ^/(.*) -> http://nas.local:8080/$1
2022/07/15 13:14:56.405 [INFO]  {discovery/discovery.go:140 discovery.(*Service).Run} proxy  static: portainer.nas.local ^/(.*) -> http://nas.local:9000/$1
2022/07/15 13:14:56.405 [INFO]  {discovery/discovery.go:140 discovery.(*Service).Run} proxy  docker: rss.nas.local ^/(.*) -> http://172.20.0.2:8080/$1
2022/07/15 13:14:56.405 [INFO]  {discovery/discovery.go:140 discovery.(*Service).Run} proxy  docker: git.nas.local ^/(.*) -> http://172.21.0.2:3000/$1


// Works fine
2022/07/15 13:15:40.760 [DEBUG] {proxy/proxy.go:246 proxy.(*Http).proxyHandler.func2} proxy to http://nas.local:9000/
...
2022/07/15 13:15:46.054 [DEBUG] {proxy/proxy.go:246 proxy.(*Http).proxyHandler.func2} proxy to http://nas.local:9000/api/users/1


// Does not work
2022/07/15 13:16:10.107 [DEBUG] {proxy/proxy.go:246 proxy.(*Http).proxyHandler.func2} proxy to http://172.20.0.2:8080/
2022/07/15 13:16:40.109 [WARN]  {lgr/adaptor.go:16 lgr.(*Writer).Write} http: proxy error: dial tcp 172.20.0.2:8080: i/o timeout
2022/07/15 13:16:40.117 [DEBUG] {proxy/proxy.go:246 proxy.(*Http).proxyHandler.func2} proxy to http://172.20.0.2:8080/
2022/07/15 13:17:10.119 [WARN]  {lgr/adaptor.go:16 lgr.(*Writer).Write} http: proxy error: dial tcp 172.20.0.2:8080: i/o timeout
2022/07/15 13:17:10.124 [DEBUG] {proxy/proxy.go:246 proxy.(*Http).proxyHandler.func2} proxy to http://172.20.0.2:8080/
2022/07/15 13:17:28.564 [WARN]  {lgr/adaptor.go:16 lgr.(*Writer).Write} http: proxy error: context canceled
2022/07/15 13:17:31.178 [DEBUG] {proxy/proxy.go:246 proxy.(*Http).proxyHandler.func2} proxy to http://172.20.0.2:8080/
2022/07/15 13:17:35.765 [WARN]  {lgr/adaptor.go:16 lgr.(*Writer).Write} http: proxy error: context canceled


// Does not work
2022/07/15 13:17:44.987 [DEBUG] {proxy/proxy.go:246 proxy.(*Http).proxyHandler.func2} proxy to http://172.21.0.2:3000/
2022/07/15 13:18:14.987 [WARN]  {lgr/adaptor.go:16 lgr.(*Writer).Write} http: proxy error: dial tcp 172.21.0.2:3000: i/o timeout
2022/07/15 13:18:14.992 [DEBUG] {proxy/proxy.go:246 proxy.(*Http).proxyHandler.func2} proxy to http://172.21.0.2:3000/
2022/07/15 13:18:44.993 [WARN]  {lgr/adaptor.go:16 lgr.(*Writer).Write} http: proxy error: dial tcp 172.21.0.2:3000: i/o timeout
2022/07/15 13:18:45.000 [DEBUG] {proxy/proxy.go:246 proxy.(*Http).proxyHandler.func2} proxy to http://172.21.0.2:3000/
2022/07/15 13:19:15.001 [WARN]  {lgr/adaptor.go:16 lgr.(*Writer).Write} http: proxy error: dial tcp 172.21.0.2:3000: i/o timeout
2022/07/15 13:19:15.008 [DEBUG] {proxy/proxy.go:246 proxy.(*Http).proxyHandler.func2} proxy to http://172.21.0.2:3000/
2022/07/15 13:19:21.144 [WARN]  {lgr/adaptor.go:16 lgr.(*Writer).Write} http: proxy error: context canceled


// Works fine
2022/07/15 13:19:32.527 [DEBUG] {proxy/proxy.go:246 proxy.(*Http).proxyHandler.func2} proxy to http://nas.local:8080/inter-latin-400-normal.c96fe5ff771f9e7b53ab.woff2
...
2022/07/15 13:19:43.851 [DEBUG] {proxy/proxy.go:246 proxy.(*Http).proxyHandler.func2} proxy to http://nas.local:8080/rpc.php

ReProxy routes:

{
  "git.nas.local": [
    {
      "route": "^/(.*)",
      "destination": "http://172.21.0.2:3000/$1",
      "server": "git.nas.local",
      "match": "proxy",
      "provider": "docker",
      "ping": "http://172.21.0.2:3000/ping"
    }
  ],
  "omv.nas.local": [
    {
      "route": "^/(.*)",
      "destination": "http://nas.local:8080/$1",
      "server": "omv.nas.local",
      "match": "proxy",
      "provider": "static"
    }
  ],
  "portainer.nas.local": [
    {
      "route": "^/(.*)",
      "destination": "http://nas.local:9000/$1",
      "server": "portainer.nas.local",
      "match": "proxy",
      "provider": "static"
    }
  ],
  "rss.nas.local": [
    {
      "route": "^/(.*)",
      "destination": "http://172.20.0.2:8080/$1",
      "server": "rss.nas.local",
      "match": "proxy",
      "provider": "docker",
      "ping": "http://172.20.0.2:8080/ping"
    }
  ]
}

ReProxy compose:

services:
  reproxy:
    image: umputun/reproxy:master
    container_name: reproxy
    hostname: reproxy
    ports:
      - "80:8080"
      - "443:8443"
      - "8081:8081"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - TZ=Europe/Moscow
      - LISTEN=0.0.0.0:8080
      - DOCKER_ENABLED=true
      - STATIC_ENABLED=true
      - STATIC_RULES=
          omv.nas.local,^/(.*),http://nas.local:8080/@1;
          portainer.nas.local,^/(.*),http://nas.local:9000/@1;
      - DEBUG=true
      - LOGGER=stdout
      - MGMT_ENABLED=true
      - MAX=0
      - HEALTH_CHECK_ENABLED=false

Miniflux compose:

version: '3.4'
services:
  miniflux:
    image: miniflux/miniflux:latest
    container_name: miniflux
    ports:
      - "8083:8080"
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://***:***@db/miniflux?sslmode=disable
      - RUN_MIGRATIONS=1
      - CREATE_ADMIN=1
      - ADMIN_USERNAME=***
      - ADMIN_PASSWORD=***
    restart: unless-stopped
    labels:
      - reproxy.server=rss.nas.local
      - reproxy.route=^/(.*)
      - reproxy.dest=/@1
      - reproxy.port=8080

  db:
    image: postgres:latest
    container_name: miniflux_db
    environment:
      - POSTGRES_USER=***
      - POSTGRES_PASSWORD=***
      - PUID=1000
      - PGID=100
    volumes:
      - /mnt/data/appdata/miniflux/db:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "miniflux"]
      interval: 10s
      start_period: 30s
    restart: unless-stopped
    labels:
      - reproxy.exclude=true

Gitea compose:

version: "2"

services:
  server:
    image: gitea/gitea
    container_name: gitea
    environment:
      - USER_UID=1002
      - USER_GID=1000
    restart: always
    volumes:
      - /mnt/data/appdata/gitea:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "222:22"
    labels:
      - reproxy.server=git.nas.local
      - reproxy.route=^/(.*)
      - reproxy.dest=/@1
      - reproxy.port=3000

Of course, the dockerized services (miniflux and gitea) are accessible from the host.

Gitea:

nas.local$ curl -is 172.21.0.2:3000 | head -n15
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
...

<!DOCTYPE html>
<html lang="en-US" class="theme-">
<head>
	<meta charset="utf-8">
	<meta name="viewport" content="width=device-width, initial-scale=1">
	<title> Gitea: Git with a cup of tea</title>
...

Miniflux:

nas.local$ curl -is 172.20.0.2:8080 | head -n20
HTTP/1.1 200 OK
Cache-Control: no-cache, max-age=0, must-revalidate, no-store
Content-Type: text/html; charset=utf-8
...

<!DOCTYPE html>
<html lang="en-US">
<head>
    <meta charset="utf-8">
    <title>Sign In - Miniflux</title>
...
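Note that both curls above run on the host, which is attached to all of the bridge networks; the dial that fails in the logs happens inside the reproxy container. For what it's worth, the same check can be run from reproxy's own network namespace (assuming a shell and wget are available inside the reproxy image, which may not hold for minimal images):

```shell
# run the failing request from inside the reproxy container
docker exec reproxy wget -qO- --timeout=5 http://172.20.0.2:8080/
```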

What am I doing wrong?

@umputun
Owner

umputun commented Jul 15, 2022

To make reproxy see/access the networks from your other composes, you need to define a shared network and add it to those composes. This is not related to reproxy but is rather a docker / docker-compose thing. See https://docs.docker.com/compose/networking/ for more details, but generally all you need is to add something like this to the reproxy compose and reference it under networks: in your other composes:

networks:
  proxy:
    external:
      name: reproxy
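As a minimal sketch of the other side, assuming the shared network is created first with `docker network create reproxy` (the `proxy` alias is arbitrary), the miniflux compose above would attach to it like this:

```yaml
services:
  miniflux:
    image: miniflux/miniflux:latest
    networks:
      - proxy          # attach the service to the shared network
    # ... other settings and reproxy.* labels as in the compose above ...

networks:
  proxy:
    external:
      name: reproxy    # the same network the reproxy container joins
```

With both composes on the `reproxy` network, the container IPs that reproxy discovers (172.20.0.2, 172.21.0.2) become reachable from the reproxy container itself, not just from the host.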

This is not an actual issue/bug report, so I'll be moving it to discussions.

Repository owner locked and limited conversation to collaborators Jul 15, 2022
@umputun umputun converted this issue into discussion #145 Jul 15, 2022

This issue was moved to a discussion.

You can continue the conversation there.
