
Guzzle/Curl connections between multiple projects #435

Closed
chrispappas opened this issue Nov 16, 2016 · 65 comments
@chrispappas

I'm running Laradock in the "Multiple Projects" mode (https://github.com/laradock/laradock#b-setup-for-multiple-projects) for two projects. I've configured nginx to properly serve both projects under separate hostnames (let's call them api-project.dev and consumer-project.dev, names changed to protect the innocent). I've set up the /etc/hosts file on my Mac to point to localhost:

127.0.0.1  api-project.dev
127.0.0.1  consumer-project.dev

I can access the running laravel instances on both without any problems, but when the consumer tries to make Guzzle/Curl requests to the api-project, I get a cURL error:

[curl] 7: Failed to connect to api-project.dev port 80: Connection refused

This seems to be an error where the container making the curl request doesn't know the proper IP address and resolves the hostname to "localhost", meaning its own container (instead of hitting the nginx container).

What do I do? How can I fix this?

@manshoor

manshoor commented Nov 19, 2016

@chrispappas for the two tiers to communicate, you have to add api-project.dev and consumer-project.dev to the Docker containers' hosts file as well.

In your docker-compose.yml, under the php-fpm container, add:

    extra_hosts:
      # IMPORTANT: Replace 10.0.75.1 with your Docker Host IP (these entries are appended to the container's /etc/hosts)
      - "dockerhost:10.0.75.1"
      - "api-project.dev:10.0.75.1"
      - "consumer-project.dev:10.0.75.1"

The same goes if you want to access them from the workspace container.

Then rebuild the containers with:

    docker-compose build --no-cache php-fpm workspace
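Putting those pieces together, a minimal php-fpm service fragment might look like the sketch below. This is only an illustration: the build context path is an assumption, 10.0.75.1 is the example Docker host IP used in this thread, and the service/network names follow Laradock's conventions; substitute your own values.

```yaml
# Sketch only: replace 10.0.75.1 with your Docker host's IP, and the
# hostnames with your own project vhosts.
php-fpm:
  build:
    context: ./php-fpm            # hypothetical build context
  extra_hosts:
    - "dockerhost:10.0.75.1"
    - "api-project.dev:10.0.75.1"       # API project vhost
    - "consumer-project.dev:10.0.75.1"  # consumer project vhost
  networks:
    - backend
```

With these entries, the php-fpm container resolves both vhost names to the host IP instead of to itself, so cross-project Guzzle/cURL requests reach the webserver.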

@manshoor

@philtrep I guess you can close this issue.

@philtrep
Member

@manshoor Awesome!

@huiyonghkw

@manshoor How do you dynamically configure multiple domain names for multiple projects?

huiyonghkw added a commit to huiyonghkw/lnmp-docker that referenced this issue Jun 6, 2017
@manshoor

manshoor commented Jun 6, 2017

@bravist I'm not sure about this, need to do some research.

@Snaver

Snaver commented Jul 13, 2017

@bravist @manshoor @chrispappas - I managed to come up with a fairly robust solution to this problem. I ended up assigning the webserver a static IP on a new network (webserver_network), but also included domain mapping via extra_hosts.

I can now curl/guzzle/file_get_contents between 'domains' & containers.


Project setup, three different code bases (vhosts in apache2):

.env (New vars)

WEBSERVER_IP=172.16.238.20   

WEBSERVER_GATEWAY=172.16.238.1    

WEBSERVER_SUBNET=172.16.238.0/24

docker-compose.yml (New network)

### Apache Server Container #################################

    apache2:
      ...
      ports:
        - "${WEBSERVER_IP}:${APACHE_HOST_HTTP_PORT}:80"
        - "${WEBSERVER_IP}:${APACHE_HOST_HTTPS_PORT}:443"
      ...
      extra_hosts:
        - "dockerhost:${DOCKER_HOST_IP}"
        - "admin.dev:${WEBSERVER_IP}"
        - "api.dev:${WEBSERVER_IP}"
        - "front-end.dev:${WEBSERVER_IP}"
      networks:
        webserver_network:
          ipv4_address: ${WEBSERVER_IP}
        frontend:
        backend:

### PHP-FPM Container #######################################

    php-fpm:
      ...
      extra_hosts:
        - "dockerhost:${DOCKER_HOST_IP}"
        - "admin.dev:${WEBSERVER_IP}"
        - "api.dev:${WEBSERVER_IP}"
        - "front-end.dev:${WEBSERVER_IP}"
      ...
      networks:
        - frontend
        - backend
        - webserver_network

### Networks Setup ############################################

networks:
  webserver_network:
    driver: bridge
    ipam:
        driver: default
        config:
        - subnet: "${WEBSERVER_SUBNET}"
          gateway: "${WEBSERVER_GATEWAY}"

@huiyonghkw

@Snaver

Is DOCKER_HOST_IP equal to WEBSERVER_IP, or something else?

@Snaver

Snaver commented Jul 14, 2017

Hey @bravist

In my case DOCKER_HOST_IP was left as the default (10.0.75.1). I'm not actually sure what DOCKER_HOST_IP does; all I can see is that it's mapped into /etc/hosts via the extra_hosts entry in docker-compose.yml.

Hope that helps.

@izdanevych

izdanevych commented Aug 27, 2017

win10, docker, laradock, apache2

I have the same issue.
I can access the running laravel-web instance on my virtual host (named "second") without any problems, and I can access my laravel-api on the same virtual host with Postman, but when laravel-web tries to make Guzzle/Curl requests to the API, I get a cURL error:
[curl] 7: Failed to connect to api-project.dev port 80: Connection refused

I went to my docker-compose.yml and added, under the PHP-FPM container:

    extra_hosts:
      - "second:10.0.75.1"

I rebuilt the image, and now I get a different error:
cURL error 7: Failed to connect to second port 80: Connection timed out

Can anybody help me? How can I fix it?

@marcus13371337

marcus13371337 commented Sep 20, 2017

I fixed this issue by going into my nginx container and checking the IP assigned to it.

Then I updated my docker-compose.yml file to:

    - "api.dev:172.20.0.6"

And all problems were gone!

@avaslev

avaslev commented Dec 19, 2017

@bravist my ./nginx/sites/default.conf

server {
    listen 80;
    listen   [::]:80 default ipv6only=on; ## listen for ipv6

    server_name localhost nginx;

    location ~/([A-Za-z0-9-\.]+)/(.*)$ {
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $1;
        proxy_pass http://127.0.0.1/$2$is_args$args;
    }

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
}

Host for the app:

nginx/other-project.local

@rubmic

rubmic commented Mar 9, 2018

@manshoor
Awesome extra_hosts !

@AmmarRahman

I resolved it using the docker aliases option.
In docker-compose.yml -> nginx -> networks, adjust the networks to:

      networks:
        frontend:
         aliases:
          - api.dev
        backend:
         aliases:
          - api.dev

I guess this would be safer than using the IP

@vjrngn
Contributor

vjrngn commented Mar 21, 2018

I like your solution @iceheat. Can you show the complete solution, including the changes to the docker-compose.yml file?

@AmmarRahman

That's literally the only change you need to make. I'm not going to post the whole docker-compose file in a thread, but here is the whole nginx section:

### NGINX Server Container ##################################

    nginx:
      build:
        context: ./nginx
        args:
          - PHP_UPSTREAM_CONTAINER=${NGINX_PHP_UPSTREAM_CONTAINER}
          - PHP_UPSTREAM_PORT=${NGINX_PHP_UPSTREAM_PORT}
      volumes_from:
        - applications
      volumes:
        - ${NGINX_HOST_LOG_PATH}:/var/log/nginx
        - ${NGINX_SITES_PATH}:/etc/nginx/sites-available
      ports:
        - "${NGINX_HOST_HTTP_PORT}:80"
        - "${NGINX_HOST_HTTPS_PORT}:443"
      depends_on:
        - php-fpm
      networks:
        frontend:
         aliases:
          - api.dev
        backend:
         aliases:
          - api.dev

@vjrngn
Contributor

vjrngn commented Mar 21, 2018

Thanks. Much better solution.

@ramadani

great solution @iceheat 👍

@thferreira

thferreira commented Aug 15, 2018

@iceheat @ramadani after the change, do I need to rebuild the nginx container? Do you have an example for multiple vhosts (example.test, mydomain.test, localhost, mydomain.test)?

@AmmarRahman

@thferreira you will need to restart the nginx container (e.g. docker-compose restart nginx). You don't need to rebuild it, since this is runtime configuration. The solution above manages multiple domains within the same swarm. If communication goes outside the swarm, it has to rely on the host's DNS.

@montogeek

@manshoor What if I am using just localhost and not a custom domain.dev?

@samayo

samayo commented Oct 11, 2018

I'm still struggling with this issue. My local dev environment runs the LEMP stack with no issue, but on the production server I get this error:
2018/10/11 00:01:10 [emerg] 1#1: host not found in upstream "php-fpm:9000" in /etc/nginx/conf.d/upstream.conf:1

@entimm

entimm commented Oct 24, 2018

great

@hironate

hironate commented Nov 19, 2018

I also have the same issue.

I have multiple websites hosted on my DigitalOcean server, all on the same IP address in the same Docker setup. I have created different vhosts in apache2/sites.

One of my domains is analytics.dytest.in and the other is services.dytest.in.

When the application hosted on services.dytest.in tries to curl an API on analytics.dytest.in, the connection times out. I tried every possible way, like extra_hosts in php-fpm, but found no solution.

Note: my websites are running on the internet, so there is no local environment; the domains are pointed to the server's IP address.

Please, someone help. Here is my php-fpm docker-compose section:

    extra_hosts:
      - "dockerhost:${DOCKER_HOST_IP}"
      - "griffon.dytest.in:${DOCKER_HOST_IP}"
      - "analytics.dytest.in:${DOCKER_HOST_IP}"
      - "services.dytest.in:${DOCKER_HOST_IP}"
    environment:
      - PHP_IDE_CONFIG=${PHP_IDE_CONFIG}
      - DOCKER_HOST=tcp://docker-in-docker:2375
      - FAKETIME=${PHP_FPM_FAKETIME}
    depends_on:
      - workspace
    networks:
      - backend
    links:
      - docker-in-docker

@theyudhiztira

theyudhiztira commented May 10, 2020

Is anyone else still stuck on this? I tried adding aliases and also adding extra_hosts, but unfortunately none of it is working.

In my case I'm unable to make a Guzzle API call from one domain to another.
The call goes from vox.test to vapi.test.

PHP-FPM Extra Hosts :

extra_hosts:
  - "dockerhost:${DOCKER_HOST_IP}"
  - "vios.test:${DOCKER_HOST_IP}"
  - "vox.test:${DOCKER_HOST_IP}"
  - "vapi.test:${DOCKER_HOST_IP}"

and this is my NGINX Networks Aliases

### NGINX Server #########################################
    nginx:
      build:
        context: ./nginx
        args:
          - CHANGE_SOURCE=${CHANGE_SOURCE}
          - PHP_UPSTREAM_CONTAINER=${NGINX_PHP_UPSTREAM_CONTAINER}
          - PHP_UPSTREAM_PORT=${NGINX_PHP_UPSTREAM_PORT}
          - http_proxy
          - https_proxy
          - no_proxy
      volumes:
        - ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}${APP_CODE_CONTAINER_FLAG}
        - ${NGINX_HOST_LOG_PATH}:/var/log/nginx
        - ${NGINX_SITES_PATH}:/etc/nginx/sites-available
        - ${NGINX_SSL_PATH}:/etc/nginx/ssl
      ports:
        - "${NGINX_HOST_HTTP_PORT}:80"
        - "${NGINX_HOST_HTTPS_PORT}:443"
        - "${VARNISH_BACKEND_PORT}:81"
      depends_on:
        - php-fpm
      networks:
        frontend:
          aliases:
            - vox.test
            - vapi.test
        backend:
          aliases:
            - vox.test
            - vapi.test

Can anyone help me with this? I've spent two days trying to solve this problem.

@acodingproject

Just remove the extra_hosts.

@Kauto

Kauto commented May 26, 2020

@theyudhiztira We had the same problem. It suddenly worked when we used only your nginx changes but kept the php-fpm section in its original state.

@theyudhiztira

@Kauto sh*t, I spent so much time fixing a bug caused by myself 😅

@pabloonbrand

As others have said here:

Do not use the PHP-FPM extra_hosts; use the nginx network aliases instead!

@LFTroya

LFTroya commented Sep 18, 2020

@pabloonbrand can you give us an example, please?

@pabloonbrand

Add this to your docker-compose.override:

    nginx:
      networks:
        frontend:
          aliases:
            - app.test
            - back.test
        backend:
          aliases:
            - app.test
            - back.test

Leave the extra_hosts by default.
Replace app.test and back.test with your project URLs.
Done!

@pabloonbrand

:( It is not working in production!

The aliases work locally but not on the live server.

When I just try to cURL an image it doesn't work; for some reason I don't know yet, the cURL times out.

Neither aliases nor extra_hosts are working.

I'm using Laradock as-is and tried both options. Aliases fix the problem locally, but not when the container URL is a public URL...

Let me describe the scenario...

Project A
Project B

Project B needs to cURL Project A to get a simple image.

Both projects live within the same Laradock workspace.

And just to clarify: aliases work when the domains are resolved locally, not when they are resolved by DNS.

We are using caddy, nginx, php-fpm and mysql containers.

I've no idea how to solve the problem :( any help would be appreciated!

Thank you.

@pabloonbrand

pabloonbrand commented Oct 23, 2020

Hi everyone,

I need to solve this issue and I'll consider PAYING someone who can help with it.

The staging/production sites aren't working with either the alias solution or extra_hosts.

We pointed the public domain to the host IP (127.0.0.1), same as in our local setup, and it's still not working.

The only difference between local and staging/production is the use of Caddy, so I presume the error is in there, but what the issue is, I've no idea.

Anyone that could help?

@lionslair
Contributor

Maybe you do not need the alias in staging/production, because I assume you have HAProxy or similar routing the domains to the host? I would have thought the FQDN would resolve, even if that means going out to the internet and back again from your host.

@pabloonbrand

@lionslair Thank you... That was how it was originally set up, until we hit the issue and tried the aliases. It doesn't work either with or without them. The request is not reaching out from the server to the internet to resolve.

@lanphan
Contributor

lanphan commented Oct 27, 2020

Hi @pabloonbrand ,
Please pay attention to your HTTPS certificate if your production URL is HTTPS.
I have seen some invalid HTTPS certs; therefore, if you want to curl/guzzle them, please add the option to skip certificate verification (in Guzzle, 'verify' => false).

@pabloonbrand

@lanphan We just tried it, and it doesn't work either.

The command curl --insecure -I https://... works locally, but it does not on the servers.

Any other tip could be very helpful.

Thank you.

@lanphan
Contributor

lanphan commented Oct 27, 2020

@pabloonbrand
In this case, it's not an HTTPS cert problem.
I think you should find a way to inspect the traffic in and out of Caddy.

@diego-lipinski-de-castro

This solution may not be working anymore?

@centerdevs

Nginx aliases are not working for me either (nor extra_hosts).

@alexteixeira4379

alexteixeira4379 commented Jan 18, 2021

I found another solution, since none of these attempts worked for me.

My way of solving it was to let php-fpm resolve external links to a specific container. I found this approach in the documentation:

    php-fpm:
      build:
      ......
      extra_hosts:
        - "dockerhost:${DOCKER_HOST_IP}"
      external_links:
        - "nginx:my.site.url1"
        - "nginx:my.site.url2"
        - "nginx:my.site.url3"
        - "nginx:my.site.url4"
      environment:
        - PHP_IDE_CONFIG=${PHP_IDE_CONFIG}
        - DOCKER_HOST=tcp://docker-in-docker:2376
        - DOCKER_TLS_VERIFY=1
        - DOCKER_TLS_CERTDIR=/certs
        - DOCKER_CERT_PATH=/certs/client
        - FAKETIME=${PHP_FPM_FAKETIME}
      depends_on:
        - workspace
      networks:
        - backend
      links:
        - docker-in-docker

@centerdevs

I solved the issue. Aliases work fine; we just need to restart the containers with the docker-compose command. Restarting from the desktop app doesn't work.

@pabloonbrand

Please check again! Aliases work locally, where the host resolves the URLs, but not in production, where the URLs have to be resolved by public DNS!

This issue is making laradock completely useless for our case. We had to move to a custom docker solution :(

Laradock is great but it doesn't work in complex scenarios, I'm afraid!

@rubenrafamercado

Add this to your docker-compose.override:

    nginx:
      networks:
        frontend:
          aliases:
            - app.test
            - back.test
        backend:
          aliases:
            - app.test
            - back.test

Leave the extra_hosts by default.
Replace app.test and back.test with your project URLs.
Done!

Great. I verified it in Apache

@FarhanShares

Add this to your docker-compose.override:

    nginx:
      networks:
        frontend:
          aliases:
            - app.test
            - back.test
        backend:
          aliases:
            - app.test
            - back.test

Leave the extra_hosts by default.
Replace app.test and back.test with your project URLs.
Done!

Great. I verified it in Apache

Working fine with NGINX too.
I had to run docker-compose down & then docker-compose up -d.

@DiogoLindoso

DiogoLindoso commented Aug 10, 2021

This worked for me on Ubuntu:

1. sudo docker network ls (find the laradock_backend network):

        NETWORK ID     NAME                DRIVER   SCOPE
        1dbf49f78f1a   bridge              bridge   local
        e163c7e41745   host                host     local
        60743ba805f8   laradock_backend    bridge   local   <--
        7f389975b2f7   laradock_default    bridge   local
        02bdd0bb2579   laradock_frontend   bridge   local
        3e9dcdfcb182   none                null     local

2. sudo docker network inspect laradock_backend (note the gateway):

        [
            {
                "Name": "laradock_backend",
                "Id": "60743ba805f87ed08b2a7e653dd4ae6a8c81a0e3b759e42628af43a84fd77607",
                "Created": "2021-04-09T16:55:22.165658326-04:00",
                "Scope": "local",
                "Driver": "bridge",
                "EnableIPv6": false,
                "IPAM": {
                    "Driver": "default",
                    "Options": null,
                    "Config": [
                        {
                            "Subnet": "172.28.0.0/16",
                            "Gateway": "172.28.0.1"   <--
                        }
                    ]
                },

3. sudo nano /etc/hosts (point the vhosts at that gateway):

        172.28.0.1 host.test
        172.28.0.1 host.api

@alexmillman2394

That's literally the only change you need to make. I'm not going to post the whole docker-compose file in a thread, but here is the whole nginx section:

### NGINX Server Container ##################################

    nginx:
      build:
        context: ./nginx
        args:
          - PHP_UPSTREAM_CONTAINER=${NGINX_PHP_UPSTREAM_CONTAINER}
          - PHP_UPSTREAM_PORT=${NGINX_PHP_UPSTREAM_PORT}
      volumes_from:
        - applications
      volumes:
        - ${NGINX_HOST_LOG_PATH}:/var/log/nginx
        - ${NGINX_SITES_PATH}:/etc/nginx/sites-available
      ports:
        - "${NGINX_HOST_HTTP_PORT}:80"
        - "${NGINX_HOST_HTTPS_PORT}:443"
      depends_on:
        - php-fpm
      networks:
        frontend:
         aliases:
          - api.dev
        backend:
         aliases:
          - api.dev

Thank you!

@murilolivorato

@marcus13371337, I did what you did, but it didn't work for me.
I inspected the nginx container.

It gave me this IP: 172.18.0.1
Then I changed my host's /etc/hosts (I am using Linux):

    172.18.0.1  my-site-name

After that I went into docker-compose.yml:

    extra_hosts:
      - "dockerhost:${DOCKER_HOST_IP}"
      - "api.dev:${WEBSERVER_IP}"

.env:

    WEBSERVER_IP=172.18.0.1

and then I ran the command:

    docker-compose build --no-cache php-fpm workspace

I can access this URL with Chrome and Composer:

http://my-site-name//api/my-api

But I can't access it with Guzzle like this:

    $client = new Client(['verify' => false, 'verify_ssl' => false]);
    return $client->request('GET', 'http://my-site-name/api/my-api');

It gives an error:

GuzzleHttp\Exception\ConnectException: cURL error 6: Could not resolve host:

@AmmarRahman, I did what you did as well, with the alias.
How can I access this alias after the change? Should I change the hosts file? Should I access this alias with Guzzle, like this?

    $client = new Client(['verify' => false, 'verify_ssl' => false]);
    return $client->request('GET', 'back.test/api/my-api');

I don't understand. If you could explain it better, I would appreciate it.

@cjlaborde

cjlaborde commented May 1, 2022

That's literally the only change you need to make. I'm not going to post the whole docker-compose file in a thread, but here is the whole nginx section:

### NGINX Server Container ##################################

    nginx:
      build:
        context: ./nginx
        args:
          - PHP_UPSTREAM_CONTAINER=${NGINX_PHP_UPSTREAM_CONTAINER}
          - PHP_UPSTREAM_PORT=${NGINX_PHP_UPSTREAM_PORT}
      volumes_from:
        - applications
      volumes:
        - ${NGINX_HOST_LOG_PATH}:/var/log/nginx
        - ${NGINX_SITES_PATH}:/etc/nginx/sites-available
      ports:
        - "${NGINX_HOST_HTTP_PORT}:80"
        - "${NGINX_HOST_HTTPS_PORT}:443"
      depends_on:
        - php-fpm
      networks:
        frontend:
         aliases:
          - api.dev
        backend:
         aliases:
          - api.dev

Worked perfectly, thank you.

This solved the issue I had with Nuxt and Laradock:

Error in fetch(): FetchError: request to failed, reason: connect ECONNREFUSED

The problem was:

Nuxt tries to connect to site.test/api,
but it thinks site.test is on 127.0.0.1,
so Nuxt, in the workspace container, tries to connect to the workspace itself,
and the connection is refused because nothing is listening there on port 80.
It should instead connect to the API,
the API is Laravel,
and to reach it, the request has to go through the webserver.

Initially I fixed it following a friend's solution: in the workspace container, edit /etc/hosts and put linereal.test.

The problem was I needed to go into the workspace each time the server restarted.

    workspace:
      build:
        context: ./workspace
      extra_hosts:
        - "dockerhost:${DOCKER_HOST_IP}"
        - "test.com:172.18.0.1"

Another problem was that the nginx IP changed each time I restarted Laradock.

But now, thanks to your solution, I have no need to modify the /etc/hosts file.

@cjlaborde

cjlaborde commented May 1, 2022

@marcus13371337, I did what you did, but it didn't work for me. I inspected the nginx container.

It gave me this IP: 172.18.0.1. Then I changed my host's /etc/hosts (I am using Linux):

    172.18.0.1  my-site-name

After that I went into docker-compose.yml:

    extra_hosts:
      - "dockerhost:${DOCKER_HOST_IP}"
      - "api.dev:${WEBSERVER_IP}"

.env:

    WEBSERVER_IP=172.18.0.1

and then I ran the command:

    docker-compose build --no-cache php-fpm workspace

I can access this URL with Chrome and Composer:

http://my-site-name//api/my-api

But I can't access it with Guzzle like this:

    $client = new Client(['verify' => false, 'verify_ssl' => false]);
    return $client->request('GET', 'http://my-site-name/api/my-api');

It gives an error:

GuzzleHttp\Exception\ConnectException: cURL error 6: Could not resolve host:

@AmmarRahman, I did what you did as well, with the alias. How can I access this alias after the change? Should I change the hosts file? Should I access this alias with Guzzle, like this?

    $client = new Client(['verify' => false, 'verify_ssl' => false]);
    return $client->request('GET', 'back.test/api/my-api');

I don't understand. If you could explain it better, I would appreciate it.

Try @AmmarRahman's solution; read the post I made above.

Remove the extra host and use the alias instead. (Also note that Guzzle needs the full URL including the scheme, e.g. 'http://back.test/api/my-api'.)

extra_hosts:
        - "dockerhost:${DOCKER_HOST_IP}"
      #  - "api.dev:${WEBSERVER_IP}"
### NGINX Server Container ##################################

    nginx:
      build:
        context: ./nginx
        args:
          - PHP_UPSTREAM_CONTAINER=${NGINX_PHP_UPSTREAM_CONTAINER}
          - PHP_UPSTREAM_PORT=${NGINX_PHP_UPSTREAM_PORT}
      volumes_from:
        - applications
      volumes:
        - ${NGINX_HOST_LOG_PATH}:/var/log/nginx
        - ${NGINX_SITES_PATH}:/etc/nginx/sites-available
      ports:
        - "${NGINX_HOST_HTTP_PORT}:80"
        - "${NGINX_HOST_HTTPS_PORT}:443"
      depends_on:
        - php-fpm
      networks:
        frontend:
         aliases:
          - api.dev
        backend:
         aliases:
          - api.dev
          

@kilvn
Contributor

kilvn commented Nov 22, 2022

I resolved it using the docker aliases option. In docker-compose.yml -> nginx -> networks, adjust the networks to:

      networks:
        frontend:
         aliases:
          - api.dev
        backend:
         aliases:
          - api.dev

I guess this would be safer than using the IP.

It works! Wonderful!
