Guzzle/Curl connections between multiple projects #435
@chrispappas In order to communicate between the two tiers, you have to add the domains to the extra_hosts of the php-fpm container in your docker-compose.yml. The same goes if you want to access them in the workspace container. Then rebuild the containers using this command
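The extra_hosts entries referenced above were lost in the copy; a minimal sketch of what such a mapping usually looks like (the domain names and IP below are placeholders, not from the original comment):

```yaml
php-fpm:
  extra_hosts:
    # Map each project's domain to the webserver's IP on the Docker network,
    # so curl/Guzzle calls from PHP resolve to nginx instead of localhost
    - "project-a.test:172.18.0.1"
    - "project-b.test:172.18.0.1"
```

The rebuild command wasn't preserved either; `docker-compose up -d --build php-fpm workspace` is the usual way to apply such a change.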
@philtrep I guess you can close this issue.
@manshoor Awesome!
@manshoor How to dynamically configure multiple domain names for multiple projects?
@bravist I'm not sure about this, I need to do some research.
@bravist @manshoor @chrispappas - I managed to come up with a fairly robust solution to this problem. I ended up assigning the webserver a static IP on a new network (webserver_network), and also included domain mapping via extra_hosts. I can now curl/guzzle/file_get_contents between 'domains' and containers. Project setup: three different code bases (vhosts in apache2).
.env (new vars)
docker-compose.yml (new network)
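The .env and docker-compose.yml snippets didn't survive the copy; a sketch of the setup as described (subnet, IP, and domain names are illustrative assumptions):

```yaml
# docker-compose.yml fragment: nginx gets a static IP on a dedicated
# network, and php-fpm maps the project domains to that IP.
networks:
  webserver_network:
    ipam:
      config:
        - subnet: 172.25.0.0/16

services:
  nginx:
    networks:
      webserver_network:
        ipv4_address: 172.25.0.10
  php-fpm:
    networks:
      - webserver_network
    extra_hosts:
      - "project-a.test:172.25.0.10"
      - "project-b.test:172.25.0.10"
```

The static IP avoids the problem (mentioned later in this thread) of Docker reassigning the nginx container's address on every restart.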
Is the
Hey @bravist In my case Hope that helps.
win10, docker, laradock, apache2. I have the same issue. I went to my docker-compose.yml, added it in, rebuilt the image, and now I have another error: Can anybody help me? How can I fix it?
I fixed this issue by going into my nginx container and checking the IP assigned to it. Then I updated my docker-compose.yml file to use that IP, and all problems were gone!
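One way to look up the IP Docker assigned to the nginx container (the container name is an assumption; check `docker ps` for the actual name in your setup):

```shell
# Prints the container's IP address(es) on its attached networks
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' laradock_nginx_1
```

Note that this IP can change whenever the containers are recreated, which is why the alias-based solutions later in this thread are more robust than hard-coding it.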
@bravist my ./nginx/sites/default.conf
Hosts for app
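The referenced `./nginx/sites/default.conf` didn't survive the copy; a minimal sketch of a typical Laradock-style vhost (the domain, root path, and upstream name are assumptions):

```nginx
server {
    listen 80;
    server_name app.test;
    root /var/www/app/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # php-upstream is defined in Laradock's upstream.conf by default
        fastcgi_pass php-upstream;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```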
@manshoor
I resolved it using the docker aliases option.
I guess this would be safer than using the IP.
I like your solution @iceheat. Can you show the complete solution, including changes to
That's literally the only change you need to make. I'm not going to post the whole docker-compose file in a thread, but here is the whole of the nginx section:
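The pasted nginx section was lost in the formatting here; based on the description, the change amounts to adding network aliases for each project domain on the nginx service (domain names are placeholders, and the env variable names follow Laradock's defaults, which may differ in your copy):

```yaml
nginx:
  build:
    context: ./nginx
  volumes:
    - ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}
  ports:
    - "${NGINX_HOST_HTTP_PORT}:80"
  networks:
    frontend:
      aliases:
        - project-a.test
        - project-b.test
    backend:
      aliases:
        - project-a.test
        - project-b.test
```

With aliases, any container on the same network resolves those hostnames to the nginx container directly, so no hard-coded IPs (and no extra_hosts entries) are needed.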
Thanks. Much better solution.
great solution @iceheat 👍
@thferreira you will need to restart the nginx container. You don't need to rebuild it, since it's a runtime configuration. The solution above is for managing multiple domains within the same swarm. If communication goes outside the swarm, then it has to rely on the host's DNS.
@manshoor What if I am using just
I'm still struggling with this issue. My local dev runs a LEMP stack with no issue, but on the production server I get this error:
great |
I also have the same issue. I have multiple websites hosted on my DigitalOcean server, all on the same IP address in the same Docker setup. I have created different vhosts in apache2/sites. One of my domains is analytics.dytest.in and the other is services.dytest.in. When the application hosted on services.dytest.in tries to curl some API on analytics.dytest.in, the connection times out. I tried every possible way, like extra_hosts in php-fpm, but no solution. Note: my websites are running on the internet, so there is no local environment; the domains are pointed to the server IP address. Please, someone help. Here is my php-fpm docker-compose:
Is anyone still stuck on this? I tried adding aliases and also adding extra_hosts, but unfortunately none of it is working. My case: I'm unable to make a Guzzle API call from one domain to another domain. PHP-FPM extra_hosts:
and this is my NGINX networks aliases:
Can anyone help me with this? I spent 2 days trying to solve this one problem.
Just remove the extra_hosts.
@theyudhiztira We had the same problem. It suddenly worked when we used only your nginx changes, but kept the php-fpm section in its original state.
@Kauto sh*t, I spent so much time fixing a bug that was caused by myself 😅
As others said here: do not use PHP-FPM extra_hosts; instead, use the nginx network aliases!
@pabloonbrand can you give us an example, please?
Add this to your docker-compose.override:
Leave the extra_hosts at their defaults.
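The override snippet itself was lost in the copy; a hypothetical `docker-compose.override.yml` fragment following the alias approach described above (domains are placeholders):

```yaml
# Aliases the project domains to the nginx service on Laradock's
# backend network, so php-fpm resolves them to nginx internally.
version: "3"
services:
  nginx:
    networks:
      backend:
        aliases:
          - api-project.dev
          - consumer-project.dev
```

Keeping this in an override file has the advantage that the main Laradock docker-compose.yml stays untouched across upstream updates.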
:( It is not working in production! The aliases work locally but not on the live server. When I just try to cURL an image, it doesn't work; for some reason I don't know yet, the cURL times out. Neither aliases nor extra_hosts are working. I'm using Laradock as-is and tried both options. Aliases fix the problem locally, but not when the container URL is a public URL. Let me describe the scenario: Project A
Project B needs to cURL Project A to get a simple image. Both projects live within the same Laradock workspace. And just to clarify, aliases work when the domains are resolved locally, not when they are resolved by DNS. We are using caddy, nginx, php-fpm and mysql containers. I've no idea how to solve the problem :( any help would be appreciated! Thank you.
Hi everyone, I need to solve this issue and I'll consider PAYING someone who can help with it. The staging/production sites aren't working with the alias solution or with extra_hosts. We pointed the public domain to the host IP (127.0.0.1), same as in our local setup, and it's still not working. The only difference between local and staging/production is the use of Caddy, so I presume the error is there, but what the issue is, I've no idea. Can anyone help?
Maybe you do not need the alias in staging/production, because I assume you have HAProxy or similar routing the domains to the host? I would have thought the FQDN would resolve, even if that means going out on the internet and back again from your host.
@lionslair Thank you... That was how it was originally set up, until we hit the issue and tried the aliases. It doesn't work with or without them. The request is not reaching out from the server to the internet to resolve.
Hi @pabloonbrand,
@lanphan We just tried and it doesn't work either. This command Any other tip would be very helpful. Thank you.
@pabloonbrand
This solution may not be working anymore?!
Nginx aliases are not working for me either (nor is extra_hosts).
I found another solution, since none of these attempts worked for me. My way of solving it was to let php-fpm resolve the external links to a specific container: php-fpm:
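The php-fpm snippet was cut off above; from the description, the workaround amounts to pinning the public domains to the nginx container from php-fpm's perspective (the domain and IP below are assumptions for illustration):

```yaml
# Force php-fpm to resolve the public domain to the nginx container,
# bypassing external DNS entirely.
php-fpm:
  extra_hosts:
    - "example-site.com:172.18.0.5"  # nginx container's IP on the shared network
```

Note the caveat raised earlier in the thread: a hard-coded container IP can change when containers are recreated, unless a static IP is also assigned.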
I solved the issue. Aliases work fine; we just need to restart the containers with the docker-compose command. Restarting from the desktop app doesn't work.
Please, check again! Aliases work locally, where the host is resolving the URLs, but not in production, where the URLs have to be resolved by internet DNS! This issue made Laradock completely useless for our case; we had to move to a custom Docker solution :( Laradock is great, but it doesn't work in complex scenarios, I'm afraid!
Great. I verified it in Apache.
Working fine with NGINX too.
Worked for me on Ubuntu.
Thank you!
@marcus13371337 I did what you did, but it didn't work for me. It gave me this IP - 172.18.0.1
Then I went into docker-compose.yml -
.env
and then I ran the command -
I can access this URL with Chrome and composer -
But I can't access it with Guzzle like this
It gives an error -
@AmmarRahman I did what you did as well, with the alias.
I don't understand. If you could explain better, I would appreciate it.
Worked perfectly, thank you. This solved the issue I had with Nuxt and Laradock:
Error in fetch(): FetchError: request to failed, reason: connect ECONNREFUSED
Initially I fixed it following a friend's solution. The problem was that I needed to go into the workspace each time the server restarted:

```yaml
workspace:
  build:
    context: ./workspace
  extra_hosts:
    - "dockerhost:${DOCKER_HOST_IP}"
    - "test.com:172.18.0.1"
```

The problem was that the nginx IP changed each time I restarted Laradock. But now, thanks to your solution, I have no need to modify the /etc/hosts file.
Try @AmmarRahman's solution: remove the extra_hosts and use aliases instead.
It works! Wonderful!
I'm running Laradock in the "Multiple Projects" mode (https://github.com/laradock/laradock#b-setup-for-multiple-projects) for two projects. I've configured nginx to properly serve both projects under separate hostnames (let's call them api-project.dev and consumer-project.dev, names changed to protect the innocent). I've set up the /etc/hosts file on my Mac to point to localhost:
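The pasted hosts entries were lost in the copy; from the description, they would look something like this (using the placeholder domains named above):

```
127.0.0.1  api-project.dev
127.0.0.1  consumer-project.dev
```

This makes the browser on the Mac resolve both domains to the host, where nginx routes by Host header; it does nothing for resolution inside the containers, which is the root of the issue below.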
I can access the running laravel instances on both without any problems, but when the consumer tries to make Guzzle/Curl requests to the api-project, I get a cURL error:
[curl] 7: Failed to connect to api-project.dev port 80: Connection refused
This seems to be an error where the containers from which the curl requests are made don't know the proper IP address, and thus make their requests to "localhost", meaning their own container (instead of hitting the nginx container).
What to do? How can I fix this?