Adjust rewrite logic on sample proxy configurations #156

Closed
MatteoGioioso opened this issue Mar 16, 2019 · 24 comments

@MatteoGioioso

MatteoGioioso commented Mar 16, 2019

See: #156 (comment)

Information

  • Node.js version: 10.14.1
  • NPM version: 6.4.1
  • Strapi version: 3.0.0-alpha.23.1
  • Database: mongodb
  • Operating system: Windows 10 Pro

What is the current behavior?
I am running next.js and strapi.js together in two separate Docker containers with docker-compose.
I want Nginx to redirect all requests from www.mydomain.com/admin to the Strapi admin page.
This is my nginx config:

server {
    listen 80;
    server_name localhost;

    location /admin/ {
        proxy_pass http://api:1337/admin/;
    }

    location / {
        proxy_pass http://frontend:3000;
    }
}

and my server.json

{
  "host": "localhost",
  "port": "${process.env.PORT || 1337}",
  "production": true,
  "proxy": {
    "enabled": false
  },
  "autoReload": {
    "enabled": false
  },
  "cron": {
    "enabled": false
  },
  "admin": {
    "autoOpen": false
  }
}

Everything works fine: the app starts and runs normally. However, when I try to access the Strapi admin page I get the following error:

[screenshot: screenshot-localhost-2019 03 16-17-37-11]

What is the expected behavior?
Admin page should load normally

Suggested solutions

@derrickmehaffy
Member

I don't recommend using subfolder proxying. Your Strapi should be on its own sub-domain, like api.example.com.

Your front end also should not live in the public folder of Strapi and should have its own virtual host config.

@MatteoGioioso
Author

> Your front end also should not live in the public folder of Strapi and should have its own virtual host config.

What do you mean? My frontend lives in a separate container.

@MatteoGioioso
Author

I tried with a subdomain and it's still not working; I get more or less the same error.

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://frontend:3000;
    }
}

server {
    listen 80;
    server_name admin.example.com;

    location / {
        proxy_pass http://api:1337;
    }
}

Everything loads correctly, but when I try to navigate to admin.example.com/admin it cannot find main.js and vendor.dll.

@MatteoGioioso
Author

MatteoGioioso commented Mar 17, 2019

Ok, after 2 days I have solved it 🙏.
Strapi was unable to find the frontend files because I had kept the host as localhost.

 api:
    build: strapi/
    environment:
      - HOST=www.subdomain.yourdomain.com # <-- change to your host
      - NODE_ENV=production
    volumes:

and then in your Strapi config server.json:

{
  "host": "www.subdomain.yourdomain.com", <-- change to your host
  "port": "${process.env.PORT || 1337}",
  "production": true,
  "proxy": {
    "enabled": false
  },
 ...
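
For completeness, a sketch of how the other services sit next to it in the same docker-compose.yml (image, paths and service names here are illustrative; adjust to your setup):

  frontend:
    build: frontend/
    environment:
      - NODE_ENV=production

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
    depends_on:
      - api
      - frontend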

@ilovett

ilovett commented Apr 27, 2019

fuck finally! thank you @MatteoGioioso

@hzrcan

hzrcan commented Apr 14, 2020

@MatteoGioioso, where did you change this? In which file?

api:
  build: strapi/
  environment:
    - HOST=www.subdomain.yourdomain.com # <-- change to your host
    - NODE_ENV=production
  volumes:

@MatteoGioioso
Author

@hzrcan that should be in the docker-compose.yml.

@mb89

mb89 commented Jul 15, 2020

@MatteoGioioso Thanks! But is a subfolder impossible? I'd still prefer that.

@needleshaped
Contributor

needleshaped commented Jul 15, 2020

@mb89 There is official documentation about it:

It involves both Nginx and Strapi configuration, but I haven't yet succeeded in making it work.

EDIT: Sub-Folder-Split still didn't work for me, but Sub-Folder-Unified did! 👍
Strapi v3.0.5, Docker Compose, Nginx v1.19.1. The configuration is exactly the same as on the official page.

@mb89

mb89 commented Jul 15, 2020

@needleshaped Thanks, I had missed the tabs! I got Sub-Folder-Split to work as well.
You can't use /admin unfortunately, but /dashboard or /cms works.

/etc/nginx/sites-available/strapi.conf:

...
# Strapi API
location /api/ {
    rewrite ^/api/(.*)$ /$1 break;
    proxy_pass http://localhost:1337;
...
# Strapi Dashboard
location /cms {
    proxy_pass http://localhost:1337/cms;
...

config/server.js:

...
url: 'https://example.com/api',
admin: {
    url: 'https://example.com/cms',
},
...
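
For reference, a sketch of what the whole config/server.js can look like for this split (the host/port values are assumptions; example.com, /api and /cms are the same placeholders as above):

// config/server.js (sketch)
module.exports = {
  host: '0.0.0.0',
  port: 1337,
  url: 'https://example.com/api',
  admin: {
    url: 'https://example.com/cms',
  },
};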

@vinkovsky

vinkovsky commented Sep 24, 2020

> I don't recommend using subfolder proxying. Your Strapi should be on its own sub-domain, like api.example.com.
>
> Your front end also should not live in the public folder of Strapi and should have its own virtual host config.

Hi! I am a beginner with Strapi, could you please explain a bit more? What do you mean by "Your front end also should not live in the public folder of Strapi and should have its own virtual host config"?

@derrickmehaffy
Member

> I don't recommend using subfolder proxying. Your Strapi should be on its own sub-domain, like api.example.com.
> Your front end also should not live in the public folder of Strapi and should have its own virtual host config.
>
> Hi! I am a beginner with Strapi, could you please explain a bit more? What do you mean by "Your front end also should not live in the public folder of Strapi and should have its own virtual host config"?

This issue is quite old and we offer some sample configs for working with a split frontend and backend here: https://strapi.io/documentation/v3.x/getting-started/deployment.html#optional-software-guides

But to answer your question: Strapi is a Headless CMS, so it's designed to run on its own without also trying to serve a frontend. Depending on your frontend, it's better to offload that to an actual web server (Nginx, Apache, Caddy, Traefik, etc.). Some frontend frameworks can also run as a service (aka SSR, Server-Side Rendering) and likewise should be proxied by Nginx.

@jurij

jurij commented Dec 22, 2020

I don't know if this helps anyone, but it took me 6 hours to figure out that you need to delete the /build and/or .cache folder after changing config/server.js#admin.url to https://localhost/dashboard.

@derrickmehaffy
Member

> I don't know if this helps anyone, but it took me 6 hours to figure out that you need to delete the /build and/or .cache folder after changing config/server.js#admin.url to https://localhost/dashboard.

An alternative is to use yarn build --clean or npm run build -- --clean, which will do that for you.

https://strapi.io/documentation/developer-docs/latest/guides/update-version.html#rebuild-your-administration-panel
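
In other words (sketch; the manual variant is roughly what the comment above did by hand):

# rebuild the admin panel after changing admin.url
yarn build --clean        # or: npm run build -- --clean
# roughly equivalent to clearing the previous build output first:
rm -rf ./build ./.cache && yarn build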

@mattpr
Contributor

mattpr commented Feb 26, 2021

I know this is an old issue...but ouch! I can confirm this is still the case.

strapi build does a whole webpack build and wants to hard-code all the urls and paths into the build.

This means (option 1) that if you are trying to manage your builds/deploys with a CI, you need to make your FULL production ENV (secrets included, I suppose) available to the CI, because you don't know as a user which of the values in ./config/ are getting baked into the build. The CI needs to know ahead of time where this Strapi will be deployed so it can provide the right ENV for the build, and the CI build ends up specific to a target machine/env.

Or (option 2) you just use the CI to do what it can (e.g. node CI testing and tar) and then have your provisioning code run the strapi build on the end server once the ENV has been exposed. This also means the CI probably needs to include all dev dependencies in the bundle going to the server. The long-running webpack build then blocks the rest of provisioning and leads to unpredictable go-lives. As a matter of best practice, we try never to do builds on production servers (including yarn install --frozen-lockfile or npm ci or building source packages). In Strapi's case it appears we have to deploy to the server with both production and dev dependencies in order to build (also something we try to avoid; in Strapi's case the dev deps are effectively production deps). This also means that we can't verify in the CI that the build will work before deploying to the production server, which can easily lead to deployment of broken builds we won't find out about until after deployment. So in this option a staging production server is mandatory as a way of testing that the builds even work (since we can't do this in the CI).

The last (option 3) is manual deployment: giving users SSH access to production to manually manage/update Strapi on the production boxes. For security and data-privacy reasons we strongly limit SSH access to production machines; the access that does exist is for "reading" (debugging), and users are not generally supposed to make changes, since there is a strong chance (if they even have access) that their changes will conflict with or be overwritten by our server management/provisioning code. So this option means that Strapi servers cannot be handled the same way as our other production API and web servers, which means losing out on a lot of automated monitoring, nginx stats, etc.

The underlying problem here (well, for us) is that config is hard-coded into the webpack build (maybe other things too?) at the strapi build stage, which requires the build machine to know all the details (secret production ENV) of the destination production server. I'm sure fixing this is more complicated than using directory-relative URLs in the source. Of course, when starting Strapi in production, if expected/required config (ENV vars loaded by files in ./config/) isn't there or doesn't validate, then I would expect the production service to error and not start. That is a production/provisioning error, not a CI/app-level error, in my opinion.

Another thing: apparently mounting Strapi under an nginx location block with a prefix (e.g. /api) is supported, but we have to do an nginx rewrite to strip the /api prefix from the URI? This is in the unified example but not called out anywhere as necessary or important. And apparently we need to do this even though we have already informed Strapi about the path where it is mounted (public url), so it should be able to manage without the rewrite, I would think? To add to the awesomeness: /ap gives an nginx 404, /api gives a Strapi 404, and /api/ gives the Strapi "hello world" page.

Am I missing something here about Strapi deployment/devops, or is this project really not compatible with devops environments? It is okay if Strapi is really designed for non-devops users who manually manage a small number of servers (e.g. the pm2 crowd), even if that means it isn't really compatible with enterprise deployment environments and controls.

Update

  • We don't want a tight coupling between dev/git/CI and production configs.
  • We don't want developers to have to login to the machine and make changes (e.g. running strapi build).
  • The webpack build process takes almost 2 minutes and we don't want this holding up the rest of our provisioning processes. We have a "no building in production" rule.

And we are just trying to avoid having Strapi modify state (db, uploads, etc.) outside of known locations that we can back up/restore/migrate. We are getting closer. Here is what we came up with, for anyone else who has the same concerns or requirements...

decoupling CI build from deployment and target server

The webpack build needs to happen on the CI (for us) and needs to know its own base path as well as the path to get to the API.

Strapi assumes that if your public admin url isn't absolute (i.e. doesn't start with http...), it should be treated as relative to the api url (e.g. /admin gets turned into /api/admin, whereas https://example.com/admin will not be messed with).
There is a whole lot of trimming of slashes (leading/trailing) and re-prepending of slashes going on, as well as special-case logic based on whether or not you pass an absolute url.

Our solution on this was to patch node_modules/strapi-utils/lib/config.js in the CI after {yarn,npm} install but before strapi build.
In particular this line: https://github.com/strapi/strapi/blob/v3.5.2/packages/strapi-utils/lib/config.js#L38
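
Roughly, the CI steps look like this (sketch; the patch file and tarball name are our own conventions, not anything Strapi ships):

# CI sketch: patch strapi-utils between install and build
yarn install --frozen-lockfile
patch node_modules/strapi-utils/lib/config.js < patches/strapi-utils-config.patch
NODE_ENV=production yarn build
tar czf strapi-build.tar.gz --exclude=.git .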

This allows us to avoid having to tell our CI or the strapi build the absolute urls where the resulting build will be deployed. And it allows us to re-use the same build (tarball) for multiple deployments at different urls as long as they all use the same uri convention (e.g. /admin and /api).

Of course, the target server that consumes/deploys the tarball needs to know about this convention, and nginx needs to be set up to match. So there is some coupling between CI and production, since not everything can be passed as runtime config (the build config). The strapi systemd service (which passes env and runs strapi start) needs to be informed as well.

This allows us to use the same tarball (domain-name independent) for both staging and production servers and not have to run strapi build at all on target machines.

After looking through the code, the only ENV needed for strapi build is server.url, server.admin.url and NODE_ENV=production. The Strapi server config is also set with ENV using the env helper described in the docs (url: env('PUBLIC_URL', 'https://www.example.com/public-url-not-set'),).
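
For completeness, the shape of that config (sketch; PUBLIC_URL and ADMIN_URL are our own variable names, set by the CI for the build and by the service unit at runtime):

// config/server.js (sketch)
module.exports = ({ env }) => ({
  host: env('HOST', '0.0.0.0'),
  port: env.int('PORT', 1337),
  url: env('PUBLIC_URL', 'https://www.example.com/public-url-not-set'),
  admin: {
    url: env('ADMIN_URL', '/admin'),
  },
});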

moving uploads dir (./public) out of app dir.

I'm new to strapi (few days) and just learning what it does...some of these things are probably obvious if you already know how strapi works.

Uploads (local uploads plugin) are by default saved relative to the app path (./public). When a new build (the full app dir) comes from the CI, that gets blown away (bye-bye image files). So we also moved this data dir outside of the app dir.
Now it can be managed separately from the deployment of the Strapi build/app. It wasn't obvious how to do this, but in the end we found we could override it via ./config/middleware.js and then set DATA_DIR via env vars.

// config/middleware.js
// WARNING! This file does not support env helper like other files in ./config
// have to set defaults ourself.

module.exports = {
  settings: {
    public: {
      path: process.env.DATA_DIR || './public',
    },
  },
};
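
On the target machine the runtime env (including DATA_DIR) is then just injected by the systemd unit; a sketch with our own variable names and illustrative paths:

# /etc/systemd/system/strapi.service (excerpt, sketch)
[Service]
Environment=NODE_ENV=production
Environment=PUBLIC_URL=https://www.example.com/api
Environment=ADMIN_URL=https://www.example.com/admin
Environment=DATA_DIR=/var/lib/strapi-data/public
WorkingDirectory=/opt/strapi
ExecStart=/usr/bin/yarn start
Restart=on-failure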

So now we are in a state where we only have to manage the DATA_DIR and db backups/restores, and the Strapi application folder can be managed completely with git, including CI builds.

future problems

plugins

We are a little worried about people potentially installing plugins using the production UI instead of having site developers install/test them on their own machines and then commit them to our strapi project staging branch...because the next deployment from the CI will also wipe out any plugins that have been added (the entire app dir/build is deployed as a unit).

I noticed that plugin detection seems to be done by looking at package name prefixes in package.json, so it might be better to have installed-plugin state stored in the database and to have a configurable PLUGIN_DIR (like DATA_DIR for uploads) that can live outside the main app dir.

Alternatively, or in addition, an easy config switch to turn off the marketplace (no plugin installs allowed when set) would be great. It would just disable the UI feature; developers would have to manage plugin installation and testing and then commit that config to git.

auto-update

During CI (again after install but before strapi build) we customize a few things including turning off update notification.

/bin/echo -e \
      "export const LOGIN_LOGO = null;\nexport const SHOW_TUTORIALS = false;\nexport const SETTINGS_BASE_URL = '/settings';\nexport const STRAPI_UPDATE_NOTIF = false;" \
      > node_modules/strapi-admin/admin/src/config.js

We don't want Strapi to behave like WordPress, where the only way to track/manage changes is to make frequent backups of a giant directory and diff them. So we expect that updates will be done by developers locally, tested, and then committed to git, where they will be built/deployed. We set STRAPI_UPDATE_NOTIF = false; but it would be nice to be confident that the ability to update via the api/admin UI is really disabled.

telemetry

Despite setting STRAPI_TELEMETRY_DISABLED=true for both the strapi build on the CI (admin webpack build) and the Strapi service/app (API) on the destination server, we still saw a lot of calls going out to analytics.strapi.io. We added another patch to the CI to simply replace analytics.strapi.io with analytics.strapi.io.test in the files where we had located analytics calls.
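
The patch itself is just a text replacement in the CI; a sketch (the grep targets are simply where we happened to find the calls, this is not documented anywhere):

# CI sketch: blunt replacement of the analytics host in the installed packages
grep -rl 'analytics.strapi.io' node_modules/strapi-admin node_modules/strapi \
  | xargs sed -i 's/analytics\.strapi\.io/analytics.strapi.io.test/g'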

Perhaps that ENV var is outdated and we need to remove the UUID from package.json instead to accomplish this?
Or perhaps telemetry is different from the analytics calls?

@derrickmehaffy
Member

derrickmehaffy commented Feb 26, 2021

> Another thing: apparently mounting Strapi under an nginx location block with a prefix (e.g. /api) is supported, but we have to do an nginx rewrite to strip the /api prefix from the URI? This is in the unified example but not called out anywhere as necessary or important. And apparently we need to do this even though we have already informed Strapi about the path where it is mounted (public url), so it should be able to manage without the rewrite, I would think? To add to the awesomeness: /ap gives an nginx 404, /api gives a Strapi 404, and /api/ gives the Strapi "hello world" page.

This is caused by how koa-router looks up routes: it does regex matching, and with a prefix you would have to manually update the prefixes for every model via its routes.json.

We opted not to mess with the Koa router to handle sub-folder-based proxying.
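
To illustrate the kind of change that would require: every entry in each model's routes.json would need the prefix added by hand, e.g. (sketch; the restaurant model and the /api prefix are hypothetical):

{
  "routes": [
    {
      "method": "GET",
      "path": "/api/restaurants",
      "handler": "restaurant.find",
      "config": { "policies": [] }
    }
  ]
}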

@mattpr
Contributor

mattpr commented Feb 27, 2021

> We opted not to mess with the Koa router to handle sub-folder-based proxying.

Thanks for the clear explanation.

A little feedback as a new-to-Strapi (3 days) user reading the docs to do an enterprise deployment. Just some things that I think are important requirements or warnings that could be called out better in the docs. Sorry if I missed them and they are already there.

  • when using subdir-{unified,split}
    • nginx rewrite to strip subdir from uri is mandatory (is it for admin or just for api?).
      • This is in the nginx example but not mentioned, explained or called out as required/important in the docs (that I could find).
    • API uri must have a trailing slash (e.g. /api/ not /api). Shouldn't be a problem since API calls are generated by code, not users, but still an important detail to mention.
  • Using subdirs is not recommended because of the complexity of making them work (nginx rewrites, etc.) with the koa router; instead, use a dedicated subdomain. So for a production site with staging you might have 4 endpoints (DNS, SSL certs and host configs), e.g. staging-api.example.com, staging-www.example.com, api.example.com, www.example.com.
  • strapi build must be run with the complete production config/environment in order to build properly. So this config/env needs to be available in the CI, or the build needs to be run on a configured target production server.

@mattpr
Contributor

mattpr commented Feb 27, 2021

> To add to the awesomeness: /ap gives an nginx 404, /api gives a Strapi 404, and /api/ gives the Strapi "hello world" page.

So the doc example (subfolder-unified) uses this rewrite in the /api nginx location block...

rewrite ^/api/(.*)$ /$1 break;

Does anyone see an issue with a small modification that makes the trailing slash optional on the original uri, while keeping the leading slash mandatory on the rewritten one?

rewrite ^/api/?(.*)$ /$1 break; 

Unless I am missing something, the tweaked rewrite will produce the following, which I can't imagine breaking anything in terms of Strapi's uri expectations. Not a big deal, just nice not to have to worry about whether there is a trailing slash or not. Not sure if there is a meaningful performance penalty; I generally avoid rewrites where I can.

Orig. URI        Rewritten
/api             /
/api/            /
/api/foo         /foo
/apifoo          /foo
/api?foo=bar     /?foo=bar
/api/?foo=bar    /?foo=bar
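
In context, the tweaked rule dropped into the sub-folder-unified /api location block would look roughly like this (the proxy_set_header lines are the usual forwarding/websocket boilerplate, not copied verbatim from the docs):

# Strapi API (sub-folder unified), with the tweaked rewrite
location /api {
    rewrite ^/api/?(.*)$ /$1 break;
    proxy_pass http://localhost:1337;
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass_request_headers on;
}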

@derrickmehaffy
Member

@mattpr No, that makes perfect sense. Can you open a PR on our documentation repo? https://github.com/strapi/documentation

Please ignore the contribution guide and use the following PR branch for your base, as we are planning to merge it later this week and it's part of a massive restructure project: #154

@derrickmehaffy
Member

I'm going to transfer this issue over to the docs repo and reopen it pending that suggestion for the configs. You may also want to check the HAProxy rewrite as well.

@derrickmehaffy derrickmehaffy reopened this Mar 1, 2021
@derrickmehaffy derrickmehaffy transferred this issue from strapi/strapi Mar 1, 2021
@derrickmehaffy derrickmehaffy changed the title Cannot load Strapi admin page on sub-folder with Nginx Adjust rewrite logic on sample proxy configurations Mar 1, 2021
@mattpr
Contributor

mattpr commented Mar 2, 2021

Done. #157

@pwizla pwizla added the target: v3 Documentation PRs/issues targeting content from docs-v3.strapi.io (v3 branch) label May 16, 2022
@meganelacheny
Collaborator

Closing this issue; fixed via pull request #157.

Thank you for your contribution!

@meganelacheny meganelacheny self-assigned this May 19, 2022
@vitalijalbu

> localhost

Is www required if using a subdomain?

@derrickmehaffy
Member

> localhost
>
> Is www required if using a subdomain?

No, it's not.
