"robots.txt" not working #5585
Comments
Hi @lcgkm! I think this was an oversight on our part, since it's part of the default in the UI framework we use. Can you describe your use case a bit more? I'm more inclined to remove it entirely, since if we added a redirect it would only be present when the UI is enabled.
Reference: a search engine like Google will check https://vault.example.net/robots.txt
Yep, I'm familiar with robots.txt. The file is part of the UI code, though (at least for now). Exposing Vault publicly is not generally recommended, so I was asking more about why you're doing that (if that's what's happening) so that we can solve the issue for you rather than jumping to the implementation. In the absence of a robots.txt, a crawler wouldn't be authorized, and there's no sitemap, so it wouldn't know which other endpoints to visit.
Yes, I totally agree with you. It's just an assumption: we assume someone made a mistake and, as a result, Vault is exposed to a public network. (This is not a present/real situation.)
We found a "robots.txt" file in Vault: https://vault.example.net/ui/robots.txt
File contents:
http://www.robotstxt.org
User-agent: *
Disallow: *
But it's not working, because a request for the URI "/robots.txt" returns a 404 error.
If "/robots.txt" returns 3XX STATE CODE and the location is "/ui/robots.txt" (or "robots.txt" file exists in root "/"), then it will be working.