robots.txt blocks crawling of my components #16698
triplecasquette
announced in Archive
As the title says, the default robots.txt blocks the Google crawler from reading the data used in my Navbar and Footer components in Next.js. On pages, the build solves the problem, but as far as I know it's impossible to use getStaticProps in components in Next.js, so that data can't be fetched at build time. I still need my API to stay out of the index, but I want to allow crawling. Where does this robots.txt come from, and where can I configure it?
I'm self-hosted, and I found (I think) the public folder in /@directus/app/dist. Do I have to put a robots.txt there, or somewhere else? I don't know.
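To be concrete, something like this is what I'm after (just a sketch; the /items/ and /assets/ paths are my guesses at what should stay out of the index, and the sitemap URL is a placeholder):

```
# Allow crawling in general, but keep API endpoints out of the index.
User-agent: *
Disallow: /items/
Disallow: /assets/
Allow: /

Sitemap: https://example.com/sitemap.xml
```

I just don't know where such a file needs to live for a self-hosted Directus instance to serve it.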
Thanks for any reply!