When the scrapydweb command is launched for the first time, it prints:
```
>>> ScrapydWeb version: 1.1.0
>>> Use 'scrapydweb -h' to get help
>>> Main pid: 23712
>>> Loading default settings from d:\virtualenvs\foo--pw-wxg0\lib\site-packages\scrapydweb\default_settings.py
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
The config file 'scrapydweb_settings_v7.py' has been copied to current working directory.
Please add your SCRAPYD_SERVERS in the config file and restart scrapydweb.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
```
We can see that scrapydweb already loads the default settings file and is able to start up completely. Why generate another one in the current working directory? This feels redundant and unfriendly: if I switch directories (I'm in a virtual environment) and run the command again, it generates yet another configuration file.
Since a default configuration already exists, I think it should simply be used as-is. Custom settings could be passed via command-line parameters, and of course a configuration file could also be specified with a parameter, e.g. `--setting settings.conf`.
The default_settings.py file inside the site-packages directory would be overridden if ScrapydWeb is reinstalled or updated. Therefore the customized configurations would be lost if they are stored in the default_settings.py file, which would be painful if you have added hundreds of Scrapyd servers.
Also note that the content of the default_settings.py file would be updated in order to support new features along with a new release. That's why the name of the scrapydweb_settings_vN.py file comes with a version number.
Actually, if the scrapydweb_settings_vN.py file is not found in the current directory, ScrapydWeb would search for it in the parent directories, all the way up to the root directory. So I think it's easy to work around even in the case of a virtual environment -- just move the scrapydweb_settings_vN.py file to your user home directory (or D:/ in your case).
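That upward search can be sketched roughly as follows (the function name is mine, not ScrapydWeb's API; the settings filename matches the one generated above):

```python
import os

def find_settings(filename="scrapydweb_settings_v7.py", start=None):
    """Walk from `start` (default: the current working directory) up
    toward the filesystem root, returning the full path of the first
    `filename` found, or None if it does not exist anywhere upward."""
    path = os.path.abspath(start or os.getcwd())
    while True:
        candidate = os.path.join(path, filename)
        if os.path.isfile(candidate):
            return candidate
        parent = os.path.dirname(path)
        if parent == path:  # reached the root directory
            return None
        path = parent
```

So a settings file placed in the home directory is picked up from any subdirectory beneath it, which makes the per-directory copies unnecessary.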
If specifying the configuration file on the command line every time is too troublesome, you could borrow the way Scrapyd loads its configuration file from a fixed list of well-known locations.
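A minimal sketch of such a fixed-location lookup, in the spirit of Scrapyd's config loading (the specific paths and the `SCRAPYDWEB_SETTINGS` environment variable here are my own illustrative assumptions, not part of ScrapydWeb):

```python
import os

def locate_settings(candidates=None):
    """Return every existing settings file from a fixed candidate list,
    in priority order: later entries are meant to override earlier ones."""
    if candidates is None:
        candidates = [
            "/etc/scrapydweb/scrapydweb_settings_v7.py",         # system-wide
            os.path.expanduser("~/.scrapydweb_settings_v7.py"),  # per-user
            "scrapydweb_settings_v7.py",                         # current dir
            os.environ.get("SCRAPYDWEB_SETTINGS", ""),           # explicit override
        ]
    return [p for p in candidates if p and os.path.isfile(p)]
```

With this scheme the file would be found no matter which directory the command is run from, so nothing needs to be generated into the current working directory.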