
ImportError: No module named local_settings #1

Open
Minghao1207 opened this issue Mar 28, 2023 · 5 comments

@Minghao1207

Hi,
I'm trying to run this project locally. After I cloned the project and ran "python manage.py migrate" in testserver, I got this error:

Traceback (most recent call last):
  File "manage.py", line 22, in <module>
    execute_from_command_line(sys.argv)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line
    utility.execute()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/__init__.py", line 356, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 327, in execute
    self.check()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 359, in check
    include_deployment_checks=include_deployment_checks,
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 62, in _run_checks
    issues.extend(super(Command, self)._run_checks(**kwargs))
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 346, in _run_checks
    return checks.run_checks(**kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/checks/registry.py", line 81, in run_checks
    new_errors = check(app_configs=app_configs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/checks/urls.py", line 16, in check_url_config
    return check_resolver(resolver)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/checks/urls.py", line 26, in check_resolver
    return check_method()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/urls/resolvers.py", line 256, in check
    for pattern in self.url_patterns:
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/functional.py", line 35, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/urls/resolvers.py", line 407, in url_patterns
    patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/functional.py", line 35, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/urls/resolvers.py", line 400, in urlconf_module
    return import_module(self.urlconf_name)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/Users/tmh/Desktop/Basta-COSI-master/testserver/testserver/urls.py", line 21, in <module>
    url(r'', include('main.urls', namespace='mainApp')),
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/conf/urls/__init__.py", line 50, in include
    urlconf_module = import_module(urlconf_module)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/Users/tmh/Desktop/Basta-COSI-master/testserver/main/urls.py", line 35, in <module>
    from main import views
  File "/Users/tmh/Desktop/Basta-COSI-master/testserver/main/views.py", line 27, in <module>
    from .local_settings import site_dict

Should this local_settings file be provided by myself? Thanks for your time in advance!

@SoheilKhodayari
Owner

Hi, thanks for your report!

Yes, as documented here in the README, the current version of the code requires that you provide your own settings file. You can find an example here. This file specifies the testbed, e.g., which websites should the tool test.
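For readers hitting the same ImportError: the file just needs to define the `site_dict` that `main/views.py` imports. The keys and values below are illustrative assumptions only; see the example file linked above for the project's actual schema.

```python
# local_settings.py -- minimal sketch; the keys shown here are assumptions,
# copy the linked example file for the real schema.
# main/views.py does `from .local_settings import site_dict`.
site_dict = {
    "1": {
        "name": "example-site",          # hypothetical site label
        "url": "https://example.com",    # hypothetical site under test
    },
}
```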

@Minghao1207
Author

Thanks a lot! The problem was solved after I provided the settings file :D I ran into several more problems afterwards:

According to 'Step 3: Selenium Webdrivers', the exact paths of the drivers need to be set in the function get_new_browser_driver in automator/main.py. My operating system is macOS, and I set the Chrome driver path like this:

chromedriver = '/usr/local/bin/chromedriver'

The Firefox driver is originally created with:

driver = webdriver.Firefox(capabilities=firefox_capabilities)

Is there a need to change this and pass the exact path of geckodriver to the function? And which driver does the project use to drive Edge? I only see chromedriver and geckodriver in the README.

And when running 'python crawler_and_avi.py get_cosi_attacks' in the automator folder, it gives me this error:

Traceback (most recent call last):
  File "crawler_and_avi.py", line 916, in <module>
    main_crawl_url_response_headers(siteId, chunk_size=3)
  File "crawler_and_avi.py", line 601, in main_crawl_url_response_headers
    stateModule = __import__("%s.Scripts.%s"%(siteId, STATES_SCRIPT_FILE), fromlist=["states"])
ImportError: No module named 23.Scripts.FreeNPremium

Do I also need to provide FreeNPremium.py in 'automator\1\Scripts'? Is there an example of this script?

Thanks again for your time! I really appreciate your help :D

@SoheilKhodayari SoheilKhodayari added the help wanted Extra attention is needed label Mar 30, 2023
@SoheilKhodayari SoheilKhodayari self-assigned this Mar 30, 2023
@SoheilKhodayari
Owner

Hi, you have to set the correct path of the browser driver you want to use. You may need to change the function according to your OS/environment to make it work. Please refer to the Selenium documentation for details.

For Edge, please check here. You may download an appropriate driver from here, and pass the executable path to webdriver.Edge(executable_path=executable_path).
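Putting the above together, a sketch of what the adapted function could look like on macOS. The driver paths are assumptions for a typical Homebrew-style install, and the `executable_path` keyword is the Selenium 3.x API (which matches this project's Python 2.7 era; Selenium 4 deprecates it in favor of a Service object):

```python
# Assumed driver locations for a typical macOS setup; adjust to your environment.
DRIVER_PATHS = {
    'chrome': '/usr/local/bin/chromedriver',
    'firefox': '/usr/local/bin/geckodriver',
    'edge': '/usr/local/bin/msedgedriver',
}

def get_new_browser_driver(browser):
    """Return a new Selenium driver for the given browser name."""
    if browser not in DRIVER_PATHS:
        raise ValueError('unsupported browser: %s' % browser)
    # Imported lazily so the path table can be inspected without Selenium installed.
    from selenium import webdriver  # requires `pip install selenium`
    path = DRIVER_PATHS[browser]
    # Selenium 3.x style: pass executable_path directly.
    if browser == 'chrome':
        return webdriver.Chrome(executable_path=path)
    if browser == 'firefox':
        return webdriver.Firefox(executable_path=path)
    return webdriver.Edge(executable_path=path)
```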

Regarding your second issue, please note that this tool takes so-called state scripts as input. These are scripts that load a particular state inside the browser before testing, e.g., logged in as a normal user, logged in as an admin, logged in as a premium user, etc. As a tester, you specify which states you want the tool to check for XS-Leaks.

You can name your state script file as you wish. An example is here. You should specify which state script the tool should use in the config.

Do I also need to provide FreeNPremium.py in 'automator\1\Scripts' ? Is there an example of this script?

The code is asking for a state script called FreeNPremium.py because that name has been specified in the config. Assuming you want to test for XS-Leaks between free vs. premium accounts, you need to create a script with two functions that log the browser in to those accounts, respectively.
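To make that concrete, here is an illustrative sketch of such a state script. The function names, the login page, and the form fields are all hypothetical; copy the structure of the linked example script for the real interface the tool expects. The `find_element_by_name` calls are Selenium 3.x style, matching this project's era:

```python
# FreeNPremium.py -- illustrative sketch of a state script.
# All names and selectors below are assumptions for a hypothetical site.

def login_free(driver):
    """Load the 'logged in as a free user' state into the browser."""
    driver.get('https://example.com/login')  # hypothetical login page
    driver.find_element_by_name('username').send_keys('free_user')
    driver.find_element_by_name('password').send_keys('free_password')
    driver.find_element_by_name('submit').click()

def login_premium(driver):
    """Load the 'logged in as a premium user' state into the browser."""
    driver.get('https://example.com/login')
    driver.find_element_by_name('username').send_keys('premium_user')
    driver.find_element_by_name('password').send_keys('premium_password')
    driver.find_element_by_name('submit').click()
```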

@Minghao1207
Author

Thanks so much for your help! I can successfully run "crawler_and_avi.py" now. But when I run crawler_find_urls.py, I get this error:

++ URL Crawling Started!
Traceback (most recent call last):
  File "crawler_find_urls.py", line 240, in <module>
    URLs = get_urls(siteId)
  File "crawler_find_urls.py", line 204, in get_urls
    currentAccountURLs = find_urls(siteId, state, spider_duration_mins=spider_duration_mins, ajax_duration_mins=ajax_duration_mins, Local=Local, IP=IP, CrawlFlag=CrawlFlag)
  File "crawler_find_urls.py", line 30, in find_urls
    psl = public_suffix_list(http=httplib2.Http(cache_dir), headers={'cache-control': 'max-age=%d' % (90000000*60*24)})
  File "/Users/tmh/Desktop/Basta-COSI-master/automator/publicsuffix.py", line 255, in public_suffix_list
    _response, content = http.request(url, headers=headers)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/httplib2/__init__.py", line 1694, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/httplib2/__init__.py", line 1434, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/httplib2/__init__.py", line 1360, in _conn_request
    raise ServerNotFoundError("Unable to find the server at %s" % conn.host)
httplib2.ServerNotFoundError: Unable to find the server at mxr.mozilla.org

I googled this error and found out it happens because the mxr source code is no longer available online, so I changed EFFECTIVE_TLD_NAMES in publicsuffix.py from
'http://mxr.mozilla.org/mozilla-central/source/netwerk/dns/effective_tld_names.dat?raw=1'
to
'https://chromium.googlesource.com/chromium/src/+/master/net/base/registry_controlled_domains/effective_tld_names.dat'
Afterwards I got this error:

++ URL Crawling Started!
Traceback (most recent call last):
  File "crawler_find_urls.py", line 240, in <module>
    URLs = get_urls(siteId)
  File "crawler_find_urls.py", line 204, in get_urls
    currentAccountURLs = find_urls(siteId, state, spider_duration_mins=spider_duration_mins, ajax_duration_mins=ajax_duration_mins, Local=Local, IP=IP, CrawlFlag=CrawlFlag)
  File "crawler_find_urls.py", line 30, in find_urls
    psl = public_suffix_list(http=httplib2.Http(cache_dir), headers={'cache-control': 'max-age=%d' % (90000000*60*24)})
  File "/Users/tmh/Desktop/Basta-COSI-master/automator/publicsuffix.py", line 255, in public_suffix_list
    _response, content = http.request(url, headers=headers)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/httplib2/__init__.py", line 1694, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/httplib2/__init__.py", line 1434, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/httplib2/__init__.py", line 1390, in _conn_request
    response = conn.getresponse()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1061, in getresponse
    raise ResponseNotReady()
httplib.ResponseNotReady

Running main.py gives me a similar error, so I guess I didn't use the right URL for EFFECTIVE_TLD_NAMES. Which URL should I use now that mxr.mozilla.org is no longer available?

Thanks again for all the help you have offered!

@SoheilKhodayari
Owner

How about using https://publicsuffix.org/list/effective_tld_names.dat? Does that solve your problem?
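For anyone landing here later: the publicsuffix.org URL serves the list as plain text, where blank lines and lines starting with '//' are comments. A minimal sketch of how such a file parses, independent of the project's publicsuffix.py (the sample data below is a tiny hand-made excerpt, not the real list):

```python
# Maintained home of the Public Suffix List (the old mxr.mozilla.org URL is dead).
EFFECTIVE_TLD_NAMES = 'https://publicsuffix.org/list/effective_tld_names.dat'

def parse_psl(text):
    """Return the suffix rules, skipping '//' comments and blank lines."""
    rules = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('//'):
            continue
        rules.append(line)
    return rules

# Tiny hand-made excerpt for illustration only.
sample = """\
// ===BEGIN ICANN DOMAINS===
com
// United Kingdom
uk
co.uk
"""
print(parse_psl(sample))  # -> ['com', 'uk', 'co.uk']
```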
