
certifi-2015.9.6.1 and 2015.9.6.2 fail verification #26

Closed
ndparker opened this Issue Sep 7, 2015 · 69 comments

ndparker commented Sep 7, 2015

Hi,

Accessing https://amazon.com/ or Amazon web services gives:
SSLError: [Errno 1] _ssl.c:510: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

Any reason not to trust Verisign as a CA here? Anything wrong with Amazon's certs?

Thanks for any insights!

Lukasa (Member) commented Sep 7, 2015

I suspect that Amazon is passing you a cross-signed certificate. Per this blog post I wrote in April, cross-signed roots represent a problem for older versions of OpenSSL because versions of OpenSSL prior to 1.0.2 are not able to trust non-self-signed root certificates.

Also per that blog post, the most recent version of certifi primarily contains a bundle that does not include the cross-signed roots. If you try replacing your call to certifi.where() with certifi.old_where(), you should find that everything keeps working as it was before.

However, please note that at some time in the next few months certifi.old_where() will be removed entirely. Prior to this time you should either upgrade to OpenSSL 1.0.2 (which on my machine encounters no problems using the current certifi bundle with https://amazon.com/) or pressure Amazon to stop supplying a cross-signed root certificate.
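The fallback described above can be wired up defensively; a minimal sketch (hedged: `old_where()` only exists in certifi releases of this era, so the lookup below degrades to the default bundle once it is removed):

```python
import certifi

# Prefer the legacy bundle, which still contains the cross-signed
# 1024-bit roots that OpenSSL < 1.0.2 needs to build a chain; fall
# back to the default bundle on releases without old_where().
ca_bundle = getattr(certifi, "old_where", certifi.where)()

# Pass ca_bundle wherever a CA file path is expected, e.g.
# requests.get(url, verify=ca_bundle).
print(ca_bundle)
```

The `getattr` guard means the same code keeps working after the planned removal, at which point it silently uses the stricter default bundle.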

sigmavirus24 (Member) commented Sep 7, 2015

> pressure Amazon to stop supplying a cross-signed root certificate.

To be fair, I don't think any of these companies will stop supplying such a certificate any time soon. I'm also not entirely convinced that everyone can upgrade to OpenSSL 1.0.2 easily. Perhaps we should schedule the removal of old_where for next year instead of in a few months (specifically after April) so that ideally the new LTS distributions will have already upgraded to OpenSSL 1.0.2 by default.

Lukasa (Member) commented Sep 7, 2015

@sigmavirus24 That would be entirely reasonable. I already delayed the timeline from that blog post (the switch I claimed I'd make in May I instead made on Sunday), so I'm happy to further delay the removal of the cross-signed roots.

j0hnsmith commented Sep 7, 2015

I've just encountered this issue, with certifi==2015.9.6.1 and requests==2.7.0

import requests
resp = requests.get('https://bucket-name.s3.amazonaws.com/', verify=True)

Gives me SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)

After I uninstall certifi the above code runs without any errors.

I'm using Ubuntu 14.04, and if I try to update openssl via apt-get I get "openssl is already the newest version" (i.e. the newest version available in the 14.04 repo, patched for Heartbleed etc.)

# openssl version -a
OpenSSL 1.0.1f 6 Jan 2014
built on: Thu Jun 11 15:28:12 UTC 2015
platform: debian-amd64
options:  bn(64,64) rc4(16x,int) des(idx,cisc,16,int) blowfish(idx) 
compiler: cc -fPIC -DOPENSSL_PIC -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -m64 -DL_ENDIAN -DTERMIO -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -Wl,-Bsymbolic-functions -Wl,-z,relro -Wa,--noexecstack -Wall -DMD32_REG_T=int -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM
OPENSSLDIR: "/usr/lib/ssl"

Lukasa changed the title from "certifi-2015.9.6.1 excludes amazon's CA?" to "certifi-2015.9.6.1 and 2015.9.6.2 fail verification" on Sep 7, 2015

Lukasa (Member) commented Sep 7, 2015

Yup, requests by default uses the most recent trust bundle it can find, and cannot fall back to the old bundle. If you want to keep certifi installed, set verify=certifi.old_where().
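For a code base with many call sites, a `Session` keeps this to one line instead of editing every call (a sketch; the `getattr` guard assumes you may later upgrade to a certifi release without `old_where()`):

```python
import certifi
import requests

session = requests.Session()
# Every request made through this session now verifies against the
# legacy bundle, so individual call sites stay untouched.
session.verify = getattr(certifi, "old_where", certifi.where)()

print(session.verify)
```

Code that builds its own sessions elsewhere would still need the same treatment, which is where the environment-variable approach mentioned later in the thread becomes more attractive.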

ndparker commented Sep 7, 2015

This is not really a feasible solution, given a code base scattered with requests calls all over the place :/

Lukasa (Member) commented Sep 7, 2015

@ndparker And I take it upgrading to OpenSSL 1.0.2 is also unfeasible?

moser commented Sep 7, 2015

In your blog post you state that "By default, the more secure bundle will be used...". This might well be true, but I am afraid that not many people will go through the pain of getting OpenSSL 1.0.2 on their production systems (I'd assume that A LOT of people deploy to Ubuntu LTS or Debian Stable).
So what will they do? Either
(1) they pin certifi==2015.04.28 (and will still have that in 1-2 years time),
(2) they use old_where instead of the current (and will go for (1) when this is removed),
(3) or they say I-dont-have-time-for-this-ssl-stuff-now and throw in a verify=False.
Thus, I am afraid that the net global security gain is negative :-)

ndparker commented Sep 7, 2015

Yes. Ubuntu 14.04 is running here as well, and it's unlikely to be upgraded.

Lukasa (Member) commented Sep 7, 2015

@ndparker In that case, I recommend running python -c 'import certifi; print certifi.old_where()', and then assigning the result of that to the REQUESTS_CA_BUNDLE environment variable.
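For example (a sketch; the `getattr` guard keeps the command working on certifi releases that have already dropped `old_where()`, and the `command -v` lookup is only there because the interpreter may be installed as `python` or `python3`):

```shell
# Use whichever Python is on PATH.
PY="$(command -v python || command -v python3)"

# Resolve the legacy bundle path (falls back to the default bundle on
# certifi releases without old_where) and export it so requests picks
# it up process-wide, with no code changes.
REQUESTS_CA_BUNDLE="$("$PY" -c 'import certifi; print(getattr(certifi, "old_where", certifi.where)())')"
export REQUESTS_CA_BUNDLE

echo "$REQUESTS_CA_BUNDLE"
```

Because requests consults `REQUESTS_CA_BUNDLE` itself, this also covers third-party libraries that call requests internally.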

@moser Right now I've begun a planned deprecation with a time period of at least three months, and I have not yet committed to any specific date to remove the old_where solution. Users will continue to be able to get security updates and a 'weakened' bundle along with them.

However, a user who is using certifi presumably is hoping for somewhat regular updates for security purposes. It defeats those security purposes to keep old, weak certificates in the bundle. The 1024-bit roots were removed for a reason. Given that I have to provide both, it seems to me to be totally defensible to go secure-by-default: I would much rather say that I handed you an unloaded gun and a bullet and you loaded it, pointed it at your foot, and shot yourself, than say that I handed you a loaded gun pointed at your foot and said "don't forget to take the bullet out!".

There are many ways to resolve this problem, as detailed in this issue. I am open to committing to providing the weak bundle until Ubuntu 16.04 is released, which will presumably carry OpenSSL 1.0.2. I am also open to providing tooling to allow people to deliberately weaken a cert bundle provided by mkcert.org, if that is likely to be useful. However, I am not open to having certifi be less secure than it could be by default. That is not what tools like these are for.

If you don't want to get OpenSSL 1.0.2 on your system, that's fine. However, by default I'd like to make it easy to do the right thing and harder to do the wrong thing, and in this case the right thing is to remove the 1024 bit roots. Mozilla removed them one year ago this month: it's about time we caught up.

sigmavirus24 (Member) commented Sep 7, 2015

So, on the whole I agree with @Lukasa. I'm interested in understanding why people who are apparently installing requests from PyPI in production on LTS releases of Ubuntu/Debian are also installing certifi and yet expecting an insecure default. Ubuntu 14.04 ships with requests 2.2.1 by default in its archives, which will correctly link you to your system certificate bundle (and that bundle, by the way, will soon also exclude these certificates, if they haven't already been removed in the latest non-LTS releases).

I'd like to understand the reasoning personally because I'd like to help find a good solution if one exists.

ndparker commented Sep 7, 2015

For my part that's easy. We use virtualenv installs with a simple dependency on requests, and requests (or maybe some other dependency, I didn't really check) apparently pulls in certifi. Today I actually found out that this package exists at all :-)

ndparker commented Sep 7, 2015

@Lukasa right now I've pinned certifi to the old version in order to keep our production environment working at all. I still need to properly explore the alternatives.

sigmavirus24 (Member) commented Sep 7, 2015

@ndparker requests does not pull in certifi. Something else must be doing that for you.

ndparker commented Sep 7, 2015

OK, I dug around a bit. We have two packages requiring certifi here: tornado and ... setuptools (optional, but duh!).

moser commented Sep 8, 2015

I also thought that certifi was a dependency of requests; a quick look at the web pages misleads you there, hence my argument. Given this new info I agree with your point.

Btw, thanks for being so responsive to the issue :-)

Lukasa (Member) commented Sep 8, 2015

@moser No problem: it's really important for us to be responsive here because this issue will cause people problems. I'm not happy about it (I don't want to make anyone's life hard if I can possibly avoid it), but given the ever-increasing importance of TLS I think it's important that we do the best we can to keep it secure.

Just so everyone knows, it's my intention to leave this issue open for the time being to center discussion here. For that reason it might get noisy: if you are satisfied you no longer have any problems, I highly recommend you unsubscribe! 😁

❤️ to you all.

Lukasa referenced this issue Sep 8, 2015: Removal of Thawte? #27 (closed)

kalekseev commented Sep 8, 2015

paypal.com has the same problem, so libraries like django-paypal stop working.

>>> import requests
>>> requests.get('https://www.paypal.com/')
/tmp/env/local/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/tmp/env/local/lib/python2.7/site-packages/requests/api.py", line 69, in get
    return request('get', url, params=params, **kwargs)
  File "/tmp/env/local/lib/python2.7/site-packages/requests/api.py", line 50, in request
    response = session.request(method=method, url=url, **kwargs)
  File "/tmp/env/local/lib/python2.7/site-packages/requests/sessions.py", line 465, in request
    resp = self.send(prep, **send_kwargs)
  File "/tmp/env/local/lib/python2.7/site-packages/requests/sessions.py", line 573, in send
    r = adapter.send(request, **kwargs)
  File "/tmp/env/local/lib/python2.7/site-packages/requests/adapters.py", line 431, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: [Errno 1] _ssl.c:510: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

Lukasa (Member) commented Sep 8, 2015

@kalekseev As discussed above, this is intentional: please use one of the mitigation strategies I mentioned in the previous posts.

ipartola commented Sep 8, 2015

@Lukasa: I was just bitten by this issue. Unfortunately, I am running Ubuntu Trusty (14.04), which is the latest LTS release, and OpenSSL is pinned to 1.0.1f, so that upgrade won't work. Would it be possible to make any such future changes start out as opt-in first?

jdanbrown commented Sep 8, 2015

We were also bitten by this over the weekend. It broke some daily pipeline jobs that launch new nodes on each run, which pip install'd the latest certifi version. Our most feasible solution is to pin certifi==2015.4.28 until Ubuntu (12.04) upgrades OpenSSL to 1.0.2.

Lukasa (Member) commented Sep 8, 2015

> Would it be possible to make any such future changes start out as opt-in first?

Yes, in principle, but it's hard to do. For example, when do we switch over to opt-out? When the next Ubuntu LTS comes out? What makes Ubuntu so special? RHEL and Debian have LTS releases too. When no LTS release has an old OpenSSL? That would mean waiting 20 years (thanks, RHEL). If we ignore RHEL, that still means up to 3 years if Debian and Ubuntu end up out of sync.

The reality is that there is no option here that pleases everyone. Someone's code is broken, either explicitly (verification errors) or implicitly (they are operating insecurely and don't know it). IMO, explicit failures are better: at least they get dealt with. Implicit failures never get fixed and lead to CVEs and screwed users.

I am sorry that this issue hurt you, but mostly because we don't have a good communications channel to warn about this stuff. That's what I intend to look into: I want to make sure users can see this stuff coming.

Lukasa (Member) commented Sep 8, 2015

@jdanbrown 12.04 will never upgrade to OpenSSL 1.0.2. Use the environment variable proposed above.

Lukasa (Member) commented Sep 8, 2015

Also, 12.04 is more than 3 years old. I strongly advise upgrading; you are being left behind by the rest of the web.

ipartola commented Sep 8, 2015

@Lukasa: 12.04 is an LTS release just like 14.04 and 16.04 will be. Ubuntu supports LTS releases for at least 5 years, providing security and package updates. 12.04 is still a fine choice and is not EOL for another year and a half.

Regarding opt-in, I am not sure what time interval makes sense. I honestly don't know enough about the subject matter to say what is reasonable, since I am not a security researcher. That's why I use certifi: because I can't put together an equivalent thing on my own (and I appreciate the work that gets put into it). However, when a breaking change is released that brings production sites to a halt, it is difficult to justify trusting it.

You mention that explicit failures get fixed faster. In this case that won't happen. Since the next Ubuntu LTS release is not due for another 7 months, the solution we went with was simply to pin certifi to the previous version. It will likely stay this way until we are able to get OpenSSL 1.0.2, some time in 2016. I suspect most people are doing this right now, or are using certifi.old_where(), both of which effectively negate the increased security the new version provides. Pinning the certifi version will also block any future updates from going out, which will mean that lots of codebases become less secure, not more.

It just feels like there should be a better solution than dropping a huge breaking change over the weekend. Distros and browsers manage to stay on top of managing root certs without suddenly breaking sites like amazon.com, paypal.com, ups.com, etc. Is there some type of process they follow that allows them to provide such safety that certifi could mirror?

ipartola commented Sep 8, 2015

Thinking about this, is there a way to check whether openssl 1.0.2 is available and if it is use the new bundle? Otherwise use the old bundle and issue a warning?
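That check is possible from the standard library alone; a sketch of the idea (assumption carried over from earlier in the thread: the cross-signing problem is specific to OpenSSL builds older than 1.0.2):

```python
import ssl
import warnings

def needs_legacy_bundle():
    # OpenSSL 1.0.2 learned to build a chain that stops at any trusted
    # certificate, so cross-signed roots only trip older versions.
    return ssl.OPENSSL_VERSION_INFO[:3] < (1, 0, 2)

if needs_legacy_bundle():
    warnings.warn(
        "%s predates OpenSSL 1.0.2; consider the legacy certifi bundle"
        % ssl.OPENSSL_VERSION,
        stacklevel=2,
    )
```

One caveat with shipping this inside certifi itself: `ssl.OPENSSL_VERSION_INFO` reports the OpenSSL linked into the interpreter, which (as discussed later in the thread) may differ from the one pyOpenSSL uses when `requests[security]` is installed.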

jdanbrown commented Sep 8, 2015

@Lukasa No ungratefulness implied. I just wanted to voice our experience as one of your many consumers, and also our solution for the sake of others in the same boat. Thanks for the heads up about the env var; we'll check that out too.

ipartola commented Sep 8, 2015

One last thing: putting the workaround instructions into a warning would be great, maybe even with a link to this issue. Otherwise, I expect it will take other users quite some time to figure out what exactly is going on.

amolbarewar commented Sep 2, 2016

I am getting the same error:
SSLError: bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)
requests==2.9.1 certifi==2016.02.28

Lukasa (Member) commented Sep 2, 2016

@amolbarewar Please read this thread: we have discussed several solutions for your problem already. Please also update your certifi: we have shipped several new releases since the one you're using.

Lukasa closed this Sep 2, 2016

amolbarewar commented Sep 2, 2016

Oops, actually after downgrading certifi to 2015.04.28 I was still getting the error, which is why I asked again.
It's working fine now; it was an issue on my side.
Thanks

reallistic commented Sep 2, 2016

I just ran into this yesterday, and wanted to point out that the steps to build a static wheel have been merged. Please check them out here:
https://github.com/pyca/cryptography/blob/master/docs/installation.rst#static-wheels

mlissner commented Dec 23, 2016

I just ran into this issue on Travis-CI, where the installed version of OpenSSL is 1.0.1 (according to their docker image). I just want to mention this because it's an example of the kinds of issues we'll run into if we remove old_where.

I also want to echo other folks that have suggested that old_where be advertised more heavily. I guess it's not possible to know which version of OpenSSL is being used. Is it possible to know whether a cross-signed (or otherwise obsolete) certificate is causing the certificate to fail?

Thanks for all the detailed responses and workarounds in this thread. What a mess!

andylolz added a commit to andylolz/yournextrepresentative that referenced this issue May 10, 2017

Fix for Google social login SSL error
The following error is produced when using the Google social login:

  SSLError: [Errno bad handshake] [('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')]

... from requests.  I believe that this error is due to the servers we
(and Democracy Club) currently run having OpenSSL 1.0.1 and using a recent
version of the certifi Python package (which contains root
certificates).

A full explanation can be found here:

  certifi/python-certifi#26

(found via: http://stackoverflow.com/a/34665344/223092 )

Since upgrading OpenSSL to 1.0.2 will be a pain, pinning certifi to the
old version is the easiest way to fix this error for the moment.
Hopefully we will upgrade the distributions on our servers in the near
future to one that supplies OpenSSL 1.0.2 and then we can switch to a
more recent certificate bundle.

robertpeteuil added a commit to robertpeteuil/multi-cloud-control that referenced this issue Jul 2, 2017

Adjust requires for systems with Py < 2.7.9 and openssl < 1.0.2
- On these systems, “certifi” would generate a “certificate verify failed” error for SSL3_GET_SERVER_CERTIFICATE
  - this occurs when logging into GCP
- Workaround is to install “certifi” <= 2015.9.6.2 and old “requests” <= 2.7.0
  - newer versions of “requests” require newer versions of “certifi”
- Known OS with this situation is CentOS 7 with Python 2.7.5 and OpenSSL 1.0.1e
  - openssl distributed via OS package manager - so not upgradable via setup
- Discussion of this issue: certifi/python-certifi#26

mhozza added a commit to trojsten/web that referenced this issue Jul 3, 2017

Downgrade certifi to fix google oauth issue until we migrate to new servers with a new openssl.

Please revert this after migration.

See: certifi/python-certifi#26 for more
details.

Fixes 1061.

brondsem commented Aug 2, 2017

I've installed cryptography-2.0.2-cp27-cp27mu-manylinux1_x86_64.whl and this seems to confirm it does have a recent openssl bundled:

$ python -c 'from cryptography.hazmat.backends.openssl.backend import backend;print(backend.openssl_version_text())'
OpenSSL 1.1.0f  25 May 2017

However, python -c "import requests; requests.get('https://www.google.com')" still gives CERTIFICATE_VERIFY_FAILED. Am I missing something? Is there a difference with OpenSSL 1.1.0 vs 1.0.2 or something? The machine I'm on has OpenSSL 1.0.1e-fips on Centos 7, and these are the package versions:

certifi==2017.7.27.1
cryptography==2.0.2
requests==2.18.3

Thanks

alex (Contributor) commented Aug 2, 2017

Do you have all the other dependencies from requests[security] installed?

brondsem commented Aug 2, 2017

That was it, thanks! Ran pip install 'requests[security]' and it pulled in pyOpenSSL and all is good now.
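For context on why that fixed it: with the `requests[security]` extras installed, requests of this era swaps the stdlib TLS backend for a pyOpenSSL-backed one, so verification runs against the modern OpenSSL bundled with the `cryptography` wheel rather than the old system library. A guarded sketch of that mechanism (module path as it existed in requests 2.x; on newer stacks the import simply fails and the stdlib backend stays in place):

```python
try:
    # In requests 2.x this injects a pyOpenSSL-backed TLS implementation
    # into urllib3, replacing the interpreter's (possibly ancient) ssl
    # module for certificate verification.
    from requests.packages.urllib3.contrib import pyopenssl
    pyopenssl.inject_into_urllib3()
    using_pyopenssl = True
except ImportError:
    using_pyopenssl = False

print("pyOpenSSL backend active:", using_pyopenssl)
```

requests performed this injection automatically at import time when the `security` extras were present, which is why simply installing them was enough here.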

pmesgari commented May 22, 2018

Hi @Lukasa, this problem is happening again, now with the Google API.

All libraries are upgraded, the system clock is fine, the code is:

from .auth_service import get_credentials
from google.auth.transport.requests import AuthorizedSession
import json
# Calendar service


class CalendarService:
    def __init__(self, calendar_id=''):
        self.credentials = get_credentials()
        self.authed_session = AuthorizedSession(self.credentials)
        self.service_base_url = 'https://www.googleapis.com/calendar/v3/users/me/calendarList'
        self.calendar_id = calendar_id

    def get_service_url(self):
        return('{base_url}/{calendar_id}'.format(
            base_url=self.service_base_url,
            calendar_id=self.calendar_id))

    def get_calendar(self):
        response = self.authed_session.get(self.get_service_url())
        calendar = response.json()
        return calendar

    def get_calendar_list(self):
        response = self.authed_session.get(self.service_base_url)
        calendar_list = response.json()
        return calendar_list

The output of pip freeze is:

aniso8601==3.0.0
cachetools==2.1.0
certifi==2018.4.16
chardet==3.0.4
click==6.7
Flask==1.0.2
Flask-Cors==3.0.4
Flask-HTTPAuth==3.2.3
Flask-RESTful==0.3.6
google-auth==1.4.1
idna==2.6
itsdangerous==0.24
Jinja2==2.10
MarkupSafe==1.0
pyasn1==0.4.2
pyasn1-modules==0.2.1
pytz==2018.4
requests==2.18.4
rsa==3.4.2
six==1.11.0
urllib3==1.22
Werkzeug==0.14.1
C:\Workspace\Techroom\venvs\lazyhour\lazyhour\Scripts\python.exe C:/Workspace/Techroom/lazyhour-server/main.py
 * Serving Flask app "main" (lazy loading)
 * Environment: development
 * Debug mode: on
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 279-826-945
 * Running on http://localhost:5000/ (Press CTRL+C to quit)
C:\Workspace\Techroom\lazyhour-server\util\service_account.json
127.0.0.1 - - [22/May/2018 15:22:56] "GET /calendar/list HTTP/1.1" 500 -
Traceback (most recent call last):
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask\app.py", line 2309, in __call__
    return self.wsgi_app(environ, start_response)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask\app.py", line 2295, in wsgi_app
    response = self.handle_exception(e)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask_cors\extension.py", line 161, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask_restful\__init__.py", line 273, in error_router
    return original_handler(e)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask\app.py", line 1741, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask\_compat.py", line 34, in reraise
    raise value.with_traceback(tb)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask\app.py", line 2292, in wsgi_app
    response = self.full_dispatch_request()
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask\app.py", line 1815, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask_cors\extension.py", line 161, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask_restful\__init__.py", line 273, in error_router
    return original_handler(e)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask\app.py", line 1718, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask\_compat.py", line 34, in reraise
    raise value.with_traceback(tb)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask\app.py", line 1813, in full_dispatch_request
    rv = self.dispatch_request()
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask\app.py", line 1799, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask_restful\__init__.py", line 480, in wrapper
    resp = resource(*args, **kwargs)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask\views.py", line 88, in view
    return self.dispatch_request(*args, **kwargs)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask_restful\__init__.py", line 595, in dispatch_request
    resp = meth(*args, **kwargs)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\flask_httpauth.py", line 93, in decorated
    return f(*args, **kwargs)
  File "C:\Workspace\Techroom\lazyhour-server\main.py", line 45, in get
    return calendar_service.get_calendar_list()
  File "C:\Workspace\Techroom\lazyhour-server\util\calendar_service.py", line 25, in get_calendar_list
    response = self.authed_session.get(self.service_base_url)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\requests\sessions.py", line 521, in get
    return self.request('GET', url, **kwargs)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\google\auth\transport\requests.py", line 198, in request
    self._auth_request, method, url, request_headers)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\google\auth\credentials.py", line 121, in before_request
    self.refresh(request)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\google\oauth2\service_account.py", line 322, in refresh
    request, self._token_uri, assertion)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\google\oauth2\_client.py", line 145, in jwt_grant
    response_data = _token_endpoint_request(request, token_uri, body)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\google\oauth2\_client.py", line 106, in _token_endpoint_request
    method='POST', url=token_uri, headers=headers, body=body)
  File "C:\Workspace\Techroom\venvs\lazyhour\lazyhour\lib\site-packages\google\auth\transport\requests.py", line 124, in __call__
    six.raise_from(new_exc, caught_exc)
  File "<string>", line 3, in raise_from
    # Permission is hereby granted, free of charge, to any person obtaining a copy
google.auth.exceptions.TransportError: HTTPSConnectionPool(host='accounts.google.com', port=443): Max retries exceeded with url: /o/oauth2/token (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)'),))
