
AxiosError: syncTxFromExplorer failed: Network Error #4175

Closed
sentry-io bot opened this issue Dec 24, 2023 · 17 comments · Fixed by #4211
Labels
enhancement New feature or request

Comments

@sentry-io

sentry-io bot commented Dec 24, 2023

I've merged 4 network error issues into this one (you can see them by clicking the merged issues tab in Sentry).

Can we safely ignore Network Error and network request errors instead of reporting them?

Sentry Issue: GOODDAPP-5VXW

AxiosError: syncTxFromExplorer failed: Network Error
  at request.onerror (../node_modules/axios/lib/adapters/xhr.js:149:14)
  at sentryWrapped (../node_modules/@sentry/browser/esm/helpers.js:86:17)
@sentry-io sentry-io bot added the enhancement New feature or request label Dec 24, 2023
@decentralauren

@johnsmith-gooddollar we are prioritizing this for sprint 6 - can you confirm you are accountable and will own this ticket?

@johnsmith-gooddollar
Collaborator

@L03TJ3

@L03TJ3
Collaborator

L03TJ3 commented Jan 23, 2024

Should we log or not? @johnsmith-gooddollar @sirpy
I analyzed a bunch of the reported issues:

syncFromExplorer:

  • not finishing within 5 seconds triggers the warning (we could increase the timeout, but this probably just means the user has a slow connection)
  • a bunch of RPCs are rejected because of 'too many requests'

ceramic failed:

  • IPFS request failed? (not sure about this one)
  • reading streams failed (has a 5-second syncTimeout; related to a slow connection?)

RealmDb:

  • query failed after retries (cannot find a connection to RealmDb? the user doesn't even seem to be logged in)

Something with CodePush?: codepush-event

And the question: should we completely hide network errors from logging (#4192), or are there places where some of the above information is relevant?

@sirpy
Contributor

sirpy commented Jan 24, 2024

@L03TJ3 warnings are irrelevant; we are only talking about errors, which are logged to Sentry.
Usually I'd say network errors are related to user connection problems, but it might also be an issue with a specific URL/domain that is temporarily down, or maybe a typo.

@johnsmith-gooddollar @L03TJ3 if we can log the URL that failed it might make more sense, and perhaps also make sure we log each URL only once per session/refresh.
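
For illustration, a minimal sketch of that "once per session/refresh" idea, assuming a logger with a log.error(message, extra) shape; the helper name and signature are hypothetical, not the actual app code:

  // Hypothetical helper: report each failing URL at most once per page session.
  const reportedUrls = new Set()

  function logFailedUrlOnce(log, url, error) {
    if (!url || reportedUrls.has(url)) {
      return // already reported this URL during the current session/refresh
    }

    reportedUrls.add(url)
    log.error('network request failed', { url, message: error && error.message })
  }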

@L03TJ3
Collaborator

L03TJ3 commented Jan 25, 2024

@sirpy if any errors are caused by faulty RPCs and are not picked up anywhere else, the faulty RPCs will still show up in the stack trace on Sentry. No need to log them all, imo.

The ones regularly complaining about the rate-limit threshold are:
nodies
onfinality

at least those are the ones I noticed

@sirpy
Contributor

sirpy commented Jan 25, 2024

@L03TJ3 if, for example, we put in a wrong RPC address with a typo, the logger will ignore any errors from this RPC.
What makes you say that these errors will be logged elsewhere?

@L03TJ3
Collaborator

L03TJ3 commented Jan 25, 2024

@sirpy
What I basically meant was that if there is an error in the app caused by a faulty RPC, the stack trace would show all the RPCs that were attempted, and we could catch/resolve whether an RPC is failing because of a human error or something similar.

Besides that, what I now see is that every error/warning has been silenced by being wrapped with !isConnectionError, so whatever was meant is not useful anyway (a sketch of that wrapping is shown after the example below).

For reference, this is the kind of warning I would expect to see in a stack trace:

log.warn('send: got error', { message, error, willFallback })
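
The wrapping pattern referred to above looks roughly like this (a sketch only; isConnectionError and the function name are hypothetical stand-ins, not the app's actual code):

  // Sketch of the !isConnectionError wrapping: connection failures are skipped
  // entirely, everything else is reported as before.
  const isConnectionError = error =>
    /network error|network request failed|timeout/i.test((error && error.message) || '')

  function warnUnlessConnectionError(log, message, error, willFallback) {
    if (isConnectionError(error)) {
      return // silenced: network/connection problems never reach the log
    }

    log.warn('send: got error', { message, error, willFallback })
  }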

@sirpy
Contributor

sirpy commented Jan 26, 2024

@L03TJ3 warnings are irrelevant; we are only talking about errors, which are logged to Sentry. Usually I'd say network errors are related to user connection problems, but it might also be an issue with a specific URL/domain that is temporarily down, or maybe a typo.

@johnsmith-gooddollar @L03TJ3 if we can log the URL that failed it might make more sense, and perhaps also make sure we log each URL only once per session/refresh.

@johnsmith-gooddollar can we do that? Log the URL that failed, once per session?

@sirpy
Contributor

sirpy commented Jan 30, 2024

@johnsmith-gooddollar

@johnsmith-gooddollar
Collaborator

@sirpy @L03TJ3 After some additional research I've found it's not easy to do,
and even impossible in some cases, because we cannot always get the failing URL from the exception:

a) we can get the failed URL from an axios error
b) we can understand which HTTP provider failed

In the other cases (e.g. a failed fetch() or XMLHttpRequest() from some library like realmdb) we cannot obtain the URL that failed with the network error.

So what I've done (see the sketch after this list):

  • I added separate loggers to the axios instances we use and to the MultipleHTTPProvider: b3db632 224e35c
  • I suppressed logging of network errors globally over the whole app: 8af89dd
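
As an illustration of case (a), an axios response interceptor can recover the failing URL from the AxiosError's request config and report it through a dedicated logger. This is only a sketch with made-up names (axiosLog, reportedUrls), not the code from b3db632 / 224e35c:

  import axios from 'axios'

  // Hypothetical dedicated logger and dedupe set for this axios instance.
  const axiosLog = { error: (...args) => console.error(...args) }
  const reportedUrls = new Set()

  const api = axios.create()

  api.interceptors.response.use(
    response => response,
    error => {
      // An AxiosError keeps the request config, so the failing URL is available here.
      const url = error && error.config && error.config.url

      if (url && !reportedUrls.has(url)) {
        reportedUrls.add(url)
        axiosLog.error('axios request failed', { url, message: error.message })
      }

      return Promise.reject(error)
    },
  )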

@sirpy
Contributor

sirpy commented Jan 31, 2024

@johnsmith-gooddollar
I've added comments on the commits.
Please use a PR next time.

@johnsmith-gooddollar
Collaborator

@sirpy

this should be per url not host

Fixed

log.error?
but now it will not log the one-time errors about URLs

  • log.error now filters the errors it receives and does not log any network/connection errors
  • log.exception is an unchanged copy of log.error that logs everything; it is used in the places where we log URLs once per session (see the sketch at the end of this comment)

restore pcallback, it is overwriting another function called callback

Fixed
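
Roughly, the log.error / log.exception split described above could look like this (a hypothetical sketch, not the actual patch; sendToSentry stands in for whatever transport the real logger uses):

  // Stand-in transport; the real logger forwards to Sentry.
  const sendToSentry = (message, extra) => console.error(message, extra)

  const isConnectionError = error =>
    /network error|network request failed/i.test((error && error.message) || '')

  const log = {
    // unchanged copy of the old log.error: always reports,
    // used where URLs are logged once per session
    exception(message, extra = {}) {
      sendToSentry(message, extra)
    },

    // filters what it receives: network/connection errors are dropped silently
    error(message, extra = {}) {
      if (isConnectionError(extra.error)) {
        return
      }

      this.exception(message, extra)
    },
  }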

@johnsmith-gooddollar
Collaborator

@johnsmith-gooddollar
I've added comments on the commits.
Please use a PR next time.

#4211

@L03TJ3
Collaborator

L03TJ3 commented Feb 12, 2024

@vldkhh verify on prod

@vldkhh

vldkhh commented Feb 23, 2024

@L03TJ3
Collaborator

L03TJ3 commented Mar 1, 2024

@vldkhh I think this can be rechecked as well

@vldkhh

vldkhh commented Apr 11, 2024

verified on prod
(screenshot attached: image.png)

@vldkhh vldkhh closed this as completed Apr 11, 2024