
Reduce /api/projects bandwidth requirements and latency #6

Closed
chrisgorgo opened this issue Aug 22, 2017 · 16 comments · Fixed by #86

Comments

@chrisgorgo
Contributor

When loading https://openneuro.org/dashboard/datasets while logged in as krzysztof.gorgolewski@gmail.com:

app.min.1b7726f0.js:sourcemap:54 Uncaught TypeError: Cannot read property 'body' of undefined
    at app.min.1b7726f0.js:sourcemap:54
    at a (app.min.1b7726f0.js:sourcemap:54)
    at app.min.1b7726f0.js:sourcemap:54
    at d.callback (app.min.1b7726f0.js:sourcemap:45)
    at d.crossDomainError (app.min.1b7726f0.js:sourcemap:45)
    at XMLHttpRequest.n.onreadystatechange (app.min.1b7726f0.js:sourcemap:45)

This seems like a critical bug.

@chrisgorgo chrisgorgo added the bug label Aug 22, 2017
@chrisgorgo
Contributor Author

Same when trying to access: https://openneuro.org/public/datasets (even if not logged in).

@nellh
Contributor

nellh commented Aug 22, 2017

I'm not currently seeing this, but it is an error we've seen before. The request to SciTran closes with this message:

core_1 | [pid: 14|app: 0|req: 44798/57658] 255.255.255.255 () {42 vars in 749 bytes} [Tue Aug 22 00:45:43 2017] GET /api/projects?metadata=true => generated 2 bytes in 1 msecs (HTTP/2.0 200) 3 headers in 110 bytes (2 switches on core 1)
core_1 | uwsgi_response_write_body_do() TIMEOUT !!!
core_1 | IOError: write error

I think that can only happen when the client aborts the request mid-transfer. The frontend records this error as the XHR request failing. The connection could be aborted by the browser, an extension, or your network. Did you run into this error on a bandwidth-constrained network?
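The failure mode can be reproduced in miniature: when the peer (here, the browser) closes its end mid-transfer, the server's next write fails, which is what uwsgi reports as `IOError: write error`. A minimal sketch using a local socket pair, not the actual uwsgi code:

```python
import socket

# Minimal sketch: the "client" closes its end mid-transfer, and the
# server's next write fails -- the same failure uwsgi logs as
# "IOError: write error" when the browser aborts the request.
server_sock, client_sock = socket.socketpair()
client_sock.close()  # the browser gives up on the slow response

error_seen = None
try:
    server_sock.sendall(b"x" * 65536)  # server still writing the body
except (BrokenPipeError, ConnectionResetError) as exc:
    error_seen = type(exc).__name__
finally:
    server_sock.close()

print("server write failed with:", error_seen)
```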

@chrisgorgo
Contributor Author

The bandwidth is indeed not superb, but also not too bad. I just checked again and I can load the list of public datasets. It does, however, take 20 seconds to load. For comparison on the same network loading the list of OpenfMRI datasets (https://openfmri.org/dataset/) takes 2 seconds.

I think you are right that the browser must've timed out while waiting for the response, leading to an error (and an infinite loading spinner), but I also feel that OpenNeuro takes much longer to load than other websites. Under some conditions (subpar bandwidth) this can lead to silent errors and a poor user experience.

Here's a benchmark of my current internet connection: http://beta.speedtest.net/result/6559072469

@nellh nellh changed the title Uncaught TypeError: Cannot read property 'body' of undefined Reduce /api/projects bandwidth requirements and latency Aug 22, 2017
@nellh
Contributor

nellh commented Aug 22, 2017

The endpoint having issues here returns about 5 MB on production and takes some time before it starts responding. We can reduce the size and also serve this data from a server-side cache to remove the major source of latency, which should eliminate this timeout.
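Back-of-the-envelope numbers illustrate why a ~5 MB response times out on a slow link (the link speeds below are illustrative assumptions, not measurements from this thread):

```python
# Rough transfer-time estimate for the ~5 MB /api/projects response.
# Link speeds are illustrative assumptions, not measurements.
payload_bytes = 5 * 1024 * 1024

def transfer_seconds(payload_bytes, link_mbps):
    """Seconds to move payload_bytes over a link of link_mbps megabits/s."""
    return payload_bytes * 8 / (link_mbps * 1_000_000)

for mbps in (1, 2, 10):
    print(f"{mbps:>3} Mbit/s -> {transfer_seconds(payload_bytes, mbps):.0f} s")
```

At a couple of megabits per second the transfer alone takes on the order of 20 to 40 seconds, before any server latency, which is comfortably past typical browser patience.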

I'll file another issue to not crash the frontend when the request fails.

@chrisgorgo
Contributor Author

Probably related: I also see slow performance from the https://openneuro.org/api/snapshots/projects?metadata=true endpoint, which is called when viewing individual datasets. On my current connection it took two minutes to load this data, leading to a suboptimal user experience.

@chrisgorgo
Contributor Author

Running into this issue once again on a new network. The speed is not awesome, but not terrible: http://beta.speedtest.net/result/6644972518. We should be able to support such networks.

@nellh
Contributor

nellh commented Sep 22, 2017

@chrisfilo #86 helps a lot with this but as all the dataset features depend on that change, I want to test it carefully before pushing it to production. I just updated dev with that fix. Let me know if you see any issues there, especially around uploading or running jobs.

@chrisgorgo
Contributor Author

I am still seeing this on dev with poor connection (http://beta.speedtest.net/result/6649453857).

@chrisgorgo
Contributor Author

Running into this on another connection: http://beta.speedtest.net/result/6652070957 (also on dev https://openneuro.dev.sqm.io/dashboard/datasets )

@chrisgorgo
Contributor Author

On dev, listing my datasets downloads 3.6 MB from https://openneuro.dev.sqm.io/api/snapshots/projects?metadata=false&root=true, which seems far too large.
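Much of a payload like that tends to be per-record detail the listing view never renders, so trimming each record to only the displayed fields shrinks the response considerably. A sketch of the idea; the field names here are hypothetical, not SciTran's actual schema:

```python
import json

# Hypothetical dataset records -- field names are illustrative only,
# not the actual SciTran schema.
datasets = [
    {
        "_id": f"ds{i:06d}",
        "label": "Example dataset",
        "public": True,
        "metadata": {"notes": "x" * 2000},  # bulky, unused by the listing
    }
    for i in range(100)
]

LISTING_FIELDS = ("_id", "label", "public")  # what the dashboard shows

def trim(record):
    """Keep only the fields the dataset listing actually renders."""
    return {k: record[k] for k in LISTING_FIELDS}

full = len(json.dumps(datasets).encode())
slim = len(json.dumps([trim(d) for d in datasets]).encode())
print(f"full: {full} bytes, trimmed: {slim} bytes")
```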

@chrisgorgo
Contributor Author

On another network (University of Birmingham) I was unable to list all of the public datasets: http://beta.speedtest.net/result/6659263359

@chrisgorgo
Contributor Author

I'm at the University of Glasgow today and unfortunately still running into this issue. Loading the list of public datasets on prod downloads 17 MB and then times out.

@chrisgorgo
Contributor Author

After trying a few times I managed to load the page; the total payload is 19.9 MB.

@chrisgorgo
Contributor Author

I'm back in California - the land of really fast internet (tm), but I found this neat guide for simulating slow connections: https://developers.google.com/web/tools/chrome-devtools/network-performance/network-conditions

@nellh
Contributor

nellh commented Oct 1, 2017

Lighthouse (mentioned in issue #49) automates that test with headless Chrome. I'm targeting the "page load is fast enough on 3G" audit to consider this fixed (10 seconds max load time for all views on Chrome's throttled 3G setting).
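Working backward from that audit target gives a rough payload budget. The 1.6 Mbit/s figure below approximates Lighthouse's throttled-3G downlink and is an assumption here, not a value taken from this thread:

```python
# Rough payload budget for "loads in under 10 s on throttled 3G".
# 1.6 Mbit/s approximates Lighthouse's simulated 3G downlink; it is
# an assumption, not a measurement from this issue.
downlink_bits_per_s = 1.6 * 1_000_000
budget_s = 10

budget_bytes = downlink_bits_per_s * budget_s / 8
print(f"max total payload: ~{budget_bytes / 1_000_000:.0f} MB")

# For comparison, the ~17 MB listing reported above would need:
print(f"17 MB needs ~{17 * 1_000_000 * 8 / downlink_bits_per_s:.0f} s")
```

That puts the whole-page budget around 2 MB, so the multi-megabyte dataset listing has to shrink by roughly an order of magnitude to pass.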

#108 and #109 are the biggest fixes.

@chrisgorgo
Contributor Author

I am planning to give a demo of the platform on Tuesday in China - how likely is it that this issue will be resolved by then? Fixing #109 would probably deliver the biggest performance boost. @JohnKael
