Use Soup for downloads and set timeouts #59

Merged
merged 5 commits into sugarlabs:master on Jul 12, 2013

Conversation

@dsd commented Jul 12, 2013

Downloading via GIO uses gvfs internally. Unfortunately gvfs does not
set or handle connection timeouts, meaning that bad connectivity can
lead to a total hang.

Fixing gvfs is not easy:
https://mail.gnome.org/archives/gvfs-list/2013-July/msg00000.html

And while looking at the bug, I realised just how much overkill this
is. gvfs runs in a separate process and does the HTTP communication,
communicating back to gio over dbus, and has a few layers of abstraction.

The gio developers on IRC recommended just to use libsoup directly,
which is not hard. And we can set timeouts appropriately in order to
handle bad connectivity.

replaces #57
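As a rough illustration of the approach described above, a direct libsoup download with timeouts set might look like this from Python via GObject introspection (a sketch assuming PyGObject and libsoup 2.x; the function name and timeout value are illustrative, not the PR's actual Downloader code):

```python
# Sketch: a blocking download through libsoup with timeouts enabled,
# so a dead connection errors out instead of hanging forever.
# Assumes PyGObject with libsoup 2.x typelibs installed.
def soup_download(url, timeout=30):
    import gi
    gi.require_version("Soup", "2.4")
    from gi.repository import Soup

    session = Soup.Session()
    # 'timeout' bounds blocking I/O on the connection (seconds, 0 = none);
    # 'idle-timeout' closes connections that sit idle too long.
    session.set_property("timeout", timeout)
    session.set_property("idle-timeout", timeout)

    message = Soup.Message.new("GET", url)
    session.send_message(message)  # synchronous request
    if message.status_code != Soup.Status.OK:
        raise IOError("HTTP %d for %s" % (message.status_code, url))
    return message.response_body.flatten().get_data()
```

Unlike the GIO/gvfs path, everything here happens in-process, with no D-Bus round trips.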

Daniel Drake added 5 commits July 12, 2013 09:38
Commit 1:

Downloading via GIO uses gvfs internally. Unfortunately gvfs does not
set or handle connection timeouts, meaning that bad connectivity can
lead to a total hang.

Fixing gvfs is not easy:
https://mail.gnome.org/archives/gvfs-list/2013-July/msg00000.html

And while looking at the bug, I realised just how much overkill this
is. gvfs runs in a separate process and does the HTTP communication,
communicating back to gio over dbus, and has a few layers of abstraction.

The gio developers on IRC recommended just to use libsoup directly,
which is not hard. And we can set timeouts appropriately in order to
handle bad connectivity.

This required some tweaks to the internal Downloader API - we can now
handle cancellation much more directly.
Commit 2:

Re-use the general downloader class for downloading activity
metadata, which means that interrupted connections will now error
and time out correctly.

This involved adding a "download to memory" interface to the
Downloader class, which has a new unit test.
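A "download to memory" interface of the kind this commit describes might look roughly like the following (a hypothetical sketch; class and method names are illustrative, not the actual Downloader API):

```python
# Hypothetical sketch of a "download to memory" interface: chunks
# arriving from the network are appended to an in-memory buffer
# instead of being written to a file.
class MemoryDownload:
    def __init__(self):
        self._chunks = []

    def feed(self, chunk):
        # Called once per chunk as data arrives from the connection.
        self._chunks.append(chunk)

    def get_data(self):
        # Present the whole download as a single buffer.
        return b"".join(self._chunks)

dl = MemoryDownload()
dl.feed(b"hello ")
dl.feed(b"world")
assert dl.get_data() == b"hello world"
```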
Commit 3:

Re-use the general downloader class for downloading activity
metadata, which means that interrupted connections will now error
and time out correctly.

This involved adding a get_size() request to the downloader, and a
got-chunk signal for users that don't require the entire download
presented as a single buffer.
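The got-chunk signal pattern mentioned above can be sketched like this (hypothetical names; the real Downloader uses GObject signals rather than this plain-Python stand-in):

```python
# Hypothetical sketch of a "got-chunk" style signal: consumers that do
# not need the entire download in one buffer register a handler and
# process each chunk as it arrives.
class ChunkedDownloader:
    def __init__(self):
        self._handlers = []

    def connect_got_chunk(self, handler):
        # Analogous to connecting to a GObject "got-chunk" signal.
        self._handlers.append(handler)

    def _emit_got_chunk(self, chunk):
        # Called internally as each network chunk arrives.
        for handler in self._handlers:
            handler(chunk)

received = []
d = ChunkedDownloader()
d.connect_got_chunk(received.append)
for chunk in (b"abc", b"def"):
    d._emit_got_chunk(chunk)
assert received == [b"abc", b"def"]
```

A streaming consumer like this never holds more than one chunk at a time, which is what makes the single-buffer interface unnecessary for it.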
Commit 4:

Increase code re-use by using the Downloader class instead of urllib2.
The advantage here is that Soup keeps connections around for a while,
so instead of re-creating TCP streams for each partial read, we just
re-use the existing ones. This results in a noticeable speedup when
doing metadata lookups.

Another advantage is that we can run the main loop while we are waiting
for a response. This means that there is no need to create a separate
thread for metadata lookups.
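The "run the main loop while waiting" idea maps onto libsoup's async API roughly as follows (a sketch assuming PyGObject and libsoup 2.x; names are illustrative, not the PR's actual code):

```python
# Sketch: queue a request on a Soup session and return to the GLib
# main loop immediately; a callback fires when the response arrives,
# so no separate thread is needed for metadata lookups. The session
# also re-uses its underlying connections across requests.
# Assumes PyGObject with libsoup 2.x typelibs installed.
def lookup_async(url, on_done):
    import gi
    gi.require_version("Soup", "2.4")
    from gi.repository import Soup

    session = Soup.Session()
    message = Soup.Message.new("GET", url)

    def _finished(session, message, user_data=None):
        # Runs on the main loop once the transfer completes.
        on_done(message.status_code,
                message.response_body.flatten().get_data())

    session.queue_message(message, _finished, None)
```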
Commit 5:

Closing the activity updater while the list of updates was visible,
then reopening it, resulted in an exception. Handle this
cancellation case properly.
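The cancellation case this fixes can be sketched in miniature (hypothetical names; the real code cancels an in-flight Soup request when the updater window closes):

```python
# Hypothetical sketch: once a download has been cancelled (e.g. the
# updater window was closed), its completion path must treat
# "cancelled" as a normal outcome rather than raising an exception.
class CancellableDownload:
    def __init__(self):
        self.cancelled = False
        self.result = None

    def cancel(self):
        self.cancelled = True

    def complete(self, data):
        if self.cancelled:
            # The late result is discarded cleanly instead of raising.
            return False
        self.result = data
        return True

d = CancellableDownload()
d.cancel()
assert d.complete(b"late data") is False
assert d.result is None
```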
@dnarvaez merged commit 116adc1 into sugarlabs:master on Jul 12, 2013