CSV export results in empty file with no data #2157
Comments
I suspect this is happening because of a timeout -- it was reported earlier and self-resolved without me doing anything.
Hi! I tried exporting again twice, and I still get the same empty file every time.
I can confirm this issue was happening with v0.4.3 and is still present in v0.4.4. I was trying to export on the BookWyrm Social instance.
I can confirm I'm getting the exact same thing too.
Same here, and it's annoying. The only reason I wanted this is to migrate to another server: the one I'm on (https://bookwyrm.social/) isn't being taken care of and is down more than it's up for many people. I've been hitting issues for more than 9 months. I can't edit a group list some of us made; it works when I'm not logged in, but as soon as I log in it breaks with an nginx bad gateway, which usually happens when the service is unresponsive or the container is low on memory. I got tired of waiting for the admin to reply to us. I even offered to SSH in and fix it myself (I've been a systems engineer for 24 years) and to help troubleshoot, and I sent screenshots 2-3 times. I waited 9 months because I know this isn't a job and it's volunteer work, but now, and here's the point: I tried to get the CSV to migrate to another instance and it's empty :( I'm on Arch GNU/Linux and tried with qutebrowser, Firefox, and Chromium.
@r3k2 -- I just wanted to acknowledge how frustrated and unheard it sounds like you're feeling with this. It sounds like you've been impacted by bugs that interfere with basic usability, and you aren't getting a response that makes you feel like the problems you're encountering are being acknowledged and taken seriously. I imagine it's just as frustrating for everyone who's been reporting and +1'ing these as well!

When I get bug reports that I have trouble replicating and am not sure how to fix, I often find it overwhelming, and I'm not always sure how to get outside help in fixing them. As a result, there are some bugs, like the ones you've encountered, that I've been aware of but pretty stuck on how to address, so they languish in the issue tracker. The scale and demands of the project have gone way up, and my capacity to fix bugs and bring in support from others hasn't kept pace. I find that really daunting and a little scary. So, given that, it's super generous of you to have offered your expertise to help fix it, and I apologize for missing that you made that offer; although I try my best, I still fail to notice or forget things.

I believe the source of this specific bug is that the server is overloaded, so the query times out and returns an empty CSV. I have been focusing (with some greatly appreciated help) for the last couple of weeks on trying to improve the performance situation overall, which is related but doesn't directly address this specific thing not working. I'd be really open to suggestions on how a data export can be compiled in a way that is robust to server load and timeouts. It will also need some expanding to be an effective migration utility, but that's very doable.
I'm also frustrated by this bug, and I'm wondering if the solution is to make the export asynchronous. I have two examples in mind which might be useful as patterns:

- LibraryThing: when I exported from there to import into Bookwyrm, clicking the "Export all books" button didn't immediately give me a link; instead it showed some kind of "in progress" feedback. Trying it now, I got a blue bar that says "nnn books processed", updating every 100 books. It took a few minutes to do my whole library, and then it was replaced with a link in the format https://www.librarything.com/download_export_file.php?uniqueId=HexadecimalIDHere . Presumably that hex ID is a lookup in a table that connects to an actual file; clicking the link gets me a file with my username in the file name.
- Allen Coral Atlas: I actually maintain the downloads system for this one. Downloads can take a couple of hours to assemble in the worst case, so we pre-package common requests and then have a fully asynchronous system for everything else. It goes like this:

The ACA version is probably overkill here! But something like what LibraryThing does, perhaps also generating a notification that contains the link, seems like it could work.
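To make the asynchronous idea a bit more concrete, here is a minimal sketch of what a Celery-backed export might look like. The `ExportJob` model, `get_export_rows()` helper, and column names are all hypothetical, introduced only for illustration; none of them exist in the BookWyrm codebase:

```python
# Rough sketch of an asynchronous CSV export, in the spirit of the
# LibraryThing flow described above. ExportJob and get_export_rows are
# hypothetical names, not part of the BookWyrm codebase.
import csv
import io

from celery import shared_task
from django.core.files.base import ContentFile


def get_export_rows(user):
    """Placeholder for whatever query logic ultimately builds the rows."""
    yield from []


@shared_task
def export_user_csv(export_job_id):
    from bookwyrm import models  # assumes a hypothetical models.ExportJob

    job = models.ExportJob.objects.get(id=export_job_id)
    job.status = "in_progress"
    job.save(update_fields=["status"])

    # Build the whole file in the worker, where a slow query can't time
    # out the web request.
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["title", "author", "shelf", "review"])  # illustrative columns
    for row in get_export_rows(job.user):
        writer.writerow(row)

    # Store the finished file under an opaque ID so a download view can
    # serve it later, LibraryThing-style.
    job.result_file.save(
        f"bookwyrm-export-{job.user.username}.csv",
        ContentFile(buffer.getvalue().encode("utf-8")),
    )
    job.status = "complete"
    job.save(update_fields=["status"])
```

A matching view would create the `ExportJob`, enqueue the task with `export_user_csv.delay(job.id)`, and show an "in progress" state until `status` is `"complete"`, at which point it links to the stored file (or sends a notification, as suggested above).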
Hi, I'm @eldang's spouse, and I told him that I would take a crack at fixing this bug as a Christmas gift. Luckily, I do a lot of work on Django sites, so it's like... actually realistic of me to offer this. I haven't done a lot of digging here yet, so I might be way off-base, but something like https://pypi.org/project/django-import-export-celery/ might go a long way toward making this happen. I think what I'm likely to do is start with something tagged as "good first bug" to get my feet wet in this repo, too. So... hi! Hope I can help!
Thank you! Help on this ticket would be super appreciated, and I agree that using Celery seems like the right approach. I'd be happy to work with you and provide whatever help and explanations of weird codebase things I can.
@nein09, any luck with
@todrobbins I decided to start with what looks like a smaller piece of work first (#1678), and life has been getting in the way of even that, so I'm not quite there yet. Sadly, there isn't much to report yet, but I haven't forgotten about it.
Hi! As a note on possible causes here, I was able to get an export from my own bookwyrm instance just fine, but the export I got from bookwyrm.social was blank.
I made some interesting progress on this today. I've been playing with our gunicorn setup (since I'm pretty sure it's responsible for many of the loading delays on bookwyrm.social recently), and I bumped the timeout up to 600 seconds to test this (also increasing the timeout on the nginx side). This prevents the timeout, but there's still a problem: psycopg2 reports that the disk is full (long traceback omitted).
And indeed, the disk fills up while the export runs. It looks to me like what's happening is that one of the queries in the export view is causing Postgres to generate a very large temp file. While building this export in Celery is clearly the best long-term solution, I do think there's potential to rewrite the export code to fix this in the short term. I'm not sure why a generator was initially used (probably to save memory by avoiding storing the entire CSV in memory?), but unrolling it into a loop that generates the CSV in memory, rather than keeping the Query objects around and converting them to CSV one line at a time, would probably get exports working on bookwyrm.social for the moment.
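For reference, the timeout bump described at the start of this investigation corresponds to gunicorn's `--timeout` option (e.g. `--timeout 600`) plus something like the following on the nginx side. The 600-second value is just what was used for this test, and the upstream name and location block are assumptions about the deployment, so treat this as an illustration rather than a recommended configuration:

```nginx
# Illustrative only: let proxied requests run longer before nginx gives up.
# 600s mirrors the test value mentioned above, not a recommended default.
location / {
    proxy_pass http://web;      # upstream name is an assumption
    proxy_read_timeout 600s;
}
```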
The idea behind a streaming CSV export was to reduce the amount of memory used by avoiding building the entire CSV file in memory before sending it to the client. However, it didn't work out this way in practice: the query objects that were created to represent each line caused Postgres to generate a very large (~200MB on bookwyrm.social) temp file, not to mention that the memory used by the Query objects was likely similar to, if not larger than, that used by the finalized CSV rows. While we should in the long term run our CSV exports as a Celery task, this change should allow CSV exports to work on large servers without causing disk-space problems. Fixes: bookwyrm-social#2157
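A rough sketch of the shape of that change, building the whole CSV in memory and returning it as a plain `HttpResponse` instead of streaming it row by row. The column names and the `get_books_for_export` helper are illustrative, not the actual export code:

```python
# Sketch of generating the export CSV in memory instead of streaming it.
# Columns and the row source are illustrative; the real export view has
# its own field list and query.
import csv
import io

from django.http import HttpResponse


def get_books_for_export(user):
    """Placeholder for the real query over the user's books."""
    return []


def export_csv(request):
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["title", "author", "isbn_13", "rating", "review"])

    # Evaluate the queryset once, up front, rather than holding per-row
    # Query objects and converting them lazily while streaming.
    for book in get_books_for_export(request.user):
        writer.writerow([book.title, "", book.isbn_13, "", ""])

    response = HttpResponse(buffer.getvalue(), content_type="text/csv")
    response["Content-Disposition"] = 'attachment; filename="bookwyrm-export.csv"'
    return response
```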
Looks like #2713 didn't have exactly the effect I was hoping for — somehow the same disk space problem is triggered. Not sure if it's that I don't understand when Django
The fact that we're seeing a 500 error rather than an empty file is an improvement, I think!
Yeah, that's true :) My guess is that what's happening is that this `books` query is the problem:

```python
books = (
    models.Edition.viewer_aware_objects(request.user)
    .filter(
        Q(shelves__user=request.user)
        | Q(readthrough__user=request.user)
        | Q(review__user=request.user)
        | Q(comment__user=request.user)
        | Q(quotation__user=request.user)
    )
    .distinct()
)
```

It seems like the culprit and solution will likely be similar to #2725 / #2726, although presumably more complicated since we'll need to figure out how to handle duplicates. If we were writing raw SQL it would be pretty easy to use CTEs to select all five of those as individual queries and then combine the results.
Splitting this into five separate queries avoids the large join, which prevented us from using indexes and required materializing to disk. Fixes: bookwyrm-social#2157 (hopefully)
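A sketch of what that splitting could look like in the ORM: each relation filter runs as its own queryset and `union()` (which, like SQL `UNION`, removes duplicate rows by default) handles the deduplication. This is an illustration of the approach using the field names from the query above, not the actual code in #2741; the `books_for_export` function name is made up:

```python
# Illustrative sketch: run each relation filter as its own query and
# combine the results, instead of one large OR'd join with .distinct().
from bookwyrm import models


def books_for_export(user):
    editions = models.Edition.viewer_aware_objects(user)
    shelved = editions.filter(shelves__user=user)
    read = editions.filter(readthrough__user=user)
    reviewed = editions.filter(review__user=user)
    commented = editions.filter(comment__user=user)
    quoted = editions.filter(quotation__user=user)
    # union() emits UNION, which removes duplicate rows without needing
    # DISTINCT over one big joined result.
    return shelved.union(read, reviewed, commented, quoted)
```

One caveat with this approach: querysets combined with `union()` only support a limited set of further operations, so any per-relation filtering has to happen before the union.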
Just FYI, #2741 hasn't been deployed to bookwyrm.social yet, so CSV exports will still fail there until that's deployed — I'll comment in this issue when it is, so that anyone following this will get a notification.
This should now be fixed on bookwyrm.social! I was able to export my own CSV history, albeit a small one. Please open a new issue if you have any problems with the CSV export. @Strubbl @todrobbins FYI
Thank you for your work on this, @WesleyAC! I'll test a bookwyrm.social export right now.
@WesleyAC Hooray! Thank you for fixing this. Export from bookwyrm.social just worked smoothly (and fairly quickly) for me, and it looks like importing that to books.theunseen.city (which is on v0.6.0) is also working.
I can confirm it worked for me too, thank you so much!

On 04/04/2023 18:56, Tod Robbins wrote:
> Export worked perfectly and pretty quick (~3s/43KB/363 rows)
Describe the bug
When using the CSV data export, the resulting file is empty.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
CSV with data from the books
Screenshots
Instance
BookWyrm Social
Desktop (please complete the following information):
- OS: Windows 10
- Browser: Edge
- Version 103.0.1264.37 (Official build) (64-bit)