Remember attempted price list upload submissions #1491
We've had a number of failed price list upload submissions, but the only way we've really known about them (and had enough information to debug them) is when the submitter emails us and sends along their price list.
This is an experimental PR that attempts to record attempted price list upload submissions and remember various metadata about them (such as the number of valid/invalid rows), so that we can easily see what folks are trying to upload, what they most often have to fix up, and things like that.
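Concretely, the idea is to persist one record per attempted upload along with its summary metadata. A rough sketch of what such a model could look like is below--the model and field names here are hypothetical, not necessarily what this PR actually uses:

```python
# Hypothetical sketch of a model that records attempted price list
# upload submissions; all names are illustrative only.
from django.conf import settings
from django.db import models


class AttemptedPriceListSubmission(models.Model):
    submitter = models.ForeignKey(
        settings.AUTH_USER_MODEL,
        on_delete=models.CASCADE,
        related_name='attempted_price_list_submissions',
    )
    created_at = models.DateTimeField(auto_now_add=True)

    # The original uploaded spreadsheet, so the attempt can be replayed later.
    uploaded_file = models.FileField(upload_to='attempted_submissions/')

    # Summary metadata so we can see what folks struggle with most often.
    valid_row_count = models.IntegerField(default=0)
    invalid_row_count = models.IntegerField(default=0)

    def __str__(self):
        return f'Attempt by {self.submitter} at {self.created_at}'
```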
Following a link from the admin index page, users who are part of the new Technical Support Specialists group can choose from a list of attempted submissions:
When they choose one, they're taken to a custom detail page that provides more information and allows them to "replay" the submission:
Replaying the submission basically restores the original submitter's upload, so the replayer can step through the price list upload flow with the same data.
Also, while the user can submit the price list for review in replay mode, it probably isn't a good idea. There might be edge cases where it's useful to do so, however, so we'll simply display the following warning above the submit button:
Yeah, I think file system storage is no bueno for us. The app is running on two instances, so who knows which container the files will end up on. The containers also get restarted fairly often (almost nightly, it seems), so we could end up losing a lot of data and this whole feature wouldn't be very useful to us.
Good points @jseppi. I've implemented a really basic Django file storage backend that just saves in a DB model called
Thoughts @jseppi? In the worst case, we can switch to django-storages, it just felt like a big hassle after reading the instructions for their database backend.
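For reference, a bare-bones database-backed storage backend along those lines might look roughly like the following sketch--the class and model names are made up for illustration and this isn't necessarily the exact implementation in the PR:

```python
# Minimal sketch of a Django Storage subclass that keeps file contents
# in a database table instead of on the local filesystem.
from django.core.files.base import ContentFile
from django.core.files.storage import Storage
from django.db import models


class StoredFile(models.Model):
    """Hypothetical model holding file contents as binary data."""
    name = models.CharField(max_length=255, unique=True)
    contents = models.BinaryField()


class DatabaseStorage(Storage):
    def _open(self, name, mode='rb'):
        stored = StoredFile.objects.get(name=name)
        return ContentFile(stored.contents, name=name)

    def _save(self, name, content):
        StoredFile.objects.create(name=name, contents=content.read())
        return name

    def exists(self, name):
        return StoredFile.objects.filter(name=name).exists()

    def delete(self, name):
        StoredFile.objects.filter(name=name).delete()
```

A `FileField` could then opt into this via its `storage` argument, e.g. `FileField(storage=DatabaseStorage())`, without the rest of the code caring where the bytes actually live.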
Ok, I think this is good to go. The only question remaining is whether we should also add support for recording/replaying attempted submissions in the price list replace workflow in this PR, too. I think it will be straightforward, but will likely involve modifying the
@jseppi, one option is for you to add support for the other workflow as a way to familiarize yourself w/ this code... of course, we could just punt on that too, since it's not super likely that folks will be using it really soon.
A screenhero-ing might be good for reviewing this since a lot of it is quite outside of my Django knowledge. I'm primarily concerned so far with how the storage stuff works -- it looks like it gets uploaded to disk somewhere (?), but that could be bad since we are on multiple "servers."
Ah good idea.
It technically might get written to disk if it's bigger than around 2.5 MB, but so do our price list uploads themselves--that's just how Django deals with file uploads. However, the
Some thoughts I had while reading over all this:
- I love the functionality and think it will be super helpful
- What about using S3 for the storage backend? It would be straightforward to set this up on cloud.gov, though I think it will also have some ATO documentation burden since it's another "system" that we'd be going in and out of. We should check with @abisker.
- I think I recall you mentioning this somewhere, but it would be cool if we could somehow abstract the price list upload flow in a way that lets us extend it to add the logic for doing the replay (see the sketch below).
My biggest concern currently is understanding `step_3` from a development perspective.
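On the abstraction point above, one purely illustrative way to structure it is to have the upload flow pull its file from a pluggable "source", so a replay just swaps in a source backed by the stored submission instead of the live request. None of these names come from the PR:

```python
# Hypothetical sketch of decoupling the upload flow from where the
# uploaded file comes from, so a replay can reuse the same flow.
class RequestFileSource:
    """Pulls the uploaded file from the live HTTP request (normal flow)."""

    def __init__(self, request):
        self.request = request

    def get_file(self):
        return self.request.FILES['file']


class StoredSubmissionFileSource:
    """Pulls the file from a previously recorded submission (replay flow)."""

    def __init__(self, submission):
        self.submission = submission

    def get_file(self):
        return self.submission.uploaded_file


def process_upload(source):
    # The rest of the flow only ever talks to `source`, so the same
    # validation and row-counting code runs for live uploads and replays.
    uploaded_file = source.get_file()
    # ... validate rows, record metadata, render results, etc. ...
    return uploaded_file
```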
Oh, regarding using S3 for the storage backend, I think that's definitely a possibility, though we'd have to make sure that the S3 buckets aren't publicly accessible. Actually, given that they can't be publicly accessible, I'm not even sure how much of a performance benefit there'd be in using S3 over
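If we did go the S3 route, wiring it up through django-storages would probably look roughly like this in settings--exact setting names depend on the django-storages version, and the cloud.gov credential plumbing is omitted:

```python
# settings.py -- rough sketch of pointing Django's default file storage
# at a private S3 bucket via django-storages; values are placeholders.
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

AWS_STORAGE_BUCKET_NAME = 'calc-attempted-submissions'  # placeholder
AWS_DEFAULT_ACL = 'private'      # nothing should be publicly readable
AWS_QUERYSTRING_AUTH = True      # serve files via signed, expiring URLs
```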
Another thing I forgot to mention about the storage (and the model in general): the whole recording/playback system is intended to be a pretty transient thing, in this PR at least, meaning that it should be fine for us to delete everything every so often. This makes me fairly unconcerned about the exact storage mechanism we use, because we won't need to actually migrate the data from one backend to another if we decide to switch storages (doing that would probably be a big hassle). So I think it's OK if we just go with this slowpoke thing for now but switch to a different storage later if it turns out to suck, because the switch shouldn't be that painful.
(Famous last words though, maybe? I've never actually done anything like this before...)
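Since the records are meant to be transient anyway, the periodic cleanup could be as simple as a small management command along these lines (the command and model names are made up; the 90-day cutoff is just an example):

```python
# management/commands/purge_attempted_submissions.py -- hypothetical
# cleanup command that deletes recorded submissions older than 90 days.
from datetime import timedelta

from django.core.management.base import BaseCommand
from django.utils import timezone

from ...models import AttemptedPriceListSubmission  # hypothetical model


class Command(BaseCommand):
    help = 'Delete attempted price list submissions older than 90 days.'

    def handle(self, *args, **options):
        cutoff = timezone.now() - timedelta(days=90)
        deleted, _ = AttemptedPriceListSubmission.objects.filter(
            created_at__lt=cutoff
        ).delete()
        self.stdout.write(f'Deleted {deleted} old submission record(s).')
```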
This is looking really good
I like the new `Replayer` implementation -- helps a ton with readability.
Um, I think all my questions/concerns are in review comments. Let me know if anything I said was unclear/dumb :)