
A storage error doesn't prevent the creation of the File or Item records #88

Closed
zerocrates opened this issue Jul 19, 2011 · 9 comments

@zerocrates (Member)

If you create a new Item, choose some metadata, and choose a file to upload, we seem to get the following if there's some error when storing the file:

  • The Item record is created, but has none of its Element metadata
  • The File record is created
  • The underlying file itself is, obviously, not stored.

This is under synchronous job dispatching. I suspect that the Element metadata would be successfully saved under one of the "true" background processing handlers even if there were a storage problem, since the two are separate.

I'm unsure whether the whole File record is created as part of the upload job; if so, transaction support could solve this for us (though not the Item part of it, since those would be separate DB sessions).

Even under the current situation, it seems like we could defer saving the File record until the actual files have been stored.

(Stopping the Item creation for a file issue might not be necessary or desired, but the current synchronous behavior of resulting in an empty Item is no good.)
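The failure mode described above can be sketched as a toy model. This is Python pseudocode for illustration, not Omeka's actual PHP code; the function and dictionary names here (`create_item_current`, `store_file`, the `db` dict) are hypothetical stand-ins for the real Item/File save path and storage backend.

```python
class StorageError(Exception):
    pass


def store_file(path):
    """Stand-in for the storage backend; simulates it always failing."""
    raise StorageError("could not store " + path)


def create_item_current(db, metadata, path):
    """Current (broken) order: Item row, File row, storage, element texts."""
    db["items"].append({"metadata": None})   # Item saved without its metadata
    db["files"].append({"path": path})       # File record saved up front
    store_file(path)                         # raises here
    db["items"][-1]["metadata"] = metadata   # never reached


def create_item_deferred(db, metadata, path):
    """Proposed order: metadata first, File record only after storage works."""
    db["items"].append({"metadata": metadata})
    try:
        store_file(path)
    except StorageError:
        return                               # Item kept intact, no orphan File
    db["files"].append({"path": path})


broken = {"items": [], "files": []}
try:
    create_item_current(broken, {"Title": "x"}, "a.jpg")
except StorageError:
    pass
# broken now holds an empty Item plus an orphaned File record

fixed = {"items": [], "files": []}
create_item_deferred(fixed, {"Title": "x"}, "a.jpg")
# fixed holds a complete Item and no File record at all
```

The second function is the "defer saving the File record" idea: nothing about the File hits the database until the bytes are actually stored.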

@zerocrates (Member, Author)

The File record appears to be saved before we kick off the background job, and the data there is used by the job.

Not sure if it's feasible to shift the File creation entirely into the job and just pass the appropriate data, but that would be one solution.

The job could also delete the File record upon an error, but that's a little more fragile than simply not saving it at all until everything's a-ok.

@kriskelly (Contributor)

The most robust solution for this would be to add transaction support, which is itself blocked by the lack of a baked-in search alternative to MyISAM fulltext search.

Storage exceptions are thrown in File::afterInsert(), so wrapping that part of Omeka_Record::save() in a try/catch to delete the record might also work, though seems hackish.
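The "hackish" try/catch idea above can be modeled like this. This is a toy Python sketch, not Omeka's PHP `Omeka_Record` implementation; `Record`, `rows`, and the method names are hypothetical stand-ins for the real save/afterInsert/delete machinery.

```python
class StorageError(Exception):
    pass


class Record:
    """Toy stand-in for Omeka_Record. `rows` pretends to be the DB table."""
    rows = []

    def save(self):
        Record.rows.append(self)   # the INSERT itself succeeds
        try:
            self.after_insert()    # hook where File would store the real file
        except StorageError:
            self.delete()          # undo the INSERT by hand
            raise

    def after_insert(self):
        pass

    def delete(self):
        Record.rows.remove(self)


class FileRecord(Record):
    def after_insert(self):
        # Simulate the storage backend failing inside afterInsert.
        raise StorageError("backend unavailable")


f = FileRecord()
try:
    f.save()
except StorageError:
    pass
# Record.rows is empty: the failed File row was cleaned up
```

It works, but as noted it's a manual rollback bolted onto `save()`, which is exactly what real transaction support would make unnecessary.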

@kriskelly (Contributor)

Moving all file processing into a job could also work, though that will take some tinkering.

@zerocrates (Member, Author)

> The most robust solution for this would be to add transaction support, which is itself blocked by the lack of a baked-in search alternative to MyISAM fulltext search.
>
> Storage exceptions are thrown in File::afterInsert(), so wrapping that part of Omeka_Record::save() in a try/catch to delete the record might also work, though seems hackish.

Both transaction support and a try/catch on afterInsert wouldn't help us when the exception is being thrown in a different process, right?

Those kinds of things could help out in the currently-bad synchronous case, but there might be some trivial reordering that could be done to bring synchronous behavior into line with the better, but still bad, current background behavior.

@kriskelly (Contributor)

Oh yeah, you're right. Probably best to try/catch storage exceptions in the job and delete the file if necessary.

In this case, we're not worried about saving items, are we? The item doesn't depend on the file, so it wouldn't make sense to delete it if the file failed to upload.
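That job-side cleanup can be sketched as follows. Again this is a hypothetical Python model, not Omeka's real job code; `storage_store`, `file_process_job`, and the `db` layout are illustrative names only.

```python
class StorageError(Exception):
    pass


def storage_store(path):
    """Stand-in for the real storage adapter; always fails here."""
    raise StorageError("backend down")


def file_process_job(db, file_id, path):
    """Toy background job: on a storage error, delete the File record
    rather than leaving an orphan. The Item is deliberately untouched,
    since it doesn't depend on the file."""
    try:
        storage_store(path)
    except StorageError:
        db["files"].pop(file_id, None)


db = {"items": {1: {"title": "An item"}},
      "files": {1: {"path": "a.jpg"}}}
file_process_job(db, 1, "a.jpg")
# db["files"] is now empty; db["items"] still has the Item
```

The key design point from the discussion: the catch block cleans up only the File record, never the Item.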


@zerocrates (Member, Author)

On 07/19/2011 03:21 PM, kbkelly wrote:

> In this case, we're not worried about saving items, are we? The item doesn't depend on the file, so it wouldn't make sense to delete it if the file failed to upload.

I think that's right. The only snag is that, thanks to the order things get saved in, when there's a storage error and you're using the synchronous dispatcher, the Item record gets saved but the element texts never do, so you end up with an incomplete Item. I think this is fixable just by making sure that the Item saving process does its thing before we kick off the File job.

@kriskelly (Contributor)

Sounds good to me.

@zerocrates (Member, Author)

So, the tricky part about reordering the save order is that element texts get saved by ActsAsElementText, while the files are saved by Item::afterSave, and Omeka_Record always runs the record's own callback first.
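That callback-ordering snag can be demonstrated with a toy model. This is illustrative Python, not Omeka's PHP; `Record`, `ElementTextMixin`, and `Item` are hypothetical stand-ins for `Omeka_Record`, `ActsAsElementText`, and `Item`.

```python
class Record:
    """Toy model of Omeka_Record's callback order: the record's own
    afterSave runs before any mixin's."""

    def __init__(self):
        self.mixins = []
        self.log = []

    def after_save(self):
        pass

    def save(self):
        self.after_save()             # the record's own callback first...
        for mixin in self.mixins:
            mixin.after_save(self)    # ...then the mixins' callbacks


class ElementTextMixin:
    """Stand-in for ActsAsElementText: persists the element texts."""

    def after_save(self, record):
        record.log.append("element texts saved")


class Item(Record):
    def after_save(self):
        # Stand-in for Item::afterSave, where the file job is dispatched.
        self.log.append("file job dispatched")


item = Item()
item.mixins.append(ElementTextMixin())
item.save()
# item.log == ["file job dispatched", "element texts saved"]:
# the file job fires before the element texts exist in the DB.
```

Because the dispatch order is baked into the base class, "just reorder the saves" would mean either moving the file handling out of `Item::afterSave` or changing when the mixin callbacks run.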

zerocrates added a commit that referenced this issue Jul 20, 2011
zerocrates added a commit that referenced this issue Sep 26, 2011
Fixes part of #88.
(cherry picked from commit 7c8df6a)
@patrickmj (Contributor)

Should probably keep the item in place so we don't lose work.
