
Cannot PUT versioned entity returned by spring-data-rest [DATAREST-705] #1074

Closed
spring-projects-issues opened this issue Nov 13, 2015 · 5 comments
Labels: type: bug

spring-projects-issues commented Nov 13, 2015

Anton Koscejev opened DATAREST-705 and commented

I would expect it to be possible to execute a PUT by simply supplying all fields returned by GET. However, this is currently broken as a consequence of DATAREST-160.

Now, when an entity is exposed via the REST API (either as part of a list or directly), it is returned without its version property. However, when a PUT is performed, the version property can be required, since the whole entity is persisted (as opposed to PATCH, where only changed fields need to be supplied). This is true, for example, for JPA, which checks the version and refuses the update if the version is missing.

This means that the REST API client is expected to send a property that it never received. In fact, the client might not even be aware that such a property exists, what it is called, or what value it should contain.
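The failure mode described above can be sketched without any framework code. The following is a minimal plain-Java sketch of the optimistic-lock check that JPA performs on a full update, as described in this report; it is an illustration, not Spring Data REST or JPA source, and all names in it are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class VersionCheckSketch {

    static class Entity {
        final long id;
        Long version;   // null models "the client never received the version"
        String name;

        Entity(long id, Long version, String name) {
            this.id = id;
            this.version = version;
            this.name = name;
        }
    }

    static final Map<Long, Entity> db = new HashMap<>();

    // Mimics the optimistic-lock check on a full update:
    // a missing or stale version is rejected outright.
    static Entity put(Entity incoming) {
        Entity current = db.get(incoming.id);
        if (current != null
                && (incoming.version == null || !incoming.version.equals(current.version))) {
            throw new IllegalStateException("optimistic lock failure");
        }
        long next = incoming.version == null ? 0L : incoming.version + 1;
        Entity saved = new Entity(incoming.id, next, incoming.name);
        db.put(saved.id, saved);
        return saved;
    }
}
```

Because GET never exposed the version, a client that faithfully replays the document it received sends `version = null` and hits exactly the failure path above.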


Affects: 2.4 GA (Gosling)

Referenced from: commits 6068a17, 2d1fe6f, 8329fcf

Backported to: 2.4.2 (Gosling SR2), 2.3.3 (Fowler SR3)

spring-projects-issues commented Nov 13, 2015

Anton Koscejev commented

It seems that, to be consistent, Spring Data REST should either:

  1. populate the version property from the value of the ETag on incoming entities, or
  2. return the version property when a GET is performed (perhaps not for lists, but at least for single entities).

Also, if the first option is chosen, it seems redundant to load the entity from the DB and merge the incoming entity with the existing one if the module (JPA) will check the version anyway. Perhaps each module could specify whether it checks the version itself, so that Spring Data REST doesn't have to handle the ETag directly.
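Option 1 amounts to reversing the ETag rendering: Spring Data REST derives the ETag from the version property as a quoted string. The helper below is a hypothetical illustration of that reverse mapping (including the `W/` weak-validator prefix from the HTTP spec), not an existing Spring Data REST API:

```java
public class EtagVersion {

    // Recover a numeric version from an ETag header value such as "3" or W/"3".
    // Returns null when the value does not parse, so callers can fall back to
    // the load-and-merge path.
    static Long versionFrom(String etag) {
        if (etag == null) {
            return null;
        }
        String trimmed = etag.trim();
        if (trimmed.startsWith("W/")) {
            trimmed = trimmed.substring(2); // strip the weak-validator prefix
        }
        if (trimmed.length() >= 2 && trimmed.startsWith("\"") && trimmed.endsWith("\"")) {
            trimmed = trimmed.substring(1, trimmed.length() - 1);
        }
        try {
            return Long.valueOf(trimmed);
        } catch (NumberFormatException e) {
            return null;
        }
    }
}
```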

spring-projects-issues commented Nov 20, 2015

Oliver Drotbohm commented

This should be fixed. I explicitly removed the wiping of version and identifier properties from DomainObjectReader to make sure we preserve the server-side values. For PUT scenarios this now gives the following situation:

  1. PUTting the document as received will not wipe the version property; it thus uses the value loaded from the server and only fails (causes a conflict) if someone else updates the same entity between the load, the application of the sent payload, and the save. All other requests will strictly write the sent payload.
  2. Conditional PUTs can be issued by adding the If-Match header, using the ETag header received in the original response. It is evaluated independently of the payload merge and will return 412 Precondition Failed as defined by the HTTP spec.

Does that make sense?
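The conditional-PUT decision in point 2 can be sketched as a plain-Java predicate. This is an illustration of the HTTP semantics (RFC 7232), not Spring MVC code:

```java
public class ConditionalPut {

    // Decide the status of a PUT given the If-Match header and the entity's
    // current ETag. No If-Match means an unconditional PUT; "*" matches any
    // current representation; a mismatch yields 412 Precondition Failed.
    static int handlePut(String ifMatch, String currentEtag) {
        if (ifMatch != null && !"*".equals(ifMatch) && !ifMatch.equals(currentEtag)) {
            return 412; // someone else updated the entity in between
        }
        return 200; // proceed with the merge-and-save
    }
}
```

A client that stores the ETag from its GET response and replays it in If-Match gets the conflict detection described above without ever seeing the version property.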

spring-projects-issues commented Nov 20, 2015

Anton Koscejev commented

That would solve the original issue, yes.

What do you think about the fact that an unnecessary DB operation is performed when the module (JPA) supports optimistic locking natively and both the ID (in the URL) and the version (in the ETag) are provided? In this specific scenario it would be more efficient to simply:

  1. deserialize the incoming entity,
  2. apply the ID from the URL (failing if the deserialized entity already specifies a different ID),
  3. apply the version from the ETag (failing if it differs), and
  4. persist.

Perhaps this could be an additional feature? "Increase performance by delegating optimistic locking to the underlying module."
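The four steps above could be sketched as follows. Every name here is hypothetical, and the actual persist, where the backing module's own optimistic lock would fire, is left abstract:

```java
public class DelegatedPut {

    static class Payload {
        Long id;
        Long version;
        String name;
    }

    // Steps 2-4 of the proposed flow (step 1, deserialization, is assumed
    // done). Note there is no load-and-merge round trip to the database.
    static Payload apply(long urlId, String ifMatchEtag, Payload incoming) {
        // 2. apply the ID from the URL
        if (incoming.id != null && incoming.id != urlId) {
            throw new IllegalArgumentException("ID in payload conflicts with URL");
        }
        incoming.id = urlId;
        // 3. apply the version from the ETag
        Long etagVersion = Long.valueOf(ifMatchEtag.replace("\"", ""));
        if (incoming.version != null && !incoming.version.equals(etagVersion)) {
            throw new IllegalArgumentException("version in payload conflicts with ETag");
        }
        incoming.version = etagVersion;
        // 4. hand off to persist; the module's optimistic lock fires there if stale
        return incoming;
    }
}
```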

spring-projects-issues commented Nov 21, 2015

Oliver Drotbohm commented

Interesting thoughts. I fear this approach would cause quite a bit of trouble, or would only be suitable in narrow edge cases, due to the following problems:

  1. The resource representation you see doesn't have to be a 1:1 representation of the underlying entity. Just imagine one property being @JsonIgnore-ed: the client would never see it. If we applied the suggested algorithm, the PUT request would wipe out the ignored properties.
  2. I think it's an anti-pattern to delay the recognition of a potential optimistic-locking exception, as it basically means you hand object instances through business logic (e.g. Spring Data REST pre-save events) even though you could have recognized up front that it never needed to fire, only to eventually run into the database layer rejecting the update.

To summarize: by default we're optimizing for correctness instead of backend update performance. You can always go ahead and simply override the PUT method for a particular item resource if performance is really bad, and use more immediate means (e.g. a dedicated repository method that optimizes the call).
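The first point can be illustrated with plain maps standing in for the serialized entity: a field the client never sees (here called "secret", standing in for a @JsonIgnore-ed property) survives a load-and-merge, but would be lost if the payload simply replaced the stored state as the delegated algorithm implies. This is a sketch of the semantics, not Spring Data REST code:

```java
import java.util.HashMap;
import java.util.Map;

public class MergeVsReplace {

    // Load-and-merge: only the fields the client actually sent overwrite
    // the loaded state, so a field hidden from the client survives.
    static Map<String, Object> merge(Map<String, Object> loaded, Map<String, Object> payload) {
        Map<String, Object> result = new HashMap<>(loaded);
        result.putAll(payload);
        return result;
    }
}
```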

spring-projects-issues commented Nov 21, 2015

Anton Koscejev commented

Regarding @JsonIgnore: I see that you're basing additional features on top of this, so yes, the increased performance would come at the cost of deliberately avoiding those features. To be honest, until you said this, I didn't realize spring-data-rest actually allows some fields to be ignored and not wiped on update. It's an interesting feature. Losing this (and potentially other) features sounds like a big drawback, but at the same time I can see it being helpful to skip the check, because sometimes increased performance is more important than secondary features.

I see your point about delaying the recognition, but this is not so clear-cut. It is not just about rearranging things into the correct order so you can skip the steps that are more likely to fail. The whole idea of optimistic locking is that you don't expect to hit the lock often (hence "optimistic"). You expect most operations to succeed, and you want the increased performance that comes with an optimistic lock (as opposed to a pessimistic one). Adding an additional check to every operation is exactly the opposite of what one would expect from optimistic locking. In other words, adding that check means you're being pessimistic. :)

As for the business-logic point (e.g., pre-save events): the additional check still doesn't prevent the situation where the underlying module rejects the update. It can still happen that the business-logic events run through and the update then fails. So the system becomes less consistent/predictable, in exchange for the performance you might gain by sometimes skipping business logic. But while the drawback (inconsistency) is real, the benefit is questionable: as I tried to explain above, the idea of an optimistic lock is to let "good" operations pass through with as few checks as possible (i.e. as fast as possible), at the cost of perhaps losing some performance when "bad" operations occur. If we were properly pessimistic in the first place and locked the record, a "bad" operation would have been prevented sooner, but that would be a pessimistic lock.

Basically, I'm not saying this has to be fixed. I'm saying this implementation doesn't follow the core idea of optimistic locking as strictly as it could. Indeed, if someone wanted increased performance, they could create a workaround. The question is: should it really be a workaround or could there be a feature specifically for this?

Anyhow, thanks for fixing this bug, we're looking forward to the next version! ;)
