
CassandraRepository.save(…) fails silently [DATACASS-445] #598

Closed
spring-projects-issues opened this issue May 22, 2017 · 5 comments
Labels: type: bug (A general bug)

Philipp Langer opened DATACASS-445 and commented

Hi folks. I think the implementation of save(Object) has a bug as of version 1.5.2 of Spring Data Cassandra.

Pre-Condition
An entity class with one or more fields annotated as the primary key and one or more non-primary-key fields annotated as regular columns.

import org.springframework.data.cassandra.mapping.Column;
import org.springframework.data.cassandra.mapping.PrimaryKey;
import org.springframework.data.cassandra.mapping.Table;

@Table("person")
public class Person {

    @PrimaryKey("id")
    private String id;

    @Column("name")
    private String name;

    // getters and setters omitted
}

Scenario

  1. Create new object of the entity class described above.
  2. Set values for the primary key field(s) only.
  3. Pass the object to save(Object) of your CRUD Spring Data repository for this entity:
Person p = new Person();
p.setId("id1");

personRepository.save(p);
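
For reference, the repository in this scenario might be declared as follows (the interface name and ID type are illustrative assumptions, not taken from the report):

import org.springframework.data.repository.CrudRepository;

// Hypothetical CRUD repository for the Person entity above.
public interface PersonRepository extends CrudRepository<Person, String> {
}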

Scenario Results

  • The save method "succeeds" and the entity object is returned,
  • but no row is created in the corresponding CQL table.

That's because an UPDATE statement was performed (since primary key values are set), e.g. UPDATE person SET name = null WHERE id = 'id1';

Usually, an UPDATE statement will create a new row if it does not exist (updates in Cassandra are upserts). However, if all non-primary-key columns are set to null, no row is created.
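
This behavior can be reproduced with a minimal sketch against the DataStax Java driver 3.x (the contact point, keyspace name, and a person table matching the entity above are assumptions):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class UpsertDemo {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("demo")) {

            // Setting a non-key column upserts: the row 'id2' is created.
            session.execute("UPDATE person SET name = 'Phil' WHERE id = 'id2'");

            // Setting only null non-key columns does not create the row 'id1'.
            session.execute("UPDATE person SET name = null WHERE id = 'id1'");

            // Prints a single row: 'id2'.
            System.out.println(session.execute("SELECT id FROM person").all());
        }
    }
}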

Expected Behaviour
Since the entity object is returned by save(Object) and no exception was thrown, I would expect that a CQL row was created or updated (i.e., that the entity object was saved to Cassandra).

Please let me know if that's the intended behaviour or if you consider this a bug as well.

Thank you for your efforts.


Affects: 1.5.2 (Ingalls SR2)

Backported to: 1.5.4 (Ingalls SR4)


Mark Paluch commented

Thanks, Philipp Langer, for filing the ticket. The behavior you describe is a side effect of a change to null value handling (see DATACASS-420).

We can't reliably determine whether an object is new when persisting it. Initially we used INSERT and included null values, which created tombstones, so we changed the behavior: we now filter null-valued fields from an INSERT, and we use INSERT only if the entity consists solely of primary key attributes. In every other case, we use UPDATE and include null-valued fields in the statement.
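
To illustrate, here is a simplified, self-contained sketch of that selection rule (the Column type and helper below are hypothetical stand-ins, not Spring Data Cassandra internals):

import java.util.Arrays;
import java.util.List;

public class StatementChoiceSketch {

    static class Column {
        final String name;
        final boolean primaryKey;

        Column(String name, boolean primaryKey) {
            this.name = name;
            this.primaryKey = primaryKey;
        }
    }

    // INSERT (with null-valued fields filtered) only when the entity declares
    // nothing but primary key columns; otherwise UPDATE with nulls included.
    static String statementTypeFor(List<Column> declaredColumns) {
        boolean onlyPrimaryKey = declaredColumns.stream().allMatch(c -> c.primaryKey);
        return onlyPrimaryKey ? "INSERT" : "UPDATE";
    }

    public static void main(String[] args) {
        // Person declares the non-key column 'name', so save(...) issues an
        // UPDATE that includes name = null and therefore creates no row.
        System.out.println(statementTypeFor(Arrays.asList(
                new Column("id", true), new Column("name", false)))); // UPDATE
    }
}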

From what I see now, always using INSERT and including null-valued fields in the statement makes the most sense and causes the least trouble. To honor DATACASS-420, null value handling would become configurable (whether to include null values in INSERT/UPDATE statements) at the CassandraConverter level.

Does this make sense?


Philipp Langer commented

Hi @Mark Paluch and thanks for your acknowledgement.

From what I see now, always using INSERT and including null-valued fields in the statement makes the most sense and causes the least trouble.

Yes, regarding save(Object) I think this is true, although avoiding tombstones is very desirable and was the reason I upgraded to 1.5.2.

To honor DATACASS-420, null value handling would become configurable (whether to include null values in INSERT/UPDATE statements) at the CassandraConverter level.

I'm not sure I got this right. Are we talking about read/write converters here? I guess that means it's not configurable on a per-query basis.

What if save(Object) always used INSERT and included null-valued fields, whereas the newly introduced insert(Object) always used INSERT and filtered out null values?


Mark Paluch commented

So we will change save(…) to always INSERT, including null values. From a repository perspective, you expect to retrieve the same object you stored, without necessarily obeying every store optimization.

insert(…) is a store-specific extension that applies the null optimization by skipping null values.
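
Assuming the hypothetical PersonRepository sketched earlier, and assuming insert(…) is exposed on the repository (availability depends on the version in use), the resulting contract would look like this:

Person p = new Person();
p.setId("id1");               // name stays null

personRepository.save(p);     // INSERT including name = null: the row is created,
                              // at the cost of a tombstone for name

personRepository.insert(p);   // INSERT with null values skipped: the row is created
                              // from the key alone, no tombstone for name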


Mark Paluch commented

That's fixed now. Care to give the snapshot artifact a try?

<dependency>
	<groupId>org.springframework.data</groupId>
	<artifactId>spring-data-cassandra</artifactId>
	<version>1.5.4.BUILD-SNAPSHOT</version>
</dependency>


Philipp Langer commented

Hi. I can confirm that this bug is fixed in version 1.5.4.BUILD-SNAPSHOT of Spring Data Cassandra.

Thanks a lot. Good job!
