Support one-to-many mappings in sqlobject #996
First off, you can always extend Jdbi yourself. Notwithstanding, I agree this is a shortcoming, and one I'd love to see solved. Be warned: any proposed solution would have to be pretty generic and adaptable to different usage patterns for us to adopt it into the project. This is a whole can of worms we have to think about:
Ideally we'd want to structure this in a way that promotes reuse of registered mappers without having to reinvent the wheel for every possible corner case. I think a productive place to start is to ask the question: what annotations would you like to exist, how would they look on a SQL Object interface, and how would those annotations be applied in the mapping logic behind the scenes? |
We also need a way to tell Jdbi which columns uniquely identify the master record, and which identify each child record within each row, in order to identify and coalesce duplicates. And how should we deal with deeply nested collections? e.g.

```java
class A {
    List<B> b;
}
class B {
    List<C> c;
}
class C {
    ...
}
```
|
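To make the coalescing question concrete, here is a self-contained sketch (plain Java, no Jdbi; the `JoinRow` type and column names are invented for illustration) that deduplicates a flattened join result by the master key and groups child rows under each master:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical flattened join row: contact columns plus (nullable) phone columns.
class JoinRow {
    final long contactId;
    final String contactName;
    final Long phoneId;     // null when the left join matched no phone
    final String phone;

    JoinRow(long contactId, String contactName, Long phoneId, String phone) {
        this.contactId = contactId;
        this.contactName = contactName;
        this.phoneId = phoneId;
        this.phone = phone;
    }
}

class CoalescedContact {
    final long id;
    final String name;
    final List<String> phones = new ArrayList<>();

    CoalescedContact(long id, String name) {
        this.id = id;
        this.name = name;
    }
}

class Coalescer {
    // contactId identifies the master record; phoneId identifies each child
    // within that master's rows. Duplicate master rows are coalesced into one.
    static List<CoalescedContact> coalesce(List<JoinRow> rows) {
        Map<Long, CoalescedContact> byId = new LinkedHashMap<>();
        for (JoinRow row : rows) {
            CoalescedContact c = byId.computeIfAbsent(row.contactId,
                id -> new CoalescedContact(id, row.contactName));
            if (row.phoneId != null) {
                c.phones.add(row.phone);
            }
        }
        return new ArrayList<>(byId.values());
    }
}
```

The `LinkedHashMap` keyed by the master id is the essential trick; every design discussed later in the thread is a generalization of this loop.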
While a general approach might be more performant, I think that for most cases it would be sufficient to be able to register a reducer that calls reduceRows and returns the result. The annotations would look something like this:
With the row reducer looking something like this:
Would that make sense? |
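A compilable toy version of the registered-reducer idea, with every name here (`@UseReducer`, `ContactPhoneReducer`) invented for the example rather than taken from Jdbi, might look like:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation naming a reducer class for a query method.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface UseReducer {
    Class<?> value();
}

// Placeholder reducer type; a real one would implement the reducer
// interface discussed below in this thread.
class ContactPhoneReducer {}

interface AnnotatedContactDao {
    // The SQL Object plumbing would look up the reducer from the
    // annotation and run it over the query's rows.
    @UseReducer(ContactPhoneReducer.class)
    java.util.List<String> getAll();
}
```

The point is only the wiring: the annotation carries a class literal, and the framework instantiates it reflectively at binding time.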
What if, instead of a generic mapper, we had:

```java
interface RowProcessor<A, R> {
    A getSeed();
    A apply(A accumulator, RowView rowView);
    Stream<R> getResult(A accumulator);
}
```

Example implementation:

```java
class ContactPhoneRowReducer implements RowProcessor<Map<Long, Contact>, Contact> {
    public Map<Long, Contact> getSeed() {
        return new LinkedHashMap<Long, Contact>();
    }

    public Map<Long, Contact> apply(Map<Long, Contact> accumulator, RowView rowView) {
        accumulator.computeIfAbsent(rowView.getColumn("c_id", long.class),
                id -> rowView.getRow(Contact.class))
            .getPhones()
            .add(rowView.getRow(Phone.class));
        return accumulator;
    }

    public Stream<Contact> getResult(Map<Long, Contact> acc) {
        return acc.values().stream();
    }
}
```

We could provide an abstract implementation based on a `Map` accumulator:

```java
class ContactPhoneRowReducer extends MapAccumulatorRowReducer<Long, Contact> {
    public void accept(Map<Long, Contact> accumulator, RowView rowView) {
        accumulator.computeIfAbsent(rowView.getColumn("c_id", Long.class),
                id -> rowView.getRow(Contact.class))
            .getPhones()
            .add(rowView.getRow(Phone.class));
    }
}
```

In the SQL Object interface, we'd need an annotation that tells Jdbi that this method's result is reduced from the query's rows:

```java
public interface ContactDao {
    @SqlQuery("select contacts.id c_id, name c_name, "
        + "phones.id p_id, type p_type, phones.phone p_phone "
        + "from contacts left join phones on contacts.id = phones.contact_id "
        + "order by c_name, p_type")
    @ReduceRows(ContactPhoneRowReducer.class)
    List<Contact> getAll();

    @SqlQuery("select contacts.id c_id, name c_name, "
        + "phones.id p_id, type p_type, phones.phone p_phone "
        + "from contacts left join phones on contacts.id = phones.contact_id "
        + "where contacts.id = :id "
        + "order by p_type")
    @ReduceRows(ContactPhoneRowReducer.class)
    Optional<Contact> getOne(Long id);
}
```

This annotation would apply to methods with query annotations such as `@SqlQuery`.
Since the reducer produces a `Stream`, I could see adding `default <T> Stream<T> reduceRows(RowReducer<?, T> reducer)`. @jdbi/contributors What do you think? |
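As a self-contained sketch of how such a default method could drive the reducer (plain Java, with a `List<String>` standing in for the result set and `String` for `RowView`; not Jdbi code):

```java
import java.util.List;
import java.util.stream.Stream;

// Simplified stand-in for the reducer interface proposed above.
interface RowProcessor<A, R> {
    A getSeed();
    A apply(A accumulator, String rowView);   // String stands in for RowView
    Stream<R> getResult(A accumulator);
}

// Stand-in for ResultBearing: one abstract method yielding the rows,
// plus the proposed default reduceRows.
interface FakeResultBearing {
    List<String> rows();

    // Feed every row through the reducer, then stream the accumulated result.
    default <A, T> Stream<T> reduceRows(RowProcessor<A, T> reducer) {
        A acc = reducer.getSeed();
        for (String row : rows()) {
            acc = reducer.apply(acc, row);
        }
        return reducer.getResult(acc);
    }
}
```

Because `reduceRows` is a default method, any result-bearing type picks it up for free; only row iteration is abstract.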
I'm doing a spike on this, so far so good 🤞 |
Can the existing `Collector` interface be used for this instead? |
I looked into that; there are strong similarities, but collectors are usually built with lambdas rather than named classes an annotation could reference. |
I don't think "it's usually done with lambdas" is a sufficient reason to re-invent an interface, assuming it actually would end up working out. |
This may be the tail wagging the dog, but one reason I favor our own interface over `Collector` is that the annotation's generic bounds let the compiler check the collector's type parameters:

```java
@interface UseRowReducer {
    Class<? extends Collector<RowView, ?, Stream<?>>> value();
}

interface Good extends Collector<RowView, Map<Long, Stream>, Stream<String>> {}
interface BadInput extends Collector<String, Map<Long, Stream>, Stream<String>> {}
interface BadResult extends Collector<RowView, Map<Long, Stream>, List<String>> {}
interface Pointless extends Collector<RowView, String, Stream<?>> {}

interface TestAnnotations {
    @UseRowReducer(Good.class)
    void good();

    @UseRowReducer(BadInput.class)
    void badInput();

    @UseRowReducer(BadResult.class)
    void badResult();

    @UseRowReducer(Pointless.class)
    void pointless();
}
```

Out of the above method annotations, only `good()` should compile; the bounds on `value()` let the compiler reject the rest. |
JdbiCollectors are pretty much undocumented (http://jdbi.org/#JdbiCollectors is referenced in the documentation, but the anchor does not exist), and it is not clear how they integrate with SqlObject, so they are kind of hard to use right now. OTOH it is clear that there is a lot of overlap between Collectors and RowReducers. I think what I would like the most is a new method on ResultBearing that exposes the rows for streaming, along with some way of using it from SqlObject, as that would allow me to combine the mapping capabilities of the RowView with the full flexibility of the filtering, mapping and collecting built into the Streams API. |
I'm concerned that if we provide a stream of row views directly, users will get burned: the view reflects the current result set row, so it is only valid while the cursor is positioned on that row. |
In fact, any stateful Stream operation (such as `sorted()` or `distinct()`), which buffers elements past the row they belong to, would break. |
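The hazard being described, that a view over the current row is invalidated once the cursor moves, can be shown without Jdbi at all (the mutable `CursorView` stand-in below is invented for the demonstration):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a view object that always reflects the cursor's current row,
// the way a view over a live ResultSet would.
class CursorView {
    String currentValue;
}

class StatefulOpPitfall {
    // Hands out the SAME mutable view for every row, as a cursor-backed
    // Stream<RowView> would. Buffering the view (what any stateful stream
    // operation does internally) therefore captures only the final row.
    static List<String> bufferThenRead(List<String> rows) {
        CursorView view = new CursorView();
        List<CursorView> buffered = new ArrayList<>();
        for (String row : rows) {
            view.currentValue = row;  // advance the cursor
            buffered.add(view);       // stateful: holds the view past its row
        }
        List<String> out = new ArrayList<>();
        for (CursorView v : buffered) {
            out.add(v.currentValue);  // too late: every view shows the last row
        }
        return out;
    }
}
```

This is why the reducer designs in this thread consume each row eagerly inside an accumulator callback instead of exposing the raw views as a stream.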
And yeah, we really need to document Collectors. I'll try to do that this week. But it should mostly be a "read the Collectors documentation, and here's a couple of examples" section -- Jdbi does not really introduce new concepts on top of it, only exposes it through the API. |
What if we used Collector in core, and RowReducer was a SQL Object-only type (to make annotation-driven reducers easy to express), and provided a method to convert them to collectors?

```java
interface ResultBearing {
    <A, R> R collectRows(Collector<RowView, A, R> collector);
}

public interface RowReducer<A, R> {
    A supplyAccumulator();

    void accumulateRow(A accumulator, RowView rowView);

    Stream<R> toStream(A accumulator);

    // but with better method names :)
    default Collector<RowView, A, Stream<R>> collector() {
        return Collector.of(
            this::supplyAccumulator,
            this::accumulateRow,
            (a, b) -> {
                throw new UnsupportedOperationException("RowReducer does not support parallel streams");
            },
            this::toStream);
    }
}
```
|
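A quick self-contained check of the bridge idea (plain Java; `String` stands in for Jdbi's `RowView`, and the toy `UpperCaseReducer` is invented for the demo) showing that the default `collector()` works when handed to `Stream.collect`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collector;
import java.util.stream.Stream;

// Same shape as the proposed interface, minus Jdbi's RowView.
interface RowReducer<A, R> {
    A supplyAccumulator();
    void accumulateRow(A accumulator, String rowView);
    Stream<R> toStream(A accumulator);

    // Bridge: express this reducer as a sequential-only Collector.
    default Collector<String, A, Stream<R>> collector() {
        return Collector.of(
            this::supplyAccumulator,
            this::accumulateRow,
            (a, b) -> {
                throw new UnsupportedOperationException("RowReducer does not support parallel streams");
            },
            this::toStream);
    }
}

// Toy reducer: upper-cases each row, preserving encounter order.
class UpperCaseReducer implements RowReducer<List<String>, String> {
    public List<String> supplyAccumulator() { return new ArrayList<>(); }
    public void accumulateRow(List<String> acc, String row) { acc.add(row.toUpperCase()); }
    public Stream<String> toStream(List<String> acc) { return acc.stream(); }
}
```

The throwing combiner is safe for sequential collection because `Collector` only invokes the combiner when the stream is split for parallel execution.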
That definitely sounds more appealing to me! I am just trying to be really defensive on expanding an already large API, thanks for humoring :)
|
I'm definitely open for suggestions on the naming. |
How about a name reflecting that the type really operates over all results, not a single row? |
Wrt. the method names, I think it would be wise to draw on the naming from the Streams API in order to foster recognition. |
I keep going back and forth though on whether these belong in core or only in SQL Object:

```java
<A, R> R collectRows(Collector<RowView, A, R> collector);
<A, R> Stream<R> reduceRows(RowReducer<A, R> reducer);
```

I lean toward core, partly so the same reduce mechanism may be used in either core or SQL Object (consistency, reuse), and partly because the fluent call site reads well:

```java
List<Contact> result = dbRule.getSharedHandle()
    .createQuery("SELECT ... FROM contacts LEFT JOIN phones on ...")
    .reduceRows(LinkedHashMapRowReducer.<Integer, Contact>of((map, rv) -> {
        Contact contact = map.computeIfAbsent(
            rv.getColumn("contact_id", Integer.class),
            id -> rv.getRow(Contact.class));
        if (rv.getColumn("phone_id", Integer.class) != null) {
            contact.getPhones().add(rv.getRow(Phone.class));
        }
    })) // returns Stream<Contact>
    .collect(toList());
```
|
Should we support `@UseRowReducer` on `@SqlBatch` and `@SqlUpdate` methods as well? |
* RowReducer<A, R> interface.
* LinkedHashMapRowReducer abstract implementation.
* ResultBearing.collectRows(Collector<RowView,A,Stream<R>>)
* ResultBearing.reduceRows(RowReducer<A,R>)
* @UseRowReducer annotation.
* Refactor tests to demonstrate usage

TODO:

* Decide whether to support @UseRowReducer with @SqlBatch, @SqlUpdate
* Document reducers in developer guide.
As far as I can tell, one-to-many relations can be mapped using either
I would like to be able to return custom beans, containing one-to-many relations, from sqlobject.
Using the example at http://jdbi.org/#_joins I would like to be able to do something similar to the following:
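A sketch of the shape being asked for (the bean names follow the contacts/phones joins example in the docs; the DAO method is hypothetical, since this is exactly the API the issue requests):

```java
import java.util.ArrayList;
import java.util.List;

// Beans mirroring the contacts/phones join example from the Jdbi docs.
class Phone {
    final String type;
    final String number;

    Phone(String type, String number) {
        this.type = type;
        this.number = number;
    }
}

class Contact {
    final long id;
    final String name;
    final List<Phone> phones = new ArrayList<>();   // the one-to-many side

    Contact(long id, String name) {
        this.id = id;
        this.name = name;
    }
}

// The wish: a SQL Object method that returns fully populated parent beans.
// (Hypothetical; the @SqlQuery is sketched in a comment because no
// annotation currently produces this mapping.)
interface ContactDao {
    // @SqlQuery("select ... from contacts left join phones on ...")
    List<Contact> listContactsWithPhones();
}
```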
I can see 2 ways this might be implemented:
If someone can help me get started, I would be happy to contribute the code myself, but I would need some guidance as to which implementation is preferred.