
Proposed new API design principles #83

Closed
dimzon opened this issue Apr 8, 2014 · 43 comments

@dimzon (Contributor) commented Apr 8, 2014

I want to propose re-designing the new (2.0) release architecture/API using the following principles:

  1. Separation of concerns: each component must solve one concrete task, and every component can be replaced by a custom implementation.
  2. Use well-known GoF patterns instead of self-made solutions (for example, PojoMetadata is awful; it should be split into PojoMetadataProvider and PojoMetadata).
  3. Must not prevent writing plain old JDBC code: it must be possible to get a Connection from sql2o.Connection, a PreparedStatement from Query, and a ResultSet from IterableResultSet, since we may use a third-party library which consumes JDBC objects (for example, to convert a ResultSet to CSV).
  4. Must be able to consume JDBC objects at any step: it must be possible to create a sql2o.Connection from a Connection and a Query from a PreparedStatement, and to transform a ResultSet into all possible forms (List, Table, LazyTable, IterableResultSet), since sometimes we write not a project-from-scratch but code which obtains JDBC objects from outside.
@aaberg aaberg added this to the 2.0 milestone Apr 8, 2014
@aaberg aaberg mentioned this issue Apr 9, 2014
@dimzon (Contributor, Author) commented Apr 10, 2014

So let's start...

sql2o.Conf class

Aggregates all sql2o settings in one place. Implements sql2o.ConfSource.

sql2o.ConfSource interface

Contains a `sql2o.Conf getConf()` method.

sql2o.Sql2o class

My vision: it's a factory class.
Responsibility: single entry point, configured once and used many times. Takes a javax.sql.DataSource in the constructor. Produces sql2o.Conn. Can also create a sql2o.Conn from a java.sql.Connection. Implements sql2o.ConfSource.

Typical usage: one instance per javax.sql.DataSource. The developer creates a new instance and performs application-wide tweaks (quirks, strategies, etc.). On application servers the developer can/must implement singleton behavior somehow (lifetime management is outside sql2o.Sql2o's responsibility) using any preferred mechanism (plain old static fields, IoC containers, whatever).
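A minimal sketch of how these three types could fit together. Only the names Conf, ConfSource, Sql2o, Conn and the getConf() signature come from the proposal; every method body and the wrap() helper are assumptions for illustration.

```java
import javax.sql.DataSource;
import java.sql.Connection;

// Proposed: anything that can hand out a Conf.
interface ConfSource {
    Conf getConf();
}

// Proposed: aggregates all sql2o settings in one place.
class Conf implements ConfSource {
    @Override
    public Conf getConf() { return this; }
}

// Proposed: factory / single entry point, one instance per DataSource.
class Sql2o implements ConfSource {
    private final DataSource dataSource;
    private final Conf conf = new Conf();

    Sql2o(DataSource dataSource) { this.dataSource = dataSource; }

    @Override
    public Conf getConf() { return conf; }

    // Hypothetical name: adapt an externally obtained JDBC connection.
    Conn wrap(Connection jdbcConnection) {
        return new Conn(this, jdbcConnection);
    }
}

// Minimal stand-in for the sql2o.Conn described in the next comment.
class Conn implements ConfSource {
    private final ConfSource parent;
    private final Connection jdbcConnection;

    Conn(ConfSource parent, Connection jdbcConnection) {
        this.parent = parent;
        this.jdbcConnection = jdbcConnection;
    }

    @Override
    public Conf getConf() { return parent.getConf(); }
}
```

The point of the ConfSource chain is that every object down the pipeline can answer getConf() by delegating upward, so settings live in exactly one place.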

@dimzon (Contributor, Author) commented Apr 10, 2014

sql2o.Conn class

My vision: it's an adapter class.

Responsibility: a database connection. Takes a sql2o.ConfSource and a java.sql.Connection in the constructor. Can create sql2o.Query. Implements sql2o.ConfSource.

Typical usage: mostly like a plain old java.sql.Connection, with some syntactic sugar. Works like a decorator around sql2o.ConfSource and allows connection-wide tweaks (locally overriding tweaks from the sql2o.ConfSource).

JDBC consumer: can be created from a plain JDBC Connection.
JDBC provider: can return a ready-to-use JDBC Connection.

@dimzon (Contributor, Author) commented Apr 10, 2014

sql2o.Query class

My vision: it's a PreparedStatement on steroids. Takes a sql2o.ConfSource, a java.sql.Connection and the SQL query in the constructor.
Can produce sql2o.QueryResult.

Typical usage: mostly like a plain old java.sql.PreparedStatement.

JDBC consumer: can be created from a plain JDBC Connection.
JDBC provider: can return a ready-to-use PreparedStatement/CallableStatement.

@dimzon (Contributor, Author) commented Apr 10, 2014

sql2o.QueryResult class

My vision: an adapter around java.sql.ResultSet. Takes a java.sql.ResultSet and a sql2o.ConfSource in the constructor. Contains all the neat List<Bean> listOf(Class<Bean> ofBeanClass) etc. methods.
In addition, it must provide generic getString, getInteger, getDate, getUUID etc. methods whose behavior is customizable via quirks (some projects/databases use getNString, others getString, etc.).

JDBC consumer: can be created from a plain JDBC ResultSet.

@aaberg (Owner) commented Apr 10, 2014

> Separation of concerns: each component must solve one concrete task, and every component can be replaced by a custom implementation.

I agree.

> Use well-known GoF patterns instead of self-made solutions (for example, PojoMetadata is awful; it should be split into PojoMetadataProvider and PojoMetadata).

I'm not sure I agree that it is awful! :) But I agree that it's better to use patterns under the GoF umbrella.

> Must not prevent writing plain old JDBC code: it must be possible to get a Connection from sql2o.Connection, a PreparedStatement from Query, and a ResultSet from IterableResultSet, since we may use a third-party library which consumes JDBC objects (for example, to convert a ResultSet to CSV).

I think this is a really good idea.

> Must be able to consume JDBC objects at any step: it must be possible to create a sql2o.Connection from a Connection and a Query from a PreparedStatement, and to transform a ResultSet into all possible forms (List, Table, LazyTable, IterableResultSet), since sometimes we write not a project-from-scratch but code which obtains JDBC objects from outside.

As above, I think this is a really good idea :)

@aaberg (Owner) commented Apr 10, 2014

I think your vision for sql2o v2 is very promising. I have read through your comments, and I like what I see!

Some things I think are important:

  • The API must be clean and simple.
  • No annotations in the core lib.
  • No manipulation of SQL in the core lib.
  • No need to catch any exceptions. All exceptions should be runtime exceptions.
  • I think the API today (as seen from the user) is good; some changes will happen, but it shouldn't change too much.

I think all of this fits nicely into your vision!

Something I would like to discuss:

  • Will sql2o v2 have converters? How will they fit in?
  • How will transactions fit in? I think a transaction should be attached to a connection object and not extend it. I think it should be possible to begin a transaction both from a sql2o instance directly and from a connection instance.
  • Today you can execute a query directly on a sql2o instance (it will automatically open a connection, execute the query, and close the connection). Should we keep this behavior?
  • v2 will probably not be compatible with v1. Should we create a new package (namespace) for sql2o v2, like they did with org.apache.commons.lang3, so an application can use both sql2o v1 and v2 at the same time?

@dimzon (Contributor, Author) commented Apr 10, 2014

> No need to catch any exceptions. All exceptions should be runtime exceptions.

Yes, checked exceptions are the worst Java feature ever. But re-throwing all of them as RuntimeException is evil too, since you can't catch a concrete exception and you have to search for the proper call stack every time...
I just prefer to write throws Exception in all my business code. This is really the better solution: it keeps the original exception clean, without RuntimeException noise, and Java doesn't harass you into catching anything.

@aldenquimby (Contributor) commented:

I think the current design for sql2o exceptions is good and shouldn't change. Catch everything and rethrow as a Sql2oException, which extends RuntimeException. A lot of people don't add "throws Exception" everywhere, so it could be annoying for users to have to try/catch every sql2o call.


@dimzon (Contributor, Author) commented Apr 10, 2014

> Will sql2o v2 have converters? How will they fit in?

I think a better way is:

interface ResultSetExtractor {
    Object extractValue(QueryResult resultSet, int columnNumber) throws SQLException;
}

The opposite direction is the ParamSetter you already use.

The current converters become Extractor/Setter factories.
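To illustrate, here is a sketch of one concrete extractor under this design. The QueryResult stand-in and the UuidExtractor/parse names are assumptions; only the ResultSetExtractor shape comes from the comment above.

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.UUID;

// Minimal stand-in for the proposed QueryResult wrapper.
class QueryResult {
    private final ResultSet rs;
    QueryResult(ResultSet rs) { this.rs = rs; }
    ResultSet getResultSet() { return rs; }
}

interface ResultSetExtractor {
    Object extractValue(QueryResult resultSet, int columnNumber) throws SQLException;
}

// Example: a UUID extractor that a quirk/factory could hand out for UUID columns.
class UuidExtractor implements ResultSetExtractor {

    // Conversion logic kept separate so it can be reused and tested without a ResultSet.
    static UUID parse(String raw) {
        return raw == null ? null : UUID.fromString(raw);
    }

    @Override
    public Object extractValue(QueryResult result, int columnNumber) throws SQLException {
        return parse(result.getResultSet().getString(columnNumber));
    }
}
```

A quirk could then map java.util.UUID to a UuidExtractor instance, replacing today's converter lookup with an extractor factory.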

@dimzon (Contributor, Author) commented Apr 10, 2014

> Today you can execute a query directly on a sql2o instance (it will automatically open a connection, execute the query, and close the connection). Should we keep this behavior?

WARNING! IT'S IMHO.
An API is something designed by professionals for non-professionals (I read this in some framework design guide book). We must push consumers toward a proper programming style: to learn to use try-with-resources, and to not open a new connection for every query.
Take PHP as an example - it allows bad coding...

@dimzon (Contributor, Author) commented Apr 10, 2014

> v2 will probably not be compatible with v1. Should we create a new package (namespace) for sql2o v2, like they did with org.apache.commons.lang3, so an application can use both sql2o v1 and v2 at the same time?

Great idea.

@dimzon (Contributor, Author) commented Apr 10, 2014

> How will transactions fit in? I think a transaction should be attached to a connection object and not extend it. I think it should be possible to begin a transaction both from a sql2o instance directly and from a connection instance.

This is a subject to think about. It's not so easy; think about distributed transactions...
I really like how this works in .NET, let me show it:

// using is 100% the same as try-with-resources
using(Transaction tr = new Transaction()){
     // any code here is automatically enlisted in the transaction
     // this is done via ThreadLocal<Stack<Transaction>>
     // every constructor invocation puts the new transaction on the stack top
     // every Dispose() (Dispose == Java AutoCloseable.close())
     // removes the transaction from the stack
     // every database statement constructor checks the top of this stack;
     // if it's not empty, it enlists in the transaction
     this.doSomething();
     tr.commit();
}
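Translated to Java, the thread-local stack mechanism described in those comments could be sketched roughly like this. All names (AmbientTransaction, current) are hypothetical; this only shows the push-on-construct / pop-on-close bookkeeping, not any real JDBC enlistment, and it is per-thread rather than distributed.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of an ambient-transaction scope, mimicking the .NET pattern above.
class AmbientTransaction implements AutoCloseable {
    private static final ThreadLocal<Deque<AmbientTransaction>> STACK =
            ThreadLocal.withInitial(ArrayDeque::new);

    private boolean committed = false;

    AmbientTransaction() {
        STACK.get().push(this);            // constructor puts the transaction on top
    }

    // A statement constructor would enlist via this lookup; null means "no transaction".
    static AmbientTransaction current() {
        return STACK.get().peek();
    }

    void commit() { committed = true; }

    boolean isCommitted() { return committed; }

    @Override
    public void close() {                  // try-with-resources pops it back off
        STACK.get().pop();
        // if (!committed) a real implementation would roll back here
    }
}
```

Usage mirrors the C# snippet: `try (AmbientTransaction tr = new AmbientTransaction()) { ...; tr.commit(); }`.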

@dimzon (Contributor, Author) commented Apr 10, 2014

> A lot of people don't add "throws Exception" everywhere, so it could be annoying for users to have to try/catch every sql2o call.

This is a subject to talk about. I think (we can discuss and also look for best practices) that wrapping everything in a runtime exception is evil (see my explanation in a previous post). It's better to teach users a better and cleaner programming style (which will help in other cases too, not just sql2o).

@aldenquimby (Contributor) commented:

That is roughly how I use sql2o in my projects right now. I have a data layer package that wraps sql2o and drives a service/repository pattern.

To be honest, I don't think sql2o should implement this, because it's getting out of scope. The current version is one of the very few lightweight object mappers in Java. Imposing data-layer patterns on users takes sql2o well beyond an object mapper, and I don't think that's the right move.

Maybe someone could write an extension that keeps a stack of transactions and pushes/pops as things are created/disposed, but I really don't think that should be in the core.


@dimzon (Contributor, Author) commented Apr 10, 2014

Transactions are a core DB feature. We must provide transaction management in some way...

@aldenquimby (Contributor) commented:

Yes, I agree. I'm just saying that IMO we don't need to provide anything beyond begin/commit/rollback. Unless the project goals are changing drastically, I believe sql2o is an object mapper, not a data framework.

Small projects do not need large data frameworks. Large projects will have wrappers around sql2o so that the dependency is abstracted and other code won't break if the object mapper library is swapped out.

Again, this is just my opinion, but the minimalism of sql2o is what attracted me to it in the first place.


@aaberg (Owner) commented Apr 11, 2014

Can I create a query with named parameters, add parameters, execute the query and map it to a list of POJO objects in one line of code? No configuration (other than url/user/pass). No annoying exception handling. No annotations-mapping. No risk of leaving a connection open.

This was the original idea... I programmed the first version in a couple of nights, as a simple proof of concept, and showed it to some colleagues. They immediately started using it!

The goal of sql2o has never been to enforce best practices or to teach people proper programming techniques. It was to make something that is extremely easy and intuitive to use!

@aaberg (Owner) commented Apr 11, 2014

Here is what I think would work.

Starting point
Sql2o is the starting point for all operations.

Connection
From a Sql2o instance you can open a connection. A connection should contain operations for creating queries and starting a transaction. When a transaction is started, it is 'attached' to the connection.

Query
Largely as today: methods for adding parameters and executing.

I think the above is a good starting point. But I would like to keep many of the convenience methods sql2o has today.

List<Pojo> list = sql2o.createQuery(sql)
        .addParameter("param", val)
        .executeAndFetch(Pojo.class);

The above is in many ways the identity of sql2o. This should also be available in v2. It is a convenient way of opening a connection, executing a query and mapping the result to pojo objects.

But I also think something like the following should be possible:

try(Connection con = sql2o.open()) {
    try(Transaction trans = con.beginTransaction(isolationLevel)) {
        con.createQuery(sql).executeUpdate();
        trans.commit();
    }
    // execute something outside transaction
    con.createQuery(sql2).executeUpdate();
}

or if you just need to run something in a transaction

sql2o.runInTransaction((Connection con) -> {
    con.createQuery(sql1).addParameter("name1", val1).executeUpdate();
    con.createQuery(sql2).addParameter("name2", val2).executeUpdate();
});

I guess the key is to make sure it has a flexible model under the hood, while still keeping convenience functionality for the things we do all the time.

@dimzon (Contributor, Author) commented Apr 11, 2014

List<Pojo> list = sql2o.createQuery(sql)
        .addParameter("param", val)
        .executeAndFetch(Pojo.class);

Since these are just facade methods, it's OK. But I really dislike this; let me figure out why.
I'm a senior lead developer and I work with juniors and outsourcers very often.
Imagine your code contains something like this:

this.list1 = sql2o.createQuery(sql)
        .addParameter("param", val)
        .executeAndFetch(Pojo.class);

Now you need to add list2 with Pojo2 obtained by sql2. Guess what the junior will write ;)

this.list1 = sql2o.createQuery(sql)
        .addParameter("param", val)
        .executeAndFetch(Pojo.class);
this.list2 = sql2o.createQuery(sql2)
        .addParameter("param", val)
        .executeAndFetch(Pojo2.class);

@aaberg (Owner) commented Apr 24, 2014

I've been thinking, and here is what I propose.

Sql2o object

First, I would like to pull back my earlier comment that it should be possible to run queries directly on the sql2o object.
Consider the following code:

return sql2o.createQuery("select ...")
    .addParameter("param1", val)
    .addParameter("param2", val)
    .executeAndFetch(Pojo.class);

This approach is very convenient, as long as you only want to run one query.
But as I see it, the approach has two problems.

  1. As @dimzon has already mentioned, it may encourage bad coding, as it becomes too easy to open an unnecessary amount of connections to the database.
  2. We cannot guarantee that the connection is closed, as it is opened in the createQuery() method and closed in the executeAndFetch() method. If an exception is thrown while adding parameters, the connection may never be closed.

So, after some consideration, I support removing the createQuery function from the sql2o object.

Connections

Connections should be opened using the try-with-resources feature:

try(Connection connection = sql2o.open()) {
   ...
}

Queries are created from the connection object:

List<Pojo> pojos = connection.createQuery("select * from ...")
    .addParameters(...)
    .executeAndFetch(Pojo.class);

Queries

As @dimzon has already proposed (and implemented :), the Query class should also be AutoCloseable, so if you execute many queries on the same connection, you can close queries and their underlying statements as soon as they have been executed.

try(Query query = connection.createQuery("select * from ...")) {
    List<Pojo> pojos = query.addParameters(...).executeAndFetch(Pojo.class);
}

As it is generally good practice to close connections as fast as possible, and the above syntax adds complexity, it should not be required to close statements. Sql2o should clean up automatically, and call close on all statements created with a connection when the connection is closed.
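The clean-up rule described above amounts to the connection remembering everything it created and closing it on its own close(). A rough sketch (TrackingConnection/TrackedQuery are hypothetical names; the real classes would also close the underlying JDBC objects):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: a connection that remembers its queries and closes them with itself.
class TrackingConnection implements AutoCloseable {
    private final List<TrackedQuery> openedQueries = new ArrayList<>();

    TrackedQuery createQuery(String sql) {
        TrackedQuery q = new TrackedQuery(sql);
        openedQueries.add(q);              // remember, so close() can clean up
        return q;
    }

    @Override
    public void close() {
        for (TrackedQuery q : openedQueries) {
            try { q.close(); } catch (Exception ignored) { }
        }
        openedQueries.clear();
        // ...then close the underlying JDBC connection
    }
}

class TrackedQuery implements AutoCloseable {
    final String sql;
    boolean closed = false;

    TrackedQuery(String sql) { this.sql = sql; }

    @Override
    public void close() { closed = true; /* would close the underlying statement */ }
}
```

With this shape, explicitly closing a query early stays optional; leaking is impossible as long as the connection itself sits in a try-with-resources block.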

Transactions

Transactions are opened on a connection. They should also be AutoCloseable, so we have a clear scope for the transaction:

try(Connection connection = sql2o.open()) {

    // execute a query not using a transaction.
    List<Pojo> pojos = connection.createQuery("select * from ...")
        .addParameters(...)
        .executeAndFetch(Pojo.class);

    // run some updates in a transaction.
    try (Transaction transaction = connection.beginTransaction()) {
        connection.createQuery("insert into ....")
            .addParameters("..", val1)
            .addParameters("..", val2)
            .executeUpdate();

        connection.createQuery("insert into ....")
            .addParameters("..", val1)
            .addParameters("..", val2)
            .executeUpdate();

        transaction.commit();
    }

    // queries run on the connection after the transaction is closed will be autocommitted.
}

I also think we should supply a Java 8 lambda syntax for transactions, similar to how it works in today's version.

try(final Connection connection = sql2o.open()) {

    // execute a query not using a transaction.
    List<Pojo> pojos = connection.createQuery("select * from ...")
        .addParameters(...)
        .executeAndFetch(Pojo.class);

    // run some updates in a transaction.
    connection.runInTransaction(() -> {   // or should we rename this to withTransaction() ?
        connection.createQuery("insert ..", val1).executeUpdate();
        connection.createQuery("insert ..", val2).executeUpdate();

        // if everything went ok (no exceptions), the transaction is committed automatically.
        // If an unhandled exception is thrown, the transaction is rolled back.
    });

    // or if you need to have some more control over the transaction.
    connection.runInTransaction((Transaction t) -> {
        connection.createQuery("insert ..", val1).executeUpdate();
        connection.createQuery("insert ..", val2).executeUpdate();

        if (somethingIsWrong) {
            t.rollback();
        } else {
            // we don't need to explicitly commit the transaction,
            // as that is done automatically.
        }
    });
}

And if we need to add transaction savepoints:

try(final Connection connection = sql2o.open()) {

    // execute a query not using a transaction.
    List<Pojo> pojos = connection.createQuery("select * from ...")
        .addParameters(...)
        .executeAndFetch(Pojo.class);

    // or if you need to have some more control over the transaction.
    connection.runInTransaction((Transaction t) -> {
        SavePoint save1 = t.setSavePoint();
        connection.createQuery("insert ..", val1).executeUpdate();

        SavePoint save2 = t.setSavePoint();
        connection.createQuery("insert ..", val2).executeUpdate();

        if (somethingIsWrong) {
            t.rollback(save2);
        } else if (everythingWentWrong) {
            t.rollback(save1);
        } else {
            // we don't need to explicitly commit the transaction,
            // as that is done automatically.
        }
    });
}

Java 7

We will base much of the API on the Java 7 AutoCloseable feature, so I also support changing the target to Java 7.

Also...

...I think @dimzon brought up an excellent point when he suggested that it should be possible to create an object from the corresponding JDBC object it wraps. So a sql2o Connection can be created from a JDBC connection, and a sql2o Query object can be created from a PreparedStatement.

try(Query q = new Query(statement)) {
    List<Pojo> l = q.executeAndFetch(...);
}

And we should expose the PojoMapper, so it can be used directly with a ResultSet.

Also, pojo mapping should be implemented using a strategy pattern, so users can change the pojo mapping strategy. In this way we can supply a default PojoMappingStrategy that is roughly like the one we use today, but users could supply their own PojoMapper that, for instance, uses JPA annotations or similar.
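The strategy pattern mentioned above could be shaped roughly like this. Only the name PojoMappingStrategy comes from the comment; the method signature and the registry-based example implementation (standing in for the reflection-based default) are assumptions.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Sketch: pluggable strategy deciding how a result-set column lands on a pojo.
interface PojoMappingStrategy<T> {
    void mapColumn(T target, String columnName, Object value);
}

// Hypothetical example implementation: a caller-supplied registry of
// column -> setter lambdas. A JPA-annotation-driven strategy would be
// another implementation of the same interface.
class RegistryMappingStrategy<T> implements PojoMappingStrategy<T> {
    private final Map<String, BiConsumer<T, Object>> setters = new HashMap<>();

    RegistryMappingStrategy<T> register(String column, BiConsumer<T, Object> setter) {
        setters.put(column.toLowerCase(), setter);   // case-insensitive, like SQL columns
        return this;
    }

    @Override
    public void mapColumn(T target, String columnName, Object value) {
        BiConsumer<T, Object> setter = setters.get(columnName.toLowerCase());
        if (setter != null) {
            setter.accept(target, value);            // unknown columns are silently skipped
        }
    }
}
```

The mapper would then hold one strategy instance and call mapColumn once per column per row, never knowing which concrete strategy is behind the interface.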

Sql2o version 2

I had planned to take the changes we have already made, and publish a version 1.5. I think I'll skip that, so we can focus on version 2 from now on.

@dimzon, I have added you as collaborator, so you can assign yourself to issues, close issues, create branches in main sql2o project and so forth.

So... any objections, or is this something we can agree on?

~lars

@dimzon (Contributor, Author) commented Apr 25, 2014

I also propose revisiting some method names, e.g.:
executeAndFetch -> executeList or queryList, or maybe just a simple list()

@aldenquimby (Contributor) commented:

+1 for the naming idea; I usually associate "execute" with a write operation rather than a read operation.

Also, I didn't respond to Lars' proposal, but everything looks good to me! I actually have the nested connection/transaction setup in my project. If we're going to go that route, I can share the code that I have for it. You need to keep a stack of transactions and ensure a commit only happens with the outermost transaction. My implementation is intentionally not thread-safe, because I found it hurt performance too much and wasn't necessary.


@dimzon (Contributor, Author) commented Apr 25, 2014

Talking about thread safety: it's important for singleton objects (like Sql2o, Convert, FeatureDetector). It's not necessary for other objects.

There is a difference between a transaction itself and a savepoint within a transaction: it's impossible to set a different isolation level for a savepoint... So for clarity I propose a Commitable interface...

@aaberg Seems good to me (at first glance).

@dimzon (Contributor, Author) commented Apr 25, 2014

commit vs rollback - what should the "default" behavior be?
I believe the default must be "rollback", not "commit".
Reasons:

  1. commit is more destructive
  2. an attempt to commit may fail, and this failure is sometimes recoverable (read about DEFERRED constraints in Oracle)

@aaberg (Owner) commented May 1, 2014

Transactions

@aldenquimby I think your suggestion of tracking transactions, and only committing the outermost one, makes sense.

@dimzon I believe you are right that "rollback" should be the default action.

Method renaming

What about this?
executeAndFetch -> queryList
executeAndFetchFirst -> queryOne
executeAndFetchTable -> queryTable
executeAndFetchLazy -> queryIterable
executeAndFetchTableLazy -> queryTableIterable
executeUpdate -> execute

I think we should deprecate and keep the old method names as well, at least for a while. This makes it easier to upgrade from an earlier version of sql2o.
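Keeping the old names during the deprecation period could be as simple as thin delegating overloads. A sketch (the stubbed body and simplified signature are assumptions; only the two method names come from the table above):

```java
import java.util.Collections;
import java.util.List;

// Sketch of keeping an old method name as a deprecated alias of the new one.
class Query {

    // New name.
    <T> List<T> queryList(Class<T> type) {
        // ...real row mapping would happen here; stubbed for illustration.
        return Collections.emptyList();
    }

    /** @deprecated use {@link #queryList(Class)} instead. */
    @Deprecated
    <T> List<T> executeAndFetch(Class<T> type) {
        return queryList(type);            // old name just delegates
    }
}
```

Callers on the old API keep compiling (with a deprecation warning), and both names are guaranteed to behave identically because only one implementation exists.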

@aaberg aaberg mentioned this issue May 2, 2014
@Adhara3 commented Feb 13, 2015

I also have a couple of suggestions:
For queryList: why don't you populate an existing list instead of returning a new one? I usually prefer to hand over the container and have you fill it, rather than getting a new one returned.
For queryOne: how do we manage the "not exactly one result" scenario? We could return an Optional<T> instead of T (something similar to Java 8's findFirst() on streams), or we could accept another object as a parameter that is called when the number of results is not exactly 1.

Andrea
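The Optional-returning queryOne suggested above might be sketched like this (the Results/one names are hypothetical; the fail-fast-on-multiple-rows choice is one possible policy, not the project's decision):

```java
import java.util.List;
import java.util.Optional;

// Sketch: queryOne semantics - empty is fine, one row is returned,
// more than one row is treated as a programming error.
class Results {
    static <T> Optional<T> one(List<T> rows) {
        if (rows.size() > 1) {
            throw new IllegalStateException(
                    "expected at most one row, got " + rows.size());
        }
        return rows.isEmpty() ? Optional.empty() : Optional.of(rows.get(0));
    }
}
```

The alternative policy from the comment, passing in a callback that is invoked when the count is not exactly 1, would replace the throw with a call to that handler.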

@aconte76 commented:

Hi,
I've noticed a lot of time wasted in fetching: this depends mainly on the use of reflection (100k rows with 10 fields in 2 minutes, on an i7/4GB/Java 7).
As an option, it could be good to use something similar to DbUtils: a companion mapper class, or an interface to implement, to get bytecode-based bean population.

Antonio

@dimzon (Contributor, Author) commented Aug 20, 2015

@jammer76 do your Java classes have public fields or getters/setters? Which environment are you running on?

sql2o uses bytecode generators for methods (really, it reuses them from the JVM). I did some research (including decompiling the Oracle and IBM rt.jar) for the ReflectASM project:
EsotericSoftware/reflectasm#27
EsotericSoftware/reflectasm#23

@aconte76 commented:

Hi Dimzon,
my environment has public setters/getters via Lombok... but on a new try, with some tricks with the fetchSize of the DataSource, I got 21s per 1M rows of ~100 bytes with reflection, and 15s for the same rows with a handler (obviously for a simple query like select * from table where rownum < 1000000).

@dimzon (Contributor, Author) commented Aug 20, 2015

Try public fields

@aconte76 commented:

But public fields are not a good choice... sure, it's faster, but absolutely unsafe and unmaintainable... ;-)

@dimzon (Contributor, Author) commented Aug 20, 2015

The bottleneck is not reflection, since bytecode is used. Just one more virtual method call × 1M rows × n columns can take those 5 seconds.

@dimzon (Contributor, Author) commented Aug 20, 2015

It depends. Unless they are package-private...

@dimzon (Contributor, Author) commented Aug 20, 2015

Btw, I think we need a strategy to ignore setters and deserialize directly into private fields anyway. You can achieve the same effect now if you rename the backing fields without touching the setters, and make the column aliases in the select match the fields...

@aaberg (Owner) commented Aug 20, 2015

Such a strategy should be optional and should not be enabled by default; it would be counter-intuitive compared to ordinary Java standards. But I agree that such a strategy could be nice to have when you need a little extra performance for large datasets.

@dimzon (Contributor, Author) commented Aug 20, 2015

Btw, GSON uses such a strategy by default. Java's built-in serialization does too.

@dimzon (Contributor, Author) commented Aug 20, 2015

Serialization means persisting object state, and the actual state is the fields. So there is some dualism, and both approaches have a logical basis.

@dimzon (Contributor, Author) commented Aug 20, 2015

So saving fields means "save an object snapshot", and using properties means "save a recipe for how to create the same object". There is no silver-bullet solution - it depends...

@aaberg (Owner) commented Aug 20, 2015

Yes, it really depends on the use case. I think the best solution would be to have both strategies available, and to keep the properties-first strategy as the default.
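At its core, the field-first strategy discussed here would just write column values straight into fields, bypassing setters. A sketch using plain reflection (FieldWriter and the example pojo are hypothetical; the real implementation would use generated bytecode, as noted earlier in the thread):

```java
import java.lang.reflect.Field;

// Sketch: writing a column value straight into a private field, bypassing setters.
class FieldWriter {
    static void set(Object target, String fieldName, Object value) {
        try {
            Field f = target.getClass().getDeclaredField(fieldName);
            f.setAccessible(true);         // grants access to private fields
            f.set(target, value);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("cannot set field " + fieldName, e);
        }
    }
}

// Tiny example pojo with no setter at all.
class NoSetterPojo {
    private String name;
    String name() { return name; }
}
```

Under the properties-first default this path would only be a fallback; enabling it as the primary strategy would be the opt-in discussed above.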

@aconte76 commented:

For the serialization strategy, it could be useful to also investigate Kryo (the fastest one I know of):
https://code.google.com/p/kryo/

@dimzon (Contributor, Author) commented Aug 20, 2015

As I said before, the bottleneck is the additional virtual methods in the call stack when setters are invoked. Anyway, this overhead is negligible for real-life applications. For example, with a million rows: in real life you fetch them for some complex processing (otherwise it's better to process via SQL). I believe your processing takes much more time than the whole fetch overhead.
On the other side, for best performance we need the ability to pass a lambda, so anyone can provide hand-written, optimized code to read an object from the result set, where and when they really need it.

@dalexander01 commented:

@dimzon GSON makes use of sun.misc.Unsafe to handle serialization/deserialization using private fields. It's definitely a nice feature to have, as your model objects do not require setters (thus keeping them immutable everywhere), but I would be worried about sun.misc.Unsafe being removed in future Java versions.

@dimzon (Contributor, Author) commented Aug 21, 2015

It's impossible to completely take away such a mechanism, since Java RMI serialization is based on it. It's only possible to rename it...

@aaberg closed this as not planned Apr 14, 2024