`connection.query_one/all` with `update` and `delete` are a bit annoying to use #29
Comments
I think once we add these methods, it'll also be fine to have […]
I am a little confused about where the API is going to end up after this and #30. The pattern I am hoping it ends up with is building the command and then calling an execute/run/run_all/load function on that command for all database interactions. Something like: […]

Either that, or calling all of them off of the connection itself.
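The two shapes being compared could be sketched roughly like this. Everything here is invented for illustration (the `Command` type, the `sql` field, and the method bodies are stand-ins, not Diesel's actual API):

```rust
// A stand-in for a database connection.
struct Connection;

// A stand-in for a built command (e.g. the result of a builder chain).
struct Command {
    sql: String,
}

impl Command {
    // Style 1: call the finisher on the command itself.
    fn run_all(&self, _conn: &Connection) -> Vec<String> {
        // A real implementation would execute `self.sql` and map rows.
        vec![format!("row from: {}", self.sql)]
    }

    fn run(&self, conn: &Connection) -> Option<String> {
        self.run_all(conn).into_iter().next()
    }
}

impl Connection {
    // Style 2: call everything off of the connection.
    fn query_all(&self, cmd: &Command) -> Vec<String> {
        cmd.run_all(self)
    }
}

fn main() {
    let conn = Connection;
    let cmd = Command { sql: "UPDATE users SET name = 'Sean'".to_string() };

    // Both styles produce the same result; the question is which reads better.
    let rows = cmd.run_all(&conn);
    let same_rows = conn.query_all(&cmd);
    assert_eq!(rows, same_rows);
}
```

The command-centric style keeps the builder chain flowing left to right, while the connection-centric style keeps all database interaction entry points in one place.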
@mfpiccolo You just described exactly what I want (although I'm still unsure if […])
Also, I'm leaning towards […]
Yeah. That is more semantic. Will `delete` also support: […]?
It'll likely just be […]
👍
It used to be the case that everything had to go through `Connection#query_all` and `Connection#query_one`. I found it was annoying and added too many parentheses, which is why `load` and `first` were added (`connection.query_all(users.filter(name.eq("Sean")))` vs `users.filter(name.eq("Sean")).load(&connection)`).

I want to give the same treatment to these (technically `load` actually works fine, but I think that `update(users.filter(id.eq(1))).set(name.eq("Sean")).load(&connection)` reads really strangely). We should add some new methods (the names are up for debate):

- `run(&connection)`: We often know that we will return exactly one row. This should return `QueryResult<T>`, and will basically be `load(&connection).map(|mut r| r.nth(0).unwrap())`.
- `run_all(&connection)`: Alias for `load`. There is probably a better name for this, but I want it to imply that we're doing a command that happens to return.
- `execute(&connection)`: Alias for `Connection#execute_returning_count`.
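A rough, self-contained sketch of how the three proposed finishers would relate to `load`. The types and bodies here are invented stand-ins (a `Query` holding pre-computed rows), not Diesel's implementation; only the relationships between the methods reflect the proposal above:

```rust
type QueryResult<T> = Result<T, String>;

struct Connection;

// Stand-in for a built query; `rows` fakes what executing it would return.
struct Query {
    rows: Vec<i32>,
}

impl Query {
    // `load`: run the query and return every row.
    fn load(&self, _conn: &Connection) -> QueryResult<Vec<i32>> {
        Ok(self.rows.clone())
    }

    // `run`: we know exactly one row comes back, so unwrap the first one,
    // roughly `load(&connection).map(|mut r| r.nth(0).unwrap())`.
    fn run(&self, conn: &Connection) -> QueryResult<i32> {
        self.load(conn)
            .map(|rows| rows.into_iter().nth(0).expect("expected one row"))
    }

    // `run_all`: alias for `load`, named to imply a returning command.
    fn run_all(&self, conn: &Connection) -> QueryResult<Vec<i32>> {
        self.load(conn)
    }

    // `execute`: return only the affected-row count, in the spirit of
    // `Connection#execute_returning_count`.
    fn execute(&self, conn: &Connection) -> QueryResult<usize> {
        self.load(conn).map(|rows| rows.len())
    }
}

fn main() {
    let conn = Connection;
    let q = Query { rows: vec![1, 2, 3] };
    assert_eq!(q.run(&conn), Ok(1));
    assert_eq!(q.run_all(&conn), Ok(vec![1, 2, 3]));
    assert_eq!(q.execute(&conn), Ok(3));
}
```

Under this shape, `update(users.filter(id.eq(1))).set(name.eq("Sean")).run(&connection)` would read naturally where `.load(&connection)` on an update currently does not.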