Let's figure out logging #150
Comments
sgrif added the enhancement, discussion desired, and accepted labels on Jan 30, 2016

sgrif added this to the 1.0 milestone on Jan 30, 2016

sgrif referenced this issue on Jan 31, 2016 (merged): Make sure we only use `RETURNING` on backends that support it #149

sgrif modified the milestones: 0.13, 1.0 on Mar 18, 2017

killercup referenced this issue on May 12, 2017 (merged): Respect --migration-dir and the MIGRATION_DIRECTORY env variable everywhere #901

sgrif modified the milestones: 1.0, 0.13 on Jun 5, 2017
I'm going to pull this off the 1.0 milestone. I don't see any way for this to happen in the immediate future. The ecosystem isn't there yet. From my point of view there are two main requirements here:

The two options today seem to be

I'll be interested to see what happens with SergioBenitez/Rocket#21, as I assume they have similar concerns to mine.
sgrif removed this from the 1.0 milestone on Jul 30, 2017
dpc commented Sep 9, 2017

Also, if
@sgrif would it be possible to leave logging up to the user by providing hooks on the connection? I'm thinking something like this:

```rust
trait Connection {
    ...
    /// Register a function that will be called before each executed statement.
    fn pre_execute<F>(&mut self, hook: F)
        where F: Fn(query: &str, args: &[?]) -> bool;
    fn on_success<F>(&mut self, hook: F) where ..;
    fn on_error<F>(&mut self, hook: F) where ..;
}
```

This would allow customized logging, only logging errors, profiling by measuring execution time, ...
Profiling execution time is definitely an interesting case that I haven't considered much. I need to give that one some thought. (The general answer to your question is: yes, having us provide our own ad-hoc logging system with shims for log and slog is probably the most likely path forward at this point.)
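On the profiling use case: timing needs nothing beyond wrapping the statement call. A hypothetical std-only `timed` helper (not part of Diesel) could hand the query text and elapsed duration to any callback, which could in turn forward to a logger:

```rust
use std::time::{Duration, Instant};

// Sketch of the profiling idea raised above: a hypothetical `timed` helper
// that runs a statement closure and reports the elapsed time to a callback.
fn timed<T, F, C>(query: &str, run: F, report: C) -> T
where
    F: FnOnce() -> T,
    C: FnOnce(&str, Duration),
{
    let start = Instant::now();
    let result = run(); // execute the statement (stubbed as a closure here)
    report(query, start.elapsed());
    result
}
```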
I've played a bit with making a custom `LogConnection`:

```rust
#[derive(Debug, Clone, Copy)]
pub enum TransactionState {
    Start,
    Commit,
    Abort,
}

pub trait Logger<DB: Backend>: Send + Default {
    fn on_establish(&self, url: &str);
    fn on_transaction(&self, state: TransactionState);
    fn on_execute(&self, query: &str);
    fn on_query<T>(&self, source: &T)
    where
        T: QueryFragment<DB>,
        DB::QueryBuilder: Default;
}

#[derive(Debug)]
pub struct LogConnection<C, L> {
    inner: C,
    logger: L,
}

impl<C, L> SimpleConnection for LogConnection<C, L>
where
    C: Connection,
    L: Logger<C::Backend>,
{
    fn batch_execute(&self, query: &str) -> QueryResult<()> {
        self.logger.on_execute(query);
        self.inner.batch_execute(query)
    }
}

impl<C, L> Connection for LogConnection<C, L>
where
    C: Connection,
    L: Logger<C::Backend>,
    <C::Backend as Backend>::QueryBuilder: Default,
{
    type Backend = C::Backend;
    type TransactionManager = LogTransactionManager<C::TransactionManager>;

    fn establish(database_url: &str) -> ConnectionResult<Self> {
        let logger = L::default();
        logger.on_establish(database_url);
        C::establish(database_url).map(|inner| LogConnection { inner, logger })
    }

    fn execute(&self, query: &str) -> QueryResult<usize> {
        self.logger.on_execute(query);
        self.inner.execute(query)
    }

    fn query_by_index<T, U>(&self, source: T) -> QueryResult<Vec<U>>
    where
        T: AsQuery + Clone, // `Clone` needed for the `source.clone()` below
        T::Query: QueryFragment<Self::Backend> + QueryId,
        Self::Backend: HasSqlType<T::SqlType>,
        U: Queryable<T::SqlType, Self::Backend>,
    {
        self.logger.on_query(&source.clone().as_query());
        self.inner.query_by_index(source)
    }

    fn query_by_name<T, U>(&self, source: &T) -> QueryResult<Vec<U>>
    where
        T: QueryFragment<Self::Backend> + QueryId,
        U: QueryableByName<Self::Backend>,
    {
        self.logger.on_query(source);
        self.inner.query_by_name(source)
    }

    fn execute_returning_count<T>(&self, source: &T) -> QueryResult<usize>
    where
        T: QueryFragment<Self::Backend> + QueryId,
    {
        self.logger.on_query(source);
        self.inner.execute_returning_count(source)
    }

    fn transaction_manager(&self) -> &Self::TransactionManager {
        // See the implementation of `std::path::Path::new`
        unsafe {
            &*(self.inner.transaction_manager() as *const C::TransactionManager
                as *const LogTransactionManager<C::TransactionManager>)
        }
    }
}

#[derive(Debug)]
pub struct LogTransactionManager<T> {
    inner: T,
}

impl<C, L> TransactionManager<LogConnection<C, L>> for LogTransactionManager<C::TransactionManager>
where
    C: Connection,
    L: Logger<C::Backend>,
    <C::Backend as Backend>::QueryBuilder: Default,
{
    fn begin_transaction(&self, conn: &LogConnection<C, L>) -> QueryResult<()> {
        conn.logger.on_transaction(TransactionState::Start);
        self.inner.begin_transaction(&conn.inner)
    }

    fn rollback_transaction(&self, conn: &LogConnection<C, L>) -> QueryResult<()> {
        conn.logger.on_transaction(TransactionState::Abort);
        self.inner.rollback_transaction(&conn.inner)
    }

    fn commit_transaction(&self, conn: &LogConnection<C, L>) -> QueryResult<()> {
        conn.logger.on_transaction(TransactionState::Commit);
        self.inner.commit_transaction(&conn.inner)
    }

    fn get_transaction_depth(&self) -> u32 {
        self.inner.get_transaction_depth()
    }
}
```

This could nearly be implemented outside of diesel.
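A note on the `transaction_manager` pointer cast above, which borrows the trick used by `std::path::Path::new` (reinterpreting `&T` as a reference to a newtype over `T`): this is only guaranteed sound when the wrapper is `#[repr(transparent)]`, which the snippet above does not declare. A standalone sketch of the pattern, with hypothetical names:

```rust
// Standalone illustration of the reference-cast trick referenced above
// (also used by `std::path::Path::new`). `#[repr(transparent)]` guarantees
// `Wrapper<T>` has exactly the layout of `T`, which is what makes the
// pointer cast defined behavior.
#[repr(transparent)]
struct Wrapper<T> {
    inner: T,
}

impl<T> Wrapper<T> {
    /// Reinterpret `&T` as `&Wrapper<T>` without copying or allocating.
    fn from_ref(inner: &T) -> &Wrapper<T> {
        // SAFETY: `Wrapper<T>` is `#[repr(transparent)]` over `T`.
        unsafe { &*(inner as *const T as *const Wrapper<T>) }
    }
}
```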
The main problem with that implementation is that I made the mistake of encouraging code written as
I also doubt this could ever be (efficiently) implemented outside of Diesel, since any outside implementation would have to force the query builder to run when we would normally skip it because it's in the prepared statement cache and looked up by
That's a bit unfortunate, but is it really required that logging work out of the box for existing implementations?

In the interface that I've proposed above there is a
lolgesten commented Mar 8, 2018

@sgrif enabling

I.e. the postgres logs do not actually show you the values, so it's not a good replacement for hunting down bugs.
@diesel-rs/core I would like to propose that we discuss this issue again in the context of diesel 2.0.

Couldn't this be simply "solved" by doing
@weiznich If your crate doesn't ever pass a connection to a dependency... maybe? I don't think it's a viable option at this point, even beyond the "People take
sgrif commented Jan 30, 2016 • edited by killercup

As I'm adding SQLite support, I'm finding it annoying to track down cases like:

called `Result::unwrap()` on an `Err` value: DatabaseError("SQL logic error or missing database")

This should definitely be injectable, and I'd prefer that it be decoupled from the individual connection classes, but I'm not sure that's actually possible.

I'd originally imagined something like `LoggingConnection` being a struct which wrapped another `T: Connection` and ran everything through the `DebugQueryBuilder` (and eventually cached it through the same mechanisms that we add for prepared statement caching). This should also log the values for the bind params. I am fine with adding the constraint `ToSql<ST, DB>: Debug` to accommodate this, as basically everything should implement it.

However, `DebugQueryBuilder` can have output which potentially differs from the actual SQL executed (for example, we're currently figuring out how to deal with SQLite's lack of a `DEFAULT` keyword, and I don't see how whatever we end up doing can actually be reflected in `DebugQueryBuilder`). So perhaps the correct answer is to have a `set_logger` method added to `Connection`.

Ultimately I'm undecided on this, and am open to either solution if presented as a working PR. Discussion about how to go about this, or use cases that we think we need to support, is certainly welcomed.
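The wrapper shape described in this issue (a `LoggingConnection` around some `T: Connection`) can be illustrated with a std-only toy. The `Connection` trait below is a drastically reduced stand-in for Diesel's real trait, and `FakeConnection` is a dummy backend; all names are illustrative only:

```rust
use std::cell::RefCell;

// Toy model of the decorator idea from this issue. `Connection` here is a
// one-method stand-in for Diesel's much larger trait.
trait Connection {
    fn execute(&self, query: &str) -> Result<usize, String>;
}

// Dummy backend that "executes" everything successfully.
struct FakeConnection;

impl Connection for FakeConnection {
    fn execute(&self, _query: &str) -> Result<usize, String> {
        Ok(0)
    }
}

/// Records every statement before delegating to the wrapped connection.
struct LoggingConnection<C> {
    inner: C,
    log: RefCell<Vec<String>>,
}

impl<C: Connection> LoggingConnection<C> {
    fn new(inner: C) -> Self {
        LoggingConnection { inner, log: RefCell::new(Vec::new()) }
    }
}

impl<C: Connection> Connection for LoggingConnection<C> {
    fn execute(&self, query: &str) -> Result<usize, String> {
        self.log.borrow_mut().push(query.to_string());
        self.inner.execute(query)
    }
}
```

Because the wrapper itself implements `Connection`, code that is generic over `C: Connection` works unchanged with or without logging, which is the appeal of the decoupled design the issue asks for.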