
Q: Does this sink offer a size limit? #375

Closed
sommmen opened this issue Jan 5, 2022 · 2 comments
Labels
question This issue contains a general question. Please use the discussion section for this.

Comments

sommmen commented Jan 5, 2022

Hiya,

I'm currently using the file sink from Serilog, but in production the log files are not accessible.
I thought of pushing logs to an MSSQL database since we've already got that up and running.

The problem is that there is only limited space on the server.
I've set my file sink to log up to 1 GB.

Does this MSSQL sink have this kind of feature, where I can limit the rows or the overall table size?
Are there plans for this feature?

Since I hadn't seen anything in the readme, I thought about running periodic checks to remove older entries.
Would you have any recommendations regarding that?

ckadluba added the question label Jan 5, 2022

ckadluba commented Jan 5, 2022

Hi @sommmen!

Thank you for your question. The answer is no. The SQL sink does not have any function to limit the size of log data written to the db. It does not even keep track of the amount of data written to the db.

For file logging it makes sense to limit the size and number of the files: it keeps them at a workable size that can be handled by tools like editors, and it prevents the logger from filling up a drive completely, which could cause other programs to fail.

For the SQL sink a correct and usable implementation would be harder and some questions arise. Should logging just silently stop (not a good idea) or overwrite/delete old logs (also not perfect and harder to do)? How and when to determine how much space is already allocated/used in the db? How to deal with multiple processes/machines logging into the same db?

Because of these unclear points and the increased complexity, I believe a size limit should not be implemented in the SQL sink. Instead I recommend an external periodic job which moves older logs out of the db (possibly to an archive db).
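
A minimal sketch of such an external job, as T-SQL you could run from a SQL Agent schedule. The table name (dbo.Logs), the [TimeStamp] column, and the 30-day retention are assumptions based on the sink's defaults and would need to match your configuration:

```sql
-- Hypothetical nightly retention job (table/column names are assumptions).
-- Deletes old rows in small batches so locks and the transaction log
-- stay manageable; an INSERT ... SELECT into an archive table could go
-- right before the DELETE if the rows should be kept.
DECLARE @BatchSize int = 5000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.Logs
    WHERE [TimeStamp] < DATEADD(DAY, -30, SYSUTCDATETIME());

    -- Stop once a batch comes back smaller than requested.
    IF @@ROWCOUNT < @BatchSize
        BREAK;
END
```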

But of course if someone has a clever idea for this and sends a PR, we are always happy to add good improvements to the sink.


sommmen commented Jan 6, 2022


Hello - thanks for answering my question.

I think I understand and agree with what you're saying here after reading it twice :).

It does still feel a bit weird - I'd love to log to MSSQL, but I don't want to write logic in another concern to keep the database from becoming too large - the same way it'd be weird if I had to clean up older log files in the file sink.

Just some points/braindump:

> Should logging just silently stop (not a good idea) or overwrite/delete old logs (also not perfect and harder to do)? How and when to determine how much space is already allocated/used in the db?

Agreed - however, if the consumers would be fine with an approximation, this gets easier.

Also I can think of the following steps in a single transaction (rough sketch below):

  • Insert rows
  • Grab the table size (permissions? performance?)
  • Loop over rows ordered by timestamp (oldest first), deleting until the table size is below the limit (performance?)

Btw: there's a way to get the data size of a column: https://docs.microsoft.com/en-us/sql/t-sql/functions/datalength-transact-sql?view=sql-server-ver15
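
A rough sketch of that trim in T-SQL; the table name, the Id identity column, and the [TimeStamp] column are assumptions, and the size check via sys.dm_db_partition_stats (which needs VIEW DATABASE STATE permission, matching the permissions question above) only approximates the allocated pages:

```sql
-- Hypothetical size-capped trim (all object names are assumptions).
DECLARE @MaxBytes bigint = 1073741824;  -- 1 GB cap
DECLARE @UsedBytes bigint;

WHILE 1 = 1
BEGIN
    -- Allocated pages * 8 KB; this lags a little behind deletes,
    -- so it is only the approximation mentioned above.
    SELECT @UsedBytes = SUM(used_page_count) * 8192
    FROM sys.dm_db_partition_stats
    WHERE object_id = OBJECT_ID(N'dbo.Logs');

    IF @UsedBytes IS NULL OR @UsedBytes <= @MaxBytes
        BREAK;

    -- Delete the oldest rows first.
    DELETE FROM dbo.Logs
    WHERE Id IN (SELECT TOP (1000) Id FROM dbo.Logs ORDER BY [TimeStamp] ASC);

    -- Table is empty but pages not yet released: stop anyway.
    IF @@ROWCOUNT = 0
        BREAK;
END
```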

A row limit would be easier/faster, and the consumer could make do with a little calculation to approximate the database size (based on their log volume per day, for example).
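
The row cap could be as small as this (again with assumed names; ordering by the Id identity column avoids a sort on the timestamp):

```sql
-- Hypothetical row-count cap: keep only the newest @KeepRows entries.
DECLARE @KeepRows int = 1000000;

DELETE FROM dbo.Logs
WHERE Id < (
    SELECT MIN(Id)
    FROM (SELECT TOP (@KeepRows) Id FROM dbo.Logs ORDER BY Id DESC) AS newest
);
```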

> How to deal with multiple processes/machines logging into the same db?

Welp, the same way we'd handle normal tables and multiple processes: with transactions, I suppose. Dropping old logs and inserting new ones in a single transaction would do, I think.
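
As a sketch of that coordination (not something the sink does; the lock name and delete criteria here are made up), SQL Server's sp_getapplock can serialize the trim so only one writer runs it at a time:

```sql
-- Hypothetical coordinated trim across multiple logging processes.
BEGIN TRANSACTION;

DECLARE @LockResult int;
EXEC @LockResult = sp_getapplock
    @Resource    = 'LogsTrim',
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',
    @LockTimeout = 0;   -- don't wait: someone else is already trimming

IF @LockResult >= 0     -- 0 or 1 means the lock was granted
    DELETE FROM dbo.Logs
    WHERE [TimeStamp] < DATEADD(DAY, -30, SYSUTCDATETIME());

COMMIT TRANSACTION;     -- also releases the transaction-owned applock
```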

Anyhow - I know enough. Thanks again for taking some time to answer me, and I'll go ahead and close the issue.

sommmen closed this as completed Jan 6, 2022