
Support auditing chunked SQL Server packets #29228

Merged
merged 8 commits into from Aug 9, 2023

Conversation

gabrielcorado
Contributor

@gabrielcorado gabrielcorado commented Jul 17, 2023

Closes #28632

Chunked packets are packets that don't fit within the negotiated network packet size. In this case, each packet carries a status flag indicating whether or not it is the last packet of the message.

Besides the production changes to support this kind of packet, this PR also adds new functions to generate SQL Server packets. Those functions have better documentation and enable tests to rely on provided data instead of example packets.

Note: This PR does not cover RPC calls that use the NTEXTTYPE parameter, due to an error involving the driver. It will require additional debugging; I'll work on this separately. (Tracking this in a separate issue.)

Changelog: Improved audit logging support for large SQL Server queries.
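The chunking described above can be detected from the TDS packet header: the second byte is a status field whose low bit marks the end of a message. A minimal sketch of the idea (not Teleport's actual implementation; the constant names here are illustrative):

```go
package main

import "fmt"

const (
	// statusEndOfMessage is the TDS STATUS bit marking the last packet
	// of a chunked message (MS-TDS: 0x01 = end of message).
	statusEndOfMessage = 0x01
	// tdsHeaderSize is the fixed TDS packet header size in bytes.
	tdsHeaderSize = 8
)

// isLastChunk reports whether a raw TDS packet is the final packet of a
// chunked message, by checking the status byte (second byte of the header).
func isLastChunk(packet []byte) bool {
	if len(packet) < tdsHeaderSize {
		return false
	}
	return packet[1]&statusEndOfMessage != 0
}

func main() {
	// Header only: type=0x01 (SQL batch), status=0x00 (more packets
	// follow), length=0x0008 big-endian, SPID=0, packet ID=1, window=0.
	middle := []byte{0x01, 0x00, 0x00, 0x08, 0x00, 0x00, 0x01, 0x00}
	final := []byte{0x01, 0x01, 0x00, 0x08, 0x00, 0x00, 0x01, 0x00}
	fmt.Println(isLastChunk(middle), isLastChunk(final)) // false true
}
```

An engine following this approach would buffer each packet's payload until a packet with the end-of-message bit set arrives, then parse and audit the reassembled message as one unit.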

@gabrielcorado gabrielcorado self-assigned this Jul 17, 2023
@github-actions github-actions bot added database-access Database access related issues and PRs size/md labels Jul 17, 2023
lib/srv/db/sqlserver/engine.go (outdated review thread)
initialPacketHeader = p.Header()
}

chunkData.Write(p.Data())
Contributor

Could this cause a DoS if a large enough packet was crafted?
Should we add a (configurable?) max size for the chunk data?

Contributor

There is no good limit value that can be set. We could consider using the SQL Server maximum-capacity limit for varchar(max)/varbinary(max)/xml/text/image columns (https://learn.microsoft.com/en-us/sql/sql-server/maximum-capacity-specifications-for-sql-server?view=sql-server-ver16#:~:text=Bytes%20per%20varchar(max)%2C%20varbinary(max)%2C%20xml%2C%20text%2C%20or%20image%20column), but that is a per-connection limit; a client can establish many database connections and try to force the Teleport db agent to run out of memory.
I wonder if we can add a debug/info log when handling a large packet and audit these operations.

Contributor (Author)

We can't set a limit that will guarantee a valid packet, because RPC calls can be composed of multiple parameters. Although I prefer having predictable memory consumption, I don't know how we should handle those partial packets. We could stick with the current behavior of generating db.session.malformed_packet events if a packet exceeds a threshold, but then that could be used to bypass audit logs too.

For me, the best solution is to parse those requests partially, so we can decide to split them into multiple audit logs. But this would require custom packet parsing (beyond what we get from the go-mssqldb driver), and I think it might not be worth doing.

Contributor

I think it would be OK to return an error if the packet exceeds a couple of MBs.
Or just truncate the packet (is that possible?) and add a sentinel flag indicating that this is a truncated message.

For reference, AWS CloudTrail has a max of 256KB for their log message size.
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/WhatIsCloudTrail-Limits.html
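The truncate-with-a-flag idea suggested here could look like the following sketch. The threshold is loosely modeled on the CloudTrail 256 KB limit mentioned above; the function name and constant are hypothetical, not code from this PR:

```go
package main

import "fmt"

// maxAuditedQuery is a hypothetical truncation threshold, loosely modeled
// on CloudTrail's 256 KB event size limit.
const maxAuditedQuery = 256 * 1024

// truncateForAudit returns the query capped at maxAuditedQuery bytes and a
// flag reporting whether truncation happened, so the emitted audit event
// can carry a "truncated" marker instead of silently dropping data.
func truncateForAudit(query []byte) ([]byte, bool) {
	if len(query) <= maxAuditedQuery {
		return query, false
	}
	return query[:maxAuditedQuery], true
}

func main() {
	q, truncated := truncateForAudit(make([]byte, 300*1024))
	fmt.Println(len(q), truncated) // 262144 true
}
```

One design caveat: truncating a TDS payload mid-parameter can leave it unparseable, so the flag would likely have to accompany a best-effort or raw representation of the query rather than a fully decoded one.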

lib/srv/db/sqlserver/protocol/packet.go (outdated review thread)
lib/srv/db/sqlserver/protocol/constants.go (review thread)

Contributor

@marcoandredinis left a comment

Sorry for the delay.
There's only one open thread w.r.t. packet size, but feel free to merge as-is 👍

@gabrielcorado gabrielcorado added this pull request to the merge queue Aug 9, 2023
Merged via the queue into master with commit 86358c1 Aug 9, 2023
21 checks passed
@gabrielcorado gabrielcorado deleted the gabrielcorado/audit-large-sql-server-packet branch August 9, 2023 03:36
@public-teleport-github-review-bot

@gabrielcorado See the table below for backport results.

Branch Result
branch/v11 Failed
branch/v12 Failed
branch/v13 Failed

Development

Successfully merging this pull request may close these issues.

SQL Server support for chunks payload audit logs
3 participants