
Implement Compression #31

Closed
bgrainger opened this issue Aug 10, 2016 · 5 comments

@bgrainger (Member)

Docs: https://dev.mysql.com/doc/internals/en/compression.html

MySqlConnectionStringBuilder.UseCompression already exists, but must be set to false; relax this restriction.
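For reference, here is a minimal sketch (in Python, for illustration only; the function names and the 50-byte uncompressed threshold are conventions from the linked docs, not MySqlConnector's API) of the compressed-packet framing the protocol documentation describes:

```python
import struct
import zlib

def build_compressed_packet(payload: bytes, sequence_id: int,
                            min_compress_length: int = 50) -> bytes:
    """Wrap a payload in the MySQL compressed-protocol envelope.

    Header layout (7 bytes): 3-byte little-endian compressed length,
    1-byte compressed sequence id, 3-byte little-endian uncompressed
    length (0 when the payload is sent uncompressed).
    """
    if len(payload) >= min_compress_length:
        body = zlib.compress(payload)
        uncompressed_length = len(payload)
    else:
        # Small payloads are sent as-is; uncompressed length 0 signals this.
        body = payload
        uncompressed_length = 0
    header = (struct.pack("<I", len(body))[:3]
              + bytes([sequence_id & 0xFF])
              + struct.pack("<I", uncompressed_length)[:3])
    return header + body

def parse_compressed_packet(data: bytes) -> tuple:
    """Return (sequence_id, payload) from one compressed-protocol packet."""
    compressed_length = int.from_bytes(data[0:3], "little")
    sequence_id = data[3]
    uncompressed_length = int.from_bytes(data[4:7], "little")
    body = data[7:7 + compressed_length]
    payload = zlib.decompress(body) if uncompressed_length != 0 else body
    return sequence_id, payload
```

Note that the compressed sequence id is tracked separately from the sequence ids of the ordinary packets carried inside the decompressed payload, which is one of the subtleties an implementation has to get right.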

@bgrainger (Member Author)

Need to test compressed packets that span the 16MiB boundary (see the implementation note in the protocol documentation); the MySql.Data library does not currently handle those correctly.
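The boundary rule in question can be sketched as follows (Python, illustrative name): a logical payload is split into chunks of at most 0xFFFFFF bytes, and a payload that is an exact multiple of 0xFFFFFF must be followed by an empty packet so the reader knows it has ended.

```python
MAX_PACKET_PAYLOAD = 0xFFFFFF  # largest payload one protocol packet can carry

def split_payload(payload: bytes) -> list:
    """Split a logical payload into packet payloads of at most 0xFFFFFF bytes.

    When the payload length is an exact multiple of 0xFFFFFF (including
    zero), an empty trailing packet is appended to terminate the sequence.
    """
    chunks = [payload[i:i + MAX_PACKET_PAYLOAD]
              for i in range(0, len(payload), MAX_PACKET_PAYLOAD)]
    if not chunks or len(chunks[-1]) == MAX_PACKET_PAYLOAD:
        chunks.append(b"")
    return chunks
```

A library that never exercises the empty-trailing-packet path is exactly the kind of implementation that breaks on payloads near the 16MiB boundary.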

@mguinness (Contributor)

When running a specific LINQ query (via Pomelo.EntityFrameworkCore.MySql) with UseCompression set to true, I get System.InvalidOperationException: Packet received out-of-order. Expected 2; got 93. This is reproducible.

@mguinness (Contributor) commented Dec 4, 2016

I did some more investigation and it appears to be related to the size of the data being returned. In the example below the InvalidOperationException is raised, but if I set the LIMIT to 74 it is not.

var cmd = db.CreateCommand();
cmd.CommandText = "SELECT * FROM `companies` LIMIT 75";

using (var rdr = cmd.ExecuteReader())
{
    while (rdr.Read()) { } // Packet received out-of-order. Expected 2; got 100.
}

It should also be noted that the loop works for the first 74 iterations, but it fails on the final one.

The following SQL reports 975 KB for the table in question, if that is of any help. The table uses the MyISAM engine and a Latin collation.

SELECT ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024) AS `Size (KB)` 
FROM information_schema.TABLES
WHERE TABLE_NAME = 'companies'

@bgrainger (Member Author)

@mguinness The repro is probably highly dependent on the exact size of the data (either before or after compression). Would it be possible for me to get a copy of the data (solely for creating a repro of this bug, and to be deleted immediately afterwards)? My contact details are on my GitHub profile page.

@bgrainger (Member Author)

Splitting this bug out to #146.
