
core(response-compression): add transferSize sanity check #3606

Merged
merged 3 commits into from
Oct 20, 2017

Conversation

patrickhulce
Collaborator

fixes #3604

@patrickhulce
Collaborator Author

On second thought, I'm not completely convinced we should use transferSize everywhere either. Do we really want a sanity check on transferSize but still use resourceSize? Open to others' thoughts.

@patrickhulce patrickhulce added this to the Sprint Dos: Oct 16-27 milestone Oct 18, 2017
Member

@paulirish paulirish left a comment


Looks good to me. This updated PR is exactly how I'd expect you to fix #3604.

@patrickhulce patrickhulce changed the title core(response-compression): use transferSize over resourceSize core(response-compression): add transferSize sanity check Oct 19, 2017
@@ -21,7 +21,7 @@ class ResponsesAreCompressed extends ByteEfficiencyAudit {
    */
   static get meta() {
     return {
-      name: 'uses-request-compression',
+      name: 'uses-response-compression',
Member


This is a better name, but should existing audits change names without a major version bump?

I'm admittedly completely biased because it's going to make my HTTP Archive queries (which are partly based on runner.getAuditList()) harder :)

Collaborator Author


Yeah, that's fair. I was hoping it was new enough that people weren't necessarily relying on it yet :/

Collaborator Author


reverted

@patrickhulce
Collaborator Author

@brendankenny was that your only change?

Member

@brendankenny brendankenny left a comment


Yeah, sorry, LGTM!

@patrickhulce patrickhulce merged commit e3888c6 into master Oct 20, 2017
@patrickhulce patrickhulce deleted the fix_response_compression branch October 20, 2017 20:18
@kodi

kodi commented Nov 12, 2017

Hello,

Thank you for the fix, this was quite fast!

Any idea when we can expect this to work? I'm still receiving the false alert as explained in #3604.

@patrickhulce
Collaborator Author

@kodi we'll hopefully be putting out a release this week, so the extension and CLI by EOD Friday.

Chrome stable will lag behind ~2 months so end of January.

@kodi

kodi commented Nov 13, 2017

@patrickhulce

Thank you, this means a lot!
I'll continue to watch this issue; I guess it will be closed once this hits stable.

Again, thank you for your work (everyone)!

@@ -48,7 +48,8 @@ class ResponsesAreCompressed extends ByteEfficiencyAudit {
     // we require at least 10% savings off the original size AND at least 1400 bytes
     // if the savings is smaller than either, we don't care
     if (1 - gzipSize / originalSize < IGNORE_THRESHOLD_IN_PERCENT ||
-        gzipSavings < IGNORE_THRESHOLD_IN_BYTES
+        gzipSavings < IGNORE_THRESHOLD_IN_BYTES ||
+        record.transferSize < gzipSize
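For context, the ignore condition in this hunk can be sketched as a standalone function. This is a hypothetical illustration, not the merged Lighthouse code: the function name `shouldIgnoreRecord` is invented, and the threshold constants are assumed from the comment above.

```javascript
// Illustrative sketch of the audit's ignore logic (names and constants are
// assumptions based on the diff, not the actual Lighthouse source).
const IGNORE_THRESHOLD_IN_PERCENT = 0.1; // require at least 10% savings
const IGNORE_THRESHOLD_IN_BYTES = 1400;  // and at least 1400 bytes saved

function shouldIgnoreRecord(record, originalSize, gzipSize) {
  const gzipSavings = originalSize - gzipSize;
  return (
    1 - gzipSize / originalSize < IGNORE_THRESHOLD_IN_PERCENT ||
    gzipSavings < IGNORE_THRESHOLD_IN_BYTES ||
    // the new sanity check: if fewer bytes were sent over the wire than the
    // gzipped size, the response was likely already compressed in transit,
    // so there are no real savings to report
    record.transferSize < gzipSize
  );
}
```

With this check, an already-compressed response (small transferSize, large resourceSize) is ignored instead of being flagged as missing compression, which is the false positive reported in #3604.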


Shouldn't this be a <= ?
If transferSize is exactly equal to the gzipSize then the resource should still be ignored, right?

Collaborator Author


Sure, though if we want to get nitpicky about it, it should really just be

const gzipSavings = Math.min(originalSize - gzipSize, record.transferSize - gzipSize)

above. Realistically, though, this is just a sanity check that shouldn't be hit given the new mitigation in the gatherer.

feel free to open a PR, we're always open to contributions :)
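The `Math.min` formulation suggested above can be sketched as follows. This is a hypothetical illustration of the reviewer's suggestion, not the merged code; the function name `computeGzipSavings` is invented.

```javascript
// Hypothetical sketch of the suggested alternative: cap reported savings by
// the bytes actually transferred, so an already-compressed response can
// never show positive savings. Not the merged implementation.
function computeGzipSavings(record, originalSize, gzipSize) {
  // For a response served compressed, record.transferSize is close to
  // gzipSize, so the second term drives the savings toward zero (or below),
  // making a separate ignore branch unnecessary.
  return Math.min(originalSize - gzipSize, record.transferSize - gzipSize);
}
```

This folds the sanity check into the savings calculation itself, rather than leaving it as a separate condition in the ignore branch.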

@rjgotten

rjgotten commented Nov 16, 2017

Chrome stable will lag behind ~2 months so end of January.

In other words: Lighthouse performance scores on stable are going to remain wildly incorrect for months to come.

That is extremely bad news when you have to deal with tech-savvy clients that are aware of the existence of this tool inside Chrome's developer tools and like to run it themselves.

I would really expect Chrome's developers to uplift a fix for this post-haste, considering the potential something like this has to cause real-world financial damage to web developers who have to deal with clients misreading this as actually affecting their scores.

(At least, I hope it's not actually affecting the computed score. If it is, then the problem would be considerably worse.)

Successfully merging this pull request may close these issues.

Erroneously reporting compression is missing
5 participants