☔ Pagespeed Insights results differ from lighthouse in chrome #6708

Open
vertic4l opened this issue Dec 3, 2018 · 146 comments
@vertic4l commented Dec 3, 2018

Hey there!

I'm optimizing a mobile site, and Lighthouse reported a score of 92. So far so good, I thought. But after checking with PageSpeed Insights, which also uses Lighthouse, I'm getting a score of 57.

Is there any reliable way to get the same score?

@exterkamp (Member) commented Dec 3, 2018

Hey, thanks for reaching out!

To help with diagnosing this, we'll need the URL you're testing and the context you're running Lighthouse in.

How did you get the 92? Are you running on Devtools in Chrome? On the node CLI? And what throttling settings are you using?
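If you're on the node CLI, the easiest way to share your exact settings is the invocation itself. A minimal sketch (example.com is a placeholder, and flag availability depends on your Lighthouse version):

```sh
# Sketch: run Lighthouse from the CLI with explicit, reportable settings
npm install -g lighthouse
lighthouse https://example.com \
  --only-categories=performance \
  --throttling-method=simulate \
  --output=json --output-path=./report.json
```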

@vertic4l (Author) commented Dec 4, 2018

Hey @exterkamp,

I ran an audit with Lighthouse in DevTools in Chrome and have now tested it with the node CLI as well.
Lighthouse in DevTools currently gives a Performance score of 92. The CLI (latest, 4.0.0-alpha.2-3.2.1) reports 75 for Best Practices and 0 for Performance. Either way, there's still a huge difference from the PSI score. I've mailed you the website in question!

These are my settings:

[screenshot]

Last test with lighthouse in DevTools:
[screenshot]

Last test with PSI:
[screenshots]

Last test with CLI:
[screenshot]

@exterkamp (Member) commented Dec 4, 2018

So I ran that URL against all channels, and I'm finding performance to be consistently in the high 50s/low 60s. I would say that's within expected variance.

Node v4.0.0-alpha.1
[screenshot]

PSI (I also force-ran against an EU data center and got similar results)
[screenshot]

DevTools in production Chrome, v3.0.3
[screenshot]

The 0 for Performance in the CLI definitely seems like an error; that site shouldn't get a 0. If it consistently does without surfacing an error, that might be a bug, or something might be blocking LH from running locally.

If you run from other machines, does your DevTools still get such a high score? That seems unreasonably high given those settings. I'm seeing a consistent ~10 seconds for TTI, but that DevTools screenshot shows ~4 seconds, which seems oddly fast compared to the other runs.

@vertic4l (Author) commented Dec 6, 2018

@exterkamp Thanks for testing it. It's odd that there's no Performance score when using the CLI version (although Best Practices gets a score as you can see). And it's very odd that the TTI differs so much.

@vertic4l (Author) commented Dec 18, 2018

@exterkamp So, I made some changes to pass more of the mobile audits.

Lighthouse in DevTools
Passed audits: 15
Performance Score: 83

Pagespeed Insights
Passed audits: 12
Performance Score: 45 (sometimes 55)

Both tests were run against the newly updated production site.

@AlexVadkovskiy commented Dec 18, 2018

Currently on Chromium Version 71.0.3578.80 I get the same weird results for almost any website. Lighthouse in DevTools is always 25-45 points better than on PSI.
example: https://www.omgubuntu.co.uk/ (got 82 in lighthouse and around 50 on PSI)
[screenshots]

Could this be related to DNS lookups or something similar? Could the PSI server have slightly slower access to the tested website than my local machine (depending on whether the server is actually located in the US or the EU)?

@vertic4l (Author) commented Dec 18, 2018

@AlexVadkovskiy just checked the website (https://www.omgubuntu.co.uk/) from Germany.

PSI: 29
Lighthouse in DevTools: 90

(mobile scores)

@exterkamp (Member) commented Dec 18, 2018

Hmmm, checking out https://www.omgubuntu.co.uk/ I get:
~80 from the CLI on v4.0.0-alpha.1
~40 from PageSpeed Insights (running v4.0.0-alpha.1)

So this is solidly reproducible for that site.

I am force-running PageSpeed in the EU and seeing similar results. I will say that PageSpeed sees a consistent ~200ms TTFB while locally I see ~60ms, and that URL seems to serve more images to PageSpeed for some reason? Some odd behavior for sure, and some questionable latency that might be us or the site.
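For anyone who wants to sanity-check TTFB from their own vantage point, curl's built-in timing variables give a rough number (a sketch):

```sh
# Time-to-first-byte as seen from this machine (compare against PSI's ~200ms)
curl -s -o /dev/null -w 'TTFB: %{time_starttransfer}s\n' https://www.omgubuntu.co.uk/
```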

@patrickhulce might want to take a look at this from a Lantern perspective, on why this could be different or whether something is up with the trace.

@paulirish might want to take a look at this from a global PageSpeed latency perspective.

@patrickhulce (Collaborator) commented Dec 19, 2018

@exterkamp Is this the same case that @wardpeet brought up? Some of his runs in LR had every request duplicated, which would immediately explain 2x Lantern predictions.

@AlexVadkovskiy The Chromium difference is also probably explained by #6772, FYI.

@exterkamp (Member) commented Dec 19, 2018

@patrickhulce I remember that, but I can't find that issue/discussion. Do you have a link to it, or can you bump it to mention this thread?

@patrickhulce (Collaborator) commented Dec 19, 2018

> but I can't find that issue/discussion. Do you have a link to it, or can you bump it to mention this thread?

IIRC that's because we just discussed it over chat :) @wardpeet do you happen to still have those?

@vertic4l (Author) commented Jan 2, 2019

@exterkamp any update here?

@vertic4l (Author) commented Jan 14, 2019

@patrickhulce So there is a bug in Chrome which is mentioned in other issues. Will fixing it fix the PSI score as well?

@patrickhulce (Collaborator) commented Jan 14, 2019

@vertic4l to which bug are you referring?

@exterkamp (Member) commented Jan 14, 2019

Oops this got buried! Missed that bump. I will definitely be looking into these URLs again soon. So far we have:

Taking a guess, the bug might be the flags being ignored? But that shouldn't be an issue, because this was reported before that version of Chrome shipped, IIRC.

@exterkamp self-assigned this Jan 14, 2019

@ashtonlance commented Jan 15, 2019

I'm having the complete opposite experience with this. My DevTools Lighthouse is giving me a score of 37, where PSI is giving a score of 91.

The URL in question is: https://biketours.com

@patrickhulce (Collaborator) commented Jan 15, 2019

@ashtonlance I'm seeing a PSI score of 24 for that URL:
[screenshot]

@ashtonlance commented Jan 15, 2019

@patrickhulce Sorry, those numbers I gave were the desktop scores.

@patrickhulce (Collaborator) commented Jan 15, 2019

@ashtonlance Ah you're probably experiencing #6772 then. Give it a whirl in Chrome Canary.

@ashtonlance commented Jan 15, 2019

Lighthouse is busted for me in Canary Version 73.0.3672.0.

[screenshot]

I was able to get the Lighthouse CLI to produce results similar to PSI's by turning off emulation and throttling.
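For reference, these are roughly the flags that do that on the Lighthouse CLI of this era (4.x/5.x; the emulation flag was renamed in later versions). A sketch, not a definitive recipe:

```sh
# Sketch: no simulated throttling, no device emulation (LH 4.x/5.x flag names)
lighthouse https://biketours.com \
  --throttling-method=provided \
  --emulated-form-factor=none \
  --output=html --output-path=./report.html
```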

@amaladevi-r commented Jan 16, 2019

@exterkamp I'm seeing the same issue, where Lighthouse reports a high score while PSI shows a lower one. You can try this URL:

https://www.bankbazaar.com/credit-card.html?mobileSite=true

@vertic4l (Author) commented Jan 16, 2019

[screenshot]
Well, not that bad...

@patrickhulce (Collaborator) commented Jan 16, 2019

@ashtonlance Something is off; that's Lighthouse 3.0.0-beta.0, which is ~8 months old, not the right version for Canary 🤔

@ashtonlance commented Jan 16, 2019

@patrickhulce Weird indeed. That was from a build I downloaded at https://www.google.com/chrome/canary/. However, I just downloaded a fresh copy and all seems to be good now.

@vertic4l (Author) commented Feb 1, 2019

@exterkamp any news?

@ChriStef commented Aug 15, 2020

I would like to add an article about Pagespeed and Lighthouse:

https://pagespeedplus.com/blog/pagespeed-insights-vs-lighthouse

In summary, the distinction between lab and real-world data matters:

[screenshot]

My online report:

[screenshot]

Thanks for the insights, good to clear this up.

Also consider which network location the test server runs from. For example, my website is hosted in Greece; when I run the test from Greece, I get better results.

I check global performance with this wonderful service: https://www.fastorslow.com/app

@ensemblebd commented Aug 15, 2020

It's a highly useful tool, but it should be taken with a grain of salt.

  • PSI on the web never reports the same score twice, even for the same page.
  • The two scores differ (Lighthouse vs. web), even though they are based on the exact same source code (geographic location, network speed, latency, and CPU power differ).
  • If you run Lighthouse on a PC with a slower CPU and then on a more powerful one, the score changes massively, and it reports problems with completely different aspects of the page.

All of which proves by simple logic that the "performance metrics" and "issues" we are told are perfect God data to rely on are actually fictitious, unbalanced, unreliable numbers that fluctuate by circumstance.
You are not analyzing your site. You are analyzing your PC analyzing your site. Let that sink in.

This ticket will never close and will never be resolved because those who are in charge will never admit it's fundamentally flawed. And that's because the market value and $$ generated by SEO companies using this tool is supermassive.
So I'm unsubscribing. Tired of all the emails. Two years we've waited.
Where is the official google response? Where is it?

A bit like the Surgeon General's cancer warning label on cigarettes, PSI needs a warning label: "Subject to variance". Simple. Clean. Honest.

@fchristant commented Aug 19, 2020

I also want to chip in: we're using SpeedCurve and occasionally compare it to PSI scores, just to see if our internal view of performance matches the external view. Today, these scores roughly match, even though the scores are unexpectedly low.

Pages that are very much usable and would normally classify as mid-level/medium performance get scores like 20. This isn't productive scoring; it's fatalistic and unreasonable, but that's a topic for another day.

So SpeedCurve and PSI align. Next, Lighthouse in DevTools bumps up the score by 30 points or more. This leaves the same question asked many times above: which one should we use and trust?

I can answer that question, actually. In the real world, typically marketing, SEO companies, agencies and such use PSI. So that's the reference, regardless of how we feel about that.

Where does that leave LH scoring in dev tools? With 30+ point differences, one of the following is true:

  • It is a number not to be taken seriously at all. It's personal and relative. All those devs proudly sharing scores on Twitter are basically proud of having fast hardware.
  • It is in fact correct (or close), and PSI is way off. Yet it doesn't matter because the world uses PSI.

I appreciate the difficulty of aligning throttling of complex runtime behavior (CPU, network), but that's an explanation, not an excuse. The scores have a very serious real-world meaning. Businesses make investments based on them, and surely in some way I don't understand it's a ranking signal, which also has a direct business outcome, for some even an existential one.

With that background in mind, it's inexcusable to show scores of 20, and then 55, for the very same thing tested. These numbers shouldn't be jokes.

Here's some awkward totally not hypothetical business conversations:

Looks like our mobile performance score is terrible, why?
It's actually fine if you check this other Google tool.

So all is good?
Unsure. Depends on who you ask. Not even Google knows.

Then what is the value of our tooling and dashboards?
I figured you like random colors. It's kind of like a stock market, on a bad trading day.

We got an external audit saying our performance is bad, what would you advise?
Just wait one week, maybe the scores become better.

You mentioned you deployed a big change to improve performance, can you show me the impact?
No. It's called "variability". Get with the times.

Dashboard says scores got 15% lower after our last release, roll back?
Already told you: just wait a week, try a different tool, or on different hardware. It solves all performance problems.

I'm dramatizing, obviously, but with a serious note. The answers given so far aren't good enough. If you position yourself as the performance gatekeeper of the web and directly associate it with very real business outcomes (ranking), none of these technical explanations are good enough.

Scoring must be aligned between tools, and scoring must be repeatable. It's not relevant how it's accomplished, just that it is accomplished. I consider it an existential issue for the credibility of the entire suite of tools. Without basic alignment and repeatability, not only can the scores not be taken seriously, the entire idea of a performance culture is like building on quicksand.

If PSI is way off and too variable, I expect it to be seriously investigated and improved, no matter the technical challenge. "Deal with it" isn't good enough.

If PSI is the correct one, and LH in DevTools is not, perhaps let LH call PSI, because personal and relative scoring is very misleading. It has its use during development, but that's not how this score is perceived; it's widely shared as if it's some neutral benchmark.

Don't mind my harsh tone. I care deeply about web performance which is exactly why it frustrates me so much that the basics just don't work. We need better answers, no matter how technically challenging it is to solve this.

For now, my personal conclusion and learning is to take PSI as the reference, even if its score is unreasonably low and variable. I'll consider LH in devtools purely a personal tool, to be used as a solo dev experience only, and its score is to never leave that boundary.

The awful consequence for anyone to follow this same line of thinking is that your PSI-based dashboards are depressing. Expect a deeply red/orange dashboard all over, as PSI scoring is very harsh.

@ivictbor commented Aug 20, 2020

Accidentally found this checkbox ticked (under the top-right gear); after unticking it, the results are almost the same:

[screenshot]
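That checkbox is the simulated-throttling toggle; on the CLI, the equivalent knob is roughly the --throttling-method flag (a sketch; example.com is a placeholder):

```sh
# Simulated throttling (checkbox on): load fast, then model the metrics
lighthouse https://example.com --throttling-method=simulate
# Applied DevTools throttling (checkbox off): observe metrics under real slowdown
lighthouse https://example.com --throttling-method=devtools
```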

@panablue commented Sep 2, 2020

@igorbraz89

> And I can't find a clear path to follow to prioritise my mobile enhancements.

If it helps at all: even though some page-speed optimizations have a bigger impact, every little bit helps, so just apply all possible optimization strategies rather than prioritizing.

@matti commented Sep 3, 2020

To summarize:

a) The Google PSI run is done from a datacenter close to the PSI user. You can verify this, for example, by running: https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwhatismyipaddress.com%2F

[screenshot]

For example, since I'm in Finland, the run was done from Switzerland (I have also seen Paris, France).

b) As can be seen from the screenshot, the load numbers are mostly wrong, because the page does not actually take that long to load.

c) The results of LH vs. PSI differ because:

  1. Google runs PSI from a different network location than you run your LH.
  2. Google runs PSI on an unknown machine type (CPU cores, CPU speed, memory, GPU, disk). The only thing we know is what the API reports:

```sh
$ curl -s "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://www.louhi.fi&strategy=desktop" | jq .lighthouseResult.environment
{
  "networkUserAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4143.7 Safari/537.36 Chrome-Lighthouse",
  "hostUserAgent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/83.0.4103.93 Safari/537.36",
  "benchmarkIndex": 710
}
```

So the Chrome is headless, they run it on Linux, and the benchmarkIndex is somewhere between 200 and 800. My MacBook Pro is about 1800.

  3. Google runs PSI with unknown network throttling.

Without knowing what kind of machines and settings Google uses for PSI's Lighthouse, it's not possible to get even close to those numbers on your own machine. You would also have to fly to Switzerland to run the test from there.
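One thing you can compare directly is that benchmarkIndex, since both the PSI API response and a local JSON report carry it. A sketch (example.com is a placeholder):

```sh
# PSI's machine speed for this run:
curl -s "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://example.com&strategy=mobile" \
  | jq '.lighthouseResult.environment.benchmarkIndex'

# Your machine's speed, from a local CLI run (the JSON report has the same field):
lighthouse https://example.com --output=json --quiet | jq '.environment.benchmarkIndex'
```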

@dobbobbs commented Sep 8, 2020

What is infuriating is how, on the mobile test, PSI seems to just randomly add 2-3s to the TTFB (and consequently to LCP and everything else), and I cannot find any real explanation why (except "network latency") or how one might go about addressing this. Also, I get an LCP of about 6 seconds for a particular page, a bad result I simply cannot reproduce in the Chrome Performance profiler, where it's about 3.5s at worst (of which about 2.5s is TTFB), no matter how many times I run the test, even with "6x CPU slowdown" and the "Slow 3G" network setting.

Considering my server is in the US (LA) and I am in Europe, but the website is intended for US users primarily, it makes no sense that the test should be QUICKER when I run it from within Chrome here in Europe (with all that trans-Atlantic latency), or that PSI should run the test as if I were a user close to my real location when the site isn't even intended for this geographic region. Which also makes me wonder how the Search Console comes up with its Core Web Vitals report, which is also on the poor end of the range of results I am getting.

Feels like Google is putting a bit of a bee in our bonnet about all this stuff. When I test the BBC News site, it makes no sense either (Field Data: LCP = 2.1, Lab data: almost always about 4s), so as soon as the BBC have it figured out, I will lose sleep over it all.

@sinklair commented Sep 18, 2020

I'm seeing the same thing when testing locally vs. using https://web.dev/measure/. One of our blog pages that scores 97 locally gets a score in the mid-50s on web.dev and on PSI.

@h3d0 commented Sep 25, 2020

I would like to join the discussion and share my concerns regarding this issue. Google is pushing devs and site owners to take Core Web Vitals into account while not providing a single source of truth. Currently there are:

  1. Web.dev/measure (Uses Lighthouse 6.0.0?)
  2. PageSpeed Insights (Uses Lighthouse 6.3.0)
  3. Chrome DevTools (6.0.0 for stable 85, 6.3.0 for unstable 87 - which is currently broken).
  4. Lighthouse NPM (6.3.0).

PSI

[screenshot]

Measure

[screenshot]

Chrome DevTools

[screenshot]

Chrome DevTools (simulated throttling on)

[screenshot]

NPM Lighthouse 6.3.0

[screenshot]

Our e-commerce project is limited to a single city in a single country. We don't plan to advertise to users outside this city. Why would the measurement tools use a server in the US or Switzerland?

So far, only the Lighthouse NPM package provides reasonably stable results across 10 runs, with only slight variance, and it uses the latest 6.3.0 version.

Can we somehow get one single reliable source of metrics? Which one does Google use to rank pages?
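In the meantime, the closest thing to a stable number seems to be repeated local runs. A rough sketch (five runs, example.com as a placeholder; the median is the value to look at):

```sh
# Run Lighthouse five times and print the performance scores sorted; read off the median
for i in 1 2 3 4 5; do
  lighthouse https://example.com --only-categories=performance --output=json --quiet \
    | jq '.categories.performance.score * 100'
done | sort -n
```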

@tylik1 commented Oct 6, 2020

The same thing for me: 58 in Lighthouse DevTools vs. 38 in Google PageSpeed.

@mihir-spinx commented Oct 9, 2020

We are also seeing a difference in score between Chrome DevTools Lighthouse and Google PageSpeed.
And if we scan the same website multiple times, we get a different score every time.
Can anyone please point me to the best and fastest Lighthouse API to get all scores for desktop and mobile?
Thank you.
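For programmatic scores, the public PSI v5 API is probably the closest fit. A sketch, with example.com as a placeholder and one request per strategy:

```sh
# Fetch mobile and desktop performance scores from the PSI v5 API
for strategy in mobile desktop; do
  curl -s "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://example.com&strategy=$strategy&category=performance" \
    | jq --arg s "$strategy" '{strategy: $s, score: .lighthouseResult.categories.performance.score}'
done
```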

@t1000upgraded commented Oct 17, 2020

Same here, and I don't know what to do.

@jamespb97 commented Nov 18, 2020

It also seems the variability in results between PSI and LH affects not only the overall performance score but also the main Web Vitals scores; in our case, in particular, the CLS score, which is being flagged in Google Search Console.

Search Console errors
[screenshot]

Results in PSI
[screenshot]

Result in LH
[screenshot]

I am also unable to replicate the CLS issue using the Performance tab within DevTools, at least not to the extent being reported in PSI. Using @matti's test, it seems that when we use PSI it is being run in Saudi Arabia or Egypt, while this site's audience is strictly in the UK.

Why are we getting penalised in Search Console, and therefore potentially having our rankings knocked down, because of tests seemingly carried out in a country that not a single one of our customers would be browsing from?

@Undistraction commented Nov 18, 2020

> Why are we getting penalised in Search Console, and therefore potentially having our rankings knocked down, because of tests seemingly carried out in a country that not a single one of our customers would be browsing from?

Because Google

@dobbobbs commented Nov 18, 2020

Since joining this thread I have stopped messing with Pagespeed and am ignoring Core Web Vitals and life is good. Not an option for everyone, I know, but other than actually streamlining my site/s as much as possible and eliminating any bottlenecks (which you want to do anyway) there is no point, seemingly, in obsessing about Google's metrics because they make no sense.

@patrickhulce (Collaborator) commented Nov 18, 2020

> It also seems the variability in results between PSI and LH affects not only the overall performance score but also the main Web Vitals scores

You're correct that variability is not a Lighthouse-only problem. It affects anything that collects or displays performance information, including Search Console.

> when we use PSI it is being run in Saudi Arabia or Egypt

FWIW, it shouldn't be, though there are still no UK locations to my knowledge.

@matti commented Nov 18, 2020

> when we use PSI it is being run in Saudi Arabia or Egypt

> FWIW, it shouldn't be, though there are still no UK locations to my knowledge.

The locations don't matter if the machine running the test is underpowered or overloaded, because the same machine runs multiple tests at the same time (see "noisy neighbour" in https://en.wikipedia.org/wiki/Cloud_computing_issues).

So running the test from Australia (or the Moon) on a machine with plenty of free CPU will be faster than running it from a closer location on an overbooked machine.

@matti commented Nov 24, 2020

I made this test page https://google-us-east1.vitals.supervisor.com/

The test page always causes FCP to happen at 1s, LCP at 2s, and the last CLS at 2.5s (these settings are tunable; see the end of the page for query parameters).

Here's a comparison of PSI and www.gtmetrix.com:

[screenshot]

PSI shows FCP/LCP timing to be 0.8s, when in reality the page is still white until 1.0s has elapsed. Also https://web.dev/measure/ shows incorrect 0.8s timings.

The source for my test page is available at https://github.com/supervisor-com/webvitals-server

btw, you can see these metrics visually with this:
http://test.supervisor.com/start?url=https://google-europe-north1.vitals.supervisor.com/

@mihalikv commented Nov 24, 2020

web.dev: score from 50 to 60, and LCP close to FCP
[screenshot]

Chrome Lighthouse (simulated throttling off): same as web.dev
[screenshot]

PSI: score lower than 50, and LCP much bigger than FCP, totally different from web.dev and local Lighthouse
[screenshot]

@matti commented Nov 24, 2020

So, the above in more concrete numbers:

  • PSI LCP: 3.8s
  • web.dev/measure: 1.4s
  • local Lighthouse: 1.2s

The machines/network running PSI are clearly underpowered or somehow broken.

@mihalikv commented Nov 24, 2020

@matti actually PSI LCP is 5.7s

@hooligani commented Nov 24, 2020

PSI does not throttle a real browser run (it only simulates throttling, to cut infrastructure costs), so that's why the results differ from Lighthouse in Chrome (observed mode).

@matti commented Nov 27, 2020

And it is almost impossible to replicate PSI's environment locally; see https://twitter.com/patrickhulce/status/1331961018654855168?s=20

So just don't use PSI: use gtmetrix.com for one user and one page, or supervisor.com for multiple pages and multiple users.

@socialpreneur commented Dec 30, 2020

@h3d0

> Can we somehow get one single reliable source of metrics? Which one does Google use to rank pages?

Google has made clear they don't use lab data for ranking, only field data, so the lab scores don't carry the weight they used to. But for optimizing and fixing problems, lab data is the only place we can see changes immediately (field data requires 28 days), so we still have to use it as one of our tools to measure performance, along with WebPageTest and others.
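Conveniently, the field data rides along in the same PSI API response as the lab data, so you can check both at once. A sketch (example.com is a placeholder):

```sh
# loadingExperience = CrUX field data (28-day window); lighthouseResult = lab data
curl -s "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://example.com" \
  | jq '{field: .loadingExperience.metrics, lab_score: .lighthouseResult.categories.performance.score}'
```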

@Sven74Muc commented Jan 2, 2021

Sorry, I missed this thread... have the same issue: #11908

@pjar commented Jun 6, 2021

> Since joining this thread I have stopped messing with Pagespeed and am ignoring Core Web Vitals and life is good.

I have the same feelings. I'm spending my weekend tasked with improving these laughable metrics. With every click on "Analyze" in PageSpeed Insights I get scores differing by about 20 points, listing different "reasons" each time. How am I supposed to use it?

I went to check their page https://developers.google.com/web/tools/lighthouse/, the one that is supposed to tell us how to prepare our pages to please this vague tool. It does not do well on the mobile score itself: barely 40 points, and it likewise swings ±10 points after each "Analyze". Looks like even Google can't figure out how to please this tool.
[screenshot]

Lots of negatives here, but I'm just trying to finish my work and spend some time with my family instead of trying to guess what Google/Lighthouse wants from me.

@VenkatSandeeph commented Jun 30, 2021

I have run PSI and Lighthouse in Chrome DevTools for mobile, and the FCP and LCP vary between the two. I also get different results when checking from different locations. Try this URL: https://mydepartments.in/

@matti commented Jun 30, 2021

@VenkatSandeeph Yeah, your site loads way faster than what those screenshots show.
