
hls-eval-plugin: Replicate #4139 #4140

Merged: 4 commits merged into haskell:master on Mar 27, 2024

Conversation

mattapet (Contributor)

No description provided.

fendor (Collaborator) commented Mar 12, 2024

Thanks! As a note, our CI jobs are rather flaky and fail unpredictably, so don't be alarmed or suspect an error in your code if some random plugin tests fail.

fendor (Collaborator) commented Mar 12, 2024

That's a good start! Perhaps now we can investigate why the other test cases are fine, and only this scenario is affected?

Since it might not be obvious, you can run your individual test case locally like this:

> TASTY_PATTERN="T4139" cabal test test:hls-eval-plugin-tests

To see debug logs:

> HLS_TEST_PLUGIN_LOG_STDERR=1 TASTY_PATTERN="T4139" cabal test test:hls-eval-plugin-tests

And to log everything in HLS (likely too much information):

> HLS_TEST_LOG_STDERR=1 TASTY_PATTERN="T4139" cabal test test:hls-eval-plugin-tests

mattapet (Contributor, Author)

The debug output was not much help. Continuing the search, I tried explicitly filtering the main function out of the renamed environment, which made the tests pass again (at least locally).

@fendor do you know if there are any specific properties of the module, or the hsc environment, that might be worth checking for differences?
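
For reference, a minimal sketch of that kind of filtering, written against the GHC 9.8 API (an illustration of the idea only, not the actual change in this PR):

-- Illustration only: drop the module's `main` binding from a renamer
-- environment before the eval step sees it.
import GHC.Types.Name.Occurrence (occNameString)
import GHC.Types.Name.Reader (GlobalRdrEnv, filterGlobalRdrEnv, greOccName)

dropMainBinding :: GlobalRdrEnv -> GlobalRdrEnv
dropMainBinding =
  filterGlobalRdrEnv (\gre -> occNameString (greOccName gre) /= "main")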

fendor (Collaborator) commented Mar 13, 2024

> do you know if there are any specific properties of the module, or the hsc environment, that might be worth checking for differences?

Unfortunately no, I am not familiar with that part of GHC's codebase. Perhaps @wz1000 knows.

mattapet (Contributor, Author)

Continuing the search, I found the tcRnDeclsi call in hscParsedDecls to be the point where evalPrint is last seen in the environment. After switching on -ddump-rn-trace I also noticed a difference in the exports there:

rnExports: Exports: [main] without the filtering vs rnExports: Exports: [evalPrint] when we filter the main symbol out of the initial env. I'll check the traces further for additional leads; I can share them if that would help.
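
As a side note for reproducing this, the renamer trace is an ordinary GHC dump flag, so it can also be switched on programmatically wherever a DynFlags value is in reach (a sketch via the GHC API, not necessarily how it was enabled here):

-- Sketch: enable -ddump-rn-trace on a DynFlags value.
import GHC.Driver.Session (DynFlags, DumpFlag (..), dopt_set)

withRnTrace :: DynFlags -> DynFlags
withRnTrace dflags = dopt_set dflags Opt_D_dump_rn_trace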

fendor (Collaborator) commented Mar 13, 2024

Sounds like good detective work! You can share whatever you find!

I will likely only get to it some time in the next couple of days.

mattapet (Contributor, Author) commented Mar 13, 2024

Attaching the diffs. They were produced by running cabal test test:hls-eval-plugin-tests > <<dump-file.txt>>. I'll also update the PR shortly with a version that adds the dump flag itself. (Edit: added.)

rn-dump-main.txt
rn-dump-no-main.txt

mattapet (Contributor, Author)

Scratch the previous comment; it turned out to be a dead end. After comparing the renamer traces with GHC 9.4.8, the version where the test works, I found the same Exports. So I decided to add a couple more dump flags and compare the traces across 9.4.8 and 9.8.2 without changes, and 9.8.2 with the main filtering.

It appears that for 9.8.2 with the main function present, the evalPrint function gets removed between the -ddump-ds-preopt and -ddump-ds steps. Attaching the trace files here:

output-dump-9.4.8.txt
output-dump-9.8.2.txt
output-dump-no-main-9.8.2.txt
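
For what it is worth, -ddump-ds-preopt shows the desugared Core before GHC's very simple optimiser runs and -ddump-ds shows it afterwards, so whatever drops evalPrint seems to happen in that pass. Both dumps can be enabled in the same style as the renamer trace above (illustrative sketch, not necessarily how the attached dumps were produced):

-- Sketch: enable the two desugarer dumps compared above.
import Data.List (foldl')
import GHC.Driver.Session (DynFlags, DumpFlag (..), dopt_set)

withDesugarDumps :: DynFlags -> DynFlags
withDesugarDumps dflags =
  foldl' dopt_set dflags [Opt_D_dump_ds_preopt, Opt_D_dump_ds]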

michaelpj (Collaborator)

Can we merge the test as an expected failure for now? It's a good contribution regardless!
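
For anyone following along, a minimal sketch of how that could look with the tasty-expected-failure package (the helper name and reason string here are illustrative; a knownBroken-style wrapper from hls-test-utils could be used instead):

import Test.Tasty (TestTree)
import Test.Tasty.ExpectedFailure (expectFailBecause)

-- Illustrative: wrap the new T4139 test so CI stays green until the bug is fixed.
markKnownBroken :: TestTree -> TestTree
markKnownBroken =
  expectFailBecause "evalPrint disappears when the module also defines main (#4139)"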

soulomoon (Collaborator) commented Mar 21, 2024

It might be worth adding an eval benchmark test to see if switching to the interpreter backend would cause any leaks or undesired behaviour.

You can add an experiment in ghcide-bench/src/Experiments.hs and register it in bench/config.yaml.

mattapet (Contributor, Author)

@soulomoon I added a couple of experiments: one with a very simple, single-line evaluation, and one a bit more complex. After inspecting the output logs locally, the benchmarks do seem to exercise the desired behavior. Is this what you had in mind?

soulomoon (Collaborator)

> @soulomoon I added a couple of experiments: one with a very simple, single-line evaluation, and one a bit more complex. After inspecting the output logs locally, the benchmarks do seem to exercise the desired behavior. Is this what you had in mind?

Thank you for adding the new benchmarks, they are looking good.
If we are not seeing any increased memory usage etc., it should be good to go.

soulomoon (Collaborator) left a review comment

LGTM

michaelpj merged commit c3b0b37 into haskell:master on Mar 27, 2024; 39 checks passed.