Conversation

@vasily-kirichenko
Contributor

Heap allocation before:
[screenshot: position-is-record]
After:
[screenshot: position-is-struct-2]
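
For readers outside the compiler codebase, a minimal sketch of the shape of this change (the field names are hypothetical, not the actual Internal.Utilities.Text.Lexing.Position definition): a record is a heap-allocated reference type, while the struct version is copied by value and allocates nothing per position.

```fsharp
// Hypothetical fields for illustration; not the compiler's actual Position type.

// Before: a record, i.e. a reference type -- every position is a heap object.
type PositionRecord =
    { Line: int
      Column: int
      AbsoluteOffset: int }

// After: a struct -- the same data lives inline and is copied by value,
// so creating or passing a position allocates nothing on the heap.
[<Struct>]
type PositionStruct =
    val Line: int
    val Column: int
    val AbsoluteOffset: int
    new (line, column, offset) =
        { Line = line; Column = column; AbsoluteOffset = offset }
```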

@dsyme
Contributor

dsyme commented Sep 7, 2015

Hi @vasily-kirichenko

This is potentially a good change. A couple of things:

  • Although allocations will reduce, it's possible that the lexer and parser become slower (because more time is spent copying structs). Could you write a perf test to repeatedly parse some input? I don't think it matters too much which input it is.
  • Ideally this change would go via Microsoft/visualfsharp --> fsharp/fsharp --> fsharp/FSharp.Compiler.Service to keep the codebases maximally aligned. Would you be able to submit the change to Microsoft/visualfsharp and assess its impact on the perf of a compiler bootstrap?

Thanks
Don

@vasily-kirichenko
Contributor Author

Hi, Don.

How exactly can I run parsing on a file? If I use FSharpChecker.ParseFileInProject, it caches results and all subsequent calls return immediately.

@dsyme
Contributor

dsyme commented Sep 7, 2015

I guess you could generate a sequence of textually different files?
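
A sketch of a parse-only benchmark along these lines, assuming the FSharpChecker.ParseFileInProject API mentioned above (exact signatures vary between FSharp.Compiler.Service versions); each input is made textually unique so the checker's parse cache is never hit:

```fsharp
// Sketch only: repeatedly parse textually different sources so that
// FSharpChecker's internal caching does not short-circuit the work.
open System.Diagnostics
open Microsoft.FSharp.Compiler.SourceCodeServices

let checker = FSharpChecker.Create()

// Vary a binding name and a literal so every source string is unique.
let makeSource i =
    sprintf "let value%d = [ for x in 1 .. 100 -> x + %d ]\n" i i

// 'options' would come from the project being tested, e.g. via
// GetProjectOptionsFromScript; left as a parameter to keep the sketch small.
let benchParse (options: FSharpProjectOptions) iterations =
    let sw = Stopwatch.StartNew()
    for i in 1 .. iterations do
        checker.ParseFileInProject(sprintf "bench%d.fsx" i, makeSource i, options)
        |> Async.RunSynchronously
        |> ignore
    sw.Elapsed
```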

@vasily-kirichenko
Contributor Author

I've tested it on all *.fs files in the src/fsharp dir: https://github.com/vasily-kirichenko/FSharp.Compiler.Service/blob/parser-perf/tests/ConsoleApplication1/Program.fs

Results: before - 6.7 s, after - 7.03 s, which is roughly a 5% regression :(

Memory traffic (all objects created during parsing), before:

[screenshot: before_memory]

after:

[screenshot: after_memory]

Now Tuple<Position, Position> takes almost 2x more memory. Maybe it's a good idea to replace it with a custom struct?
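
For illustration, a struct pair of the kind being suggested (generic here to stay self-contained; the real change would use the lexer's Position type directly):

```fsharp
// Sketch only: a value-type pair to stand in for Tuple<'Pos, 'Pos>.
// Unlike System.Tuple, this allocates nothing on the heap per pair.
[<Struct>]
type PosPair<'Pos> =
    val Item1 : 'Pos
    val Item2 : 'Pos
    new (p1, p2) = { Item1 = p1; Item2 = p2 }
```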

make suffixExists and tokenBalancesHeadContext top-level functions to eliminate heap allocations
@vasily-kirichenko
Contributor Author

I replaced a couple of the most heavily allocated Tuples with custom structs and made a couple of functions top-level. After that, parsing time dropped to 6.26 s (6% better than the "before" result). The number of allocated objects dropped by 45% (!), and allocated memory by 9%. I tried making TokenTup a struct, but it increased parsing time by 10%.

[screenshot: after_2_memory]
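
The "top-level functions" part of the commit referenced above works roughly as follows (the function names are reused from the commit message, but the bodies are purely hypothetical): a helper defined inside another function and closing over its locals can force a closure object to be allocated on each call, whereas a top-level function that takes those values as explicit parameters does not.

```fsharp
// Hypothetical bodies; only the shape of the refactoring matters here.

// Before: a nested helper capturing 'stack' and 'token' from the enclosing
// scope, which can cost a closure allocation every time the outer function runs.
let popIfBalanced (stack: int list) (token: int) =
    let tokenBalancesHeadContext () =
        match stack with
        | head :: _ -> head = token
        | [] -> false
    if tokenBalancesHeadContext () then List.tail stack else stack

// After: the helper is lifted to the top level and its former captures
// become explicit parameters, so no per-call closure is needed.
let tokenBalancesHeadContext (stack: int list) (token: int) =
    match stack with
    | head :: _ -> head = token
    | [] -> false

let popIfBalanced' (stack: int list) (token: int) =
    if tokenBalancesHeadContext stack token then List.tail stack else stack
```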

@dungpa
Contributor

dungpa commented Sep 26, 2015

@dsyme Have you had a second look at this PR? It seems like a good enhancement to make.

@dsyme
Contributor

dsyme commented Sep 27, 2015

I replaced a couple of the most heavily allocated Tuples with custom structs and made a couple of functions top-level. After that, parsing time dropped to 6.26 s (6% better than the "before" result). The number of allocated objects dropped by 45% (!), and allocated memory by 9%.

@vasily-kirichenko Is this what's currently in the PR? Thx.

@dsyme
Contributor

dsyme commented Sep 27, 2015

(I'll reopen since there does seem to be value here, once we confirm the tradeoffs)

@dsyme dsyme reopened this Sep 27, 2015
@vasily-kirichenko
Contributor Author

@dsyme yes.

dsyme added a commit that referenced this pull request Sep 30, 2015
make Internal.Utilities.Text.Lexing.Position a struct
@dsyme dsyme merged commit fdd1f08 into fsharp:master Sep 30, 2015
@dsyme
Contributor

dsyme commented Sep 30, 2015

@vasily-kirichenko Would it be possible for you to submit this to Microsoft/visualfsharp as well please? I don't mind accepting it here first, but ideally we should get it submitted there too.

@vasily-kirichenko
Contributor Author

@dsyme done dotnet/fsharp#664
