
[WIP] [PROTOTYPE] Dokka K2 analysis #2995

Closed

wants to merge 4 commits into from

Commits on Apr 24, 2023

  1. First draft

    vmishenev committed Apr 24, 2023 · bb6e04b

Commits on May 12, 2023

  1. Add translator of symbols

    vmishenev committed May 12, 2023 · a2865b6
  2. Update size setting

    vmishenev committed May 12, 2023 · 3c239d4
  3. Eliminate unneeded LimitedDispatcher instances on `Dispatchers.Default` and `Dispatchers.IO` (#3562)
    
    * Handle the `Dispatchers.IO.limitedParallelism(Int.MAX_VALUE)` case
    
    `LimitedDispatcher.limitedParallelism` returns `this` if the requested parallelism is greater than or equal
    to the parallelism of that `LimitedDispatcher`. `UnlimitedIoScheduler` has its parallelism effectively set
    to `Int.MAX_VALUE`, so the `parallelism >= this.parallelism` check folds into `parallelism == Int.MAX_VALUE`.
    
    Before the change, `LimitedDispatcher(Int.MAX_VALUE)` was returned. While it works as expected, every submitted task
    goes through its queue and an `Int.MAX_VALUE` worker limit. The change eliminates that `LimitedDispatcher`
    instance and its queue in this extreme case.
    
    * Handle `Dispatchers.Default.limitedParallelism` when the requested parallelism is >= the core pool size (#3442)
    
    `LimitedDispatcher.limitedParallelism` returns `this` if the requested parallelism is greater than or equal
    to the parallelism of that `LimitedDispatcher`. `DefaultScheduler` has its parallelism effectively set
    to `CORE_POOL_SIZE`.
    
    Before the change, `LimitedDispatcher(parallelism)` was returned. While it works as expected, every submitted task
    goes through its queue and `parallelism` workers. The change eliminates the `LimitedDispatcher`
    instance and its queue when the requested parallelism is greater than or equal to `CORE_POOL_SIZE` (see the usage sketch after this commit).
    
    Fixes #3442
    Added benchmarks on cacheable child serializers
    
    Relates #1918
    vmishenev committed May 12, 2023 · 999b5a1
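The commit message above describes the public kotlinx.coroutines `limitedParallelism` API. Below is a minimal usage sketch, not part of this PR, showing the two calls the message refers to; variable names are illustrative, and which dispatcher instance is actually returned depends on the kotlinx.coroutines version, so the comments only restate the behaviour claimed in the commit message.

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.ExperimentalCoroutinesApi

@OptIn(ExperimentalCoroutinesApi::class)
fun main() {
    // Case 1: Dispatchers.IO with an effectively unlimited parallelism request.
    // Per the commit message, this used to allocate LimitedDispatcher(Int.MAX_VALUE)
    // with its own queue; after the change no extra wrapper is needed.
    val unlimitedIo = Dispatchers.IO.limitedParallelism(Int.MAX_VALUE)

    // Case 2: Dispatchers.Default with a request >= the core pool size
    // (CORE_POOL_SIZE typically equals the number of CPU cores). Per the commit
    // message, this also avoids creating a LimitedDispatcher after the change.
    val wideDefault = Dispatchers.Default.limitedParallelism(
        Runtime.getRuntime().availableProcessors()
    )

    // Requests below the limit still create a view with restricted parallelism,
    // e.g. a single-threaded view of Dispatchers.Default.
    val single = Dispatchers.Default.limitedParallelism(1)

    println("$unlimitedIo, $wideDefault, $single")
}
```

In both optimized cases the caller's code is unchanged; the difference is only that no extra `LimitedDispatcher` queue sits between submitted tasks and the underlying scheduler.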