feat: use pytorch built-in interpolation method for LinearInterpolation #38

Merged: 9 commits merged into sp-nitech:master on May 25, 2023

Conversation

@yoyololicon (Contributor) commented on May 23, 2023

I was using the MLSA filter module and found some places that can be optimised.
I replaced the convolution-based linear interpolation with the PyTorch built-in method and ran some benchmarks.
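
For context, the replacement amounts to something like the sketch below. This is a minimal illustration, not the actual diffsptk code: the (batch, frames, coefficients) layout is taken from the benchmark script further down, while the align_corners setting and edge handling are assumptions.

import torch
import torch.nn.functional as F

# Hypothetical shapes (small values for illustration): x holds frame-rate
# coefficients, (batch, n_frames, dim); S is the upsampling factor.
B, N, D, S = 2, 10, 3, 4
x = torch.randn(B, N, D)

# F.interpolate expects (batch, channels, length), so treat the coefficient
# dimension as channels and interpolate along the frame axis.
y = F.interpolate(
    x.transpose(-2, -1),  # (B, D, N)
    scale_factor=S,
    mode="linear",
    align_corners=False,
).transpose(-2, -1)  # (B, N * S, D)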

Before


[--------------- linear_interpolation ---------------]
              |  scale_factor=128  |  scale_factor=256
1 threads: -------------------------------------------
      N=512   |        419.0       |        853.9     
      N=1024  |        849.4       |       1716.3     
      N=2048  |       1884.4       |       4325.4     
2 threads: -------------------------------------------
      N=512   |        281.6       |        562.4     
      N=1024  |        556.1       |       1153.5     
      N=2048  |       1188.4       |       2449.0     
4 threads: -------------------------------------------
      N=512   |        221.7       |        422.7     
      N=1024  |        419.9       |        864.5     
      N=2048  |        882.8       |       1788.6     

Times are in milliseconds (ms).

After

[--------------- linear_interpolation ---------------]
              |  scale_factor=128  |  scale_factor=256
1 threads: -------------------------------------------
      N=512   |       190.1        |        378.8     
      N=1024  |       380.2        |        758.5     
      N=2048  |       764.1        |       1536.3     
2 threads: -------------------------------------------
      N=512   |       113.6        |        221.4     
      N=1024  |       226.9        |        460.4     
      N=2048  |       451.6        |        902.7     
4 threads: -------------------------------------------
      N=512   |        75.6        |        147.9     
      N=1024  |       147.1        |        294.3     
      N=2048  |       295.2        |        603.4     

Times are in milliseconds (ms).

The benchmark script:

import torch
from torch.profiler import profile, record_function, ProfilerActivity
import torch.utils.benchmark as benchmark
from diffsptk import LinearInterpolation
from itertools import product

def benchmark_linear_interpolation():
    # Create an instance of the LinearInterpolation class
    lin_interp = LinearInterpolation(100)

    # Generate some random input data
    x = torch.randn(16, 600).t()

    # Run the forward pass of the filter and measure the execution time
    with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
        with record_function("LinearInterpolation"):
            y = lin_interp(x)

    print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))

    lengths = [512, 1024, 2048]
    scale_factors = [128, 256]
    results = []
    for N, S in product(lengths, scale_factors):
        label = "linear_interpolation"
        sub_label = f"N={N}"
        x = torch.randn(32, N, 49)
        interp = LinearInterpolation(S)

        for num_threads in [1, 2, 4]:
            results.append(
                benchmark.Timer(
                    stmt="y = interp(x)",
                    globals={"x": x, "interp": interp},
                    label=label,
                    num_threads=num_threads,
                    sub_label=sub_label,
                    description=f"scale_factor={S}",
                ).blocked_autorange(min_run_time=1)
            )

    compare = benchmark.Compare(results)
    compare.print()


if __name__ == "__main__":
    benchmark_linear_interpolation()

I only tested the script on a MacBook with an M1 chip, but it should give similar results on other machines and operating systems.

TODO

I haven't run the test scripts yet due to some problems building SPTK.
If anyone can help run the tests, I would appreciate it.
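
In the meantime, here is a quick sanity check that needs only PyTorch. It compares a transposed-convolution implementation of linear interpolation (a triangular kernel, standing in as a reference for the replaced approach) against the built-in method; this is a hypothetical check, not one of the repository's tests, and the edge trimming is an assumption.

import torch
import torch.nn.functional as F

def test_builtin_matches_conv_based_reference():
    # Stand-in for the old conv-based path: a transposed convolution with a
    # triangular kernel performs piecewise-linear upsampling between samples.
    S = 4                                  # upsampling factor (hypothetical)
    x = torch.randn(3, 1, 16)              # (batch, channel, time)

    ramp = torch.arange(1, S + 1, dtype=x.dtype) / S
    kernel = torch.cat([ramp, ramp.flip(0)[1:]]).view(1, 1, -1)  # 2S - 1 taps
    y_conv = F.conv_transpose1d(x, kernel, stride=S)
    y_conv = y_conv[..., S - 1 : -(S - 1)]  # keep the (N - 1) * S + 1 interior

    # Built-in method: the same piecewise-linear result in one call.
    y_builtin = F.interpolate(
        x, size=(x.size(-1) - 1) * S + 1, mode="linear", align_corners=True
    )

    assert torch.allclose(y_conv, y_builtin, atol=1e-6)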

@takenori-y (Contributor) left a comment

Thank you for improving the code!

I added some minor comments. Could you apply the linter and formatter?

diffsptk/core/linear_intpl.py: 5 review comments (outdated, resolved)
yoyololicon and others added 5 commits on May 24, 2023
Co-authored-by: Takenori Yoshimura <takenori.yoshimura24@gmail.com>
@yoyololicon (Contributor, Author)

Sure, will do this today.

@takenori-y (Contributor)

Thank you for your response. I will merge.

@takenori-y merged commit 3b1cb15 into sp-nitech:master on May 25, 2023
@yoyololicon (Contributor, Author)

Oh, did it pass the test? I can't see any workflow running... 😅

@takenori-y (Contributor)

No worries. I intentionally skipped the CI because I have already performed the test on my machine.

@takenori-y added the "enhancement" (New feature or request) label on May 25, 2023
@yoyololicon (Contributor, Author)

Nice! All good then.

@yoyololicon deleted the buildin-linear-interp branch on May 25, 2023 at 08:23