
Defining and parametrizing benchmarks by applying decorators (@benchmark, @parametrize, @product) - motivation, behavior, and one usage example for each. #33

Closed
Maciej818 opened this issue Jan 30, 2024 · 3 comments
Labels
duplicate This issue or pull request already exists

Comments

@Maciej818

No description provided.

@msavinash

I don't understand how parametrize is supposed to work. From my understanding, it's something like this:

parametrize_parameters = [
    {"y_test": [1, 2, 3], "y_pred": [4, 5, 6]},
    {"y_test": [1, 2, 3], "y_pred": [1, 2, 3]},
]

@nnbench.parametrize(parameters=parametrize_parameters)
def accuracy(model: base.BaseEstimator, y_test: np.ndarray, y_pred: np.ndarray) -> float:
    accuracy = metrics.accuracy_score(y_test, y_pred)
    return accuracy

Is this accurate?

@nicholasjng
Collaborator

Is this accurate?

Yes.

(The arguments in the decorator dictionaries should match the parametrized benchmark function's typing, so the arguments should be np.array([1, 2, 3]), np.array([4, 5, 6]), ..., but you got it right conceptually.)
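To make the correction above concrete, here is a minimal, self-contained sketch of the fixed parameter dictionaries, with the values as `np.ndarray` to match the benchmark function's type hints. The plain-numpy `accuracy` below is a stand-in for `sklearn.metrics.accuracy_score`, and the loop only illustrates what the `@nnbench.parametrize` decorator conceptually does with each dictionary; the decorator itself is omitted here.

```python
import numpy as np

# Corrected parametrize dictionaries: values are np.ndarray, matching the
# benchmark function's annotations (np.array(...) instead of plain lists).
parametrize_parameters = [
    {"y_test": np.array([1, 2, 3]), "y_pred": np.array([4, 5, 6])},
    {"y_test": np.array([1, 2, 3]), "y_pred": np.array([1, 2, 3])},
]

def accuracy(y_test: np.ndarray, y_pred: np.ndarray) -> float:
    # Accuracy as the fraction of matching labels (stand-in for
    # sklearn.metrics.accuracy_score on this toy data).
    return float(np.mean(y_test == y_pred))

# Conceptually, parametrize runs the benchmark once per parameter dict:
results = [accuracy(**params) for params in parametrize_parameters]
print(results)  # -> [0.0, 1.0]
```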

@Maciej818 Maciej818 added the duplicate This issue or pull request already exists label Feb 1, 2024
@Maciej818
Author

This issue duplicates #34, so I am closing it. The relevant comments above were copied over to #34.
