Conversation

@portante (Member)

Both pbench-fio and pbench-uperf:

  • No longer allow the benchmark binary to be overridden by an environment variable -- this was an outdated way for the unit tests to mock the respective benchmark's behavior

  • No longer resolve and check that the benchmark binary is executable

There was an ordering problem between the initial value of the old benchmark_bin variable and the places it was used in the rest of the script (in both pbench-fio and pbench-uperf). The existence check was not always performed locally (e.g., when none of the specified clients or servers were local), but constructing the commands for remote execution required benchmark_bin to be set. By checking for the existence of the benchmark binary only when performing the version check, and letting the shell resolve the location of the binary at run time, we avoid the interdependency altogether.
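A minimal sketch of that pattern (assumed names and version strings; not the literal pbench code): the binary is never resolved up front, and its existence is only verified as part of the version check, with the shell looking it up on PATH at the moment of use.

```shell
#!/bin/bash
# Sketch: no up-front resolution of the benchmark binary.  Existence is
# verified only inside the version check; elsewhere the shell resolves
# the command via PATH at run time.  "fio" here is an assumed example.
benchmark="fio"

check_benchmark_version() {
    # Fail the version check if the command is not on the PATH.
    if ! command -v "${benchmark}" > /dev/null 2>&1; then
        printf "%s: required command not found on PATH\n" "${benchmark}" >&2
        return 1
    fi
    # Report the installed version; the caller can compare it against
    # the expected one.
    "${benchmark}" --version
}
```

Because nothing outside the version check depends on a resolved path, remote command strings can simply name the benchmark and rely on the remote shell's PATH.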

Mock commands for fio and uperf are provided on the PATH for the tests. For the CLI tests, those mocks are removed so that we can verify that help and usage text is emitted before the existence check for the particular command (demonstrated by the failing test-CL).
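For illustration, a mock benchmark command along these lines can be placed on the PATH (a sketch only; the actual pbench mocks may differ): it answers the version query with a canned string and records any other invocation for later verification.

```shell
#!/bin/bash
# Sketch of a mock "fio" provided on the PATH for tests.  The version
# string and the call-log file name are assumptions for illustration.
mkdir -p mocks
cat > mocks/fio <<'EOF'
#!/bin/bash
if [[ "$1" == "-V" || "$1" == "--version" ]]; then
    echo "fio-3.30"          # canned version string for the tests
    exit 0
fi
echo "fio $*" >> fio.calls   # record how the benchmark was invoked
EOF
chmod +x mocks/fio
export PATH="$PWD/mocks:$PATH"
```

Removing the `mocks` directory from the PATH then lets a test confirm that help and usage output is produced before any existence check can fail.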

We also add failing tests for uperf and fio behaviors:

  • pbench-fio
    • test-22 -- missing fio command
    • test-50 -- fio -V reports a bad version
  • pbench-uperf
    • test-02 -- missing uperf command
    • test-51 -- uperf -V reports a bad version

The existence check of the fio and uperf benchmark commands now occurs after any help requests or command usage errors. This fixes issue #2841 [1].

Finally, we correct the way pbench-uperf checked the exit status of the uperf benchmark command: the local declaration is now performed before the assignment, so that the return-code check is not overridden by the (always successful) "status" of the local declaration. This fixes issue #2842 [2].
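The pitfall is easy to reproduce in a few lines of bash: combining `local` with a command-substitution assignment makes `$?` reflect the `local` builtin rather than the substituted command.

```shell
#!/bin/bash
# Demonstration of the exit-status pitfall fixed for pbench-uperf.
broken() {
    local out=$(false)   # $? is now the status of `local`, i.e. 0
    echo "broken: status=$?"
}
fixed() {
    local out            # declare first ...
    out=$(false)         # ... then assign; $? is the status of `false`
    echo "fixed: status=$?"
}
broken   # prints: broken: status=0
fixed    # prints: fixed: status=1
```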

[1] #2841
[2] #2842


This is a respin of PR #2582 to correct the mistaken merge without squashing.

@portante portante added bug Agent Code Infrastructure fio pbench-fio benchmark related uperf pbench-uperf benchmark related labels May 24, 2022
@portante portante added this to the v0.72 milestone May 24, 2022
@portante portante self-assigned this May 24, 2022
@portante portante changed the title Stop resolving benchmark binary location Correct pbench-fio and pbench-uperf handling of benchmark existence and version checks (take 2) May 24, 2022
@webbnh webbnh left a comment

GTG

@ndokos ndokos self-requested a review May 24, 2022 16:33
@portante portante merged commit ba6a2b7 into distributed-system-analysis:main May 24, 2022
portante added a commit to portante/pbench that referenced this pull request May 25, 2022
This is a back-port of commit ba6a2b7 (PR distributed-system-analysis#2860) from `main`.

portante added a commit to portante/pbench that referenced this pull request May 25, 2022
This is a back-port of commit ba6a2b7 (PR distributed-system-analysis#2860) from `main`.

portante added a commit to portante/pbench that referenced this pull request May 25, 2022
This is a back-port of commit ba6a2b7 (PR distributed-system-analysis#2860) from `main`.

portante added a commit that referenced this pull request May 25, 2022
This is a back-port of commit ba6a2b7 (PR #2860) from `main`.

@portante portante linked an issue May 26, 2022 that may be closed by this pull request
Development

Successfully merging this pull request may close these issues.

  • pbench-uperf fails to detect a bad version of uperf
  • pbench-uperf -h fails with an error not being able to find the uperf command on PATH
