
Get run CLI args from Results #579

Closed
umarcor opened this issue Oct 28, 2019 · 7 comments

Comments

@umarcor (Member) commented Oct 28, 2019

It would be useful if the CLI args that are used to run each test/config were available programmatically (either in post_run or through --export-json). Something similar to:

vu.set_sim_option("ghdl.elab_e", True)
vu._args.elaborate = True

def post_func(results):
    for key, val in results.args.items():
        print(key)  # entity.arch.config
        print(val)  # list of args (argv to GHDL)

vu.main(post_run=post_func)

In https://github.com/VUnit/vunit/pull/568/files#diff-d11203cc5c5e59bf50c8fe619e03e39d, I'm using a modified ghdl_interface to write the args to a txt file, as a workaround.


This would also make it possible to indirectly retrieve the version of the standard, as commented in #577 (comment).
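To make the intended shape concrete, here is a minimal mock of the proposed accessor. Everything in it is illustrative: no `args` attribute exists on VUnit's Results object at this point, and the test name and GHDL argv are invented.

```python
# Purely illustrative mock of the proposed API; "args" does not exist on
# VUnit's Results object, and the test name and argv below are invented.
class MockResults:
    args = {
        "lib.tb_example.all": ["ghdl", "--elab-run", "tb_example"],
    }

def post_func(results):
    for key, val in results.args.items():
        print(key)  # entity.arch.config
        print(val)  # list of args (argv to GHDL)

post_func(MockResults())
```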

@kraigher (Collaborator) commented:

The args should be readable from the VUnit object already. No need to wait for post_run.

@umarcor (Member, Author) commented Oct 30, 2019

I tried to get it from VUnit, but I didn't succeed.

I assume I need to execute _get_command from ghdl_interface (https://github.com/VUnit/vunit/blob/master/vunit/ghdl_interface.py#L226), for each testbench. How do I get from get_test_benches (which returns objects of type vunit.ui.testbench.TestBench) to _get_command without executing main()? The regular execution flow seems to be:

vunit/vunit/ui/vunit.py

Lines 751 to 764 in 895e8c4

def _main_run(self, post_run):
    """
    Main with running tests
    """
    simulator_if = self._create_simulator_if()
    test_list = self._create_tests(simulator_if)
    self._compile(simulator_if)
    print()

    start_time = ostools.get_time()
    report = TestReport(printer=self._printer)
    try:
        self._run_test(test_list, report)

vunit/vunit/ui/vunit.py

Lines 921 to 942 in 895e8c4

def _run_test(self, test_cases, report):
    """
    Run the test suites and return the report
    """
    if self._args.verbose:
        verbosity = TestRunner.VERBOSITY_VERBOSE
    elif self._args.quiet:
        verbosity = TestRunner.VERBOSITY_QUIET
    else:
        verbosity = TestRunner.VERBOSITY_NORMAL

    runner = TestRunner(
        report,
        join(self._output_path, "test_output"),
        verbosity=verbosity,
        num_threads=self._args.num_threads,
        fail_fast=self._args.fail_fast,
        dont_catch_exceptions=self._args.dont_catch_exceptions,
        no_color=self._args.no_color,
    )
    runner.run(test_cases)

vunit/vunit/test_runner.py

Lines 78 to 115 in 895e8c4

def run(self, test_suites):
    """
    Run a list of test suites
    """
    if not exists(self._output_path):
        os.makedirs(self._output_path)

    self._create_test_mapping_file(test_suites)

    num_tests = 0
    for test_suite in test_suites:
        for test_name in test_suite.test_names:
            num_tests += 1
            if self._is_verbose:
                print("Running test: " + test_name)

    if self._is_verbose:
        print("Running %i tests" % num_tests)
        print()

    self._report.set_expected_num_tests(num_tests)

    scheduler = TestScheduler(test_suites)

    threads = []

    # Disable continuous output in parallel mode
    write_stdout = self._is_verbose and self._num_threads == 1

    try:
        sys.stdout = ThreadLocalOutput(self._local, self._stdout)
        sys.stderr = ThreadLocalOutput(self._local, self._stdout)

        # Start P-1 worker threads
        for _ in range(self._num_threads - 1):
            new_thread = threading.Thread(
                target=self._run_thread,

vunit/vunit/test_runner.py

Lines 139 to 160 in 895e8c4

def _run_thread(self, write_stdout, scheduler, num_tests, is_main):
    """
    Run worker thread
    """
    self._local.output = self._stdout

    while True:
        test_suite = None
        try:
            test_suite = scheduler.next()

            output_path = create_output_path(self._output_path, test_suite.name)
            output_file_name = join(output_path, "output.txt")

            with self._stdout_lock():
                for test_name in test_suite.test_names:
                    print("Starting %s" % test_name)
                print("Output file: %s" % relpath(output_file_name))

            self._run_test_suite(
                test_suite, write_stdout, num_tests, output_path, output_file_name
            )

vunit/vunit/test_runner.py

Lines 187 to 224 in 895e8c4

def _run_test_suite(
    self, test_suite, write_stdout, num_tests, output_path, output_file_name
):
    """
    Run the actual test suite
    """
    color_output_file_name = join(output_path, "output_with_color.txt")

    output_file = None
    color_output_file = None

    start_time = ostools.get_time()
    results = self._fail_suite(test_suite)

    try:
        ostools.renew_path(output_path)
        output_file = wrap(open(output_file_name, "a+"), use_color=False)
        output_file.seek(0)
        output_file.truncate()

        if write_stdout:
            self._local.output = Tee([self._stdout_ansi, output_file])
        else:
            color_output_file = open(color_output_file_name, "w")
            self._local.output = Tee([color_output_file, output_file])

        def read_output():
            """
            Called to read the contents of the output file on demand
            """
            output_file.flush()
            prev = output_file.tell()
            output_file.seek(0)
            contents = output_file.read()
            output_file.seek(prev)
            return contents

        results = test_suite.run(output_path=output_path, read_output=read_output)

vunit/vunit/test_suites.py

Lines 170 to 188 in 895e8c4

def run(self, output_path, read_output):
    """
    Run selected test cases within the test suite

    Returns a dictionary of test results
    """
    results = {}
    for name in self._test_cases:
        results[name] = FAILED

    if not self._config.call_pre_config(
        output_path, self._simulator_if.output_path
    ):
        return results

    # Ensure result file exists
    ostools.write_file(get_result_file_name(output_path), "")

    sim_ok = self._simulate(output_path)

vunit/vunit/test_suites.py

Lines 227 to 260 in 895e8c4

def _simulate(self, output_path):
    """
    Add runner_cfg generic values and run simulation
    """
    config = self._config.copy()

    if (
        "output_path" in config.generic_names
        and "output_path" not in config.generics
    ):
        config.generics["output_path"] = "%s/" % output_path.replace("\\", "/")

    runner_cfg = {
        "enabled_test_cases": ",".join(
            encode_test_case(test_case)
            for test_case in self._test_cases
            if test_case is not None
        ),
        "use_color": self._simulator_if.use_color,
        "output path": output_path.replace("\\", "/") + "/",
        "active python runner": True,
        "tb path": config.tb_path.replace("\\", "/") + "/",
    }

    # @TODO Warn if runner cfg already set?
    config.generics["runner_cfg"] = encode_dict(runner_cfg)

    return self._simulator_if.simulate(
        output_path=output_path,
        test_suite_name=self._test_suite_name,
        config=config,
        elaborate_only=self._elaborate_only,
    )

vunit/vunit/ghdl_interface.py

def simulate(  # pylint: disable=too-many-locals
    self, output_path, test_suite_name, config, elaborate_only
):
    """
    Simulate with entity as top level using generics
    """
    script_path = join(output_path, self.name)

    if not exists(script_path):
        os.makedirs(script_path)

    ghdl_e = elaborate_only and config.sim_options.get("ghdl.elab_e", False)

    cmd = self._get_command(config, script_path, ghdl_e)

def _get_command(self, config, output_path, ghdl_e):

@kraigher (Collaborator) commented:

Oh, are you talking about the argv to GHDL? I thought you were talking about the run.py argv. I am not sure we want to expose the GHDL command on the public API, and I am not sure we can even promise that it will always be one command for all simulators.

@umarcor (Member, Author) commented Oct 30, 2019

Oh are you talking about the argv to ghdl?

Yes. The point is that ghdl_e allows building a binary; as a result, the runtime arguments that VUnit creates are discarded/lost. In order to later execute the binary, either from a Python script or externally, those args are required. It is possible, and sometimes desirable, to set those arguments manually, but it'd be useful to get the ones that VUnit creates. When VUnit is used to generate tests with multiple combinations of generics and/or configurations, it is pointless to repeat all that logic.
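To illustrate the use case: once the binary path and the run-time argv are known, re-running an elaborated simulation from Python is a one-liner. This is a hedged sketch; the `run_elaborated` helper is hypothetical, and the binary path and arguments are precisely the pieces of information this issue asks VUnit to expose.

```python
import subprocess

def run_elaborated(bin_path, sim_args):
    """Re-run a simulation binary previously built with 'ghdl.elab_e',
    passing the run-time arguments that VUnit would otherwise supply to
    GHDL. Both inputs must currently be provided by the caller, since
    VUnit does not expose them."""
    return subprocess.run([bin_path] + list(sim_args), check=False).returncode
```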

I am not sure we want to expose the GHDL command on the public API. I am not sure we can even promise that it will always be one command even for all simulators.

I think it'd be good to retrieve the sequence of commands that VUnit uses for simulation and the arguments for compilation in a machine-readable format (i.e. not parsing the logs). I'm ok with not adding them to the public API, but to provide guidelines about how to achieve it. Would you be open to merging something like

else:
    try:
        os.makedirs(output_path, mode=0o777)
    except OSError:
        pass
    with open(join(output_path, "args.txt"), "w") as fname:
        for item in [bin_path] + sim:
            fname.write("%s\n" % item)

?
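For completeness, a file written by the snippet above could be read back as sketched below. The one-item-per-line layout, with the binary path on the first line, follows the write loop in the proposal and is not a stable VUnit format.

```python
from pathlib import Path

def read_args(args_file):
    """Split the proposed args.txt back into the binary path (first
    line) and the remaining run-time arguments (one per line),
    mirroring the write loop in the snippet above."""
    lines = Path(args_file).read_text().splitlines()
    return lines[0], lines[1:]
```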

@kraigher (Collaborator) commented:

If we add a public abstraction of the simulator, it must not constrain the implementation of any current or future simulator. Do we want to promise that each simulator will always need one command to run a test?

It seems like your use case is quite niche and GHDL-specific, and would be better accommodated by something similar to the args.txt you propose. Maybe make it have a standard format, such as JSON, so that string escaping and tokenization do not become a problem.

@umarcor (Member, Author) commented Oct 30, 2019

If we add a public abstraction of the simulator, it must not constrain the implementation of any current or future simulator. Do we want to promise that each simulator will always need one command to run a test?

The main motivation for proposing to expose it through a mechanism different from args.txt was that I thought you would not like the latter. In that case, we would not promise that each simulator needs one command, but that the exposed object would contain however many commands are required. I.e., the artifact would be a list of lists of strings, instead of a list of strings. Anyway, since you are ok with writing a file to the same path as the simulation binary, there is no need to worry about the hypothetical public abstraction.

It seems like your use case is quite niche and GHDL-specific, and would be better accommodated by something similar to the args.txt you propose. Maybe make it have a standard format, such as JSON, so that string escaping and tokenization do not become a problem.

I'll convert it to JSON and propose a PR.
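A JSON variant of the earlier args.txt snippet might look like the sketch below. The `"bin"`/`"sim"` key names and the `args.json` file name are illustrative assumptions, not necessarily the format that was eventually merged.

```python
import json
from pathlib import Path

def write_args_json(output_path, bin_path, sim_args):
    """Write the binary path and run-time args as JSON instead of a
    line-oriented txt file, sidestepping string escaping and
    tokenization issues. Key names are illustrative only."""
    data = {"bin": str(bin_path), "sim": [str(item) for item in sim_args]}
    Path(output_path, "args.json").write_text(json.dumps(data, indent=2))
```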

@umarcor (Member, Author) commented Feb 20, 2020

Merged as part of #606.

umarcor closed this as completed Feb 20, 2020