
Workflow improvement for ease of replicability #466

Merged · 35 commits · Jan 15, 2024

Conversation

@yesilzeytin (Contributor):

This comes from Issue #456. Three main updates are implemented:

  • Updated README.md to include the steps that were missing to reproduce the test runs successfully.
  • Updated .gitmodules to remove the SSH dependency of the current approach and to improve forkability so that contributions can be made more easily (the second change replaces the relative paths with absolute paths to the repository; if you deem that inappropriate, it can be revised so that only the SSH dependency removal is brought in).
  • Currently, one has to do everything manually or use a tool such as act; however, act doesn't generate local reports. So, I wrote a new bash script, build_and_run.sh, to easily handle all the necessary build steps.

@kbieganski (Contributor) left a comment:

Thanks for the PR! Never expected to see an external contribution here.

I think these are good changes, though I'd approach the build_and_run script a little differently (see comments).

echo "--------------------------------------------------" >> "$LOGFILE"
echo "COMMAND: $COMMAND $ARGS" >> "$LOGFILE"
echo "OUTPUT:" >> "$LOGFILE"
eval "$COMMAND $ARGS" >> "$LOGFILE" 2>&1
@kbieganski (Contributor):

Suggested change
eval "$COMMAND $ARGS" >> "$LOGFILE" 2>&1
$COMMAND "$ARGS" >> "$LOGFILE" 2>&1

Although right now you're always passing all args via $COMMAND (the commands are quoted together with their args), so $ARGS is always empty.

@yesilzeytin (Contributor, Author):

I will check this one as well.

@yesilzeytin (Contributor, Author):

I suspected this, and testing confirmed it: if I don't do it the way it is now, commands with multiple arguments fail due to argument parsing issues. So keeping it this way is probably better.
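The word-splitting issue described here can also be sidestepped with a bash array rather than a single string or eval. A minimal sketch, where `execute_and_log` is a simplified stand-in for the script's real logging helper, not its actual implementation:

```shell
#!/bin/bash
# Simplified stand-in for the script's logging helper: taking the command
# and its arguments as separate parameters ("$@") and expanding them with
# "${CMD[@]}" keeps multi-word arguments intact, without eval.
LOGFILE=$(mktemp)

execute_and_log() {
    local CMD=("$@")
    echo "--------------------------------------------------" >> "$LOGFILE"
    echo "COMMAND: ${CMD[*]}" >> "$LOGFILE"
    echo "OUTPUT:" >> "$LOGFILE"
    "${CMD[@]}" >> "$LOGFILE" 2>&1
}

# An argument containing a space survives as a single argument:
execute_and_log printf '%s\n' "two words"
```

With this shape, callers pass the command and each argument separately (`execute_and_log robot "$MAIN_DIR"/*.robot`), so no quoting gymnastics are needed at the call site.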

yesilzeytin and others added 11 commits January 9, 2024 09:16
Co-authored-by: Krzysztof Bieganski <krzysztof@biegan.ski>
@kbieganski (Contributor), Jan 9, 2024:

I think I forgot to submit this comment, and it's the most important one.

First, a nit: let's rename this script to build_and_run; the language it's in is not important to the usage.

More importantly, this script is not being tested, so it's guaranteed to stop working in a few months. The way I would approach this is:

  1. Define a function for each step:
    function build() {
        ...
    }
    
    function gen_tests() {
        ...
    }
    
    function run_tests() {
        ...
    }
    
    function gen_report() {
        ...
    }
  2. Parse flags like --build, --run-tests, etc.:
    # If no arguments are given, run all steps
    if [[ $# -eq 0 ]] ; then
        for BRANCH in $(ls $MAIN_DIR/verilator); do
            build $BRANCH
            gen_tests $BRANCH
        done
        run_tests
        gen_report
    fi
    
    # Parse arguments and run individual steps
    while [ -n "$1" ]; do
        case "$1" in
            --build) shift; build "$1";;
            --gen-tests) shift; gen_tests "$1";;
            --run-tests) run_tests;;
            --gen-report) gen_report;;
            --help) echo "Usage: $0 [--build <branch>] [--gen-tests <branch>] [--run-tests] [--gen-report]"; exit 0;;
            *) echo "Option $1 not recognized" >&2; exit 1;;
        esac
        shift
    done 
  3. Replace the original CI steps with script invocations like:
    scripts/build_and_run ${{ matrix.VERILATOR_BRANCH }}
    
    etc.
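A runnable reduction of the dispatch sketched above, with hypothetical echo stubs standing in for the real build/test steps, shows how the flags map to function calls:

```shell
#!/bin/bash
# Hypothetical stubs standing in for the real steps; each just echoes
# its name so the dispatch can be observed.
build()      { echo "build $1"; }
gen_tests()  { echo "gen_tests $1"; }
run_tests()  { echo "run_tests"; }
gen_report() { echo "gen_report"; }

# The argument-parsing loop from the comment above, wrapped in a
# function purely for illustration.
parse_args() {
    while [ -n "$1" ]; do
        case "$1" in
            --build) shift; build "$1";;
            --gen-tests) shift; gen_tests "$1";;
            --run-tests) run_tests;;
            --gen-report) gen_report;;
            *) echo "Option $1 not recognized" >&2; return 1;;
        esac
        shift
    done
}

OUT=$(parse_args --build main --run-tests)
echo "$OUT"
```

Each flag either consumes a following value (`--build <branch>`) or takes none, and unknown flags abort with an error, matching the `case` arms in the suggestion.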

@kbieganski (Contributor):

Looks great! The last thing to do is to use the script in CI instead of the explicit commands.
E.g.

scripts/build_and_run --build ${{ matrix.VERILATOR_BRANCH }}

@yesilzeytin (Contributor, Author) commented Jan 11, 2024:

Hey, glad the changes look okay. I am currently testing the workflow changes and will let you know once I complete them. I also added an "all" option to the Verilator build and test generation steps. The current workflow doesn't seem to require it, but I wanted to let you know in advance: you, who are most likely much more familiar with GitHub workflows than me (I haven't worked with them before at all :D), might find it useful later, and it's still nice to have for non-workflow users. I'll probably complete the changes tomorrow.

@yesilzeytin (Contributor, Author):
Hey again, I am currently trying to update the workflow YAML, but I have run into some problems. My progress so far:

  • I added a task that executes
git submodule update --init --recursive

in the workflow, so that it can run on its own even when the repository is not cloned recursively. That seems to be working OK.

  • Updated the "Build Verilator" task to execute from the script, and it seems to be working OK as well.
  • Updated the "Generate Tests" task to execute from the script, and this one seems to be working OK as well.
  • However, when I modified the "Run Tests" task to execute from the script, I faced some issues. I tried two different methods:
    • Method 1: Doing it as provided in the README. Calling the bash function below
    run_tests() {
        echo "Running tests..."
        execute_and_log "robot $MAIN_DIR/*.robot"
        echo "Test runs are done. Generated report: $REPORTFILE."
    }
    through executing
    ./build_and_run --run-tests
    
    • Method 2: Doing it as it is done within the GitHub workflow. Calling the bash function below
    run_tests() {
        echo "Running tests..."
    
        # Iterate over each .robot file in the specified directory
        for TEST_SUITE_FILE in $MAIN_DIR/*.robot; do
            # Extract the test suite name from the file name
            TEST_SUITE="${TEST_SUITE_FILE##*/}"
            TEST_SUITE="${TEST_SUITE%.*}"
    
            # Construct the output file name
            OUTPUT_FILE="$MAIN_DIR/$TEST_SUITE.xml"
    
            # Run the robot command with specified options
            # Note: capture the exit status explicitly; `$?` inside an
            # `if ! ...` branch no longer holds robot's own status
            robot --report NONE --log NONE --output "$OUTPUT_FILE" "$TEST_SUITE_FILE"
            STATUS=$?
            if [ "$STATUS" -ge 252 ]; then
                echo "Internal Robot Framework error occurred."
                exit 1
            fi
        done
    
        echo "Test runs are done. Generated report: $REPORTFILE."
    }
    through executing
    ./build_and_run --run-tests
    

Both of these methods caused an error during the "Upload test output" task. I think it is caused by the error:

::error::Unable to get ACTIONS_RUNTIME_TOKEN env variable

But I couldn't find anything regarding this environment variable anywhere within the project. I found this issue, which looks like a bug in act itself, and there have been previous occurrences of this error; however, since the current workflow works correctly unless I modify it, I suspect it is more about what I am trying to do than about the tool itself.

When I run the first three steps of the build_and_run script, there are no issues at all, but I couldn't figure out why the run-tests step is problematic. When I only modify the previous tasks and leave the "Run Tests" task as it was, everything works correctly. Do you have any ideas about what may be causing this? I am updating the repository to add both variations of the build_and_run script, if you'd like to have a closer look.

@yesilzeytin (Contributor, Author) commented Jan 12, 2024:

Updated the repository based on the latest comment I wrote. I think once the "Run tests" task is up and running, this PR should be ready to merge.

@kbieganski (Contributor):

I think this is a limitation of act; it should work in GitHub Actions proper. Why do we have a separate *_act script?

Looks good overall.

@yesilzeytin (Contributor, Author) commented Jan 12, 2024:

I noticed that in the README, and in the bash script I wrote based on it, we run the tests with a very simple command:

robot $MAIN_DIR/*.robot

However, there are plenty of flags employed within the workflow task where the tests are run:

for TEST_SUITE_FILE in $(ls *.robot); do
    TEST_SUITE="${TEST_SUITE_FILE%.*}"
        robot --report NONE --log NONE \
                  --output $TEST_SUITE.xml \
                  $TEST_SUITE_FILE || \
                  [ $? -lt 252 ]  # exit code >= 252 means internal Robot error
done

So, the _act script uses the routine from test.yml, while the baseline script follows the README routine. Since both caused errors with act, I couldn't determine which would be more appropriate. I can leave it in the original state that you approved before and remove the _act version of the script, if that is okay on your part. In that case, though, we should remove the act section from the README, since it would be incompatible; it would then be "run build_and_run if you want to run locally, and use the workflow if you want to do it with GitHub Actions". If that is okay, I can remove the additional script, bring the README to its final form, and finalize the PR state of the branch.
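As an aside, the `|| [ $? -lt 252 ]` idiom in the workflow snippet above keeps the loop's exit status at zero for ordinary test failures, while still failing (for example under `set -e` or as a CI step) on Robot's internal-error statuses. A sketch with a hypothetical stub in place of robot:

```shell
#!/bin/bash
# fake_robot is a hypothetical stub standing in for the robot command;
# it simply exits with the status it is given.
fake_robot() { return "$1"; }

run_step() {
    # Tolerate exit statuses below 252 (ordinary test failures);
    # anything >= 252 signals an internal Robot Framework error.
    fake_robot "$1" || [ $? -lt 252 ]
}

run_step 1   && echo "status 1 tolerated"
run_step 252 || echo "status 252 is fatal"
```

The trailing test runs only when the command fails, and its own success/failure then becomes the compound command's status, which is what makes the pattern work.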

@kbieganski (Contributor):

The README command just runs all tests and generates a report. When we do --report NONE we need to manually generate a report (like this). If we add that report generation to build_and_run, we can just have one script for both flows. I can just do that after this is merged. So if you remove the _act version, I'll just merge this PR and add the report generation step.
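For context, Robot Framework ships a `rebot` tool for exactly this manual step: it can combine per-suite output XMLs into a single log and report. A sketch only, assuming the per-suite `*.xml` outputs from the workflow run are sitting in `$MAIN_DIR`:

```shell
# Sketch: combine per-suite Robot Framework output XMLs into one
# report.html/log.html pair (requires Robot Framework to be installed;
# the "Combined" name and output path are illustrative assumptions).
rebot --name "Combined" --output output.xml "$MAIN_DIR"/*.xml
```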

@yesilzeytin (Contributor, Author):

Hi again, I made the update as you asked. Thanks for all the feedback and guidance throughout the process; it helped me understand this repository much better, and I learned a lot, since this was my first contribution to a public repository. I'm eagerly looking forward to future updates and developments regarding UVM support. Wishing you all a wonderful day.

@kbieganski (Contributor) left a comment:

LGTM, thanks!

@kbieganski kbieganski merged commit 8cc1079 into antmicro:main Jan 15, 2024
5 checks passed