feat: Add test(should_fail) attribute for tests that are meant to fail #2418

Merged
35 commits merged into master from e/should_panic on Aug 24, 2023

Conversation

Ethan-000
Contributor

Description

Problem*

Resolves #1994

Summary*

Documentation

  • This PR requires documentation updates when merged.

    • I will submit a noir-lang/docs PR.
    • I will request for and support Dev Rel's help in documenting this PR.

Additional Context

PR Checklist*

  • I have tested the changes locally.
  • I have formatted the changes with Prettier and/or cargo fmt on default settings.
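
For context, a minimal usage sketch of the attribute this PR adds, written as a Noir test; the function name and assertion are illustrative:

    #[test(should_fail)]
    fn test_expected_to_fail() {
        // Reported as passing because the assertion fails, which is what should_fail expects.
        assert(1 == 2);
    }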

Contributor

@phated left a comment


Looking great! I had a few code style recommendations that will clean things up a bit.

Review threads (outdated, resolved): crates/nargo_cli/src/cli/test_cmd.rs ×5, crates/noirc_frontend/src/hir/def_map/mod.rs ×1
Ethan-000 and others added 2 commits August 24, 2023 18:05
Co-authored-by: Blaine Bublitz <blaine.bublitz@gmail.com>
@Ethan-000
Contributor Author

Bikeshedding some more :D, we could probably have the function be:

    pub fn get_all_test_functions<'a>(
        &'a self,
        interner: &'a NodeInterner,
    ) -> impl Iterator<Item = TestFunction> + 'a {
        self.modules
            .iter()
            .flat_map(|(_, module)| module.value_definitions().filter_map(|id| id.as_function()))
            .filter_map(|func_id| match interner.function_meta(&func_id).attributes {
                Some(Attribute::Test(scope)) => Some(TestFunction::new(func_id, scope)),
                _ => None,
            })
    }

Ahh, I thought it would be better to use just one filter_map? I don't have a preference though. Do you want to go with two?

@phated
Contributor

phated commented Aug 24, 2023

I think he's trying to remove the internal mapper, so you can combine the two techniques (a flat_map followed by a filter_map):

    pub fn get_all_test_functions<'a>(
        &'a self,
        interner: &'a NodeInterner,
    ) -> impl Iterator<Item = TestFunction> + 'a {
        self.modules
            .iter()
            .flat_map(|(_, module)| module.value_definitions())
            .filter_map(|id| {
                if let Some(func_id) = id.as_function() {
                    match interner.function_meta(&func_id).attributes {
                        Some(Attribute::Test(scope)) => Some(TestFunction::new(func_id, scope)),
                        _ => None,
                    }
                } else {
                    None
                }
            })
    }

Technically this is one fewer closure than what kev suggested, but I'm guessing the compiler would optimize them away 🤔
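
For reference, the nesting can be flattened one step further with the ? operator on the Option returned by as_function(), keeping the same behavior; this is just a sketch assuming the same surrounding types (NodeInterner, Attribute, TestFunction) as the snippets above:

    pub fn get_all_test_functions<'a>(
        &'a self,
        interner: &'a NodeInterner,
    ) -> impl Iterator<Item = TestFunction> + 'a {
        self.modules
            .iter()
            .flat_map(|(_, module)| module.value_definitions())
            .filter_map(|id| {
                // `?` short-circuits to None for value definitions that are not functions.
                let func_id = id.as_function()?;
                match interner.function_meta(&func_id).attributes {
                    Some(Attribute::Test(scope)) => Some(TestFunction::new(func_id, scope)),
                    _ => None,
                }
            })
    }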

@kevaundray
Collaborator

Just going to run this on a few examples

@kevaundray
Collaborator

This is something we should do in a separate issue: when a test that should fail instead passes, there is no red line telling us which test passed when it should not have. This is in contrast to the opposite case. To illustrate, run nargo test on these two pieces of code:

#[test]
fn foo() {
    assert(1 == 2);
}

#[test(should_fail)]
fn foo() {
    assert(1 == 1);
}

@kevaundray self-requested a review August 24, 2023 21:22
Collaborator

@kevaundray left a comment


This LGTM -- @phated did you have any other concerns?

Contributor

@phated left a comment


LGTM. Nice work @Ethan-000. I think we can follow up with more descriptive error messages (such as what kev mentioned, or noting that a test failed because it succeeded when it was expected to fail).

@Ethan-000
Contributor Author

Thanks for the review 🙂

@kevaundray added this pull request to the merge queue Aug 24, 2023
Merged via the queue into master with commit 74af99d Aug 24, 2023
16 checks passed
@kevaundray deleted the e/should_panic branch August 24, 2023 22:02
TomAFrench added a commit that referenced this pull request Aug 25, 2023
* master:
  fix: Implement new mem2reg pass (#2420)
  feat(nargo): Support optional directory in git dependencies (#2436)
  fix(acir_gen): Pass accurate contents to slice inputs for bb func calls (#2435)
  fix(ssa): codegen missing check for unary minus (#2413)
  fix(lsp): Remove duplicated creation of lenses (#2433)
  feat: Add `test(should_fail)` attribute for tests that are meant to fail (#2418)
  chore: improve error message for InternalError (#2429)
  chore: Add stdlib to every crate as it is added to graph (#2392)
@Savio-Sou
Collaborator

Should this be documented (i.e. is it meant to be used by Noir devs)?

@Ethan-000
Contributor Author

Ahh yes, this should probably be documented along with #2541.

@Savio-Sou
Collaborator

Thanks! Created noir-lang/docs#372 for documenting this issue (while #2541 can be documented with noir-lang/docs#367).


Successfully merging this pull request may close these issues.

Having a flag #[should_panic] for tests that are meant to fail
6 participants