Conversation


@Tomaqa Tomaqa commented Jun 10, 2025

No description provided.

@github-actions

Summary of modified submissions

OpenSMT

  • 2 authors
  • website: https://github.com/usi-verification-and-security/opensmt
  • Participations
    • UnsatCore
      • QF_Equality
        • all
      • QF_Equality+LinearArith
        • QF_UFDTLIA
        • QF_UFDTLIRA
      • QF_LinearIntArith
        • QF_LIRA
      • QF_LinearRealArith
        • all
    • SingleQuery
      • QF_Equality
        • all
      • QF_Equality+LinearArith
        • QF_UFDTLIA
        • QF_UFDTLIRA
      • QF_LinearIntArith
        • QF_LIRA
      • QF_LinearRealArith
        • all
    • ModelValidation
      • QF_Equality
        • all
      • QF_Equality+LinearArith
        • all
      • QF_LinearIntArith
        • QF_LIRA
      • QF_LinearRealArith
        • all
    • Incremental
      • QF_Equality
        • all
      • QF_Equality+LinearArith
        • all
      • QF_LinearIntArith
        • all
      • QF_LinearRealArith
        • all


Tomaqa commented Jun 12, 2025

I would like to ask whether it is possible to add an orthogonal participation that uses a different configuration of the solver. I am not sure whether specialized configurations always override the original one, resulting in at most one participation per track/logic, or whether multiple participations are possible.

Concretely, would adding the following

        ,
        {
            "tracks": ["UnsatCore"],
            "command": ["./opensmt", "--minimal-unsat-cores"],
            "logics": [
                "QF_UF", "QF_AX",
                "QF_IDL", "QF_RDL", "QF_LIA", "QF_LRA",
                "QF_UFIDL", "QF_UFLIA", "QF_UFLRA",
                "QF_ALIA",
                "QF_AUFLIA"
            ]
        }

to the participations field in our JSON file result in a second participation in the Unsat Core Track, or would it be necessary to create a new submission for that?

@martinjonas

@Tomaqa Good question! Multiple overlapping participations are not allowed (unfortunately, this is currently not checked by the CI).

If you also want to run an alternative version of OpenSMT on the unsat core benchmarks, please create a new submission for it. This is better because the alternative version will have a more telling name in the result tables and can also have its own dedicated system description that describes the algorithm used in more detail. You can still reuse the same archive URL for both submissions.
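Since overlap between participations is currently not checked by the CI, such a check has to be done by hand. A minimal sketch of what an automated check could look like (this is hypothetical, not the actual SMT-COMP CI; the field names `tracks` and `logics` follow the JSON snippet above, and the real submission schema may differ):

```python
import json

def overlapping_pairs(participations):
    """Return the set of (track, logic) pairs claimed by more than one entry."""
    seen, overlaps = set(), set()
    for entry in participations:
        for track in entry.get("tracks", []):
            for logic in entry.get("logics", []):
                pair = (track, logic)
                if pair in seen:
                    overlaps.add(pair)  # claimed by an earlier entry too
                seen.add(pair)
    return overlaps

# Example: two entries both claiming QF_UF in the UnsatCore track.
submission = json.loads("""
[
  {"tracks": ["UnsatCore"], "logics": ["QF_UF", "QF_LIA"]},
  {"tracks": ["UnsatCore"], "logics": ["QF_UF"]}
]
""")
print(overlapping_pairs(submission))  # {('UnsatCore', 'QF_UF')}
```

A non-empty result would mean the submission needs to be split, as described above.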

@bobot bobot added the submission Submissions for SMT-COMP label Jun 14, 2025
martinjonas pushed a commit that referenced this pull request Jun 23, 2025
#189: UltimateEliminator submission 2025
#188: Z3-Siri Submission 2025
#187: OSTRICH version 2
#186: yicesQS submission to the 2025 SMT comp
#185: Bitwuzla 2025 submission.
#184: Yices2 Submission SMTCOMP 2025
#183: cvc5 for SMT-COMP 2025
#182: Create iProver
#181: Z3-Owl Submission 2025
#179: Z3-alpha SMT-COMP 2025
#178: Z3-Noodler-Mocha Submission for SMT-COMP 2025
#177: `bv_decide` submission 2025
#176: OpenSMT (min-ucore) submission 2025
#175: Z3-Noodler submission 2025
#172: SMTS submission 2025
#171: Bitwuzla-MachBV Submission for SMT-COMP 2025
#170: Z3-Parti-Z3++ Submission for SMT-COMP 2025
#169: STP-Parti-Bitwuzla Submission for SMT-COMP 2025
#168: SMTInterpol submission 2025
#167: OpenSMT submission 2025
#165: Amaya 2025
#164: SMT-RAT submission
#163: COLIBRI submission
#162: [Submission] colibri2
#156: upload z3-inc-z3++
@Tomaqa Tomaqa force-pushed the opensmt25 branch 2 times, most recently from 9c55d90 to 71663c0 Compare June 23, 2025 10:57
@martinjonas

@Tomaqa Thanks for submitting OpenSMT to this year's SMT-COMP!

We have executed your solver on a small number of benchmarks from each logic it should compete in. You can find the results here:

We have not seen any incorrect results returned by your solver (compared to the expected status of the benchmarks), but there are some errors ("Unknown command encountered: define-sort") in the Incremental track.

You can check whether all the results we have obtained are expected. If not, please let us know here.

Some notes:

  • We have used fewer resources than will be used in the final runs.
  • The benchmarks are scrambled by the official scrambler with seed 1.
  • The status column shows whether your solver decided the benchmark as sat (true) or unsat (false).
  • For the incremental track, the status column also shows the number of correct answers.
  • The purpose of the evaluation is just a technical sanity check that your solver works on our infrastructure. We have therefore not checked the returned unsat cores and models: for the unsat core and model validation tracks, the status column shows only the satisfiability of the input formula, not the validity of the returned unsat core/model.
  • You can click on the value in the status column to see the output of your solver on that benchmark.

If you upload a new version of the solver and want to have another test run, let me know. We still have some time for that.

Happy rest of the competition!
Martin

@Tomaqa
Copy link
Contributor Author

Tomaqa commented Jun 25, 2025

Thank you, we are aware of the issue regarding the define-sort command, but we have not fixed it yet 😇 Hopefully next year.

@martinjonas martinjonas merged commit ad53a97 into SMT-COMP:master Jun 30, 2025
5 checks passed
