Conversation

@christophejunke
Contributor

Submitting COLIBRI.json file for SMT-COMP 2025

@github-actions

github-actions bot commented Jun 5, 2025

Summary of modified submissions

COLIBRI

@bobot bobot added the submission Submissions for SMT-COMP label Jun 14, 2025
martinjonas pushed a commit that referenced this pull request Jun 23, 2025
#189: UltimateEliminator submission 2025
#188: Z3-Siri Submission 2025
#187: OSTRICH version 2
#186: yicesQS submission to the 2025 SMT comp
#185: Bitwuzla 2025 submission.
#184: Yices2 Submission SMTCOMP 2025
#183: cvc5 for SMT-COMP 2025
#182: Create iProver
#181: Z3-Owl Submission 2025
#179: Z3-alpha SMT-COMP 2025
#178: Z3-Noodler-Mocha Submission for SMT-COMP 2025
#177: `bv_decide` submission 2025
#176: OpenSMT (min-ucore) submission 2025
#175: Z3-Noodler submission 2025
#172: SMTS submission 2025
#171: Bitwuzla-MachBV Submission for SMT-COMP 2025
#170: Z3-Parti-Z3++ Submission for SMT-COMP 2025
#169: STP-Parti-Bitwuzla Submission for SMT-COMP 2025
#168: SMTInterpol submission 2025
#167: OpenSMT submission 2025
#165: Amaya 2025
#164: SMT-RAT submission
#163: COLIBRI submission
#162: [Submission] colibri2
#156: upload z3-inc-z3++
@martinjonas
Contributor

@christophejunke Thanks for submitting COLIBRI to this year's SMT-COMP!

We have executed your solver on a small number of benchmarks from each logic it should compete in. You can find the results here:

We have not seen any incorrect results returned by your solver (compared to the expected status of the benchmarks).

You can check whether all the results we have obtained are expected. If not, please let us know here.

Some notes:

  • We have used less resources than will be used in the final runs.
  • The benchmarks are scrambled by the official scrambler with seed 1.
  • The status column shows whether your solver decided the benchmark as sat (true) or unsat (false).
  • You can click on the value in the status column to see the output of your solver on that benchmark.

If you upload a new version of the solver and want to have another test run, let me know. We still have some time for that.

Happy rest of the competition!
Martin

@christophejunke
Contributor Author

Hello @martinjonas,

Thank you for your message and your work.

We noticed that some QF_UFFPDTNIRA benchmarks introduce datatypes that are not used by the proof, as in 20200306-Kanig/spark2014bench, where floating-point tests declare unused datatypes. In these cases, if we manually remove the `declare-datatypes` expressions that COLIBRI does not support, the result is unsat, as expected, instead of unknown.
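
For illustration, here is a hypothetical minimal benchmark with the same shape (made up for this message, not an excerpt from the actual spark2014bench files): the declared datatype is never referenced, so the expected unsat answer depends only on the floating-point constraints.

```smt2
; Hypothetical minimal example of the pattern described above
; (not taken from the actual spark2014bench files).
(set-logic QF_UFFPDTNIRA)
(set-info :status unsat)

; Datatype declared but never used by any assertion.
(declare-datatypes ((Unit 0)) (((mk-unit))))

; The actual problem is purely about floating-point arithmetic:
; no float x satisfies both x < -x and -x < x.
(declare-const x Float32)
(assert (fp.lt x (fp.neg x)))
(assert (fp.lt (fp.neg x) x))
(check-sat)
; Removing the declare-datatypes command above leaves an equivalent
; problem that no longer needs any datatype support.
(exit)
```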

We are wondering whether the benchmark set would benefit from having these tests cleaned up and moved to a non-DT category, given that adding purely syntactic support for datatypes would change the outcome of a test that depends only on floating-point operations.

Thanks again for your work,
Have a nice day,

Christophe

@martinjonas
Contributor

@christophejunke Thanks for the report. We discussed it with the co-organizers and we will report the benchmarks to the SMT-LIB maintainers. However, since they are technically correct, we will not exclude them from the competition this year.

@christophejunke
Contributor Author

@martinjonas Thank you.

We currently have a small patch for COLIBRI that makes it handle these cases, but it wasn't submitted for the final version. We are also considering retracting COLIBRI from the QF_UFFPDTNIRA category, which is probably the most likely outcome given the time constraints we are facing.

If there is any chance of submitting a newer version, that would be great, but I understand that you have a lot of work on your side, and we would be totally fine with retracting from that category if that works for you.

Thanks a lot again,
Have a nice day

@martinjonas
Contributor

@christophejunke I see that you have retracted COLIBRI from the category. This is a good solution, thanks for it (updating the solver after the final deadline would require further discussion). Also, the final versions of solvers should be uploaded to Zenodo. Please do this as soon as possible and change the archive URL. Thanks!

@christophejunke
Contributor Author

@martinjonas Yes, that's the better solution here. I updated the link, and once the tests pass I think this should be right.

@martinjonas martinjonas merged commit 1db1209 into SMT-COMP:master Jul 1, 2025
5 checks passed