Verification tools often rely on back-end reasoning engines to establish their results. Despite being used in safety-critical contexts, reasoning engines are known to have bugs, which can be a source of unsoundness in verification. In this project, we investigate how results from different reasoning engines can be validated and how they can provide additional guarantees for verification tools.
The two tracks of this project are:
Certified execution of reasoning engines
Reasoning engines such as SAT, SMT, and CHC solvers can produce erroneous results. In SMT-COMP'20, for instance, there were 149 instances with unknown status on which at least two competing solvers disagreed. Existing approaches to validating SMT solvers' results rely on heavyweight interactive theorem provers and thus do not lend themselves easily to automated verification. We propose lightweight correctness witnesses that can be checked by simple automated checkers, making their integration into verification frameworks much more practical. An implementation of such certificates for SMT solving is available through our in-house solver OpenSMT. In addition, CHC models can be validated by relying on proof-producing SMT solvers via our framework ATHENA. The sketch below illustrates the idea of a simple automated check.
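To illustrate the simplest kind of lightweight witness, a "sat" answer can be validated by evaluating the reported model against the original formula. The following sketch uses the Z3 Python API purely as an independent evaluator; it is not OpenSMT's actual certificate format, and the helper name is hypothetical.

# Minimal sketch: validating a solver's "sat" answer by evaluating the
# reported model against the original formula. Z3's Python API serves as
# an independent evaluator here; this illustrates a lightweight automated
# check, not OpenSMT's certificate format.
from z3 import Ints, And, Or, Solver, sat, is_true

def validate_sat_witness(formula, model):
    # Evaluate the formula under the model; a genuine witness yields "true".
    value = model.eval(formula, model_completion=True)
    return is_true(value)

# Obtain a model from one solver run, then re-check it independently.
x, y = Ints("x y")
formula = And(x + y > 5, Or(x < 0, y < 3))

s = Solver()
s.add(formula)
assert s.check() == sat
witness = s.model()

print(validate_sat_witness(formula, witness))  # True iff the witness is genuine

"Unsat" answers require a proof certificate rather than a model; the point of the lightweight format is that such certificates can likewise be replayed by a small, simple checker instead of a full interactive theorem prover.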
Integration of correctness certificates into verification frameworks
Guarantees about solver execution are not enough to eliminate all sources of unsoundness in verification: the solved instances may not accurately reflect the program or system being verified. To address this issue, we propose to certify every call that a verification tool makes to a reasoning engine. This certification amounts to establishing that the verification tool's internal representation of the instance being verified is logically equivalent to the formula handed to the solver, as in the sketch below.
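A minimal sketch of such a certification step, assuming both representations can be exported into a common term language: discharge the equivalence between the tool's internal formula and the formula sent to the solver with an independent check. Z3 is used as the checker here, and both formulas are hypothetical stand-ins.

# Minimal sketch: certifying a solver call by checking that the tool's
# internal representation f_internal is logically equivalent to the
# formula f_solver actually sent to the solver. Equivalence holds iff
# Not(f_internal == f_solver) is unsatisfiable.
from z3 import Ints, And, Not, Solver, unsat

def certify_solver_call(f_internal, f_solver):
    # Both formulas are assumed to be expressed over the same vocabulary.
    s = Solver()
    s.add(Not(f_internal == f_solver))
    return s.check() == unsat

x, y = Ints("x y")
f_internal = And(x > 0, y > 0)
f_solver = And(y > 0, x > 0)   # e.g. the instance after the tool's preprocessing

print(certify_solver_call(f_internal, f_solver))  # True: the call is certified

Note that this check itself invokes a solver, so in a fully certified pipeline it would in turn be discharged by a proof-producing run whose certificate is replayed by a simple checker, rather than trusted outright.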
Publications: