This paper benchmarks 24 LLMs on their ability to generate formal pre- and post-conditions for programs based on natural language descriptions, using a new dataset of 40 tasks. The study introduces refined performance metrics beyond accept@1 and explores the impact of automatically generated tests for validating LLM-generated specifications. Results indicate that while LLMs can produce valid pre- and post-conditions, proprietary models outperform open-source models, and augmenting the dataset with automatically generated tests exposes incorrect solutions.
LLMs struggle to formalize program post-conditions from natural language, with even the best models failing to correctly formalize all tasks, highlighting a critical gap in their ability to bridge natural language understanding and formal verification.
Formal specifications, such as pre- and post-conditions, provide a solid basis for thorough program verification. However, developers rarely write such formal specifications, so if AI could help construct them, it would make formal verification feasible, or at least make automated testing much more effective. This paper presents a study of the ability of LLMs to generate full formal pre- and post-conditions of a program, given its natural language description. 24 state-of-the-art LLMs were evaluated on a freshly prepared dataset of 40 tasks. The paper investigates specifications of varying difficulty and discusses a set of more refined performance metrics in addition to the general accept@1 performance. It also investigates the impact of using automatically generated tests to validate the solutions proposed by LLMs. The results of the experiment reveal that, in general, LLMs can produce valid pre- and post-conditions based on natural language descriptions of programs, and incorrect solutions from proprietary models are often nearly correct. A closer inspection shows that open-source models tend to produce a higher proportion of erroneous results, while proprietary models tend to have slightly higher false negative rates. Interestingly, the results also show that augmenting the manually prepared dataset with automatically generated tests exposes wrong solutions that would otherwise have been accepted as correct. In general, LLMs formalize pre-conditions better than post-conditions, and proprietary models perform better than open ones. However, none of the LLMs correctly formalized all the tasks in our benchmark. Overall, the effectiveness and reliability of LLMs in formalizing pre- and post-conditions could be greatly improved by a good test suite that checks the correctness of the LLM-generated formalizations.
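To make the setting concrete, the following is a minimal sketch (a hypothetical task, not drawn from the paper's dataset) of what formalizing pre- and post-conditions as executable predicates looks like, and how an automatically generated test suite can expose an under-constrained specification that manual checks would accept:

```python
# Hypothetical example: natural-language task "return the integer square
# root of a non-negative integer n", formalized as executable predicates.
import math

def precondition(n: int) -> bool:
    # Pre-condition: the input must be a non-negative integer.
    return isinstance(n, int) and n >= 0

def postcondition_correct(n: int, result: int) -> bool:
    # Correct post-condition: result is exactly floor(sqrt(n)).
    return result * result <= n < (result + 1) * (result + 1)

def postcondition_weak(n: int, result: int) -> bool:
    # A plausible but under-constrained formalization: it omits the upper
    # bound, so e.g. result = 0 satisfies it for every n.
    return result * result <= n

def isqrt(n: int) -> int:
    # Reference implementation of the described program.
    return math.isqrt(n)

# Automatically generated test inputs.
inputs = [n for n in range(50) if precondition(n)]

# The reference implementation satisfies both candidate post-conditions...
assert all(postcondition_correct(n, isqrt(n)) for n in inputs)
assert all(postcondition_weak(n, isqrt(n)) for n in inputs)

# ...but a trivially wrong program separates them: the correct
# specification rejects it, while the weak one accepts it everywhere.
broken = lambda n: 0
assert not all(postcondition_correct(n, broken(n)) for n in inputs)
assert all(postcondition_weak(n, broken(n)) for n in inputs)
```

This mirrors the paper's finding in miniature: a candidate specification can pass a correct reference implementation yet still be wrong, and only additional generated tests (here, running an incorrect program against it) reveal the gap.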