Synergy of Theorem Proving and Machine Learning
On Tuesday December 5, 2023, at 10:30, Bartosz Piotrowski will defend his PhD thesis "Synergy of Machine Learning and Automated Reasoning".
On the occasion of this event, a small seminar will take place the day after, on Wednesday December 6.
You are cordially invited to attend the defense, the seminar, or both. (No need to register.)
Seminar
- Date and time of the seminar: Wednesday December 6, 9:30-12:30
- Location: EOS N 01.750 (Elinor Oostrom Building)
- Schedule:
- 9:30-10:00 Jasmin Blanchette (Technical University of Munich) -- Lambda-Superposition: From Theory to Trophy
(Joint work with Alexander Bentkamp, Simon Cruanes, Visa Nummelin, Stephan Schulz, Sophie Tourret, Petar Vukmirovic, and Uwe Waldmann)
Abstract: Lambda-superposition is a new proof calculus for higher-order logic. To a large extent, the calculus is a graceful generalization of standard superposition, a highly successful calculus for first-order logic. On the theory side, we proved lambda-superposition refutationally complete. On the practical side, we implemented it in the E and Zipperposition provers. Zipperposition finished first in the higher-order theorem division of the CADE ATP System Competition in 2020, 2021, and 2022, suggesting that superposition is a valuable approach in a higher-order setting as well.
- 10:00-10:30 Bartosz Piotrowski -- Can large language models help prove theorems in proof assistants?
Abstract: Large language models (LLMs), in addition to performing very well
on natural language, are also able -- to an extent -- to handle symbolic,
mathematical tasks. In particular, there has recently been much excitement
about harnessing the power of LLMs to prove theorems in proof assistants.
I will summarize some recent work on this topic and share my experiences
with using neural language models for several reasoning tasks.
- 10:30-11:00 coffee break
- 11:00-11:30 Josef Urban (Czech Technical University Prague) -- Alien Coding: Learning Synthesis of OEIS Sequences
Abstract: I will describe a self-learning algorithm for synthesizing programs that provide explanations for OEIS sequences. The algorithm starts from scratch, initially generating programs at random. It then runs many iterations of a self-learning loop that interleaves (i) training a neural machine translation model to learn the correspondence between sequences and the programs discovered so far, and (ii) having the trained translator propose many new programs for each OEIS sequence. The algorithm discovers on its own programs for more than 100,000 OEIS sequences, sometimes developing unusual programming methods.
- 11:30-12:00 Michail Karatarakis (Radboud University Nijmegen) -- A formalization of the statement of Deligne's theorem
Abstract: The formalization of the main objects and theorems in the proof of Fermat's Last Theorem is the focus of current work in the Lean 4 theorem prover. We present a partial formalization of the statement of Deligne's theorem on attaching a p-adic Galois representation to a weight k eigenform. The case of this theorem for k = 2 is an important part of the Wiles/Taylor-Wiles proof of Fermat's Last Theorem.
- 12:00-12:15 Jelle Piepenbrock (Radboud University Nijmegen) -- Graph Neural Network guidance for Instantiation in SMT solvers
Abstract: In the process of finding a proof, SMT solvers instantiate quantified expressions with ground terms in order to find contradictions. The difficulty lies in picking the right instantiations, as there are often many terms to choose from and many possible ways to instantiate. In this talk, we will present preliminary work on using a graph neural network integrated into the SMT solver CVC5 to predict useful instantiations. We compare the performance to existing instantiation strategies, such as e-matching.
Herman