MFoCS Seminar 2024–2025

Monday 20 January

Lyra van Bokhoven
10:30–11:20

Title: Linear Logic and Haskell
Advisor: Bálint Kocsis
Chair: Erik Oosting

[slides]

Wadler, P. A taste of linear logic. In: Borzyszkowski, A.M., Sokołowski, S. (eds) Mathematical Foundations of Computer Science 1993. MFCS 1993. Lecture Notes in Computer Science, vol 711. Springer, Berlin, Heidelberg. 1993.

Jean-Philippe Bernardy, Mathieu Boespflug, Ryan R. Newton, Simon Peyton Jones, and Arnaud Spiwack. Linear Haskell: practical linearity in a higher-order polymorphic language. Proc. ACM Program. Lang. 2, POPL, Article 5 (January 2018), 29 pages.

Abstract:
Traditional logic allows us to copy proofs. This is useful: we do not want to prove a statement every time we use it. However, when reasoning about resources instead of truth, we might want to restrict the ability to copy or discard: we (unfortunately) cannot copy a cake, and we do not want to discard a file pointer without properly closing it first. Linear logic provides the ability to construct formal proofs involving resources.

Like traditional logic, linear logic has terms, which can be viewed as a programming language. One variant of the theory of linear logic has been applied in Haskell in this way. This allows us to write programs with greater control over resources like files.
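The talk's setting is Haskell, but the use-exactly-once discipline can also be approximated dynamically in any language. Below is a minimal Python sketch (the `Linear` wrapper is hypothetical, not part of any library) that rejects a second use of a resource at runtime, where a linear type system would reject it at compile time:

```python
class Linear:
    """Wraps a value that must be consumed exactly once (checked at runtime)."""

    def __init__(self, value):
        self._value = value
        self._consumed = False

    def consume(self):
        if self._consumed:
            raise RuntimeError("linear value used more than once")
        self._consumed = True
        return self._value


cake = Linear("cake")
assert cake.consume() == "cake"   # first use is fine
try:
    cake.consume()                # second use is rejected: no copying a cake
    assert False
except RuntimeError:
    pass
```

A static linear type system gives the same guarantee without any runtime bookkeeping, which is what makes it attractive for resources like file handles.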

Tanja Muller
11:20–12:10

Title: Combining uniqueness and linearity in one type system
Advisor: Komi Golov
Chair: Remco van Os

[slides]

Daniel Marshall, Michael Vollmer, Dominic Orchard. Linearity and Uniqueness: An Entente Cordiale. European Symposium on Programming 2022.

Dana Harrington. Uniqueness Logic. Theoretical Computer Science (TCS), 2006.

Abstract:
Both linearity and uniqueness restrict how many times something may occur: linear variables must be used exactly once, while unique variables must have at most one reference. A language implementation can perform certain operations more efficiently when it knows that such a constraint holds, so supporting these concepts can improve performance.

To concretize linearity and uniqueness, we consider a formalization of each of these notions in a logic. We can also represent each of them in a type system, which is closely related to their respective logic. In this talk, we will discuss the relation between uniqueness and linearity, and how this corresponds to their formalizations as a logic. Then we showcase how to combine these two concepts in a single type system. We close off the talk by discussing how this translates to a programming language and the main results therein.

Lunch Break
12:10–13:30

Dick Arends
13:30–14:20

Title: Verified Extraction and Type-Checking for Coq in Coq
Advisor: Robbert Krebbers
Chair: Rutger Broekhoff

[slides]

Yannick Forster, Matthieu Sozeau, and Nicolas Tabareau. Verified Extraction from Coq to OCaml. Proc. ACM Program. Lang. 8, PLDI, Article 149 (June 2024), 24 pages.

Matthieu Sozeau, Simon Boulier, Yannick Forster, Nicolas Tabareau, and Théo Winterhalter. Coq Coq Correct! Verification of Type Checking and Erasure for Coq, in Coq. Proc. ACM Program. Lang. 4, POPL, Article 8 (January 2020), 28 pages.

Abstract:
In this presentation, we will explore two formal verification projects in Coq, both developed and proven correct within Coq itself. These formalizations address two fundamental components of the Coq system: extraction and type checking. The first project formalizes a verified extraction pipeline that translates Coq programs into operationally equivalent OCaml code, ensuring correctness and providing guarantees on interoperability with unverified code. The second focuses on an implementation of Coq's kernel type checker, developed according to a formal specification of the underlying type theory of Coq. We will take a closer look at key contributions from both projects and examine how they complement each other.

Rutger Broekhoff
14:20–15:10

Title: Don't trust, verify: guarantees for the Lean proof assistant
Advisor: Freek Wiedijk
Chair: Daan Spijkers

[slides]

M. Carneiro, "The Type Theory of Lean," Master's Thesis, Carnegie Mellon University, Pittsburgh, PA, USA, 2019.

M. Carneiro, "Lean4Lean: Towards a Verified Typechecker for Lean, in Lean," arXiv:2403.14064, 2024.

Abstract:
When one hears that the kernel of the Lean proof assistant is written in C++, that Lean lacks subject reduction, that its definitional equality is not transitive, or really anything else of the kind, one might be driven to believe that one had better stay far away from Lean. We will discuss how these metatheoretic properties arise and what their effects are. Furthermore, we look into current developments that target greater reliability of the proof assistant's kernel. We also briefly cover similar breakdowns in other proof assistants, to see whether we can ultimately trust these tools or not.

Break
15:10–15:30

Mark Lapidus
15:30–16:20

Title: Type Theory with Explicit Universe Polymorphism
Advisor: Niels van der Weide
Chair: Bas van der Linden

Marc Bezem, Thierry Coquand, Peter Dybjer, and Martín Escardó. Type Theory with Explicit Universe Polymorphism. In 28th International Conference on Types for Proofs and Programs (TYPES 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 269, pp. 13:1-13:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik. 2023.

Mario Carneiro, The Type Theory of Lean. 2019.

Abstract:
Universes are a fundamental feature of type theory, providing the foundation for defining polymorphic functions such as the identity function. However, Girard's paradox demonstrates that a universe cannot contain itself without leading to inconsistency. To avoid this, many type theories employ a countable hierarchy of universes. This approach introduces a new challenge: polymorphic functions, like the identity function, must be redefined for each universe. Universe polymorphism addresses this issue by enabling definitions that work uniformly across all universes. In this talk, we present a type theory with explicit universe polymorphism, built on a countable hierarchy of universes. We explore the formal foundations of this system and compare it with the implementation of universe polymorphism in proof assistants such as Coq and Lean, illustrating our ideas through concrete examples in these tools.
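In Lean, for example, a single universe-polymorphic definition replaces the per-universe copies of the identity function (a small Lean 4 sketch; the name `myId` is hypothetical):

```lean
universe u

-- One definition, valid at every universe level u
def myId {α : Sort u} (a : α) : α := a

#check @myId.{1}  -- explicitly instantiated at universe level 1 (ordinary types)
```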

Tuesday 21 January

Erik Oosting
10:30–11:20

Title: Proving PureCake (and CakeML)
Advisor: Freek Wiedijk
Chair: Niels van Duijl

[slides] [handout]

Yong Kiam Tan, Magnus O. Myreen, Ramana Kumar, Anthony Fox, Scott Owens, and Michael Norrish. The verified CakeML compiler backend. Journal of Functional Programming, 29, 2019.

Hrutvik Kanabar, Samuel Vivien, Oskar Abrahamsson, Magnus O. Myreen, Michael Norrish, Johannes Åman Pohjola, and Riccardo Zanetti. PureCake: A verified compiler for a lazy functional language. In Programming Language Design and Implementation (PLDI). ACM, 2023.

Abstract:
In 2014, a new variant of ML called CakeML was published. Its main feature is that it is written in the language of the HOL4 theorem prover and is designed to provably never exceed memory bounds or miscompile, all the way down to the machine-code level. A follow-up project designed a similar language, PureCake, a functional programming language with lazy semantics.

We will explore the challenges of proving a program correct, how lazy semantics work internally, and what difficulties these semantics, together with a Hindley–Milner-inspired type system, pose.

Additional resources that were used in the presentation but won't be expanded much upon:

Ramana Kumar, Magnus O. Myreen, Michael Norrish, and Scott Owens. CakeML: A verified implementation of ML. In Principles of Programming Languages (POPL), pages 179--191. ACM Press, January 2014.

Sabry, Amr and Matthias Felleisen. "Reasoning about programs in continuation-passing style." LISP and Symbolic Computation 6 (1992): 289-360.

Xia, Li-yao, Yannick Zakowski, Paul He, Chung-Kil Hur, Gregory Malecha, Benjamin C. Pierce and Steve Zdancewic. "Interaction trees: representing recursive and impure programs in Coq." Proceedings of the ACM on Programming Languages 4 (2019): 1 - 32.

Remco van Os
11:20–12:10

Title: Computing Minimal Distinguishing Formulas for Bisimulation
Advisor: Herman Geuvers
Chair: Jasper Laumen

[slides]

Martens, J., Groote, J.F. Minimal Depth Distinguishing Formulas Without Until for Branching Bisimulation. In: Capretta, V., Krebbers, R., Wiedijk, F. (eds) Logics and Type Systems in Theory and Practice. Lecture Notes in Computer Science, vol 14560. Springer, Cham. 2024.

Martens, J., & Groote, J. F. Computing Minimal Distinguishing Hennessy-Milner Formulas is NP-Hard, but Variants are Tractable. In G. A. Perez, & J.-F. Raskin (Eds.), 34th International Conference on Concurrency Theory, CONCUR 2023 Article 32 (Leibniz International Proceedings in Informatics, LIPIcs; Vol. 279). Schloss Dagstuhl - Leibniz-Zentrum für Informatik. 2023.

Abstract:
Labelled transition systems are a common modelling formalism comprising states and labelled transitions, also called actions. We are interested in whether two different systems behave similarly, which leads to the concept of bisimilarity. Intuitively, two states are bisimilar when every action possible in one state can be matched in the other, and vice versa.

We introduce a modal logic for talking abstractly about the behaviour of a state. It includes a diamond operator, which expresses that a specific action can be performed, after which a state is reached where the remaining formula holds. By the Hennessy–Milner Theorem, this logic characterizes bisimilarity: two states are bisimilar if and only if they satisfy exactly the same formulas.

We can thereby consider distinguishing formulas: a formula that holds in one state but not the other. The existence of such a formula proves non-bisimilarity. First, we consider the problem of computing minimal distinguishing formulas for non-bisimilar states. This is NP-hard if the size of the formula must be minimal, but polynomial algorithms exist if minimality is instead measured as the number of nested modalities. Via apartness, the dual notion of bisimilarity, we can extend this result to minimal distinguishing formulas for branching bisimulation, a more subtle form of bisimulation.
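As a concrete illustration (all code and state names below are hypothetical), a few lines of Python suffice to evaluate such diamond formulas on a small labelled transition system; the formula ⟨a⟩(⟨b⟩⊤ ∧ ⟨c⟩⊤) distinguishes two classic non-bisimilar states:

```python
# A labelled transition system: state -> list of (action, successor).
LTS = {
    "p": [("a", "p1")],
    "p1": [("b", "p2"), ("c", "p2")],
    "p2": [],
    "q": [("a", "q1"), ("a", "q2")],   # q commits to b or c already at the a-step
    "q1": [("b", "q3")],
    "q2": [("c", "q3")],
    "q3": [],
}


def holds(lts, state, formula):
    """Evaluate a Hennessy-Milner formula: "true", ("dia", a, f), or ("and", f, g)."""
    if formula == "true":
        return True
    if formula[0] == "dia":
        _, action, sub = formula
        return any(a == action and holds(lts, t, sub) for a, t in lts[state])
    if formula[0] == "and":
        return holds(lts, state, formula[1]) and holds(lts, state, formula[2])
    raise ValueError(f"unknown formula: {formula!r}")


# <a>(<b>true and <c>true) holds in p but not in q, so p and q are not bisimilar.
phi = ("dia", "a", ("and", ("dia", "b", "true"), ("dia", "c", "true")))
assert holds(LTS, "p", phi) and not holds(LTS, "q", phi)
```

Note that the distinguishing formula here has nesting depth two, which is minimal for this pair of states.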

Lunch Break
12:10–13:30

Daan Spijkers
13:30–14:20

Title: Regular Transductions
Advisor: Jurriaan Rot
Chair: Simón Ruiz

[slides]

Bojańczyk, Mikołaj, and Lê Thành Dũng Tito Nguyễn. "Algebraic Recognition of Regular Functions." 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Schloss-Dagstuhl-Leibniz Zentrum für Informatik, 2023.

Alur, Rajeev, and Pavol Černý. "Expressiveness of streaming string transducers." IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010). Schloss-Dagstuhl-Leibniz Zentrum für Informatik, 2010.

Abstract:
Regular transductions generalize regular languages: the models produce string output instead of merely accepting or rejecting strings. We will cover three of these string-to-string models and try to indicate why they are equivalent.

The first model is the streaming string transducer (SST), which makes a single linear pass over the input, updating several string variables with each letter. The second defines transductions in monadic second-order logic (MSO). Finally, we have transducer semigroups, a more abstract model using concepts from category theory.
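A one-pass register machine of this kind is easy to sketch. The hypothetical Python functions below mimic copyless SSTs: one register for reversal, and two registers for the map w ↦ w·reverse(w), which is often cited as a transduction beyond ordinary one-way finite transducers:

```python
def reverse_sst(word):
    """One-register SST for reversal: on each input letter a, update X := a + X."""
    x = ""
    for a in word:
        x = a + x
    return x          # output function: X


def palindromize_sst(word):
    """Two-register SST for w -> w + reverse(w): X := X + a, Y := a + Y (copyless:
    each register appears exactly once on the right-hand sides)."""
    x, y = "", ""
    for a in word:
        x, y = x + a, a + y
    return x + y      # output function: X + Y


assert reverse_sst("abc") == "cba"
assert palindromize_sst("ab") == "abba"
```

The copyless restriction on register updates is what keeps the model within the regular transductions.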

Bas van der Linden
14:20–15:10

Title: Automatic differentiation: Compositional backpropagation and other methods
Advisor: Dario Stein
Chair: Lukas Nieuweboer

[slides]

Atılım Günes Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul and Jeffrey Mark Siskind. Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research. 2017.

Aloïs Brunel, Damiano Mazza, and Michele Pagani. Backpropagation in the simply typed lambda-calculus with linear negation. Proceedings of the ACM on Programming Languages. 2020.

Abstract:
There are many programs that need to compute the (partial) derivative of some function. Automatic differentiation is a set of techniques with which you can effectively evaluate such a derivative. These techniques are more general than manual or symbolic methods and more accurate than numeric methods. I will explain the two main modes: forward and reverse (also known as backpropagation).

In recent years, these methods have been applied to programming languages, making it possible to differentiate programs that include higher-order combinators such as map and fold. In the second paper, the authors define such a program transformation that applies backpropagation. This transformation has the additional benefit of being compositional, meaning it can be applied to parts of a program separately.
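Forward mode can be sketched in a few lines via dual numbers: each value carries its derivative, and arithmetic propagates both by the usual calculus rules. A minimal Python illustration (all names hypothetical):

```python
from dataclasses import dataclass


@dataclass
class Dual:
    """Dual number val + der*eps with eps^2 = 0; der carries the derivative."""
    val: float
    der: float

    def __add__(self, other):
        # (u + u'e) + (v + v'e) = (u + v) + (u' + v')e
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        # (u + u'e)(v + v'e) = uv + (u'v + uv')e  -- the product rule
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)


def f(x):
    # f(x) = x^2 + x, so f'(x) = 2x + 1
    return x * x + x


d = f(Dual(3.0, 1.0))            # seed derivative 1 at x = 3
assert d.val == 12.0 and d.der == 7.0
```

One forward pass yields the derivative with respect to one input; reverse mode instead yields all partial derivatives of one output in a single backward pass, which is why it dominates in machine learning.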

Wednesday 22 January

Niels van Duijl
10:30–11:20

Title: Blackwell optimality in Robust MDPs
Advisor: Eline Bovy
Chair: Tanja Muller

[slides]

Vineet Goyal and Julien Grand-Clément, Robust Markov Decision Process: Beyond Rectangularity, Mathematics of Operations Research 2023.

Julien Grand-Clément, Marek Petrik, Nicolas Vieille. Beyond discounted returns: Robust Markov decision processes with average and Blackwell optimality. 2024.

Abstract:
Markov decision processes (MDPs) are models for sequential decision-making problems in a stochastic environment. Robust MDPs replace the exact probabilities with a set of possible probabilities, called the uncertainty set. The goal is to compute the optimal policy for the worst-case probabilities from the uncertainty set. To compute it, assumptions on independence between the probabilities are needed, of which there are various types.

The first paper introduces a new type of independence assumption, where the probabilities between states depend on common underlying factors. For example, in healthcare, the genetics of a patient could be a common factor that influences the chance of success of many possible treatments. It goes on to show various results under this assumption. The paper proves Blackwell optimality under the assumption that the uncertainty sets are polyhedral.

The second paper goes more into detail about Blackwell optimality in Robust MDPs. It uses the concept of a definable set, and proves that Blackwell optimality is attained when the uncertainty set is definable. This is a more general case than in the first paper.
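The worst-case evaluation at the heart of robust MDPs can be sketched directly when the uncertainty set is finite: the agent maximizes while an adversary picks the least favourable transition distribution. A minimal Python illustration (all names and numbers hypothetical):

```python
def robust_q(reward, gamma, values, uncertainty_set):
    """Worst-case Q-value of one (state, action) pair: the adversary picks, from a
    finite uncertainty set, the transition distribution (successor -> probability)
    minimizing the expected discounted continuation value."""
    return reward + gamma * min(
        sum(p * values[s] for s, p in dist.items())
        for dist in uncertainty_set
    )


values = {"s1": 10.0, "s2": 0.0}
uset = [{"s1": 0.9, "s2": 0.1},   # favourable distribution
        {"s1": 0.5, "s2": 0.5}]   # unfavourable distribution -- the adversary's pick
q = robust_q(reward=1.0, gamma=0.9, values=values, uncertainty_set=uset)
assert abs(q - 5.5) < 1e-9        # 1.0 + 0.9 * min(9.0, 5.0)
```

The structured uncertainty sets in the papers (factor models, polyhedral, definable) constrain which distributions the adversary may combine, which is exactly what the independence assumptions govern.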

Jasper Laumen
11:20–12:10

Title: Automata Learning with an Incomplete Teacher
Advisor: Loes Kruger
Chair: Dick Arends

[slides] [example]

Mark Moeller, Thomas Wiener, Alaia Solko-Breslin, Caleb Koch, Nate Foster, Alexandra Silva. Automata Learning with an Incomplete Teacher. ECOOP 2023, Leibniz International Proceedings in Informatics (LIPIcs).

Martin Leucker, Daniel Neider. Learning Minimal Deterministic Automata from Inexperienced Teachers. ISoLA 2012: International Symposium on Leveraging Applications of Formal Methods, Verification and Validation.

Abstract:
Active automata learning is the act of inferring a DFA of a system by sending inputs and observing outputs, without having access to the inner workings. This has many applications in software verification and security.

A common framework to study active learning is the minimally adequate teacher (MAT). Such a teacher is an idealized oracle which can perfectly answer both membership queries and equivalence queries. However, such an oracle is often not realizable in practice. The first paper introduces the incomplete MAT (iMAT), which only has limited knowledge of the underlying DFA, meaning that the output of the system after specific inputs might not always be known. The paper generalizes the classical L* algorithm to work with this new iMAT framework.

In this talk, we define the MAT framework and look at an example of the L* algorithm. We then generalize this to the iMAT framework, and study the L*☐ algorithm. We compare this algorithm to older algorithms defined for similar formulations of the iMAT framework, which are summarized in the second paper.
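To make the two query types concrete, here is a hypothetical Python sketch of a teacher for a fixed hidden language, with the equivalence query approximated by bounded search (an ideal MAT would answer it exactly; this approximation is one reason the idealized oracle is hard to realize in practice):

```python
from itertools import product

# Hidden "system under learning": words over {a, b} with an even number of a's.
def membership(word):
    """Membership query: does the hidden system accept this word?"""
    return word.count("a") % 2 == 0


def equivalence(hypothesis, alphabet="ab", max_len=6):
    """Approximate equivalence query: search for a counterexample up to max_len.
    Returns a shortest counterexample, or None if the hypothesis survives."""
    for n in range(max_len + 1):
        for w in map("".join, product(alphabet, repeat=n)):
            if hypothesis(w) != membership(w):
                return w
    return None


# A wrong hypothesis ("accept everything") is refuted by the shortest counterexample:
assert equivalence(lambda w: True) == "a"
# The correct hypothesis survives the bounded check:
assert equivalence(membership) is None
```

An incomplete teacher (iMAT) would additionally be allowed to answer "don't know" to some membership queries, which is what the L*☐ algorithm has to cope with.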

Lunch Break
12:10–13:30

Lukas Nieuweboer
13:30–14:20

Title: Deciding Hyperproperties
Advisor: Jana Wagemaker
Chair: Lyra van Bokhoven

[slides]

Finkbeiner, B., & Hahn, C. Deciding Hyperproperties. arXiv [cs.LO], 2016.

Rabe, M. N. A temporal logic approach to information-flow control. 2016.

Abstract:
Linear Temporal Logic (LTL) is widely used in model checking to verify whether a system meets specific properties along a single computation trace. In this talk, we introduce HyperLTL, an extension of LTL that allows reasoning about behaviour across multiple traces. This generalisation enables the specification of more complex properties, such as observational determinism. We will examine the satisfiability problem for HyperLTL, showing that for one class of formulas, it is PSPACE-complete, for another class, EXPSPACE-complete, and in certain cases, undecidable. However, many practical properties can be expressed using simpler HyperLTL formulas, where satisfiability remains decidable. These results are useful, for example, in verifying that a desired property for a system is actually satisfiable before attempting to implement it.
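Observational determinism, the example property above, can be illustrated on finite trace sets. In this hypothetical Python sketch, a system is a set of traces of (low-input, low-output) observations, and the property quantifies over every pair of traces, which is exactly what plain LTL cannot express:

```python
def observational_determinism(traces):
    """For every pair of traces: if they agree on the initial low input,
    they must produce identical low-output sequences."""
    for t1 in traces:
        for t2 in traces:
            if t1[0][0] == t2[0][0]:                      # same initial low input
                if [o for _, o in t1] != [o for _, o in t2]:
                    return False                          # outputs diverge: a leak
    return True


# Each trace is a list of (low_input, low_output) observations per step.
secure = [[("x", 1), ("x", 2)], [("x", 1), ("x", 2)]]
leaky = [[("x", 1)], [("x", 2)]]                          # same input, different output
assert observational_determinism(secure)
assert not observational_determinism(leaky)
```

In HyperLTL the same property is written with two universally quantified trace variables, placing it in the decidable ∀∀ fragment.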

Simón Ruiz
14:20–15:10

Title: Finite Kleene Algebras
Advisor: Jana Wagemaker
Chair: Mark Lapidus

Tobias Kappé. An Elementary Proof of the FMP for Kleene Algebra. RAMiCS. 2024.

Dexter Kozen. On Kleene Algebras and Closed Semirings. MFCS 1990.

Abstract:
Regular languages are an interesting and often-studied area within theoretical computer science. These are precisely the languages accepted by finite state automata. Moreover, the equivalence of regular languages is decidable. Kleene algebras provide an elegant algebraic framework for studying these languages.

A Kleene algebra is an algebraic structure (like a ring or a group) defined such that the equations provable from its axioms correspond exactly to equations on regular languages (and vice versa). In 2005, it was shown that the equational theory of Kleene algebras is complete for finite models; that is, false equivalences always have a finite counterexample. This property is known as the Finite Model Property (FMP). Tobias Kappé offers a new and direct proof of this result in his paper "An Elementary Proof of the FMP for Kleene Algebra."

In this presentation, we will outline the constructions used in Kappé's proof of the FMP for Kleene algebras. In the first part, we will define Kleene algebras and present some canonical examples while highlighting interesting properties. In the second part, we will walk through the constructions required in the proof of the FMP using an illustrative example. Finally, we will sketch the proof itself. The goal of this presentation is not to explain Kappé's proof in full detail but to introduce the theory of Kleene algebras to the audience.
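Since the language model refutes every false equation, candidate identities can at least be tested mechanically: comparing two expressions on all words up to a bound refutes false ones (though it proves nothing). A small Python sketch using the standard `re` module, with hypothetical helper names:

```python
import re
from itertools import product


def agree(pat1, pat2, alphabet="ab", max_len=5):
    """Check whether two regular expressions accept the same words up to max_len.
    A refutation method only: agreement up to a bound is not a proof of equality."""
    for n in range(max_len + 1):
        for w in map("".join, product(alphabet, repeat=n)):
            if bool(re.fullmatch(pat1, w)) != bool(re.fullmatch(pat2, w)):
                return False
    return True


# The Kleene algebra identity (a + b)* = (a*b*)* holds in the language model:
assert agree(r"(a|b)*", r"(a*b*)*")
# ...whereas a*b* is strictly weaker (counterexample: "ba"):
assert not agree(r"(a|b)*", r"a*b*")
```

The FMP strengthens this picture considerably: a false equation is refuted not just by some language, but already by a finite Kleene algebra.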

Drinks
15:10–


[link to the old page with descriptions of the projects by the teachers]