Anton Golov

A1: Markel Zubia

Rush Hour is PSPACE-Complete, or "Why you should generously tip parking lot attendants"
Gary William Flake, Eric B. Baum
Theoretical Computer Science 270 (2002)

A Rush Hour puzzle consists of a 6x6 board with an arrangement of 1x2 and 1x3 cars. One edge of the board is marked as the exit, and the goal is to reach that exit with a specific car. Cars may only move along their axis; that is, a car can drive forwards or backwards, but not sideways, and cannot turn. You can try it out online. This paper studies the difficulty of Rush Hour when generalised to an n x n grid. Given a different puzzle (say, a Sudoku), can we encode it as an instance of Rush Hour in polynomial time? That is, can we (efficiently) find a Rush Hour configuration that has a solution if and only if the puzzle we started with has a solution? The paper resolves this question by showing that any problem that can be solved in polynomial space can be encoded as a Rush Hour puzzle. The construction involves defining a model of computation, the lazy reversible dual-rail (LRD) machine, which can simulate a linear-bounded Turing machine, and showing that this machine can be implemented as a Rush Hour puzzle.
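To get a feel for the problem, here is a minimal sketch (names and representation are my own, not from the paper) of the generalised decision problem: a brute-force breadth-first search over board configurations. PSPACE-completeness says that, barring a complexity-theoretic collapse, nothing fundamentally better than exploring this (potentially exponential) state space is possible.

```python
from collections import deque

# A toy solver for generalised Rush Hour on an n x n board. Each car is
# (row, col, length, 'h' or 'v'), with (row, col) its top-left cell.
# The goal: slide car 0 (assumed horizontal) to the right edge.

def cells(car):
    r, c, length, o = car
    return [(r, c + i) if o == 'h' else (r + i, c) for i in range(length)]

def moves(cars, n):
    occupied = {cell for car in cars for cell in cells(car)}
    for i, (r, c, length, o) in enumerate(cars):
        for d in (-1, 1):  # slide one step along the car's axis
            nr, nc = (r, c + d) if o == 'h' else (r + d, c)
            new = (nr, nc, length, o)
            if all(0 <= x < n and 0 <= y < n for x, y in cells(new)):
                if not (set(cells(new)) - set(cells(cars[i]))) & occupied:
                    yield cars[:i] + (new,) + cars[i + 1:]

def solvable(cars, n):
    # Breadth-first search over configurations (tuples of car tuples).
    seen, queue = {cars}, deque([cars])
    while queue:
        state = queue.popleft()
        r, c, length, o = state[0]
        if o == 'h' and c + length == n:  # car 0 touches the right edge
            return True
        for nxt in moves(state, n):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

For example, a single vertical blocker in front of the target car can be slid out of the way, while a fully jammed column cannot: `solvable(((2, 0, 2, 'h'), (1, 4, 3, 'v')), 6)` holds, but with two immobile length-3 cars filling column 4 it does not.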
Bas Terwijn

B1: (not assigned), one of the two following papers:
Extending partial combinatory algebras
Inge Bethke, Jan Willem Klop, Roel de Vrijer
Mathematical Structures in Computer Science 9(4), June 1999

Completion of partial combinatory algebras

A partial combinatory algebra (pca) is a set A with a partial application operator, satisfying a property called combinatory completeness. An equivalent definition is that an applicative structure is combinatory complete if and only if it has Curry combinators k and s, as in combinatory logic. For example, Kleene's first model K_1 defines application on the natural numbers by nm = phi_n(m), where phi_n is the n-th partial computable function. K_1 is a pca by the S-m-n theorem; in fact, the property of combinatory completeness is the formal analogue of the S-m-n theorem. A pca may be embedded in another pca in which more applications are defined. A general question is whether every pca has a completion, i.e. an embedding into a total combinatory algebra. A negative answer to this was given by Klop in 1983, cf. [1]. The goal of this project is to understand the problem, and to get an idea of the proof as far as possible. There is also other related work (e.g. by Asperti) that might be considered as well.
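The Curry combinators mentioned above are easy to write down concretely. A minimal sketch in Python (everything here is total, so it only illustrates the combinators themselves, not the partiality that the completion problem is about):

```python
# The Curry combinators from the definition of combinatory completeness,
# written as curried Python functions (application nm becomes n(m)):
#   k x y   = x
#   s x y z = (x z)(y z)
k = lambda x: lambda y: x
s = lambda x: lambda y: lambda z: x(z)(y(z))

# Combinatory completeness lets every lambda-term be compiled away into
# k and s alone; the simplest instance is the identity, i = s k k:
#   s k k x = (k x)(k x) = x
i = s(k)(k)
```

In a genuinely partial structure such as K_1, the application `x(z)(y(z))` may be undefined even when both halves exist separately, which is exactly where the completion question gets its bite.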
Hardness vs randomness
Noam Nisan, Avi Wigderson
Journal of Computer and System Sciences 49(2), October 1994

Random oracles and pseudorandom generators

For an arbitrary set A, we can ask what can be computed from A when we consider A as an oracle, i.e. when given free access to membership queries about A. This gives the notion of computation relative to A. For a complexity class C, we denote by C^A the class relativized to oracle A. Now define

  almost-C = { B | B is in C^A for almost every A }

We have the following results:

  COMP = almost-COMP (Sacks)
  BPP = almost-P (Bennett and Gill)
  PH = almost-PH (Nisan and Wigderson)

The latter result uses the theory of pseudorandom generators, which is the most challenging part of this topic. The existence of this pseudorandom generator has many other consequences, and has led to the speculation that BPP = P.
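The shape of the Nisan–Wigderson generator can be sketched in a few lines: stretch a short seed into many output bits by evaluating a hard function on nearly disjoint subsets of the seed bits (a combinatorial design). Everything below is an illustrative assumption: parity is emphatically *not* a hard function and the design is hardcoded; only the shape of the construction is faithful.

```python
# Toy shape of the Nisan-Wigderson construction: each output bit applies
# the "hard" function to the seed restricted to one block of a design
# (equal-size sets with small pairwise intersections). Small intersections
# are what keep the output bits nearly independent of each other.

def nw_generator(hard_f, design, seed):
    return [hard_f([seed[i] for i in block]) for block in design]

parity = lambda bits: sum(bits) % 2   # stand-in for a genuinely hard function

# A tiny design on 6 seed bits: four blocks of size 3,
# every pair of blocks intersecting in exactly one index.
design = [(0, 1, 2), (0, 3, 4), (1, 3, 5), (2, 4, 5)]

out = nw_generator(parity, design, [1, 1, 0, 0, 1, 0])
```

Here a 6-bit seed already yields 4 output bits; in the real construction the design is much larger and the hardness of `hard_f` is what makes the output indistinguishable from random to small circuits.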
Cynthia Kop

C1: (not assigned)
Proving Non-Looping Non-Termination Automatically
Fabian Emmes, Tim Enger, Jürgen Giesl
IJCAR 2012

This paper discusses a method to automatically prove non-termination of term rewriting systems. It particularly considers systems where traditional methods fail, because there isn't an obvious "loop".
C2: Suzan Erven
Higher-Order Critical Pairs
Tobias Nipkow
LICS 1991

This paper introduces a framework for higher-order term rewriting: term rewriting systems with simple types and lambda-abstractions. It also extends the critical pair lemma (from traditional term rewriting) to this setting.
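As a first-order warm-up for what the critical pair lemma is about (the paper lifts this to terms with types and lambda-abstraction), here is a minimal sketch, with terms as nested tuples and the two rewrite steps hand-instantiated rather than computed by general unification:

```python
# Two rewrite rules whose left-hand sides overlap:
#   (1) f(g(x)) -> x
#   (2) g(a)    -> b
# The overlap term f(g(a)) rewrites in two different ways; the resulting
# pair of terms is a critical pair. Both results are normal forms here,
# so this two-rule system is not confluent.

peak = ('f', ('g', 'a'))

def rewrite_rule1(t):
    # f(g(x)) -> x, applied at the root (here with x instantiated to a)
    if t[0] == 'f' and isinstance(t[1], tuple) and t[1][0] == 'g':
        return t[1][1]

def rewrite_rule2(t):
    # g(a) -> b, applied below the root of f(...)
    if t[0] == 'f' and t[1] == ('g', 'a'):
        return ('f', t and 'b')

critical_pair = (rewrite_rule1(peak), rewrite_rule2(peak))
```

In the higher-order setting the interesting difficulty is that computing these overlaps requires higher-order unification, which is undecidable in general; Nipkow restricts to patterns to keep it tractable.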
Freek Wiedijk

F1: (not assigned)
Relaxing Safely: Verified On-the-Fly Garbage Collection for x86-TSO
Peter Gammie, Antony L. Hosking, Kai Engelhardt
PLDI 2015

This paper describes a formalization of the correctness of a state-of-the-art garbage collector using the proof assistant Isabelle/HOL. The garbage collector is concurrent, and is designed for multi-core architectures with weak memory consistency: the x86-TSO memory model, which has "relaxed" memory semantics. The paper consists of two parts: the first half describes the garbage collector, and the second describes the formalization.
F2: Márk Széles
Bindings as Bounded Natural Functors
Jasmin Christian Blanchette, Lorenzo Gheri, Andrei Popescu, Dmitriy Traytel
POPL 2019 [extended version] [slides of a related talk]

In the Isabelle proof assistant, inductive and coinductive datatypes are not primitive in the logic (as they are in Coq and Lean), but are defined as fixpoints of bounded natural functors. This paper applies this technology to datatypes that contain binders.
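The underlying "datatypes as fixpoints of functors" idea can be sketched independently of Isabelle. A minimal illustration in Python (my own toy encoding): lists over A arise as the least fixpoint of the functor F(X) = 1 + A × X, and the fold is the unique map out of that fixpoint. Bounded natural functors refine plain functors with cardinality bounds, and this paper further equips them with binder structure; neither refinement is visible in this toy.

```python
# Lists as the least fixpoint of F(X) = 1 + A * X, with constructors
# tagged by name, and the fold (catamorphism) defined by structural
# recursion -- the derivation of exactly this scheme is what Isabelle's
# BNF-based datatype package automates.

NIL = ('Nil',)

def cons(head, tail):
    return ('Cons', head, tail)

def fold(nil_case, cons_case, xs):
    # The catamorphism: replace Nil by nil_case and Cons by cons_case.
    if xs[0] == 'Nil':
        return nil_case
    return cons_case(xs[1], fold(nil_case, cons_case, xs[2]))

xs = cons(1, cons(2, cons(3, NIL)))
total = fold(0, lambda a, acc: a + acc, xs)   # sums the list
```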
F3: (not assigned)
Generalizing Automath by means of a lambda-typed lambda calculus
N.G. de Bruijn
Mathematical logic and theoretical computer science, 1994 [sciencedirect]

De Bruijn was, with his Automath language, one of the founders of the field of type theory. He developed a variant of type theory called ΔΛ, in which the distinction between functions (λ) and function types (Π) does not exist. This variant is very interesting, but not very well known. In this paper he introduces the ΔΛ calculus and describes a way to typecheck it.
Herman Geuvers

H1: Menno Bartels
Martin Hofmann's Case for Non-Strictly Positive Data Types
Ulrich Berger, Ralph Matthes, Anton Setzer
In: 24th International Conference on Types for Proofs and Programs (TYPES 2018), editors Peter Dybjer, José Espírito Santo, and Luís Pinto; Article No. 1; pp. 1:1-1:22

The authors describe the breadth-first traversal algorithm by Martin Hofmann that uses a non-strictly positive data type. Most familiar data types are strictly positive; a type theory like Coq only accepts strictly positive data types, because for those a terminating recursive function definition scheme and an induction principle can be derived on syntactic grounds. The breadth-first traversal algorithm is interesting by itself, and termination of the algorithm is shown by implementing it in the strongly normalising extension of system F by Mendler-style recursion. The authors analyze the same algorithm by alternative verifications, first in an intensional setting using a non-strictly positive inductive definition (not just a non-strictly positive data type), and subsequently by two different algebraic reductions.
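Hofmann's algorithm is short enough to transcribe. A sketch in Python (an untyped rendering of the standard presentation, so the type-theoretic subtlety is only visible in the comments): the "routine" type is Rou = Over | Next ((Rou → [a]) → [a]), which is non-strictly positive because Rou occurs to the left of an arrow in its own constructor.

```python
# Hofmann's breadth-first traversal via the non-strictly positive type
#     Rou = Over | Next ((Rou -> [a]) -> [a]).
# Trees are (left, label, right) triples, with None for leaves.

OVER = ('over',)

def br(t, c):
    # Merge one tree into a routine, level by level.
    if t is None:
        return c
    left, x, right = t
    if c is OVER:
        return ('next', lambda k: [x] + k(br(left, br(right, OVER))))
    g = c[1]
    return ('next', lambda k: [x] + g(lambda c2: k(br(left, br(right, c2)))))

def extract(c):
    # Run a routine to produce the final list of labels.
    if c is OVER:
        return []
    return c[1](extract)

def breadth_first(t):
    return extract(br(t, OVER))
```

On the tree with root 1 and children 2 and 3, `breadth_first` yields the level order [1, 2, 3]; Python of course gives no termination guarantee, which is exactly the gap the paper's Mendler-style and algebraic analyses close.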
Jurriaan Rot

J1: Steven Bronsveld
Residual Nominal Automata
Joshua Moerman, Matteo Sammartino

Nominal automata are an elegant formalism for accepting languages over infinite alphabets. Contrary to classical finite automata, in the nominal case non-deterministic automata are strictly more expressive than deterministic automata. The paper refines this further into a small hierarchy of different forms of non-determinism. In particular, it studies residual nominal automata, through a nice combination of the theory of nominal sets and lattices.
J2: (not assigned)
Guarded Kleene Algebra with Tests: Verification of Uninterpreted Programs in Nearly Linear Time
Steffen Smolka, Nate Foster, Justin Hsu, Tobias Kappé, Dexter Kozen, Alexandra Silva
POPL 2020

The authors present GKAT, an extension of regular expressions that can be used to reason about basic imperative programs. It comes with a complete axiomatisation and an efficient decision procedure; the project is to get a global picture of the calculus and its use, and to get into the details of some of the theoretical results.
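To see how regular expressions reason about imperative programs, recall the standard Kleene-algebra-with-tests encodings of the branching constructs (with b a test and p, q program actions); GKAT takes these guarded forms as primitive and drops the unrestricted union and star, which is what enables the fast decision procedure:

```latex
% Standard KAT encodings of guarded constructs; GKAT's guarded choice
% and guarded iteration are exactly these shapes, taken as primitive.
\begin{align*}
\texttt{if } b \texttt{ then } p \texttt{ else } q \;&\equiv\; b\,p + \bar{b}\,q \\
\texttt{while } b \texttt{ do } p \;&\equiv\; (b\,p)^{*}\,\bar{b}
\end{align*}
```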
Robbert Krebbers (with Ike Mulder)

K1: Bálint Kocsis
GhostCell: Separating Permissions from Data in Rust
Joshua Yanovski, Hoang-Hai Dang, Ralf Jung, Derek Dreyer
In ICFP 2021: ACM SIGPLAN International Conference on Functional Programming

Rust is the first industry-supported programming language that combines safety and control. It uses a "permission based" type system to rule out large classes of bugs at compile time (such as buffer overflows and use-after-free), while providing low-level control over data representation and avoiding the need for a garbage collector. Unfortunately, Rust's type system is currently not able to support data structures with complex pointer sharing, such as doubly-linked lists or graphs. The paper presents a new API for Rust, called GhostCell, that allows for safe and efficient implementations of such data structures, and provides a proof (mechanized using separation logic in Coq) that the proposed API is indeed sound. The paper combines programming language design with programming language theory, and thus allows for a wide range of follow-up projects.
Jules Jacobs (with Robbert Krebbers)

K2: (not assigned)
Perceus: Garbage Free Reference Counting with Reuse
Alex Reinking, Ningning Xie, Leonardo de Moura, Daan Leijen
In PLDI 2021: Programming Language Design and Implementation

This paper introduces Perceus, an algorithm for reference counting with reuse. Reuse enables a functional-but-in-place paradigm, which allows writing in-place mutating algorithms in a purely functional style. The paper also introduces a method to decrement the reference count as soon as possible, inspired by ownership and borrowing (à la Rust), so that data is deallocated as soon as it is no longer needed (unlike standard scope-based reference counting implementations). At first sight this seems more expensive than standard scope-based reference counting, but the authors show that it enables optimizations that often remove almost all reference count operations from the fast paths. Possible follow-up projects include: looking into how this enables the Lean 4 theorem prover to be written in Lean itself, looking into the Koka language with effect handlers, or looking into the mimalloc memory allocator.
Nils Jansen

N1: Roy Willems
Robust MDPs with k-rectangular uncertainty
Shie Mannor, Ofir Mebel, Huan Xu
Mathematics of Operations Research, 2016

Markov decision processes (MDPs) are a standard model for decision-making under uncertainty. In particular, for each decision a number of outcomes are possible, and the actual outcome is chosen probabilistically. A key assumption on MDPs is that the probabilities are exactly known. Robust MDPs (or uncertain MDPs) do not require exact probabilities. Instead, they use so-called uncertainty sets that capture a range of possible probability distributions. These uncertainty sets are often assumed to be rectangular, meaning that at every choice the worst-case instance of the uncertainty set is chosen. This assumption does not always hold in practice. As such, this paper introduces a new type of uncertainty called k-rectangular uncertainty, where the worst-case instance is chosen only k times, and shows how the decision-making problem can be solved under this type of uncertainty.
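The rectangular baseline that the paper relaxes is easy to sketch. A toy robust value iteration (all numbers and names are made up for illustration): each state-action pair carries a finite uncertainty set of candidate distributions, and the adversary independently picks the worst one at every step. k-rectangularity would instead bound how often the adversary may deviate, which this toy does not model.

```python
# Toy robust value iteration under (state,action)-rectangular uncertainty.
# model[s][a] = (reward, [candidate distributions over next states]);
# the robust Bellman update takes, per action, the worst candidate.

GAMMA = 0.9

model = {
    0: {'stay': (1.0, [{0: 1.0}]),
        'go':   (0.0, [{1: 1.0}, {0: 0.5, 1: 0.5}])},
    1: {'stay': (2.0, [{1: 1.0}, {0: 0.2, 1: 0.8}])},
}

def robust_value_iteration(model, sweeps=200):
    v = {s: 0.0 for s in model}
    for _ in range(sweeps):
        # max over actions, min over the action's uncertainty set
        v = {s: max(r + GAMMA * min(sum(p * v[t] for t, p in dist.items())
                                    for dist in dists)
                    for r, dists in model[s].values())
             for s in model}
    return v
```

On this two-state example the iteration converges (the robust update is still a contraction) to roughly v[0] ≈ 12.33 and v[1] ≈ 15.07.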
Marnix Suilen (with Nils Jansen)

N2: Wietze Koops
The Mathematics of Changing One's Mind, via Jeffrey's or via Pearl's update rule
Bart Jacobs
Journal of Artificial Intelligence Research, 2019

Given the probability of a fire occurring, Pr(F), and the probability that an alarm goes off in case of a fire, Pr(A | F), we may ask: what is the probability of a fire when we hear the alarm go off, i.e., what is Pr(F | A)? In this expression it is given that the alarm A goes off; there is no question about it. This is called sharp evidence. But what if we are only 80% certain we heard alarm A going off? Then A is given by so-called fuzzy evidence. In principle, whether A is sharp or fuzzy is no problem: we can still compute Pr(F | A). However, as it turns out, in the fuzzy case there are two different ways of doing this computation, either via Pearl's rule or via Jeffrey's rule, and these two rules give different results. This paper uses channel-based reasoning to look into both rules, examine their differences, and discuss a number of examples.
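The divergence between the two rules is easy to see numerically. A minimal sketch of the fire-alarm example (the concrete probabilities are made up for illustration): Jeffrey's rule mixes the two sharp posteriors with the confidence weights, while Pearl's rule treats the confidence as virtual evidence, reweighting the prior by likelihoods.

```python
# Fuzzy evidence: we are only 80% sure the alarm went off.

p_f = 0.01                          # prior Pr(F)
p_a_f = {True: 0.9, False: 0.1}     # Pr(A | F) and Pr(A | not F)
q = 0.8                             # confidence that A actually occurred

p_a = p_f * p_a_f[True] + (1 - p_f) * p_a_f[False]       # Pr(A)
p_f_a = p_f * p_a_f[True] / p_a                          # Pr(F | A)
p_f_na = p_f * (1 - p_a_f[True]) / (1 - p_a)             # Pr(F | not A)

# Jeffrey's rule: mix the sharp posteriors with weights q and 1 - q.
jeffrey = q * p_f_a + (1 - q) * p_f_na

# Pearl's rule: treat q as a likelihood on A (virtual evidence), renormalise.
lik_f = q * p_a_f[True] + (1 - q) * (1 - p_a_f[True])
lik_nf = q * p_a_f[False] + (1 - q) * (1 - p_a_f[False])
pearl = p_f * lik_f / (p_f * lik_f + (1 - p_f) * lik_nf)
```

With these numbers Jeffrey gives about 0.0669 and Pearl about 0.0279: the same fuzzy observation, two genuinely different posteriors.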
Wieb Bosma

W1: Danish Alvi
1x1 Rush Hour with Fixed Blocks is PSPACE-complete
Josh Brunner, Lily Chung, Adam Hesterberg, Erik D. Demaine, Adam Suhl, Dylan Hendrickson, Avi Zeff
arXiv, May 2020

The Rush Hour game, consisting of 1x2 and 1x3 cars on a horizontal (square) grid that can each slide either horizontally or vertically, provides a problem (can a given car be moved to the edge?) that was shown to be PSPACE-complete in 2002. In this paper, the (apparently) more difficult case of 1x1 cars is solved in the presence of additional cars that cannot move (with the same complexity result). To me the method (constraint logic) is more interesting than the result itself: how does the method work, and how does one obtain such PSPACE-completeness results (for many games)?
W2: Robin Holen
On the hardness of knowing busy beaver values BB(15) and BB(5, 4)
Tristan Sterin, Damien Woods
arXiv, July 2021

The (uncomputable) busy beaver function BB gives the maximum number of steps that any n-state (2-symbol) deterministic halting Turing machine can make (starting from a blank tape). BB(4) is known to be 107, BB(5) is conjectured to be (and is at least equal to) 47176870, and BB(7) is known to exceed 10^(10^(10^(10^C))), with C > 10^7. There are several interesting questions related to BB, but this paper caught my interest for the explicit relation it makes with certain conjectures in elementary number theory (that is: the statements are elementary). For example, there is an explicit 15-state Turing machine that halts if and only if the following conjecture (by Erdős) is false (and thus computing BB(15) is at least as hard as resolving the conjecture). Conjecture: no power of 2 greater than 256 is the sum of distinct powers of 3. Also of interest is the relation of this with a form of Collatz's famous conjecture (that iterating the function dividing even integers by 2 and replacing odd integers x by 3x+1 eventually reaches 1, for any positive integer input), as is recent computational work on the Erdős conjecture.
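The Erdős conjecture above is easy to test by machine, which is exactly why a Turing machine searching for a counterexample ties it to BB(15). A minimal sketch (the search bound 2000 is my own choice):

```python
# No power of 2 greater than 256 is a sum of *distinct* powers of 3,
# i.e. the base-3 expansion of 2^n for n > 8 always contains the digit 2.
# A machine enumerating n and halting on a counterexample halts iff the
# conjecture is false -- so knowing BB(15) would settle the conjecture.

def distinct_powers_of_3(m):
    # m is a sum of distinct powers of 3 iff its base-3 digits are all 0 or 1.
    while m:
        if m % 3 == 2:
            return False
        m //= 3
    return True

# 256 = 243 + 9 + 3 + 1 qualifies; a finite search finds nothing larger.
hits = [n for n in range(1, 2001) if distinct_powers_of_3(2 ** n)]
```

The search finds only n = 2 (4 = 3 + 1) and n = 8 (256), matching the conjecture's "greater than 256" threshold.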
Hans Zantema

Z1: Mirja van de Pol
Mix-Automatic Sequences
Jörg Endrullis, Clemens Grabmayer and Dimitri Hendriks
Proc. Conf. on Language and Automata Theory and Applications (LATA 2013), LNCS 7810, 262-274, Springer, 2013 [slides]

Automatic sequences are the simplest infinite sequences that are not ultimately periodic. The seminal book 'Automatic Sequences' by Allouche and Shallit presents a lot of theory about this kind of sequence and its many variants. In the standard definition of automatic sequences a base k is fixed, and the i-th element of the sequence is obtained by feeding the k-ary representation of i into an automaton. The proposed paper investigates how this can be generalized to mixed number representations.
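The standard definition is easy to make concrete. A minimal sketch with the classic 2-automatic example, the Thue–Morse sequence: a two-state automaton reads the base-2 digits of i and outputs the parity of the 1-digits seen. Mix-automatic sequences generalise this by letting the base vary from digit position to digit position.

```python
# A generic k-automatic sequence: feed the base-k digits of i into a DFA
# and read the output attached to the final state.

def digits(i, base):
    # most-significant-first base-k digits of i (with [0] for i = 0)
    if i == 0:
        return [0]
    ds = []
    while i:
        ds.append(i % base)
        i //= base
    return ds[::-1]

def automatic(i, base, delta, start, out):
    state = start
    for d in digits(i, base):
        state = delta[state][d]
    return out[state]

# Thue-Morse: the state tracks the parity of 1-digits read so far.
delta = {0: {0: 0, 1: 1}, 1: {0: 1, 1: 0}}
thue_morse = [automatic(i, 2, delta, 0, {0: 0, 1: 1}) for i in range(8)]
```

The first eight values come out as 0, 1, 1, 0, 1, 0, 0, 1; note also that the automaton ignores leading zeros (digit 0 fixes each state), as the definition of automatic sequences requires.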