\documentclass[runningheads]{llncs}
\usepackage{amssymb,url,alltt,supertabular,color}
\raggedbottom
\begin{document}
\title{Is ZF a hack? \\\bigskip \large
Comparing the complexity of some \\ (formalist interpretations of) \\
foundational systems for mathematics}
\titlerunning{Is ZF a hack?}
\author{Freek Wiedijk}
\institute{Department of Computer Science, University of Nijmegen\\
Toernooiveld 1, 6525 ED Nijmegen, The Netherlands}
\maketitle
\begin{abstract}
This paper presents Automath encodings (which are also valid in LF/$\lambda P$)
of various kinds of foundations
of mathematics.
Then it compares these encodings according to their size, to find
out which foundation is the simplest.
The systems analyzed in this way are two kinds of set theory (ZFC and NF),
two systems based on Church's higher order logic (Isabelle/Pure and HOL),
three kinds of type theory (the calculus of constructions, Luo's extended calculus of constructions,
and Martin-L\"of predicative type theory) and one foundation based on category theory.
The conclusions of this paper are that the simplest system is type theory
(the calculus of constructions) but that type theories that know
about serious mathematics are not simple at all.
Set theory is one of the simpler systems too.
Higher order logic is the simplest if one looks at the number of concepts (twenty-five)
needed to explain the system.
On the other side of the scale, category theory is relatively
complex, as is Martin-L\"of's type theory.
\medskip
\noindent
(This paper is on the web, with the full Automath sources of the contexts described in it, at
\url{}.)
\end{abstract}
\section{Introduction}
\subsection{Problem}
Some time ago Bob Solovay drew my attention to the writings on proof checking
by Raph Levien.
In one of his postings on the forum called \emph{Advogato}, Raph had written that `ZF is a hack'.\footnote{
Actually his statement was not that strong.
He had written, in diary entry
\url{}:
\begin{quote}\em
I was talking $[\ldots]$ with Bram, and he called ZF set theory a `hack.' I more or less agree, but playing with it in
the context of Metamath has led me to appreciate how powerful a hack it is. With a small number of relatively
simple axioms, it gets you a rich set of infinities, but avoids the particular ones that bring the whole formal system
crumbling down. You get integers, reals, tuples, sequences (finite and infinite), transfinites, and functions from set to
set.
$[\ldots]$
%Overall, it is probably a good tradeoff.
\end{quote}
}
This statement was the reason for this paper, because I, too, believe that
ZF is a bit of a hack.
The aim of this paper is to give some quantitative information on
`how much' of a hack I consider ZF to be.
The whole point of Cantor's set theory
is (using type theoretic terminology) that $\mbox{\sf Set}$ and $\mbox{\sf Set}{\to}\mbox{\sf bool}$
should be the same thing.
And in ZF they are not.
The first are the sets and the second are the (possibly proper) classes.
Of course every set is a class, so the two kinds of objects
are related, but the relation between the two is not really simple.
This distinction is the first reason we might want to call ZF a `hack'.
In a presentation of ZF there are two levels that one
has to introduce:
first there is the level of first order predicate logic and
second there is the level of the ZF axioms.
On both levels one finds very similar concepts (for instance $\exists$ on the logic level corresponds to $\bigcup$ on the set theory level),
and it seems strange
that these levels cannot be identified.
This is different in the foundations of the HOL theorem prover \cite{har:00}.
In that system, formulas in the logic are just functions that map to the set \texttt{bool}.
So this distinction between levels is not necessary in the case of HOL:
there the logical operations are just set theoretic functions like
all the others.
This distinction is the second reason why we might
want to call ZF a `hack'.
The majority of the ZF axioms state that the
set theoretic universe is closed under certain operations.
Once Henk Barendregt asked me whether I knew the number of
ZF axioms.
When I admitted I did not know the exact number offhand, he wrote
the six ZF operations for me on a napkin (see Fig.~\ref{figure:henk};
so the answer to his question was `there are eight ZF axioms.')
He then struck out the $F``x$ replacement operation saying that
`the F in this operation stands for Fraenkel.
If you remove it from the list, you remove the F from ZF
and you get Zermelo set theory.'
\label{section:henk}
\begin{figure}
\begingroup
\normalsize
\hfill\framebox{\begin{tabular}{lll}
Set properties & 2 & ext, found \\
Set existence & 6\quad 8\enskip\strut \\
\noalign{\medskip}
\qquad $\mbox{\rm Set}(\emptyset)$ \\
\qquad $\mbox{\rm Set}(\{x,y\})$\enskip\strut \\
\qquad $\mbox{\rm Set}(\bigcup x)$ \\
\setbox0=\hbox{${\mbox{\rm Set}(F``x)}$}%
\qquad \rlap{\raise.85ex\hbox{\underbar{\hbox to\wd0{\hss}}}}\unhbox0 \\
\qquad $\mbox{\rm Set}({\cal P} x)$ \\
\qquad $\mbox{\rm Set}(\omega)$\strut
\end{tabular}}\hfill\strut
\endgroup
\caption{Henk's napkin\label{figure:henk}}
\end{figure}
If one looks at ZF like this, it seems to be about `six concepts'.
But of course that presupposes that the whole machinery of first
order predicate logic (which is needed to phrase the ZF axioms)
is already `given'.
The question that this paper wants to answer is `how many concepts
are needed to tell \emph{the whole story} of ZF, without sweeping anything
under the rug?' (and the answer turns out to be \emph{thirty-one},
as is shown in Section~\ref{section:ZFC}.)
\subsection{Approach}
ZF distinguishes between a logic level and a set level, while HOL
does not.
Therefore, one would expect the HOL system to be the simpler of the
two
(of course it is also less powerful mathematically.)
To investigate this,
I wrote an Automath context both for ZFC and for HOL Light
(of the HOL systems this system has the cleanest `kernel'.)
To my surprise, although the HOL context turned out to have fewer
`primitive notions' than the ZFC context, it was bigger.
So ZF is not that much of a hack after all!
Then I wanted to know how these two contexts compare to type theory.
I did not expect to be able to represent the rules for the inductive types
of Coq in Automath (there does not even seem to be a publication
in which they are stated precisely.)
Instead I wrote a context for the MLW$^{\rm ext}$
system, after a paper by Peter Aczel \cite{acz:99}.
This context turned out to be much bigger than both the ZFC and the
HOL context, in all respects.
Then I decided to extend my collection of Automath contexts to even more
foundational systems, to turn it into a `comparative review of
foundations of mathematics.'
The result is this paper.
To summarize the approach followed:
for each foundation of mathematics I constructed a context
in a logical framework.
This context was intentionally kept as close as possible to
some given presentation from the literature
(in all cases I used a presentation of the system `in a few pages'.)
Then I computed some statistics of these contexts to
compare the `complexity' of the different systems.
One might expect that there is a lot of variation possible in the
encoding of a foundational system as an Automath context.
However, it was my experience that it does not feel like
one has that many choices
if one follows a given presentation.
It feels like there is only one simple way to do it.
So if it is claimed here that a system `needs 25 primitive notions',
I would be surprised if a variation on the encoding would
differ by more than one or two from that.
For convenience I used Automath as a logical framework,
but the contexts are also valid in the logical framework LF/$\lambda P$.
It is well-known that Automath can be used as a logical framework:
N.G.~de Bruijn called it the `Automath restaurant'.
He compares intuitionistic mathematics to kosher food:
just like in a non-kosher restaurant one might be able to order kosher food,
one can `order' constructive mathematics in Automath by working
in an appropriate context.
This paper is like the menu of the Automath restaurant.
In the contexts from this paper we use higher order abstract syntax
(HOAS).
This means that bindings in the object logic are encoded by bindings
in the framework.
For this reason, we do not have to deal with variable names or de Bruijn indices.
More importantly it means that we never will have to model the contexts of the
object logic:
the Automath contexts will take care of that for us.
This is the main respect in which the Automath rendering of our
source texts deviates from the originals.
Often there is quite some talk in these texts about contexts $\Gamma$, which
is ignored completely in the Automath version.
Another deviation caused by our use of HOAS is that \emph{instantiation}
often does not need a primitive notion,
as the Automath binding will take care of that as well.
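As a concrete illustration, the universal quantifier of first order logic can be declared in a logical framework as follows (a Lean-style sketch with names of our own choosing; the corresponding Automath declarations appear in Section~\ref{section:FOL}):

```lean
axiom prop : Type
axiom term : Type
axiom proof : prop → Type

-- HOAS: the binder of the object logic is a framework-level function,
-- so no variable names, de Bruijn indices or object contexts are needed
axiom fall : (term → prop) → prop
axiom fall_intro (p : term → prop) :
  ((z : term) → proof (p z)) → proof (fall p)
axiom fall_elim (p : term → prop) :
  proof (fall p) → (z : term) → proof (p z)
```

Note that instantiation of the quantified formula is handled by framework application \texttt{p z}, so no separate substitution primitive is needed.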
\medskip
\noindent
My personal philosophy of mathematics is that of
`full-blooded' Platonism (when I do not think too hard about it),
or that of fictionalism (when I do).
Of course as Mark Balaguer explains in \cite{bal:98} both are basically the same thing,
and fictionalism is just a fancy name for formalism.
Another aspect of my philosophy of mathematics:
I do not like constructive mathematics very much
(apparently my taste runs towards Platonist fiction.)
According to Bas Spitters this is strange.
He claims that a formalist should not mind which formal system he is using.
And in some sense he is right!
When I look at the contexts from this paper,
the Platonist one of ZFC does not seem very different to me,
emotionally, from the constructivist one of MLW$^{\rm ext}$.
(Of course some constructivists probably will think that encoding the rules
of constructive mathematics as a formalist system misses the point entirely
in the first place.)
\subsection{Related Work}
Contexts for logical systems are the subject of the field
of logical frameworks.
See \cite{pfe:99} for an overview of this field.
%
Popular systems for formalizing logics in a logical framework are
Twelf \cite{pfe:sch:02} and Isabelle \cite{pau:03}.
\subsection{Contribution}
The contexts that we present here are nothing special,
as far as modeling logics in a logical framework goes.
The main differences with other work are:
\begin{itemize}
\item
the use of the Automath system as the logical framework;
\item
the attempt to model the contexts as closely as possible
to a given informal presentation of the system;
\item
the comparison in a \emph{quantitative} way, by collecting
statistics about the contexts.
\end{itemize}
\subsection{Outline}
This paper is organized as follows.
We start in the next section by presenting a small Automath context -- for first order
predicate logic -- in full detail.
The other contexts will not be given explicitly, but will just be summarized.
For the ZFC and HOL contexts there will be a full list of the primitive notions
in the context.
For the other contexts there will be just a table of counts and a list
of Automath types.
The contexts are grouped according to paradigm.
The set theory contexts are in Section~\ref{section:set}.
The contexts based on higher order logic are in Section~\ref{section:hol}.
The type theory contexts are in Section~\ref{section:type}.
The category theory context is in Section~\ref{section:cat}.
\section{FOL: first order predicate logic}\label{section:FOL}
Here is an Automath context for first order logic:
\noindent\rule{\hsize}{\fboxrule}\vspace{-\bigskipamount}
\paragraph*{the types}
\begingroup
%\footnotesize
\begin{alltt}
* \underbar{prop} : TYPE := PRIM
* [a:prop] \underbar{proof} : TYPE := PRIM
* \underbar{term} : TYPE := PRIM
\end{alltt}
\endgroup
\paragraph*{first order formulas}
\begingroup
%\footnotesize
\begin{alltt}
* \underbar{false} : prop := PRIM
a * [b:prop] \underbar{imp} : prop := PRIM
* [p:[z,term]prop] \underbar{for} : prop := PRIM
\end{alltt}
\vspace{-\bigskipamount}
\begin{alltt}
* [x:term][y:term] \underbar{eq} : prop := PRIM
\end{alltt}
\vspace{-\bigskipamount}
\begin{alltt}
a * \underbar{not} : prop := imp(a,false)
\end{alltt}
\endgroup
\paragraph*{natural deduction}
\begingroup
%\footnotesize
\begin{alltt}
b * [_1:[_,proof(a)]proof(b)] \underbar{imp_intro} : proof(imp(a,b)) := PRIM
b * [_1:proof(imp(a,b))][_2:proof(a)] \underbar{imp_elim} : proof(b) := PRIM
p * [_1:[z,term]proof(p)] \underbar{for_intro} : proof(for(p)) := PRIM
p * [_1:proof(for(p))][z:term] \underbar{for_elim} : proof(p) := PRIM
a * [_1:proof(not(not(a)))] \underbar{classical} : proof(a) := PRIM
\end{alltt}
\vspace{-\bigskipamount}
\begin{alltt}
x * \underbar{eq_intro} : proof(eq(x,x)) := PRIM
y * [q:[z,term]prop][_1:proof(eq(x,y))][_2:proof(q)]
\underbar{eq_elim} : proof(q) := PRIM
\end{alltt}
\endgroup
\vspace{-\bigskipamount}
\noindent\rule{\hsize}{\fboxrule}\par\medskip
\noindent
(This context is part of the contexts for the foundations
built on first order predicate logic.
It is the start of the ZFC and NF set theories in Section~\ref{section:set}, and
-- with minor modifications to get many-sorted logic -- of McLarty's axiomatic treatment of category theory in Section~\ref{section:cat}.)
We will summarize Automath contexts like this one in two ways.
First, we will present the counts of the primitive notions
(those are the Automath notions of which the body of the definition is \texttt{PRIM})
in a table:
\begin{center}
\begin{tabular}{lcr}
the types &$\quad$& 3 \\
first order formulas && 4 \\
natural deduction && 7 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\emph{total} && 14
\end{tabular}
\end{center}
Second, we will focus on the Automath \texttt{TYPE}s in the context
(because they are the most interesting part of it.)
We will show them in the following style:
\begin{center}\tt
\begin{tabular}{rcl}
&& prop \\
{}[prop] && proof \\
&& term
\end{tabular}
\end{center}
This means that \texttt{prop} is a type without any arguments
(it represents the first order formulas),
that \texttt{proof} is a type that takes one argument of
type \texttt{prop}
(it represents the provability of its argument:
the type is inhabited if and only if the formula is provable)
and that \texttt{term} is also a type without arguments
(it represents the first order terms).
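As an illustration of provability-as-inhabitation, here is a small sketch in Lean-style syntax (with hypothetical names mirroring the Automath declarations above) that constructs a proof term for \texttt{imp(a,a)}:

```lean
-- framework sketch of part of the FOL context (names are ours)
axiom prop : Type
axiom proof : prop → Type            -- inhabited iff the formula is provable
axiom imp : prop → prop → prop
axiom imp_intro (a b : prop) : (proof a → proof b) → proof (imp a b)

-- provability as inhabitation: a closed proof term for imp(a,a)
example (a : prop) : proof (imp a a) :=
  imp_intro a a (fun h => h)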
\section{Set theories}\label{section:set}
\subsection{ZFC: Zermelo-Fraenkel set theory with the Axiom of Choice}\label{section:ZFC}
The ZFC context is based on Henk's napkin that was shown in Section~\ref{section:henk}.
It contains thirty-two primitive notions (thirty-one for ZF plus one for AC):
\begin{center}
\begin{supertabular}{lrcccl}
\rlap{\em first order logic} \\
& 1. &$\enskip$& &$\enskip$& propositions \\
& 2. && && proofs \\
& 3. && && sets \\
& 4. && $\bot$ && false \\
& 5. && $\to$ && implication \\
& 6. && $\forall$ && universal quantifier \\
& 7. && $=$ && equality \\
& 8. && $\in$ && element predicate \\
& 9. && $\to_I$ && implication introduction \\
& 10. && $\to_E$ && implication elimination (modus ponens) \\
& 11. && $\forall_I$ && universal quantifier introduction \\
& 12. && $\forall_E$ && universal quantifier elimination \\
& 13. && && classical logic (excluded middle) \\
& 14. && && reflexivity of $=$ \\
& 15. && && substitution property of $=$ \\
\rlap{\em definition by cases} \\
& 16. && && definition by cases \\
& 17. && && definition by cases axiom \\
\rlap{\em set theory} \\
& 18. && $\emptyset$ && empty set \\
& 19. && $\{x,y\}$ && pair set \\
& 20. && $\bigcup x$ && union \\
& 21. && $F``x$ && replacement operation \\
& 22. && ${\cal P} x$ && power set \\
& 23. && $\omega$ && infinity \\
\rlap{\em the axioms} \\
& 24. && && \emph{extensionality} \\
& 25. && && \emph{foundation} \\
& 26. && && \emph{empty set axiom} \\
& 27. && && \emph{pair set axiom} \\
& 28. && && \emph{union axiom} \\
& 29. && && \emph{replacement} \\
& 30. && && \emph{power set axiom} \\
& 31. && && \emph{axiom of infinity} \\
\rlap{\em choice} \\
& 32. && AC && \rlap{\emph{axiom of choice}}\phantom{implication elimination (modus ponens)} \\
\end{supertabular}
\end{center}
\noindent
So these are the concepts that one has to
explain to introduce ZFC.
One might object to the duplicate appearance of the notions
corresponding to Henk's six operations (items 18--23 and
items 26--31).
But I think it is fair to distinguish between
the empty set (labeled `$\emptyset$') and the
empty set axiom (labeled `empty set axiom').
It is analogous to distinguishing between
the logical operations as formula constructors,
and their natural deduction rules as proof constructors.
The definition by cases construction is needed to be able to derive
the comprehension axiom from the replacement axiom.
One can shift things around a bit, for instance one can
have the comprehension axiom as primitive notion and then derive the definition by cases from
that, but that does not make much of a difference in the complexity
of the context.
The solution that is presented here seems reasonable to us.
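One standard version of this derivation (sketched here for concreteness; it is not taken verbatim from the Automath context) takes a set $a$ and a formula $\varphi$, and uses definition by cases to build the function for the replacement operation:

```latex
F(x) =
  \begin{cases}
    \{x\}     & \text{if } \varphi(x) \\
    \emptyset & \text{otherwise}
  \end{cases}
\qquad\text{so that}\qquad
\{\,x \in a \mid \varphi(x)\,\} = \bigcup F``a.
```

Replacement gives that $F``a$ is a set, and the union axiom then yields the comprehension instance.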
To summarize this ZFC context we have the following counts:
\begin{center}
\begin{tabular}{lcr}
first order logic &$\quad$& 15 \\
definition by cases && 2 \\
set theory && 6 \\
the axioms && 8 \\
choice && 1 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\emph{total} && 32
\end{tabular}
\end{center}
and the following types:
\begin{center}\tt
\begin{tabular}{rcl}
&& prop \\
{}[prop] && proof \\
&& set
\end{tabular}
\end{center}
These types are items 1--3 from the list.
Of course they are just the types for the encoding of
first order logic from the previous section.
The reason that in the previous section there were only
14 primitive notions for first order logic, while here there
are 15, is that here we also have the $\in$ predicate.
\subsection{NF: Quine's set theory of New Foundations}
The NF context is based on an e-mail explanation of that system
by Randall Holmes.
It has the following counts:
\begin{center}
\begin{tabular}{lcr}
first order logic &$\quad$& 15 \\
stratification levels && 3 \\
stratified formulas && 14 \\
new foundations && 5 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\emph{total} && 37
\end{tabular}
\end{center}
and the following types:
\begin{center}\tt
\begin{tabular}{rcl}
&& prop \\
{}[prop] && proof \\
&& set \\
&& nat \\
&& prop' \\
{}[nat] && set' \\
{}[prop][prop'] && same\char`\_prop \\
{}\llap{[n:nat]}[set][set'(n)] && same\char`\_set \\
{}[prop] && axiom\char`\_scheme
\end{tabular}
\end{center}
The first three types are those of first order logic, like before.
The type \texttt{nat} holds the natural numbers, for the
stratification levels.
The types \texttt{prop'} and \texttt{set'} are for the representation
of \emph{stratified} first order formulas.
These stratified formulas
are just syntactic objects; there is no deduction system for them
in the context.
An example of a constructor of \texttt{prop'} is the stratified
version of the universal quantifier:
\begin{center}
\verb|* [n:nat][p':[z',set'(n)]prop'] for' : prop' := PRIM|
\end{center}
The types \texttt{same\char`\_prop} and \texttt{same\char`\_set} together
represent the judgment that a formula corresponds to some stratified
counterpart.
For instance the constructor of \texttt{same\char`\_prop}
for the universal quantifier is:
\begin{flushleft}
\verb|n * [p:[z,set]prop][p':[z',set'(n)]prop']|\\
\verb| [_1:[z,set][z',set'(n)][_:same_set(n,z,z')]same_prop(p,p')]|$\hspace{-2em}$\\
\verb| same_for : same_prop(for(p),for'(n,p')) := PRIM|
\end{flushleft}
(Randall Holmes made the observation that one can avoid the
\texttt{same\char`\_set} predicate by having for each \texttt{n}
a function from \texttt{set} to \texttt{set'(n)}.
Taking this alternative approach would hardly change the complexity of the context.)
Finally the \texttt{axiom\char`\_scheme} type represents
the judgment that a formula is an instance of the NF comprehension axiom.
This is needed because the \texttt{same\char`\_prop} judgment only
works for \emph{closed} formulas.
But to get the full power of the system,
NF comprehension also needs to be there for
formulas that have free variables.
So the \texttt{axiom\char`\_scheme} judgment is used to put sufficiently many
universal quantifiers around the axiom first to turn it into a closed formula.
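One plausible shape for such a closing judgment, in Lean-style syntax, is the following (these constructors are our illustration only, not the actual ones from the NF context):

```lean
axiom prop : Type
axiom set  : Type
axiom proof : prop → Type
axiom fall : (set → prop) → prop    -- universal quantifier (HOAS)
axiom axiom_scheme : prop → Type    -- `is an instance of NF comprehension'

-- a free variable is discharged by quantifying over it:
axiom scheme_all (p : set → prop) :
  ((x : set) → axiom_scheme (p x)) → axiom_scheme (fall p)
-- a closed instance of the scheme is provable outright:
axiom scheme_axiom (a : prop) : axiom_scheme a → proof a
```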
\section{Systems based on higher order logic}\label{section:hol}
\subsection{Isabelle/Pure}
The first half of the context for the Isabelle/Pure logic is
based on slides by Stefan Berghofer \cite{ber:00}.
Later I discovered Section 5.2 of \cite{pau:03}.
The equality rules have been modeled after
the deduction rules from that section.
The Isabelle/Pure context has the following counts:
\begin{center}
\begin{tabular}{lcr}
meta logic &$\quad$& 8 \\
proof terms && 5 \\
equality && 10 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\emph{total} && 23
\end{tabular}
\end{center}
and the following types:
\begin{center}\tt
\begin{tabular}{rcl}
&& type \\
{}[type] && term \\
{}[term(prop)] && proof
\end{tabular}
\end{center}
These types are the equivalents for higher order logic
of the types for first order logic from Section~\ref{section:FOL}.
But note that the \texttt{prop} and \texttt{term} types have been
unified by having a \texttt{prop} object of type \texttt{type}.
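The unification can be seen directly in a Lean-style transcription of the three types (names as in the table above):

```lean
axiom type : Type               -- the object-level types
axiom term : type → Type        -- terms, indexed by their object-level type
axiom prop : type               -- prop is just one object-level type...
axiom proof : term prop → Type  -- ...so formulas are terms of type prop
```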
\subsection{HOL Light}
The HOL Light system
\cite{har:00}
has a very elegant logical kernel.
This makes it very obvious what should be in a context that
corresponds to the system.
In fact, for each primitive notion in the HOL Light context
we can give an ML expression that corresponds to it.
In the following list we give these expressions together with
their ML types:
\begin{center}
\begin{supertabular}{lrclcl}
\rlap{\texttt{type.ml}} \\
& 1. &$\enskip$& \texttt{hol\char`\_type} &$\enskip$& \emph{type} \\
& 2. && \texttt{`:bool`} && \texttt{hol\char`\_type} \\
& 3. && \texttt{`:A->B`} && \texttt{hol\char`\_type} \\
\rlap{\texttt{term.ml}} \\
& 4. && \texttt{term} && \emph{type} \\
& 5. && \texttt{Comb} && \texttt{term $\times$ term $\to$ term} \\
& 6. && \texttt{Abs} && \texttt{term $\times$ term $\to$ term} \\
& 7. && \texttt{`(=)`} && \texttt{term} \\
\rlap{\texttt{thm.ml}} \\
& 8. && \texttt{thm} && \emph{type} \\
& 9. && \texttt{REFL} && \texttt{term $\to$ thm} \\
& 10. && \texttt{TRANS} && \texttt{thm $\to$ thm $\to$ thm} \\
& 11. && \texttt{MK\char`\_COMB} && \texttt{thm $\times$ thm $\to$ thm} \\
& 12. && \texttt{ABS} && \texttt{term $\to$ thm $\to$ thm} \\
& 13. && \texttt{BETA} && \texttt{term $\to$ thm} \\
& 14. && \texttt{EQ\char`\_MP} && \texttt{thm $\to$ thm $\to$ thm} \\
& 15. && \texttt{DEDUCT\char`\_ANTISYM\char`\_RULE} && \texttt{thm $\to$ thm $\to$ thm} \\
& 16. && \emph{new basic type definition} && \texttt{hol\char`\_type} \\
& 17. && \emph{abs} && \texttt{term} \\
& 18. && \emph{rep} && \texttt{term} \\
& 19. && $\vdash \mathop{\emph{abs}}(\mathop{\emph{rep}}(a)) = a$ && \texttt{thm} \\
& 20. && $\vdash P(r) = (\mathop{\emph{rep}}(\mathop{\emph{abs}}(r)) = r)$\enskip\strut && \texttt{thm} \\
\rlap{\texttt{num.ml}} \\
& 21. && \texttt{`:ind`} && \texttt{hol\char`\_type} \\
& 22. && \texttt{INFINITY\char`\_AX} && \texttt{thm} \\
\rlap{\texttt{class.ml}} \\
& 23. && \texttt{ETA\char`\_AX} && \texttt{thm} \\
& 24. && \texttt{`(@)`} && \texttt{term} \\
& 25. && \rlap{\texttt{SELECT\char`\_AX}}\phantom{$\vdash P(r) = (\mathop{\emph{rep}}(\mathop{\emph{abs}}(r)) = r)$} && \rlap{\texttt{thm}}\phantom{\texttt{term $\times$ term $\to$ term}} \\
\end{supertabular}
\end{center}
\noindent
The items are headed with the names of the HOL Light source files
in which they are implemented.
Note that the ten basic HOL Light inference rules (see Section~5.3 of \cite{har:00})
are only 7 of the 25 primitive notions (items 9--15).
To summarize the HOL Light context we have the following counts:
\begin{center}
\begin{tabular}{lcr}
\texttt{type.ml} &$\quad$& 3 \\
\texttt{term.ml} && 4 \\
\texttt{thm.ml} && 13 \\
\texttt{bool.ml} && 0 \\
\texttt{num.ml} && 2 \\
\texttt{class.ml} && 3 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\emph{total} && 25
\end{tabular}
\end{center}
and the following types:
\begin{center}\tt
\begin{tabular}{rcl}
&& type \\
{}[type] && term \\
{}\llap{[term}(bool)] && thm
\end{tabular}
\end{center}
(Apart from the names -- \texttt{prop} is called \texttt{bool} here, and \texttt{proof} is called \texttt{thm} -- these are exactly the same types as the ones in the Isabelle/Pure
context.)
\section{Type theories}\label{section:type}
\subsection{CC/$\lambda C$: the Calculus of Constructions in the form of a Proper Type System}
The context for the calculus of constructions is based on
a tutorial paper about proper type systems by Henk Barendregt
\cite{bar:99}.
It has the following counts:
\begin{center}
\begin{tabular}{lcr}
terms &$\quad$& 4 \\
specifications && 3 \\
judgments && 2 \\
equality && 7 \\
proper type system && 5 \\
$\lambda C$ sorts && 4 \\
$\lambda C$ axioms && 1 \\
$\lambda C$ rules && 4 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\emph{total} && 30
\end{tabular}
\end{center}
and the following types:
\begin{center}\tt
\begin{tabular}{rcl}
&& pterm\hspace{5em} \\
{}[pterm] && sort \\
{}[pterm][pterm] && axiom \\
{}[pterm][pterm][pterm] && rule \\
{}[pterm][pterm] && in \\
{}[pterm][pterm] && eq
\end{tabular}
\end{center}
The type \texttt{pterm} is the type of pseudo-terms.
The next three types encode the information about the proper type system:
the type \texttt{sort($s$)} is inhabited when $s$ is a sort of the
PTS,
the type \texttt{axiom($c$,$s$)} is inhabited when $c:s$ is an axiom
of the PTS,
and the type \texttt{rule($s_1$,$s_2$,$s_3$)} is inhabited when
$(s_1,s_2,s_3)$ is a rule of the PTS.
The type \texttt{in($A$,$B$)} encodes the judgment $A:B$.
Finally the type \texttt{eq($A$,$B$)} represents
$A =_{\beta} B$.
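In Lean-style syntax these six types could be declared as follows (a sketch with names of our own; \texttt{in} and \texttt{axiom} are renamed because they clash with keywords):

```lean
axiom pterm : Type                   -- pseudo-terms
axiom sort  : pterm → Type           -- sort s: s is a PTS sort
axiom ax    : pterm → pterm → Type   -- ax c s: c : s is a PTS axiom
axiom rule  : pterm → pterm → pterm → Type
axiom isin  : pterm → pterm → Type   -- isin A B: the judgment A : B
axiom peq   : pterm → pterm → Type   -- peq A B: A is beta-equal to B

-- a sample inference rule of a PTS: every axiom pair is derivable
axiom ax_derivable (c s : pterm) : ax c s → isin c s
```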
It might seem strange that the $\lambda C$ PTS has two sorts ($*$ and $\Box$),
while there are four primitive notions about the $\lambda C$ sorts in the
list.
These four notions are:
\begin{center}
\begin{tabular}{l}
\verb|* star : pterm := PRIM|\\
\verb|* box : pterm := PRIM|\\
\noalign{\smallskip}
\verb|* sort_star : sort(star) := PRIM|\\
\verb|* sort_box : sort(box) := PRIM|
\end{tabular}
\end{center}
\subsection{ECC: Luo's Extended Calculus of Constructions}
The context for the extended calculus of constructions ECC
is based on the original paper about the system \cite{luo:89}.
It has the following counts:
\begin{center}
\begin{tabular}{lcr}
universe levels &$\quad$& 3 \\
terms && 10 \\
conversion && 12 \\
type cumulativity && 8 \\
inference rules && 13 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\emph{total} && 46
\end{tabular}
\end{center}
and the following types:
\begin{center}\tt
\begin{tabular}{rcl}
&& omega\hspace{4em} \\
&& pterm \\
{}[pterm][pterm] && eq \\
{}[pterm][pterm] && sub \\
{}[pterm][pterm] && in
\end{tabular}
\end{center}
\noindent
The type \texttt{omega} holds the natural numbers used to encode
the universe levels.
The type \texttt{pterm} is for the ECC terms.
The types \texttt{eq($A$,$B$)}, \texttt{sub($A$,$B$)} and \texttt{in($A$,$B$)}
represent respectively $A\simeq B$, $A\preceq B$ and $A:B$
(see p.~2 of \cite{luo:89} for the meaning of those symbols.)
\subsection{MLW$^{\rm ext}$: extensional Martin-L\"of type theory with W-types but no type universes}
Originally we tried to formalize Martin-L\"of type theory from
the standard reference
about the subject \cite{nor:pet:smi:90}, but there the rules
are scattered throughout the book.
Then we found a very nice paper by Peter Aczel \cite{acz:99},
that gives the rules of the system compactly in a few pages (pp.~7--9)
and even has a nice three letter acronym for the system,
so that type theory's MLW can share this kind of identification with set theory's ZFC.
The context for MLW$^{\rm ext}$ has the following counts:
\begin{center}
\begin{tabular}{lcr}
terms &$\quad$& 1 \\
judgments && 4 \\
equality rules && 8 \\
congruence rules && 2 \\
the empty type && 4 \\
the unit type && 7 \\
the Booleans && 13 \\
product types && 9 \\
sum types && 11 \\
W-types && 8 \\
extensionality && 10 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\emph{total} && 77
\end{tabular}
\end{center}
and the following types:
\begin{center}\tt
\begin{tabular}{rcl}
&& pterm \\
{}[pterm] && type \\
{}[pterm][pterm] && eq\char`\_type\hspace{5em} \\
{}[pterm][pterm] && in \\
{}[pterm][pterm][pterm] && eq\char`\_in
\end{tabular}
\end{center}
The first type is for pseudoterms.
The latter four types are the four basic judgments
of Martin-L\"of type theory (see Section 4.1 of \cite{nor:pet:smi:90}):
\begin{center}
\begin{tabular}{l}
$A\;\textit{set}$ \\
$A = B$ \\
$a \in A$ \\
\rlap{$a = b \in A$}\phantom{$M_1 = M_2 : M$}
\end{tabular}
\end{center}
In \cite{acz:99} these judgments are written (see the description of \emph{pseudobody} on p.~3) as:
\begin{center}
\begin{tabular}{l}
$M\;\textsf{type}$ \\
$M_1 = M_2$ \\
$M_0 : M$ \\
$M_1 = M_2 : M$
\end{tabular}
\end{center}
\medskip
\noindent
Our Automath specifications of type theory all use a type
for \emph{pseudoterms}.
This means that it is possible to write Automath terms that are well-typed
in the Automath sense (they have type \texttt{pterm}) but that do not correspond
to terms that are well-typed in the type theory.
The judgments are then used to encode what it means for a pseudoterm to be
well-typed in the type theory itself.
An alternative would be to put the type theory types in the Automath types themselves,
similar to the way that the types are encoded in the higher order logic contexts.
This would lead to the following types:
\begin{center}\tt
\begin{tabular}{rcl}
&& type \\
{}[type] && term \\
{}[type][type] && eq\char`\_type\hspace{4em} \\
{}[A:type][term(A)][term(A)] && eq
\end{tabular}
\end{center}
However, this is less faithful to the usual descriptions
of type theory.
If one uses such an encoding, then \emph{conversion}
needs an explicit Automath function in the terms.
Thorsten Altenkirch explained to me that addition of such a function to a system
necessitates
the addition of several extra rules.
Alternatively one might replace the equality of type theory (where both
sides have the same type) by \emph{John-Major equality} \cite{mcb:99}.
This would lead to the following set of types:
\begin{center}\tt
\begin{tabular}{rcl}
&& type \\
{}[type] && term \\
{}[type][type] && eq\char`\_type\hspace{4em} \\
{}\phantom{[A:type][term(A)][term(A)]}\llap{[A:type][B:type][term(A)][term(B)]} && eq
\end{tabular}
\end{center}
Note that replacing the judgment $a = b \in A$ by the judgment $a \in A = b \in B$ would be a
significant departure from traditional Martin-L\"of type theory.
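The difference between the two equality judgments can be stated compactly in Lean-style syntax (hypothetical names, following the two tables above):

```lean
axiom ty : Type
axiom term : ty → Type
-- homogeneous equality: both sides live in the same type A
axiom eq_homo (A : ty) : term A → term A → Type
-- John-Major (heterogeneous) equality: the sides may have different types
axiom eq_jm (A B : ty) : term A → term B → Type
```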
\section{Category theory}\label{section:cat}
\subsection{McLarty's axiomatization of a well-pointed topos with natural numbers and choice}
The context for category theory as a foundation of mathematics
is based on an e-mail message \cite{mcl:98} by Colin McLarty to the FOM mailing
list.
It has the following counts:
\begin{center}
\begin{tabular}{lcr}
many-sorted first order logic &$\quad$& 15 \\
category theory && 8 \\
terminal object and cartesian products && 9 \\
non-trivial Boolean topos && 10 \\
natural numbers && 1 \\
well-pointed topos with choice && 2 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\emph{total} && 45
\end{tabular}
\end{center}
and the following types:
\begin{center}\tt
\begin{tabular}{rcl}
&& prop \\
{}[prop] && proof \\
&& sort \\
{}[sort] && elt
\end{tabular}
\end{center}
These types are a variation on the types of Section~\ref{section:FOL},
for \emph{many-sorted} first order logic.
Instead of \texttt{term} the rules use \texttt{elt(s)}, with
\texttt{s} a \texttt{sort} of the system.
The two sorts of category theory then are introduced by the rules:
\begin{center}
\begin{tabular}{l}
\verb|* object_sort : sort := PRIM|\\
\verb|* arrow_sort : sort := PRIM|\\
\noalign{\smallskip}
\verb|* object : TYPE := elt(object_sort)|\\
\verb|* arrow : TYPE := elt(arrow_sort)|
\end{tabular}
\end{center}
Apart from the rules for many-sorted first order logic, this context
is just a long list of axioms.
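To give the flavour, here is a sketch in Lean 4 notation of how such a two-sorted signature starts out. The names \texttt{dom}, \texttt{cod}, \texttt{idArrow} and \texttt{comp} are the standard categorical primitives, chosen here only for illustration; McLarty's actual axiom list is in \cite{mcl:98}.

```lean
-- Two primitive sorts, as in the Automath rules above.
axiom object : Type
axiom arrow  : Type

-- Illustrative primitives (hypothetical names): every arrow has a
-- domain and a codomain, every object has an identity arrow, and
-- composition is only defined for composable pairs, modeled here
-- by a proof argument.
axiom dom     : arrow → object
axiom cod     : arrow → object
axiom idArrow : object → arrow
axiom comp    : (g f : arrow) → dom g = cod f → arrow

-- A typical axiom from the long list: composition respects domains.
axiom dom_comp : ∀ (g f : arrow) (h : dom g = cod f),
  dom (comp g f h) = dom f
```

In the actual context all of this lives inside many-sorted first order logic, so \texttt{arrow → object} is really a function symbol between sorts and \texttt{=} is the equality predicate of the logic.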
\section{Conclusion}
\subsection{Discussion}
The conclusions of this paper are that:
\begin{center}\em
All these foundational systems have roughly the same complexity.
\end{center}
and:
\begin{center}\em
The number of primitive concepts in these systems is bigger \\ than one would expect.
\end{center}
To support the first claim we have put the statistics of the
contexts for these systems in three bar diagrams (Fig.~\ref{figure:bar1}--\ref{figure:bar3}).
%
\definecolor{gray}{rgb}{.5,.5,.5}%
\definecolor{lightgray}{rgb}{.8,.8,.8}%
\definecolor{lightergray}{rgb}{.9,.9,.9}%
\newdimen\statbarunit
\def\statbar#1{\raise0.3ex\hbox{\rule{#1\statbarunit}{3pt}}}%
\def\graystatbar#1{\color{gray}{\statbar{#1}}}%
\def\extrastatbar#1{}%
\def\grayextrastatbar#1{}%
%
\begin{figure}
\begingroup
\normalsize
\statbarunit=4.06pt
\hfill\begin{tabular}{rcl}
FOL && \graystatbar{14}\grayextrastatbar{1} \\
&& \rlap{\emph{set theory}} \\
ZFC && \statbar{32}\extrastatbar{15} \\
NF && \statbar{37}\extrastatbar{4} \\
&& \rlap{\emph{higher order logic}} \\
Pure && \graystatbar{23}\grayextrastatbar{3} \\
HOL && \statbar{24}\extrastatbar{15} \\
&& \rlap{\emph{type theory}} \\
CC && \graystatbar{30}\grayextrastatbar{0} \\
ECC && \graystatbar{46}\grayextrastatbar{0} \\
MLW && \statbar{77}\extrastatbar{3} \\
&& \rlap{\emph{category theory}} \\
Topos && \statbar{45}\extrastatbar{14}
\end{tabular}\hfill\strut
\endgroup
\caption{Comparing systems according to the number of primitive notions\label{figure:bar1}}
\end{figure}%
\begin{figure}
\begingroup
\normalsize
\statbarunit=.267pt
\hfill\begin{tabular}{rcl}
FOL && \graystatbar{251} \\
&& \rlap{\emph{set theory}} \\
ZFC && \statbar{774} \\
NF && \statbar{640} \\
&& \rlap{\emph{higher order logic}} \\
Pure && \graystatbar{532} \\
HOL && \statbar{903} \\
&& \rlap{\emph{type theory}} \\
CC && \graystatbar{515} \\
ECC && \graystatbar{696} \\
MLW && \statbar{1171} \\
&& \rlap{\emph{category theory}} \\
Topos && \statbar{1153}
\end{tabular}\hfill\strut
\endgroup
\caption{Comparing systems according to the (compressed) size of the specification\label{figure:bar2}}
\end{figure}%
\begin{figure}
\begingroup
\normalsize
\statbarunit=.0822pt
\hfill\begin{tabular}{rcl}
FOL && \graystatbar{262} \\
&& \rlap{\emph{set theory}} \\
ZFC && \statbar{1570} \\
NF && \statbar{972} \\
&& \rlap{\emph{higher order logic}} \\
Pure && \graystatbar{1200} \\
HOL && \statbar{2793} \\
&& \rlap{\emph{type theory}} \\
CC && \graystatbar{784} \\
ECC && \graystatbar{1636} \\
MLW && \statbar{3808} \\
&& \rlap{\emph{category theory}} \\
Topos && \statbar{2601}
\end{tabular}\hfill\strut
\endgroup
\caption{Comparing systems according to the size of the $\lambda$-term\label{figure:bar3}}
\end{figure}%
From these diagrams we draw the following conclusions.
Set theory is not as simple as one might expect (it needs 32 concepts!),
but it still is one of the
simplest foundations.
Therefore the answer to the question in the title of this paper seems to be:
`ZF might be a hack, but we do not have anything better.'
Type theory -- that is, the calculus of constructions written as a PTS -- is still simpler,
but if one wants to extend it to a serious system in which one really can do all
mathematics, it becomes much more complex.
Finally, HOL is the simplest system if one only looks at the number of concepts needed.
Of course one should note that we are not comparing exactly matching systems
here:
some of these systems have classical logic and the axiom of choice,
while others do not.
And one might argue that Isabelle/Pure and the (extended) calculus of constructions are too poor
to encode mathematics in a reasonable way by themselves,\footnote{%
The calculus of
constructions cannot prove all equalities that one would like
to have.
This is obvious from the existence of the proof irrelevance model,
which shows that $0\ne 1$ is not provable.}
for which reason their bars have been grayed in the figures.
We summarize all this in the following table (the `$\circ$' in the row of NF
means that NF disproves the axiom of choice):
\setbox0=\hbox{`all math'}
\begin{center}
\begin{tabular}{lcccc}
%\hline
%\noalign{\smallskip}
&& \hbox to\wd0{\hss classical\hss} & \hbox to\wd0{\hss choice\hss} & \unhcopy0 \\
%\noalign{\smallskip}
%\hline
%\noalign{\smallskip}
FOL &$\qquad$& $\bullet$ & & \\
%\noalign{\smallskip}
\hline
%\noalign{\smallskip}
ZFC && $\bullet$ & $\bullet$ & $\bullet$ \\
NF && $\bullet$ & $\circ$ & $\bullet$ \\
%\noalign{\smallskip}
\cline{3-5}
%\noalign{\smallskip}
Pure && & & \\
HOL && $\bullet$ & $\bullet$ & $\bullet$ \\
%\noalign{\smallskip}
\cline{3-5}
%\noalign{\smallskip}
CC && & & \\
ECC && & & \\
MLW && & & $\bullet$ \\
%\noalign{\smallskip}
\cline{3-5}
%\noalign{\smallskip}
Topos && $\bullet$ & $\bullet$ & $\bullet$ \\
%\noalign{\smallskip}
%\hline
\end{tabular}
\end{center}
Adding classical logic or choice to a system generally does not
make a big difference, though.
Generally such an addition involves only one extra primitive notion,
and only a few lines of Automath text.
A few remarks about the sizes of these contexts are in order.
The NF context is small (if one disregards the gray bars it is the
smallest in two out of three categories), so it seems an
attractive foundation.
Unfortunately its consistency strength is unknown.\footnote{
The consistency strengths of NFU (`NF with urelements'), and of NFU with choice and infinity added,
\emph{are} known.
Both are weaker than ZFC.
Bob Solovay told me that the first has a consistency strength strictly
between $I\Delta_0 + \mbox{Exp}$ and $I\Delta_0 + \mbox{SuperExp}$,
while the second has the same consistency strength as HOL Light.
}
The Isabelle/Pure and HOL Light logics are very similar, but
the HOL Light context is much larger.
This is because it is a full context for mathematics (which the Isabelle/Pure context is not),
including infinity and choice:
to introduce these two concepts some logic needs to be
developed, which is a sizable part of the specification.
Also, the HOL Light context contains the whole story,
including the type definition mechanism.
The Isabelle/Pure context represents only an abstracted version
of the Isabelle system.
The second conclusion of this paper is that these contexts are
more complex than one would expect.
The smallest serious one (HOL Light) has 25 primitive notions.
Naively one would hope to be able to build all mathematics from
something like about 10 notions.
Twenty-five items is a long list!
%
(The `standard model' of elementary particle physics -- the
foundations of physics -- has a
list of 18 unexplained dimensionless numbers.
This always struck me as a large number.
But foundations of mathematics is even worse!)
\medskip
\noindent
\paragraph*{A reader comments:}
After Thorsten Altenkirch read this paper his reaction was:
\begin{quote}\em
I can't help making the obvious comment that simplicity can't be
measured by size. Actually, I often find that the contrary is the case
that simpler systems are slightly longer than more complicated ones.
E.g.\ if you look at some of these obfuscated C-programs which are very
short but doing something very subtle (people sometimes put them into
their sigs), would you call them `simple'? Are those not precisely
what people call a `hack'?
I mean I think it is a good idea to look for something which short \emph{and}
simple. And obviously length can be more easily measured than
simplicity. It is just that you say we measure the size to find out
which one is simplest.
\end{quote}
For the record: I agree with Thorsten that simple and short are not
the same thing.
(I had a remark about this, but somehow it did not survive my editing
the paper.)
So I now put Thorsten's comment in, to address the point.
Please note that I did not try to make the contexts as small as possible
by `obfuscating' them.
In each case I tried to keep as close to the original `informal' description of the
system as I could.
For instance, it is easy to reduce the number of ZFC axioms by
putting them all in one big conjunction.
I certainly did not do this.
\subsection{Future work}
There are several things that can be done with this work:
\begin{itemize}
\item
One can try to prove these contexts \emph{adequate}.
It is easy to show that the various systems can be represented in their contexts as presented here,
but one also has to show that it is not possible to derive something
in the context -- maybe by using the strength of the logical framework --
which is not derivable in the system itself.
(Whether this is the case also might depend on the framework itself:
potentially LF/$\lambda P$ might be able to derive
less in the same context than the Automath framework $\Delta\Lambda$.)
While interesting, the adequacy of the contexts is not relevant
for this paper.
The goal here is to estimate the complexity of the foundational system.
Even if a context is \emph{not} adequate, it probably gives a good
impression of the complexity of the corresponding system.
\item
One can build `De Bruijn criterion' style checkers based on these contexts.
Automath then would be an independent checker for formal mathematics
formalized in one of these systems.
For instance one could have HOL Light generate a stream of correct Automath definitions
when doing its proofs.
This is closely related to the first kind of HOL to Coq translation
described in \cite{wie:02:3}.
\item
More foundational systems might be represented in the style of this paper.
%
Possible candidates are von Neumann--Bernays--G\"odel and Morse--Kelley set theory,
Church's original version of higher order logic,
and Russell's theory of ramified types.
\item
The fact that these contexts need so many primitive notions is philosophically unsatisfactory.
A system should be looked for that is equivalent to ZFC,
but is less complex than the systems presented here.
Such a system then would be less of a hack than ZF.
\end{itemize}
\paragraph*{Acknowledgments.}
Thanks to Thorsten Altenkirch for advice on MLW.
Thanks to Herman Geuvers for explaining to me why the calculus of constructions by itself is not enough for all of mathematics.
Thanks to Dan Synek and Bas Spitters for putting up with my ravings about my collection of contexts.
Thanks to Randall Holmes for advice on NF.
%\bibliographystyle{plain}
%\bibliography{frk}
\begin{thebibliography}{10}
\bibitem{acz:99}
Peter Aczel.
\newblock {On Relating Type Theories and Set Theories}.
\newblock In T.~Altenkirch, W.~Naraschewski, and B.~Reus, editors, {\em Types
for Proofs and Programs, Proceedings of Types '98}, volume 1657 of {\em
LNCS}. Springer-Verlag, 1999.
\bibitem{bal:98}
Mark Balaguer.
\newblock {\em {Platonism and Anti-Platonism in Mathematics}}.
\newblock Oxford University Press, 1998.
\bibitem{bar:99}
Henk Barendregt.
\newblock {Problems in type theory}.
\newblock In U.~Berger and H.~Schwichtenberg, editors, {\em Computational
Logic, Proceedings of the NATO Advanced Study Institute on Computational
Logic, held in Marktoberdorf, 1997}, volume 165 of {\em Series F: Computer
and Systems Sciences}. Springer-Verlag, 1999.
\bibitem{ber:00}
Stefan Berghofer.
\newblock {New features of the Isabelle theorem prover -- proof terms and code
generation}, 2000.
\bibitem{har:00}
John Harrison.
\newblock {\em The HOL Light manual (1.1)}, 2000.
\bibitem{luo:89}
Zhaohui Luo.
\newblock {ECC}, an extended calculus of constructions.
\newblock In {\em Proceedings 4th Annual {IEEE} Symp.\ on Logic in Computer
Science, {LICS}'89, Pacific Grove, {CA}}, pages 386--395, Los Alamitos, CA,
1989. IEEE Computer Society Press.
\bibitem{mcb:99}
Conor McBride.
\newblock {\em Dependently Typed Functional Programs and their Proofs}.
\newblock PhD thesis, University of Edinburgh, 1999.
\bibitem{mcl:98}
Colin McLarty.
\newblock {Challenge axioms, final draft}.
\newblock \hskip 0pt plus 2em Message \url{<199802061421.JAA28643@po.cwru.edu>}
as sent to the FOM mailing list,
1998.
\bibitem{nor:pet:smi:90}
Bengt Nordstr{\"o}m, Kent Petersson, and Jan~M. Smith.
\newblock {\em {Programming in Martin-L\"of's Type Theory, An Introduction}}.
\newblock Oxford University Press, 1990.
\bibitem{pau:03}
L.C. Paulson.
\newblock {\em The Isabelle Reference Manual}, 2003.
\bibitem{pfe:99}
Frank Pfenning.
\newblock {Logical frameworks}.
\newblock In Alan Robinson and Andrei Voronkov, editors, {\em Handbook of
Automated Reasoning}, chapter~17, pages 1063--1147. Elsevier Science
Publishers, 1999.
\bibitem{pfe:sch:02}
Frank Pfenning and Carsten Schuermann.
\newblock {\em {Twelf User's Guide, Version 1.4}}, 2002.
\bibitem{wie:02:3}
F.~Wiedijk.
\newblock {Encoding the HOL Light logic in Coq}.
\newblock 2002.
\end{thebibliography}
\end{document}