% $Id: hrefl.tex,v 1.4 2004/02/18 15:48:35 freek Exp $

\documentclass[runningheads]{llncs}
\usepackage{amssymb,url}
\usepackage[all]{xy}
\raggedbottom

\newcommand{\inte}[1]{[\![#1]\!]}
\newcommand{\pidx}{_{\!\not\,1}}
\newcommand{\intfun}[1]{\inte{#1}_\rho}
\newcommand{\intII}{\,]\![}
\newcommand{\intrel}{\mathbin{\intII_\rho}}
\newcommand{\intrelG}{\mathbin{\intII_\rho^G}}
\newcommand{\intrelF}{\mathbin{\intII_\rho^F}}
\newcommand{\intrelR}{\mathbin{\intII_\rho^R}}
\newcommand{\intrelS}{\mathbin{\intII_\rho^S}}
\newcommand{\intrelX}[1]{\mathbin{\intII_\rho^{#1}}}
\newcommand{\crr}[1]{\hat{#1}}
\newcommand{\intreltwo}{\mathbin{\intII_{\rho_0,\rho_1}}}
\newcommand{\intrelfour}{\mathbin{\intII_{\rho_0,\rho_1,\rho\pidx,\rho_2}}}
\newcommand{\Ltac}{${\cal L}_{\rm tac}$}
\newcommand{\N}{{\cal N}}
\newcommand{\eqbdi}{=}
\newcommand{\wf}{{\it wf}}
\newcommand{\wfrho}{{\it wf\!}_\rho}
\newcommand{\Prop}{{\textsf{Prop}}}
\newcommand{\alt}{\mathrel{|}}
\newcommand{\Z}{{\mathbb Z}}
\newcommand{\V}{{\mathbb V}}
\newcommand{\tacticname}[1]{\textsf{#1}}
\newcommand{\rational}{\tacticname{Rational}}

\begin{document}

\title{Hierarchical Reflection}
\author{Lu\'\i s Cruz-Filipe\inst{1,2} \and Freek Wiedijk\inst{1}}
\institute{Department of Computer Science, University of Nijmegen\and Center for Logic and Computation, Lisboa\\ \email{$\{$lcf,freek$\}$@cs.kun.nl}}
\maketitle

\begin{abstract}
The technique of reflection is a way to automate proof construction
in type theoretical proof assistants.
Reflection is based on the definition of a type of syntactic expressions
that gets interpreted in the domain of discourse.
By allowing the interpretation function to be partial or even a relation
one gets a more general method known as ``partial reflection''.
In this paper we show how one can take advantage of the partiality of
the interpretation to uniformly define a family of tactics for equational
reasoning that will work in different algebraic structures.
These tactics then follow the hierarchy
of those algebraic structures in a natural way.
\end{abstract}

\section{Introduction}\label{intro} %F+L

\subsection{Problem}

Computers have made the formalization of mathematical proofs practical.
They help to get formalizations correct by
verifying all the details,
but they also make it easier to formalize
mathematics by automatically generating parts of the proofs.

One way to automate proof is the technique called
\emph{reflection}.
With reflection one describes the desired automation \emph{inside} the logic
of the theorem prover, by formalizing relevant meta-theory.
Reflection is the common approach for proof automation in type theoretical
systems like NuPRL and Coq.
Another name for reflection is ``the two-level approach''.

In Nijmegen we formalized the Fundamental Theorems of Algebra and
Calculus in Coq,
and then extended these formalizations into a structured library
of mathematics named the C-CoRN library~\cite{ccorn,lcf:geu:wie:04}.
For this library we defined a reflection tactic called {\rational}
that automatically establishes equalities of rational expressions in a field 
by bringing both to the same side of the equal sign and then
multiplying everything out.
With this tactic, equalities like
$${1\over x} + {1\over y} = {x + y\over xy}$$
can be automatically proved without any human help.

The {\rational} tactic only works for expressions in a field,
but using the same idea one can define
%% there are
analogous tactics for expressions in a ring or a group.
The trivial way to define these is to duplicate the definition of the
{\rational} tactic and modify it for these simpler algebraic structures
by removing references to division or multiplication.
%This is what we did when we defined a ring version of {\rational}.
This is in fact how a ring version of {\rational} was implemented.

However, this approach is inefficient, as it means duplicating the full
code of the tactic.
In particular the \emph{normalization function} that describes the simplification
of expressions, which is quite complicated, has to be defined multiple
times.
But looking at the normalization function for field expressions, it
is clear that it contains the normalization function for rings.
In this paper we study a way to \emph{integrate} these tactics for
different algebraic structures.

\subsection{Approach}

In the C-CoRN library algebraic structures like fields, rings
and groups are organized into an \emph{Algebraic Hierarchy}.
The definition of a field reuses the definition of a ring, and
the definition of a ring reuses the definition of a group.
This hierarchy means that theory about these structures
is maximally reusable.
Lemmas about groups are automatically also applicable to rings
and fields, and lemmas about rings also apply to fields.
In this paper we organize the equational tactics for fields, rings
and groups in a similar hierarchical manner.

It turns out that expression simplification in a field
can be directly applied to equational reasoning in rings and groups as well.
This is quite surprising:
the normal forms of expressions that get simplified in this theory
will contain functions like multiplication and division,
operations that do not make sense in a group.

\subsection{Related Work}

{\rational} is for the C-CoRN setoid framework what the standard Coq
tactic \tacticname{Field} is for Leibniz equality.
Both tactics were developed at about the same time.
\tacticname{Field} is a generalization of the Coq \tacticname{Ring}
tactic, so with the \tacticname{Field} and \tacticname{Ring} tactics the
duplication of effort that we try to eliminate is also present.
Also the \tacticname{Ring} tactic applies to rings as well as to
semirings (to be able to use it with the natural numbers),
so there is also this kind of duplication
within the \tacticname{Ring} tactic itself.

Reflection has also been widely used in the NuPRL system as described
originally in~\cite{ACHA90}.  More recently,~\cite{HNC+03} introduces
other techniques that allow code reuse for tactics in MetaPRL,
although the ideas therein are different from ours.  Since the library
of this system also includes an algebraic hierarchy built using
subtyping (see~\cite{YNKH03}), it seems reasonable to expect that the
work we describe could be easily adapted to that framework.

\subsection{Contribution}

We show that it is possible to have one unified mechanism for
simplification of expressions in different algebraic structures like
fields, rings and groups.
We also show that it is not necessary to have different normalization
functions for these expressions, but that it is possible to decide
equalities on all levels with only one normalization function.

\subsection{Outline}

In Section~\ref{reflection} we summarize the methods of reflection
and partial reflection.
In Section~\ref{normalization} we describe in detail the normalization
function of the {\rational} tactic.
Section~\ref{functions} is a small detour where we generalize the same
method to add uninterpreted function
symbols to the expressions that {\rational} understands.
In Section~\ref{hierarchical} we show how to do reflection in an
algebraic hierarchy in a hierarchical way.
Finally in Section~\ref{integration} we present a possibility
to have even tighter integration in a hierarchical reflection tactic,
which unfortunately turns out to require the so-called $K$ axiom.


\section{Reflection and Partial Reflection}\label{reflection} %F

In this section we will briefly summarize~\cite{geu:wie:zwa:00}.
That paper describes a generalization of the technique of
\emph{reflection}, there called \emph{partial reflection}.
%Reflection is a technique to automate proof generation.
%It is mainly used in proof assistants that are
%based on type theory, like Coq and NuPRL.
One can give a general account of reflection in terms of
decision procedures, but here we will only present the more
specific method of reflection with
a normalization function, which is used to do equational reasoning.

In the normal, ``total'', kind of reflection
one defines a type $E$ of \emph{syntactic
expressions} for the domain $A$ that one is reasoning about,
together with an \emph{interpretation function}
$$\intfun{-} : E \to A$$
which assigns
to a syntactic expression $e$ an interpretation $\intfun{e}$.
In this, $\rho$ is a \emph{valuation} that maps the variables
in the syntactic expressions to values in $A$.
The type $E$ is an inductive type,
and therefore it is possible to recursively define a \emph{normalization function}
$\N$ on the type of syntactic expressions \emph{inside} the type
theory (this is not possible for $A$;
so the reason for introducing the type $E$ is to be able to define this $\N$).
$$\N : E \to E$$
One now proves the correctness lemma that states that the normalization function
conserves the interpretation.
$$\intfun{e} =_A \intfun{\N(e)}$$
Then, to
reason about the domain $A$ that $\inte{-}$ maps to,
one first finds a valuation $\rho$ and
syntactic expressions in $E$ which map under $\intfun{-}$ to the
terms that one wants to reason about, and
after that one uses the lemma to do the equational reasoning.

For instance, suppose that one wants to prove $a =_A b$.
One finds $e$, $f$ and $\rho$ with $\intfun{e} \eqbdi a$ and
$\intfun{f} \eqbdi b$.
Now if $\N(e) \eqbdi \N(f)$ then
we get $a \eqbdi \intfun{e} =_A \intfun{\N(e)} \eqbdi \intfun{\N(f)} =_A \intfun{f} \eqbdi b$.
(Clearly this uses the correctness lemma twice, see Figure~\ref{fig:tactic}.)
\begin{figure}
\[
\xymatrix{
 e\in E\ar[rr]^{\N\ \ \ \ }\ar@/_/[ddr]_{\intfun{-}} &&
  \N(e)=\N(f)\ar[]+DL+<1cm,0pt>;[ddl]^{\intfun{-}}\ar[]+DR-<1cm,0cm>;[ddr]_{\intfun{-}} &&
  f\in E\ar[ll]_{\ \ \ \ \ \N}\ar@/^/[ddl]^{\intfun{-}} \\ \\
 & a\in A\ar@/_/@{-->}[uul]\ar@{<->}[rr]^{=_A} && b\in A\ar@/^/@{-->}[uur]
}
\]
\caption{Proving equalities}\label{fig:tactic}
\end{figure}
\noindent
Note that the operation of finding an expression in $E$ that
corresponds to a given expression in $A$ (dotted arrows) is not
definable in the type theory, and needs to be implemented
\emph{outside} of it.  In a system like Coq it will be
implemented in ML or in the tactic language \Ltac.

Things get more interesting when the syntactic expressions in $E$ contain
\emph{partial} operations, like division.
In that case the interpretation $\intfun{e}$ will not always be defined.
To address this we generalized the method of reflection to \emph{partial
reflection}.
The naive way to do this is to define a predicate
$$\wfrho : E \to \Prop$$
that tells whether an expression is \emph{well-formed}.
Then the interpretation function takes another argument of type $\wfrho(e)$.
$$\intfun{-} : \Pi_{e:E}.\, \wfrho(e) \to A$$
The problem with this approach is that the definition of $\wf$
needs the interpretation function $\inte{-}$.
Therefore the inductive definition of $\wf$ and the recursive definition
of $\inte{-}$ need to be given simultaneously.
This is called an \emph{inductive-recursive definition}.
Inductive-recursive definitions are not supported by the Coq system,
and for a good reason:
induction-recursion makes a system significantly stronger.
In set theory it corresponds to the existence of a Mahlo
cardinal \cite{dyb:set:03}.

The solution from~\cite{geu:wie:zwa:00} for doing partial reflection without induction-recursion,
is to replace the
interpretation function with an inductively defined \emph{interpretation relation}.
$$\intrel \subseteq E \times A$$
The statement $\intfun{e} = a$ now becomes $e \intrel a$.
It means that the syntactic expression $e$ is
interpreted under the valuation $\rho$ by the object $a$.
%
The lemmas that one then proves are the following.
$$
\begin{array}{c}
e \intrel a \;\land\; e \intrel b \;\Rightarrow\; a =_A b\\
\noalign{\medskip}
e \intrel a \;\Rightarrow\; \N(e) \intrel a
\end{array}
$$
The first lemma states that the interpretation relation is \emph{functional},
and the second lemma is again the correctness of the normalization function.
Note that it is not an equivalence but just an implication.
This is the only direction that is needed.
It is easy to see that in our application
the reverse implication does not hold.\footnote{A simple example is $e=1/(1/0)$,
which does not relate to any $a$.
Its normal form is $0/1$, which interprets to $0$.}

For each syntactic expression $e$ that one constructs for an
object $a$,
one also needs to find an inhabitant of the statement $e \intrel a$.
In~\cite{geu:wie:zwa:00} a type $\bar E_\rho(a)$ of \emph{proof-loaded syntactic
expressions} is introduced to make this easier.
These types correspond to the expressions that evaluate to $a$.
They are mapped to the normal syntactic expressions by a forgetful function
$$|\mathord{-}| : \bar E_\rho(a) \to E$$
and they satisfy the property that for all $\bar e$ in the type $\bar E_\rho(a)$
$$|{\bar e}| \intrel a.$$
In this paper we will not go into this further,
although everything that we do also works in the presence of these proof-loaded
syntactic expressions.


\section{Normalization Function}\label{normalization} %L+F

We will now describe how we defined the normalization function for our
main example of \emph{rational expressions}.
Here the type $E$ of syntactic expressions is given by the following
grammar.
\begin{eqnarray*}
E &::=& \Z \alt \V \alt E+E \alt E\cdot E \alt E/E
\end{eqnarray*}
In this $\Z$ are the integers, and $\V$ is a countable set of
\emph{variable names} (in the Coq formalization we use a copy of the natural
numbers for this).
%new (lcf)
Variables will be denoted by $x,y,z$, integers by $i,j,k$.
The elements of this type $E$ are just \emph{syntactic} objects, so they are
a different kind of object
from the \emph{values} of these expressions in specific fields.
Note that in these expressions it is possible to divide by zero: $0/0$ is
one of the terms in this type.

Other algebraic operations are defined as abbreviations
in terms of the operations that occur in the type.
For instance, subtraction is defined by
$$e - f \equiv e + (-1)\cdot f$$
%and exponentiation to a power that is a closed natural number term
%is defined as iterated multiplication
%$$e^n \equiv e\cdot e\cdot \dots\cdot e\cdot 1.$$

\noindent
We will now describe how we define the normalization function $\N(e)$
that maps an element of $E$ to a normal form.
As an example, the normal form of ${1\over x - y} + {1\over x + y}$ is
$$\N({1\over x - y} + {1\over x + y}) = {x\cdot 2 + 0\over x\cdot x\cdot 1 + y\cdot y\cdot (-1) + 0}.$$
This last expression is the ``standard form'' of the way one would normally write
this term, which is
$${2x\over x^2 - y^2}.$$

\noindent
From this example it should be clear how the normalization function works:
it multiplies everything out until there is just a quotient of two
polynomials left.
These polynomials are then in turn written in a ``standard form''.
The expressions in normal form are given by the following grammar.
\begin{eqnarray*}
F &::=& P/P \\
P &::=& M + P \alt \Z \\
M &::=& \V \cdot M \alt \Z
\end{eqnarray*}
In this grammar $F$ represents a fraction of two polynomials, $P$ are the
polynomials and $M$ are the monomials.
One should think of $P$ as a ``list of monomials'' (where $+$ is the ``cons''
and the integers take the place of the ``nil'')
and of $M$ as a ``list of variables'' (where $\cdot$ is the ``cons'' and
again the integers take the place of the ``nil'').
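To make the analogy concrete, consider as an example the normal-form expression
$$x\cdot(y\cdot 2) + (y\cdot 3 + 4).$$
As an element of $P$ it is the ``list'' of the two monomials $x\cdot(y\cdot 2)$ and $y\cdot 3$, with the integer $4$ playing the role of ``nil''; the monomial $x\cdot(y\cdot 2)$ is in turn the ``list'' of the variables $x$ and $y$, with the coefficient $2$ as its ``nil''.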

%changed by lcf
%We want to get uniqueness of normal forms, so we require
On the one hand we want the normalization function to terminate,
but on the other hand we want the set of normal forms to be as small
as possible.
We achieve this by requiring
% after ``;'' added by lcf
the polynomials and monomials to be \emph{sorted}; furthermore, no two
monomials in a polynomial can have exactly the same set of variables.
Thus normal forms for polynomials will be unique.

For this we have an ordering of the variable names.
So the ``list'' that is a monomial has to be sorted
according to this order on $\V$, and the ``list'' that
is a polynomial also has to be sorted, according to the
corresponding lexicographic ordering on the monomials.
If an element of $P$ or $M$ is sorted like this,
and monomials with the same set of variables have been collected
together, we say it is \emph{in normal form}.

Now to define $\N$ we have to ``program'' the multiplying out of $E$
expressions together with the sorting of monomials and polynomials,
and collecting factors and terms.
This is done in a systematic way.
Instead of first multiplying out the expressions and then sorting
them to gather common terms, we systematically combine these two things.

We recursively define the following functions
(using the \texttt{Fixpoint} operation of Coq).
$$
\begin{array}{rlcrclcl}
- &\cdot_{\it M\Z}\; - &:& M &\times& \Z &\to& M \\
- &\cdot_{\it M\V}\; - &:& M &\times& \V &\to& M \\
- &\cdot_{\it MM}\; - &:& M &\times& M &\to& M \\
- &+_{\it MM}\; - &:& M &\times& M &\to& M \\
- &+_{\it PM}\; - &:& P &\times& M &\to& P \\
- &+_{\it PP}\; - &:& P &\times& P &\to& P \\
- &\cdot_{\it PM}\; - &:& P &\times& M &\to& P \\
- &\cdot_{\it PP}\; - &:& P &\times& P &\to& P \\
- &+_{\it FF}\; - &:& F &\times& F &\to& F \\
- &\cdot_{\it FF}\; - &:& F &\times& F &\to& F \\
- &/\!_{\it FF}\; - &:& F &\times& F &\to& F
\end{array}
$$
(Actually, all these functions have type
$$E \times E \to E$$
as we do not have separate types for $F$, $P$ and $M$.
However, the idea is that they will only be called with arguments
that are of the appropriate shape and in normal form.
In that case the functions return the appropriate normal form.
In the other case they return some term that is equal to
the sum or product of the arguments; generally we just use the sum
or product of the arguments itself.)

%examples added by lcf
For example, the multiplication function $\cdot_{\it MM}$ looks like
%Fixpoint MM_mult [e:expr] : expr->expr := [f:expr]
%let d = (expr_mult e f) in
%  Cases e f of
%    (expr_mult e1 e2) f => (MV_mult (MM_mult e2 f) e1)
%  | (expr_int i) f => (MI_mult f e)
%  | _ _ => d
%  end.
\[
e\cdot_{\it MM}f := \left\{
\begin{array}{ll}
 f\cdot_{\it M\Z}i & \mbox{ if $e=i\in\Z$}\\
 (e_2\cdot_{\it MM}f)\cdot_{\it M\V}e_1 & \mbox{ if $e=e_1\cdot e_2$}\\
 e\cdot f & \mbox{ otherwise}
\end{array}\right.
\]
\noindent and the addition function $+_{\it PM}$ is\footnote{In the fourth
case, the equality $e_1=f$ is equality \emph{as lists}, meaning that
they might differ in the integer coefficient at the end.}
%Fixpoint PM_plus [e:expr] : expr->expr := [f:expr]
%let d = (expr_plus e f) in
%  Cases e f of
%    (expr_plus e1 e2) (expr_int _) => (expr_plus e1 (PM_plus e2 f))
%  | (expr_int i) (expr_int j) => (MM_plus e f)
%  | (expr_plus e1 e2) f =>
%      Cases (le_monom e1 f) of
%        true =>
%          Cases (eq_monom e1 f) of
%            true => (PM_plus e2 (MM_plus e1 f))
%          | false => (expr_plus e1 (PM_plus e2 f))
%          end
%      | false => (expr_plus f e)
%      end
%  | (expr_int i) f => (expr_plus f e)
%  | _ _ => d
%  end.
\[
e+_{\it PM}f := \left\{
\begin{array}{ll}
 i+_{\it MM} j & \mbox{ if $e=i\in\Z$, $f=j\in\Z$} \\
 f+i & \mbox{ if $e=i\in\Z$} \\
 e_1+(e_2+_{\it PM} i) & \mbox{ if $e=e_1+e_2$, $f=i\in\Z$} \\
 e_2+_{\it PM}(e_1+_{\it MM} f) & \mbox{ if $e=e_1+e_2$, $e_1=f$} \\
 e_1+(e_2+_{\it PM} f) & \mbox{ if $e=e_1+e_2$, $e_1<_{\mathrm{lex}}f$} \\
 f+e & \mbox{ if $e=e_1+e_2$, $e_1>_{\mathrm{lex}}f$} \\
 e+f & \mbox{ otherwise}
\end{array}\right.
\]
%%Definition FF_plus [e:expr] : expr->expr := [f:expr]
%%let d = (expr_plus e f) in
%%  Cases e f of
%%    (expr_div e1 e2) (expr_div f1 f2) =>
%%      (expr_div (PP_plus (PP_mult e1 f2) (PP_mult e2 f1)) (PP_mult e2 f2))
%%  | _ _ => d
%%  end.
%\[
%e+_{\it FF}f = \left\{
%\begin{array}{ll}
% ((e_1\cdot_{\it PP}f_2)+_{\it PP}(e_2\cdot_{\it PP}&f_1))/(e_2\cdot_{\it PP} f_2)\\ & \mbox{ if $e=e_1/e_2$ and $f=f_1/f_2$}\\
% e+f & \mbox{ otherwise}
%\end{array}\right.
%\]
\noindent where the lexicographic ordering $<_{\mathrm{lex}}$ is
used to guarantee that the monomials in the result are ordered.
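To illustrate how these auxiliary functions cooperate, the following Python sketch implements the same idea of combining multiplying out with sorting and collecting, for division-free expressions. The representation (monomials as coefficient/sorted-variable-tuple pairs) and all names are ours, not taken from the Coq development; division is omitted for brevity.

```python
# Sketch (not the Coq code): normalization of division-free expressions.
# A monomial is (coefficient, sorted tuple of variable names); a
# polynomial is a sorted list of monomials with distinct variable lists,
# playing the role of M and P from the grammar of normal forms.

def mono_mul(m, n):
    """Multiply two monomials: multiply coefficients, merge variable lists."""
    (c, vs), (d, ws) = m, n
    return (c * d, tuple(sorted(vs + ws)))

def poly_add(p, q):
    """Add two polynomials, collecting monomials with the same variable
    list (the role of the +_PM / +_PP functions) and dropping zeros."""
    acc = {}
    for c, vs in p + q:
        acc[vs] = acc.get(vs, 0) + c
    return sorted(((c, vs) for vs, c in acc.items() if c != 0),
                  key=lambda m: m[1])

def poly_mul(p, q):
    """Multiply polynomials by multiplying out all pairs of monomials."""
    return poly_add([mono_mul(m, n) for m in p for n in q], [])

def normalize(e):
    """Analogue of N on expressions built from ('int', i), ('var', x),
    ('add', e, f) and ('mul', e, f)."""
    tag = e[0]
    if tag == 'int':
        return [(e[1], ())] if e[1] != 0 else []
    if tag == 'var':
        return [(1, (e[1],))]
    if tag == 'add':
        return poly_add(normalize(e[1]), normalize(e[2]))
    if tag == 'mul':
        return poly_mul(normalize(e[1]), normalize(e[2]))
    raise ValueError(tag)
```

With this sketch, $x\cdot(y+1)$ and $x\cdot y + x$ normalize to the same list of monomials, and $x + (-1)\cdot x$ normalizes to the empty polynomial, mirroring how the tactic detects equalities.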

Finally we used these functions to recursively ``program'' the
normalization function.
For instance the case where the argument is a division is
defined like
$$\N(e/f) := \N(e) \,/\!_{\it FF}\, \N(f).$$
%new!!! (lcf)
\noindent The base case (when $e$ is a variable) looks like
$$\N(v) := {v\cdot 1+0 \over 1}.$$

\noindent
To prove that $a=b$, then, one builds the expression corresponding to
$a-b$ and checks that this normalizes to an expression of the form
$0/e$.
(This turns out to be stronger than building expressions $e$ and $f$
interpreting to $a$ and $b$ and verifying that $\N(e)=\N(f)$, since normal
forms are in general not unique.)

\section{Uninterpreted Function Symbols}\label{functions} %L

When one starts working with the tactic defined as above, one quickly
finds out that there are situations in which it fails because two
terms which are easily seen to be equal generate two expressions whose
difference fails to normalize to $0$.
A simple example arises when function symbols are used; for example,
trying to prove that $$f(a+b)=f(b+a)$$ will fail because $f(a+b)$ will
be syntactically represented as a variable $x$ and $f(b+a)$ as a
(different) variable $y$, and the difference between these expressions
normalizes to $${x\cdot1+y\cdot(-1)+0 \over 1},$$ which is not zero.

In this section we describe how the syntactic type $E$ and the normalization
function $\N$ can be extended to recognize and deal with function symbols.
The actual implementation includes unary and binary total functions,
as well as unary partial functions (these are binary functions whose second
argument is a proof)\footnote{Other possibilities, such as ternary functions
or binary partial functions, were not considered because this work was
done in the setting of the C-CoRN library, where these
are the types of functions which are used in practice.}.
We will discuss the case for unary total functions in detail; binary and
partial functions are treated in an analogous way.

Function symbols are treated much in the same way as variables; thus, we
extend the type $E$ of syntactic expressions with a new countable set of
\emph{function variable names} $\V_1$, which is implemented (again) as the
natural numbers.
The index $1$ stands for the arity of the function; the original set of
variables is now denoted by $\V_0$.
Function variables will be denoted $u,v$.
\begin{eqnarray*}
E &::=& \Z \alt \V_0 \alt \V_1(E) \alt E+E \alt E\cdot E \alt E/E
\end{eqnarray*}

\noindent Intuitively, the normalization function should also normalize
the arguments of function variables.
The grammar for normal forms becomes the following.
\begin{eqnarray*}
F &::=& P/P \\
P &::=& M + P \alt \Z \\
M &::=& \V_0 \cdot M \alt \V_1(F) \cdot M \alt \Z
\end{eqnarray*}
But now a problem arises: the extra condition that both polynomials and
monomials correspond to sorted lists requires ordering not only variables
in $\V_0$, but also expressions of the form $\V_1(F)$.
The simplest way to do this is by defining an ordering on the whole set $E$
of expressions.

This is achieved by ordering first the sets $\V_0$ and $\V_1$ themselves.
Then, expressions are recursively sorted by first looking at their outermost
operator $$x <_E i <_E e+f <_E e\cdot f <_E e/f <_E v(e)$$
and then sorting expressions with the same operator using a 
lexicographic ordering.
For example, if $x<_{\V_0}y$ and $u<_{\V_1}v$, then
$$x<_E y<_E 2<_E 34<_E x/4<_E u(x+3)<_E u(2\cdot y)<_E v(x+3).$$
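This two-stage ordering (outermost operator first, then recursive lexicographic comparison) is easy to prototype. The Python sketch below is ours, not part of the formalization: it assigns each expression a key whose built-in tuple comparison realizes $<_E$, with function applications written as `('app', v, e)`.

```python
# Sketch of the ordering <_E on syntactic expressions.  Expressions are
# tuples: ('var', x), ('int', i), ('add', e, f), ('mul', e, f),
# ('div', e, f) and ('app', v, e) for a function variable applied to e.

RANK = {'var': 0, 'int': 1, 'add': 2, 'mul': 3, 'div': 4, 'app': 5}

def key(e):
    """Key whose lexicographic comparison realizes <_E: first the rank of
    the outermost operator, then the components, left to right."""
    tag = e[0]
    if tag in ('var', 'int'):
        return (RANK[tag], e[1])
    if tag == 'app':
        return (RANK[tag], e[1], key(e[2]))
    return (RANK[tag], key(e[1]), key(e[2]))

def less(e, f):
    """The ordering <_E itself."""
    return key(e) < key(f)
```

Under this sketch the example chain above comes out in exactly the stated order when `x < y` and `u < v` as names.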

\noindent With this different ordering, the same normalization function
as before can be used with only trivial changes.
In particular, the definitions of the functions $\cdot_{\it MM}$ and
$+_{\it PM}$ stay unchanged.
Only at the very last step does one have to add a rule saying that
$$\N(v(e)) := {v(\N(e))\cdot 1+0\over 1}.$$

%nuttig?
\noindent Notice the similarity with the rule for the normal form of
variables.

The next step is to change the interpretation relation.
Instead of the valuation $\rho$, we now need two valuations
\begin{eqnarray*}
\rho_0 & : & \V_0\to A\\
\rho_1 & : & \V_1\to(A\to A)
\end{eqnarray*}
\noindent and the inductive definition of the interpretation relation
is extended with the expected constructor for interpreting expressions
of the form $v(e)$.

As before, one can again prove the two lemmas
\[
\begin{array}{c}
e \intreltwo a \;\land\; e \intreltwo b \;\Rightarrow\; a =_A b\\
\noalign{\medskip}
e \intreltwo a \;\Rightarrow\; \N(e) \intreltwo a
\end{array}
\]

\noindent Now, our original equality $f(a+b)=f(b+a)$ can easily be
solved: $f(a+b)$ can now be more faithfully represented by the
expression $v(x+y)$, where $\rho_1(v)=f$, $\rho_0(x)=a$ and
$\rho_0(y)=b$; the syntactic representation of $f(b+a)$ becomes
$v(y+x)$; and each of these normalizes to
$${v\left({x\cdot1+y\cdot1+0\over 1}\right)\cdot1+0\over 1},$$ so that
their difference normalizes to $0$ as was intended.

Adding binary functions simply requires a new sort $\V_2$ of binary
function symbols and extending the type of expressions to allow for the
likes of $v(e,f)$; the normalization function and the interpretation relation
can easily be adapted, the latter requiring yet another valuation
\begin{eqnarray*}
\rho_2 & : & \V_2\to(A\times A\to A).
\end{eqnarray*}

\noindent Partial functions are added likewise, using a sort $\V\pidx$
for partial function symbols and a valuation
\begin{eqnarray*}
\rho\pidx & : & \V\pidx\to(A\not\to A).
\end{eqnarray*}
\noindent As was already the case with division, one can write down
expressions like $v(e)$ even when $\rho\pidx(v)$ is not defined at
the interpretation of $e$; the definition of $\intrelfour$ ensures that
only correctly applied partial functions will be interpreted.

\section{Hierarchical Reflection}\label{hierarchical} %L

The normalization procedure described in Section~\ref{normalization}
was used to define a tactic which would prove algebraic equalities in an
arbitrary field in the context of the Algebraic Hierarchy
of~\cite{geu:pol:wie:zwa:02}.

In this hierarchy, fields are formalized as rings with an extra operation
(division) which satisfies some properties; rings, in turn, are
Abelian groups with a multiplication that also satisfies some
axioms.
The question then arises of whether it is possible to generalize this
mechanism to the different structures of this Algebraic Hierarchy.
This would mean having three ``growing'' types of syntactic
expressions $E_G$, $E_R$ and $E_F$ (where the indices stand for
groups, rings and fields respectively) together with interpretation
relations\footnote{For simplicity we focus on the setting where
function symbols are absent; the more general situation is analogous.}.
\[
\xymatrix{
E_F \ar[rr]^(.45){\intrelF} & & F : \mbox{Field}\ar[d] \\
E_R \ar[u]^{\subseteq}\ar[rr]^(.45){\intrelR} & & R : \mbox{Ring}\ar[d] \\
E_G \ar[u]^{\subseteq}\ar[rr]^(.45){\intrelG} & & G : \mbox{Group} \\
}
\]

\noindent However one can do better.
The algorithm in the normalization function works outwards; it first
pushes all the divisions to the outside, and then proceeds to
normalize the resulting polynomials.
In other words, it first deals with the field-specific part of the expression,
and then proceeds working within a ring.
This suggests that the same normalization function could be reused to
define a decision procedure for equality of algebraic expressions within
a ring, thus allowing $E_F$ and $E_R$ to be unified.

Better yet, looking at the functions operating on the polynomials one also
quickly realizes that these will never introduce products of variables
unless they are already implicitly in the expression (in other words, a
new product expression can arise e.g.\ from distributing a sum over an
existing product, but if the original expression contains no products then
its normal form will not, either).
So our previous picture can be simplified to this one.
%\[
%\xymatrix{
% & & F : \mbox{Field}\ar[d] \\
%E \ar[drr]^{\intrelG} \ar[urr]^{\intrelF} \ar[rr]^{\intrelR} & & R : \mbox{Ring}\ar[d] \\
%  & & G : \mbox{Group} \\
%}
%\]
\[
\xymatrix{
E \ar[rr]^(.45){\intrelF}
  \ar`d[dr][drr]^(.4){\intrelR}
  \ar`d[ddr][ddrr]^(.4){\intrelG}
& & F : \mbox{Field}\ar[d] \\
& & R : \mbox{Ring}\ar[d] \\
& & G : \mbox{Group} \\
}
\]

The key idea is to use the partiality of the interpretation
relation to be able to map $E$ into a ring $R$ or a group $G$.
In the former case, expressions of the form ${e/f}$ will not be
interpreted; in the latter, neither these nor expressions of the form
${e\cdot f}$ relate to any element of the group.

There is one problem, however.
Suppose $x$ is a variable with $\rho(x)=a$; then $a+a$ is represented
by $x+x$, but
$$
\N(x+x)={x\cdot2+0\over 1} \intrelG a+a
$$
\noindent does not hold.

In order to make sense of the normal forms defined earlier,
one needs to interpret the special cases $e/1$ in groups and rings,
as well as $e\cdot f$ with $f=i\in\Z$ in groups (assuming, of course,
that $e$ can be interpreted).

We summarize what each of the interpretation relations can interpret in
the following table.
\begin{center}
\begin{tabular}{crcccccc}
 &$\qquad$&$\quad$& $\intrelG$ &$\qquad$& $\intrelR$ &$\qquad$& $\intrelF$ \\
\noalign{\smallskip}
\hline
&\vline& \\
\noalign{\vspace{-0.9em}}
 $v\in\V$   &\vline&& yes         && yes        && yes          \\
 $i\in\Z$   &\vline&& if $i=0$    && yes        && yes          \\
 $e+f$      &\vline&& yes         && yes        && yes          \\
 $e\cdot f$ &\vline&& if $f\in\Z$ && yes        && yes          \\
 $e/f$      &\vline&& if $f=1$    && if $f=1$   && if $f\neq 0$ \\
\noalign{\vspace{-0.9em}}
&\vline& \\
\hline
\noalign{\smallskip}
\end{tabular}
\end{center}
\noindent In the last three cases the additional requirement that
$e$ and $f$ can be interpreted is implicit.
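As an illustration (ours, not part of the Coq development), the table can be read as three partial interpretation functions. The sketch below models all three levels inside the rationals, with `None` playing the role of ``not interpreted''; in the formalization the partiality is of course captured by inductively defined relations rather than by a function.

```python
from fractions import Fraction

# Sketch: the interpretation relations of the table as partial functions
# over the rationals.  'level' is 'group', 'ring' or 'field'.

def interp(e, rho, level):
    tag = e[0]
    if tag == 'var':
        return rho[e[1]]
    if tag == 'int':
        if level == 'group' and e[1] != 0:
            return None                      # groups only interpret 0
        return Fraction(e[1])
    if tag == 'add':
        a, b = interp(e[1], rho, level), interp(e[2], rho, level)
        return None if a is None or b is None else a + b
    if tag == 'mul':
        if level == 'group':
            if e[2][0] != 'int':
                return None                  # only e . i, iterated addition
            a = interp(e[1], rho, level)
            return None if a is None else a * e[2][1]
        a, b = interp(e[1], rho, level), interp(e[2], rho, level)
        return None if a is None or b is None else a * b
    if tag == 'div':
        if level in ('group', 'ring'):
            if e[2] != ('int', 1):
                return None                  # only the special case e/1
            return interp(e[1], rho, level)
        a, b = interp(e[1], rho, level), interp(e[2], rho, level)
        return None if a is None or b is None or b == 0 else a / b
    raise ValueError(tag)
```

For instance, the normal form $(x\cdot 2+0)/1$ of $x+x$ is interpretable even at the group level, while $x/2$ only gets an interpretation at the field level.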

Once again, one has to prove the lemmas
$$
\begin{array}{c}
e \intrelG a \;\land\; e \intrelG b \;\Rightarrow\; a =_A b\\
\noalign{\medskip}
e \intrelG a \;\Rightarrow\; \N(e) \intrelG a
\end{array}
$$
\noindent and the analogous statements for $\intrelR$ and $\intrelF$.

In these lemmas, one needs to use the knowledge that the auxiliary functions
will only be applied to the ``right'' arguments to be able to finish
the proofs.
This is trickier to do for groups than for rings and fields.
For example, while correctness of $\cdot_{\it MM}$ w.r.t.\ $\intrelF$
is unproblematic, as it states that
\[e\intrelF a \;\land\; f\intrelF b \;\Rightarrow\; e\cdot_{\it MM}f\intrelF a\cdot b,\]
\noindent
the analogue of this statement for $\intrelG$
cannot be written down, as $a\cdot b$ has no meaning in a group.
However, by the definition of $\intrelF$, this is equivalent to
the following.
\[e\cdot f\intrelF a\cdot b \;\Rightarrow\; e\cdot_{\it MM}f\intrelF a\cdot b\]
Now this second version does possess an analogue for $\intrelG$,
by replacing the expression $a\cdot b$ with a variable.
\[e\cdot f\intrelG c \;\Rightarrow\; e\cdot_{\it MM} f\intrelG c.\]
\noindent This is still not provable, because $\cdot_{\it MM}$ can
swap the order of its arguments.
The correct version is
\[e\cdot f\intrelG c \;\lor\; f\cdot e\intrelG c \;\Rightarrow\; e\cdot_{\it MM} f\intrelG c;\]
\noindent
the condition of this statement
reflects the fact that the normalization function will only
require computing $e\cdot_{\it MM}f$ whenever either
$e$ or $f$ is an integer.

The implementation of the tactic for the hierarchical case now becomes slightly more
sophisticated than the non-hierarchical one.
When given a goal $a=_A b$ it builds the syntactic representation of
$a$ and $b$ as before, and then looks at the type of $A$ to decide
whether it corresponds to a group, a ring or a field.
Using this information the tactic can then call the lemma stating correctness
of $\N$ w.r.t.\ the appropriate interpretation relation.

\subsection*{Optimization}

As was mentioned in Section~\ref{normalization}, normal forms for
polynomials are unique, contrary to what happens with field expressions
in general.
This suggests that, when $A$ is a group or a ring, the decision procedure
for $a=_Ab$ can be simplified by building expressions $e$ and $f$
interpreting respectively to $a$ and $b$ and comparing their normal forms.
Clearly, this is at most as time-consuming as the previous version, since
computing $\N(e-f)$ requires first computing $\N(e)$ and $\N(f)$.
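As an illustration, here is a toy Haskell version of this simplified decision procedure for a commutative-group-like fragment (the representation of normal forms as sorted variable lists is a simplification of our own, not the representation used in the formalization):

```haskell
import Data.List (sort)

-- Toy syntax: variables and sums only (a commutative-group fragment).
data E = Var Int | Add E E
  deriving (Eq, Show)

-- Toy normal form: the sorted multiset of variables occurring in the
-- expression.  Uniqueness of normal forms is what makes direct
-- comparison sound.
norm :: E -> [Int]
norm (Var i)   = [i]
norm (Add e f) = sort (norm e ++ norm f)

-- Simplified decision procedure: instead of normalizing e - f and
-- comparing with 0 (which would require subtraction), compare the
-- normal forms of e and f directly.
decideEq :: E -> E -> Bool
decideEq e f = norm e == norm f
```
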

Also, since the normalization function was not defined monolithically,
but in terms of the auxiliary functions presented earlier, it is possible
to avoid using divisions altogether when working in rings and groups
by defining a function $\N'$ directly, e.g.\ by
\[\N'(e+f)=\N'(e)+_{\it PP}\N'(f);\]
\noindent the base case now looks like
\[\N'(v)=v\cdot1+0.\]

\noindent
Notice that although we now have two different normalization functions,
we still avoid code duplication, since both are defined in terms of the
same auxiliary functions, and these are where the real work is done.
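To illustrate how $\N'$ can reuse the auxiliary functions, here is a small Haskell sketch (the polynomial representation and the function names are our own, not those of the formalization):

```haskell
import Data.List (sort, sortOn, groupBy)
import Data.Function (on)

-- A monomial is an integer coefficient times a sorted list of
-- variables; a polynomial is a list of monomials.
type Mono = (Integer, [Int])
type Poly = [Mono]

-- Auxiliary addition "+_PP": merge monomials with equal variable
-- parts, dropping zero coefficients.  Shared by N and N'.
addPP :: Poly -> Poly -> Poly
addPP p q =
  [ (c, vs)
  | ms@((_, vs) : _) <- groupBy ((==) `on` snd) (sortOn snd (p ++ q))
  , let c = sum (map fst ms)
  , c /= 0 ]

-- Auxiliary multiplication, also shared.
mulPP :: Poly -> Poly -> Poly
mulPP p q = foldr addPP [] [ [(c * d, sort (vs ++ ws))]
                           | (c, vs) <- p, (d, ws) <- q ]

-- Division-free expressions and the division-free normalizer N'.
data E = Var Int | Lit Integer | Add E E | Mul E E

-- The base case N'(v) = v * 1 + 0 becomes the monomial 1 * v.
norm' :: E -> Poly
norm' (Var v)   = [(1, [v])]
norm' (Lit i)   = addPP [(i, [])] []   -- drops a zero literal
norm' (Add e f) = addPP (norm' e) (norm' f)
norm' (Mul e f) = mulPP (norm' e) (norm' f)
```
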

\section{Tighter Integration}\label{integration} %F

In the previous section we managed to avoid having different syntactic
expressions for the different kinds of algebraic structures.
We unified the types of syntactic expressions into one type $E$.

However we still had different interpretation relations $\intrelF$,
$\intrelR$ and $\intrelG$.
We will now analyze the possibility of unifying those relations
into one interpretation relation $\intrelS$.
This turns out to be possible,
but when one tries to prove the relevant lemmas for it
one runs into problems: to get the proofs finished one needs
to assume an axiom (in the type theory of Coq).

Every field, ring or group has an underlying carrier.
We will write $\crr{A}$ for the carrier of an algebraic structure $A$.
We now define an interpretation relation $\intrelS$ from the type of syntactic
expressions $E$ to an \emph{arbitrary} set\footnote{In the formalization
we actually have \emph{setoids} instead of sets, but that does not make a difference.}
$S$,
where that set is a parameter of the inductive definition.
This inductive definition quantifies
over \emph{different} kinds of algebraic structures in the clauses for the
different algebraic operations.
For instance the inductive clause for addition quantifies over groups.
$$\Pi_{G:{\rm Group}} \Pi_{e,f:E} \Pi_{a,b,c:\crr{G}} \,
(a +_G b =_G c)
\to(e\intrelX{\crr{G}} a)\to(f\intrelX{\crr{G}} b)\to
(e + f \intrelX{\crr{G}} c)$$
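For concreteness, this clause could be rendered along the following lines in Lean (a sketch of our own: \verb|Grp|, \verb|E| and \verb|Interp| are illustrative stand-ins for the Coq definitions, and Lean's equality replaces the setoid equality of the formalization):

```lean
-- A group packaged with its carrier (only addition shown).
structure Grp where
  carrier : Type
  add : carrier → carrier → carrier

inductive E where
  | var : Nat → E
  | add : E → E → E

-- The target set `S` is an index of the relation, while the clause
-- for addition quantifies over an arbitrary group `G` whose carrier
-- is that set.
inductive Interp : E → (S : Type) → S → Prop where
  | add (G : Grp) (e f : E) (a b c : G.carrier) :
      G.add a b = c →
      Interp e G.carrier a →
      Interp f G.carrier b →
      Interp (E.add e f) G.carrier c
```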
With this definition the diagram becomes the following.
\[
\xymatrix{
E \ar`d[dddr][dddrr]^(.4){\intrelS}
& & F : \mbox{Field}\ar[d] \\
& & R : \mbox{Ring}\ar[d] \\
& & G : \mbox{Group}\ar[d]^{\widehat{\;\mbox{\tiny\strut}\cdot\;}} \\
& & S : \mbox{Set}
}
\]
This gives a nice unification of the interpretation
relations.
However when one tries to prove the relevant lemmas
for it in Coq, the obvious way does not work.
To prove e.g.
$$
e \intrelS a \;\land\; e \intrelS b \;\Rightarrow\; a =_S b
$$
\noindent one needs to use inversion with respect to the inductive definition
of $\intrelS$ to get the possible ways that $e \intrelS a$ can be obtained;
but the \tacticname{Inversion} tactic of Coq then only produces
an equality between dependent pairs, whereas what one needs is equality
between the second components of those pairs.
In Coq this is not derivable without the so-called $K$ axiom,
which states uniqueness of equality proofs.
\begin{center}
\verb|(A:Set; x:A; p:(x=x))p==(refl_equal A x)|
\end{center}
\noindent We did not want to assume an axiom in order to have our
tactic prove equalities in algebraic structures that are clearly
provable without it.
For this reason we did not fully implement this more integrated version
of hierarchical reflection.

\section{Conclusion}\label{conclusion} %F+L

\subsection{Discussion}

We have shown how the {\tacticname{Rational}} tactic (first described
in~\cite{geu:wie:zwa:00}), which is used to prove equalities of expressions
in arbitrary fields, can be generalized in two distinct directions.

First, we showed in Section~\ref{functions} how this tactic could be
extended so that it would also look at the arguments of functions; the
same mechanism can be applied not only to unary total functions, as
explained, but also to binary (or $n$-ary) functions, as well as to
partial functions as defined in the C-CoRN library~\cite{ccorn}.

In Section~\ref{hierarchical} we discussed how the same syntactic type $E$
and normalization function $\N$ could be reused to define similar tactics
that will prove equalities in arbitrary rings or commutative groups.
The work described here has been successfully implemented in Coq and
is intensively used throughout the whole C-CoRN library.

Further extensions of this tactic are possible; in particular, the same
approach easily yields a tactic that will work in commutative monoids
(e.g.\ the natural numbers with addition).  For simplicity, and since
it adds nothing to the presentation, this case was left out of the
paper.

Extending the same mechanism to non-commutative structures was not
considered.  The normalization function intensively uses
commutativity of both addition and multiplication, so it cannot
be reused for structures that do not satisfy these; and the
purpose of this work was to reuse as much of the code needed for
{\tacticname{Rational}} as possible.

The correctness of the normalization function w.r.t.\ the
interpretation relation had to be proved three times, once for each
type of structure.  In Section~\ref{integration} we showed one
possible way of overcoming this, which unfortunately failed because proving
correctness of the tactic would then require assuming an axiom which
is not needed to prove the actual equalities that the tactic is meant to solve.

\medskip
\noindent
A different approach to the same problem would be to use the constructor
subtyping of~\cite{bar:raa:00}.
This would allow one to define e.g.\ the interpretation relation for
rings $\intrelR$ by adding one constructor to that for groups $\intrelG$;
proving the relevant lemmas for the broader relation would then only
require proving the new case in all the inductive proofs instead of
duplicating the whole code.

Another advantage of this solution, when compared to the one explored
in Section~\ref{integration}, would be that the tactic could be programmed
and used for e.g.\ groups before rings and fields were even defined.
It would also be more easily extended to other structures.
Unfortunately, constructor subtyping for Coq is at the moment only a
theoretical possibility which has not been implemented.

\medskip
\noindent
It would be interesting to know whether the approach
from Section~\ref{integration} can be made to work without needing
the $K$ axiom for the proofs of the correctness lemmas.

\medskip
\paragraph{Acknowledgments.}
Support for the first author was provided by the Portuguese
{Fun\-da\-\c c\~ao} {pa\-ra} a {Ci\^en\-cia} e {Tec\-no\-lo\-gia,} under grant
SFRH / BD / 4926 / 2001, and by the FCT and FEDER via CLC.

\bibliographystyle{plain}
\bibliography{hrefl}

\end{document}
