\documentclass[orivec,envcountsect,envcountsame,runningheads]{llncs}
\usepackage{includes}
\pagestyle{headings}

\begin{document}
\title{Separation Logic for Non-local Control Flow\texorpdfstring{\\}{ }%
and Block Scope Variables}
\author{Robbert Krebbers \and Freek Wiedijk}
\institute{ICIS,
Radboud University Nijmegen, The Netherlands}

\maketitle{}

\begin{abstract}
\parfillskip=1em plus 1fill
We present an approach for handling non-local control flow (goto and return
statements) in the presence of allocation and deallocation of block scope
variables in imperative programming languages.

We define a small step operational semantics and an axiomatic
semantics (in the form of a separation logic) for a small \C-like language
that combines these two features,
and which also supports pointers to block scope variables.
Our operational semantics represents the program state through a
generalization of Huet's zipper data structure.

We prove soundness of our axiomatic semantics with respect to our
operational semantics.
This proof has been fully formalized in \Coq{}. % proof assistant.
\end{abstract}

\section{Introduction}
\label{section:introduction}

There is a gap between programming language features that can be described well
by a formal semantics, and those that are available in widely used programming
languages.
This gap needs to be bridged in order for formal methods to become mainstream.
However, the interaction between the more `dirty' features of widely used
programming languages tends to make an accurate formal semantics even more
difficult to achieve.
An example of such interaction is the goto statement in the presence of block
scope variables in the \C{} programming language.

\C{} allows unrestricted gotos which (unlike break and continue) may not only
jump out of blocks, but also into them.
Orthogonal to this, blocks may contain local variables, which can
be ``taken out of their block'' by keeping a pointer to them.
This makes non-local control in \C{} (including break and continue) even more
unrestricted: leaving a block
results in the memory of these variables being freed, thus making pointers
to them invalid.
Consider:

\begin{lstlisting}
int *p = NULL;
l: if (p) { return (*p); }
   else { int j = 10; p = &j; goto l; }
\end{lstlisting}

\noindent
Here, when the label \lstinline|l| is passed for the first time, the variable
\lstinline|p| is \NULL.
Hence, execution continues in the block containing \lstinline|j| where
\lstinline|p| is assigned a pointer to \lstinline|j|.
However, after the \lstinline|goto l| statement, the block containing
\lstinline|j| is left, and the memory of \lstinline|j| is freed.
After this, dereferencing \lstinline|p| is no longer legal, making the
program exhibit \emph{undefined behavior}.

If a program exhibits undefined behavior, the \ISO{} \C{} standard~\cite{iso:12}
allows it to do literally anything.
This is to avoid compilers having to insert (possibly expensive) dynamic checks
to handle corner cases.
In particular, in the case of non-local control flow this means that an
implementation can ignore allocation issues when jumping, but a semantics
cannot.
Not describing certain undefined behaviors would therefore mean that some
programs could be proven correct with respect to the formal semantics, whereas
they may crash or behave unexpectedly when compiled with an actual \C{} compiler.

It is well known that a small step semantics is more flexible than a big step
semantics for modeling more intricate programming language features.
In a small step semantics, it is nonetheless intuitive to treat uses of goto
as big steps, as executing them makes the program jump to a totally different
place in one step.
For functional languages, there has been a lot of research on modeling
control (\callcc{} and variants thereof) in a purely small step manner
(see~\cite{fel:hie:92} for example).
This indicates that the intuition of treating uses of non-local control as
big steps is incorrect.

We show that a purely small step semantics is also better suited to handle the
interaction between gotos and block scope variables in imperative programming
languages.
Our semantics lets the goto statement traverse in small steps through the
program to search for its corresponding label.
The required allocations and deallocations are calculated incrementally during
this traversal.

Our choice to consider goto at all may seem surprising.
Since the revolution of structured programming in the seventies,
many people have considered goto as bad programming practice~\cite{dij:68}.
However, some have disagreed~\cite{knu:79}, and gotos are still widely used
in practice.
For example, the current \Linux{} kernel contains about a hundred thousand uses
of goto.
Goto statements are particularly useful for breaking out of multiple nested
loops, and for systematically cleaning up resources after an error has occurred.
Also, gotos can be used to increase performance.
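
\noindent
For instance, the following schematic \C{} fragment (our illustration, not
part of the language defined below) shows the common idiom of using a goto
for a single cleanup point:
\begin{lstlisting}
int f(int n) {
  int r = -1;
  char *buf1 = malloc(n), *buf2 = malloc(n);
  if (!buf1 || !buf2) goto out; /* bail out on allocation failure */
  r = 0;                        /* ... use buf1 and buf2 ... */
out:
  free(buf2); free(buf1);       /* cleanup happens exactly once */
  return r;
}
\end{lstlisting}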

\paragraph{Approach.}
We define a small step operational semantics for a small \C-like language
that supports both non-local control flow and block scope variables.
To obtain more confidence in this semantics, and to support reasoning about
programs in this language, we define an axiomatic semantics for the same
fragment, and prove its soundness with respect to the operational semantics.
%Our semantics is large enough for proofs to become cumbersome to be done by
%hand.
%Therefore, we have formalized all proofs in the \Coq{} proof assistant.

Our operational semantics uses a \emph{zipper}-like data
structure~\cite{hue:97} to store both the location of the substatement that
is being executed and the program stack.
Because we allow pointers to local variables, the stack contains
references to the value of each variable instead of the value itself.
Execution of the program occurs by traversal through the zipper in one of the
following directions: \emph{down}~\(\DDown\), \emph{up}~\(\DUp\),
\emph{jump}~\(\DJumpSym\), or \emph{top}~\(\DTop\).
When a \(\SGoto l\) statement is executed, the direction is changed to
\(\DJump l\), and the semantics performs a 
%non-deterministic
small step traversal through the zipper until the label \(l\) has been reached.

%We extend Hoare triples to become sextuples \(\axstmt \Delta J R P s Q\),
%where \(\Delta\) assigns pre- and postconditions to function names, \(J\)
%assigns assertions to labels, and \(R\) gives the returning assertion.
%The four assertions \(P\), \(Q\), \(J\) and \(R\) correspond exactly to the
%directions \(\DDown\), \(\DUp\), \(\DJumpSym\) and \(\DTop\).

\paragraph{Related work.}
Goto statements (and other forms of non-local control) are often modeled using
continuations.
Appel and Blazy provide a small step continuation semantics for
\Cminor~\cite{app:bla:07} that supports return statements.
\CompCert{} extends their approach to support goto statements in
\Cmedium{}~\cite{ler:12}.
Ellison and Rosu~\cite{ell:ros:12} also use continuations to model gotos
in their \Celeven{} semantics, but whereas the \CompCert{} semantics does not
support block scope variables, they do.
%However, gotos are not natural to be modeled in the presence of block scope
%variables using continuations.
%It requires a tricky reconstruction to determine which variables should be
%freed and allocated after a goto.
We further discuss the differences between continuations and our approach in
Section~\ref{section:operational}.

Tews~\cite{tew:04} defines a denotational semantics for a \C-like language
that supports goto and unstructured switch statements.
His state includes variants for non-local control corresponding to our
directions \(\DJumpSym\) and \(\DTop\).

The most closely related work to our axiomatic semantics is Appel and Blazy's
separation logic for \Cminor{} in \Coq~\cite{app:bla:07}.
Their separation logic supports return statements, but does not support gotos,
nor block scope variables.
Von Oheimb~\cite{ohe:99} defines an operational and axiomatic semantics for
a \Java-like language in \Isabelle.
His language supports both local variables and mutually recursive function calls.
Although his work is fairly different from ours, our approach to mutual
recursion is heavily inspired by his.
Furthermore, Chlipala~\cite{chi:11} gives a separation logic for a low-level
language in \Coq{} that supports gotos.
His approach to automation is impressive, but he does not give an explicit
operational semantics and does not consider block scope variables.

\paragraph{Contribution.}
Our contribution is threefold:
\begin{itemize}
\item We define a small step operational semantics using a novel zipper based
	data structure to handle the interaction between gotos and block scope
	variables in a correct way
	(Sections~\ref{section:language} and~\ref{section:operational}).
\item We give an axiomatic semantics that allows reasoning about programs
	with gotos, pointers to local variables, and mutually recursive function
	calls.
	We demonstrate it by verifying Euclid's algorithm
	(Section~\ref{section:axiomatic}).
\item We prove the soundness of our axiomatic semantics
	(Section~\ref{section:correspondence}).
	This proof has been fully formalized in the \Coq{} proof assistant
	(Section~\ref{section:coq}).
\end{itemize}

\section{The language}
\label{section:language}

% The memory and values of our language are simple:
Our memory is a finite partial
function from natural numbers to values, where a value is either an unbounded
integer, a pointer represented by a natural number corresponding to the index
of a memory cell, or the \NULL{}-pointer.

\begin{definition}
A \emph{partial function} from \(A\) to \(B\) is a (total) function
from \(A\) to \(\option B\), where \(\option A\) is the \emph{option type},
defined as containing either \(\None\) or \(\Some x\) for some \(x \in A\).
A partial function is called \emph{finite} if its domain is finite.
The operation \mbox{\(\minsert x y f\)} stores the value \(y\) at index \(x\), and
\(\mdelete x f\) deletes the value at index \(x\).
\emph{Disjointness}, notation \(\mdisjoint {f_1} {f_2}\), is defined as
\(\forall x\wsdot \mlookup x {f_1} = \None \lor \mlookup x {f_2} = \None\).
Given \(f_1\) and \(f_2\) with \(\mdisjoint {f_1} {f_2}\),
the operation \(f_1 \munion f_2\) yields their \emph{union}.
Moreover, the \emph{inclusion} \(f_1 \subseteq f_2\) is defined as
\(\forall x\,y \wsdot \mlookup x {f_1} = y \imp \mlookup x {f_2} = y\).
\end{definition}

\begin{definition}
\emph{Values} are defined as:
\begin{flalign*}
v \inductive{}&
	\VInt n \separator
	\VPtr b \separator
	\VNULL
\end{flalign*}
\emph{Memories} (typically named \(m\)) are finite partial functions from natural
numbers to values. A value \(v\) is \emph{true}, notation \(\valtrue v\), if
it is of the shape \(\VInt n\) with \(n \neq 0\), or \(\VPtr b\).
%\[
%	\valtrue {(\VInt n)} \defined n \neq 0
%\qquad \valtrue {(\VPtr b)} \defined \True
%\qquad \valtrue \VNULL \defined \False
%\]
It is \emph{false}, notation \(\valfalse v\), otherwise.
\end{definition}

Expressions are side-effect free and will be given a deterministic semantics by
an evaluation function.
The variables used in expressions are De Bruijn indexes, \ie{} the variable
\(\EVar i\) refers to the \(i\)th value on the stack.
De Bruijn indexes save us from having to deal with the shadowing caused by
block scope variables.
This is especially useful in the axiomatic semantics, as we do not want to
lose information when a local variable shadows an already existing one.
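
\noindent
To illustrate (the example is ours), consider a named program
\lstinline|{ int x; { int y; x = y; } }|.
With De Bruijn indexes it reads
\(\SBlock {(\SBlock {(\SAssign {\EVar 1} {\ELoad {\EVar 0}})})}\):
in the inner assignment, the index \(0\) refers to the innermost variable
\lstinline|y|, and the index \(1\) to \lstinline|x| in the enclosing block.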

\begin{definition}
\emph{Expressions} are defined as:
\begin{flalign*}
\binOp\; \inductive{}&
	\EqOp \separator 
	\LeOp \separator 
	\PlusOp \separator 
	\MultOp \separator 
	\DivOp \separator 
	\ModOp \\
e \inductive{}&
	\EVar i \separator 
	\EVal v \separator
	\ELoad e \separator 
	\EBinOp \binOp {e_1} {e_2}
\end{flalign*}
\end{definition}

Stacks (typically named \(\rho\)) are lists of memory indexes rather than
lists of values.
This allows us to treat pointers to both local and allocated storage in a
uniform way.
Evaluation of a variable thus consists of looking up its address in the stack,
and returning a pointer to that address.

\begin{definition}
\emph{Evaluation \(\exprEval e \rho m\) of an expression \(e\)} in a stack
\(\rho\) and memory \(m\) is defined by the following partial function:
\begin{flalign*}
\exprEval {\EVar i} \rho m \defined{}& \VPtr a
	\quad\quad\text{if \(\mlookup i \rho = \Some a\)} \\
\exprEval {\EVal v} \rho m \defined{}& v \\
\exprEval {\ELoad e} \rho m \defined{}& \mlookup a m
	\quad\quad\text{if \(\exprEval e \rho m = \Some {\VPtr a}\)} \\
\exprEval {\EBinOp \binOp {e_1} {e_2}} \rho m \defined{}&
	{\exprEval {e_1} \rho m} {\evalBinOp \binOp} {\exprEval {e_2} \rho m}
\end{flalign*}
\end{definition}
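
\noindent
As an illustration (assuming that \(\evalBinOp \PlusOp\) denotes addition on
integer values), suppose \(\mlookup 0 \rho = \Some b\) and
\(\mlookup b m = \Some {\VInt 3}\). Then
\[
\exprEval {\EBinOp \PlusOp {\ELoad {\EVar 0}} {\EVal {\VInt 1}}} \rho m
	= \Some {\VInt 4},
\]
since \(\exprEval {\EVar 0} \rho m = \Some {\VPtr b}\), and hence
\(\exprEval {\ELoad {\EVar 0}} \rho m = \mlookup b m = \Some {\VInt 3}\).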

\begin{lemma}
\label{lemma:expr_eval_weaken_mem}
If \(m_1 \subseteq m_2\) and \(\exprEval e \rho {m_1} = \Some v\),
then \(\exprEval e \rho {m_2} = \Some v\).
\end{lemma}

\begin{definition}
\emph{Statements} are defined as:
\begin{flalign*}
s \inductive{}&
	\SBlock s \separator
	\SAssign {e_l} {e_r} \separator
	\SCall f {\vec e} \separator
	\SSkip \separator
	\SGoto l \\
& \separator
	\SLabel l s \separator
	\SComp {s_1} {s_2} \separator
	\SIf e {s_1} {s_2} \separator
	\SReturn
\end{flalign*}
\end{definition}

The construct \(\SBlock s\) opens a new scope with one variable.
Since we use De Bruijn indexes for variables, it does not contain the name of
the variable.
%The assignment \(\SAssign {e_l} {e_r}\) stores the result of evaluating \(e_r\)
%at the address \(a\) if \(e_l\) evaluates to \(\VPtr a\).
%The function call \(\SCall f {\vec e}\) evaluates the parameters \(\vec e\)
%and calls \(f\).
For presentation's sake, we have omitted functions that return values (these are
however included in our \Coq{} formalization).
In the semantics presented here, an additional function parameter with a
pointer for the return value can be used instead.
Given a statement \(s\), the function \(\labels s\) collects the labels of
labeled statements in~\(s\), and the function \(\gotos s\) collects
the labels of gotos in \(s\).
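For example, for \(s = \SComp {(\SLabel l \SSkip)} {(\SGoto {l'})}\) we have
\(\labels s = \{ l \}\) and \(\gotos s = \{ l' \}\).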

\section{Operational semantics}
\label{section:operational}

We define the semantics of statements by a small step operational semantics.
That is, computation is defined by the reflexive-transitive closure of a
reduction relation \(\cstepSym\) on \emph{program states}.
This reduction relation traverses the program in small steps by moving the
focus onto the substatement that is being executed.
Uses of non-local control (goto and return) are likewise performed in small
steps rather than in big steps.

In order to model the concept of focusing on the substatement that is being
executed, we need a data structure to capture the location in the program.
For this we define \emph{program contexts} as an extension of Huet's zipper
data structure~\cite{hue:97}.
Program contexts extend the zipper data structure by annotating each
block scope variable with its associated memory index, and furthermore
contain the full call stack of the program.
Program contexts can also be seen as a generalization of continuations (as
used, for example, in \CompCert~\cite{app:bla:07,ler:09,ler:12}).
However, there are some notable differences.

\begin{itemize}
\item Program contexts implicitly contain the stack,
	% (a map assigning memory	indexes to local variables)
	whereas a continuation semantics typically stores the stack separately.
\item Program contexts also contain the part of the program that 
	has been executed, whereas continuations only contain the part that remains
	to be done.
\item Since the complete program is preserved, looping constructs
	like the while statement do not have to duplicate code (see the \Coq{}
	formalization).
\end{itemize}

\noindent
The fact that program contexts do not throw away the parts of the statement that
have been executed is essential for our treatment of goto.
Upon an invocation of a goto, the semantics traverses through the program
context until the corresponding label has been found.
During this traversal it passes all block scope variables that went out of
scope, allowing it to perform required allocations and deallocations in a
natural way.
Hence, the point of this traversal is not so much to \emph{search}
for the label, but rather to incrementally \emph{calculate} the required
allocations and deallocations.

In a continuation semantics, upon the use of a goto, one typically computes,
or looks up, the statement and continuation corresponding to the target label.
However, it is not very natural to reconstruct the required allocations and deallocations
from the current and target continuations.

\begin{definition}
\emph{Singular statement contexts} are defined as:
\begin{flalign*}
\singular E \inductive{}&
	\CCompL {s_2} \separator
	\CCompR {s_1} \separator
	\CIfL e {s_2} \separator
	\CIfR e {s_1} \separator
	\CLabel l
\end{flalign*}
Given a singular statement context \(\singular E\) and a statement \(s\),
\emph{substitution of \(s\) for the hole in \(\singular E\)}, notation
\(\subst {\singular E} s\), is defined in the ordinary way.
\end{definition}

A pair \(\pair {\vec{\singular E}} s\) consisting of a list of singular
statement contexts \(\vec{\singular E}\) and a statement \(s\) forms a zipper
for statements without block scope variables.
That is, \(\vec{\singular E}\) is a statement turned inside-out that
represents a path from the focused substatement \(s\) to the top of the whole
statement.
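
\noindent
For instance, focusing on \(s_1\) in the composition \(\SComp {s_1} {s_2}\)
corresponds to the pair \(\pair {(\cons {\CCompL {s_2}} \nil)} {s_1}\);
once \(s_1\) has been executed and the focus moves to \(s_2\), the pair
becomes \(\pair {(\cons {\CCompR {s_1}} \nil)} {s_2}\), which still records
the already executed part \(s_1\).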

\begin{definition}
\emph{Singular program contexts} are defined as:
\begin{flalign*}
E \inductive{}&
	\singular E \separator
	\CBlock b \separator
	\CCall f {\vec e} \separator
	\CParams {\vec b}
\end{flalign*}
\emph{Program contexts} (typically named \(k\)) are lists of singular
program contexts.
\end{definition}

The previously introduced contexts will be used as follows.
\begin{itemize}
\item When entering a block \(\SBlock s\), the context \(\CBlock b\)
	is pushed onto the program context.
	It associates the block scope variable with its corresponding memory index
	\(b\).
\item Upon a function call \(\SCall f {\vec e}\), the context
	\(\CCall f {\vec e}\) is pushed onto the program context.
	It records the location of the caller so that it can be restored when the
	called function \(f\) returns.
\item When a function body is entered, the context \(\CParams {\vec b}\) is
	pushed onto the program context.
	It contains the list of memory indexes of the function parameters.
\end{itemize}

As program contexts implicitly contain the stack, we define a function to
extract it from them.

\begin{definition}
The \emph{corresponding stack} \(\getstack k\) of \(k\) is defined as:
\begin{flalign*}
	\getstack {(\cons {\singular E} k)} \defined{}& \getstack k \\
	\getstack {(\cons {\CBlock b} k)} \defined{}& \cons b {\getstack k} \\
	\getstack {(\cons {\CCall f {\vec e}} k)} \defined{}& \nil \\
	\getstack {(\cons {\CParams {\vec b}} k)} \defined{}& \app {\vec b} {\getstack k}
\end{flalign*}
We will treat \(\getstackSym\) as an implicit coercion and will omit it everywhere.
\end{definition}

We define \(\getstack {(\cons {\CCall f {\vec e}} k)}\) as
\(\nil\) instead of \(\getstack k\), as otherwise it would be possible to refer
to the local variables of the calling function.
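
\noindent
For example:
\[
\getstack {(\cons {\CBlock {b_1}} {\cons {\CParams {\vec b}}
	{\cons {\CCall f {\vec e}} {\cons {\CBlock {b_2}} \nil}}})}
	= \cons {b_1} {\vec b}
\]
Here \(b_1\) is a block scope variable of the called function, \(\vec b\)
contains the indexes of its parameters, and the block scope variable
\(b_2\) of the caller is indeed invisible.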

\begin{definition}
\emph{Directions}, \emph{focuses} and \emph{program states} are defined as:
\begin{flalign*}
d \inductive{}&
	\DDown \separator
	\DUp \separator
	\DJump l \separator
	\DTop \\
\phi \inductive{}&
	\Stmt d s \separator
	\Call f {\vec v} \separator
	\Return \\
S \inductive{}&
	\State k \phi m
\end{flalign*}
\end{definition}

A program state \(\State k \phi m\) consists of a program context \(k\), the
focused part of the program \(\phi\), and the memory \(m\).
Like Leroy's semantics for \Cminor{}~\cite{ler:09}, we consider
three kinds of states:
\begin{inparaenum}[(a)]
\item execution of statements,
\item calling a function, and
\item returning from a function.
\end{inparaenum}
The \CompCert{} \Cmedium{} semantics~\cite{ler:12} also
includes a state for execution of expressions and a \emph{stuck} state for
undefined behavior.
Since our expressions are side-effect free, we do not need an
additional expression state.
Furthermore, since expressions are deterministic, we can easily capture
undefined behavior by letting the reduction get stuck.

\begin{definition}
The relation \(\allocparams {m_1} {\vec b} {\vec v} {m_2}\) 
(non-deterministically) \emph{allocates
fresh blocks \(\vec b\) for function parameters} \(\vec v\).
It is inductively defined as:
\[
\AXC{\strut}
\UIC{\(\allocparams m \nil \nil m\)}
\normalAlignProof
\DisplayProof
\qquad
\AXC{\(\strut\allocparams {m_1} {\vec b} {\vec v} {m_2}\)}
\AXC{\(\isfree {m_2} b\)}
\BIC{\(\allocparams {m_1} {(\cons b {\vec b})} {(\cons v {\vec v})}
	{\minsert b v {m_2}}\)}
\normalAlignProof
\DisplayProof
\]
\end{definition}

\begin{definition}
Given a function \(\delta\) assigning statements to function names, the
\emph{small step reduction relation} \(\cstep {S_1} {S_2}\) is defined as:
\begin{enumerate}
\setlength{\itemsep}{3pt}
\item \emph{For simple statements:}

\begin{small}\begin{enumerate}
\item
	\(\cstep {\State k {\Stmt \DDown {\SAssign {e_1} {e_2}}} m}
	         {\State k {\Stmt \DUp {\SAssign {e_1} {e_2}}} {\minsert a v m}}\) \\
	for any \(a\) and \(v\) such that \(\exprEval {e_1} k m = \Some {\VPtr a}\),
	\(\exprEval {e_2} k m = \Some v\) and
	\(\iswritable m a\).
\item
	\(\cstep {\State k {\Stmt \DDown {\SCall f {\vec e}}} m}
	         {\State {\cons {\CCall f {\vec e}} k} {\Call f {\vec v}} m}\) \\
	for any \(\vec v\) such that \(\exprEval {e_i} k m = \Some {v_i}\) for each \(i\).
\item
	\(\cstep {\State k {\Stmt \DDown \SSkip} m}
	         {\State k {\Stmt \DUp \SSkip} m}\)
\item
	\(\cstep {\State k {\Stmt \DDown {\SGoto l}} m}
	         {\State k {\Stmt {\DJump l} {\SGoto l}} m}\)
\item
	\(\cstep {\State k {\Stmt \DDown \SReturn} m}
	         {\State k {\Stmt \DTop \SReturn} m}\)
\end{enumerate}\end{small}

\item\emph{For compound statements:}

\begin{small}\begin{enumerate}
\item
	\(\cstep {\State k {\Stmt \DDown {\SBlock s}} m}
	       {\State {\cons {(\CBlock b)} k} {\Stmt \DDown s} {\minsert b v m}}\) \\
	for any \(b\) and \(v\) such that \(\isfree m b\).
\item
	\(\cstep {\State k {\Stmt \DDown {\SComp {s_1} {s_2}}} m}
	         {\State {\cons {(\CCompL {s_2})} k} {\Stmt \DDown {s_1}} m}\)
%	⟦ e ⟧ k m = \Some v
%	val_true v
%	\cstep {\State k {\Stmt \DDown {\SWhile e s}} m
%	       \State {\cons {\CWhile e} k} {\Stmt \DDown s} m

%	⟦ e ⟧ k m = \Some v
%	val_false v
%	\cstep {\State k {\Stmt \DDown {\SWhile e s}} m
%	       \State k {\Stmt \DUp {\SWhile e s} m

\item
	\(\cstep {\State k {\Stmt \DDown {\SIf e {s_1} {s_2}}} m}
	         {\State {\cons {(\CIfL e {s_2})} k} {\Stmt \DDown {s_1}} m}\) \\
	for any \(v\) such that \(\exprEval e k m = \Some v\) and \(\valtrue v\).
\item
	\(\cstep {\State k {\Stmt \DDown {\SIf e {s_1} {s_2}}} m}
	         {\State {\cons {(\CIfR e {s_1})} k} {\Stmt \DDown {s_2}} m}\) \\
	for any \(v\) such that \(\exprEval e k m = \Some v\) and \(\valfalse v\).
\item
	\(\cstep {\State k {\Stmt \DDown {\SLabel l s}} m}
	         {\State {\cons {(\CLabel l)} k} {\Stmt \DDown s} m}\)
\item
	\(\cstep {\State {\cons {(\CBlock b)} k} {\Stmt \DUp s} m}
	         {\State k {\Stmt \DUp {\SBlock s}} {\mdelete b m}}\)
\item
	\(\cstep {\State {\cons {(\CCompL {s_2})} k} {\Stmt \DUp {s_1}} m}
	         {\State {\cons {(\CCompR {s_1})} k} {\Stmt \DDown {s_2}} m}\)
\item
	\(\cstep {\State {\cons {(\CCompR {s_1})} k} {\Stmt \DUp {s_2}} m}
	         {\State k {\Stmt \DUp {\SComp {s_1} {s_2}}} m}\)

%	\UIC{\(\cstep {\State {\cons {\Cwhile e} :: k) {\Stmt \DUp s) m
%	       \State k {\Stmt \DDown (while (e) s)) m

\item
	\(\cstep {\State {\cons {(\CIfL e {s_2})} k} {\Stmt \DUp {s_1}} m}
	         {\State k {\Stmt \DUp {\SIf e {s_1} {s_2}}} m}\)
\item
	\(\cstep {\State {\cons {(\CIfR e {s_1})} k} {\Stmt \DUp {s_2}} m}
	         {\State k {\Stmt \DUp {\SIf e {s_1} {s_2}}} m}\)
\item
	\(\cstep {\State {\cons {(\CLabel l)} k} {\Stmt \DUp s} m}
	         {\State k {\Stmt \DUp {\SLabel l s}} m}\)
\end{enumerate}\end{small}

\item\emph{For function calls:}

\begin{small}\begin{enumerate}
\item
	\(\cstep {\State k {\Call f {\vec v}} {m_1}}
	         {\State {\cons {\CParams {\vec b}} k} {\Stmt \DDown s} {m_2}}\) \\
	for any \(s\), \(\vec b\) and \(m_2\) such that \(\mlookup f \delta = \Some s\) and
	\(\allocparams {m_1} {\vec b} {\vec v} {m_2}\).
\item
	\(\cstep {\State {\cons {\CParams {\vec b}} k} {\Stmt \DUp s} m}
	       {\State k \Return {\mdeletelist {\vec b} m}}\)
\item
	\(\cstep {\State {\cons {\CParams {\vec b}} k} {\Stmt \DTop s} m}
	       {\State k \Return {\mdeletelist {\vec b} m}}\)
\item
	\(\cstep {\State {\cons {\CCall f {\vec e}} k} \Return m}
	         {\State k {\Stmt \DUp {\SCall f {\vec e}}} m}\)
\end{enumerate}\end{small}

\item\emph{For non-local control flow:}

\begin{small}\begin{enumerate}
\item
	\(\cstep {\State {\cons {(\CBlock b)} k} {\Stmt \DTop s} m}
	         {\State k {\Stmt \DTop {\SBlock s}} {\mdelete b m}}\)
\item
	\(\cstep {\State {\cons {\singular E} k} {\Stmt \DTop s} m}
	         {\State k {\Stmt \DTop {\subst {\singular E} s}} m}\)
\item
	\(\cstep {\State k {\Stmt {\DJump l} {\SBlock s}} m}
	         {\State {\cons {(\CBlock b)} k} {\Stmt {\DJump l} s} {\minsert b v m}}\) \\
	for any \(b\) and \(v\) such that \(\isfree m b\), and provided that
	\(l \in \labels s\).
\item
	\(\cstep {\State k {\Stmt {\DJump l} {\SLabel l s}} m}
	         {\State {\cons {(\CLabel l)} k} {\Stmt \DDown s} m}\)
	\label{item:cstep_label_here}
\item
	\(\cstep {\State k {\Stmt {\DJump l} {\subst {\singular E} s}} m}
	         {\State {\cons {\singular E} k} {\Stmt {\DJump l} s} m}\) % \\
	provided that \(l \in \labels s\).
	\label{item:cstep_label_item}
\item
	\(\cstep {\State {\cons {\CBlock b} k} {\Stmt {\DJump l} s} m}
	       {\State k {\Stmt {\DJump l} {\SBlock s}} {\mdelete b m}}\) \\
	provided that \(l \notin \labels s\).
\item
	\(\cstep {\State {\cons {\singular E} k} {\Stmt {\DJump l} s} m}
	       {\State k {\Stmt {\DJump l} {\subst {\singular E} s}} m}\) % \\
	provided that \(l \notin \labels s\).
\end{enumerate}\end{small}
\end{enumerate}
Note that the rules~\ref{item:cstep_label_here} and~\ref{item:cstep_label_item}
overlap, and that the splitting into \(\singular E\) and \(s\) in
rule~\ref{item:cstep_label_item} is non-deterministic.
We let \(\cstepRtcSym\) denote the reflexive-transitive closure, and
\(\cstepBoundedSym {n}\) paths of \(\le n\) steps.
\end{definition}

Execution of a statement \(\State k {\Stmt d s} m\) is performed by traversing
through the program context \(k\) and statement \(s\) in direction \(d\).
The direction \emph{down} \(\DDown\) (respectively \emph{up} \(\DUp\)) is 
used to traverse downwards (respectively upwards) to the next substatement to
be executed. Consider the example from the introduction (with the return
expression omitted).
\begin{lstlisting}
int *p = NULL;
l: if (p) { return; }
   else { int j = 10; p = &j; goto l; }
\end{lstlisting}
Figure~\ref{figure:example} below displays some states corresponding to the
execution of this program, starting at \lstinline|p = &j| in the downwards
direction.

\begin{figure}[bp!]
\centering
\vspace{-.7\baselineskip}
\begin{scriptsize}
\begin{equation*}
\begin{split}
k_1 ={}& \CCompL {\SGoto l}\\[-.3em]
\consSym{}& \CCompR {\SAssign {\EVar 0} {\EVal {\VInt 10}}}\\[-.3em]
\consSym{}& \CBlock {b_j}\\[-.3em]
\consSym{}& \SIfSym\; ( {\ELoad {\EVar 0}} )\; \SReturn\\[-.3em]
	& \SElseSym \; \Box\\[-.3em]
\consSym{}& \CLabel l\\[-.3em]
\consSym{}& \CCompR {\SAssign {\EVar 0} {\EVal \VNULL}}\\[-.3em]
\consSym{}& \CBlock {b_p}\\
\phi_1 ={}& \Stmt \DDown {\SAssign {\EVar 1} {\EVar 0}}\\
m_1 ={}& \{ b_p \mapsto \VNULL, b_j \mapsto 10 \}\\
S_1 ={}& \State {k_1} {\phi_1} {m_1}
\end{split}\qquad
\begin{split}
k_2 ={}& \CCompL {\SGoto l}\\[-.3em]
\consSym{}& \CCompR {\SAssign {\EVar 0} {\EVal {\VInt 10}}}\\[-.3em]
\consSym{}& \CBlock {b_j}\\[-.3em]
\consSym{}& \SIfSym\; ( {\ELoad {\EVar 0}} )\; \SReturn\\[-.3em]
	& \SElseSym \; \Box\\[-.3em]
\consSym{}& \CLabel l\\[-.3em]
\consSym{}& \CCompR {\SAssign {\EVar 0} {\EVal \VNULL}}\\[-.3em]
\consSym{}& \CBlock {b_p} \\
\phi_2 ={}& \Stmt \DUp {\SAssign {\EVar 1} {\EVar 0}} \\
m_2 ={}& \{ b_p \mapsto \VPtr {b_j}, b_j \mapsto 10 \} \\
S_2 ={}& \State {k_2} {\phi_2} {m_2}
\end{split}\qquad
\begin{split}
k_3 ={}& \CCompR {\SAssign {\EVar 1} {\EVar 0}}\\[-.3em]
\consSym{}& \CCompR {\SAssign {\EVar 0} {\EVal {\VInt 10}}}\\[-.3em]
\consSym{}& \CBlock {b_j}\\[-.3em]
\consSym{}& \SIfSym\; (\ELoad {\EVar 0})\; \SReturn\\[-.3em]
	& \SElseSym \; \Box\\[-.3em]
\consSym{}& \CLabel l\\[-.3em]
\consSym{}& \CCompR {\SAssign {\EVar 0} {\EVal \VNULL}}\\[-.3em]
\consSym{}& \CBlock {b_p} \\
\phi_3 ={}& \Stmt \DDown {\SGoto l} \\
m_3 ={}& \{ b_p \mapsto \VPtr {b_j}, b_j \mapsto 10 \} \\
S_3 ={}& \State {k_3} {\phi_3} {m_3}
\end{split}
\end{equation*}
\end{scriptsize}
\caption{An example reduction path \(S_1 \cstepSym S_2 \cstepSym S_3\).}
\label{figure:example}
\end{figure}

\begin{comment}
Here, upon entering a block, a fresh memory index \(b\) is associated to the
variable \(\EVar 0\), and it is initialized with an arbitrary value \(w\).
This is one of the two sources of non-determinism in our operational semantics.
\end{comment}

Execution of a function call \(\State k {\Stmt \DDown {\SCall f {\vec e}}} m\)
consists of two reductions.
The reduction to \(\State {\cons {\CCall f {\vec e}} k} {\Call f {\vec v}} m\)
evaluates the function parameters \(\vec e\) to values \(\vec v\), and stores
the location of the calling function on the program context.
The subsequent reduction to
\(\State {\cons {\CParams {\vec b}} {\cons {\CCall f {\vec e}} k}}
{\Stmt \DDown s} {m'}\) looks up the called function's body \(s\), allocates
storage for the parameters \(\vec v\), and then performs a transition to
execute the called function's body.

% Apart from traversal in upwards and downwards direction,
We consider two directions for non-local control flow:
\emph{jump} \(\DJump l\) and \emph{top} \(\DTop\).
After a \(\SGoto l\) the direction \(\DJump l\) is used to traverse to the
substatement labeled \(l\).
Although this search is non-deterministic, there are side conditions
that ensure it does not go back and forth between the same locations.
These are required because otherwise the search could impose
non-terminating behavior on terminating programs.
The non-determinism could be removed entirely by adding further
side conditions.
However
%, as allocation of variables is another source of non-determinism,
we have refrained from doing so in order to ease the formalization.
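
\noindent
To illustrate the jump direction, let us sketch how the reduction of
Figure~\ref{figure:example} continues from \(S_3\):
\[
S_3 \cstepSym
\State {k_3} {\Stmt {\DJump l} {\SGoto l}} {m_3}
\cstepRtcSym
\State {k_4} {\Stmt \DDown {s_4}} {\mdelete {b_j} {m_3}}
\]
where \(s_4\) is the if statement and
\(k_4 = \cons {\CLabel l} {\cons {\CCompR {\SAssign {\EVar 0} {\EVal \VNULL}}}
{\cons {\CBlock {b_p}} \nil}}\).
During the traversal in direction \(\DJump l\), the two contexts of the form
\(\CCompR s\) and the context of the if statement are popped, and \(b_j\) is
deallocated when passing \(\CBlock {b_j}\).
At the label, execution continues downwards, but the memory no longer
contains \(b_j\), so in the program from the introduction the subsequent
evaluation of \lstinline|*p| fails, making the reduction get stuck and
thereby reflecting the undefined behavior.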

The direction \(\DTop\) is used to traverse to the \emph{top} of the
statement after a \(\SReturn\).
%The reduction rules for this direction are similar to those of goto, but can
%only go upwards.
Once it reaches the top, two reductions leave the called
function.
The first reduction, from \(\State {\cons {\CParams {\vec b}}
{\cons {\CCall f {\vec e}} k}} {\Stmt \DTop s} m\) to
\mbox{\(\State {\cons {\CCall f {\vec e}} k} \Return {\mdeletelist {\vec b} m}\)},
deallocates the function parameters, and the second, to
\(\State k {\Stmt \DUp {\SCall f {\vec e}}} m\), reinstates the calling function.

When we relate our operational and axiomatic semantics in
Section~\ref{section:correspondence}, we will have to restrict the traversal
through the program to remain below a certain context.

\begin{definition}
The \emph{\(k\)-restricted reduction} \(\cstepBelow k {S_1} {S_2}\) is defined
as \(\cstep {S_1} {S_2}\) provided that \(k\) is a suffix of the program context
of \(S_2\).
We let \(\cstepBelowRtcSym k\) denote the reflexive-transitive closure, and
\(\cstepBelowBoundedSym k n\) paths of \(\le n\) steps.
\end{definition}

% For the soundness proof of the axiomatic semantics we often need to cut of
% the maximal \(\app l k\)-restricted prefix of a \(\cstepBelowSym k\)-reduction
% sequence.

\begin{comment}
\begin{lemma}
\label{lemma:cstep_subctx_cut}
Given a reduction path \(\cstepBelowBounded k n {S_1} {S_3}\), where the
context of \(S_1\) is of the shape \(\app l k\), then either:
\begin{enumerate}
\item \(\cstepBelowBounded {\app l k} n {S_1} {S_3}\), or,
\item there is a state \(S_2\) with
	\(\cstepBelowBounded {\app l k} n {S_1} {S_2} \cstepBelowBoundedSym k n {S_3}\)
	and \(S_2 \not\cstepBelowSym {\app l k}\).
\end{enumerate}
\end{lemma}
\end{comment}

\begin{lemma}
If  \(\cstepBelowRtc{k} {\State k {\Stmt d s} m}
{\State k {\Stmt {d'} {s'}} {m'}}\), then \(s = s'\).
\end{lemma}

The previous lemma shows that the small step semantics indeed behaves as
traversing through a zipper.
Its proof is not entirely trivial due to the presence of function
calls, as these add the statement of the called function to the state.
%We proved this lemma by defining a well-formedness relation on contexts,
%showed that this well-formedness relation implies uniqueness of statements,
%and proved that reduction preserves well-formed contexts.
%We refer to the \Coq{} formalization for the proof of this lemma.

\section{Axiomatic semantics}
\label{section:axiomatic}

Judgments of Hoare logic are triples \(\axShort P s Q\), where \(s\) is a
statement, and \(P\) and \(Q\) are \emph{assertions} called the pre- and
postcondition.
The intuitive reading of such a triple is: if \(P\) holds for the state before
executing \(s\), and execution of \(s\) terminates, then \(Q\) holds
afterwards.
To deal with non-local control flow and function calls, our
judgments become sextuples \(\axstmt \Delta J R P s Q\), where:

\begin{itemize}
\item \(\Delta\) is a finite function from function names to their pre- and
	postconditions.
	This environment is used to cope with (mutually) recursive functions.
\item \(J\) is a function that gives the jumping condition for each \(\SGotoSym\).
	When executing a \(\SGoto l\), the assertion \(J\,l\) has to hold.
\item \(R\) is the assertion that has to hold when executing a \(\SReturn\).
\end{itemize}

\noindent
The assertions \(P\), \(Q\), \(J\) and \(R\) correspond to the four directions
\(\DDown\), \(\DUp\), \(\DJumpSym\) and \(\DTop\) in which traversal through a
statement can be performed.
We therefore often treat the sextuple as a triple
\(\axstmtP \Delta {\bar P} s\), where \(\bar P\) is a function from directions
to assertions such that \(\bar P\,\DDown = P\), \(\bar P\,\DUp = Q\),
\(\bar P\,(\DJump l) = J\,l\) and \(\bar P\,\DTop = R\).

We use a shallow embedding for the representation of assertions.
This treatment is similar
to that of Appel and Blazy~\cite{app:bla:07} and Von Oheimb~\cite{ohe:99}.

\begin{definition}
\label{definition:assertions}
\emph{Assertions} are predicates over the stack and the memory.
We define the following connectives on assertions.
\[
\begin{split}
P \axImp Q \defined{}& \lambda \rho\,m \wsdot P\,\rho\,m \imp Q\,\rho\,m \\
P \axAnd Q \defined{}& \lambda \rho\,m \wsdot P\,\rho\,m \land Q\,\rho\,m \\
P \axOr Q \defined{}& \lambda \rho\,m \wsdot P\,\rho\,m \lor Q\,\rho\,m \\
\axNeg P \defined{}& \lambda \rho\,m \wsdot \neg P\,\rho\,m \\
\axForall x \wsdot P\,x \defined{}&
	\lambda \rho\,m \wsdot \forall x \wsdot P\,x\,\rho\,m \\
\axExists x \wsdot P\,x \defined{}&
	\lambda \rho\,m \wsdot \exists x \wsdot P\,x\,\rho\,m
\end{split}\qquad
\begin{split}
\axProp P \defined{}& \lambda \rho\,m \wsdot P \\
\axEval e v \defined{}& \lambda \rho\,m \wsdot \exprEval e \rho m = \Some v \\
\axEval e \dash \defined{}& \axExists v \wsdot \axEval e v \\
\axEval e \top \defined{}& \axExists v \wsdot \axProp {\valtrue v} \axAnd \axEval e v \\
\axEval e \bot \defined{}& \axExists v \wsdot \axProp {\valfalse v} \axAnd \axEval e v \\
\axSubst a v P \defined{}& \lambda \rho\,m \wsdot P\,\rho\,\minsert a v m
\end{split}
\]

\noindent
We treat \(\axProp {\,\_\,}\) as an implicit coercion, for example, we write
\(\axTrue\) instead of \(\axProp \True\).
Also, we often lift the above connectives pointwise to functions into
assertions; for example, we write \(P \axAnd Q\) instead of \(\lambda v \wsdot P\,v \axAnd Q\,v\).
\end{definition}

\begin{definition}
An assertion \(P\) is \emph{stack independent} if
\(P\,\rho_1\,m \imp P\,\rho_2\,m\) for each memory \(m\) and stacks \(\rho_1\)
and \(\rho_2\), and similarly is \emph{memory independent} if
\(P\,\rho\,m_1 \imp P\,\rho\,m_2\) for each stack \(\rho\) and memories
\(m_1\) and \(m_2\).
\end{definition}

Next, we define the assertions of separation logic~\cite{hea:rey:yan:01}.
The assertion \(\axEmp\) asserts that the memory is empty.
The \emph{separating conjunction} \(P \axSep Q\) asserts that the
memory can be split into two disjoint parts such that \(P\) holds in the one
part, and \(Q\) in the other.
Finally, \(\axSingleton {e_1} {e_2}\) asserts that the memory consists of
exactly one cell at address \(e_1\) with contents \(e_2\), and
\(\axAssign {e_1} {e_2}\) asserts that the memory contains at least a cell
at address \(e_1\) with contents \(e_2\).

\begin{definition}
The \emph{connectives of separation logic} are defined as follows.
\begin{flalign*}
\axEmp \defined{}& \lambda \rho\,m \wsdot m = \emptyset \\
P \axSep Q \defined{}& \lambda \rho\,m \wsdot
	\exists m_1\,m_2 \wsdot m = m_1 \munion m_2
		\land \mdisjoint {m_1} {m_2}
		\land P\,\rho\,m_1 \land Q\,\rho\,m_2 \\
\axSingleton {e_1} {e_2} \defined{}& \lambda \rho\,m \wsdot \exists b\,v \wsdot
	\exprEval {e_1} \rho m = \Some {\VPtr b} \land
	\exprEval {e_2} \rho m = \Some v \land
	m = \msingleton b v \\
\axSingleton {e_1} \dash \defined{}&
	\axExists e_2 \wsdot \axSingleton {e_1} {e_2} \\
\axAssign {e_1} {e_2} \defined{}& \lambda \rho\,m \wsdot \exists b\,v \wsdot
	\exprEval {e_1} \rho m = \Some {\VPtr b} \land
	\exprEval {e_2} \rho m = \Some v \land
	\mlookup b m = \Some v \\
\axAssign {e_1} \dash \defined{}&
	\axExists e_2 \wsdot \axAssign {e_1} {e_2}
\end{flalign*}
\end{definition}

To deal with block scope variables we need to lift an assertion such that
the De Bruijn indexes of its variables are increased.
We define the lifting \(\axLift P\) of an assertion \(P\) semantically, and
prove that it indeed behaves as expected.

\begin{definition}
The assertion \(\axLift P\) is defined as
\(\lambda \rho\,m \wsdot P\,(\tail \rho)\,m\).
\end{definition}

\begin{lemma}
We have \(\axLift {(\axEval e v)} = \axEval {(\exprLift e)} v\) and
\(\axLift {(\axSingleton {e_1} {e_2})} =
\axSingleton {(\exprLift {e_1})} {(\exprLift {e_2})}\), where the
operation \(\exprLift e\) replaces each variable \(\EVar i\) in \(e\) by
\(\EVar {i + 1}\).
Furthermore, \(\axLift {(\_\,)}\) distributes over the connectives \(\axImp\),
\(\axAnd\), \(\axOr\), \(\axNeg\), \(\axForall\), \(\axExists\), and
\(\axSep\).
\end{lemma}
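As a concrete instance of the first equation (unfolding the definitions
above):

```latex
\[
\axLift {(\axEval {(\EVar 0)} v)} = \axEval {(\EVar 1)} v,
\qquad\text{since}\quad \exprLift {(\EVar 0)} = \EVar 1 .
\]
```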

In order to relate the pre- and postcondition of a function, we allow universal
quantification over arbitrary logical variables \(\vec y\).
The specification of a function with input values \(\vec v\) therefore
consists of a precondition \(P\,\vec y\,\vec v\) and a postcondition \(Q\,\vec y\,\vec v\).
Both should be stack independent,
because local variables have a different meaning in the calling function
than in the called function's body.
We will write such a specification as
\(\forall \vec y \,\forall\vec v \wsdot \axFun {P\,\vec y\,\vec v} {Q\,\vec y\,\vec v}\).

\begin{definition}
Given a function \(\delta\) assigning statements to function names,
the rules of the axiomatic semantics are defined as:

\def\defaultHypSeparation{\quad}
\def\ScoreOverhang{.1em}
\begin{small}
\begin{gather}
\AXC{\(\axstmt \Delta J R P s Q\)}
\UIC{\(\axstmt \Delta
	{A \axSep J}
	{A \axSep R}
	{A \axSep P}
	s
	{A \axSep Q}\)}
\normalAlignProof
\DisplayProof \quad
\AXC{\(\!\forall x. (\axstmt \Delta J R {P\,x} s Q)\!\)}
\UIC{\(\axstmt \Delta J	R {\axExists x \wsdot P\,x} s Q\)}
\normalAlignProof
\DisplayProof \tag{frame \& exists} \\[.4em]
%
(\forall l \in \labels s \wsdot J' l \axImp J l)\quad
	(\forall l \notin \labels s \wsdot J l \axImp J' l)\quad
   R \axImp R' \notag \\[-.1em]
\AXC{\qquad\qquad\(P' \axImp P\quad
	\axstmt \Delta J R P s Q\quad
	Q \axImp Q'\)\qquad\qquad}
\UIC{\(\axstmt \Delta {J'} {R'} {P'} s {Q'}\)}
\normalAlignProof
\DisplayProof \tag{weaken} \\[.6em] \displaybreak[2]
%
\AXC{}
\UIC{\(\axstmt \Delta J R P \SSkip P\)}
\normalAlignProof
\DisplayProof
\qquad
\AXC{}
\UIC{\(\axstmt \Delta J R R \SReturn Q\)}
\normalAlignProof
\DisplayProof \tag{skip \& return}  \\[.4em]
%
\AXC{}
\UIC{\(\axstmt \Delta J R
	{\axExists a\,v \wsdot \axEval {e_1} a
		\axAnd \axEval {e_2} v \axAnd \axAssign {\EVal {\VPtr a}} \dash
		\axAnd \axSubst a v P}
	{\SAssign {e_1} {e_2}}
	P\)}
\normalAlignProof
\DisplayProof \tag{assign}  \\[.4em]
%
\AXC{\strut\(\axstmt \Delta J R {J\;l} s Q\)}
\UIC{\(\axstmt \Delta J R {J\;l} {\SLabel l s} Q\)}
\normalAlignProof
\DisplayProof
\qquad
\AXC{\strut}
\UIC{\(\axstmt \Delta J R {J\;l} {\SGoto l} Q\)}
\normalAlignProof
\DisplayProof \tag{label \& goto}  \\[.4em]
%
\AXC{\(\axstmt \Delta
	{\axSingleton {\EVar 0} \dash \axSep \axLift J}
	{\axSingleton {\EVar 0} \dash \axSep \axLift R}
	{\axSingleton {\EVar 0} \dash \axSep \axLift P}
	s
	{\axSingleton {\EVar 0} \dash \axSep \axLift Q}\)}
\UIC{\(\axstmt \Delta J R P {\SBlock s} Q\)}
\normalAlignProof
\DisplayProof \tag{block}  \\[.4em]
%
\AXC{\(\axstmt \Delta J R P {s_1} {P'}\)}
\AXC{\(\axstmt \Delta J R {P'} {s_2} Q\)}
\BIC{\(\axstmt \Delta J R P {\SComp {s_1} {s_2}} Q\)}
\normalAlignProof
\DisplayProof \tag{comp}  \\[.4em]
%
\AXC{\(\axstmt \Delta J R {\axEval e \top \axAnd P} {s_1} Q\)}
\AXC{\(\axstmt \Delta J R {\axEval e \bot \axAnd P} {s_2} Q\)}
\BIC{\(\axstmt \Delta J R {\axEval e \dash \axAnd P} {\SIf e {s_1} {s_2}} Q\)}
\normalAlignProof
\DisplayProof \tag{cond}  \\[.4em]
%
\AXC{\(\mlookup f \Delta = \Some {\axFun P Q}\)}
\AXC{\(\axEval {\vec e} {\vec v} \axAnd P\,\vec y\,\vec v \axImp A\)}
\AXC{\(A\) memory independent}
\TIC{\(\axstmt \Delta J R
	{\axEval {\vec e} {\vec v} \axAnd P\,\vec y\,\vec v}
	{\SCall f {\vec e}}
	{A \axAnd Q\,\vec y\,\vec v}\)}
\normalAlignProof
\DisplayProof \tag{call}  \\[.4em]
%
\begin{split}
& \forall f\,P'\,Q' \wsdot \Delta' f = (\forall \vec z \,
		\forall\vec w \wsdot \axFun {P\,\vec z\,\vec w} {Q\,\vec z\,\vec w}) \imp
	\forall \vec y\,\vec v \wsdot \\[-.3em]
& (\axstmt {\Delta' \cup \Delta}
	{\!\lambda l . \False}
	{\!\Pi_* [\axSingleton {x_i} \dash] \axSep Q'\vec y\,\vec v}
	{\Pi_* [\axSingleton {x_i} {v_i}] \axSep P'\vec y\,\vec v}
	{\mlookup f \delta}
	{\Pi_* [\axSingleton {x_i} \dash] \axSep Q'\vec y\,\vec v})
\end{split}\hspace{-.5pt} \notag \\[-.1em]
\AXC{\(\axstmt {\Delta' \cup \Delta} J R P s Q\)\qquad\qquad}
\AXC{\qquad\qquad\(\mdom {\Delta'} \subseteq \mdom \delta\)}
\BIC{\(\axstmt \Delta J R P s Q\)}
\normalAlignProof
\DisplayProof \tag{add funs}
\end{gather}
\end{small}
\end{definition}

The traditional \emph{frame rule} of separation logic~\cite{hea:rey:yan:01}
includes the side-condition
\(\mathsf{vars}\,s \cap \mathsf{vars}\,A = \emptyset\) on the free variables
in the statement \(s\) and assertion \(A\).
However, as our local variables are just (immutable) references into the memory,
we do not need this side-condition.
Also, the (frame) and (block) rules are uniform in all assertions, allowing us
to write:

\[
\AXC{\(\axstmtP \Delta {\bar P} s\)}
\UIC{\(\axstmtP \Delta {A \axSep \bar P} s\)}
\normalAlignProof
\DisplayProof
\qquad
\AXC{\(\axstmtP \Delta {\bar P} {\SBlock s}\)}
\UIC{\(\axstmtP \Delta
	{\axSingleton {\EVar 0} \dash \axSep \axLift {\bar P}} s\)}
\normalAlignProof
\DisplayProof
\]

Since the return and goto statements leave the normal control flow, the
postconditions of the (goto) and (return) rules are arbitrary.

Our rules for function calls are similar to those by Von Oheimb~\cite{ohe:99}.
The (call) rule is to call a function that is already in \(\Delta\).
It is important to note that its postcondition is not
\(\axEval {\vec e} {\vec v} \axAnd Q\,\vec y\,\vec v\),
because evaluation of \(\vec e\) may yield different values after the call to \(f\).
However, in case \(\vec e\) contains no load expressions, we have that
\(\axEval {\vec e} {\vec v}\) is memory independent, and we can simply take
\(A\) to be \(\axEval {\vec e} {\vec v}\).

The (add funs) rule can be used to add an arbitrary family \(\Delta'\) of
specifications of (possibly mutually recursive) functions to \(\Delta\).
For each function \(f\) in \(\Delta'\) with precondition \(P'\) and
postcondition \(Q'\), it has to be verified that the function body
\(\mlookup f \delta\) is correct for all instantiations of the
logical variables \(\vec y\) and input values \(\vec v\).
The precondition \(\Pi_* [\axSingleton {x_i} {v_i}] \axSep P'\vec y\,\vec v\),
where \(\Pi_* [\axSingleton {x_i} {v_i}]\) denotes the assertion
\(\axSingleton {x_1} {v_1} \axSep \ldots \axSep \axSingleton {x_n} {v_n}\), states
that the function parameters \(\vec x\) are allocated with values \(\vec v\)
for which the precondition \(P'\) of the function holds.
The post- and returning condition
\(\Pi_* [\axSingleton {x_i} \dash] \axSep Q'\vec y\,\vec v\) ensure that the
parameters have not been deallocated while executing the function body and that
the postcondition \(Q'\) of the function holds on a return.
The jumping condition \(\lambda l . \False\) ensures that all gotos
jump to a label that occurs in the function body.

%Our rules do not include side-conditions on the labels of labeled
%statements or gotos.
%In case of double occurrences of labeled statements, the operational
%semantics will jump non-deterministically.
%The environment \(J\) therefore ensures that each such jump is allowed.
%Also, in the rule to add functions to \(\Delta\), the jumping condition is
%\(\lambda l \wsdot \False\) to ensure that all gotos are bounded.

As an example, we verify Euclid's algorithm for computing the greatest
common divisor.
We first verify the \lstinline|swap| function, which takes two pointers \(p\)
and \(q\) and swaps their contents.
Its specification is as follows:
\[
\forall y\,z \, \forall p\,q\wsdot \axFun
	{\axSingleton p {\EVal y} \axSep \axSingleton q {\EVal z}}
	{\axSingleton p {\EVal z} \axSep \axSingleton q {\EVal y}}
\]
Here, universal quantification over the logical variables \(y\) and \(z\) is
used to relate the contents of the pointers \(p\) and \(q\) in the pre- and
postcondition.
In order to add this function to the context \(\Delta\) of verified functions
using the (add funs) rule, we have to prove that the body satisfies the above
specification.
An outline of this proof (with implicit uses of weakening) is displayed in
Figure~\ref{figure:euclid}.

To verify the body of the \lstinline|gcd| function, we use a jumping environment
\(J\) that assigns \(\axExists y'z' \wsdot
	\axSingleton {\EVar 0} {\EVal {\VInt y'}}
   \axSep \axSingleton {\EVar 1} {\VInt z'}
   \axAnd \gcd y z = \gcd {y'} {z'}\) to the label \(l\).
For the function call to \lstinline|swap|, we use the (frame) rule with the
framing condition \(z' \neq 0 \axAnd \gcd y z = \gcd {y'} {z'} \axAnd \axEmp\)
as displayed in Figure~\ref{figure:euclid}.
We refer to the \Coq{} formalization for the full details of these proofs.

\begin{figure}[pt!]
\renewcommand{\gcd}[2]{\textsf{gcd}\;#1\;#2}
\newcommand{\assertion}[1]{\textcolor{Blue}{ \{ #1 \} }}
\newcommand{\assertionStart}[1]{\textcolor{Blue}{ \{ #1  }}
\newcommand{\assertionEnd}[1]{\textcolor{Blue}{ \;\; #1 \} }}
\setlength{\jot}{0.25em}
\begin{minipage}[b]{0.46\textwidth}
Euclid's algorithm in \C:
\begin{scriptsize}
\begin{lstlisting}[xleftmargin=0.0cm]
void swap(int *p, int *q) {
  int z = *p; *p = *q; *q = z;
}

int gcd(int y, int z) {
  l: if (z) {
    y = y % z; swap(&y, &z); goto l;
  }
  return y;
}
\end{lstlisting}
\end{scriptsize}
\medskip

Verification of the body of \lstinline|swap|:
\begin{scriptsize}
\begin{flalign*}
& \assertion{ \axSingleton {\EVar 0} {\EVal p}
   \axSep \axSingleton {\EVar 1} {\EVal q}
   \axSep \axSingleton {\EVal p} {\EVal y}
   \axSep \axSingleton {\EVal q} {\EVal z} } \\
& \quad \SBlockSym\;( \\
& \assertion{ \axSingleton {\EVar 0} \dash
   \axSep \axSingleton {\EVar 1} {\EVal p}
   \axSep \axSingleton {\EVar 2} {\EVal q}
   \axSep \axSingleton {\EVal p} {\EVal y}
   \axSep \axSingleton {\EVal q} {\EVal z} } \\   
& \quad \quad \SAssign {\EVar 0} {\ELoad {(\ELoad {\EVar 1})}} \SCompSym \\
& \assertion{ \axSingleton {\EVar 0} {\EVal y}
   \axSep \axSingleton {\EVar 1} {\EVal p}
   \axSep \axSingleton {\EVar 2} {\EVal q}
   \axSep \axSingleton {\EVal p} {\EVal y}
   \axSep \axSingleton {\EVal q} {\EVal z} } \\
& \quad \quad \SAssign {\ELoad {\EVar 1}} {\ELoad {(\ELoad {\EVar 2})}} \SCompSym \\
& \assertion{ \axSingleton {\EVar 0} {\EVal y}
   \axSep \axSingleton {\EVar 1} {\EVal p}
   \axSep \axSingleton {\EVar 2} {\EVal q}
   \axSep \axSingleton {\EVal p} {\EVal z}
   \axSep \axSingleton {\EVal q} {\EVal z} } \\
& \quad \quad \SAssign {\ELoad {\EVar 2}} {\ELoad {\EVar 0}} \\
& \assertion{ \axSingleton {\EVar 0} {\EVal y}
   \axSep \axSingleton {\EVar 1} {\EVal p}
   \axSep \axSingleton {\EVar 2} {\EVal q}
   \axSep \axSingleton {\EVal p} {\EVal z}
   \axSep \axSingleton {\EVal q} {\EVal y} } \\
& \quad ) \\
& \assertion{ \axSingleton {\EVar 0} {\EVal p}
   \axSep \axSingleton {\EVar 1} {\EVal q}
   \axSep \axSingleton {\EVal p} {\EVal z}
   \axSep \axSingleton {\EVal q} {\EVal y} }
\end{flalign*}
\end{scriptsize}
\end{minipage}\quad
\begin{minipage}[b]{0.50\textwidth}
Verification of the body of \lstinline|gcd|:
\begin{scriptsize}
\begin{flalign*}
& \assertion{ \axSingleton {\EVar 0} {\EVal {\VInt y}}
   \axSep \axSingleton {\EVar 1} {\VInt z}} \\
& \quad l\;\SLabelSym \\
& \assertion{ J\;l } \\
& \quad \SIfSym\; (\ELoad {\EVar 1}) \;( \\
& \assertion{ \axEval {\ELoad {\EVar 1}} \top \axAnd J\;l } \\
& \assertionStart{ \axSingleton {\EVar 0} {\EVal {\VInt y'}}
   \axSep \axSingleton {\EVar 1} {\VInt z'} \axAnd } \\[-.3em]
& \assertionEnd{ z' \neq 0 \axAnd \gcd y z = \gcd {y'} {z'} } \\
& \quad \quad \SAssign {\EVar 0} {\EBinOp \ModOp {\ELoad {\EVar 0}}
	{\ELoad {\EVar 1}}} \SCompSym \\
& \assertionStart{ \axSingleton {\EVar 0} {\EVal {\VInt {(y' \ModOp z')}}}
   \axSep \axSingleton {\EVar 1} {\VInt z'} \axSep} \\[-.3em]
& \assertionEnd{ (z' \neq 0 \axAnd \gcd y z = \gcd {y'} {z'} \axAnd \axEmp) } \\
%& \assertion{ \axSingleton {\EVar 0} {\EVal {\VInt {(y' \ModOp z')}}}
%	\axSep \axSingleton {\EVar 1} {\VInt z'} } \\
& \quad \quad \SCall {\texttt{swap}} {\EVar 0, \; \EVar 1} \SCompSym \\
%& \assertion{ \axSingleton {\EVar 0} {\VInt z'}
%	\axSep \axSingleton {\EVar 1} {\EVal {\VInt {(y' \ModOp z')}}} } \\
& \assertionStart{ \axSingleton {\EVar 0} {\VInt z'}
	\axSep \axSingleton {\EVar 1} {\EVal {\VInt {(y' \ModOp z')}}} \axSep} \\[-.3em]
& \assertionEnd{ (z' \neq 0 \axAnd \gcd y z = \gcd {y'} {z'} \axAnd \axEmp) } \\
& \assertion{ J\;l } \\
& \quad \quad \SGoto l \\
& \assertion{ \axSingleton {\EVar 0} {\EVal {\VInt {(\gcd y z)}}}
   \axSep \axSingleton {\EVar 1} {\VInt 0} } \\
& \quad ) \; \SElseSym \\
& \assertion{ \axEval {\ELoad {\EVar 1}} \bot \axAnd J\;l } \\
& \assertion{ \axSingleton {\EVar 0} {\EVal {\VInt y'}}
   \axSep \axSingleton {\EVar 1} {\EVal {\VInt 0}}
   \axAnd \gcd y z = \gcd {y'} 0 } \\
& \quad \quad \SSkip \\
& \assertion{ \axSingleton {\EVar 0} {\EVal {\VInt {(\gcd y z)}}}
   \axSep \axSingleton {\EVar 1} {\VInt 0} }
\end{flalign*}
\end{scriptsize}
\end{minipage}
\vspace{-\baselineskip}
\caption{Verification of Euclid's algorithm.}
\label{figure:euclid}
\end{figure}
 
\section{Soundness of the axiomatic semantics}
\label{section:correspondence}

We define \(\Axstmt \Delta J R P s Q\) for Hoare sextuples in terms of
our operational semantics.
Proving \emph{soundness} of the axiomatic semantics then consists of showing
that \(\axstmt \Delta J R P s Q\) implies that \(\Axstmt \Delta J R P s Q\).

We want \(\Axstmt \Delta J R P s Q\) to ensure partial program correctness.
Intuitively, that means: if \(P\,k\,m\) and
\(\cstepRtc {\State k {\Stmt s \DDown} {m}} {\State k {\Stmt s \DUp} {m'}}
\), then \(Q\,k\,m'\).
However, due to the additional features, this is too simple.
\begin{enumerate}
\item We also have to account for reductions starting in \(\DTop\) or
	\(\DJumpSym\) direction.
	Hence, we take the four assertions \(J\), \(R\), \(P\) and \(Q\) together
	as one function \(\bar P\) and write \(\Axstmt \Delta J R P s Q\) as
	\(\AxstmtP \Delta {\bar P} s\).
	The intuitive meaning of \(\AxstmtP \Delta {\bar P} s\) is:
	if \(\bar P\,d\,k\,m\) and
	\(\cstepRtc {\State k {\Stmt s {d}} {m}} {\State k {\Stmt s {d'}} {m'}}
		\), then \(\bar P\,d'\,k\,m'\).
\item We have to enforce the reduction
	\(\cstepRtc {\State k {\Stmt s {d}} {m}} {\State k {\Stmt s {d'}} {m'}}\)
	to remain below \(k\) as it could otherwise take too many items off the
	context.	
\item Our language has various kinds of undefined behavior (\eg{}
	%constructs	on which it can crash as for example
	invalid pointer dereferences).
	We therefore also want \(\AxstmtP \Delta {\bar P} s\) to guarantee that \(s\)
	does not	exhibit such undefined behaviors.
	Hence, \(\AxstmtP \Delta {\bar P} s\) should at least guarantee that if
	\(\bar P\,d\,k\,m\) and	\(\cstepBelowRtc k {\State k {\Stmt s {d}} {m}} S\),
	then \(S\) is either:
	\begin{itemize}
	\item reducible (no undefined behavior has occurred); or:
	\item of the shape \(\State k {\Stmt s {d'}} {m'}\) with
		\(\bar P\,d'\,k\,m'\) (execution is finished).
	\end{itemize}	
\item The program should satisfy a form of memory safety so as to make
	the frame rule derivable.
	Hence, if before execution the memory can be extended with a disjoint part,
	that part should not be modified during the execution.
%	can be split into two disjoint parts
%	\(m\) and \(m_f\) such that the precondition holds for \(m\), then \(m_f\)
%	should not be modified during the execution.
\item We take a step indexed approach in order to relate the assertions of
	functions in \(\Delta\) to the statement \(s\).
\end{enumerate}

%To account for the previously remarked properties, we will generalize the
%notion of validity, and use it to define the triple
%\(\AxstmtP \Delta {\bar P} s\).

Together this leads to the following definitions:

\begin{definition}
Given a predicate \(\bar P\) over stacks, focuses and memories, specifying valid
ending states, the relation \mbox{\(\Ax {\bar P} n \phi k m\)} is defined as: for
each reduction
\(
\cstepBelowBounded k n {\State k \phi {m \munion m_f}}
	{\State {k'} {\phi'} {m'}}
\),
we have that \(m'\) is of the shape \(m' = m'' \munion m_f\) for some memory
\(m''\), and either:
\begin{enumerate}
\item there is a state \(S\) such that
	\(\cstepBelow k {\State {k'} {\phi'} {m'}} S\); or:
\item \(k' = k\) and \(\bar P\,k'\,\phi'\,m''\).
\end{enumerate}
\end{definition}

\begin{definition}
\emph{Validity of the environment} \(\Delta\), notation
\(\Axfuns n \Delta\), is defined as:
if \(\mlookup f \Delta = \Some (\forall \vec y \wsdot\forall\vec v \wsdot
\axFun {P\,\vec y\,\vec v} {Q\,\vec y\,\vec v})\) and
\(P\,\vec y\,\vec v\,k\,m\), then \(\Ax {Q'} n {\Call f {\vec v}} k m\), where 
\(
Q' \defined \lambda \rho\;\phi\;m' \wsdot
	(\phi = \Return)  \land  Q\,\vec y\,\vec v\,\rho\;m'.
\)
\end{definition}

\begin{definition}
\emph{Validity of a statement \(s\)}, notation \(\AxstmtP \Delta {\bar P} s\),
is defined as:
if \(\Axfuns n \Delta\), \(\downwards d s\), and \(\bar P\,d\,k\,m\), then
\(\Ax {Q'} n {\Stmt d s} k m\), where
\[
Q' \defined \lambda \rho\,\phi\,m' \wsdot \exists d'\,s'\wsdot
	\phi = \Stmt {d'} {s'} \ \land\
	\upwards {d'} {s'} \ \land\ \bar P\,d'\,\rho\;m'
\]
The predicate \(\downwards d {s'}\) holds iff \(d = \DDown\), or
\(d = \DJump l\) with \(l \in \labels {s'}\).
%\downwards \DDown s \defined \True, \
%\downwards \DUp s \defined \False, \
%\downwards {(\DJump l)} s \defined l \in \labels s, \
%\downwards \DTop s \defined \False
%\end{gather*}
\end{definition}

\begin{proposition}[Soundness]
\(\axstmtP \Delta {\bar P} s\) implies \(\AxstmtP \Delta {\bar P} s\).
\end{proposition}

This proposition is proven by induction on the derivation of
\(\axstmtP \Delta {\bar P} s\).
Thus, for each rule of the axiomatic semantics, we have to show that it holds in
the model.
The rules for the skip, return, assignment, goto and function call statements are proven
by chasing all possible reduction paths.
In the case of the assignment statement, we need weakening of expression
evaluation (Lemma~\ref{lemma:expr_eval_weaken_mem}).

All structural rules are proven by induction on the length of the reduction.
These proofs involve chasing all possible reduction paths.
We refer to the \Coq{} formalization for the proofs of these rules.

\begin{comment}
The main problem now is that \(\Ax {\bar P} n \phi k m\) only gives
information for \(k\)-restricted reductions starting in the same context \(k\),
whereas in for example the proof of the \(\SLabel l s\) rule, we have a
reduction \(\cstepBelowBounded k n {\State {\cons {(\CLabel l)} k} {\Stmt d s}
  {m \munion m_f}} {\State {k'} {\phi'} {m'}}\).
We use the following lemma to handle these situations.

\begin{lemma}
If \(\Ax {\bar P} n \phi {\app l k} m\) and
\(
\cstepBelowBounded k n {\State {\app l k} \phi {m \munion m_f}}
	{\State {k'} {\phi'} {m'}},
\)
then one of the following properties holds:
\begin{enumerate}
\item \(m'\) is of the shape \(m' = m'' \munion m_f\) for some \(m''\), and
	there is a state \(S\) such that
	\(\cstepBelow {\app l k} {\State {k'} {\phi'} {m'}} S\), or,
\item \(\cstepBelowBounded {\app l k} n {\State {\app l k} \phi {m \munion m_f}}
	{\State {\app l k} {\phi''} {m'' \munion m_f}}
	\cstepBelowBoundedSym k n \State {k'} {\phi'} {m'}\) for some focus \(\phi''\)
	and memory \(m''\) with \(\bar P\,(\app l k)\,\phi''\,m''\).
\end{enumerate}
\end{lemma}

\begin{proof}
We use Lemma~\ref{lemma:cstep_subctx_cut} to distinguish the following cases:
\begin{itemize}
\item We have \(\cstepBelowBounded {\app l k} n {\State {\app l k} \phi
		{m \munion m_f}} {\State {k'} {\phi'} {m'}}\).
	Now we use the assumption \(\Ax {\bar P} n \phi {\app l k} m\) to
	obtain that	\(\State {k'} {\phi'} {m'}\) is either reducible, or that
	\(k' = \app l k\) and \(\bar P\,k'\,\phi'\,m''\).
	Therefore, the required property (1), respectively (2), holds.
\item \(\cstepBelowBounded {\app l k} n {\State {\app l k} \phi {m \munion m_f}}
	{\State {\app l k} {\phi''} {m'''}}
	\cstepBelowBoundedSym k n \State {k'} {\phi'} {m'}\) for some focus \(\phi''\)
	and memory \(m'''\).
	By \(\Ax {\bar P} n \phi {\app l k} m\) we have that
	\(m''' = m'' \munion m_f\) and \(\bar P\,(\app l k)\,\phi''\,m''\).
	In this case we have that the required property (2) holds. \qed
\end{itemize}
\end{proof}

\begin{lemma}
If we have \(\Axstmt \Delta J R P {s_1} {P'}\) and
\(\Axstmt \Delta J R {P'} {s_2} Q\), then
\(\Axstmt \Delta J R P {\SComp {s_1} {s_2}} Q\).
\end{lemma}

\begin{proof}
The reduction paths for the composition statement are as follows.
\begin{center}
\begin{tikzpicture}
\draw (3,0) node (s12) {\(\SCompSym\)};
\draw (0,-2) node (s1) {\(s_1\)}
	edge[-,dashed] (s12)
	edge[thick,<-,bend left=30] node {\(\DDown\)
		or \(\DJump l\) with \(l \in \gotos {s_1}\)} (s12)
	edge[thick,->,bend right=30] node {\(\DTop\)} (s12);
\draw (6,-2) node (s2) {\(s_2\)}
	edge[-,dashed] (s12)
	edge[thick,<-,bend left=30] (s12)
	edge[thick,->,bend right=30] node {\(\DDown\)
		or \(\DJump l\) with \(l \notin \gotos {\SComp {s_1} {s_2}}\)} (s12);
\draw (3,-3) node {}
	edge[thick,<-,bend left=20] (s1)
	edge[thick,->,bend right=20] (s2);
\end{tikzpicture}
\end{center}
\end{proof}
\end{comment}

\section{Formalization in \Coq}
\label{section:coq}

All proofs in this paper have been fully formalized using the \Coq{} proof
assistant. % ~\cite{coq:12}.
The formalization has been of great help in developing and debugging the
semantics.
We used \Coq's notation mechanism combined with Unicode symbols and type
classes for overloading to let the \Coq{} development correspond as well as
possible to the definitions in this paper.

There are some small differences between the \Coq{} development and this paper.
Firstly, we omitted while statements and functions with return values
here, whereas they are included in the \Coq{} development.
Secondly, in this paper, we presented the axiomatic semantics as an inference
system, and then showed that it has a model.
Since we did not consider completeness, in \Coq{} we directly proved all rules
to be derivable with respect to the model.

We used type classes to provide abstract interfaces for commonly used
structures like finite sets and finite functions, which allowed us to develop
theory and implement automation in an abstract way.
Our approach is greatly inspired by the \emph{unbundled} approach of Spitters
and van der Weegen~\cite{spi:wee:11}.
However, whereas their work heavily relies on \emph{setoids} (types equipped
with an equivalence relation), we tried to avoid that, and used Leibniz equality
wherever possible.
In particular, our interface for finite functions requires \emph{extensionality}
with respect to Leibniz equality.
That means \(m_1 = m_2 \iff \forall x \wsdot m_1\,x = m_2\,x\).

Intensional type theories like \Coq{} do not satisfy extensionality.
However, finite functions indexed by a countable type can still be implemented
in a way that extensionality holds.
For the memory we used finite functions indexed by binary
natural numbers implemented as radix-2 search trees.
This implementation is based on the implementation in
\CompCert~\cite{ler:12}. But whereas \CompCert's implementation does not satisfy
canonicity, and thus allows different trees for the same finite function, we
have equipped our trees with a proof of canonicity.
This way, equality on these finite functions as trees becomes extensional.
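The structure of such radix-2 search trees can be sketched in \C{} (an
illustrative sketch of the data structure only; the actual implementation is
in \Coq{} and additionally carries the canonicity proof):

```c
#include <stdlib.h>

/* Sketch of a radix-2 search tree: the value for a positive key k is stored
   at the position addressed by the bits of k, read from the least
   significant bit (0 = left subtree, 1 = right subtree), stopping at the
   leading 1 bit. */
typedef struct node {
    int has_val;
    int val;
    struct node *l, *r;
} node;

static node *empty(void) { return calloc(1, sizeof(node)); }

static void insert(node *t, unsigned key, int val) {
    while (key > 1) {
        node **next = (key & 1) ? &t->r : &t->l;
        if (!*next) *next = empty();
        t = *next;
        key >>= 1;
    }
    t->has_val = 1;
    t->val = val;
}

/* Returns 1 and stores the value in *out if key is present, 0 otherwise. */
static int lookup(const node *t, unsigned key, int *out) {
    while (t && key > 1) {
        t = (key & 1) ? t->r : t->l;
        key >>= 1;
    }
    if (!t || !t->has_val) return 0;
    *out = t->val;
    return 1;
}
```

The canonicity issue shows up here as well: an interior node with
\lstinline|has_val = 0| and two \lstinline|NULL| children represents the same
finite function as no node at all, so distinct trees can denote the same map;
the \Coq{} development rules such trees out by a proof carried in the
structure.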

Extensional equality on finite functions is particularly useful for dealing with
assertions, which are defined as predicates on the stack and memory
(Definition~\ref{definition:assertions}).
Due to extensionality, we did not have to equip assertions with a proof of
well-definedness with respect to extensional equality on memories.

Although the semantics described in this paper is not extremely large, it is
still quite cumbersome to handle in a proof assistant without automation.
In particular, the operational semantics is defined as an inductive type
consisting of 32 constructors.
We have therefore automated many steps of the proofs.
For example, we implemented the tactic \lstinline|do_cstep| to automatically
perform reduction steps and to solve the required side-conditions, and the
tactic \lstinline|inv_cstep| to perform case analyses on reductions and to
automatically discharge impossible cases.
Ongoing experiments show that this approach is successful, as the semantics
can be extended easily without having to redo many proofs.

Our \Coq{} code, available at \url{http://robbertkrebbers.nl/research/ch2o/},
is about 3500 lines of code including comments and white space.
Apart from that, the library on general purpose theory (finite sets, finite
functions, lists, \etc) is about 7000 lines, and the \lstinline|gcd| example
is about 250 lines.

\section{Conclusions and further research}
\label{section:conclusions}

The further-reaching goal of this work is to develop an operational semantics
for a large part of the \Celeven{} programming language~\cite{iso:12} as part
of the \Formalin{} project~\cite{kre:wie:11}.
In order to get there, support for non-local control flow is a
necessary step.
The operational semantics in this paper extends easily to most other forms of
non-local control in \C: the break and continue statements, and non-structured
switch statements (\eg{} Duff's device).
To support these, we just have to add an additional \emph{direction} and its
corresponding reduction rules.
% Also, looping constructs like while and for can easily be
% supported (while is supported in our \Coq{} code).

In this paper we have also defined an axiomatic semantics.
% for non-local control.
The purpose of this axiomatic semantics is twofold.
Firstly, it gives us more confidence in the correctness and usability
of our operational semantics.
% Indeed, it has been of great of help for debugging the operational semantics.
Secondly, in order to reason about actual \C{} programs, non-local control
flow and pointers to block scope variables have to be supported by the axiomatic
semantics.

Unfortunately, the current version of our axiomatic semantics is somewhat
cumbersome to use for actual program verification.
The foremost reason is that our way of handling local variables introduces
some overhead.
In traditional separation logic~\cite{hea:rey:yan:01}, there is a strict
separation between local variables and allocated storage: the values of local
variables are stored directly on the stack, whereas the memory is only used for
allocated storage.
As a result, the separating conjunction does not have to deal with local
variables, and many assertions can be written down more concisely.
For example, even though we do not use pointers to the local variables of the
\lstinline|swap| function (Figure~\ref{figure:euclid}), we still have to deal
with two levels of indirection.

It seems not too hard to make allocation of local variables in the memory
optional, so that the memory is used only for variables to which a pointer
is actually taken.
Ordinary variables then correspond nicely to those with the \texttt{register}
keyword in \C{}.
Alternatively, the work of Parkinson \etal{}~\cite{par:bor:cal:06} on
variables as resources may be useful.

Another requirement to conveniently use an axiomatic semantics for program
verification is strong automation.
For the \Coq{} proof assistant, work in this direction has been done by, for
example, Appel~\cite{app:06} and Chlipala~\cite{chi:11}.
As our main purpose is to develop an operational semantics for a large part
of \Celeven{}, we consider automation a problem for future work.

In order to get closer to a semantics for \Celeven{} we are currently
investigating the following additional features of the \Celeven{} standard.
\begin{itemize}
\item Expressions with side effects and \emph{sequence points}.
\item The \C{} type system including structs, unions, arrays and integer types.
% and an abstract	way of handling primitive data types (integers, floats).
\item The non-aliasing restrictions (\emph{effective types} in particular).
\end{itemize}
\noindent
We intend to support these features in both our operational and axiomatic
semantics.
Ongoing work shows that our current operational semantics can easily be extended
with non-deterministic expressions using an approach similar to that of
Norrish~\cite{nor:98} and Leroy~\cite{ler:12}.
As non-determinism in expressions is closely related to concurrency, we use
separation logic to obtain a Hoare logic for expressions.

In this paper we have not considered completeness of the axiomatic semantics as
it is not essential for program verification.
Also, our future extension for non-deterministic expressions with side-effects
will likely be incomplete.

Another direction for future research is to relate our semantics to the
\CompCert{} semantics~\cite{ler:12} (by eliminating block scope variables).
That way, we can link our semantics to actual non-local jumps in assembly.

\begin{comment}
For example, the rule for the binary operator is as follows.

\begin{prooftree}
\AXC{\(\axSimple {P_1} {e_1} {Q_1}\)}
\AXC{\(\axSimple {P_2} {e_2} {Q_2}\)}
\BIC{\(\axSimple {P_1 \axSep P_2} {\EBinOp {e_1} {e_2}}
  {\lambda v \wsdot \exists v_1\,v_2 \wsdot
    \EBinOp {\EVal {v_1}} {\EVal {v_2}} \Downarrow v
    \wedge (Q_1\,v_1 \axSep Q_2\,v_2)}\)}
\end{prooftree}
\end{comment}

\paragraph{Acknowledgments.}
We thank Erik Poll for bringing up the idea of an axiomatic semantics for
gotos, and thank Herman Geuvers and the anonymous referees for providing
several helpful suggestions.
This work is financed by the Netherlands Organisation for Scientific 
Research (NWO). 

\bibliographystyle{abbrv}
\bibliography{../CH2O}
\end{document}
