Verus overview
Verus is a tool for verifying the correctness of code written in Rust. The main goal is to verify full functional correctness of low-level systems code, building on ideas from existing verification frameworks like Dafny, Boogie, F*, VCC, Prusti, Creusot, Aeneas, Cogent, Coq, and Isabelle/HOL. Verification is static: Verus adds no run-time checks, but instead uses computer-aided theorem proving to statically verify that executable Rust code will always satisfy some user-provided specifications for all possible executions of the code.
In more detail, Verus aims to:
- provide a pure mathematical language for expressing specifications (like Dafny, Creusot, F*, Coq, Isabelle/HOL)
- provide a mathematical language for expressing proofs (like Dafny, F*, Coq, Isabelle/HOL) based exclusively on classical logic (like Dafny)
- provide a low-level, imperative language for expressing executable code (like VCC), based on Rust (like Prusti, Creusot, and Aeneas)
- generate small, simple verification conditions that an SMT solver like Z3 can solve efficiently, based on the following principles:
- keep the mathematical specification language close to the SMT solver’s mathematical language (like Boogie)
- use lightweight linear type checking, rather than SMT solving, to reason about memory and aliasing (like Cogent, Creusot, Aeneas, and linear Dafny)
We believe that Rust is a good language for achieving these goals. Rust combines low-level data manipulation, including manual memory management, with an advanced, high-level, safe type system. The type system includes features commonly found in higher-level verification languages, including algebraic datatypes (with pattern matching), type classes, and first-class functions. This makes it easy to express specifications and proofs in a natural way. More importantly, Rust’s type system includes sophisticated support for linear types and borrowing, which takes care of much of the reasoning about memory and aliasing. As a result, the remaining reasoning can ignore most memory and aliasing issues, and treat the Rust code as if it were code written in a purely functional language, which makes verification easier.
This guide
This guide assumes that you’re already somewhat familiar with the basics of Rust programming. (If you’re not, we recommend spending a couple hours on the Learn Rust page.) Familiarity with Rust is useful for Verus, because Verus builds on Rust’s syntax and Rust’s type system to express specifications, proofs, and executable code. In fact, there is no separate language for specifications and proofs; instead, specifications and proofs are written in Rust syntax and type-checked with Rust’s type checker. So if you already know Rust, you’ll have an easier time getting started with Verus.
Nevertheless, verifying the correctness of Rust code requires concepts and techniques beyond just writing ordinary executable Rust code. For example, Verus extends Rust’s syntax (via macros) with new concepts for writing specifications and proofs, such as `forall`, `exists`, `requires`, and `ensures`, as well as introducing new types, like the mathematical integer types `int` and `nat`.
It can be challenging to prove that a Rust function satisfies its postconditions (its `ensures` clauses) or that a call to a function satisfies the function’s preconditions (its `requires` clauses).
Therefore, this guide’s tutorial will walk you through the various concepts and techniques,
starting with relatively simple concepts (basic proofs about integers),
moving on to more moderately difficult challenges (inductive proofs about data structures),
and then on to more advanced topics such as proofs about arrays using `forall` and `exists`, and proofs about concurrent code.
All of these proofs are aided by an automated theorem prover
(specifically, Z3,
a satisfiability-modulo-theories solver, or “SMT solver” for short).
The SMT solver will often be able to prove simple properties,
such as basic properties about booleans or integer arithmetic,
with no additional help from the programmer.
However, more complex proofs often require effort from both the programmer and the SMT solver.
Therefore, this guide will also help you understand the strengths and limitations of SMT solving,
and give advice on how to fill in the parts of proofs that SMT solvers cannot handle automatically.
(For example, SMT solvers usually cannot automatically perform proofs by induction, but you can write a proof by induction simply by writing a recursive Rust function whose `ensures` clause expresses the induction hypothesis.)
Getting Started
To get started with Verus, use `git clone` to fetch the Verus source code from the Verus GitHub page, and then follow the directions on that page to build Verus.
Let’s try running Verus on the following sample Rust program, found at getting_started.rs:
use vstd::prelude::*;
verus! {
spec fn min(x: int, y: int) -> int {
if x <= y {
x
} else {
y
}
}
fn main() {
assert(min(10, 20) == 10);
assert(min(-10, -20) == -20);
assert(forall|i: int, j: int| min(i, j) <= i && min(i, j) <= j);
assert(forall|i: int, j: int| min(i, j) == i || min(i, j) == j);
assert(forall|i: int, j: int| min(i, j) == min(j, i));
}
} // verus!
To run Verus on this code, change to the source
directory and type the following in Unix:
./target-verus/release/verus rust_verify/example/guide/getting_started.rs
or the following in Windows:
.\target-verus\release\verus.exe rust_verify\example\guide\getting_started.rs
You should see the following output:
note: verifying root module
verification results:: 1 verified, 0 errors
This indicates that Verus successfully verified 1 function (the `main` function). If you want, you can try editing the `rust_verify/example/guide/getting_started.rs` file to see a verification failure. For example, if you add the following line to `main`:
assert(forall|i: int, j: int| min(i, j) == min(i, i));
you will see an error message:
note: verifying root module
error: assertion failed
--> example/guide/getting_started.rs:19:12
|
19 | assert(forall|i: int, j: int| min(i, j) == min(i, i));
| ^^^^^^ assertion failed
error: aborting due to previous error
verification results:: 0 verified, 1 errors
Using Verus in Rust files
Verus uses a macro named `verus!` to extend Rust’s syntax with verification-related features such as preconditions, postconditions, assertions, `forall`, `exists`, etc.
Therefore, each file in a crate will typically contain the following declarations:
use vstd::prelude::*;
verus! {
In the remainder of this guide, we will omit these declarations from the examples to avoid clutter.
However, remember that any example code should be placed inside the `verus! { ... }` block, and that the file should `use vstd::prelude::*;`.
Compilation
The instructions above verify a Rust file without compiling it.
To both verify and compile a Rust file, add the `--compile` command-line option.
For example:
./target-verus/release/verus --compile rust_verify/example/guide/getting_started.rs
This will generate an executable for `getting_started.rs`. (However, in this example, the executable won’t do anything interesting, because the `main` function contains no executable code; it contains only statically-checked assertions, which are erased before compilation.)
Basic specifications
Verus programs contain specifications to describe the intended behavior of the code. These specifications include preconditions, postconditions, assertions, and loop invariants. Specifications are one form of ghost code — code that appears in the Rust source code for verification’s sake, but does not appear in the compiled executable.
This chapter will walk through some basic examples of preconditions, postconditions, and assertions, showing the syntax for writing these specifications and discussing integer arithmetic and equality in specifications.
Preconditions (requires clauses)
Let’s start with a small example.
Suppose we want to verify a function `octuple` that multiplies a number by 8:
fn octuple(x1: i8) -> i8 {
let x2 = x1 + x1;
let x4 = x2 + x2;
x4 + x4
}
If we ask Verus to verify this code, Verus immediately reports errors about `octuple`:
error: possible arithmetic underflow/overflow
|
| let x2 = x1 + x1;
| ^^^^^^^
Here, Verus cannot prove that the result of `x1 + x1` fits in an 8-bit `i8` value, which allows values in the range `-128`…`127`. If `x1` were `100`, for example, `x1 + x1` would be `200`, which is larger than `127`.
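You can see the same bound at play in ordinary Rust, outside Verus: `checked_add` reports at run-time the overflow that Verus rules out statically. This is a plain-Rust sketch for illustration, not Verus code.

```rust
fn main() {
    // 100 + 100 = 200, which exceeds i8::MAX (127), so checked_add reports overflow.
    let x: i8 = 100;
    assert_eq!(x.checked_add(x), None);

    // 60 + 60 = 120 still fits in an i8, so the addition succeeds.
    let y: i8 = 60;
    assert_eq!(y.checked_add(y), Some(120));
}
```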
We need to make sure that when `octuple` is called, the argument `x1` is not too large. We can do this by adding preconditions (also known as “`requires` clauses”) to `octuple` specifying which values for `x1` are allowed. In Verus, preconditions are written with a `requires` followed by zero or more boolean expressions separated by commas:
fn octuple(x1: i8) -> i8
requires
-64 <= x1,
x1 < 64,
{
let x2 = x1 + x1;
let x4 = x2 + x2;
x4 + x4
}
The two preconditions above say that `x1` must be at least `-64` and less than `64`, so that `x1 + x1` will fit in the range `-128`…`127`. This fixes the error about `x1 + x1`, but we still get an error about `x2 + x2`:
error: possible arithmetic underflow/overflow
|
| let x4 = x2 + x2;
| ^^^^^^^
If we want `x1 + x1`, `x2 + x2`, and `x4 + x4` to all succeed, we need a tighter bound on `x1`:
fn octuple(x1: i8) -> i8
requires
-16 <= x1,
x1 < 16,
{
let x2 = x1 + x1;
let x4 = x2 + x2;
x4 + x4
}
This time, verification is successful.
Now suppose we try to call `octuple` with a value that does not satisfy `octuple`’s precondition:
fn main() {
let n = octuple(20);
}
For this call, Verus reports an error, since `20` is not less than `16`:
error: precondition not satisfied
|
| x1 < 16,
| ------- failed precondition
...
| let n = octuple(20);
| ^^^^^^^^^^^
If we pass `10` instead of `20`, verification succeeds:
fn main() {
let n = octuple(10);
}
Postconditions (ensures clauses)
Suppose we want to verify properties about the value returned from `octuple`. For example, we might want to assert that the value returned from `octuple` is 8 times as large as the argument passed to `octuple`. Let’s try putting an assertion in `main` that the result of calling `octuple(10)` is `80`:
fn main() {
let n = octuple(10);
assert(n == 80);
}
Although `octuple(10)` really does return `80`, Verus nevertheless reports an error:
error: assertion failed
|
| assert(n == 80);
| ^^^^^^^ assertion failed
The error occurs because, even though `octuple` multiplies its argument by `8`, `octuple` doesn’t publicize this fact to the other functions in the program. To do this, we can add postconditions (`ensures` clauses) to `octuple` specifying some properties of `octuple`’s return value:
fn octuple(x1: i8) -> (x8: i8)
requires
-16 <= x1,
x1 < 16,
ensures
x8 == 8 * x1,
{
let x2 = x1 + x1;
let x4 = x2 + x2;
x4 + x4
}
To write a property about the return value, we need to give a name to the return value. The Verus syntax for this is `-> (name: return_type)`. In the example above, saying `-> (x8: i8)` allows the postcondition `x8 == 8 * x1` to use the name `x8` for `octuple`’s return value.
Preconditions and postconditions establish a modular verification protocol between functions.
When `main` calls `octuple`, Verus checks that the arguments in the call satisfy `octuple`’s preconditions. When Verus verifies the body of the `octuple` function, it can assume that the preconditions are satisfied, without having to know anything about the exact arguments passed in by `main`. Likewise, when Verus verifies the body of the `main` function, it can assume that `octuple` satisfies its postconditions, without having to know anything about the body of `octuple`.
In this way, Verus can verify each function independently.
This modular verification approach breaks verification into small, manageable pieces,
which makes verification more efficient than if Verus tried to verify
all of a program’s functions together simultaneously.
Nevertheless, writing preconditions and postconditions requires significant programmer effort —
if you want to verify a large program with a lot of functions,
you’ll probably spend substantial time writing preconditions and postconditions for the functions.
assert and assume
While `requires` and `ensures` connect functions together, `assert` makes a local, private request to the SMT solver to prove a certain fact. (Note: `assert(...)` should not be confused with the Rust `assert!(...)` macro: the former is statically checked using the SMT solver, while the latter is checked at run-time.)
`assert` has an evil twin named `assume`, which asks the SMT solver to simply accept some boolean expression as a fact without proof. While `assert` is harmless and won’t cause any unsoundness in a proof, `assume` can easily enable a “proof” of a fact that isn’t true. In fact, by writing `assume(false)`, you can prove anything you want:
assume(false);
assert(2 + 2 == 5); // succeeds
Verus programmers often use `assert` and `assume` to help develop and debug proofs. They may add temporary `assert`s to determine which facts the SMT solver can prove and which it can’t, and they may add temporary `assume`s to see which additional assumptions are necessary for the SMT solver to complete a proof, or as a placeholder for parts of the proof that haven’t yet been written. As the proof evolves, the programmer replaces `assume`s with `assert`s, and may eventually remove the `assert`s. A complete proof may contain `assert`s, but should not contain any `assume`s.
(In some situations, `assert` can help the SMT solver complete a proof, by giving the SMT solver hints about how to manipulate `forall` and `exists` expressions. There are also special forms of `assert`, such as `assert(...) by(bit_vector)`, to help prove properties about bit vectors, nonlinear integer arithmetic, `forall` expressions, etc.)
Executable code and ghost code
Let’s put everything from this section together into a final version of our example program:
use vstd::prelude::*;
verus! {
#[verifier::external_body]
fn print_two_digit_number(i: i8)
requires
-99 <= i < 100,
{
println!("The answer is {}", i);
}
fn octuple(x1: i8) -> (x8: i8)
requires
-16 <= x1 < 16,
ensures
x8 == 8 * x1,
{
let x2 = x1 + x1;
let x4 = x2 + x2;
x4 + x4
}
fn main() {
let n = octuple(10);
assert(n == 80);
print_two_digit_number(n);
}
} // verus!
Here, we’ve made a few final adjustments.
First, we’ve combined the two preconditions `-16 <= x1` and `x1 < 16` into a single precondition `-16 <= x1 < 16`, since Verus lets us chain multiple inequalities together in a single expression (equivalently, we could have also written `-16 <= x1 && x1 < 16`).
Second, we’ve added a function `print_two_digit_number` to print the result of `octuple`. Unlike `main` and `octuple`, we ask Verus not to verify `print_two_digit_number`. We do this by marking it `#[verifier::external_body]`, so that Verus pays attention to the function’s preconditions and postconditions but ignores the function’s body. This is common in projects using Verus: you may want to verify some of the code (perhaps the program’s core algorithms), but leave other aspects, such as input-output operations, unverified. More generally, since verifying all the software in the world is still infeasible, there will be some boundary between verified code and unverified code, and `#[verifier::external_body]` can be used to mark this boundary.
We can now compile the program above using the `--compile` option to Verus:
./target-verus/release/verus --compile rust_verify/example/guide/requires_ensures.rs
This will produce an executable that prints a message when run:
The answer is 80
Note that the generated executable does not contain the `requires`, `ensures`, and `assert` code, since these are needed only during static verification, not during run-time execution. We refer to `requires`, `ensures`, `assert`, and `assume` as ghost code, in contrast to the executable code that actually gets compiled. Verus erases all ghost code before compilation so that it imposes no run-time overhead.
Expressions and operators for specifications
Verus extends Rust’s syntax with additional operators and expressions useful for writing specifications. For example:
forall|i: int, j: int| 0 <= i <= j < len ==> f(i, j)
This snippet illustrates:
- the `forall` quantifier, which we will cover later
- chained operators
- implication operators

Here, we’ll discuss the last two, along with Verus notation for conjunction, disjunction, and field access.
Chained inequalities
Specifications can chain together multiple `<=`, `<`, `>=`, and `>` operations. For example, `0 <= i <= j < len` has the same meaning as `0 <= i && i <= j && j < len`.
Logical implication
To make specifications more readable, Verus supports an implication operator `==>`. The expression `a ==> b` (pronounced “`a` implies `b`”) is logically equivalent to `!a || b`. As an example, the expression
forall|i: int, j: int| 0 <= i <= j < len ==> f(i, j)
means that for every pair `i` and `j` such that `0 <= i <= j < len`, `f(i, j)` is true.
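Since `a ==> b` is just `!a || b`, its truth table is easy to check in plain Rust (the `implies` helper here is hypothetical, not part of Verus or its libraries):

```rust
// a ==> b is logically !a || b; a small helper makes the truth table explicit.
fn implies(a: bool, b: bool) -> bool {
    !a || b
}

fn main() {
    assert!(implies(false, false)); // false implies anything
    assert!(implies(false, true));
    assert!(!implies(true, false)); // the only false case
    assert!(implies(true, true));
}
```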
Note that `==>` has lower precedence than most other boolean operators. For example, `a ==> b && c` means `a ==> (b && c)`. Verus also supports two-way implication for booleans (`<==>`) with even lower precedence, so that `a <==> b && c` is equivalent to `a == (b && c)`. See the reference for a full description of precedence in Verus.
Conjunction and disjunction
Because `&&`, `||`, and `==>` are so common in Verus specifications, it is often desirable to have low-precedence versions of `&&` and `||`. Verus therefore supports “triple-and” (`&&&`) and “triple-or” (`|||`), which are equivalent to `&&` and `||` except for their precedence. Implication `==>` and equivalence `<==>` bind more tightly than either `&&&` or `|||`.
`&&&` and `|||` are also convenient for the “bulleted list” form:
&&& a ==> b
&&& c
&&& d <==> e && f
This has the same meaning as `(a ==> b) && c && (d <==> (e && f))`.
Accessing fields of a struct or enum
Verus has `->`, `is`, and `matches` syntax for accessing fields of `struct`s and matching variants of `enum`s.
Integer types
Rust supports various fixed-bit-width integer types:
- `i8`, `i16`, `i32`, `i64`, `i128`, `isize`
- `u8`, `u16`, `u32`, `u64`, `u128`, `usize`
To these, Verus adds two more integer types to represent arbitrarily large integers in specifications:
- int
- nat
The type `int` is the most fundamental type for reasoning about integer arithmetic in Verus. It represents all mathematical integers, both positive and negative. The SMT solver contains direct support for reasoning about values of type `int`.
Internally, Verus uses `int` to represent the other integer types, adding mathematical constraints to limit the range of the integers. For example, a value of the type `nat` of natural numbers is a mathematical integer constrained to be greater than or equal to `0`.
Rust’s fixed-bit-width integer types have both a lower and upper bound; a `u8` value is an integer constrained to be greater than or equal to `0` and less than `256`:
fn test_u8(u: u8) {
assert(0 <= u < 256);
}
The bounds of `usize` and `isize` are platform dependent. By default, Verus assumes that these types may be either 32 bits or 64 bits wide, but this assumption may be configured. Verus recognizes the constants `usize::BITS`, `usize::MAX`, `isize::MAX`, and `isize::MIN`, which are useful for reasoning symbolically about the `usize` integer range.
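These symbolic constants correspond to the ordinary Rust constants of the same names; a plain-Rust sketch of the relationships between them:

```rust
fn main() {
    // usize covers [0, 2^BITS - 1]; on a 64-bit target, BITS is 64.
    assert_eq!(usize::MAX as u128, 2u128.pow(usize::BITS) - 1);
    // isize covers [-(2^(BITS-1)), 2^(BITS-1) - 1].
    assert_eq!(isize::MAX as i128, 2i128.pow(usize::BITS - 1) - 1);
    assert_eq!(isize::MIN as i128, -(2i128.pow(usize::BITS - 1)));
}
```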
Using integer types in specifications
Since there are 14 different integer types (counting `int`, `nat`, `u8`…`usize`, and `i8`…`isize`), it’s not always obvious which type to use when writing a specification. Our advice is to be as general as possible by default:
- Use `int` by default, since this is the most general type and is supported most efficiently by the SMT solver.
  - Example: the Verus sequence library uses `int` for most operations, such as indexing into a sequence.
  - Note: as discussed below, most arithmetic operations in specifications produce values of type `int`, so it is usually most convenient to write specifications in terms of `int`.
- Use `nat` for return values and datatype fields where the 0 lower bound is likely to provide useful information, such as lengths of sequences.
  - Example: the Verus `Seq::len()` function returns a `nat` to represent the length of a sequence.
  - The type `nat` is also handy for proving that recursive definitions terminate; you might want to define a recursive `factorial` function to take a parameter of type `nat`, if you don’t want to provide a definition of `factorial` for negative integers.
- Use fixed-width integer types for fixed-width values such as bytes.
  - Example: the bytes of a network packet can be represented with type `Seq<u8>`, an arbitrary-length sequence of 8-bit values.

Note that `int` and `nat` are usable only in ghost code; they cannot be compiled to executable code. For example, the following will not work:
fn main() {
let i: int = 5; // FAILS: executable variable `i` cannot have type `int`, which is ghost-only
}
Integer constants
As in ordinary Rust, integer constants in Verus can include their type as a suffix (e.g. `7u8` or `7u32` or `7int`) to precisely specify the type of the constant:
fn test_consts() {
let u: u8 = 1u8;
assert({
let i: int = 2int;
let n: nat = 3nat;
0int <= u < i < n < 4int
});
}
Usually, but not always, Verus and Rust will be able to infer types for integer constants, so that you can omit the suffixes unless the Rust type checker complains about not being able to infer the type:
fn test_consts_infer() {
let u: u8 = 1;
assert({
let i: int = 2;
let n: nat = 3;
0 <= u < i < n < 4
});
}
Note that the values `0`, `u`, `i`, `n`, and `4` in the expression `0 <= u < i < n < 4` are allowed to all have different types: you can use `<=`, `<`, `>=`, `>`, `==`, and `!=` to compare values of different integer types inside ghost code (e.g. comparing a `u8` to an `int` in `u < i`).
Constants with the suffix `int` and `nat` can be arbitrarily large:
fn test_consts_large() {
assert({
let i: int = 0x10000000000000000000000000000000000000000000000000000000000000000int;
let j: int = i + i;
j == 2 * i
});
}
Integer coercions using “as”
As in ordinary Rust, the `as` operator coerces one integer type to another. In ghost code, you can use `as int` or `as nat` to coerce to `int` or `nat`:
fn test_coerce() {
let u: u8 = 1;
assert({
let i: int = u as int;
let n: nat = u as nat;
u == i && u == n
});
}
You can use `as` to coerce a value `v` to a type `t` even if `v` is too small or too large to fit in `t`. However, if the value `v` is outside the bounds of type `t`, then the expression `v as t` will produce some arbitrary value of type `t`:
fn test_coerce_fail() {
let v: u16 = 257;
let u: u8 = v as u8;
assert(u == v); // FAILS, because u has type u8 and therefore cannot be equal to 257
}
This produces an error for the assertion, along with a hint that the value in the `as` coercion might have been out of range:
error: assertion failed
|
| assert(u == v); // FAILS, because u has type u8 and therefore cannot be equal to 257
| ^^^^^^ assertion failed
note: recommendation not met: value may be out of range of the target type (use `#[verifier::truncate]` on the cast to silence this warning)
|
| let u: u8 = v as u8;
| ^
See the reference for more on how Verus defines as-truncation and how to reason about it.
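In compiled Rust code, `as` between integer types truncates to the low bits, which is where the arbitrary-looking value comes from. A plain-Rust illustration:

```rust
fn main() {
    let v: u16 = 257;
    // `as` keeps the low 8 bits: 257 = 0b1_0000_0001, so the u8 keeps 0b0000_0001.
    let u: u8 = v as u8;
    assert_eq!(u, 1);
    // 300 mod 256 = 44
    assert_eq!(300u16 as u8, 44);
}
```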
Integer arithmetic
Integer arithmetic behaves a bit differently in ghost code than in executable code.
In executable code, we frequently have to reason about integer overflow, and in fact, Verus requires us to prove the absence of overflow. The following operation fails because the arithmetic might produce a result greater than 255:
fn test_sum(x: u8, y: u8) {
let sum1: u8 = x + y; // FAILS: possible overflow
}
error: possible arithmetic underflow/overflow
|
| let sum1: u8 = x + y; // FAILS: possible overflow
| ^^^^^
In ghost code, however, common arithmetic operations (`+`, `-`, `*`, `/`, `%`) never overflow or wrap. To make this possible, Verus widens the results of many operations; for example, adding two `u8` values is widened to type `int`.
fn test_sum2(x: u8, y: u8) {
assert({
let sum2: int = x + y; // in ghost code, + returns int and does not overflow
0 <= sum2 < 511
});
}
Since `+` does not overflow in ghost code, we can easily write specifications about overflow. For example, to make sure that the executable `x + y` doesn’t overflow, we simply write `requires x + y < 256`, relying on the fact that `x + y` is widened to type `int` in the `requires` clause:
fn test_sum3(x: u8, y: u8)
requires
x + y < 256, // make sure "let sum1: u8 = x + y" can't overflow
{
let sum1: u8 = x + y; // succeeds
}
Also note that the inputs need not have the same type; you can add, subtract, or multiply one integer type with another:
fn test_sum_mixed(x: u8, y: u16) {
assert(x + y >= y); // x + y has type int, so the assertion succeeds
assert(x - y <= x); // x - y has type int, so the assertion succeeds
}
In general in ghost code, Verus widens native Rust integer types to `int` for operators like `+`, `-`, and `*` that might overflow; the reference page describes the widening rules in more detail.
Here are some more tips to keep in mind:
- In ghost code, `/` and `%` compute Euclidean division and remainder, rather than Rust’s truncating division and remainder, when operating on negative left-hand sides or negative right-hand sides.
- Division-by-0 and mod-by-0 are errors in executable code and are unspecified in ghost code (see Ghost code vs. exec code for more detail).
- The named arithmetic functions, `add(x, y)`, `sub(x, y)`, and `mul(x, y)`, do not perform widening, and thus have truncating behavior, even in ghost code. Verus also recognizes some Rust functions like `wrapping_add` and `checked_add`, which may be used in either executable or ghost code.
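The first and last tips can be checked concretely in plain Rust, where `div_euclid`/`rem_euclid` compute the Euclidean forms that ghost-code `/` and `%` use, and `wrapping_add`/`checked_add` are the overflow-aware functions mentioned above (a sketch, not Verus code):

```rust
fn main() {
    // Rust's executable `/` and `%` truncate toward zero:
    assert_eq!(-7 / 2, -3);
    assert_eq!(-7 % 2, -1); // remainder takes the dividend's sign

    // Euclidean division rounds so the remainder is always non-negative,
    // matching ghost-code `/` and `%`:
    assert_eq!((-7i32).div_euclid(2), -4);
    assert_eq!((-7i32).rem_euclid(2), 1);

    // wrapping_add wraps around; checked_add reports overflow:
    assert_eq!(255u8.wrapping_add(1), 0);
    assert_eq!(255u8.checked_add(1), None);
}
```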
Equality
Equality behaves differently in ghost code than in executable code.
In executable code, Rust defines `==` to mean a call to the `eq` function of the `PartialEq` trait:
fn equal1(x: u8, y: u8) {
let eq1 = x == y; // means x.eq(y) in Rust
let eq2 = y == x; // means y.eq(x) in Rust
assert(eq1 ==> eq2); // succeeds
}
For built-in integer types like `u8`, the `x.eq(y)` function is defined as we’d expect, returning `true` if `x` and `y` hold the same integers. For user-defined types, though, `eq` could have other behaviors: it might have side effects, behave nondeterministically, or fail to fulfill its promise to implement an equivalence relation, even if the type implements the Rust `Eq` trait:
fn equal2<A: Eq>(x: A, y: A) {
let eq1 = x == y; // means x.eq(y) in Rust
let eq2 = y == x; // means y.eq(x) in Rust
assert(eq1 ==> eq2); // won't work; we can't be sure that A is an equivalence relation
}
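To see why Verus cannot assume symmetry for an arbitrary `PartialEq`, here is a contrived plain-Rust type (hypothetical, for illustration only) whose `eq` is deliberately not symmetric:

```rust
// A contrived PartialEq: it compares this value's first field
// against the *other* value's second field, so eq is not symmetric.
struct Odd(u8, u8);

impl PartialEq for Odd {
    fn eq(&self, other: &Odd) -> bool {
        self.0 == other.1
    }
}

fn main() {
    let x = Odd(3, 9);
    let y = Odd(5, 3);
    let eq1 = x == y; // x.eq(&y): 3 == 3, true
    let eq2 = y == x; // y.eq(&x): 5 == 9, false
    assert!(eq1 && !eq2); // eq1 ==> eq2 does not hold
}
```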
In ghost code, by contrast, the `==` operator is always an equivalence relation (i.e. it is reflexive, symmetric, and transitive):
fn equal3(x: u8, y: u8) {
assert({
let eq1 = x == y;
let eq2 = y == x;
eq1 ==> eq2
});
}
Verus defines `==` in ghost code to be true when:
- for two integers or booleans, the values are equal
- for two structs or enums, the types are the same and the fields are equal
- for two `&` references, two `Box` values, two `Rc` values, or two `Arc` values, the pointed-to values are the same
- for two `RefCell` values or two `Cell` values, the pointers to the interior data are equal (not the interior contents)

In addition, collection datatypes such as `Seq<T>`, `Set<T>`, and `Map<Key, Value>` have their own definitions of `==`, where two sequences, two sets, or two maps are equal if their elements are equal. As explained more in specification libraries and extensional equality, these sometimes require the “extensional equality” operator `=~=` to help prove equality between two sequences, two sets, or two maps.
Specification code, proof code, executable code
Verus classifies code into three modes: `spec`, `proof`, and `exec`, where:
- `spec` code describes properties about programs
- `proof` code proves that programs satisfy properties
- `exec` code is ordinary Rust code that can be compiled and run

Both `spec` code and `proof` code are forms of ghost code, so we can organize the three modes in a hierarchy:
- code
  - ghost code
    - `spec` code
    - `proof` code
  - `exec` code

Every function in Verus is either a `spec` function, a `proof` function, or an `exec` function:
spec fn f1(x: int) -> int {
x / 2
}
proof fn f2(x: int) -> int {
x / 2
}
// "exec" is optional, and is usually omitted
exec fn f3(x: u64) -> u64 {
x / 2
}
`exec` is the default function annotation, so it is usually omitted:
fn f3(x: u64) -> u64 { x / 2 } // exec function
The rest of this chapter will discuss these three modes in more detail. As you read, you can keep in mind the following relationships between the three modes:
| | spec code | proof code | exec code |
|---|---|---|---|
| can contain spec code, call spec functions | yes | yes | yes |
| can contain proof code, call proof functions | no | yes | yes |
| can contain exec code, call exec functions | no | no | yes |
spec functions
Let’s start with a simple `spec` function that computes the minimum of two integers:
spec fn min(x: int, y: int) -> int {
if x <= y {
x
} else {
y
}
}
fn test() {
assert(min(10, 20) == 10); // succeeds
assert(min(100, 200) == 100); // succeeds
}
Unlike `exec` functions, the bodies of `spec` functions are visible to other functions in the same module, so the `test` function can see inside the `min` function, which allows the assertions in `test` to succeed.
Across modules, the bodies of `spec` functions can be made public to other modules or kept private to the current module. The body is public if the function is marked `open`, allowing assertions about the function’s body to succeed in other modules:
mod M1 {
use builtin::*;
pub open spec fn min(x: int, y: int) -> int {
if x <= y {
x
} else {
y
}
}
}
mod M2 {
use builtin::*;
use crate::M1::*;
fn test() {
assert(min(10, 20) == 10); // succeeds
}
}
By contrast, if the function is marked `closed`, then other modules cannot see the function’s body, even if they can see the function’s declaration. However, functions within the same module can view a closed `spec fn`’s body. In other words, `pub` makes the declaration public, while `open` and `closed` make the body public or private. All `pub` `spec` functions must be marked either `open` or `closed`; Verus will complain if the function lacks this annotation.
mod M1 {
use builtin::*;
pub closed spec fn min(x: int, y: int) -> int {
if x <= y {
x
} else {
y
}
}
pub proof fn lemma_min(x: int, y: int)
ensures
min(x,y) <= x && min(x,y) <= y,
{}
}
mod M2 {
use builtin::*;
use crate::M1::*;
fn test() {
assert(min(10, 20) == min(10, 20)); // succeeds
assert(min(10, 20) == 10); // FAILS
proof {
lemma_min(10,20);
}
assert(min(10, 20) <= 10); // succeeds
}
}
In the example above with `min` being `closed`, the module `M2` can still talk about the function `min`, proving, for example, that `min(10, 20)` equals itself (because everything equals itself, regardless of what’s in its body). On the other hand, the assertion that `min(10, 20) == 10` fails, because `M2` cannot see `min`’s body and therefore doesn’t know that `min` computes the minimum of two numbers:
error: assertion failed
|
| assert(min(10, 20) == 10); // FAILS
| ^^^^^^^^^^^^^^^^^ assertion failed
After the call to `lemma_min`, the assertion that `min(10, 20) <= 10` succeeds because `lemma_min` exposes `min(x, y) <= x` as a postcondition. `lemma_min` can prove this postcondition because it can see the body of `min` despite `min` being `closed`, as `lemma_min` and `min` are in the same module.
You can think of `pub open spec` functions as defining abbreviations and `pub closed spec` functions as defining abstractions. Both can be useful, depending on the situation.
`spec` functions may be called from other `spec` functions and from specifications inside `exec` functions, such as preconditions and postconditions. For example, we can define the minimum of three numbers, `min3`, in terms of the minimum of two numbers. We can then define an `exec` function, `compute_min3`, that uses imperative code with mutable updates to compute the minimum of 3 numbers, and defines its postcondition in terms of the `spec` function `min3`:
spec fn min(x: int, y: int) -> int {
if x <= y {
x
} else {
y
}
}
spec fn min3(x: int, y: int, z: int) -> int {
min(x, min(y, z))
}
fn compute_min3(x: u64, y: u64, z: u64) -> (m: u64)
ensures
m == min3(x as int, y as int, z as int),
{
let mut m = x;
if y < m {
m = y;
}
if z < m {
m = z;
}
m
}
fn test() {
let m = compute_min3(10, 20, 30);
assert(m == 10);
}
The difference between min3 and compute_min3 highlights some differences between spec code and exec code. While exec code may use imperative language features like mutation, spec code is restricted to purely functional mathematical code. On the other hand, spec code is allowed to use int and nat, while exec code is restricted to compilable types like u64.
proof functions
Consider the pub closed spec min function from the previous section. This defined an abstract min function without revealing the internal definition of min to other modules. However, an abstract function definition is useless unless we can say something about the function. For this, we can use a proof function. In general, proof functions reveal or prove properties about specifications. In this example, we'll define a proof function named lemma_min that reveals properties about min without revealing the exact definition of min. Specifically, lemma_min reveals that min(x, y) equals either x or y and is no larger than x and y:
mod M1 {
use builtin::*;
pub closed spec fn min(x: int, y: int) -> int {
if x <= y {
x
} else {
y
}
}
pub proof fn lemma_min(x: int, y: int)
ensures
min(x, y) <= x,
min(x, y) <= y,
min(x, y) == x || min(x, y) == y,
{
}
}
mod M2 {
use builtin::*;
use crate::M1::*;
proof fn test() {
lemma_min(10, 20);
assert(min(10, 20) == 10); // succeeds
assert(min(100, 200) == 100); // FAILS
}
}
Like exec functions, proof functions may have requires and ensures clauses. Unlike exec functions, proof functions are ghost and are not compiled to executable code. In the example above, the call lemma_min(10, 20) helps the function test in module M2 prove an assertion about min(10, 20), even though M2 cannot see the internal definition of min because min is closed. On the other hand, the assertion about min(100, 200) still fails unless test also calls lemma_min(100, 200).
proof blocks
Ultimately, the purpose of spec functions and proof functions is to help prove properties about executable code in exec functions. In fact, exec functions can contain pieces of proof code in proof blocks, written with proof { ... }. Just like a proof function contains proof code, a proof block in an exec function contains proof code and can use all of the ghost code features that proof functions can use, such as the int and nat types.
Consider an earlier example that introduced variables inside an assertion:
fn test_consts_infer() {
let u: u8 = 1;
assert({
let i: int = 2;
let n: nat = 3;
0 <= u < i < n < 4
});
}
We can write this in a more natural style using a proof block:
fn test_consts_infer() {
let u: u8 = 1;
proof {
let i: int = 2;
let n: nat = 3;
assert(0 <= u < i < n < 4);
}
}
Here, the proof code inside the proof block can create local variables of type int and nat, which can then be used in a subsequent assertion. The entire proof block is ghost code, so all of it, including its local variables, will be erased before compilation to executable code.
Proof blocks can call proof functions. In fact, any calls from an exec function to a proof function must appear inside proof code such as a proof block, rather than being called directly from the exec function's exec code. This helps clarify which code is executable and which code is ghost, both for the compiler and for programmers reading the code. In the exec function test shown below, a proof block is used to call lemma_min, allowing subsequent assertions about min to succeed.
mod M1 {
use builtin::*;
pub closed spec fn min(x: int, y: int) -> int {
if x <= y {
x
} else {
y
}
}
pub proof fn lemma_min(x: int, y: int)
ensures
min(x, y) <= x,
min(x, y) <= y,
min(x, y) == x || min(x, y) == y,
{
}
}
mod M2 {
use builtin::*;
use crate::M1::*;
fn test() {
proof {
lemma_min(10, 20);
lemma_min(100, 200);
}
assert(min(10, 20) == 10); // succeeds
assert(min(100, 200) == 100); // succeeds
}
}
assert-by
Notice that in the previous example, the information that test gains about min is not confined to the proof block, but instead propagates past the end of the proof block to help prove the subsequent assertions. This is often useful, particularly when the proof block helps prove preconditions for subsequent calls to exec functions, which must appear outside the proof block. However, sometimes we only need to prove information for a specific purpose, and it clarifies the structure of the code if we limit the scope of the information gained. For this reason, Verus supports assert(...) by { ... } expressions, which allow proof code inside the by { ... } block whose sole purpose is to prove the asserted expression in the assert(...). Any additional information gained in the proof code is limited to the scope of the block and does not propagate outside the assert(...) by { ... } expression.
In the example below, the proof code in the block calls both lemma_min(10, 20) and lemma_min(100, 200). The first call is used to prove min(10, 20) == 10 in the assert(...) by { ... } expression. Once this is proven, the subsequent assertion assert(min(10, 20) == 10); succeeds. However, the assertion assert(min(100, 200) == 100); fails, because the information gained by the lemma_min(100, 200) call does not propagate outside the block that contains the call.
mod M1 {
use builtin::*;
pub closed spec fn min(x: int, y: int) -> int {
if x <= y {
x
} else {
y
}
}
pub proof fn lemma_min(x: int, y: int)
ensures
min(x, y) <= x,
min(x, y) <= y,
min(x, y) == x || min(x, y) == y,
{
}
}
mod M2 {
use builtin::*;
use crate::M1::*;
fn test() {
assert(min(10, 20) == 10) by {
lemma_min(10, 20);
lemma_min(100, 200);
}
assert(min(10, 20) == 10); // succeeds
assert(min(100, 200) == 100); // FAILS
}
}
spec functions vs. proof functions
Now that we've seen both spec functions and proof functions, let's take a longer look at the differences between them. We can summarize the differences in the following table (including exec functions in the table for reference):
| | spec function | proof function | exec function |
|---|---|---|---|
| compiled or ghost | ghost | ghost | compiled |
| code style | purely functional | mutation allowed | mutation allowed |
| can call spec functions | yes | yes | yes |
| can call proof functions | no | yes | yes |
| can call exec functions | no | no | yes |
| body visibility | may be visible | never visible | never visible |
| body | body optional | body mandatory | body mandatory |
| determinism | deterministic | nondeterministic | nondeterministic |
| preconditions/postconditions | recommends | requires/ensures | requires/ensures |
As described in the spec functions section, spec functions make their bodies visible to other functions in their module and may optionally make their bodies visible to other modules as well. spec functions can also omit their bodies entirely:
spec fn f(i: int) -> int;
Such an uninterpreted function can be useful in libraries that define an abstract, uninterpreted function along with trusted axioms about the function.
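As a sketch of that pattern (the function name axiom_f_positive is hypothetical, and the exact attribute spelling should be checked against the current Verus documentation), a library might pair an uninterpreted function with a trusted, bodiless proof function stating an axiom about it:
spec fn f(i: int) -> int;

// Hypothetical trusted axiom about the uninterpreted function f.
// The external_body attribute asks Verus to accept the ensures clause
// without a proof, so this declaration must be audited by hand.
#[verifier::external_body]
pub proof fn axiom_f_positive(i: int)
    ensures
        f(i) > 0,
{
}
Callers can then invoke axiom_f_positive to learn f(i) > 0 without ever seeing a definition of f.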
Determinism
spec functions are deterministic: given the same arguments, they always return the same result. Code can take advantage of this determinism even when a function's body is not visible. For example, the assertion s1 == s2 succeeds in the code below, because both s1 and s2 equal s(10), and s(10) always produces the same result, because s is a spec function:
mod M1 {
use builtin::*;
pub closed spec fn s(i: int) -> int {
i + 1
}
pub proof fn p(i: int) -> int {
i + 1
}
}
mod M2 {
use builtin::*;
use crate::M1::*;
proof fn test_determinism() {
let s1 = s(10);
let s2 = s(10);
assert(s1 == s2); // succeeds
let p1 = p(10);
let p2 = p(10);
assert(p1 == p2); // FAILS
}
}
By contrast, the proof function p is, in principle, allowed to return different results each time it is called, so the assertion p1 == p2 fails. (Nondeterminism is common for exec functions that perform input-output operations or work with random numbers. In practice, it would be unusual for a proof function to behave nondeterministically, but it is allowed.)
recommends
exec functions and proof functions can have requires and ensures clauses. By contrast, spec functions cannot have requires and ensures clauses. This is similar to the way Boogie works, but differs from other systems like Dafny and F*. The reason for disallowing requires and ensures is to keep Verus's specification language close to the SMT solver's mathematical language in order to use the SMT solver as efficiently as possible (see the Verus Overview). Nevertheless, it's sometimes useful to have some sort of precondition on spec functions to help catch mistakes in specifications early or to catch accidental misuses of spec functions. Therefore, spec functions may contain recommends clauses that are similar to requires clauses, but represent just lightweight recommendations rather than hard requirements. For example, for the following function, callers are under no obligation to obey the i > 0 recommendation:
spec fn f(i: nat) -> nat
recommends
i > 0,
{
(i - 1) as nat
}
proof fn test1() {
assert(f(0) == f(0)); // succeeds
}
It's perfectly legal for test1 to call f(0), and no error or warning will be generated for test1 (in fact, Verus will not check the recommendation at all). However, if there's a verification error in a function, Verus will automatically rerun the verification with recommendation checking turned on, in hopes that any recommendation failures will help diagnose the verification failure.
For example, in the following:
proof fn test2() {
assert(f(0) <= f(1)); // FAILS
}
Verus prints the failed assertion as an error and then prints the failed recommendation as a note:
error: assertion failed
|
| assert(f(0) <= f(1)); // FAILS
| ^^^^^^^^^^^^ assertion failed
note: recommendation not met
|
| recommends i > 0
| ----- recommendation not met
...
| assert(f(0) <= f(1)); // FAILS
| ^^^^^^^^^^^^
If the note isn’t helpful, programmers are free to ignore it.
By default, Verus does not perform recommends checking on calls from spec functions:
spec fn caller1() -> nat {
f(0) // no note, warning, or error generated
}
However, you can write spec(checked) to request recommends checking, which will cause Verus to generate warnings for recommends violations:
spec(checked) fn caller2() -> nat {
f(0) // generates a warning because of "(checked)"
}
This is particularly useful for specifications that are part of the “trusted computing base” that describes the interface to external, unverified components.
Ghost code vs. exec code
The purpose of exec code is to manipulate physically real values — values that exist in physical electronic circuits when a program runs. The purpose of ghost code, on the other hand, is merely to talk about the values that exec code manipulates. In a sense, this gives ghost code supernatural abilities: ghost code can talk about things that could not be physically implemented at run-time. We've already seen one example of this with the types int and nat, which can only be used in ghost code. As another example, ghost code can talk about the result of division by zero:
fn divide_by_zero() {
let x: u8 = 1;
assert(x / 0 == x / 0); // succeeds in ghost code
let y = x / 0; // FAILS in exec code
}
This simply reflects the SMT solver's willingness to reason about the result of division by zero as an unspecified integer value. By contrast, Verus reports a verification failure if exec code attempts to divide by zero:
error: possible division by zero
|
| let y = x / 0; // FAILS
| ^^^^^
Two particular abilities of ghost code are worth keeping in mind:
- Ghost code can copy values of any type, even if the type doesn't implement the Rust Copy trait.
- Ghost code can create a value of any type, even if the type has no public constructors (e.g. even if the type is a struct whose fields are all private to another module).
For example, the following spec functions create and duplicate values of type S, defined in another module with private fields and without the Copy trait:
mod MA {
// does not implement Copy
// does not allow construction by other modules
pub struct S {
private_field: u8,
}
}
mod MB {
use builtin::*;
use crate::MA::*;
// construct a ghost S
spec fn make_S() -> S;
// duplicate an S
spec fn duplicate_S(s: S) -> (S, S) {
(s, s)
}
}
These operations are not allowed in exec code. Furthermore, values from ghost code are not allowed to leak into exec code — what happens in ghost code stays in ghost code. Any attempt to use a value from ghost code in exec code will result in a compile-time error:
fn test(s: S) {
let pair = duplicate_S(s); // FAILS
}
error: cannot call function with mode spec
|
| let pair = duplicate_S(s); // FAILS
| ^^^^^^^^^^^^^^
As an example of ghost code that uses these abilities, a call to the Verus Seq::index(...) function can duplicate a value from the sequence, if the index i is within bounds, and create a value out of thin air if i is out of bounds:
impl<A> Seq<A> {
...
/// Gets the value at the given index `i`.
///
/// If `i` is not in the range `[0, self.len())`, then the resulting value
/// is meaningless and arbitrary.
pub spec fn index(self, i: int) -> A
recommends 0 <= i < self.len();
...
}
Note 1: Variables in proof code can opt out of these special abilities using the tracked annotation, but this is an advanced feature that can be ignored for now.
Note 2: This is true even if the type has no values in exec code, like the Rust ! "never" type (see the "bottom" value in this technical discussion).
const declarations
const declarations can either be marked spec or left without a mode:
spec const SPEC_ONE: int = 1;
spec fn spec_add_one(x: int) -> int {
x + SPEC_ONE
}
const ONE: u8 = 1;
fn add_one(x: u8) -> (ret: u8)
requires
x < 0xff,
ensures
ret == x + ONE, // use "ONE" in spec code
{
x + ONE // use "ONE" in exec code
}
A spec const is like a spec function with no arguments. It is always ghost and cannot be used as an exec value. By contrast, a const without a mode is dual-use: it is usable as both an exec value and a spec value. Therefore, the const definition is restricted to obey the rules for both exec code and spec code. For example, as with exec code, its type must be compilable (e.g. u8, not int), and, as with spec code, it cannot call any exec or proof functions.
Recursion and loops
Suppose we want to compute the nth triangular number:
triangle(n) = 0 + 1 + 2 + ... + (n - 1) + n
We can express this as a simple recursive function:
spec fn triangle(n: nat) -> nat
decreases n,
{
if n == 0 {
0
} else {
n + triangle((n - 1) as nat)
}
}
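Before verifying anything, it can help to see the recursion in ordinary terms. The following is a plain-Rust (non-Verus) sanity check, not part of the verified development, confirming that the recursive definition agrees with the well-known closed form n * (n + 1) / 2:

```rust
// Plain Rust (not Verus) sanity check: the recursive definition of
// triangle agrees with the closed form n * (n + 1) / 2.
fn triangle(n: u64) -> u64 {
    if n == 0 {
        0
    } else {
        n + triangle(n - 1)
    }
}

fn main() {
    // Check the first 21 triangle numbers against the closed form.
    for n in 0..=20u64 {
        assert_eq!(triangle(n), n * (n + 1) / 2);
    }
    println!("triangle(10) = {}", triangle(10)); // triangle(10) = 55
}
```

Of course, such testing only samples a few inputs; the point of Verus is to prove properties of triangle for all inputs.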
This chapter discusses how to define and use recursive functions, including writing decreases clauses and using fuel. It then explores a series of verified implementations of triangle, starting with a basic recursive implementation and ending with a while loop.
Recursive functions, decreases, fuel
Recursive functions are functions that call themselves. In order to ensure soundness, a recursive spec function must terminate on all inputs — infinite recursive calls aren't allowed. To see why termination is important, consider the following nonterminating function definition:
spec fn bogus(i: int) -> int {
bogus(i) + 1 // FAILS, error due to nontermination
}
Verus rejects this definition because the recursive call loops infinitely, never terminating. If Verus accepted the definition, then you could very easily prove false, because, for example, the definition insists that bogus(3) == bogus(3) + 1, which implies that 0 == 1, which is false:
proof fn exploit_bogus()
ensures
false,
{
assert(bogus(3) == bogus(3) + 1);
}
To help prove termination, Verus requires that each recursive spec function definition contain a decreases clause:
spec fn triangle(n: nat) -> nat
decreases n,
{
if n == 0 {
0
} else {
n + triangle((n - 1) as nat)
}
}
Each recursive call must decrease the expression in the decreases clause by at least 1. Furthermore, the call cannot cause the expression to decrease below 0. With these restrictions, the expression in the decreases clause serves as an upper bound on the depth of calls that triangle can make to itself, ensuring termination.
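To illustrate what the decreases check rules out, here is a sketch (with a hypothetical function name) of a definition Verus would reject, because the argument of the recursive call grows instead of shrinking:
spec fn count_up(n: nat) -> nat
    decreases n,
{
    count_up(n + 1) // FAILS: n + 1 does not decrease n
}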
Fuel and reasoning about recursive functions
Given the definition of triangle above, we can make some assertions about it:
fn test_triangle_fail() {
assert(triangle(0) == 0); // succeeds
assert(triangle(10) == 55); // FAILS
}
The first assertion, about triangle(0), succeeds. But somewhat surprisingly, the assertion assert(triangle(10) == 55) fails, despite the fact that triangle(10) really is equal to 55. We've just encountered a limitation of automated reasoning: SMT solvers cannot automatically prove all true facts about all recursive functions. For nonrecursive functions, an SMT solver can reason about the functions simply by inlining them. For example, if we have a call min(a + 1, 5) to the min function:
spec fn min(x: int, y: int) -> int {
if x <= y {
x
} else {
y
}
}
the SMT solver can replace min(a + 1, 5) with:
if a + 1 <= 5 {
a + 1
} else {
5
}
which eliminates the call. However, this strategy doesn’t completely work with recursive functions, because inlining the function produces another expression with a call to the same function:
triangle(x) = if x == 0 { 0 } else { x + triangle(x - 1) }
Naively, the solver could keep inlining again and again, producing more and more expressions, and this strategy would never terminate:
triangle(x) = if x == 0 { 0 } else { x + triangle(x - 1) }
triangle(x) = if x == 0 { 0 } else { x + (if x - 1 == 0 { 0 } else { x - 1 + triangle(x - 2) }) }
triangle(x) = if x == 0 { 0 } else { x + (if x - 1 == 0 { 0 } else { x - 1 + (if x - 2 == 0 { 0 } else { x - 2 + triangle(x - 3) }) }) }
To avoid this infinite inlining, Verus limits the number of recursive calls that any given call can spawn in the SMT solver. This limit is called the fuel; each nested recursive inlining consumes one unit of fuel. By default, the fuel is 1, which is just enough for assert(triangle(0) == 0) to succeed but not enough for assert(triangle(10) == 55) to succeed. To increase the fuel to a larger amount, we can use the reveal_with_fuel directive:
fn test_triangle_reveal() {
proof {
reveal_with_fuel(triangle, 11);
}
assert(triangle(10) == 55);
}
Here, 11 units of fuel is enough to inline the 11 calls triangle(0), …, triangle(10). Note that even if we only wanted to supply 1 unit of fuel, we could still prove assert(triangle(10) == 55) through a long series of assertions:
fn test_triangle_step_by_step() {
assert(triangle(0) == 0);
assert(triangle(1) == 1);
assert(triangle(2) == 3);
assert(triangle(3) == 6);
assert(triangle(4) == 10);
assert(triangle(5) == 15);
assert(triangle(6) == 21);
assert(triangle(7) == 28);
assert(triangle(8) == 36);
assert(triangle(9) == 45);
assert(triangle(10) == 55); // succeeds
}
This works because 1 unit of fuel is enough to prove assert(triangle(0) == 0), and then once we know that triangle(0) == 0, we only need to inline triangle(1) once to get:
triangle(1) = if 1 == 0 { 0 } else { 1 + triangle(0) }
Now the SMT solver can use the previously computed triangle(0) to simplify this to:
triangle(1) = if 1 == 0 { 0 } else { 1 + 0 }
and then produce triangle(1) == 1. Likewise, the SMT solver can then use 1 unit of fuel to rewrite triangle(2) in terms of triangle(1), proving triangle(2) == 3, and so on.
However, it's probably best to avoid long series of assertions if you can, and instead write a proof that makes it clear why the SMT proof fails by default (not enough fuel) and fixes exactly that problem:
fn test_triangle_assert_by() {
assert(triangle(10) == 55) by {
reveal_with_fuel(triangle, 11);
}
}
Recursive exec and proof functions, proofs by induction
The previous section introduced a specification for triangle numbers. Given that, let’s try a series of executable implementations of triangle numbers, starting with a simple recursive implementation:
fn rec_triangle(n: u32) -> (sum: u32)
ensures
sum == triangle(n as nat),
{
if n == 0 {
0
} else {
n + rec_triangle(n - 1) // FAILS: possible overflow
}
}
We immediately run into one small practical difficulty: the implementation needs to use a finite-width integer to hold the result, and this integer may overflow:
error: possible arithmetic underflow/overflow
|
| n + rec_triangle(n - 1) // FAILS: possible overflow
| ^^^^^^^^^^^^^^^^^^^^^^^
Indeed, we can't expect the implementation to work if the result won't fit in the finite-width integer type, so it makes sense to add a precondition saying that the result must fit, which for a u32 result means triangle(n) < 0x1_0000_0000:
fn rec_triangle(n: u32) -> (sum: u32)
requires
triangle(n as nat) < 0x1_0000_0000,
ensures
sum == triangle(n as nat),
{
if n == 0 {
0
} else {
n + rec_triangle(n - 1)
}
}
This time, verification succeeds. It's worth pausing for a few minutes, though, to understand why the verification succeeds. For example, an execution of rec_triangle(10) performs 10 separate additions, each of which could potentially overflow. How does Verus know that none of these 10 additions will overflow, given just the initial precondition triangle(10) < 0x1_0000_0000? The answer is that each instance of rec_triangle(n) for n != 0 makes a recursive call to rec_triangle(n - 1), and this recursive call must satisfy the precondition triangle(n - 1) < 0x1_0000_0000.
Let's look at how this is proved. If we know triangle(n) < 0x1_0000_0000 from rec_triangle's precondition and we use 1 unit of fuel to inline the definition of triangle once, we get:
triangle(n) < 0x1_0000_0000
triangle(n) = if n == 0 { 0 } else { n + triangle(n - 1) }
In the case where n != 0, this simplifies to:
triangle(n) < 0x1_0000_0000
triangle(n) = n + triangle(n - 1)
From this, we conclude n + triangle(n - 1) < 0x1_0000_0000, which means that triangle(n - 1) < 0x1_0000_0000, since 0 <= n, because n has type u32, which is nonnegative.
Intuitively, you can imagine that as rec_triangle executes, proofs about triangle(n) < 0x1_0000_0000 get passed down the stack to the recursive calls, proving triangle(10) < 0x1_0000_0000 in the first call, then triangle(9) < 0x1_0000_0000 in the second call, triangle(8) < 0x1_0000_0000 in the third call, and so on. (Of course, the proofs don't actually exist at run-time — they are purely static and are erased before compilation — but this is still a reasonable way to think about it.)
Towards an imperative implementation: mutation and tail recursion
The recursive implementation presented above is easy to write and verify, but it's not very efficient, since it requires a lot of stack space for the recursion. Let's take a couple of small steps towards a more efficient, imperative implementation based on while loops. First, to prepare for the mutable variables that we'll use in while loops, let's switch sum from being a return value to being a mutably updated variable:
fn mut_triangle(n: u32, sum: &mut u32)
requires
triangle(n as nat) < 0x1_0000_0000,
ensures
*sum == triangle(n as nat),
{
if n == 0 {
*sum = 0;
} else {
mut_triangle(n - 1, sum);
*sum = *sum + n;
}
}
From the verification's point of view, this doesn't change anything significant. Internally, when performing verification, Verus simply represents the final value of *sum as a return value, making the verification of mut_triangle essentially the same as the verification of rec_triangle.
Next, let's try to eliminate the excessive stack usage by making the function tail recursive. We do this by introducing an index variable idx that counts up from 0 to n, just as a while loop would do:
fn tail_triangle(n: u32, idx: u32, sum: &mut u32)
requires
idx <= n,
*old(sum) == triangle(idx as nat),
triangle(n as nat) < 0x1_0000_0000,
ensures
*sum == triangle(n as nat),
{
if idx < n {
let idx = idx + 1;
*sum = *sum + idx;
tail_triangle(n, idx, sum);
}
}
In the preconditions and postconditions, the expression *old(sum) refers to the initial value of *sum, at the entry to the function, while *sum refers to the final value, at the exit from the function. The precondition *old(sum) == triangle(idx as nat) specifies that as tail_triangle executes more and more recursive calls, sum accumulates the sum 0 + 1 + ... + idx. Each recursive call increases idx by 1 until idx reaches n, at which point sum equals 0 + 1 + ... + n and the function simply returns sum unmodified.
When we try to verify tail_triangle, though, Verus reports an error about possible overflow:
error: possible arithmetic underflow/overflow
|
| *sum = *sum + idx;
| ^^^^^^^^^^
This may seem perplexing at first: why doesn't the precondition triangle(n as nat) < 0x1_0000_0000 automatically take care of the overflow, as it did for rec_triangle and mut_triangle? The problem is that we've reversed the order of the addition and the recursive call. rec_triangle and mut_triangle made the recursive call first, and then performed the addition. This allowed them to prove all the necessary facts about overflow first in the series of recursive calls (e.g. proving triangle(10) < 0x1_0000_0000, triangle(9) < 0x1_0000_0000, …, triangle(0) < 0x1_0000_0000) before doing the arithmetic that depends on these facts. But tail_triangle tries to perform the arithmetic first, before the recursion, so it never has a chance to develop these facts from the original triangle(n) < 0x1_0000_0000 assumption.
Proofs by induction
In the example of computing triangle(10), we need to know triangle(0) < 0x1_0000_0000, then triangle(1) < 0x1_0000_0000, and so on, but we only know triangle(10) < 0x1_0000_0000 to start with. If we somehow knew that triangle(0) <= triangle(10), triangle(1) <= triangle(10), and so on, then we could derive what we want from triangle(10) < 0x1_0000_0000. What we need is a lemma proving that if i <= j, then triangle(i) <= triangle(j). In other words, we need to prove that triangle is monotonic. We can use a proof function to implement this lemma:
proof fn triangle_is_monotonic(i: nat, j: nat)
ensures
i <= j ==> triangle(i) <= triangle(j),
decreases j,
{
// We prove the statement `i <= j ==> triangle(i) <= triangle(j)`
// by induction on `j`.
if j == 0 {
// The base case (`j == 0`) is trivial since it's only
// necessary to reason about when `i` and `j` are both 0.
// So no proof lines are needed for this case.
}
else {
// In the induction step, we can assume the statement is true
// for `j - 1`. In Verus, we can get that fact into scope with
// a recursive call substituting `j - 1` for `j`.
triangle_is_monotonic(i, (j - 1) as nat);
// Once we know it's true for `j - 1`, the rest of the proof
// is trivial.
}
}
The proof is by induction on j, where the base case of the induction is j == 0 and the induction step relates j - 1 to j. In Verus, the induction step is implemented as a recursive call from the proof to itself (in this example, this recursive call is the line triangle_is_monotonic(i, (j - 1) as nat)).
As with recursive spec functions, recursive proof functions must terminate and need a decreases clause. Otherwise, it would be easy to prove false, as in the following non-terminating "proof":
proof fn circular_reasoning()
ensures
false,
{
circular_reasoning(); // FAILS, does not terminate
}
We can use the triangle_is_monotonic lemma to complete the verification of tail_triangle:
fn tail_triangle(n: u32, idx: u32, sum: &mut u32)
requires
idx <= n,
*old(sum) == triangle(idx as nat),
triangle(n as nat) < 0x1_0000_0000,
ensures
*sum == triangle(n as nat),
{
if idx < n {
let idx = idx + 1;
assert(*sum + idx < 0x1_0000_0000) by {
triangle_is_monotonic(idx as nat, n as nat);
}
*sum = *sum + idx;
tail_triangle(n, idx, sum);
}
}
Intuitively, we can think of the call from tail_triangle to triangle_is_monotonic as performing a recursive proof similar to the one rec_triangle and mut_triangle performed as they proved their triangle(n) < 0x1_0000_0000 preconditions in their recursive calls. In going from rec_triangle and mut_triangle to tail_triangle, we've just shifted this recursive reasoning from the executable code into a separate recursive lemma.
Loops and invariants
The previous section developed a tail-recursive implementation of triangle:
fn tail_triangle(n: u32, idx: u32, sum: &mut u32)
requires
idx <= n,
*old(sum) == triangle(idx as nat),
triangle(n as nat) < 0x1_0000_0000,
ensures
*sum == triangle(n as nat),
{
if idx < n {
let idx = idx + 1;
assert(*sum + idx < 0x1_0000_0000) by {
triangle_is_monotonic(idx as nat, n as nat);
}
*sum = *sum + idx;
tail_triangle(n, idx, sum);
}
}
We can rewrite this as a while loop as follows:
fn loop_triangle(n: u32) -> (sum: u32)
requires
triangle(n as nat) < 0x1_0000_0000,
ensures
sum == triangle(n as nat),
{
let mut sum: u32 = 0;
let mut idx: u32 = 0;
while idx < n
invariant
idx <= n,
sum == triangle(idx as nat),
triangle(n as nat) < 0x1_0000_0000,
{
idx = idx + 1;
assert(sum + idx < 0x1_0000_0000) by {
triangle_is_monotonic(idx as nat, n as nat);
}
sum = sum + idx;
}
sum
}
The loop is quite similar to the tail-recursive implementation. (In fact, internally, Verus verifies the loop as if it were its own function, separate from the enclosing loop_triangle function.) Where the tail-recursive function had preconditions, the loop has loop invariants that describe what must be true before and after each iteration of the loop. For example, if n = 10, then the loop invariant must be true 11 times: before each of the 10 iterations, and after the final iteration.
Notice that the invariant idx <= n allows for the possibility that idx == n, since this will be the case after the final iteration. If we tried to write the invariant as idx < n, then Verus would fail to prove that the invariant is maintained after the final iteration.
After the loop exits, Verus knows that idx <= n (because of the loop invariant) and it knows that the loop condition idx < n must have been false (otherwise, the loop would have continued). Putting these together allows Verus to prove that idx == n after exiting the loop. Since we also have the invariant sum == triangle(idx as nat), Verus can then substitute n for idx to conclude sum == triangle(n as nat), which proves the postcondition of loop_triangle.
Just as verifying functions requires some programmer effort to write appropriate preconditions and postconditions, verifying loops requires programmer effort to write loop invariants. The loop invariants have to be neither too weak (invariant true is usually too weak) nor too strong (invariant false is too strong), so that:
- the invariants hold upon the initial entry to the loop (e.g. idx <= n holds for the initial value idx = 0, since 0 <= n)
- the invariant still holds at the end of the loop body, so that the invariant is maintained across loop iterations
- the invariant is strong enough to prove the properties we want to know after the loop exits (e.g. to prove loop_triangle's postcondition)
As mentioned above, Verus verifies the loop separately from the function that contains the loop (e.g. separately from loop_triangle). This means that the loop does not automatically inherit preconditions like triangle(n as nat) < 0x1_0000_0000 from the surrounding function — if the loop relies on these preconditions, they must be listed explicitly in the loop invariants. (The reason for this is to improve the efficiency of the SMT solving for large functions with large while loops; verification runs faster if Verus breaks the surrounding function and the loops into separate pieces and verifies them modularly.)
Verus does allow you to opt out of this behavior, meaning that your loops will inherit information from the surrounding context. This simplifies your loop invariants, but verification time may increase for medium-to-large functions. To opt out for a single function or while loop, you can add the attribute #[verifier::loop_isolation(false)]. You can also opt out at the module or crate level by adding the #![verifier::loop_isolation(false)] attribute to the module or the root of the crate. You can then override the global setting locally by adding #[verifier::loop_isolation(true)] on individual functions or loops.
Loops with break
Loops can exit early using return or break.
Suppose, for example, we want to remove the requirement
triangle(n as nat) < 0x1_0000_0000 from the loop_triangle function,
and instead check for overflow at run-time.
The following version of the function uses return to return
the special value 0xffff_ffff in case overflow is detected at run-time:
fn loop_triangle_return(n: u32) -> (sum: u32)
    ensures
        sum == triangle(n as nat) || (sum == 0xffff_ffff && triangle(n as nat) >= 0x1_0000_0000),
{
    let mut sum: u32 = 0;
    let mut idx: u32 = 0;
    while idx < n
        invariant
            idx <= n,
            sum == triangle(idx as nat),
    {
        idx = idx + 1;
        if sum as u64 + idx as u64 >= 0x1_0000_0000 {
            proof {
                triangle_is_monotonic(idx as nat, n as nat);
            }
            return 0xffff_ffff;
        }
        sum = sum + idx;
    }
    sum
}
Another way to exit early from a loop is with a break inside the loop body.
However, break complicates the specification of a loop slightly.
For simple while loops without a break,
Verus knows that the loop condition (e.g. idx < n)
must be false after exiting the loop.
If there is a break, though, the loop condition is not necessarily false
after the loop, because the break might cause the loop to exit even when
the loop condition is true.
To deal with this, while loops with a break,
as well as Rust loop expressions (loops with no condition),
must explicitly specify what is true after the loop exit using ensures clauses,
as shown in the following code.
Furthermore, invariants that don’t hold after a break
must be marked as invariant_except_break rather than invariant:
fn loop_triangle_break(n: u32) -> (sum: u32)
    ensures
        sum == triangle(n as nat) || (sum == 0xffff_ffff && triangle(n as nat) >= 0x1_0000_0000),
{
    let mut sum: u32 = 0;
    let mut idx: u32 = 0;
    while idx < n
        invariant_except_break
            idx <= n,
            sum == triangle(idx as nat),
        ensures
            sum == triangle(n as nat) || (sum == 0xffff_ffff && triangle(n as nat) >= 0x1_0000_0000),
    {
        idx = idx + 1;
        if sum as u64 + idx as u64 >= 0x1_0000_0000 {
            proof {
                triangle_is_monotonic(idx as nat, n as nat);
            }
            sum = 0xffff_ffff;
            break;
        }
        sum = sum + idx;
    }
    sum
}
Lexicographic decreases clauses
For some recursive functions,
it’s difficult to specify a single value that decreases
in each recursive call.
For example, the Ackermann function has two parameters m and n,
and neither m nor n decreases in all three of the recursive calls:
spec fn ackermann(m: nat, n: nat) -> nat
    decreases m, n,
{
    if m == 0 {
        n + 1
    } else if n == 0 {
        ackermann((m - 1) as nat, 1)
    } else {
        ackermann((m - 1) as nat, ackermann(m, (n - 1) as nat))
    }
}

proof fn test_ackermann() {
    reveal_with_fuel(ackermann, 12);
    assert(ackermann(3, 2) == 29);
}
For this situation, Verus allows the decreases clause to contain multiple expressions,
and it treats these expressions as lexicographically ordered.
For example, decreases m, n means that one of the following must be true:
- m stays the same and n decreases,
which happens in the call ackermann(m, (n - 1) as nat)
- m decreases and n may increase or decrease arbitrarily,
which happens in the two calls of the form ackermann((m - 1) as nat, ...)
Mutual recursion
Functions may be mutually recursive,
as in the following example, where is_even calls is_odd recursively
and is_odd calls is_even recursively:
spec fn abs(i: int) -> int {
    if i < 0 {
        -i
    } else {
        i
    }
}

spec fn is_even(i: int) -> bool
    decreases abs(i),
{
    if i == 0 {
        true
    } else if i > 0 {
        is_odd(i - 1)
    } else {
        is_odd(i + 1)
    }
}

spec fn is_odd(i: int) -> bool
    decreases abs(i),
{
    if i == 0 {
        false
    } else if i > 0 {
        is_even(i - 1)
    } else {
        is_even(i + 1)
    }
}
proof fn even_odd_mod2(i: int)
    ensures
        is_even(i) <==> i % 2 == 0,
        is_odd(i) <==> i % 2 == 1,
    decreases abs(i),
{
    if i < 0 {
        even_odd_mod2(i + 1);
    }
    if i > 0 {
        even_odd_mod2(i - 1);
    }
}

fn test_even() {
    proof {
        reveal_with_fuel(is_even, 11);
    }
    assert(is_even(10));
}

fn test_odd() {
    proof {
        reveal_with_fuel(is_odd, 11);
    }
    assert(!is_odd(10));
}
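The same mutual recursion runs fine as plain (unverified) Rust; each call moves i one step closer to 0 from either side, which is exactly the decreasing quantity abs(i) that the Verus version names in its decreases clause.

```rust
// Unverified Rust version of the mutually recursive is_even/is_odd.
// The recursion moves i toward 0 from either side, decreasing |i|.
fn is_even(i: i64) -> bool {
    if i == 0 { true } else if i > 0 { is_odd(i - 1) } else { is_odd(i + 1) }
}

fn is_odd(i: i64) -> bool {
    if i == 0 { false } else if i > 0 { is_even(i - 1) } else { is_even(i + 1) }
}

fn main() {
    assert!(is_even(10));
    assert!(!is_odd(10));
    assert!(is_odd(-3)); // works for negative arguments too
}
```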
The recursion here works for both positive and negative i;
in both cases, the recursion decreases abs(i), the absolute value of i.
An alternate way to write this mutual recursion is:
spec fn is_even(i: int) -> bool
    decreases abs(i), 0int,
{
    if i == 0 {
        true
    } else if i > 0 {
        is_odd(i - 1)
    } else {
        is_odd(i + 1)
    }
}

spec fn is_odd(i: int) -> bool
    decreases abs(i), 1int,
{
    !is_even(i)
}

proof fn even_odd_mod2(i: int)
    ensures
        is_even(i) <==> i % 2 == 0,
        is_odd(i) <==> i % 2 == 1,
    decreases abs(i),
{
    reveal_with_fuel(is_odd, 2);
    if i < 0 {
        even_odd_mod2(i + 1);
    }
    if i > 0 {
        even_odd_mod2(i - 1);
    }
}

fn test_even() {
    proof {
        reveal_with_fuel(is_even, 21);
    }
    assert(is_even(10));
}

fn test_odd() {
    proof {
        reveal_with_fuel(is_odd, 22);
    }
    assert(!is_odd(10));
}
In this alternate version, the recursive call !is_even(i)
doesn’t decrease abs(i), so we can’t just use abs(i)
as the decreases clause by itself.
However, we can employ a trick with lexicographic ordering.
If we write decreases abs(i), 1,
then the call to !is_even(i) keeps the first expression abs(i) the same,
but decreases the second expression from 1 to 0,
which satisfies the lexicographic requirements for decreasing.
The call is_odd(i - 1) also obeys lexicographic ordering,
since it decreases the first expression abs(i),
which allows the second expression to increase from 0 to 1.
Datatypes: Structs and Enums
Datatypes, in both executable code and specifications, are
defined via Rust’s struct and enum.
Struct
In Verus, just as in Rust, you can use struct to define a datatype that
collects a set of fields together:
struct Point {
    x: int,
    y: int,
}
Spec and exec code can refer to struct fields:
impl Point {
    spec fn len2(&self) -> int {
        self.x * self.x + self.y * self.y
    }
}

fn rotate_90(p: Point) -> (o: Point)
    ensures o.len2() == p.len2()
{
    let o = Point { x: -p.y, y: p.x };
    assert((-p.y) * (-p.y) == p.y * p.y) by(nonlinear_arith);
    o
}
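The length-preservation property can also be spot-checked in plain (unverified) Rust, using i64 in place of the mathematical int; the postcondition above says this holds for every point, while a concrete run only checks one.

```rust
// Unverified Rust sketch of Point/rotate_90, using i64 instead of int.
#[derive(Clone, Copy)]
struct Point { x: i64, y: i64 }

impl Point {
    // Squared length, mirroring the spec fn len2 above.
    fn len2(&self) -> i64 { self.x * self.x + self.y * self.y }
}

fn rotate_90(p: Point) -> Point {
    Point { x: -p.y, y: p.x }
}

fn main() {
    let p = Point { x: 3, y: 4 };
    // Rotation preserves squared length: (-4)^2 + 3^2 == 3^2 + 4^2.
    assert_eq!(rotate_90(p).len2(), p.len2());
}
```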
Enum
In Verus, just as in Rust, you can use enum to define a datatype that is any one of the
defined variants:
enum Beverage {
    Coffee { creamers: nat, sugar: bool },
    Soda { flavor: Syrup },
    Water { ice: bool },
}
An enum is often used just for its tags, without member fields:
enum Syrup {
    Cola,
    RootBeer,
    Orange,
    LemonLime,
}
Identifying a variant with the is operator
In spec contexts, the is operator lets you query which variant of an
enum a variable contains.
fn make_float(bev: Beverage) -> Dessert
    requires bev is Soda
{
    Dessert::new(/*...*/)
}
Accessing fields with the arrow operator
If all the fields have distinct names, as in the Beverage example,
you can refer to fields with the arrow -> operator:
proof fn sufficiently_creamy(bev: Beverage) -> bool
    requires bev is Coffee
{
    bev->creamers >= 2
}
If an enum field reuses a name, you can qualify the field access:
enum Life {
    Mammal { legs: int, has_pocket: bool },
    Arthropod { legs: int, wings: int },
    Plant { leaves: int },
}

spec fn is_insect(l: Life) -> bool
{
    l is Arthropod && l->Arthropod_legs == 6
}
match works as in Rust.
enum Shape {
    Circle(int),
    Rect(int, int),
}

spec fn area_2(s: Shape) -> int {
    match s {
        Shape::Circle(radius) => { radius * radius * 3 },
        Shape::Rect(width, height) => { width * height },
    }
}
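Since match works the same way in executable Rust, the spec function above translates almost verbatim into an ordinary (unverified) Rust function, with i64 standing in for the mathematical int:

```rust
// Unverified Rust version of Shape/area_2; `match` is identical in
// executable Rust and in the Verus spec function above.
enum Shape {
    Circle(i64),
    Rect(i64, i64),
}

fn area_2(s: &Shape) -> i64 {
    match s {
        Shape::Circle(radius) => radius * radius * 3,
        Shape::Rect(width, height) => width * height,
    }
}

fn main() {
    assert_eq!(area_2(&Shape::Circle(2)), 12); // 2 * 2 * 3
    assert_eq!(area_2(&Shape::Rect(3, 4)), 12);
}
```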
For variants like Shape declared with round parentheses (),
you can use Verus’ -> tuple-like syntax to access a single field
without destructuring via match:
spec fn rect_height(s: Shape) -> int
    recommends s is Rect
{
    s->1
}
matches with &&, ==>, and &&&
match is natural for examining every variant of an enum.
If you’d like to bind the fields while only considering one or two of the
variants, you can use Verus’ matches syntax:
use Life::*;

spec fn cuddly(l: Life) -> bool {
    ||| l matches Mammal { legs, .. } && legs == 4
    ||| l matches Arthropod { legs, wings } && legs == 8 && wings == 0
}
Because the matches syntax binds names in patterns, it has no trouble
with field names reused across variants, so it may be preferable
to the (qualified) arrow syntax.
Notice that l matches Mammal { legs, .. } && legs == 4 is a boolean expression,
with the special property that legs is bound in the remainder of the
expression after &&. That helpful binding also works with ==> and &&&:
spec fn is_kangaroo(l: Life) -> bool {
    &&& l matches Life::Mammal { legs, has_pocket }
    &&& legs == 2
    &&& has_pocket
}

spec fn walks_upright(l: Life) -> bool {
    l matches Life::Mammal { legs, .. } ==> legs == 2
}
Libraries
Specification libraries: Seq, Set, Map
The Verus libraries contain types Seq<T>, Set<T>, and Map<Key, Value>
for representing sequences, sets, and maps in specifications.
In contrast to executable Rust collection datatypes in std::collections,
the Seq, Set, and Map types represent collections of arbitrary size.
For example, while the len() method of std::collections::HashSet
returns a length of type usize, which is bounded,
the len() methods of Seq and Set return lengths of type nat, which is unbounded.
Furthermore, Set and Map can represent infinite sets and maps.
(Sequences, on the other hand, are always finite.)
This allows specifications to talk about collections that
are larger than could be contained in the physical memory of a computer.
Constructing and using Seq, Set, Map
The seq!, set!, and map! macros construct values of type
Seq, Set, and Map with particular contents:
proof fn test_seq1() {
    let s: Seq<int> = seq![0, 10, 20, 30, 40];
    assert(s.len() == 5);
    assert(s[2] == 20);
    assert(s[3] == 30);
}

proof fn test_set1() {
    let s: Set<int> = set![0, 10, 20, 30, 40];
    assert(s.finite());
    assert(s.contains(20));
    assert(s.contains(30));
    assert(!s.contains(60));
}

proof fn test_map1() {
    let m: Map<int, int> = map![0 => 0, 10 => 100, 20 => 200, 30 => 300, 40 => 400];
    assert(m.dom().contains(20));
    assert(m.dom().contains(30));
    assert(!m.dom().contains(60));
    assert(m[20] == 200);
    assert(m[30] == 300);
}
The macros above can only construct finite sequences, sets, and maps.
There are also functions Seq::new, Set::new, and Map::new,
which can allocate both finite values and (for sets and maps) infinite values:
proof fn test_seq2() {
    let s: Seq<int> = Seq::new(5, |i: int| 10 * i);
    assert(s.len() == 5);
    assert(s[2] == 20);
    assert(s[3] == 30);
}

proof fn test_set2() {
    let s: Set<int> = Set::new(|i: int| 0 <= i <= 40 && i % 10 == 0);
    assert(s.contains(20));
    assert(s.contains(30));
    assert(!s.contains(60));
    let s_infinite: Set<int> = Set::new(|i: int| i % 10 == 0);
    assert(s_infinite.contains(20));
    assert(s_infinite.contains(30));
    assert(!s_infinite.contains(35));
}

proof fn test_map2() {
    let m: Map<int, int> = Map::new(|i: int| 0 <= i <= 40 && i % 10 == 0, |i: int| 10 * i);
    assert(m[20] == 200);
    assert(m[30] == 300);
    let m_infinite: Map<int, int> = Map::new(|i: int| i % 10 == 0, |i: int| 10 * i);
    assert(m_infinite[20] == 200);
    assert(m_infinite[30] == 300);
    assert(m_infinite[90] == 900);
}
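For comparison, the finite parts of these examples can be mirrored with executable std collections. This is an unverified plain-Rust sketch, not Verus code: unlike the spec-level Set and Map, std collections are always finite and their len() is a bounded usize, so there is no analogue of s_infinite or m_infinite here.

```rust
use std::collections::{HashMap, HashSet};

// Executable analogue of the finite parts of test_set2/test_map2.
fn multiples_of_ten_set() -> HashSet<i64> {
    (0..=40).filter(|i| i % 10 == 0).collect()
}

fn multiples_of_ten_map() -> HashMap<i64, i64> {
    (0..=40).filter(|i| i % 10 == 0).map(|i| (i, 10 * i)).collect()
}

fn main() {
    let s = multiples_of_ten_set();
    assert!(s.contains(&20) && s.contains(&30) && !s.contains(&60));

    let m = multiples_of_ten_map();
    assert_eq!(m[&20], 200);
    assert_eq!(m[&30], 300);
}
```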
Each Map<Key, Value> value has a domain of type Set<Key> given by .dom().
In the test_map2 example above, m’s domain is the finite set {0, 10, 20, 30, 40},
while m_infinite’s domain is the infinite set {..., -20, -10, 0, 10, 20, ...}.
For more operations, including sequence concatenation (.add or +),
sequence update,
sequence subrange,
set union (.union or +),
set intersection (.intersect),
and so on,
see the API documentation.
Proving properties of Seq, Set, Map
The SMT solver will prove some properties about Seq, Set, and Map automatically, as shown in the examples above. However, some other properties may require calling lemmas in the library or may require proofs by induction.
If two collections (Seq, Set, or Map) have the same elements,
Verus considers them to be equal.
This is known as equality via extensionality.
However, the SMT solver will in general not automatically recognize that
two collections are equal if they were constructed in different ways.
For example, the following three sequences are equal,
but asserting equality fails:
proof fn test_eq_fail() {
    let s1: Seq<int> = seq![0, 10, 20, 30, 40];
    let s2: Seq<int> = seq![0, 10] + seq![20] + seq![30, 40];
    let s3: Seq<int> = Seq::new(5, |i: int| 10 * i);
    assert(s1 === s2); // FAILS, even though it's true
    assert(s1 === s3); // FAILS, even though it's true
}
To convince the SMT solver that s1, s2, and s3 are equal,
we have to explicitly assert the equality via the extensional equality operator =~=,
rather than just the ordinary equality operator ==.
Using =~= forces the SMT solver
to check that all the elements of the collections are equal,
which it would not ordinarily do.
Once we’ve explicitly proven equality via extensionality,
we can then successfully assert ==:
proof fn test_eq() {
    let s1: Seq<int> = seq![0, 10, 20, 30, 40];
    let s2: Seq<int> = seq![0, 10] + seq![20] + seq![30, 40];
    let s3: Seq<int> = Seq::new(5, |i: int| 10 * i);
    assert(s1 =~= s2);
    assert(s1 =~= s3);
    assert(s1 === s2); // succeeds
    assert(s1 === s3); // succeeds
}
(See the Equality via extensionality section for more details.)
Proofs about set cardinality (Set::len) and set finiteness (Set::finite)
often require inductive proofs.
For example, the exact cardinality of the intersection of two sets
depends on which elements the two sets have in common.
If the two sets are disjoint,
the intersection’s cardinality will be 0,
but otherwise, the intersection’s cardinality will be some non-zero value.
Let’s try to prove that the intersection’s cardinality is no larger than
either of the two sets’ cardinalities.
Without loss of generality, we can just prove that
the intersection’s cardinality is no larger than the first set’s cardinality:
s1.intersect(s2).len() <= s1.len().
The proof (which is found in set_lib.rs)
is by induction on the size of the set s1.
In the induction step, we need to make s1 smaller,
which means we need to remove an element from it.
The two methods .choose and .remove allow us to choose
an arbitrary element from s1 and remove it:
let a = s1.choose();
... s1.remove(a) ...
Based on this, we expect an inductive proof to look something like the following,
where the inductive step removes s1.choose():
pub proof fn lemma_len_intersect<A>(s1: Set<A>, s2: Set<A>)
    requires
        s1.finite(),
    ensures
        s1.intersect(s2).len() <= s1.len(),
    decreases
        s1.len(),
{
    if s1.is_empty() {
    } else {
        let a = s1.choose();
        lemma_len_intersect(s1.remove(a), s2);
    }
}
Unfortunately, Verus fails to verify this proof. Therefore, we’ll need to fill in the base case and induction case with some more detail. Before adding this detail to the code, let’s think about what a fully explicit proof might look like if we wrote it out by hand:
pub proof fn lemma_len_intersect<A>(s1: Set<A>, s2: Set<A>)
    requires
        s1.finite(),
    ensures
        s1.intersect(s2).len() <= s1.len(),
    decreases
        s1.len(),
{
    if s1.is_empty() {
        // s1 is the empty set.
        // Therefore, s1.intersect(s2) is also empty.
        // So both s1.len() and s1.intersect(s2).len() are 0,
        // and 0 <= 0.
    } else {
        // s1 is not empty, so it has at least one element.
        // Let a be an element from s1.
        // Let s1' be the set s1 with the element a removed (i.e. s1' == s1 - {a}).
        // Removing an element decreases the cardinality by 1, so s1'.len() == s1.len() - 1.
        // By induction, s1'.intersect(s2).len() <= s1'.len(), so:
        //   (s1 - {a}).intersect(s2).len() <= s1'.len()
        //   (s1.intersect(s2) - {a}).len() <= s1'.len()
        //   (s1.intersect(s2) - {a}).len() <= s1.len() - 1
        // case a in s1.intersect(s2):
        //   (s1.intersect(s2) - {a}).len() == s1.intersect(s2).len() - 1
        // case a not in s1.intersect(s2):
        //   (s1.intersect(s2) - {a}).len() == s1.intersect(s2).len()
        // In either case:
        //   s1.intersect(s2).len() <= (s1.intersect(s2) - {a}).len() + 1
        // Putting all the inequalities together:
        //   s1.intersect(s2).len() <= (s1.intersect(s2) - {a}).len() + 1 <= (s1.len() - 1) + 1
        // So:
        //   s1.intersect(s2).len() <= (s1.len() - 1) + 1
        //   s1.intersect(s2).len() <= s1.len()
    }
}
For such a simple property, this is a surprisingly long proof! Fortunately, the SMT solver can automatically prove most of the steps written above. What it will not automatically prove, though, is any step requiring equality via extensionality, as discussed earlier. The two crucial steps requiring equality via extensionality are:
- “Therefore, s1.intersect(s2) is also empty.”
- Replacing (s1 - {a}).intersect(s2) with s1.intersect(s2) - {a}
For these, we need to explicitly invoke =~=:
pub proof fn lemma_len_intersect<A>(s1: Set<A>, s2: Set<A>)
    requires
        s1.finite(),
    ensures
        s1.intersect(s2).len() <= s1.len(),
    decreases
        s1.len(),
{
    if s1.is_empty() {
        assert(s1.intersect(s2) =~= s1);
    } else {
        let a = s1.choose();
        assert(s1.intersect(s2).remove(a) =~= s1.remove(a).intersect(s2));
        lemma_len_intersect(s1.remove(a), s2);
    }
}
With this, Verus and the SMT solver successfully complete the proof.
However, Verus and the SMT solver aren’t the only audience for this proof.
Anyone maintaining this code might want to know why we invoked =~=,
and we probably shouldn’t force them to work out the entire hand-written proof above
to rediscover this.
So although it’s not strictly necessary,
it’s probably polite to wrap the assertions in assert...by to indicate
the purpose of the =~=:
pub proof fn lemma_len_intersect<A>(s1: Set<A>, s2: Set<A>)
    requires
        s1.finite(),
    ensures
        s1.intersect(s2).len() <= s1.len(),
    decreases s1.len(),
{
    if s1.is_empty() {
        assert(s1.intersect(s2).len() == 0) by {
            assert(s1.intersect(s2) =~= s1);
        }
    } else {
        let a = s1.choose();
        lemma_len_intersect(s1.remove(a), s2);
        // by induction: s1.remove(a).intersect(s2).len() <= s1.remove(a).len()
        assert(s1.intersect(s2).remove(a).len() <= s1.remove(a).len()) by {
            assert(s1.intersect(s2).remove(a) =~= s1.remove(a).intersect(s2));
        }
        // simplifying ".remove(a).len()" yields s1.intersect(s2).len() <= s1.len()
    }
}
Executable libraries: Vec
The previous section discussed the mathematical collection types
Seq, Set, and Map.
This section will discuss Vec, an executable implementation of Seq.
Verus supports some functionality of Rust’s std::vec::Vec type.
To use Vec, include use std::vec::Vec; in your code.
You can allocate a Vec using Vec::new and then push elements into it:
fn test_vec1() {
    let mut v: Vec<u32> = Vec::new();
    v.push(0);
    v.push(10);
    v.push(20);
    v.push(30);
    v.push(40);
    assert(v.len() == 5);
    assert(v[2] == 20);
    assert(v[3] == 30);
    v.set(2, 21);
    assert(v[2] == 21);
    assert(v[3] == 30);
}
The code above is able to make assertions directly about the Vec value v.
You could also write more complicated specifications and proofs about Vec values.
In general, though, Verus encourages programmers to write spec functions
and proof functions about mathematical types like Seq, Set, and Map instead
of hard-wiring the specifications and proofs to particular concrete datatypes like Vec.
This allows spec functions and proof functions to focus on the essential ideas,
written in terms of mathematical types like Seq, Set, Map, int, and nat,
rather than having to fiddle around with finite-width integers like usize,
worry about arithmetic overflow, etc.
Of course, there needs to be a connection between the mathematical types
and the concrete types, and specifications in exec functions will commonly have to move
back and forth between mathematical abstractions and concrete reality.
To make this easier, Verus supports the syntactic sugar @ for extracting
a mathematical view from a concrete type.
For example, v@ returns a Seq of all the elements in the vector v:
spec fn has_five_sorted_numbers(s: Seq<u32>) -> bool {
    s.len() == 5 && s[0] <= s[1] <= s[2] <= s[3] <= s[4]
}

fn test_vec2() {
    let mut v: Vec<u32> = Vec::new();
    v.push(0);
    v.push(10);
    v.push(20);
    v.push(30);
    v.push(40);
    v.set(2, 21);
    assert(v@ =~= seq![0, 10, 21, 30, 40]);
    assert(v@ =~= seq![0, 10] + seq![21] + seq![30, 40]);
    assert(v@[2] == 21);
    assert(v@[3] == 30);
    assert(v@.subrange(2, 4) =~= seq![21, 30]);
    assert(has_five_sorted_numbers(v@));
}
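The executable part of this example has a direct plain-Rust counterpart. In this unverified sketch, ordinary indexed assignment (v[2] = 21) plays the role of the Verus exec method v.set(2, 21), and slice operations stand in for Seq operations like subrange:

```rust
// Unverified plain-Rust counterpart of test_vec2.
fn build_vec() -> Vec<u32> {
    let mut v: Vec<u32> = Vec::new();
    for x in [0, 10, 20, 30, 40] {
        v.push(x);
    }
    v[2] = 21; // like v.set(2, 21) in Verus exec code
    v
}

// Run-time check corresponding to the spec fn has_five_sorted_numbers.
fn has_five_sorted_numbers(s: &[u32]) -> bool {
    s.len() == 5 && s.windows(2).all(|w| w[0] <= w[1])
}

fn main() {
    let v = build_vec();
    assert_eq!(v, vec![0, 10, 21, 30, 40]);
    assert_eq!(&v[2..4], &[21, 30]); // like v@.subrange(2, 4)
    assert!(has_five_sorted_numbers(&v));
}
```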
Using the Seq view of the Vec allows us to use the various features of Seq,
such as concatenation and subsequences,
when writing specifications about the Vec contents.
Verus support for std::vec::Vec is currently being expanded.
For up-to-date documentation, visit this link.
Note that these functions provide specifications for std::vec::Vec functions.
Thus, for example, ex_vec_insert represents support for the Vec function insert.
Code written in Verus should use insert rather than ex_vec_insert.
Documentation for std::vec::Vec functionality can be found here.
Spec Closures
Verus supports anonymous functions (known as “closures” in Rust) in ghost code.
For example, the following code from earlier in this chapter
uses an anonymous function |i: int| 10 * i
to initialize a sequence with the values 0, 10, 20, 30, 40:
proof fn test_seq2() {
    let s: Seq<int> = Seq::new(5, |i: int| 10 * i);
    assert(s.len() == 5);
    assert(s[2] == 20);
    assert(s[3] == 30);
}
The anonymous function |i: int| 10 * i
has type spec_fn(int) -> int and has mode spec.
Because it has mode spec,
the anonymous function is subject to the same restrictions as named spec functions.
(For example, it can call other spec functions but not proof functions
or exec functions.)
Note that in contrast to standard executable Rust closures,
where Fn, FnOnce, and FnMut are traits,
spec_fn(int) -> int is a type, not a trait.
Therefore, ghost code can return a spec closure directly,
using a return value of type spec_fn(t1, ..., tn) -> tret,
without having to use dyn or impl,
as with standard executable Rust closures.
For example, the spec function adder, shown below,
can return an anonymous function that adds x to y:
spec fn adder(x: int) -> spec_fn(int) -> int {
    |y: int| x + y
}

proof fn test_adder() {
    let f = adder(10);
    assert(f(20) == 30);
    assert(f(60) == 70);
}
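The contrast with executable Rust is worth seeing concretely. In ordinary (unverified) Rust, a function returning a closure must name a closure trait, typically via impl Fn, and capture its environment with move, whereas the spec version above simply returns a value of the ordinary type spec_fn(int) -> int:

```rust
// Executable Rust counterpart of adder: the return type must be a
// trait (`impl Fn`), unlike spec_fn(int) -> int, which is a plain type.
fn adder(x: i64) -> impl Fn(i64) -> i64 {
    move |y| x + y // `move` captures x by value
}

fn main() {
    let f = adder(10);
    assert_eq!(f(20), 30);
    assert_eq!(f(60), 70);
}
```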
Using assert and assume to develop proofs
In an earlier chapter, we started with an outline of a proof:
pub proof fn lemma_len_intersect<A>(s1: Set<A>, s2: Set<A>)
    requires
        s1.finite(),
    ensures
        s1.intersect(s2).len() <= s1.len(),
    decreases
        s1.len(),
{
    if s1.is_empty() {
    } else {
        let a = s1.choose();
        lemma_len_intersect(s1.remove(a), s2);
    }
}
and then filled in the crucial missing steps to complete the proof. It didn’t say, though, how you might go about discovering which crucial steps are missing. In practice, it takes some experimentation to fill in this kind of proof.
This section will walk through a typical process of developing a proof,
using the proof outline above as a starting point.
The process will consist of a series of queries to Verus and the SMT solver,
using assert and assume to ask questions,
and using the answers to narrow in on the cause of the verification failure.
If we run the proof above, Verus reports an error:
error: postcondition not satisfied
|
| s1.intersect(s2).len() <= s1.len(),
| ---------------------------------- failed this postcondition
This raises a couple of questions:
- Why is this postcondition failing?
- If this postcondition succeeded, would the verification of the whole function succeed?
Let’s check the second question first. We can simply assume the postcondition and see what happens:
pub proof fn lemma_len_intersect<A>(s1: Set<A>, s2: Set<A>)
    ...
{
    if s1.is_empty() {
    } else {
        let a = s1.choose();
        lemma_len_intersect::<A>(s1.remove(a), s2);
    }
    assume(s1.intersect(s2).len() <= s1.len());
}
In this case, verification succeeds:
verification results:: verified: 1 errors: 0
There are two paths through the code, one when s1.is_empty()
and one when !s1.is_empty().
The failure could lie along either path, or both.
Let’s prepare to work on each branch of the if/else separately
by moving a separate copy of the assume into each branch:
{
    if s1.is_empty() {
        assume(s1.intersect(s2).len() <= s1.len());
    } else {
        let a = s1.choose();
        lemma_len_intersect::<A>(s1.remove(a), s2);
        assume(s1.intersect(s2).len() <= s1.len());
    }
}
verification results:: verified: 1 errors: 0
Next, let’s change the first assume to an assert
to see if it succeeds in the if branch:
{
    if s1.is_empty() {
        assert(s1.intersect(s2).len() <= s1.len());
    } else {
        let a = s1.choose();
        lemma_len_intersect::<A>(s1.remove(a), s2);
        assume(s1.intersect(s2).len() <= s1.len());
    }
}
error: assertion failed
|
| assert(s1.intersect(s2).len() <= s1.len());
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ assertion failed
In the s1.is_empty() case, we expect s1.len() == 0
(an empty set has cardinality 0).
We can double-check this with a quick assertion:
{
    if s1.is_empty() {
        assert(s1.len() == 0);
        assume(s1.intersect(s2).len() <= s1.len());
    } else {
        ...
    }
}
verification results:: verified: 1 errors: 0
So what we need is s1.intersect(s2).len() <= 0.
If this were true, we’d satisfy the postcondition here:
{
    if s1.is_empty() {
        assume(s1.intersect(s2).len() <= 0);
        assert(s1.intersect(s2).len() <= s1.len());
    } else {
        ...
    }
}
verification results:: verified: 1 errors: 0
Since set cardinality is a nat, the only way it can be <= 0
is if it’s equal to 0:
{
    if s1.is_empty() {
        assume(s1.intersect(s2).len() == 0);
        assert(s1.intersect(s2).len() <= s1.len());
    } else {
        ...
    }
}
verification results:: verified: 1 errors: 0
and the only way it can be 0 is if the set is the empty set:
{
    if s1.is_empty() {
        assume(s1.intersect(s2) === Set::empty());
        assert(s1.intersect(s2).len() == 0);
        assert(s1.intersect(s2).len() <= s1.len());
    } else {
        ...
    }
}
verification results:: verified: 1 errors: 0
If we change the assume to an assert, the assertion fails:
{
    if s1.is_empty() {
        assert(s1.intersect(s2) === Set::empty());
        assert(s1.intersect(s2).len() == 0);
        assert(s1.intersect(s2).len() <= s1.len());
    } else {
        ...
    }
}
error: assertion failed
|
| assert(s1.intersect(s2) === Set::empty());
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ assertion failed
So we’ve narrowed in on the problem:
the intersection of the empty set s1 with another set should equal the empty set,
but the verifier doesn’t see this automatically.
And from the previous section’s discussion of equality, we can guess why:
the SMT solver doesn’t always automatically prove equalities between collections,
but instead requires us to assert the equality using extensionality.
So we can add the extensionality assertion:
{
    if s1.is_empty() {
        assert(s1.intersect(s2) =~= Set::empty());
        assert(s1.intersect(s2) === Set::empty());
        assert(s1.intersect(s2).len() == 0);
        assert(s1.intersect(s2).len() <= s1.len());
    } else {
        ...
    }
}
verification results:: verified: 1 errors: 0
It works! We’ve now verified the s1.is_empty() case,
and we can turn our attention to the !s1.is_empty() case:
{
    if s1.is_empty() {
        ...
    } else {
        let a = s1.choose();
        lemma_len_intersect::<A>(s1.remove(a), s2);
        assume(s1.intersect(s2).len() <= s1.len());
    }
}
Changing this assume to an assert fails,
so we’ve got work to do in this case as well:
{
    if s1.is_empty() {
        ...
    } else {
        let a = s1.choose();
        lemma_len_intersect::<A>(s1.remove(a), s2);
        assert(s1.intersect(s2).len() <= s1.len());
    }
}
error: assertion failed
|
| assert(s1.intersect(s2).len() <= s1.len());
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ assertion failed
Fortunately, the recursive call lemma_len_intersect::<A>(s1.remove(a), s2) succeeded,
so we have some information from the postcondition of this call.
Let’s write this out explicitly so we can examine it more closely,
substituting s1.remove(a) for s1:
{
    if s1.is_empty() {
        ...
    } else {
        let a = s1.choose();
        lemma_len_intersect::<A>(s1.remove(a), s2);
        assert(s1.remove(a).intersect(s2).len() <= s1.remove(a).len());
        assume(s1.intersect(s2).len() <= s1.len());
    }
}
verification results:: verified: 1 errors: 0
Let’s compare what we know about s1.remove(a)
with what we’re trying to prove about s1:
assert(s1.remove(a).intersect(s2).len() <= s1.remove(a).len()); // WE KNOW THIS
assume(s1 .intersect(s2).len() <= s1 .len()); // WE WANT THIS
Is there any way we can make what we know look more like what we want?
For example, how does s1.remove(a).len() relate to s1.len()?
The value a is an element of s1, so if we remove it from s1,
it should decrease the cardinality by 1:
{
    if s1.is_empty() {
        ...
    } else {
        let a = s1.choose();
        lemma_len_intersect::<A>(s1.remove(a), s2);
        assert(s1.remove(a).intersect(s2).len() <= s1.remove(a).len());
        assert(s1.remove(a).len() == s1.len() - 1);
        assume(s1.intersect(s2).len() <= s1.len());
    }
}
verification results:: verified: 1 errors: 0
So we can simplify a bit:
{
    if s1.is_empty() {
        ...
    } else {
        let a = s1.choose();
        lemma_len_intersect::<A>(s1.remove(a), s2);
        assert(s1.remove(a).intersect(s2).len() <= s1.remove(a).len());
        assert(s1.remove(a).intersect(s2).len() <= s1.len() - 1);
        assert(s1.remove(a).intersect(s2).len() + 1 <= s1.len());
        assume(s1.intersect(s2).len() <= s1.len());
    }
}
verification results:: verified: 1 errors: 0
Now the missing piece is the relation between
s1.remove(a).intersect(s2).len() + 1 and s1.intersect(s2).len():
{
    if s1.is_empty() {
        ...
    } else {
        let a = s1.choose();
        lemma_len_intersect::<A>(s1.remove(a), s2);
        assert(s1.remove(a).intersect(s2).len() <= s1.remove(a).len());
        assert(s1.remove(a).intersect(s2).len() <= s1.len() - 1);
        assert(s1.remove(a).intersect(s2).len() + 1 <= s1.len());
        assume(s1.intersect(s2).len() <= s1.remove(a).intersect(s2).len() + 1);
        assert(s1.intersect(s2).len() <= s1.len());
    }
}
verification results:: verified: 1 errors: 0
If we can prove the assumption
s1.intersect(s2).len() <= s1.remove(a).intersect(s2).len() + 1,
we’ll be done:
assume(s1.intersect(s2).len() <= s1.remove(a).intersect(s2).len() + 1);
Is there any way we can make s1.remove(a).intersect(s2)
look more like s1.intersect(s2)
so that it’s easier to prove this inequality?
If we switched the order from s1.remove(a).intersect(s2)
to s1.intersect(s2).remove(a),
then the subexpression s1.intersect(s2) would match:
assume(s1.intersect(s2).len() <= s1.intersect(s2).remove(a).len() + 1);
so let’s try that:
{
    if s1.is_empty() {
        ...
    } else {
        let a = s1.choose();
        lemma_len_intersect::<A>(s1.remove(a), s2);
        assert(s1.remove(a).intersect(s2).len() <= s1.remove(a).len());
        assert(s1.remove(a).intersect(s2).len() <= s1.len() - 1);
        assert(s1.remove(a).intersect(s2).len() + 1 <= s1.len());
        assert(s1.intersect(s2).len() <= s1.intersect(s2).remove(a).len() + 1);
        assert(s1.intersect(s2).len() <= s1.remove(a).intersect(s2).len() + 1);
        assert(s1.intersect(s2).len() <= s1.len());
    }
}
error: assertion failed
|
| assert(s1.intersect(s2).len() <= s1.remove(a).intersect(s2).len() + 1);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ assertion failed
One of these assertions succeeds and the other fails.
The only difference between the successful assertion
and the failing assertion is the order of intersect and remove
in s1.intersect(s2).remove(a) and s1.remove(a).intersect(s2),
so all we need to finish the proof is for s1.intersect(s2).remove(a)
to be equal to s1.remove(a).intersect(s2):
{
    if s1.is_empty() {
        ...
    } else {
        let a = s1.choose();
        lemma_len_intersect::<A>(s1.remove(a), s2);
        assert(s1.remove(a).intersect(s2).len() <= s1.remove(a).len());
        assert(s1.remove(a).intersect(s2).len() <= s1.len() - 1);
        assert(s1.remove(a).intersect(s2).len() + 1 <= s1.len());
        assert(s1.intersect(s2).len() <= s1.intersect(s2).remove(a).len() + 1);
        assume(s1.intersect(s2).remove(a) === s1.remove(a).intersect(s2));
        assert(s1.intersect(s2).len() <= s1.remove(a).intersect(s2).len() + 1);
        assert(s1.intersect(s2).len() <= s1.len());
    }
}
verification results:: verified: 1 errors: 0
Again, we found ourselves needing to know the equality of two collections. And again, the first thing to try is to assert extensional equality:
{
    if s1.is_empty() {
        ...
    } else {
        let a = s1.choose();
        lemma_len_intersect::<A>(s1.remove(a), s2);
        assert(s1.remove(a).intersect(s2).len() <= s1.remove(a).len());
        assert(s1.remove(a).intersect(s2).len() <= s1.len() - 1);
        assert(s1.remove(a).intersect(s2).len() + 1 <= s1.len());
        assert(s1.intersect(s2).len() <= s1.intersect(s2).remove(a).len() + 1);
        assert(s1.intersect(s2).remove(a) =~= s1.remove(a).intersect(s2));
        assert(s1.intersect(s2).remove(a) === s1.remove(a).intersect(s2));
        assert(s1.intersect(s2).len() <= s1.remove(a).intersect(s2).len() + 1);
        assert(s1.intersect(s2).len() <= s1.len());
    }
}
verification results:: verified: 1 errors: 0
It works!
Now we’ve eliminated all the assumes, so we’ve completed the verification:
pub proof fn lemma_len_intersect<A>(s1: Set<A>, s2: Set<A>)
requires
s1.finite(),
ensures
s1.intersect(s2).len() <= s1.len(),
decreases
s1.len(),
{
if s1.is_empty() {
assert(s1.intersect(s2) =~= Set::empty());
assert(s1.intersect(s2) === Set::empty());
assert(s1.intersect(s2).len() == 0);
assert(s1.intersect(s2).len() <= s1.len());
} else {
let a = s1.choose();
lemma_len_intersect::<A>(s1.remove(a), s2);
assert(s1.remove(a).intersect(s2).len() <= s1.remove(a).len());
assert(s1.remove(a).intersect(s2).len() <= s1.len() - 1);
assert(s1.remove(a).intersect(s2).len() + 1 <= s1.len());
assert(s1.intersect(s2).len() <= s1.intersect(s2).remove(a).len() + 1);
assert(s1.intersect(s2).remove(a) =~= s1.remove(a).intersect(s2));
assert(s1.intersect(s2).remove(a) === s1.remove(a).intersect(s2));
assert(s1.intersect(s2).len() <= s1.remove(a).intersect(s2).len() + 1);
assert(s1.intersect(s2).len() <= s1.len());
}
}
verification results:: verified: 1 errors: 0
The code above contains a lot of unnecessary asserts, though, so it’s worth spending a few minutes cleaning the code up for the sake of anyone who has to maintain the code in the future.
We want to clear out unnecessary code so there’s less code to maintain,
but keep enough information so
someone maintaining the code can still understand the code.
The right amount of information is a matter of taste,
but we can try to strike a reasonable balance between conciseness and informativeness:
pub proof fn lemma_len_intersect<A>(s1: Set<A>, s2: Set<A>)
requires
s1.finite(),
ensures
s1.intersect(s2).len() <= s1.len(),
decreases s1.len(),
{
if s1.is_empty() {
assert(s1.intersect(s2).len() == 0) by {
assert(s1.intersect(s2) =~= s1);
}
} else {
let a = s1.choose();
lemma_len_intersect(s1.remove(a), s2);
// by induction: s1.remove(a).intersect(s2).len() <= s1.remove(a).len()
assert(s1.intersect(s2).remove(a).len() <= s1.remove(a).len()) by {
assert(s1.intersect(s2).remove(a) =~= s1.remove(a).intersect(s2));
}
// simplifying ".remove(a).len()" yields s1.intersect(s2).len() <= s1.len()
}
}
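The inequality this lemma establishes can also be sanity-checked at run time on concrete finite sets. The following is a plain Rust sketch (not Verus code, and not a proof — just a dynamic check of the same property) using the standard-library HashSet:

```rust
use std::collections::HashSet;

// Dynamically check the property proved by lemma_len_intersect:
// |s1 ∩ s2| <= |s1| for finite sets.
fn check_len_intersect(s1: &HashSet<i64>, s2: &HashSet<i64>) -> bool {
    let intersection_len = s1.intersection(s2).count();
    intersection_len <= s1.len()
}

fn main() {
    let s1: HashSet<i64> = [1, 2, 3, 4].into_iter().collect();
    let s2: HashSet<i64> = [3, 4, 5].into_iter().collect();
    assert!(check_len_intersect(&s1, &s2));
    assert!(check_len_intersect(&s2, &s1));
    assert!(check_len_intersect(&HashSet::new(), &s1)); // empty-set base case
}
```

Of course, a run-time check only tests particular inputs; the point of the proof above is that the property holds for all finite sets.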
Quantifiers
Suppose that we want to specify that all the elements of a sequence are even. If the sequence has a small, fixed size, we could write a specification for every element separately:
spec fn is_even(i: int) -> bool {
i % 2 == 0
}
proof fn test_seq_5_is_evens(s: Seq<int>)
requires
s.len() == 5,
is_even(s[0]),
is_even(s[1]),
is_even(s[2]),
is_even(s[3]),
is_even(s[4]),
{
assert(is_even(s[3]));
}
Clearly, though, this won’t scale well to larger sequences or sequences of unknown length.
We could write a recursive specification:
spec fn all_evens(s: Seq<int>) -> bool
decreases s.len(),
{
if s.len() == 0 {
true
} else {
is_even(s.last()) && all_evens(s.drop_last())
}
}
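For intuition, here is a plain Rust (executable, unverified) analogue of the same recursive definition, operating on a slice rather than a Seq:

```rust
fn is_even(i: i64) -> bool {
    i % 2 == 0
}

// Runtime analogue of the recursive spec all_evens: true for an
// empty slice; otherwise check the last element and recurse on
// the rest, mirroring s.last() and s.drop_last().
fn all_evens(s: &[i64]) -> bool {
    match s.split_last() {
        None => true,
        Some((last, rest)) => is_even(*last) && all_evens(rest),
    }
}

fn main() {
    assert!(all_evens(&[0, 2, 4, 6, 8]));
    assert!(!all_evens(&[0, 2, 5]));
    assert!(all_evens(&[])); // vacuously true
}
```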
proof fn test_seq_recursive(s: Seq<int>)
requires
s.len() == 5,
all_evens(s),
{
assert(is_even(s[3])) by {
reveal_with_fuel(all_evens, 2);
}
}
However, using a recursive definition will lead to many proofs by induction, which can require a lot of programmer effort to write.
Fortunately, Verus and SMT solvers support the universal and existential quantifiers forall and exists, which we can think of as infinite conjunctions or disjunctions:
(forall|i: int| f(i)) = ... f(-2) && f(-1) && f(0) && f(1) && f(2) && ...
(exists|i: int| f(i)) = ... f(-2) || f(-1) || f(0) || f(1) || f(2) || ...
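For a finite range, the same idea can be expressed directly in executable Rust with iterator combinators, where all plays the role of a (finite) forall and any plays the role of a (finite) exists. This is a sketch for intuition only, since the quantifiers above range over all mathematical integers:

```rust
fn f(i: i64) -> bool {
    i % 2 == 0
}

fn main() {
    // finite analogue of a forall over 0..10: a conjunction
    // f(0) && f(2) && f(4) && ...
    let forall_like = (0..10).all(|i| f(2 * i));
    // finite analogue of an exists over 0..10: a disjunction
    // f(0) || f(1) || f(2) || ...
    let exists_like = (0..10).any(|i| f(i));
    assert!(forall_like);
    assert!(exists_like);
}
```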
With this, it’s much more convenient to write a specification about all elements of a sequence:
proof fn test_use_forall(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int| 0 <= i < s.len() ==> #[trigger] is_even(s[i]),
{
assert(is_even(s[3]));
}
Although quantifiers are very powerful, they require some care, because the SMT solver’s reasoning about quantifiers is incomplete. This isn’t a deficiency in the SMT solver’s implementation, but rather a deeper issue: it’s an undecidable problem to figure out whether a formula with quantifiers, functions, and arithmetic is valid or not, so there’s no complete algorithm that the SMT solver could implement. Instead, the SMT solver uses an incomplete strategy based on triggers, which instantiates quantifiers when expressions match trigger patterns.
This chapter will describe how to use forall
and exists
,
how triggers work,
and some related topics on choose
expressions and closures.
forall and triggers
Let’s take a closer look at the following code, which uses a forall expression in a requires clause to prove an assertion:
proof fn test_use_forall(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int| 0 <= i < s.len() ==> #[trigger] is_even(s[i]),
{
assert(is_even(s[3]));
}
The forall expression means that 0 <= i < s.len() ==> is_even(s[i]) holds for all possible integers i:
...
0 <= -3 < s.len() ==> is_even(s[-3])
0 <= -2 < s.len() ==> is_even(s[-2])
0 <= -1 < s.len() ==> is_even(s[-1])
0 <= 0 < s.len() ==> is_even(s[0])
0 <= 1 < s.len() ==> is_even(s[1])
0 <= 2 < s.len() ==> is_even(s[2])
0 <= 3 < s.len() ==> is_even(s[3])
...
There are infinitely many integers i, so the list shown above is infinitely long. We can’t expect the SMT solver to literally expand the forall into an infinite list of expressions. Furthermore, in this example, we only care about one of the expressions, the expression for i = 3, since this is all we need to prove assert(is_even(s[3])):
0 <= 3 < s.len() ==> is_even(s[3])
Ideally, the SMT solver will choose just the values of i that are likely to be relevant to verifying a particular program. The most common technique that SMT solvers use for choosing likely-relevant values is based on triggers (also known as SMT patterns or just patterns).
A trigger is simply an expression or set of expressions that the SMT solver uses as a pattern to match with. In the example above, the #[trigger] attribute marks the expression is_even(s[i]) as the trigger for the forall expression. Based on this attribute, the SMT solver looks for expressions of the form is_even(s[...]). During the verification of the test_use_forall function shown above, there is one expression that has this form: is_even(s[3]). This matches the trigger is_even(s[i]) exactly, with i = 3. Based on this pattern match, the SMT solver chooses i = 3 and introduces the following fact:
0 <= 3 < s.len() ==> is_even(s[3])
This fact allows the SMT solver to complete the proof of the assertion assert(is_even(s[3])).
Triggers are the way you program the instantiations of forall expressions (and the way you program proofs of exists expressions, as discussed in a later section). By choosing different triggers, you can influence how the forall expressions get instantiated with different values, such as i = 3 in the example above.
Suppose, for example, we change the assertion slightly so that we assert s[3] % 2 == 0 instead of is_even(s[3]). Mathematically, these two are equivalent. However, the assertion about s[3] % 2 == 0 fails:
spec fn is_even(i: int) -> bool {
i % 2 == 0
}
proof fn test_use_forall_fail(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int| 0 <= i < s.len() ==> #[trigger] is_even(s[i]),
{
assert(s[3] % 2 == 0); // FAILS: doesn't trigger is_even(s[i])
}
This fails because there are no expressions matching the pattern is_even(s[...]); the expression s[3] % 2 == 0 doesn’t mention is_even at all. In order to prove s[3] % 2 == 0, we’d first have to mention is_even(s[3]) explicitly:
proof fn test_use_forall_succeeds1(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int| 0 <= i < s.len() ==> #[trigger] is_even(s[i]),
{
assert(is_even(s[3])); // triggers is_even(s[3])
assert(s[3] % 2 == 0); // succeeds, because previous line already instantiated the forall
}
Once the expression is_even(s[3]) coaxes the SMT solver into instantiating the forall expression with i = 3, the SMT solver can use the resulting fact 0 <= 3 < s.len() ==> is_even(s[3]) to prove s[3] % 2 == 0.
Alternatively, we could just choose a less picky trigger. For example, the trigger s[i] matches any expression of the form s[...], which includes both the s[3] inside s[3] % 2 == 0 and the s[3] inside is_even(s[3]):
proof fn test_use_forall_succeeds2(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int| 0 <= i < s.len() ==> is_even(#[trigger] s[i]),
{
assert(s[3] % 2 == 0); // succeeds by triggering s[3]
}
In fact, if we omit the #[trigger] attribute entirely, Verus chooses the trigger s[i] automatically:
proof fn test_use_forall_succeeds3(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int| 0 <= i < s.len() ==> is_even(s[i]), // Verus chooses s[i] as the trigger and prints a note
{
assert(s[3] % 2 == 0); // succeeds by triggering s[3]
}
Verus prints a note stating that it chose this trigger:
note: automatically chose triggers for this expression:
|
| forall|i: int| 0 <= i < s.len() ==> is_even(s[i]), // Verus chooses s[i] as the trigger
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: trigger 1 of 1:
|
| forall|i: int| 0 <= i < s.len() ==> is_even(s[i]), // Verus chooses s[i] as the trigger
| ^^^^
note: Verus printed one or more automatically chosen quantifier triggers
because it had low confidence in the chosen triggers.
Verus isn’t sure, though, whether the programmer wants s[i] or is_even(s[i]) as the trigger. It slightly prefers s[i] because s[i] is smaller than is_even(s[i]), so it chooses s[i], but it also prints the note encouraging the programmer to review the decision. The programmer can accept this decision by writing #![auto] before the quantifier body, which suppresses the note:
proof fn test_use_forall_succeeds4(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int|
#![auto]
0 <= i < s.len() ==> is_even(s[i]), // Verus chooses s[i] as the trigger
{
assert(s[3] % 2 == 0); // succeeds by triggering s[3]
}
Good triggers and bad triggers
So … which trigger is better, s[i] or is_even(s[i])?
Unfortunately, there’s no one best answer to this kind of question.
There are tradeoffs between the two different choices.
The trigger s[i] leads to more pattern matches than is_even(s[i]).
More matches means that the SMT solver is more likely to find relevant
instantiations that help a proof succeed.
However, more matches also means that the SMT solver is more likely to generate
irrelevant instantiations that clog up the SMT solver with useless information,
slowing down the proof.
In this case, s[i] is probably a good trigger to choose. It matches whenever the function test_use_forall_succeeds4 talks about an element of the sequence s, yielding a fact that is likely to be useful for reasoning about s. By contrast, suppose we chose the following bad trigger, 0 <= i:
proof fn test_use_forall_bad1(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int| (#[trigger](0 <= i)) && i < s.len() ==> is_even(s[i]),
{
assert(s[3] % 2 == 0);
}
In principle, this would match any value that is greater than or equal to 0, which would include values that have nothing to do with s and are unlikely to be relevant to s. In practice, Verus doesn’t even let you do this: triggers cannot contain equality or disequality (==, ===, !=, or !==), any basic integer arithmetic operator (like <= or +), or any basic boolean operator (like &&):
error: trigger must be a function call, a field access, or a bitwise operator
|
| forall|i: int| (#[trigger](0 <= i)) && i < s.len() ==> is_even(s[i]),
| ^^^^^^^^^^^^^^^^^^^^
If we really wanted, we could work around this by introducing an extra function:
spec fn nonnegative(i: int) -> bool {
0 <= i
}
proof fn test_use_forall_bad2(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int| #[trigger] nonnegative(i) && i < s.len() ==> is_even(s[i]),
{
assert(is_even(s[3])); // FAILS: doesn't trigger nonnegative(i)
}
but this trigger fails to match, because the code doesn’t explicitly mention nonnegative(3) (you’d have to add an explicit assert(nonnegative(3)) to make the code work). This is probably just as well; s[i] is simply a better trigger than nonnegative(i), because s[i] mentions s, and the whole point of forall|i: int| 0 <= i < s.len() ==> is_even(s[i]) is to say something about the elements of s, not to say something about nonnegative numbers.
Multiple variables, multiple triggers, matching loops
Suppose we have a forall expression with more than one variable, i and j:
spec fn is_distinct(x: int, y: int) -> bool {
x != y
}
proof fn test_distinct1(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int, j: int| 0 <= i < j < s.len() ==> #[trigger] is_distinct(s[i], s[j]),
{
assert(is_distinct(s[2], s[4]));
}
The forall expression shown above says that every element of s is distinct. (Note: we could have written 0 <= i < s.len() && 0 <= j < s.len() && i != j instead of 0 <= i < j < s.len(), but the latter is more concise and just as general: given any two distinct integers, we can let i be the smaller one and j be the larger one so that i < j.)
In the example above, the trigger is_distinct(s[i], s[j]) contains both the variables i and j, and the expression is_distinct(s[2], s[4]) matches the trigger with i = 2, j = 4:
0 <= 2 < 4 < s.len() ==> is_distinct(s[2], s[4])
Instead of using a function call is_distinct(s[i], s[j]), we could just write s[i] != s[j] directly. In this case, however, we cannot use the expression s[i] != s[j] as a trigger, because, as discussed in the previous section, triggers cannot contain equalities and disequalities like !=. Fortunately, a trigger does not need to be just a single expression. It can be split across multiple expressions, as in the following code, which defines the trigger to be the pair of expressions s[i], s[j]:
proof fn test_distinct2(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int, j: int| 0 <= i < j < s.len() ==> #[trigger] s[i] != #[trigger] s[j],
{
assert(s[4] != s[2]);
}
Verus also supports an alternate, equivalent syntax #![trigger ...], where the #![trigger ...] immediately follows the forall|...|, in case we prefer to write the pair s[i], s[j] directly:
proof fn test_distinct3(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int, j: int| #![trigger s[i], s[j]] 0 <= i < j < s.len() ==> s[i] != s[j],
{
assert(s[4] != s[2]);
}
When the trigger is the pair s[i], s[j], there are four matches: i = 2, j = 2 and i = 2, j = 4 and i = 4, j = 2 and i = 4, j = 4:
0 <= 2 < 2 < s.len() ==> s[2] != s[2]
0 <= 2 < 4 < s.len() ==> s[2] != s[4]
0 <= 4 < 2 < s.len() ==> s[4] != s[2]
0 <= 4 < 4 < s.len() ==> s[4] != s[4]
The i = 2, j = 4 instantiation proves s[2] != s[4], which is equivalent to s[4] != s[2]. The other instantiations are dead ends, since 2 < 2, 4 < 2, and 4 < 4 all fail.
A trigger must mention each of the quantifier variables i and j at least once. Otherwise, Verus will complain:
proof fn test_distinct_fail1(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int, j: int|
0 <= i < j < s.len() ==> s[i] != #[trigger] s[j], // error: trigger fails to mention i
{
assert(s[4] != s[2]);
}
error: trigger does not cover variable i
|
| / ... forall|i: int, j: int|
| | ... 0 <= i < j < s.len() ==> s[i] != #[trigger] s[j], // error: trigger fails to ment...
| |__________________________________________________________^
In order to match a trigger with multiple expressions, the SMT solver has to find matches for all the expressions in the trigger. Therefore, you can always make a trigger more restrictive by adding more expressions to it. For example, we could gratuitously add a third expression is_even(i) to the trigger, which would cause the match to fail, since no expression matches is_even(i):
proof fn test_distinct_fail2(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int, j: int| #![trigger s[i], s[j], is_even(i)]
0 <= i < j < s.len() ==> s[i] != s[j],
{
assert(s[4] != s[2]); // FAILS, because nothing matches is_even(i)
}
To make this example succeed, we’d have to mention is_even(2) explicitly:
assert(is_even(2));
assert(s[4] != s[2]); // succeeds; we've matched s[2], s[4], is_even(2)
Multiple triggers
In all the examples so far, each quantifier contained exactly one trigger (although the trigger sometimes contained more than one expression). It’s also possible, although rarer, to specify multiple triggers for a quantifier. The SMT solver will instantiate the quantifier if any of the triggers match. Thus, adding more triggers leads to more quantifier instantiations. (This stands in contrast to adding expressions to a trigger: adding more expressions to a trigger makes a trigger more restrictive and leads to fewer quantifier instantiations.)
The following example specifies both #![trigger a[i], b[j]] and #![trigger a[i], c[j]] as triggers, since neither is obviously better than the other:
proof fn test_multitriggers(a: Seq<int>, b: Seq<int>, c: Seq<int>)
requires
5 <= a.len(),
a.len() == b.len(),
a.len() == c.len(),
forall|i: int, j: int|
#![trigger a[i], b[j]]
#![trigger a[i], c[j]]
0 <= i < j < a.len() ==> a[i] != b[j] && a[i] != c[j],
{
assert(a[2] != c[4]); // succeeds, matches a[i], c[j]
}
(Note: to specify multiple triggers, you must use the #![trigger ...] syntax rather than the #[trigger] syntax.)
If the quantifier had mentioned only the single trigger #![trigger a[i], b[j]], then the assertion above would have failed, because a[2] != c[4] doesn’t mention b. A single trigger #![trigger a[i], b[j], c[j]] would be even more restrictive, requiring both b and c to appear, so the assertion would still fail. In the example above, you can omit the explicit triggers, and Verus will automatically infer exactly the two triggers #![trigger a[i], b[j]] and #![trigger a[i], c[j]].
However, in most cases, Verus deliberately avoids inferring more than one trigger,
because multiple triggers lead to more quantifier instantiations,
which potentially slows down the SMT solver.
One trigger is usually enough.
As an example of where one trigger is safer than multiple triggers, consider an assertion that says that updating element j of sequence s leaves element i unaffected:
proof fn seq_update_different<A>(s: Seq<A>, i: int, j: int, a: A) {
assert(forall|i: int, j: int|
0 <= i < s.len() && 0 <= j < s.len() && i != j ==> s.update(j, a)[i] == s[i]);
}
There are actually two possible triggers for this:
#![trigger s.update(j, a)[i]]
#![trigger s.update(j, a), s[i]]
However, Verus selects only the first one and rejects the second, in order to avoid too many quantifier instantiations:
note: automatically chose triggers for this expression:
|
| assert(forall|i: int, j: int|
| ____________^
| | 0 <= i < s.len() && 0 <= j < s.len() && i != j ==> s.update(j, a)[i] === s[i]
| |_____________________________________________________________________________________^
note: trigger 1 of 1:
--> .\rust_verify\example\guide\quants.rs:243:60
|
| 0 <= i < s.len() && 0 <= j < s.len() && i != j ==> s.update(j, a)[i] === s[i]
| ^^^^^^^^^^^^^^^^^
(Note: you can use the --triggers command-line option to print the message above.)
Matching loops: what they are and how to avoid them
Suppose we want to specify that a sequence is sorted. We can write this in a similar way to the earlier forall expression about sequence distinctness, writing s[i] <= s[j] in place of s[i] != s[j]:
proof fn test_sorted_good(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int, j: int| 0 <= i <= j < s.len() ==> s[i] <= s[j],
{
assert(s[2] <= s[4]);
}
In Verus, this is the best way to express sortedness, because the trigger s[i], s[j] works very well. However, there is an alternate approach. Instead of quantifying over both i and j, we could try to quantify over just a single variable i, and then compare s[i] to s[i + 1]:
proof fn test_sorted_bad(s: Seq<int>)
requires
5 <= s.len(),
forall|i: int|
0 <= i < s.len() - 1 ==> s[i] <= s[i + 1],
{
assert(s[2] <= s[4]);
}
However, Verus complains that it couldn’t find any good triggers:
error: Could not automatically infer triggers for this quantifer. Use #[trigger] annotations to manually mark trigger terms instead.
|
| / forall|i: int|
| | 0 <= i < s.len() - 1 ==> s[i] <= s[i + 1],
| |_____________________________________________________^
Verus considers the expressions 0 <= i, i < s.len() - 1, s[i], and s[i + 1] as candidates for a trigger. However, all of these except s[i] contain integer arithmetic, which is not allowed in triggers. The remaining candidate, s[i], looks reasonable at first glance, but Verus rejects it because it potentially leads to an infinite matching loop.
Triggers are the way to program the SMT solver’s quantifier instantiations,
and if we’re not careful, we can program infinite loops.
Let’s look at how this can happen.
Suppose that we insist on using s[i] as a trigger:
forall|i: int|
0 <= i < s.len() - 1 ==> #[trigger] s[i] <= s[i + 1],
(TODO: Verus should print a warning about a potential matching loop here.)
This would, in fact, succeed in verifying the assertion s[2] <= s[4], but not necessarily in a good way. The SMT solver would match on i = 2 and i = 4. For i = 2, we’d get:
0 <= 2 < s.len() - 1 ==> s[2] <= s[3]
This creates a new expression s[3], which the SMT solver can then match on with i = 3:
0 <= 3 < s.len() - 1 ==> s[3] <= s[4]
This tells us s[2] <= s[3] and s[3] <= s[4], which is sufficient to prove s[2] <= s[4].
The problem is that the instantiations don’t necessarily stop here. Given s[4], we can match with i = 4, which creates s[5], which leads to matching with i = 5, and so on:
0 <= 4 < s.len() - 1 ==> s[4] <= s[5]
0 <= 5 < s.len() - 1 ==> s[5] <= s[6]
0 <= 6 < s.len() - 1 ==> s[6] <= s[7]
...
In principle, the SMT solver could loop forever with i = 6, i = 7, and so on.
In practice, the SMT solver imposes a cutoff on quantifier instantiations which often
(but not always) halts the infinite loops.
But even if the SMT solver halts the loop,
this is still an inefficient process,
and matching loops should be avoided.
(For an example of a matching loop that causes the SMT solver to use an infinite
amount of time and memory, see this section.)
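The finite prefix of instantiations that is actually useful here is easy to mimic in executable Rust: starting from adjacent-pair facts s[i] <= s[i + 1], transitivity chains them step by step to conclude s[lo] <= s[hi], never looking past index hi. The following is a plain Rust sketch (not Verus), just to make the bounded chain of useful instantiations concrete:

```rust
// Check the adjacent-pair facts s[i] <= s[i + 1] for
// i = lo, ..., hi - 1 — the finite chain of instantiations that,
// by transitivity, justifies s[lo] <= s[hi].
fn leq_by_chaining(s: &[i64], lo: usize, hi: usize) -> bool {
    assert!(lo <= hi && hi < s.len());
    (lo..hi).all(|i| s[i] <= s[i + 1])
}

fn main() {
    let s = [1, 3, 5, 7, 9];
    assert!(leq_by_chaining(&s, 2, 4)); // chains s[2] <= s[3] <= s[4]
    assert!(s[2] <= s[4]);
}
```

Unlike this loop, the SMT solver has no built-in notion of where to stop, which is exactly why the s[i] trigger risks a matching loop.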
exists and choose
exists expressions are the dual of forall expressions. While forall|i: int| f(i) means that f(i) is true for all i, exists|i: int| f(i) means that f(i) is true for at least one i.
To prove exists|i: int| f(i), an SMT solver has to find one value of i for which f(i) is true. This value is called a witness for exists|i: int| f(i).
As with forall expressions, proofs about exists expressions are based on triggers. Specifically, to prove an exists expression, the SMT solver uses the exists expression’s trigger to try to find a witness. In the following example, the trigger is is_even(i):
proof fn test_exists_succeeds() {
assert(is_even(4));
assert(!is_even(5));
assert(is_even(6));
assert(exists|i: int| #[trigger] is_even(i)); // succeeds with witness i = 4 or i = 6
}
There are three expressions that match the trigger: is_even(4), is_even(5), and is_even(6). Two of them, is_even(4) and is_even(6), are possible witnesses for exists|i: int| #[trigger] is_even(i). Based on these, the assertion succeeds, using either i = 4 or i = 6 as a witness.
By contrast, the same assertion fails in the following code, since no expressions matching is_even(i) are available:
proof fn test_exists_fails() {
assert(exists|i: int| #[trigger] is_even(i)); // FAILS, no match for trigger
}
choose
The proofs above try to prove that an exists expression is true. Suppose, though, that we already know that an exists expression is true, perhaps because we assume it as a function precondition. This means that some witness to the exists expression must exist. If we want to get the witness, we can use a choose expression. A choose expression choose|i: int| f(i) implements the Hilbert choice operator (sometimes known as epsilon): it chooses some value i that satisfies f(i) if such a value exists. Otherwise, it picks an arbitrary value for i.
The following example assumes exists|i: int| f(i) as a precondition. Based on this, the SMT solver knows that there is at least one witness i that makes f(i) true, and choose picks one of these witnesses arbitrarily:
spec fn f(i: int) -> bool;
proof fn test_choose_succeeds()
requires
exists|i: int| f(i),
{
let i_witness = choose|i: int| f(i);
assert(f(i_witness));
}
If, on the other hand, we don’t know exists|i: int| f(i), then choose just returns an arbitrary value that might not satisfy f(i) (as discussed in ghost vs exec code, ghost code can create an arbitrary value of any type):
proof fn test_choose_fails() {
let i_witness = choose|i: int| f(i);
assert(i_witness < 0 || i_witness >= 0); // i_witness is some integer
assert(f(i_witness)); // FAILS because we don't know exists|i: int| f(i)
}
Regardless of whether we know exists|i: int| f(i) or not, the choose|i: int| f(i) expression always returns the same value:
proof fn test_choose_same() {
let x = choose|i: int| f(i);
let y = choose|i: int| f(i);
assert(x == y);
}
You can also choose multiple values together, collecting the values in a tuple:
spec fn less_than(x: int, y: int) -> bool {
x < y
}
proof fn test_choose_succeeds2() {
assert(less_than(3, 7)); // promote i = 3, j = 7 as a witness
let (x, y) = choose|i: int, j: int| less_than(i, j);
assert(x < y);
}
In this example, the SMT solver can prove exists|i: int, j: int| less_than(i, j) because the expression less_than(3, 7) matches the automatically chosen trigger less_than(i, j) with i = 3 and j = 7, so that i = 3, j = 7 serves as a witness.
Proofs about forall and exists
The previous sections emphasized the importance of triggers for forall and exists expressions. Specifically, if you know forall|i| f(i), then the SMT solver will instantiate i by looking at triggers, and if you want to prove exists|i| f(i), then the SMT solver will look at triggers to find a witness i such that f(i) is true. In other words, using a forall expression relies on triggers, and proving an exists expression relies on triggers. We can summarize these cases in the following table:
|        | proving                                 | using                                |
|--------|-----------------------------------------|--------------------------------------|
| forall | usually just works; otherwise assert-by | triggers                             |
| exists | triggers                                | usually just works; otherwise choose |
What about the other two cases, proving a forall expression and using an exists expression? These cases are actually easier to automate and do not rely on triggers. In fact, they often just work automatically, as in the following examples:
spec fn is_distinct(x: int, y: int) -> bool {
x != y
}
spec fn dummy(i: int) -> bool;
proof fn prove_forall()
ensures
forall|i: int, j: int|
#![trigger dummy(i), dummy(j)]
is_distinct(i, j) ==> is_distinct(j, i),
{
// proving the forall just works; the trigger is irrelevant
}
proof fn use_exists(x: int)
requires
exists|i: int| #![trigger dummy(i)] x == i + 1 && is_distinct(i, 5),
{
// using the exists just works; the trigger is irrelevant
assert(x != 6);
}
In these examples, the triggers play no role. To emphasize this, we’ve used a dummy function for the trigger that doesn’t even appear anywhere else in the examples, and the SMT solver still verifies the functions with no difficulty. (Note, though, that if you called one of the functions above, then the caller would have to prove the exists expression or use the forall expression, and the caller would have to deal with triggers.)
If you want some intuition for why the SMT solver doesn’t rely on triggers to verify the code above, you can think of the verification as being similar to the verification of the following code, where the quantifiers are eliminated and the quantified variables are hoisted into the function parameters:
proof fn hoisted_forall(i: int, j: int)
ensures
is_distinct(i, j) ==> is_distinct(j, i),
{
}
proof fn hoisted_exists(x: int, i: int)
requires
x == i + 1 && is_distinct(i, 5),
{
assert(x != 6);
}
Proving forall with assert-by
Sometimes a proof doesn’t “just work” like it does in the simple examples above.
For example, the proof might rely on a lemma that is proved by induction,
which the SMT solver cannot prove completely automatically.
Suppose we have a lemma that proves f(i) for any even i:
spec fn f(i: int) -> bool { ... }
proof fn lemma_even_f(i: int)
requires
is_even(i),
ensures
f(i),
{ ... }
Now suppose we want to prove that f(i) is true for all even i:
proof fn test_even_f()
ensures
forall|i: int| is_even(i) ==> f(i), // FAILS because we don't call the lemma
{
}
The proof above fails because it doesn’t call lemma_even_f. If we try to call lemma_even_f, though, we immediately run into a problem: we need to pass i as an argument to the lemma, but i isn’t in scope:
proof fn test_even_f()
ensures
forall|i: int| is_even(i) ==> f(i),
{
lemma_even_f(i); // ERROR: i is not in scope here
}
To deal with this, Verus supports a special form of assert ... by for proving forall expressions:
proof fn test_even_f()
ensures
forall|i: int| is_even(i) ==> f(i),
{
assert forall|i: int| is_even(i) implies f(i) by {
// First, i is in scope here
// Second, we assume is_even(i) here
lemma_even_f(i);
// Finally, we have to prove f(i) here
}
}
Inside the body of the assert ... by, the variables of the forall are in scope and the left-hand side of the ==> is assumed. This allows the body to call lemma_even_f(i).
Using exists with choose
The example above needed to bring a forall quantifier variable into scope in order to call a lemma. A similar situation can arise for exists quantifier variables. Suppose we have the following lemma to prove f(i):
spec fn g(i: int, j: int) -> bool { ... }
proof fn lemma_g_proves_f(i: int, j: int)
requires
g(i, j),
ensures
f(i),
{ ... }
If we know that there exists some j such that g(i, j) is true, we should be able to call lemma_g_proves_f. However, we run into the problem that j isn’t in scope:
proof fn test_g_proves_f(i: int)
requires
exists|j: int| g(i, j),
ensures
f(i),
{
lemma_g_proves_f(i, j); // ERROR: j is not in scope here
}
In this situation, we can use choose (discussed in the previous section) to extract the value j from the exists expression:
proof fn test_g_proves_f(i: int)
requires
exists|j: int| g(i, j),
ensures
f(i),
{
lemma_g_proves_f(i, choose|j: int| g(i, j));
}
Example: binary search
Let’s see how forall and exists work in a larger example. The following code searches for a value k in a sorted sequence and returns the index r where k resides.
fn binary_search(v: &Vec<u64>, k: u64) -> (r: usize)
requires
forall|i: int, j: int| 0 <= i <= j < v.len() ==> v[i] <= v[j],
exists|i: int| 0 <= i < v.len() && k == v[i],
ensures
r < v.len(),
k == v[r as int],
{
let mut i1: usize = 0;
let mut i2: usize = v.len() - 1;
while i1 != i2
invariant
i2 < v.len(),
exists|i: int| i1 <= i <= i2 && k == v[i],
forall|i: int, j: int| 0 <= i <= j < v.len() ==> v[i] <= v[j],
{
let ix = i1 + (i2 - i1) / 2;
if v[ix] < k {
i1 = ix + 1;
} else {
i2 = ix;
}
}
i1
}
fn main() {
let mut v: Vec<u64> = Vec::new();
v.push(0);
v.push(10);
v.push(20);
v.push(30);
v.push(40);
assert(v[3] == 30); // needed to trigger exists|i: int| ... k == v[i]
let r = binary_search(&v, 30);
assert(r == 3);
}
The precondition exists|i: int| 0 <= i < v.len() && k == v[i] specifies that k is somewhere in the sequence, so that the search is guaranteed to find it. The automatically inferred trigger for this exists expression is v[i]. The main function satisfies this precondition with the witness i = 3, so that 30 == v[3]:
assert(v[3] == 30); // needed to trigger exists|i: int| ... k == v[i]
let r = binary_search(&v, 30);
The search proceeds by keeping two indices i1 and i2 that narrow in on k from both sides, so that the index containing k remains between i1 and i2 throughout the search:
In order for the loop to exit, the loop condition i1 != i2 must be false, which means that i1 and i2 must be equal. In this case, the i in the exists expression above must equal both i1 and i2, so we know k == v[i1], and we can return the result i1.
Proving that the loop invariant is maintained
In each loop iteration, we can assume that the loop invariants hold before the iteration,
and we have to prove that the loop invariants hold after the iteration.
Let’s look in more detail at the proof of the invariant
exists|i: int| i1 <= i <= i2 && k == v[i]
,
focusing on how the SMT solver handles the forall
and exists
quantifiers.
The key steps are:
- Knowing exists|i: int| ... k == v[i] gives us a witness i_witness such that k == v[i_witness].
- The witness i_witness from the current iteration’s exists|i: int| ... serves as the witness for the next iteration’s exists|i: int| ....
- The comparison *v.index(ix) < k tells us whether v[ix] < k or v[ix] >= k.
- The expressions v[i_witness] and v[ix] match the trigger v[i], v[j] in the expression forall|i: int, j: int| ... v[i] <= v[j].
We’ll now walk through these steps in more detail. (Feel free to skip ahead if this is too boring — as the next subsection discusses, the whole point is that the SMT solver takes care of the boring details automatically if we set things up right.)
There are two cases to consider, one where the if
condition *v.index(ix) < k
is true and one
where *v.index(ix) < k
is false.
We’ll just look at the former, where v[ix] < k.
We assume the loop invariant at the beginning of the loop iteration:
exists|i: int| i1 <= i <= i2 && k == v[i]
This tells us that there is some witness i_witness
such that:
i1 <= i_witness <= i2 && k == v[i_witness]
In the case where *v.index(ix) < k
is true, we execute i1 = ix + 1
:
let ix = i1 + (i2 - i1) / 2;
if *v.index(ix) < k {
i1 = ix + 1;
} else {
Since the new value of i1
is ix + 1
,
we’ll need to prove the loop invariant with ix + 1
substituted for i1
:
exists|i: int| ix + 1 <= i <= i2 && k == v[i]
To prove an exists
expression, the SMT solver needs to match the expression’s trigger.
The automatically chosen trigger for this expression is v[i]
,
so the SMT solver looks for expressions of the form v[...]
.
It finds v[i_witness]
from the previous loop invariant (shown above).
It also finds v[ix]
from the call v.index(ix)
in the expression *v.index(ix) < k
.
Based on these, it attempts to prove ix + 1 <= i <= i2 && k == v[i]
with i = i_witness
or i = ix
:
ix + 1 <= i_witness <= i2 && k == v[i_witness]
ix + 1 <= ix <= i2 && k == v[ix]
The i = ix
case is a dead end, because ix + 1 <= ix
is never true.
The i = i_witness
case is more promising.
We already know i_witness <= i2
and k == v[i_witness]
from our assumptions about i_witness
at the beginning of the loop iteration.
We just need to prove ix + 1 <= i_witness
.
We can simplify this to ix < i_witness
.
Proving ix < i_witness
To prove ix < i_witness
, we now turn to the forall
loop invariant:
forall|i: int, j: int| 0 <= i <= j < v.len() ==> v[i] <= v[j],
In order to instantiate this, the SMT solver again relies on triggers.
In this forall
expression, the trigger is v[i], v[j]
,
so again the SMT solver looks for terms of the form v[...]
and finds v[i_witness]
and v[ix]
.
There are four different possible assignments of i_witness
and ix
to i
and j
.
0 <= i_witness <= i_witness < v.len() ==> v[i_witness] <= v[i_witness]
0 <= i_witness <= ix < v.len() ==> v[i_witness] <= v[ix]
0 <= ix <= i_witness < v.len() ==> v[ix] <= v[i_witness]
0 <= ix <= ix < v.len() ==> v[ix] <= v[ix]
Out of these, the second one is most useful:
0 <= i_witness <= ix < v.len() ==> v[i_witness] <= v[ix]
We already know k == v[i_witness]
, so this becomes:
0 <= i_witness <= ix < v.len() ==> k <= v[ix]
The right-hand side of the ==>
says k <= v[ix]
,
which contradicts our assumption that v[ix] < k
in the case where *v.index(ix) < k
.
This means that the left-hand side of the ==>
must be false:
!(0 <= i_witness <= ix < v.len())
The SMT solver knows that 0 <= i_witness
and ix < v.len()
,
so it narrows this down to:
!(i_witness <= ix)
This tells us that ix < i_witness
, which is what we want.
Helping the automation succeed
As seen in the previous section, proving the loop invariant requires a long chain of reasoning. Fortunately, the SMT solver performs all of these steps automatically. In fact, this is a particularly fortunate example, because Verus automatically chooses the triggers as well, and these triggers happen to be just what the SMT solver needs to complete the proof.
In general, though, how we express the preconditions, postconditions, and loop invariants has a big influence on whether Verus and the SMT solver succeed automatically. Suppose, for example, that we had written the sortedness condition (in the precondition and loop invariant) as:
forall|i: int| 0 <= i < v.len() - 1 ==> #[trigger] v[i] <= v[i + 1]
instead of:
forall|i: int, j: int| 0 <= i <= j < v.len() ==> v[i] <= v[j]
As discussed in a previous section,
the trigger v[i]
in combination with v[i] <= v[i + 1]
leads to a matching loop,
which can send the SMT solver into an infinite loop.
This is, in fact, exactly what happens:
error: while loop: Resource limit (rlimit) exceeded; consider rerunning with --profile for more details
|
| / while i1 != i2
| | invariant
| | i2 < v.len(),
| | exists|i: int| i1 <= i <= i2 && k == v[i],
|
| | }
| | }
| |_____^
Even if the SMT solver had avoided the infinite loop, though,
it’s hard to see how it could have succeeded automatically.
As discussed above, a crucial step involves instantiating
i = i_witness
and j = ix
to learn something about v[i_witness] <= v[ix]
.
This simply isn’t a possible instantiation when there’s only one variable i
in the forall
expression.
Learning something about v[i_witness] <= v[ix]
would require chaining together
an arbitrarily long sequence of v[i] <= v[i + 1]
steps to get from
i_witness
to i_witness + 1
to i_witness + 2
all the way to ix
.
This would require a separate proof by induction.
Intuitively, the expression v[i] <= v[j]
is better suited than v[i] <= v[i + 1]
to an algorithm like binary search that takes large steps from one index to another,
because i
and j
can be arbitrarily far apart,
whereas i
and i + 1
are only one element apart.
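The difference between the two formulations can also be seen concretely. The following plain-Rust sketch (the sorted_pairwise and sorted_adjacent helpers are ours, not Verus definitions) evaluates both at run time; they agree on every slice, but only the pairwise form directly relates two far-apart indices:

```rust
// Plain-Rust sketch of the two sortedness formulations. They are
// logically equivalent, but only the pairwise form directly relates
// two far-apart indices such as i_witness and ix.
fn sorted_pairwise(v: &[u64]) -> bool {
    // mirrors: forall|i, j| 0 <= i <= j < v.len() ==> v[i] <= v[j]
    (0..v.len()).all(|i| (i..v.len()).all(|j| v[i] <= v[j]))
}

fn sorted_adjacent(v: &[u64]) -> bool {
    // mirrors: forall|i| 0 <= i < v.len() - 1 ==> v[i] <= v[i + 1]
    (1..v.len()).all(|i| v[i - 1] <= v[i])
}

fn main() {
    let sorted = [0u64, 10, 20, 30, 40];
    let unsorted = [0u64, 30, 20];
    assert!(sorted_pairwise(&sorted) && sorted_adjacent(&sorted));
    assert!(!sorted_pairwise(&unsorted) && !sorted_adjacent(&unsorted));
}
```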
When the SMT automation fails, it’s often tempting to immediately start adding assert
s,
lemmas, proofs by induction, etc., until the proof succeeds.
Given enough manual effort, we could probably finish a proof of binary search with the problematic
v[i] <= v[i + 1]
definition of sortedness.
But this would be a mistake;
it’s better to structure the definitions in a way that helps the automation succeed
without so much manual effort.
If you find yourself writing a long manual proof,
it’s worth stepping back and figuring out why the automation is failing;
maybe a change of definitions can fix the failure in the automation.
After all, if your car breaks down, it’s usually better to fix the car than to push it.
Adding Ambient Facts to the Proof Environment with broadcast
In a typical Verus project, a developer might prove a fact (e.g., that reversing a sequence preserves its length) in a proof function, e.g.,
pub proof fn seq_reverse_len<A>(s: Seq<A>)
ensures
reverse(s).len() == s.len(),
{
...
}
To make use of this fact, the developer must explicitly invoke the proof function, e.g.,
fn example(s: Seq<bool>) {
let t = reverse(s);
// assert(t.len() == s.len()); // FAILS
seq_reverse_len(s); // Adds the proof's fact to the proof environment
assert(t.len() == s.len()); // SUCCEEDS
}
However, in some cases, a proof fact is so useful that a developer always wants it to be in scope, without manually invoking the corresponding proof. For example, the fact that an empty sequence’s length is zero is so “obvious” that most programmers will expect Verus to always know it. This feature should be used with caution, however, as every extra ambient fact slows the prover’s overall performance.
Suppose that after considering the impact on the solver’s performance, the
programmer decides to make the above fact about reverse
ambient. To do so,
they can add the broadcast
modifier in the
definition of seq_reverse_len
: pub broadcast proof fn seq_reverse_len<A>(s: Seq<A>)
.
The effect is to introduce the following
quantified fact to the proof environment:
forall |s| reverse(s).len() == s.len()
Because this introduces a quantifier, Verus will typically ask you to
explicitly choose a trigger, e.g., by adding a #[trigger]
annotation.
Hence, the final version of our example might look like this:
pub broadcast proof fn seq_reverse_len<A>(s: Seq<A>)
ensures
#[trigger] reverse(s).len() == s.len(),
{
...
}
To bring this ambient lemma into scope, for a specific proof, or for an entire
module, you can use broadcast use seq_reverse_len;
.
Some of these broadcast lemmas are available in the Verus standard library vstd
,
some as part of broadcast “groups”, which combine a number of properties into a single
group name, which can be brought into scope with broadcast use broadcast_group_name;
.
We are working on extending the discoverability of these groups in the standard library
documentation: they currently appear as regular functions.
SMT solving and automation
Sometimes an assertion will fail even though it’s true. At a high level, Verus
works by generating formulas called “verification conditions” from a program’s
assertions and specifications (requires
and ensures
clauses); if these
verification conditions are always true, then all of the assertions and
specifications hold. The verification conditions are checked by an SMT solver
(Z3), and therefore Verus is limited by Z3’s ability to prove generated
verification conditions.
This section walks through the reasons why a proof might fail.
The first reason why a proof might fail is that the statement is wrong! If there is a bug in a specification or assertion, then we hope that Z3 will not manage to prove it. We won’t talk too much about this case in this document, but it’s important to keep this in mind when debugging proofs.
The core reason for verification failures is that proving the verification conditions from Verus is an undecidable task: there is no algorithm that can prove general formulas true. In practice Z3 is good at proving even complex formulas are true, but there are some features that lead to inconclusive verification results.
Quantifiers: Proving theorems with quantifiers (exists
and forall
) is
in general undecidable. For Verus, we rely on Z3’s pattern-based instantiation
of quantifiers (“triggers”) to use and prove formulas with quantifiers. See the
section on forall and triggers for more details.
Opaque and closed functions: Verification conditions by default hide the bodies of opaque and closed functions; revealing those bodies might make verification succeed, but Verus intentionally leaves this to the user to improve performance and allow hiding where desired.
Inductive invariants: Reasoning about recursion (loops, recursive lemmas) requires an inductive invariant, which Z3 cannot in general come up with.
Extensional equality assertions: If a theorem requires extensional equality
(eg, between sequences, maps, or spec functions), this typically requires
additional assertions in the proof. The key challenge is that there are many
possible sequence expressions (for example) in a program that Z3 could attempt
to prove are equal. For performance reasons Z3 cannot attempt to prove all
pairs of expressions equal, both because there are too many (including the
infinitely many not in the program at all) and because each proof involves
quantifiers and is reasonably expensive. The result is that a proof may start
working if you add an equality assertion: the assertion explicitly asks Z3 to
prove and use an equality.
See extensional equality for how to use the
extensional equality operators =~=
and =~~=
.
Incomplete axioms: The standard library includes datatypes like Map
and
Seq
that are implemented using axioms that describe their expected
properties. These axioms might be incomplete; there may be a property that you
intuitively expect a map or sequence to satisfy but which isn’t implied by the
axioms, or which is implied but requires a proof by induction.
If you think this is the case, please open an issue or a pull request adding
the missing axiom.
Slow proofs: Z3 may be able to find a proof but it would simply take too long. We limit how long Z3 runs (using its resource limit or “rlimit” feature so that this limit is independent of how fast the computer is), and consider it a failure if Z3 runs into this limit. The philosophy of Verus is that it’s better to improve solver performance than rely on a slow proof. Improving SMT performance talks more about what you can do to diagnose and fix poor verification performance.
Integers: Nonlinear Arithmetic
Generally speaking, Verus’s default solver (Z3) is excellent at handling linear integer arithmetic.
Linear arithmetic captures equalities, inequalities, addition, subtraction, and multiplication and division by constants.
This means it’s great at handling expressions like 4 * x + 3 * y - z <= 20
. However, it is less capable
when nonlinear expressions are involved, like x * y
(when neither x
nor y
can be substituted for a constant)
or x / y
(when y
cannot be substituted for a constant).
That means many common axioms are inaccessible in the default mode, including but not limited to:
x * y == y * x
x * (y * z) == (x * y) * z
x * (a + b) == x * a + x * b
0 <= x <= y && 0 <= z <= w ==> x * z <= y * w
The reason for this limitation is that Verus intentionally disables theories of nonlinear arithmetic in its default prover mode.
However, it is possible to opt-in to nonlinear reasoning by invoking a specialized prover mode. There are two prover modes related to nonlinear arithmetic.
- nonlinear_arith: enable Z3’s nonlinear theory of arithmetic.
- integer_ring: enable a decidable, equational theory of rings.
The first is general purpose, but unfortunately somewhat unpredictable. (This is why it is turned off by default.) The second implements a decidable procedure for a specific class of problems. Invoking either prover mode requires an understanding of how to minimize prover context. We describe each of these modes in more detail below.
If neither mode works for your proof, you can also manually invoke a lemma from Verus’s arithmetic library, which supplies a large collection of verified facts about how nonlinear operations behave. For example, the inaccessible properties listed above can be proven by invoking
lemma_mul_is_commutative
lemma_mul_is_associative
lemma_mul_is_distributive_add
lemma_mul_upper_bound
respectively. If your proof involves using multiple such lemmas, you may want to use a structured proof to make the proof more readable and easier to maintain.
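As a concrete, run-time (not proof-level) sanity check of the four properties listed above, here is a plain-Rust sketch; the helper names are ours and only loosely mirror the vstd lemmas:

```rust
// Concrete sanity check (plain Rust, not a proof) of the four facts
// listed above, evaluated on sample values.
fn comm(x: i64, y: i64) -> bool { x * y == y * x }
fn assoc(x: i64, y: i64, z: i64) -> bool { x * (y * z) == (x * y) * z }
fn distrib(x: i64, a: i64, b: i64) -> bool { x * (a + b) == x * a + x * b }
fn upper_bound(x: i64, y: i64, z: i64, w: i64) -> bool {
    // 0 <= x <= y && 0 <= z <= w ==> x * z <= y * w
    !(0 <= x && x <= y && 0 <= z && z <= w) || x * z <= y * w
}

fn main() {
    assert!(comm(3, 5));
    assert!(assoc(3, 5, 7));
    assert!(distrib(3, 2, 4));
    assert!(upper_bound(3, 5, 7, 9));
}
```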
1. Invoking a specialized solver: nonlinear_arith
A specialized solver is invoked with the by
keyword, which can be applied to either
an assert
statement or a proof fn
.
Here, we’ll see how it works using the nonlinear_arith
solver,
which enables Z3’s theory of nonlinear arithmetic for integers.
Inline Proofs with assert(...) by(nonlinear_arith)
To prove a nonlinear property in the midst of a larger function,
you can write assert(...) by(nonlinear_arith)
. This creates
a separate Z3 query just to prove the asserted property,
and for this query, Z3 runs with its nonlinear heuristics enabled.
The query does NOT include ambient facts (e.g., knowledge that stems
from the surrounding function’s requires
clause or from preceding variable assignments)
other than that which is:
- inferred from a variable’s type (e.g., the allowed ranges of a u64 or nat), or
- supplied explicitly.
To supply context explicitly, you can use a requires
clause, as shown below:
proof fn bound_check(x: u32, y: u32, z: u32)
requires
x <= 8,
y <= 8,
{
assert(x * y <= 100) by (nonlinear_arith)
requires
x <= 10,
y <= 10;
assert(x * y <= 1000);
}
Let’s go through this example, one step at a time:
- Verus uses its normal solver to prove the assert’s requires clause, x <= 10 && y <= 10. This follows from the precondition of the function.
- Verus uses Z3’s nonlinear solver to prove x <= 10 && y <= 10 ==> x * y <= 100. This would not be possible with the normal solver, but it is possible for the nonlinear solver.
- The fact x * y <= 100 is now provided in the proof context for later asserts.
- Verus uses its normal solver to prove x * y <= 1000, which follows from x * y <= 100.
Furthermore, if you attach a proof block, as in assert(...) by(nonlinear_arith) { ... }, then everything in the block will opt in to the nonlinear solver.
Reusable proofs with proof fn ... by(nonlinear_arith)
You can also use by(nonlinear_arith)
in a proof function’s signature. By including by(nonlinear_arith)
, the query for this function runs with nonlinear arithmetic reasoning enabled. For example:
proof fn bound_check2(x: u32, y: u32, z: u32) by (nonlinear_arith)
requires
x <= 8,
y <= 8,
ensures
x * y <= 64
{ }
When a specialized solver is invoked on a proof fn
like this, it is used to prove the
lemma. When the lemma is then invoked from elsewhere, Verus (as usual) proves that the
precondition is met; for this it uses its normal solver.
2. Proving Ring-based Properties: integer_ring
While general nonlinear formulas cannot be solved consistently, certain
sub-classes of nonlinear formulas can be: for example, nonlinear formulas that
consist of a series of congruence relations (i.e., equalities modulo some
divisor n
). As a simple example, we might like to show that a % n == b % n ==> (a * c) % n == (b * c) % n
.
Verus offers a deterministic proof strategy to discharge such obligations.
The strategy is called integer_ring
.
[Note: at present, it is only possible to invoke integer_ring
using the
proof fn ... by(integer_ring)
style; inline asserts are not supported.]
Verus will then discharge the proof obligation using a dedicated algebra solver called Singular. As hinted at by the annotation, this proof technique is only complete (i.e., guaranteed to succeed) for properties that are true for all rings. Formulas that rely specifically on properties of the integers may not be solved successfully.
Using this proof technique requires a bit of additional configuration of your Verus installation. See installing and setting up Singular.
Details/Limitations
- This can be used only with int parameters.
- Formulas that involve inequalities are not supported.
- Division is not supported.
- Function calls in the formulas are treated as uninterpreted functions. If a function definition is important for the proof, you should unfold the definition of the function in the proof function’s requires clause.
- When using an integer_ring lemma, the divisor of a modulus operator (%) must not be zero. If a divisor can be zero in the ensures clause of the integer_ring lemma, the facts in the ensures clause will not be available at the call site.
To understand what integer_ring
can or cannot do, it is important to understand how it
handles the modulus operator, %
. Since integer_ring
does not understand inequalities,
it cannot perform reasoning that requires that 0 <= (a % b) < b
.
As a result, Singular’s results might be confusing if you think of %
primarily
as the operator you’re familiar with from programming.
For example, suppose you use a % b == x
as a precondition.
Encoded in Singular, this will become a % b == x % b
, or in more traditional “mathematical”
language, a ≡ x (mod b)
. This does not imply that x
is in the range [0, b)
,
it only implies that a
and x
are in the same equivalence class mod b.
In other words, a % b == x
implies a ≡ x (mod b)
, but not vice versa.
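This distinction is easy to see with concrete numbers. The following plain-Rust sketch (the congruent helper is ours) shows that a % b == x implies (a - x) % b == 0, but not conversely:

```rust
// Plain-Rust illustration: a % b == x implies a is congruent to x
// mod b, but not vice versa.
fn congruent(a: i64, x: i64, b: i64) -> bool {
    (a - x) % b == 0
}

fn main() {
    let (a, b) = (7i64, 3i64);
    assert!(a % b == 1);
    assert!(congruent(a, 1, b)); // forward direction holds
    assert!(congruent(a, 4, b)); // 4 is congruent to 7 mod 3 ...
    assert!(a % b != 4);         // ... yet a % b != 4
}
```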
For the same reason, you cannot ask the integer_ring
solver to prove a postcondition
of the form a % b == x
, unless x
is 0. The integer_ring
solver can prove
that a ≡ x (mod b)
, equivalently (a - x) % b == 0
, but this does not imply
that a % b == x
.
Let’s look at a specific example to understand the limitation.
proof fn foo(a: int, b: int, c: int, d: int, x: int, y: int) by(integer_ring)
requires
a == c,
b == d,
a % b == x,
c % d == y
ensures
x == y,
{
}
This theorem statement appears to be trivial, and indeed, Verus would solve it easily
using its default proof strategy.
However, integer_ring
will not solve it.
We can inspect the Singular query to understand why:
(See here for how to log these.)
ring ring_R=integer, (a, b, c, d, x, y, tmp_0, tmp_1, tmp_2), dp;
ideal ideal_I =
a - c,
b - d,
(a - (b * tmp_0)) - x,
(c - (d * tmp_1)) - y;
ideal ideal_G = groebner(ideal_I);
reduce(x - y, ideal_G);
quit;
We can see here that a % b
is translated to a - b * tmp_0
,
while c % d
is translated to c - d * tmp_1
.
Again, since there is no constraint that a - b * tmp_0
or c - d * tmp_1
is bounded, it is not possible to conclude
that a - b * tmp_0 == c - d * tmp_1
after this simplification has taken place.
3. Combining integer_ring and nonlinear_arith
As explained above, the integer_ring feature has several limitations; it is not possible to prove an arbitrary nonlinear property with the integer_ring feature alone. Instead, it is a common pattern to have a by(nonlinear_arith) function as the main lemma for the desired property, and to use an integer_ring lemma as a helper lemma.
To work around the lack of support for inequalities and division, you can often write a helper proof discharged with integer_ring
and use it to prove properties that are not directly supported by integer_ring
. Furthermore, you can also add additional variables to the formulas. For example, to work around division, one can introduce c
where b = a * c
, instead of b/a
.
Example 1: integer_ring
as a helper lemma to provide facts on modular arithmetic
In the lemma_mod_difference_equal
function below, we have four inequalities inside the requires clauses, which cannot be encoded into integer_ring
. In the ensures clause, we want to prove y % d - x % d == y - x
. The helper lemma lemma_mod_difference_equal_helper
simply provides that y % d - x % d
is equal to (y - x)
modulo d
. The rest of the proof is done by by(nonlinear_arith)
.
pub proof fn lemma_mod_difference_equal_helper(x: int, y:int, d:int, small_x:int, small_y:int, tmp1:int, tmp2:int) by(integer_ring)
requires
small_x == x % d,
small_y == y % d,
tmp1 == (small_y - small_x) % d,
tmp2 == (y - x) % d,
ensures
(tmp1 - tmp2) % d == 0
{}
pub proof fn lemma_mod_difference_equal(x: int, y: int, d: int) by(nonlinear_arith)
requires
d > 0,
x <= y,
x % d <= y % d,
y - x < d
ensures
y % d - x % d == y - x
{
let small_x = x % d;
let small_y = y % d;
let tmp1 = (small_y - small_x) % d;
let tmp2 = (y - x) % d;
lemma_mod_difference_equal_helper(x,y,d, small_x, small_y, tmp1, tmp2);
}
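As a run-time spot check (plain Rust, not a Verus proof) that the statement of lemma_mod_difference_equal is plausible, the check helper below (ours) evaluates the preconditions and the conclusion on sample values:

```rust
// Run-time spot check of lemma_mod_difference_equal's statement:
// under the preconditions, y % d - x % d equals y - x.
fn check(x: i64, y: i64, d: i64) -> bool {
    let pre = d > 0 && x <= y && x % d <= y % d && y - x < d;
    pre && (y % d - x % d == y - x)
}

fn main() {
    assert!(check(10, 13, 5)); // 0 <= 3 and 13 - 10 == 3 - 0
    assert!(check(7, 7, 4));   // degenerate case x == y
}
```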
In the lemma_mod_between
function below, we want to prove that x % d <= z % d < y % d
. However, integer_ring
only supports equalities, so we cannot prove lemma_mod_between
directly. Instead, we provide facts that can help assist the proof. The helper lemma provides 1) x % d - y % d == x - y (mod d)
and 2) y % d - z % d == y - z (mod d)
. The rest of the proof is done via by(nonlinear_arith)
.
pub proof fn lemma_mod_between_helper(x: int, y: int, d: int, small_x:int, small_y:int, tmp1:int) by(integer_ring)
requires
small_x == x % d,
small_y == y % d,
tmp1 == (small_x - small_y) % d,
ensures
(tmp1 - (x-y)) % d == 0
{}
// note that below two facts are from the helper function, and the rest are done by `by(nonlinear_arith)`.
// x % d - y % d == x - y (mod d)
// y % d - z % d == y - z (mod d)
pub proof fn lemma_mod_between(d: int, x: int, y: int, z: int) by(nonlinear_arith)
requires
d > 0,
x % d < y % d,
y - x <= d,
x <= z < y
ensures
x % d <= z % d < y % d
{
let small_x = x % d;
let small_y = y % d;
let small_z = z % d;
let tmp1 = (small_x - small_z) % d;
lemma_mod_between_helper(x,z,d, small_x, small_z, tmp1);
let tmp2 = (small_z - small_y) % d;
lemma_mod_between_helper(z,y,d, small_z, small_y, tmp2);
}
Example 2: Proving properties on bounded integers with the help of integer_ring
Since integer_ring
proofs only support int
, you need to include explicit bounds when you want to prove properties about bounded integers. For example, as shown below, in order to use the proof lemma_mod_after_mul
on u32
s, lemma_mod_after_mul_u32
must ensure that all arguments are within the proper bounds before passing them to lemma_mod_after_mul
.
If a necessary bound (e.g., m > 0
) is not included, Verus will fail to verify the proof.
proof fn lemma_mod_after_mul(x: int, y: int, z: int, m: int) by (integer_ring)
requires (x-y) % m == 0
ensures (x*z - y*z) % m == 0
{}
proof fn lemma_mod_after_mul_u32(x: u32, y: u32 , z: u32, m: u32)
requires
m > 0,
(x-y) % (m as int) == 0,
x >= y,
x <= 0xffff,
y <= 0xffff,
z <= 0xffff,
m <= 0xffff,
ensures (x*z - y*z) % (m as int) == 0
{
lemma_mod_after_mul(x as int, y as int, z as int, m as int);
// rest of proof body omitted for space
}
The desired property for nat
can be proved similarly.
The next example is similar, but note that we introduce several additional variables (ac, bc, and abc) to help with the integer_ring proof.
pub proof fn multiple_offsed_mod_gt_0_helper(a: int, b: int, c: int, ac: int, bc: int, abc: int) by (integer_ring)
requires
ac == a % c,
bc == b % c,
abc == (a - b) % c,
ensures (ac - bc - abc) % c == 0
{}
pub proof fn multiple_offsed_mod_gt_0(a: nat, b: nat, c: nat) by (nonlinear_arith)
requires
a > b,
c > 0,
b % c == 0,
a % c > 0,
ensures (a - b) % (c as int) > 0
{
multiple_offsed_mod_gt_0_helper(
a as int,
b as int,
c as int,
(a % c) as int,
(b % c) as int,
((a - b) % (c as int)) as int
);
}
More integer_ring
examples can be found in this folder, and this testcase file.
Examining the encoding
Singular queries will be logged to the directory specified with --log-dir
(which defaults to .verus-log
) in the .air
file for the module containing the file.
Bit vectors and bitwise operations
In its default prover mode, Verus treats bitwise operations like &
, |
, ^
, <<
and >>
as uninterpreted functions.
Even basic facts like x & y == y & x
are not exported by Verus’s default solver mode.
To handle these situations, Verus provides the specialized solver mode bit_vector
.
This solver is great for properties about bitwise operators, and it can also handle
some bounded integer arithmetic, though for this, its efficacy varies.
Invoking the bit_vector
prover mode
The bit_vector
prover mode can be invoked
similarly to nonlinear_arith
,
with by(bit_vector)
either on an assert
or a proof fn
.
For example, we can prove short, context-free bit-manipulation properties:
fn test_passes(b: u32) {
assert(b & 7 == b % 8) by (bit_vector);
assert(b & 0xff < 0x100) by (bit_vector);
}
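These two properties hold of ordinary machine arithmetic as well. A plain-Rust spot check (the low_bits_match helper is ours) over sample values, which is of course no substitute for the solver's proof over all u32 values:

```rust
// Plain-Rust spot check (not a proof) of the two asserted properties.
fn low_bits_match(b: u32) -> bool {
    b & 7 == b % 8 && b & 0xff < 0x100
}

fn main() {
    for b in 0u32..=1000 {
        assert!(low_bits_match(b));
    }
    assert!(low_bits_match(u32::MAX));
}
```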
Again, as with nonlinear_arith, assertions expressed via assert(...) by(bit_vector)
do not include any ambient facts from the surrounding context
(e.g., from the surrounding function’s requires
clause or from previous variable assignments). For example, the following example will fail:
fn test_fails(x: u32, y: u32)
requires x == y
{
assert(x & 3 == y & 3) by(bit_vector); // Fails
}
But context can be imported explicitly with a requires
clause:
fn test_success(x: u32, y: u32)
requires
x == y,
{
assert(x & 3 == y & 3) by (bit_vector)
requires
x == y,
; // now x == y is available for the bit_vector proof
}
And by(bit_vector)
is also supported on proof functions:
proof fn de_morgan_auto()
by (bit_vector)
ensures
forall|a: u32, b: u32| #[trigger] (!(a & b)) == !a | !b,
forall|a: u32, b: u32| #[trigger] (!(a | b)) == !a & !b,
{
}
Again, this will use the bit_vector
solver to prove the lemma, but all calls to the lemma
will use the normal solver to prove the precondition.
How the bit_vector
solver works and what it’s good at
The bitvector solver uses a different SMT encoding, though one where all arithmetic operations
have the same semantic meaning.
Specifically, it encodes all integers into the Z3 bv
type and encodes arithmetic via the built-in bit-vector operations.
Internally, the SMT solver uses a technique called “bit blasting”.
In order to implement this encoding, Verus needs to choose an appropriate bit width to represent
any given integer. For symbolic, fixed-width integer values (e.g., u64
) it can just choose
the appropriate bitwidth (e.g., 64 bits). For the results of arithmetic operations,
Verus chooses an appropriate bitwidth automatically.
For this reason, however, the bitvector solver cannot reason about symbolic values of the unbounded int type.
The bitvector solver is ideal for proofs about bitwise operations
(&
, |
, ^
, <<
and >>
).
However, it is also decent at arithmetic (+
, -
, *
, /
, %
) over bounded integers.
Examples and tips
Functions vs macros
The bit-vector solver doesn’t allow arbitrary functions. However, you can use macros. This is useful when certain operations need a common shorthand, like “get the ith bit of an integer”.
macro_rules! get_bit_macro {
($a:expr, $b:expr) => {{
(0x1u32 & ($a >> $b)) == 1
}};
}
macro_rules! get_bit {
($($a:tt)*) => {
verus_proof_macro_exprs!(get_bit_macro!($($a)*))
}
}
Overflow checking
Though the bit_vector
solver does not handle symbolic int
values, it does support many
arithmetic operations that return int
values.
This makes it possible to write conditions about overflow:
proof fn test_overflow_check(a: u8, b: u8) {
// `a` and `b` are both `u8` integers, but we can test if their addition
// overflows a `u8` by simply writing `a + b < 256`.
assert((a & b) == 0 ==> (a | b) == (a + b) && (a + b) < 256) by(bit_vector);
}
Integer wrapping and truncation
The bit_vector
solver is one of the easiest ways to reason about truncation, which can be naturally expressed through bit operations.
proof fn test_truncation(a: u64) {
assert(a as u32 == a & 0xffff_ffff) by(bit_vector);
// You can write an identity with modulus as well:
assert(a as u32 == a % 0x1_0000_0000) by(bit_vector);
}
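Both truncation identities can be spot-checked in plain Rust (the truncate_ok helper is ours):

```rust
// Plain-Rust spot check of the truncation identities:
// casting u64 -> u32 keeps the low 32 bits (mask with 0xffff_ffff),
// which is the same as reducing mod 2^32.
fn truncate_ok(a: u64) -> bool {
    a as u32 == (a & 0xffff_ffff) as u32
        && a as u32 == (a % 0x1_0000_0000) as u32
}

fn main() {
    for &a in &[0u64, 1, 0xffff_ffff, 0x1_0000_0000, 0xdead_beef_cafe_f00d] {
        assert!(truncate_ok(a));
    }
}
```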
You may also find it convenient to use add
, sub
, and mul
, which (unlike +
, -
, and *
) automatically truncate.
proof fn test_truncating_add(a: u64, b: u64) {
assert(add(a, b) == (a + b) as u64) by(bit_vector);
}
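Plain Rust's wrapping_add behaves like the truncating add described above: it computes the sum modulo 2^64. A small sketch (the truncating_add wrapper is ours):

```rust
// Plain-Rust analogue of the truncating add: wrapping_add computes the
// sum modulo 2^64, matching `(a + b) as u64` in the spec above.
fn truncating_add(a: u64, b: u64) -> u64 {
    a.wrapping_add(b)
}

fn main() {
    assert!(truncating_add(u64::MAX, 5) == 4); // 2^64 + 4 wraps to 4
    let exact = (u64::MAX as u128) + 5;
    assert!(truncating_add(u64::MAX, 5) as u128 == exact % (1u128 << 64));
}
```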
Working with usize
and isize
If you use variables of type usize
or isize
, the bitvector solver (by default) assumes they
might be either 32-bit or 64-bit, which affects the encoding.
In that case, the solver generates two different queries and verifies both.
However, the solver can also be configured to assume a particular platform size.
Bit-width dependence and independence
For many operations, their results are independent of the input bit-widths.
This is true of &
, |
, ^
, and >>
.
In fact, we don’t even need the bit_vector solver to prove this; the normal solver mode is “aware”
of this fact as well.
proof fn test_xor_u32_vs_u64(x: u32, y: u32) {
assert((x as u64) ^ (y as u64) == (x ^ y) as u64) by(bit_vector);
// XOR operation is independent of bitwidth so we don't even
// need the `bit_vector` solver to do this:
assert((x as u64) ^ (y as u64) == (x ^ y) as u64);
}
However, this is not true of left shift, <<
.
With left shift, you always need to be careful of the bitwidth of the left operand.
proof fn test_left_shift_u32_vs_u64(y: u32) {
assert(1u32 << y == 1u64 << y); // FAILS (in either mode) because it's not true
}
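The width-dependence of left shift can be seen in ordinary Rust as well. This sketch (plain Rust, not Verus; the shifts helper is ours) uses wrapping_shl, which reduces the shift amount modulo the type's bitwidth, mirroring the width-sensitivity the solver must model:

```rust
// Plain-Rust illustration of why left shift depends on the bitwidth
// of the left operand: wrapping_shl takes the shift amount modulo the
// bitwidth of the type being shifted.
fn shifts(y: u32) -> (u32, u64) {
    (1u32.wrapping_shl(y), 1u64.wrapping_shl(y))
}

fn main() {
    let (small, large) = shifts(32);
    assert!(small == 1);              // 32-bit: shift taken mod 32, so 1 << 0
    assert!(large == 0x1_0000_0000);  // 64-bit: a genuine 1 << 32
    assert!((small as u64) != large); // the two widths disagree
}
```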
More examples
Some larger examples to browse:
Equality via extensionality
In the specification libraries section,
we introduced the extensional equality operator =~=
to check equivalence for Seq
, Set
, and Map
.
Suppose that a struct
or enum
datatype has a field containing Seq
, Set
, and Map
,
and suppose that we’d like to prove that two values of the datatype are equal.
We could do this by using =~=
on each field individually:
struct Foo {
a: Seq<int>,
b: Set<int>,
}
proof fn ext_equal_struct() {
let f1 = Foo { a: seq![1, 2, 3], b: set!{4, 5, 6} };
let f2 = Foo { a: seq![1, 2].push(3), b: set!{5, 6}.insert(4) };
// assert(f1 == f2); // FAILS -- need to use =~= first
assert(f1.a =~= f2.a); // succeeds
assert(f1.b =~= f2.b); // succeeds
assert(f1 == f2); // succeeds, now that we've used =~= on .a and .b
}
However, it’s rather painful to use =~=
on each field every time to check for equivalence.
To help with this, Verus supports the #[verifier::ext_equal]
attribute
to mark datatypes that need extensionality on Seq
, Set
, Map
, Multiset
, spec_fn
fields or fields of other #[verifier::ext_equal]
datatypes. For example:
#[verifier::ext_equal] // necessary for invoking =~= on the struct
struct Foo {
a: Seq<int>,
b: Set<int>,
}
proof fn ext_equal_struct() {
let f1 = Foo { a: seq![1, 2, 3], b: set!{4, 5, 6} };
let f2 = Foo { a: seq![1, 2].push(3), b: set!{5, 6}.insert(4) };
assert(f1.a =~= f2.a); // succeeds
// assert(f1 == f2); // FAILS
assert(f1 =~= f2); // succeeds
}
(Note: adding #[verifier::ext_equal]
does not change the meaning of ==
;
it just makes it more convenient to use =~=
to prove ==
on datatypes.)
Collection datatypes like sequences and sets can contain other collection datatypes as elements
(for example, a sequence of sequences, or set of sequences).
The =~=
operator only applies extensionality to the top-level collection,
not to the nested elements of the collection.
To also apply extensionality to the elements,
Verus provides a “deep” extensional equality operator =~~=
that handles arbitrary nesting of collections, spec_fn
, and datatypes.
For example:
proof fn ext_equal_nested() {
let inner: Set<int> = set!{1, 2, 3};
let s1: Seq<Set<int>> = seq![inner];
let s2 = s1.update(0, s1[0].insert(1));
let s3 = s1.update(0, s1[0].insert(2).insert(3));
// assert(s2 == s3); // FAILS
// assert(s2 =~= s3); // FAILS
assert(s2 =~~= s3); // succeeds
let s4: Seq<Seq<Set<int>>> = seq![s1];
let s5: Seq<Seq<Set<int>>> = seq![s2];
assert(s4 =~~= s5); // succeeds
}
The same applies to spec_fn
, as in:
#[verifier::ext_equal] // necessary for invoking =~= on the struct
struct Bar {
a: spec_fn(int) -> int,
}
proof fn ext_equal_fnspec(n: int) {
// basic case
let f1 = (|i: int| i + 1);
let f2 = (|i: int| 1 + i);
// assert(f1 == f2); // FAILS
assert(f1 =~= f2); // succeeds
// struct case
let b1 = Bar { a: |i: int| if i == 1 { i } else { 1 } };
let b2 = Bar { a: |i: int| 1int };
// assert(b1 == b2); // FAILS
assert(b1 =~= b2); // succeeds
// nested case
let i1 = (|i: int| i + 2);
let i2 = (|i: int| 2 + i);
let n1: Seq<spec_fn(int) -> int> = seq![i1];
let n2: Seq<spec_fn(int) -> int> = seq![i2];
// assert(n1 =~= n2); // FAILS
assert(n1 =~~= n2); // succeeds
}
Managing proof performance and why it’s critical
Sometimes your proof succeeds, but it takes too long. It’s tempting to simply tolerate the longer verification time and move on. However, we urge you to take the time to improve the verification performance. Slow verification performance typically has an underlying cause. Diagnosing and fixing the cause is much easier to do as the problems arise; waiting until you have multiple performance problems compounds the challenges of diagnosis and repair. Plus, if the proof later breaks, you’ll appreciate having a short code-prove development cycle. Keeping verification times short also makes it easier to check for regressions.
This chapter describes various ways to measure the performance of your proofs and steps you can take to improve it.
Measuring verification performance
To see a more detailed breakdown of where Verus is spending time, you can pass
--time
on the command line. For even more details, try --time-expanded
.
For a machine-readable output, add --output-json
. These flags will also
report on the SMT resources (rlimit) used. SMT resources are an advanced topic;
they give a very rough estimate of how hard the SMT solver worked on the provided
query (or queries).
See verus --help
for more information about these options.
Quantifier Profiling
Sometimes the verification of a Verus function will time out, meaning that the solver couldn’t determine whether all of the proof obligations have been satisfied. Or verification might succeed but take longer than we would like. One common cause for both of these phenomena is quantifiers. If quantifiers (and their associated triggers) are written too liberally (i.e., they trigger too often), then the SMT solver may generate too many facts to sort through efficiently. To determine if this is the case for your Verus code, you can use the built-in quantifier profiler.
As a concrete example, suppose we have the following three functions defined:
spec fn f(x: nat, y: nat) -> bool;
spec fn g(x: nat) -> bool;
spec fn h(x: nat, y: nat) -> bool;
and we use them in the following proof code:
proof fn trigger_forever2()
requires
forall|x: nat| g(x),
forall|x: nat, y: nat| h(x, y) == f(x, y),
forall|x: nat, y: nat| f(x + 1, 2 * y) && f(2 * x, y + x) || f(y, x) ==> #[trigger] f(x, y),
ensures
forall|x: nat, y: nat| x > 2318 && y < 100 ==> h(x, y),
{
assert(g(4));
}
Notice that we have three quantifiers in the requires
clause; the first will
trigger on g(x)
, which will be useful for proving the assertion about g(4)
.
The second quantifier triggers on both f(x, y)
and h(x, y)
and says that
they’re equal. The last quantifier is manually triggered on f(x, y)
, but it
then introduces two more expressions that have a similar shape, namely f(x + 1, 2 * y)
and f(2 * x, y + x)
. Each of these has new arguments to f
, so
this will cause quantifier 3 to trigger again, creating an infinite cycle of
instantiations. Notice that each such instantiation will also cause quantifier
2 to trigger as well.
If we run Verus on this example, it will quickly time out. When this happens, you
can run Verus with the --profile
option to launch the profiler. We strongly
recommend combining that option with --rlimit 1
, so that you don’t generate too
much profiling data (the more you generate, the longer the analysis takes). With
--profile
, if verification times out, the profiler automatically launches.
If you want to profile a function that is verifying successfully but slowly, you
can use the --profile-all
option. You may want to combine this with the
--verify-function
option to target the function you’re interested in.
If we run the profiler on the example above, we’ll see something along the lines of:
error: function body check: Resource limit (rlimit) exceeded
--> rust_verify/example/trigger_loops.rs:64:1
|
64 | fn trigger_forever2() {
| ^^^^^^^^^^^^^^^^^^^^^
Analyzing prover log...
[00:00:39] ████████████████████████████████████████████████████████████████████████████████ 1153/1153K lines
... analysis complete
note: Observed 27,184 total instantiations of user-level quantifiers
note: Cost * Instantiations: 5391549700 (Instantiated 13,591 times - 49% of the total, cost 396700) top 1 of 3 user-level quantifiers.
--> rust_verify/example/trigger_loops.rs:68:78
|
68 | forall|x: nat, y: nat| f(x + 1, 2 * y) && f(2 * x, y + x) || f(y, x) ==> #[trigger] f(x, y),
| -------------------------------------------------------------------------^^^^^^^^^^^^^^^^^^ Triggers selected for this quantifier
note: Cost * Instantiations: 1037237938 (Instantiated 13,591 times - 49% of the total, cost 76318) top 2 of 3 user-level quantifiers.
--> rust_verify/example/trigger_loops.rs:67:28
|
67 | forall|x: nat, y: nat| h(x, y) == f(x, y),
| -----------------------^^^^^^^----^^^^^^^ Triggers selected for this quantifier
note: Cost * Instantiations: 16 (Instantiated 2 times - 0% of the total, cost 8) top 3 of 3 user-level quantifiers.
--> rust_verify/example/trigger_loops.rs:66:20
|
66 | forall|x: nat| g(x),
| ---------------^^^^ Triggers selected for this quantifier
error: aborting due to previous error
The profiler measures two aspects of quantifier performance. First, it collects a basic count of how
many times each quantifier is instantiated. Second, it attempts to calculate a “cost” for each
quantifier. The cost of a quantifier is the sum of cost of its instantiations. The cost of an instantiation i
is roughly 1 + sum_{(i, n) \in edges} cost(n) / in-degree(n)
where each n
is an instantiation caused
by instantiation i
. In other words, instantiation i
produced a term that caused the solver to create
another instantiation (of the same or a different quantifier) n
. This heuristic attempts to place more
weight on quantifiers whose instantiations themselves cause other expensive instantiations. By default,
the profiler will sort by the product of these two metrics.
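To make the heuristic concrete, here is a toy model of the cost computation (illustrative only, not the profiler's actual code): each instantiation costs 1, plus the cost of every instantiation it caused, split across that child's in-degree.

```rust
use std::collections::HashMap;

// Toy model of the profiler's cost heuristic over a DAG of instantiations:
// cost(i) = 1 + sum over children n of cost(n) / in_degree(n).
fn cost(
    node: usize,
    children: &HashMap<usize, Vec<usize>>,
    in_degree: &HashMap<usize, usize>,
) -> f64 {
    let child_cost: f64 = children
        .get(&node)
        .map(|cs| {
            cs.iter()
                .map(|&n| cost(n, children, in_degree) / in_degree[&n] as f64)
                .sum()
        })
        .unwrap_or(0.0);
    1.0 + child_cost
}
```

A quantifier whose instantiations fan out into many further instantiations accumulates a large cost, even if its own raw instantiation count is modest.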
In the example above, we see that the top quantifier is quantifier 3 in the Verus code, which is indeed the troublemaker. The use of the cost metric elevates it above quantifier 2, which had the same number of instantiations but is really an “innocent bystander” in this scenario. And both of these quantifiers are instantiated vastly more than quantifier 1, indicating that quantifier 1 is not the source of the problem. If all of the quantifiers have a small number of instantiations, that may be a sign that quantifier instantiation is not the underlying source of the solver’s poor performance.
Hiding local proofs with assert(...) by { ... }
Motivation
Sometimes, in a long function, you need to establish a fact F
that requires
a modest-size proof P
. Typically, you do this by ...; P; assert(F); ...
.
But doing this comes with a risk: the facts P
introduces can be used not
only for proving F
but also for proving the entire rest of the
function. This gives the SMT solver much more to think about when proving things beyond
assert(F)
, which is especially problematic when these additional facts are
universally quantified. This can make the solver take longer, and even time out, on
the rest of the function.
Enter assert(...) by { ... }
Saying assert(F) by {P}
restricts the context that P
affects, so that
it’s used to establish F
and nothing else. After the closing brace at the
end of { P }
, all facts that it established except for F
are removed from
the proof context.
Underlying mechanism
The way this works internally is as follows. The solver is given the facts following
from P
as a premise when proving F
but isn’t given them for the rest of
the proof. For instance, suppose lemma_A
establishes fact A
and lemma_B
establishes fact B
. Then
lemma_A();
assert(F) by { lemma_B(); };
assert(G);
is encoded to the solver as something like (A && B ==> F) && (A ==> G)
. If B
is an expansive fact
to think about, like forall|i: int| b(i)
, the solver won’t be able to think about it
when trying to establish G
.
Difference from auxiliary lemmas
Another way to isolate the proof of F
from the local context is to put the
proof P
in a separate lemma and invoke that lemma. To do this, the proof
writer has to think about what parts of the context (like fact A
in the
example above) are necessary to establish F
, and put those as requires
clauses in the lemma. The developer may then also need to pass other variables
to the lemma that are mentioned in those required facts. This can be done, but
can be a lot of work. Using assert(F) by { P }
obviates all this work. It
also makes the proof more compact by removing the need to have a separate
lemma with its own signature.
Structured Proofs by Calculation
Motivation
Sometimes, you need to establish some relation R
between two expressions, say,
a_1
and a_n
, where it might be easier to do this in a series of steps, a_1
to a_2
, a_2
to a_3
, … all the way to a_n
. One might do this by just
doing all the steps at once, but as mentioned in the section on
assert-by, a better approach might be to split it into a
collection of restricted contexts. This is better, but still might not be ideal,
since you need to repeat each of the intermediate expressions at each point.
calc!ulations, to Reduce Redundant Redundancy
The calc!
macro supports structured proofs through calculations.
In particular, one can show a_1 R a_n
for some transitive relation R
by performing a series
of steps a_1 R a_2
, a_2 R a_3
, … a_{n-1} R a_n
. The calc! macro provides convenient
syntactic sugar to perform such a proof without repeating oneself too often or
exposing the internal steps to the outside context.
The expected usage looks like:
calc! {
(R)
a_1; { /* proof that a_1 R a_2 */ }
a_2; { /* proof that a_2 R a_3 */ }
...
a_n;
}
For example,
let a: int = 2;
calc! {
(<=)
a; {}
a + 3; {}
5;
}
which is equivalent to proving a <= 5
using a <= a + 3 <= 5
. In this case, each
of the intermediate proofs are trivial, thus have an empty {}
block, but in
general, can have arbitrary proofs inside their blocks.
Notice that you mention a_1
, a_2
, … a_n
only once each. Additionally,
the proof for each of the steps is localized, and restricted to only its
particular step, ensuring that proof-context is not polluted.
The body of the function where this calc
statement is written only gets to see
a_1 R a_n
, and not any of the intermediate steps (or their proofs), further
limiting proof-context pollution.
Currently, the calc!
macro supports common transitive relations for R
(such
as ==
and <=
). This set of relations may be extended in the future.
Relating Relations to Relations
While a relation like <=
might be useful to use like above, it is possible
that not every intermediate step needs a <=
; sometimes one might be able to be
more precise, and maintaining this (especially for documentation/readability
reasons) might be useful. For example, one might want to say a_1 <= a_2 == a_3 <= a_4 < a_5 <= ...
.
This is supported by calc
by specifying the extra intermediate relations
inline (with the default being the high-level relation). These relations are
checked to be consistent with the top-level relation, in order to maintain
transitivity (so for example, using >
in the above chain would be caught and
reported with a helpful message).
A simple example of using intermediate relations looks like the following:
let x: int = 2;
let y: int = 5;
calc! {
(<=)
x; (==) {}
5 - 3; (<) {}
5int; {} // Notice that no intermediate relation
// is specified here, so `calc!` will
// consider the top-level relation
// `R`; here `<=`.
y;
}
This example is equivalent to saying x <= y
using x == 5 - 3 < 5 <= y
.
Proofs by Computation
Motivation
Some proofs should be “obvious” by simply computing on values. For example,
given a function pow(base, exp)
defining exponentiation, we would like it to
be straightforward and deterministic to prove that pow(2, 8) == 256
.
However, in general, to keep recursive functions like pow
from overwhelming
the SMT solver with too many unrollings, Verus defaults to only unrolling such
definitions once. Hence, to make the assertion above go through, the developer
needs to carefully adjust the amount of “fuel” provided to unroll pow
. Even
with such adjustment, we have observed cases where Z3 does “the wrong thing”,
e.g., it does not unroll the definitions enough, or it refuses to simplify
non-linear operations on statically known constants. As a result, seemingly
simple proofs like the one above don’t always go through as expected.
Enter Proof by Computation
Verus allows the developer to perform such proofs via computation, i.e.,
by running an internal interpreter over the asserted fact. The developer
can specify the desired computation using assert(e) by (compute)
(for some
expression e
). Continuing the example above, the developer could
write:
// Naive definition of exponentiation
spec fn pow(base: nat, exp: nat) -> nat
decreases exp,
{
if exp == 0 {
1
} else {
base * pow(base, (exp - 1) as nat)
}
}
proof fn concrete_pow() {
assert(pow(2, 8) == 256) by (compute); // Assertion 1
assert(pow(2, 9) == 512); // Assertion 2
assert(pow(2, 8) == 256) by (compute_only); // Assertion 3
}
In Assertion 1, Verus will internally reduce the left-hand side to 256 by repeatedly evaluating
pow
and then simplify the entire expression to true
.
When encoded to the SMT solver, the result will be (approximately):
assert(true);
assume(pow(2, 8) == 256);
In other words, in the encoding, we assert whatever remains after
simplification and then assume the original expression. Hence, even if
simplification only partially succeeds, Z3 may still be able to complete the
proof. Furthermore, because we assume the original expression, it is still
available to trigger other ambient knowledge or contribute to subsequent facts.
Hence Assertion 2 will succeed, since Z3 will unfold the definition of pow
once and then use the previously established fact that pow(2,8) == 256
.
If you want to ensure that the entire proof completes through computation and
leaves no additional work for Z3, then you can use assert(e) by (compute_only)
as shown in Assertion 3. Such an assertion will fail unless
the interpreter succeeds in reducing the expression completely down to true
.
This can be useful for ensuring the stability of your proof, since it does not
rely on any Z3 heuristics.
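The spec function above has a direct executable analogue in plain Rust; the interpreter effectively performs this computation when discharging Assertion 1 (this is a sketch of the computation, not Verus internals):

```rust
// Executable counterpart of the naive `pow` spec function: the Verus
// interpreter reduces pow(2, 8) by unrolling this recursion eight times.
fn pow(base: u64, exp: u64) -> u64 {
    if exp == 0 {
        1
    } else {
        base * pow(base, exp - 1)
    }
}
```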
Important note: An assertion using proof by computation does not inherit any context from its environment. Hence, this example:
let x = 2;
assert(pow(2, x) == 4) by (compute_only);
will fail, since x
will be treated symbolically, and hence the assertion will
not simplify all the way down to true
. This can be remedied either
by using assert(e) by (compute)
and allowing Z3 to finish the proof, or by moving
the let
into the assertion, e.g., as:
proof fn let_passes() {
assert({
let x = 2;
pow(2, x) == 4
}) by (compute_only);
}
While proofs by computation are most useful for concrete values, the interpreter
also supports symbolic values, and hence it can complete certain proofs
symbolically. For example, given variables a, b, c, d
, the following succeeds:
proof fn seq_example(a: Seq<int>, b: Seq<int>, c: Seq<int>, d: Seq<int>) {
assert(seq![a, b, c, d] =~= seq![a, b].add(seq![c, d])) by (compute_only);
}
Many proofs by computation take place over a concrete range of integers. To reduce
the boilerplate needed for such proofs, you can use
all_spec
.
In the example below,
use vstd::compute::RangeAll;
spec fn p(u: usize) -> bool {
u >> 8 == 0
}
proof fn range_property(u: usize)
requires 25 <= u < 100,
ensures p(u),
{
assert((25..100int).all_spec(|x| p(x as usize))) by (compute_only);
let prop = |x| p(x as usize);
assert(prop(u));
}
we use all_spec
to prove that p
holds for all values between 25 and 100,
and hence it must hold for a generic value u
that we know is in that range.
Note that all_spec
currently expects to operate over int
s, so you may need to
add casts as appropriate. Also, due to some technical restrictions, at present,
you can’t pass a top-level function like p
to all_spec
. Instead, you need
to wrap it in a closure, as seen in this example. Finally, the lemmas in vstd
will give you a quantified result about the outcome of all_spec
, so you may
need to add an additional assertion (in our example, assert(prop(u))
) to trigger
that quantifier. This guide provides more detail on quantifiers and
triggers in another chapter.
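As an executable sanity check of the property itself (plain Rust, separate from the proof): every u in 25..100 is below 256, so its high bits are zero.

```rust
// Concrete check of the range property established via all_spec above:
// u >> 8 == 0 holds for every u in 25..100, since all such u are < 256.
fn p(u: usize) -> bool {
    u >> 8 == 0
}

fn range_holds() -> bool {
    (25..100).all(p)
}
```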
To prevent infinite interpretation loops (which can arise even when the code is
proven to terminate, since the termination proof only applies to concrete
inputs, whereas the interpreter may encounter symbolic values), Verus limits
the time it will spend interpreting any given proof by computation.
Specifically, the time limit is the number of seconds specified via the
--rlimit
command-line option.
By default, the interpreter does not cache function call results based on the
value of the arguments passed to the function. Experiments showed this typically
hurts performance, since it entails traversing the (large) AST nodes representing
the arguments. However, some examples need such caching to succeed (e.g., computing
with the naive definition of Fibonacci). Such functions can be annotated with
#[verifier::memoize]
, which will cause their results to be cached during computation.
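The Fibonacci example is worth spelling out in plain Rust: without a cache, the recursion is exponential, and memoization (the caching that #[verifier::memoize] turns on inside the interpreter) collapses it to linear work. The helper below is a sketch, not Verus's implementation.

```rust
use std::collections::HashMap;

// Memoized Fibonacci: each value is computed once and then looked up,
// mirroring the per-argument caching enabled by #[verifier::memoize].
fn fib(n: u64, cache: &mut HashMap<u64, u64>) -> u64 {
    if n < 2 {
        return n;
    }
    if let Some(&v) = cache.get(&n) {
        return v;
    }
    let v = fib(n - 1, cache) + fib(n - 2, cache);
    cache.insert(n, v);
    v
}
```

Without the cache, fib(50) would take on the order of 2^50 calls; with it, 50 distinct subproblems suffice.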
Current Limitations
- As mentioned above, the expression given to a proof by computation is interpreted in isolation from any surrounding context.
- The expression passed to a proof-by-computation assertion must be in spec mode, which means it cannot be used on proof or exec mode functions.
- The interpreter is recursive, so a deeply nested expression (or series of function calls) may cause Verus to exceed the process’ stack space.
See Also
- The test suite has a variety of small examples.
- We also have several more complex examples.
Breaking proofs into smaller pieces
Motivation
If you write a long function with a lot of proof code, Verus will correspondingly give the SMT solver a long and difficult problem to solve. So one can improve solver performance by breaking that function down into smaller pieces. This performance improvement can be dramatic because solver response time typically increases nonlinearly as proof size increases. After all, having twice as many facts in scope gives the solver far more than twice as many possible paths to search for a proof. As a consequence, breaking functions down can even make the difference between the solver timing out and the solver succeeding quickly.
Moving a subproof to a lemma
If you have a long function, look for a modest-size piece P
of it that
functions as a proof of some locally useful set of facts S
. Replace P
with
a call to a lemma whose postconditions are S
, then make P
the body of that
lemma. Consider what parts of the original context of P
are necessary to
establish S
, and put those as requires
clauses in the lemma. Those
requires
clauses may involve local variables, in which case pass those
variables to the lemma as parameters.
For instance:
fn my_long_function(x: u64, ...)
{
let y: int = ...;
... // first part of proof, establishing fact f(x, y)
P1; // modest-size proof...
P2; // establishing...
P3; // facts s1 and s2...
P4; // about x and y
... // second part of proof, using facts s1 and s2
}
might become
proof fn my_long_function_helper(x: u64, y: int)
requires
f(x, y)
ensures
s1(x),
s2(x, y)
{
P1; // modest-size proof...
P2; // establishing...
P3; // facts s1 and s2...
P4; // about x and y
}
fn my_long_function(x: u64, ...)
{
... // first part of proof, establishing fact f(x, y)
my_long_function_helper(x, y);
... // second part of proof, using facts s1 and s2
}
You may find that, once you’ve moved P
into the body of the lemma, you can
not only remove P
from the long function but also remove significant
portions of P
from the lemma where it was moved to. This is because a lemma
dedicated solely to establishing S
will have a smaller context for the
solver to reason about. So less proof annotation may be necessary to get it to
successfully and quickly establish S
. For instance:
proof fn my_long_function_helper(x: u64, y: int)
requires
f(x, y)
ensures
s1(x),
s2(x, y)
{
P1; // It turns out that P2 and P3 aren't necessary when
P4; // the solver is focused on just f, s1, s2, x, and y.
}
Dividing a proof into parts 1, 2, …, n
Another approach is to divide your large function’s proof into n
consecutive
pieces and put each of those pieces into its own lemma. Make the first lemma’s
requires
clauses be the requires
clauses for the function, and make its
ensures
clauses be a summary of what its proof establishes. Make the second
lemma’s requires
clauses match the ensures
clauses of the first lemma, and
make its ensures
clauses be a summary of what it establishes. Keep going
until lemma number n
, whose ensures
clauses should be the ensures
clauses of the original function. Finally, replace the original function’s
proof with a sequence of calls to those n
lemmas in order.
For instance:
proof fn my_long_function(x: u64)
requires r(x)
ensures e(x)
{
P1;
P2;
P3;
}
might become
proof fn my_long_function_part1(x: u64) -> (y: int)
requires
r(x)
ensures
mid1(x, y)
{
P1;
}
proof fn my_long_function_part2(x: u64, y: int)
requires
mid1(x, y)
ensures
mid2(x, y)
{
P2;
}
proof fn my_long_function_part3(x: u64, y: int)
requires
mid2(x, y)
ensures
e(x)
{
P3;
}
proof fn my_long_function(x: u64)
requires r(x)
ensures e(x)
{
let y = my_long_function_part1(x);
my_long_function_part2(x, y);
my_long_function_part3(x, y);
}
Since the expressions r(x)
, mid1(x, y)
, mid2(x, y)
, and e(x)
are each
repeated twice, it may be helpful to factor each out as a spec function and
thereby avoid repetition.
Checklist: What to do when proofs go wrong
A proof is failing and I don’t expect it to. What’s going wrong?
- Try running Verus with
--expand-errors
to get more specific information about what’s failing. - Check Verus’s output for
recommends
-failures and other notes. - Add more
assert
statements. This can either give you more information about what’s failing, or even just fix the proof. See this guide. - Are you using quantifiers? Make sure you understand how triggers work.
- Are you using nonlinear arithmetic? Try one of the strategies for nonlinear arithmetic.
- Are you using bitwise arithmetic or
as
-truncation? Try the bit_vector solver. - Are you relying on the equality of a container type (like
Seq
orMap
)? Try extensional equality. - Are you using a recursive function? Make sure you understand how fuel works.
The verifier says “rlimit exceeded”. What can I do?
- Try the quantifier profiler to identify a problematic trigger-pattern.
- Try breaking the proof into pieces.
- Try increasing the
rlimit
. Sometimes a proof really is just kind of big and you want Verus to spend a little more effort on it.
My proof is “flaky”: it sometimes works, but then I change something unrelated, and it breaks.
- Try adding
#[verifier::spinoff_prover]
to the function. This can make it a little more stable. - Try breaking the proof into pieces.
Higher-order executable functions
Here we discuss the use of higher order functions via closures and other function types in Rust.
Passing functions as values
In Rust, functions may be passed by value using the FnOnce
, FnMut
, and Fn
traits.
Just like for normal functions, Verus supports reasoning about the preconditions
and postconditions of such functions.
Reasoning about preconditions and postconditions
Verus allows you to reason about the preconditions and postconditions of function values
via two builtin spec functions: call_requires
and call_ensures
.
- call_requires(f, args) represents the precondition. It takes two arguments: the function object and the arguments as a tuple. If it returns true, then it is possible to call f with the given args.
- call_ensures(f, args, output) represents the postcondition. It takes three arguments: the function object, arguments, and return value. It represents the valid input-output pairs for f.
The vstd
library also provides aliases, f.requires(args)
and f.ensures(args, output)
.
These mean the same thing as call_requires
and call_ensures
.
As with any normal call, Verus demands that the precondition be satisfied when you call a function object. This is demonstrated by the following example:
fn double(x: u8) -> (res: u8)
requires
0 <= x < 128,
ensures
res == 2 * x,
{
2 * x
}
fn higher_order_fn(f: impl Fn(u8) -> u8) -> (res: u8) {
f(50)
}
fn test() {
higher_order_fn(double);
}
As we can see, test
calls higher_order_fn
, passing in double
.
The higher_order_fn
then calls the argument with 50
. This should be allowed,
according to the requires
clause of double
; however, higher_order_fn
does not have
the information to know this is correct.
Verus gives an error:
error: Call to non-static function fails to satisfy `callee.requires(args)`
--> vec_map.rs:25:5
|
25 | f(50)
| ^^^^^
To fix this, we can add a precondition to higher_order_fn
that gives information on
the precondition of f
:
fn double(x: u8) -> (res: u8)
requires
0 <= x < 128,
ensures
res == 2 * x,
{
2 * x
}
fn higher_order_fn(f: impl Fn(u8) -> u8) -> (res: u8)
requires
call_requires(f, (50,)),
{
f(50)
}
fn test() {
higher_order_fn(double);
}
The (50,)
looks a little funky. This is a 1-tuple.
The call_requires
and call_ensures
always take tuple arguments for the “args”.
If f
takes 0 arguments, then call_requires
takes a unit tuple;
if f
takes 2 arguments, then it takes a pair; etc.
Here, f
takes 1 argument, so it takes a 1-tuple, which can be constructed by using
the trailing comma, as in (50,)
.
Verus now accepts this code, as the precondition of higher_order_fn
now guarantees that
f
accepts the input of 50
.
We can go further and allow higher_order_fn
to reason about the output value of f
:
fn double(x: u8) -> (res: u8)
requires
0 <= x < 128,
ensures
res == 2 * x,
{
2 * x
}
fn higher_order_fn(f: impl Fn(u8) -> u8) -> (res: u8)
requires
call_requires(f, (50,)),
forall|x, y| call_ensures(f, x, y) ==> y % 2 == 0,
ensures
res % 2 == 0,
{
let ret = f(50);
return ret;
}
fn test() {
higher_order_fn(double);
}
Observe that the precondition of higher_order_fn
places a constraint on the postcondition
of f
.
As a result, higher_order_fn
learns information about the return value of f(50)
.
Specifically, it learns that call_ensures(f, (50,), ret)
holds, which by higher_order_fn
’s
precondition, implies that ret % 2 == 0
.
An important note
The above examples show the idiomatic way to constrain the preconditions and postconditions
of a function argument. Observe that call_requires
is used in a positive position,
i.e., “call_requires
holds for this value”.
Meanwhile call_ensures
is used in a negative position, i.e., on the left hand side
of an implication: “if call_ensures
holds for a given value, then that value satisfies this particular constraint”.
It is very common to need a guarantee that f(args)
will return one specific value,
say expected_return_value
.
In this situation, it can be tempting to write,
requires call_ensures(f, args, expected_return_value),
as your constraint. However, this is almost never what you actually want,
and in fact, Verus may not even let you prove it.
The proposition call_ensures(f, args, expected_return_value)
says that expected_return_value
is a possible return value of f(args)
;
however, it says nothing about other possible return values.
In general, f
may be nondeterministic!
Just because expected_return_value
is one possible return
value does not mean it is the only one.
When faced with this situation, what you really want is to write:
requires forall |ret| call_ensures(f, args, ret) ==> ret == expected_return_value
This is the proposition that you really want, i.e., “if f(args)
returns a value ret
,
then that value is equal to expected_return_value
”.
Of course, this is flipped around when you write a postcondition, as we’ll see in the next example.
Example: vec_map
Let’s take what we learned and write a simple function, vec_map
, which applies a given
function to each element of a vector and returns a new vector.
The key challenge is to determine the right specification to use.
The signature we want is:
fn vec_map<T, U>(v: &Vec<T>, f: impl Fn(T) -> U) -> (result: Vec<U>) where
T: Copy,
First, what do we need to require? We need to require that it’s okay to call f
with any element of the vector as input.
requires
forall|i|
0 <= i < v.len() ==> call_requires(
f,
(v[i],),
),
Next, what ought we to ensure? Naturally, we want the returned vector to have the same
length as the input. Furthermore, we want to guarantee that any element in the output
vector is a possible output when the provided function f
is called on the corresponding
element from the input vector.
ensures
result.len() == v.len(),
forall|i|
0 <= i < v.len() ==> call_ensures(
f,
(v[i],),
#[trigger] result[i],
)
,
Now that we have a specification, the implementation and loop invariant should fall into place:
fn vec_map<T, U>(v: &Vec<T>, f: impl Fn(T) -> U) -> (result: Vec<U>) where
T: Copy,
requires
forall|i|
0 <= i < v.len() ==> call_requires(
f,
(v[i],),
),
ensures
result.len() == v.len(),
forall|i|
0 <= i < v.len() ==> call_ensures(
f,
(v[i],),
#[trigger] result[i],
)
,
{
let mut result = Vec::new();
let mut j = 0;
while j < v.len()
invariant
forall|i| 0 <= i < v.len() ==> call_requires(f, (v[i],)),
0 <= j <= v.len(),
j == result.len(),
forall|i| 0 <= i < j ==> call_ensures(f, (v[i],), #[trigger] result[i]),
{
result.push(f(v[j]));
j += 1;
}
result
}
Finally, we can try it out with an example:
fn double(x: u8) -> (res: u8)
requires
0 <= x < 128,
ensures
res == 2 * x,
{
2 * x
}
fn test_vec_map() {
let mut v = Vec::new();
v.push(0);
v.push(10);
v.push(20);
let w = vec_map(&v, double);
assert(w[2] == 40);
}
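Stripped of its verification annotations, vec_map is ordinary Rust; this executable version (a sketch) behaves exactly as the specification above describes:

```rust
// Executable vec_map without the Verus annotations: applies f to each
// element of the input vector and collects the results into a new one.
fn vec_map<T: Copy, U>(v: &Vec<T>, f: impl Fn(T) -> U) -> Vec<U> {
    let mut result = Vec::new();
    let mut j = 0;
    while j < v.len() {
        result.push(f(v[j]));
        j += 1;
    }
    result
}
```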
Conclusion
In this chapter, we learned how to write higher-order functions with higher-order specifications, i.e., specifications that constrain the specifications of functions that are passed around as values.
All of the examples from this chapter passed functions by referring to them directly by name, e.g., passing the function double by writing double.
In Rust, a more common way to work with higher-order functions is to pass closures.
In the next chapter, we’ll learn how to use closures.
Closures
In the previous chapter, we saw how to pass functions as values, which we did by referencing function items by name. However, it is more common in Rust to create functions using closures.
Preconditions and postconditions on a closure
Verus allows you to specify requires
and ensures
on a closure just like you can for
any other function.
Here’s an example, calling the vec_map
function we defined in the
previous chapter:
fn test_vec_map_with_closure() {
let double = |x: u8| -> (res: u8)
requires 0 <= x < 128
ensures res == 2 * x
{
2 * x
};
assert(forall |x| 0 <= x < 128 ==> call_requires(double, (x,)));
assert(forall |x, y| call_ensures(double, (x,), y) ==> y == 2 * x);
let mut v = Vec::new();
v.push(0);
v.push(10);
v.push(20);
let w = vec_map(&v, double);
assert(w[2] == 40);
}
Closure capturing
One of the most challenging aspects of closures, in general, is that closures
can capture variables from the surrounding context.
Rust resolves this challenge through its hierarchy of function traits: FnOnce, FnMut, and Fn.
The declaration of the closure and the details of its context capture determine
which traits it has. In turn,
the traits determine what capabilities the caller has: Can they call it more than
once? Can they call it in parallel?
See the Rust documentation for a more detailed introduction.
In brief, the traits provide the following capabilities to callers and restrictions on the context capture:
| | Caller capability | Capturing |
|---|---|---|
| FnOnce | May call once | May move variables from the context |
| FnMut | May call multiple times via &mut reference | May borrow mutably from the context |
| Fn | May call multiple times via & reference | May borrow immutably from the context |
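The difference shows up in what a generic caller can demand of the closure. The following plain-Rust sketch (function names call_once and call_twice are illustrative, not from any library) shows a move-capturing closure that is FnOnce only, next to an immutably-borrowing closure that implements Fn and can therefore be called repeatedly:

```rust
// A caller that demands only FnOnce may call the closure exactly once.
fn call_once<F: FnOnce() -> String>(f: F) -> String {
    f()
}

// A caller that demands Fn may call the closure multiple times.
fn call_twice<F: Fn() -> i32>(f: F) -> (i32, i32) {
    (f(), f())
}

fn demo() -> (String, (i32, i32)) {
    let s = String::from("hello");
    // Moves `s` out of the context when called, so this closure is FnOnce only;
    // passing it to call_twice would be a compile error.
    let take = move || s;

    let x = 21;
    // Only borrows `x` immutably, so this closure implements Fn
    // (and, by the trait hierarchy, FnMut and FnOnce as well).
    let read = || x * 2;

    (call_once(take), call_twice(read))
}
```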
Verus does not yet support borrowing mutably from the context,
though it does handle moving and immutable borrows easily.
Therefore, Verus has better support for Fn
and FnOnce
—it does not yet take advantage of the
capturing capabilities supported by Rust’s FnMut
.
Fortunately, both move-captures and immutable-reference-captures are easy to handle, as we can simply take their values inside the closure to be whatever they are at the program point of the closure expression.
Example:
fn example_closure_capture() {
let x: u8 = 20;
let f = || {
// Inside the closure, we have seamless access to
// variables defined outside the closure.
assert(x == 20);
x
};
}
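The point about capture happening "at the program point of the closure expression" can be observed in plain Rust with a move-capture of a Copy value; this small sketch (the function capture_demo is hypothetical, for illustration only) shows that later mutation of the original variable does not affect the captured copy:

```rust
// `move` captures x's value (u8 is Copy) at the point of the closure
// expression; mutating x afterwards does not affect the captured copy.
fn capture_demo() -> (u8, u8) {
    let mut x: u8 = 20;
    let f = move || x;
    x = 99; // does not change what `f` returns
    (f(), x)
}
```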
Unsafe code & complex ownership
Here we discuss the handling of more complex patterns relating to Rust ownership including:
- Interior mutability, where Rust allows you to mutate data even through a shared reference
&T
- Raw pointers, which require proper ownership handling in order to uphold safety contracts
- Concurrency, where objects owned across different threads may need to coordinate.
Interior Mutability
The Interior Mutability pattern
is a particular Rust pattern wherein the user is able to manipulate the contents of a value
accessed via a shared borrow &
. (Though &
is often referred to as “immutable borrow,”
we will call it a “shared borrow” here, to avoid confusion.)
Two common Rust types illustrating interior mutability are
Cell
and RefCell
.
Here, we will overview the equivalent concepts in Verus.
Mutating stuff that can’t mutate
To understand the key challenge in verifying these interior mutability patterns,
recall an important fact of Verus’s SMT encoding. Verus assumes that any value of type &T
,
for any type T
, can never change. However, we also know that the contents of a
&Cell<T>
might change. After all, that’s the whole point of the Cell<T>
type!
The inescapable conclusion, then, is that
the value taken by a Cell<T>
in Verus’ SMT encoding must not depend on the cell’s contents.
Instead, the SMT “value” of a Cell<T>
is nothing more than a unique identifier for the Cell.
In some regards, it may help to think of Cell<T>
as similar to a pointer T*
.
The value of the Cell<T>
is only its identifier (its “pointer address”) rather than
its contents (“the thing pointed to by the pointer”). Of course, it’s not a pointer, but
from the perspective of the encoding, it might as well be.
Note one immediate ramification of this property:
Verus’ pure equality ===
on Cell
types cannot possibly
give the same results as Rust’s standard ==
(eq
) on Cell
types.
Rust’s ==
function actually compares the contents of the cells.
But pure equality, ===
, which must depend on the SMT encoding values,
cannot possibly depend on the contents!
Instead, ===
compares two cells as equal only if they are the same cell.
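The content-comparing behavior of Rust's standard == on Cell is easy to observe in plain (unverified) Rust; in this sketch two distinct cells compare equal until one is mutated through a shared value:

```rust
use std::cell::Cell;

// Rust's == on Cell compares the cells' *contents*: two distinct cells
// holding equal values compare equal, and interior mutation through a
// shared value changes the comparison result.
fn cells_equal_by_contents() -> (bool, bool) {
    let a = Cell::new(5);
    let b = Cell::new(5);
    let before = a == b; // true: equal contents, different cells
    a.set(7);            // mutate `a` through a shared value
    let after = a == b;  // false: contents now differ
    (before, after)
}
```

Verus' pure equality ===, by contrast, would distinguish `a` and `b` even before the `set`, since they are different cells.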
So, with these challenges in mind, how do we handle interior mutability in Verus?
There are a few different approaches we can take.
- When retrieving a value from the interior of a Cell-like data structure, we can model this as non-deterministically receiving a value of the given type. At first, this might seem like it gives us too little to work with for verifying correctness properties. However, we can impose additional structure by specifying data invariants to restrict the space of possible values.
- Track the exact value using tracked ghost code.
More sophisticated data structures—especially concurrent ones—often require a careful balance of both approaches. We’ll introduce both here.
Data Invariants with InvCell
Suppose we have an expensive computation and we want to memoize its value. The first time
we need to compute the value, we perform the computation and store its value for whenever
it’s needed later. To do this, we’ll use a Cell
, whose interior is intialized to None
to store the computed value.
The memoized compute function will then:
- Read the value in the Cell.
- If it’s None, then the value hasn’t been computed yet. Compute the value, store it for later, then return it.
- If it’s Some(x), then the value has already been computed, so return it immediately.
Crucially, the correctness of this approach doesn’t actually depend on being able to predict which of these cases any invocation will take. (It might be a different story if we were attempting to prove a bound on the time the program will take.) All we need to know is that it will take one of these cases. Therefore, we can verify this code by using a cell with a data invariant:
- Invariant: the value stored in the interior of the cell is either None or Some(x), where x is the expected result of the computation.
Concretely, the above can be implemented in Verus using
InvCell
,
provided by Verus’ standard library, which provides a data-invariant-based specification.
When constructing a new InvCell<T>
, the user specifies a data invariant: some boolean predicate
over the type T
which tells the cell what values are allowed to be stored.
Then, the InvCell
only has to impose the restriction that whenever the user writes to the cell,
the value val
being written has to satisfy the predicate, cell.inv(val)
.
In exchange, though, whenever the user reads from the cell, they know the value they
receive satisfies cell.inv(val)
.
Here’s an example using an InvCell
to implement a memoized function:
spec fn result_of_computation() -> u64 {
2
}
fn expensive_computation() -> (res: u64)
ensures
res == result_of_computation(),
{
1 + 1
}
spec fn cell_is_valid(cell: &InvCell<Option<u64>>) -> bool {
forall|v|
(cell.inv(v) <==> match v {
Option::Some(i) => i == result_of_computation(),
Option::None => true,
})
}
// Memoize the call to `expensive_computation()`.
// The argument here is an InvCell wrapping an Option<u64>,
// which is initially None, but then it is set to the correct
// answer once it's computed.
//
// The precondition here, given in the definition of `cell_is_valid` above,
// says that the InvCell has an invariant that the interior contents is either
// `None` or `Some(i)` where `i` is the desired value.
fn memoized_computation(cell: &InvCell<Option<u64>>) -> (res: u64)
requires
cell_is_valid(cell),
ensures
res == result_of_computation(),
{
let c = cell.get();
match c {
Option::Some(i) => {
// The value has already been computed; return the cached value
i
},
Option::None => {
// The value hasn't been computed yet. Compute it here
let i = expensive_computation();
// Store it for later
cell.replace(Option::Some(i));
// And return it now
i
},
}
}
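For comparison, the same memoization pattern can be written in plain, unverified Rust using std::cell::OnceCell in the role of the InvCell. This sketch carries no machine-checked invariant; the guarantee that the cached value equals the recomputed value rests entirely on the programmer:

```rust
use std::cell::OnceCell;

fn expensive_computation() -> u64 {
    1 + 1
}

// Unverified analogue of memoized_computation: the OnceCell starts out
// empty and is filled with the computed answer on first use.
// get_or_init runs the computation only if the cell is still empty.
fn memoized_computation(cell: &OnceCell<u64>) -> u64 {
    *cell.get_or_init(expensive_computation)
}
```

A second call on the same cell returns the cached value without recomputing; nothing in plain Rust, however, checks that the cached value is the *correct* one, which is exactly the gap the InvCell data invariant closes.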
Tracked ghost state with PCell
(TODO finish writing this chapter)
Pointers and cells
See the vstd documentation for more information on handling these features.
- For cells, see PCell.
- For pointers to fixed-sized heap allocations, see PPtr.
- For general support for *mut T and *const T, see vstd::raw_ptr.
Concurrency
Verus provides the VerusSync framework for verifying programs that require a nontrivial ownership discipline. This includes multi-threaded concurrent code, and frequently it is also needed for nontrivial applications of unsafe features (such as pointers or unsafe cells).
The topic is sufficiently complex that we cover it in a separate tutorial and reference book.
Understanding the guarantees of a verified program
A persistent challenge with verified software is understanding what, exactly, is being verified and what guarantees are being given. Verified code doesn’t run in a vacuum; verified code often interacts with unverified code, possibly in either direction. This chapter documents technical information needed to properly understand how verified code might interact with unverified code.
Assumptions and trusted components
Oftentimes, it’s not possible to verify every line of code, and some things need to be assumed. In such cases, the ultimate correctness of the code depends not just on verification but on the assumptions being made.
Assumptions can be introduced through the following mechanisms:
- An assume statement
- An axiom: any proof function introduced with #[verifier::external_body]
- An axiomatic specification: any exec function introduced with #[verifier::external_body] or #[verifier::external_fn_specification]
- #[verifier::external] (See below.)
Types (structs and enums) can also be marked as #[verifier::external_body]
,
though to be pedantic, this does not introduce a new assumption per se.
In practice, though, such types are usually associated with additional assumptions
to make them useful.
The #[verifier::external]
attribute
The #[verifier::external]
annotation tells Verus to ignore an item entirely.
It can be applied to any item - a function, trait, trait implementation, type, etc.
For many items (functions, types, trait declarations), this does not, on its own,
introduce any new “assumptions” about that item.
Attempting to call an external
function from a verified
function, for example, will result in an error from Verus. In practice, a developer
will often call an external
function (say f
) from an external_body
function (say g
),
in which case, the external_body
attribute introduces assumptions about g
, thus
indirectly introducing assumptions about f
.
Furthermore, adding #[verifier::external]
to a trait implementation requires even more
careful consideration, as Verus relies on rustc’s trait-checking for some things,
so trait implementations can sometimes affect what code gets accepted or rejected.
For example:
#[verifier::external]
unsafe impl Send for X { }
Memory safety is conditional on verification
Let’s briefly compare and contrast the philosophies of Rust and Verus with regards to memory safety. Memory safety, here, refers to a program being free of any undefined behavior (UB) in its memory access. Both Rust and Verus rely on memory safety being upheld; in turn, they both do a great deal to enforce it. However, they enforce it in different ways.
Rust’s enforcement of memory safety is built around a contract between “safe” and
“unsafe” code. The first chapter of the Rustonomicon
summarizes the philosophy. In short: any “safe” code (i.e., code free of the unsafe
keyword)
must be memory safe, enforced by Rust itself via its type-checker and borrow-checker,
regardless of user error. However, if any code uses unsafe
, it is the responsibility
of the programmer to ensure that the program is memory safe—and if the programmer fails to
do so, then the behavior of the program is undefined (by definition).
In practice, of course, most code does use unsafe
, albeit only indirectly.
Most code relies on low-level utilities that can only be implemented with unsafe code,
including many from the standard library (e.g., Arc
, RefCell
, and so on), but also
from user-provided crates. In any case, the Rust philosophy is that the providers of these
low-level utilities should meet a standard of “unsafe encapsulation.”
A programmer interacting with the library only through its safe API (and also not using
unsafe code anywhere else) should not be able to exhibit undefined behavior,
not even by writing buggy code or using the API in an unintended way.
As such, the library implementors need to code defensively against all possible ways the
client might use the safe API.
When they are successful in this, the clients once again gain the guarantee that they
cannot invoke UB without unsafe
code.
By contrast, Verus does not have an “unsafe/safe” distinction, nor does it have a notion of unsafe encapsulation. This is because it verifies both memory safety and other forms of correctness through Verus specifications.
Example
Consider, for example, the index operation in Rust’s standard Vec
container.
If the client calls this function with an index that is not in-range for the vector’s
length, then it is likely a bug on the part of the client. However, the index operation
is part of the safe API, and therefore it must be robust to such things, and it can never
attempt to read out-of-bounds memory. Therefore, the implementation of this operation has
to do a bounds check (panicking if the bounds check fails).
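This defensive behavior is visible in the safe slice/Vec API of plain Rust; in this small sketch (the lookup wrapper is hypothetical, for illustration), `get` surfaces the bounds check to the caller as an Option, while direct indexing would panic on an out-of-range index rather than ever reading out-of-bounds memory:

```rust
// Safe bounds-checked lookup: `get` performs the bounds check and
// returns None when the index is out of range, so no out-of-bounds
// memory access can occur regardless of what the caller passes.
fn lookup(v: &[i32], i: usize) -> Option<i32> {
    v.get(i).copied()
}
```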
On the other hand, consider this (possible) implementation of index
for Verus’s
Vec
collection:
impl<A> Vec<A> {
#[verifier::external_body]
pub fn index(&self, i: usize) -> (r: &A)
requires
i < self.len(),
ensures
*r === self[i as int],
{
unsafe { self.vec.get_unchecked(i) }
}
}
Unlike Rust’s index
, this implementation has no bounds checks, and it exhibits UB if called
for a value of i
that is out-of-bounds. Therefore, as ordinary Rust, it would not meet
the standards of unsafe encapsulation.
However, due to its requires
clause,
Verus enforces that any call to this function will satisfy the contract and be in-bounds.
Therefore, UB cannot occur in a verified Verus program, but type-checking alone is not
sufficient to ensure this.
Conclusion
Rust’s concept of unsafe encapsulation means that programmers writing in safe Rust can be sure that their programs will be memory safe as long as they type-check and pass the borrow-checker, even if their code otherwise has bugs.
In Verus, there is no staggered notion of correctness. If the program verifies, then it is memory safe, and it will execute according to all its specifications. If the program fails to verify, then all bets are off.
Calling verified code from unverified code
Of course, the correctness of Verus code depends on meeting the assumptions as provided in its specification. If you call verified code from unverified code, Verus won’t be able to check that these contracts are upheld at each call-site, so the responsibility is on the developer to meet them.
The developer needs to meet these assumptions:
- Any requires clauses on the function being called.
- Any trait implementations used to meet the function’s trait bounds must be implemented according to the trait specifications.
Let me give an example of the latter. Suppose V is the verified source code, which declares
a trait Trait
and a function with a trait bound, f<T: Trait>
.
Also suppose Trait
has a function trait_fn
with an ensures
clause.
Now suppose we have unverified source U, which defines a type X
and a trait impl
impl Trait for X
.
Then, in order for U to safely call f
, the developer needs to make sure that
X::trait_fn
correctly meets the ensures
specification that V demands.
Requirements on the Drop trait
Note: We hope to simplify or remove this requirement in the future.
Note that the Drop
trait has some special considerations. Specifically, Verus treats drop
as if it has the following signature:
fn drop(&mut self)
opens_invariants none
no_unwind
(See opens_invariants
and no_unwind
.)
For any verified implementation of Drop
, Verus checks that it meets this criterion.
For unverified implementations of drop, the onus is on the user to meet this criterion.
Warning
As discussed in the last chapter, the memory safety of a verified program is conditional on verification. Therefore, calling verified code from unverified code could be non-memory-safe if the unverified code fails to uphold these contracts.
IDE Support for Verus
Verus currently has IDE support for VS Code and Emacs.
For VS Code, we require verus-analyzer
, our Verus-specific fork of rust-analyzer
. To use Verus with VS Code, follow the instructions in the README for verus-analyzer.
For Emacs, we have stand-alone support for Verus. The steps to get started with Emacs are below.
Quickstart Emacs
We support Verus programming in Emacs through verus-mode.el, a major mode that supports syntax highlighting, verification-on-save, jump-to-definition, and more.
To use verus-mode, the setup can be as simple as configuring .emacs
to (i) set verus-home
to the path to Verus, and then (ii) load up verus-mode
.
For example, if you use use-package
, you can clone verus-mode.el into a location that Emacs can load from, and add the following snippet:
(use-package verus-mode
:init
(setq verus-home "PATH_TO_VERUS_DIR")) ; Path to where you've cloned https://github.com/verus-lang/verus
Depending on your specific Emacs setup, your exact installation process for verus-mode.el could vary. Detailed installation steps for various Emacs setups are documented in the Install section on verus-mode.el’s README.
For more details on latest features, key-bindings, and troubleshooting tips, do check out the README for verus-mode.el.
Installing and configuring Singular
Singular must be installed in order to use the integer_ring
solver mode.
Steps:
-
Install Singular
- To use Singular’s standard library, you need more than just the Singular executable binary.
Hence, when possible, we strongly recommend using your system’s package manager. Regardless of the method you select, please install Singular version 4.3.2: other versions are untested, and 4.4.0 is known to be incompatible with Verus. Here are
some suggested steps for different platforms.
-
Mac:
brew install Singular
and set theVERUS_SINGULAR_PATH
environment variable when running Verus. (e.g.VERUS_SINGULAR_PATH=/usr/local/bin/Singular
). For more options, see Singular’s OS X installation guide. -
Debian-based Linux:
apt-get install singular
and set theVERUS_SINGULAR_PATH
environment variable when running Verus. (e.g.VERUS_SINGULAR_PATH=/usr/bin/Singular
). For more options, see Singular’s Linux installation guide. -
Windows: See Singular’s Windows installation guide.
-
Compiling Verus with Singular Support
- The
integer_ring
functionality is conditionally compiled when thesingular
feature is set. To add this feature, add the--features singular
flag when you invokevargo build
to compile Verus.
Documentation with Rustdoc
Verus provides a tool to help make Verus specifications look nice in rustdoc.
To do this, you first run rustdoc
on a crate and then run an HTML postprocessor called
Verusdoc.
First, make sure verusdoc
is built by running vargo build -p verusdoc
in the
verus/source
directory.
Unfortunately, we currently don’t have helpful tooling for running rustdoc
with the
appropriate dependencies and flags, so you’ll need to set that up manually.
Here is an example:
VERUS=/path/to/verus/source
if [ `uname` == "Darwin" ]; then
DYN_LIB_EXT=dylib
elif [ `uname` == "Linux" ]; then
DYN_LIB_EXT=so
fi
# Run rustdoc.
# Note the VERUSDOC environment variable.
RUSTC_BOOTSTRAP=1 VERUSDOC=1 rustdoc \
--extern builtin=$VERUS/target-verus/debug/libbuiltin.rlib \
--extern builtin_macros=$VERUS/target-verus/debug/libbuiltin_macros.$DYN_LIB_EXT \
--extern state_machines_macros=$VERUS/target-verus/debug/libstate_machines_macros.$DYN_LIB_EXT \
--extern vstd=$VERUS/target-verus/debug/libvstd.rlib \
--edition=2021 \
--cfg verus_keep_ghost \
--cfg verus_keep_ghost_body \
--cfg 'feature="std"' \
--cfg 'feature="alloc"' \
'-Zcrate-attr=feature(register_tool)' \
'-Zcrate-attr=register_tool(verus)' \
'-Zcrate-attr=register_tool(verifier)' \
'-Zcrate-attr=register_tool(verusfmt)' \
--crate-type=lib \
./lib.rs
# Run the post-processor.
$VERUS/target/debug/verusdoc
If you run it with a file lib.rs
like this:
#![allow(unused_imports)]
use builtin::*;
use builtin_macros::*;
use vstd::prelude::*;
verus!{
/// Computes the max
pub fn compute_max(x: u32, y: u32) -> (max: u32)
ensures max == (if x > y { x } else { y }),
{
if x < y {
y
} else {
x
}
}
}
It will generate rustdoc that looks like this:
Supported Rust Features
Quick reference for supported Rust features. Note that this list does not include all Verus features, and Verus has many spec/proof features without any standard Rust equivalent—this list only concerns Rust features. See the guide for more information about Verus’ distinction between executable Rust code, specification code, and proof code.
Note that Verus is in active development. If a feature is unsupported, it might be genuinely hard, or it might just be low priority. See the github issues or discussions for information on planned features.
Last Updated: 2024-06-26
Items | |
---|---|
Functions, methods, associated functions | Supported |
Associated constants | Not supported |
Structs | Supported |
Enums | Supported |
Const functions | Partially supported |
Async functions | Not supported |
Macros | Supported |
Type aliases | Supported |
Const items | Partially supported |
Static items | Partially supported |
Struct/enum definitions | |
Type parameters | Supported |
Where clauses | Supported |
Lifetime parameters | Supported |
Const generics | Partially Supported |
Custom discriminants | Not supported |
public / private fields | Partially supported |
Expressions and Statements | |
Variables, assignment, mut variables | Supported |
If, else | Supported |
patterns, match, if-let, match guards | Supported |
Block expressions | Supported |
Items | Not supported |
loop, while | Supported |
for | Partially supported |
? | Supported |
Async blocks | Not supported |
await | Not supported |
Unsafe blocks | Supported |
& | Supported |
&mut, place expressions | Partially supported |
==, != | Supported, for certain types |
Type cast (as) | Partially supported |
Compound assignments (+=, etc.) | Supported |
Array expressions | Partially supported (no fill expressions with `const` arguments) |
Range expressions | Supported |
Index expressions | Partially supported |
Tuple expressions | Supported |
Struct/enum constructors | Supported |
Field access | Supported |
Function and method calls | Supported |
Closures | Supported |
Labels, break, continue | Supported |
Return statements | Supported |
Integer arithmetic | |
Arithmetic for unsigned | Supported |
Arithmetic for signed (+, -, *) | Supported |
Arithmetic for signed (/, %) | Not supported |
Bitwise operations (&, |, !, >>, <<) | Supported |
Arch-dependent types (usize, isize) | Supported |
Types and standard library functionality | |
Integer types | Supported |
bool | Supported |
Strings | Supported |
Vec | Supported |
Option / Result | Supported |
Floating point | Not supported |
Slices | Supported |
Arrays | Supported |
Pointers | Partially supported |
References (&) | Supported |
Mutable references (&mut) | Partially supported |
Never type | Not supported |
Function pointer types | Not supported |
Closure types | Supported |
Trait objects (dyn) | Not supported |
impl types | Partially supported |
Cell, RefCell | Not supported (see vstd alternatives) |
Iterators | Not supported |
HashMap | Not supported |
Smart pointers (Box, Rc, Arc) | Supported |
Pin | Not supported |
Hardware intrinsics | Not supported |
Printing, I/O | Not supported |
Panic-unwinding | Not supported |
Traits | |
User-defined traits | Supported |
Default implementations | Supported |
Trait bounds on trait declarations | Supported |
Traits with type arguments | Partially supported |
Associated types | Partially supported |
Generic associated types | Partially supported (only lifetimes are supported) |
Higher-ranked trait bounds | Supported |
Clone | Supported |
Marker traits (Copy, Send, Sync) | Supported |
Standard traits (Hash, Debug) | Not supported |
User-defined destructors (Drop) | Not supported |
Sized (size_of, align_of) | Supported |
Deref, DerefMut | Not supported |
Multi-threading | |
Mutex, RwLock (from standard library) | Not supported |
Verified lock implementations | Supported |
Atomics | Supported (vstd equivalent) |
spawn and join | Supported |
Interior mutability | Supported |
Unsafe | |
Raw pointers | Supported (only pointers from global allocator) |
Transmute | Not supported |
Unions | Supported |
Supported (vstd equivalent) | |
Crates and code organization | |
Multi-crate projects | Partially supported |
Verified crate + unverified crates | Partially supported |
Modules | Supported |
rustdoc | Supported |
Verus Syntax
The code below illustrates a large swath of Verus’ syntax.
#![allow(unused_imports)]
use builtin::*;
use builtin_macros::*;
use vstd::{modes::*, prelude::*, seq::*, *};
#[verifier::external]
fn main() {}
verus! {
/// functions may be declared exec (default), proof, or spec, which contain
/// exec code, proof code, and spec code, respectively.
/// - exec code: compiled, may have requires/ensures
/// - proof code: erased before compilation, may have requires/ensures
/// - spec code: erased before compilation, no requires/ensures, but may have recommends
/// exec and proof functions may name their return values inside parentheses, before the return type
fn my_exec_fun(x: u32, y: u32) -> (sum: u32)
requires
x < 100,
y < 100,
ensures
sum < 200,
{
x + y
}
proof fn my_proof_fun(x: int, y: int) -> (sum: int)
requires
x < 100,
y < 100,
ensures
sum < 200,
{
x + y
}
spec fn my_spec_fun(x: int, y: int) -> int
recommends
x < 100,
y < 100,
{
x + y
}
/// exec code cannot directly call proof functions or spec functions.
/// However, exec code can contain proof blocks (proof { ... }),
/// which contain proof code.
/// This proof code can call proof functions and spec functions.
fn test_my_funs(x: u32, y: u32)
requires
x < 100,
y < 100,
{
// my_proof_fun(x, y); // not allowed in exec code
// let u = my_spec_fun(x, y); // not allowed exec code
proof {
let u = my_spec_fun(x as int, y as int); // allowed in proof code
my_proof_fun(u / 2, y as int); // allowed in proof code
}
}
/// spec functions with pub or pub(...) must specify whether the body of the function
/// should also be made publicly visible (open function) or not visible (closed function).
pub open spec fn my_pub_spec_fun1(x: int, y: int) -> int {
// function and body visible to all
x / 2 + y / 2
}
/* TODO
pub open(crate) spec fn my_pub_spec_fun2(x: u32, y: u32) -> u32 {
// function visible to all, body visible to crate
x / 2 + y / 2
}
*/
// TODO(main_new) pub(crate) is not being handled correctly
// pub(crate) open spec fn my_pub_spec_fun3(x: int, y: int) -> int {
// // function and body visible to crate
// x / 2 + y / 2
// }
pub closed spec fn my_pub_spec_fun4(x: int, y: int) -> int {
// function visible to all, body visible to module
x / 2 + y / 2
}
pub(crate) closed spec fn my_pub_spec_fun5(x: int, y: int) -> int {
// function visible to crate, body visible to module
x / 2 + y / 2
}
/// Recursive functions must have decreases clauses so that Verus can verify that the functions
/// terminate.
fn test_rec(x: u64, y: u64)
requires
0 < x < 100,
y < 100 - x,
decreases x,
{
if x > 1 {
test_rec(x - 1, y + 1);
}
}
/// Multiple decreases clauses are ordered lexicographically, so that later clauses may
/// increase when earlier clauses decrease.
spec fn test_rec2(x: int, y: int) -> int
decreases x, y,
{
if y > 0 {
1 + test_rec2(x, y - 1)
} else if x > 0 {
2 + test_rec2(x - 1, 100)
} else {
3
}
}
/// Decreases and recommends may specify additional clauses:
/// - decreases .. "when" restricts the function definition to a condition
/// that makes the function terminate
/// - decreases .. "via" specifies a proof function that proves the termination
/// - recommends .. "when" specifies a proof function that proves the
/// recommendations of the functions invoked in the body
spec fn add0(a: nat, b: nat) -> nat
recommends
a > 0,
via add0_recommends
{
a + b
}
spec fn dec0(a: int) -> int
decreases a,
when a > 0
via dec0_decreases
{
if a > 0 {
dec0(a - 1)
} else {
0
}
}
#[via_fn]
proof fn add0_recommends(a: nat, b: nat) {
// proof
}
#[via_fn]
proof fn dec0_decreases(a: int) {
// proof
}
/// variables may be exec, tracked, or ghost
/// - exec: compiled
/// - tracked: erased before compilation, checked for lifetimes (advanced feature, discussed later)
/// - ghost: erased before compilation, no lifetime checking, can create default value of any type
/// Different variable modes may be used in different code modes:
/// - variables in exec code are always exec
/// - variables in proof code are ghost by default (tracked variables must be marked "tracked")
/// - variables in spec code are always ghost
/// For example:
fn test_my_funs2(
a: u32, // exec variable
b: u32, // exec variable
)
requires
a < 100,
b < 100,
{
let s = a + b; // s is an exec variable
proof {
let u = a + b; // u is a ghost variable
my_proof_fun(u / 2, b as int); // my_proof_fun(x, y) takes ghost parameters x and y
}
}
/// assume and assert are treated as proof code even outside of proof blocks.
/// "assert by" may be used to provide proof code that proves the assertion.
#[verifier::opaque]
spec fn f1(i: int) -> int {
i + 1
}
fn assert_by_test() {
assert(f1(3) > 3) by {
reveal(f1); // reveal f1's definition just inside this block
}
assert(f1(3) > 3);
}
/// "assert by" can also invoke specialized provers for bit-vector reasoning or nonlinear arithmetic.
fn assert_by_provers(x: u32) {
assert(x ^ x == 0u32) by (bit_vector);
assert(2 <= x && x < 10 ==> x * x > x) by (nonlinear_arith);
}
/// "assert by" provers can also appear on function signatures to select a specific prover
/// for the function body.
proof fn lemma_mul_upper_bound(x: int, x_bound: int, y: int, y_bound: int)
by (nonlinear_arith)
requires
x <= x_bound,
y <= y_bound,
0 <= x,
0 <= y,
ensures
x * y <= x_bound * y_bound,
{
}
/// "assert by" can use nonlinear_arith with proof code,
/// where "requires" clauses selectively make facts available to the proof code.
proof fn test5_bound_checking(x: u32, y: u32, z: u32)
requires
x <= 0xffff,
y <= 0xffff,
z <= 0xffff,
{
assert(x * z == mul(x, z)) by (nonlinear_arith)
requires
x <= 0xffff,
z <= 0xffff,
{
assert(0 <= x * z);
assert(x * z <= 0xffff * 0xffff);
}
}
/// The syntax for forall and exists quantifiers is based on closures:
fn test_quantifier() {
assert(forall|x: int, y: int| 0 <= x < 100 && 0 <= y < 100 ==> my_spec_fun(x, y) >= x);
assert(my_spec_fun(10, 20) == 30);
assert(exists|x: int, y: int| my_spec_fun(x, y) == 30);
}
/// "assert forall by" may be used to prove foralls:
fn test_assert_forall_by() {
    assert forall|x: int, y: int| f1(x) + f1(y) == x + y + 2 by {
        reveal(f1);
    }
    assert(f1(1) + f1(2) == 5);
    assert(f1(3) + f1(4) == 9);
    // to prove forall|...| P ==> Q, write assert forall|...| P implies Q by {...}
    assert forall|x: int| x < 10 implies f1(x) < 11 by {
        assert(x < 10);
        reveal(f1);
        assert(f1(x) < 11);
    }
    assert(f1(3) < 11);
}

/// To extract ghost witness values from exists, use choose:
fn test_choose() {
    assume(exists|x: int| f1(x) == 10);
    proof {
        let x_witness = choose|x: int| f1(x) == 10;
        assert(f1(x_witness) == 10);
    }
    assume(exists|x: int, y: int| f1(x) + f1(y) == 30);
    proof {
        let (x_witness, y_witness): (int, int) = choose|x: int, y: int| f1(x) + f1(y) == 30;
        assert(f1(x_witness) + f1(y_witness) == 30);
    }
}
/// To manually specify a trigger to use for the SMT solver to match on when instantiating a forall
/// or proving an exists, use #[trigger]:
fn test_single_trigger1() {
    // Use [my_spec_fun(x, y)] as the trigger
    assume(forall|x: int, y: int| f1(x) < 100 && f1(y) < 100 ==> #[trigger] my_spec_fun(x, y) >= x);
}

fn test_single_trigger2() {
    // Use [f1(x), f1(y)] as the trigger
    assume(forall|x: int, y: int|
        #[trigger] f1(x) < 100 && #[trigger] f1(y) < 100 ==> my_spec_fun(x, y) >= x);
}

/// To manually specify multiple triggers, use #![trigger]:
fn test_multiple_triggers() {
    // Use both [my_spec_fun(x, y)] and [f1(x), f1(y)] as triggers
    assume(forall|x: int, y: int|
        #![trigger my_spec_fun(x, y)]
        #![trigger f1(x), f1(y)]
        f1(x) < 100 && f1(y) < 100 ==> my_spec_fun(x, y) >= x);
}
/// Verus can often automatically choose a trigger if no manual trigger is given.
/// Use the command-line option --triggers to print the chosen triggers.
fn test_auto_trigger1() {
    // Verus automatically chose [my_spec_fun(x, y)] as the trigger.
    // (It considers this safer, i.e. likely to match less often, than the trigger [f1(x), f1(y)].)
    assume(forall|x: int, y: int| f1(x) < 100 && f1(y) < 100 ==> my_spec_fun(x, y) >= x);
}

/// If Verus prints a note saying that it automatically chose a trigger with low confidence,
/// you can supply manual triggers or use #![auto] to accept the automatically chosen trigger.
fn test_auto_trigger2() {
    // Verus chose [f1(x), f1(y)] as the trigger; go ahead and accept that
    assume(forall|x: int, y: int| #![auto] f1(x) < 100 && f1(y) < 100 ==> my_spec_fun(3, y) >= 3);
}
/// &&& and ||| are like && and ||, but have low precedence (lower than all other binary operators,
/// and lower than forall/exists/choose).
/// &&& must appear before each conjunct, rather than between the conjuncts (similarly for |||).
/// &&& must appear directly inside a block or at the end of a block.
spec fn simple_conjuncts(x: int, y: int) -> bool {
    &&& 1 < x
    &&& y > 9 ==> x + y < 50
    &&& x < 100
    &&& y < 100
}

spec fn complex_conjuncts(x: int, y: int) -> bool {
    let b = x < y;
    &&& b
    &&& if false {
        &&& b ==> b
        &&& !b ==> !b
    } else {
        ||| b ==> b
        ||| !b
    }
    &&& false ==> true
}
/// ==> associates to the right, while <== associates to the left.
/// <==> is nonassociative.
/// == is SMT equality.
/// != is SMT disequality.
pub(crate) proof fn binary_ops<A>(a: A, x: int) {
    assert(false ==> true);
    assert(true && false ==> false && false);
    assert(!(true && (false ==> false) && false));
    assert(false ==> false ==> false);
    assert(false ==> (false ==> false));
    assert(!((false ==> false) ==> false));
    assert(false <== false <== false);
    assert(!(false <== (false <== false)));
    assert((false <== false) <== false);
    assert(2 + 2 !== 3);
    assert(a == a);
    assert(false <==> true && false);
}
/// In specs, <=, <, >=, and > may be chained together so that, for example, a <= b < c means
/// a <= b && b < c. (Note on efficiency: if b is a complex expression,
/// Verus will automatically introduce a temporary variable under the hood so that
/// the expression doesn't duplicate b: {let x_b = b; a <= x_b && x_b < c}.)
proof fn chained_comparisons(i: int, j: int, k: int)
    requires
        0 <= i + 1 <= j + 10 < k + 7,
    ensures
        j < k,
{
}
/// In specs, e@ is an abbreviation for e.view()
/// Many types implement a view() method to get an abstract ghost view of a concrete type.
fn test_views() {
    let mut v: Vec<u8> = Vec::new();
    v.push(10);
    v.push(20);
    proof {
        let s: Seq<u8> = v@; // v@ is equivalent to v.view()
        assert(s[0] == 10);
        assert(s[1] == 20);
    }
}
/// struct and enum declarations may be declared exec (default), tracked, or ghost,
/// and fields may be declared exec (default), tracked or ghost.
tracked struct TrackedAndGhost<T, G>(tracked T, ghost G);
/// Proof code may manipulate tracked variables directly.
/// Declarations of tracked variables must be explicitly marked as "tracked".
proof fn consume(tracked x: int) {
}

proof fn test_tracked(
    tracked w: int,
    tracked x: int,
    tracked y: int,
    z: int,
) -> tracked TrackedAndGhost<(int, int), int> {
    consume(w);
    let tracked tag: TrackedAndGhost<(int, int), int> = TrackedAndGhost((x, y), z);
    let tracked TrackedAndGhost((a, b), c) = tag;
    TrackedAndGhost((a, b), c)
}

/// Variables in exec code may be exec, ghost, or tracked.
fn test_ghost(x: u32, y: u32)
    requires
        x < 100,
        y < 100,
{
    let ghost u: int = my_spec_fun(x as int, y as int);
    let ghost mut v = u + 1;
    assert(v == x + y + 1);
    proof {
        v = v + 1; // proof code may assign to ghost mut variables
    }
    let ghost w = {
        let temp = v + 1;
        temp + 1
    };
    assert(w == x + y + 4);
}
/// Variables in exec code may be exec, ghost, or tracked.
/// However, exec function parameters and return values are always exec.
/// In these places, the library types Ghost and Tracked are used
/// to wrap ghost values and tracked values.
/// Ghost and tracked expressions Ghost(expr) and Tracked(expr) create values of type Ghost<T>
/// and Tracked<T>, where expr is treated as proof code whose value is wrapped inside Ghost or Tracked.
/// The view x@ of a Ghost or Tracked x is the ghost or tracked value inside the Ghost or Tracked.
fn test_ghost_wrappers(x: u32, y: Ghost<u32>)
    requires
        x < 100,
        y@ < 100,
{
    // Ghost(...) expressions can create values of type Ghost<...>:
    let u: Ghost<int> = Ghost(my_spec_fun(x as int, y@ as int));
    let mut v: Ghost<int> = Ghost(u@ + 1);
    assert(v@ == x + y@ + 1);
    proof {
        v@ = v@ + 1; // proof code may assign to the view of exec variables of type Ghost/Tracked
    }
    let w: Ghost<int> = Ghost(
        {
            // proof block that returns a ghost value
            let temp = v@ + 1;
            temp + 1
        },
    );
    assert(w@ == x + y@ + 4);
}

fn test_consume(t: Tracked<int>)
    requires
        t@ <= 7,
{
    proof {
        let tracked x = t.get();
        assert(x <= 7);
        consume(x);
    }
}
/// Ghost(...) and Tracked(...) patterns can unwrap Ghost<...> and Tracked<...> values:
fn test_ghost_unwrap(
    x: u32,
    Ghost(y): Ghost<u32>,
) // unwrap so that y has type u32, not Ghost<u32>
    requires
        x < 100,
        y < 100,
{
    // Ghost(u) pattern unwraps Ghost<...> values and gives u and v type int:
    let Ghost(u): Ghost<int> = Ghost(my_spec_fun(x as int, y as int));
    let Ghost(mut v): Ghost<int> = Ghost(u + 1);
    assert(v == x + y + 1);
    proof {
        v = v + 1; // assign directly to ghost mut v
    }
    let Ghost(w): Ghost<int> = Ghost(
        {
            // proof block that returns a ghost value
            let temp = v + 1;
            temp + 1
        },
    );
    assert(w == x + y + 4);
}
struct S {}
/// Exec code can use "let ghost" and "let tracked" to create local ghost and tracked variables.
/// Exec code can extract individual ghost and tracked values from Ghost and Tracked wrappers
/// with "let ...Ghost(x)..." and "let ...Tracked(x)...".
fn test_ghost_tuple_match(t: (Tracked<S>, Tracked<S>, Ghost<int>, Ghost<int>)) -> Tracked<S> {
    let ghost g: (int, int) = (10, 20);
    assert(g.0 + g.1 == 30);
    let ghost (g1, g2) = g;
    assert(g1 + g2 == 30);
    // b1, b2: Tracked<S> and g3, g4: Ghost<int>
    let (Tracked(b1), Tracked(b2), Ghost(g3), Ghost(g4)) = t;
    Tracked(b2)
}

/// Exec code can use a Ghost(...) or Tracked(...) unwrapped parameter
/// to create a mutable ghost or tracked parameter:
fn test_ghost_mut(Ghost(g): Ghost<&mut int>)
    ensures
        *g == *old(g) + 1,
{
    proof {
        *g = *g + 1;
    }
}

fn test_call_ghost_mut() {
    let ghost mut g = 10int;
    test_ghost_mut(Ghost(&mut g));
    assert(g == 11);
}
/// Spec functions are not checked for correctness (although they are checked for termination).
/// However, marking a spec function as "spec(checked)" enables lightweight "recommends checking"
/// inside the spec function.
spec(checked) fn my_spec_fun2(x: int, y: int) -> int
    recommends
        x < 100,
        y < 100,
{
    // Because of spec(checked), Verus checks that my_spec_fun's recommends clauses are satisfied here:
    my_spec_fun(x, y)
}

/// Spec functions may omit their body, in which case they are considered
/// uninterpreted (returning an arbitrary value of the return type depending on the input values).
/// This is safe, since spec functions (unlike proof and exec functions) may always
/// return arbitrary values of any type,
/// where the value may be a special "bottom" value for otherwise uninhabited types.
spec fn my_uninterpreted_fun1(i: int, j: int) -> int;

spec fn my_uninterpreted_fun2(i: int, j: int) -> int
    recommends
        0 <= i < 10,
        0 <= j < 10,
;
/// Trait functions may have specifications
trait T {
    proof fn my_uninterpreted_fun2(&self, i: int, j: int) -> (r: int)
        requires
            0 <= i < 10,
            0 <= j < 10,
        ensures
            i <= r,
            j <= r,
    ;
}

enum ThisOrThat {
    This(nat),
    That { v: int },
}

proof fn uses_is(t: ThisOrThat) {
    match t {
        ThisOrThat::This(..) => assert(t is This),
        ThisOrThat::That { .. } => assert(t is That),
    }
}

proof fn uses_arrow_matches_1(t: ThisOrThat)
    requires
        t is That ==> t->v == 3,
        t is This ==> t->0 == 4,
{
    assert(t matches ThisOrThat::This(k) ==> k == 4);
    assert(t matches ThisOrThat::That { v } ==> v == 3);
}

proof fn uses_arrow_matches_2(t: ThisOrThat)
    requires
        t matches ThisOrThat::That { v: a } && a == 3,
{
    assert(t is That && t->v == 3);
}

proof fn uses_spec_has(s: Set<int>, ms: vstd::multiset::Multiset<int>)
    requires
        s has 3,
        ms has 4,
{
    assert(s has 3);
    assert(s has 3 == s has 3);
    assert(ms has 4);
    assert(ms has 4 == ms has 4);
}
} // verus!
Variable modes
In addition to having three function modes, Verus has three variable modes: `exec`, `tracked`, and `ghost`. Only `exec` variables exist in the compiled code, while `ghost` and `tracked` variables are "erased" from the compiled code.
See this tutorial page for an introduction to the concept of modes. The tracked mode is an advanced feature, and is discussed more in the concurrency guide.
Variable modes and function modes
Which variables are allowed depends on the expression mode, according to the following table:
| | Default variable mode | ghost variables | tracked variables | exec variables |
|---|---|---|---|---|
| spec code | ghost | yes | | |
| proof code | ghost | yes | yes | |
| exec code | exec | yes | yes | yes |
Although `exec` code allows variables of any mode, there are some restrictions; see below.
Using `tracked` and `ghost` variables from a `proof` function
By default, any variable in a proof function has `ghost` mode. Parameters, variables, and return values may be marked tracked. For example:
fn some_proof_fn(tracked param: Foo) -> (tracked ret: RetType) {
    let tracked x = ...;
}
For return values, the `tracked` keyword can only apply to the entire return type. It is not possible to selectively apply `tracked` to individual elements of a tuple, for example.
To mix and match tracked and ghost data, there are a few possibilities. First, you can create a struct marked `tracked`, whose individual fields are marked either `ghost` or `tracked`. Second, you can use the `Tracked` and `Ghost` types from Verus's builtin library to create tuples like `(Tracked<X>, Ghost<Y>)`. These support pattern matching:
proof fn some_call() -> (tracked ret: (Tracked<X>, Ghost<Y>)) { ... }
proof fn example() {
    // The lower-case `tracked` keyword is used to indicate the right-hand side
    // has `proof` mode, in order to allow the `tracked` call.
    // The upper-case `Tracked` and `Ghost` are used in the pattern matching to unwrap
    // the `X` and `Y` objects.
    let tracked (Tracked(x), Ghost(y)) = some_call();
}
Using `tracked` and `ghost` variables from an `exec` function
Variables in `exec` code may be marked `tracked` or `ghost`. These variables will be erased when the code is compiled. However, there are some restrictions. In particular, variables marked `tracked` or `ghost` may be declared anywhere in an `exec` block, but such variables may only be assigned to from inside a `proof { ... }` block.
fn some_exec_fn() {
    let ghost mut x = 5; // this is allowed
    proof {
        x = 7; // this is allowed
    }
    x = 9; // this is not allowed
}
Furthermore:
- Arguments and return values for an `exec` function must be `exec` mode.
- Struct fields of an `exec` struct must be `exec` mode.
To work around these restrictions, programs can use the `Tracked` and `Ghost` types. As in proof code, Verus supports pattern matching for these types.
exec fn example() {
    // Because of the keyword `tracked`, Verus interprets the right-hand side
    // as if it were in a `proof` block.
    let tracked (Tracked(x), Ghost(y)) = some_call();
}
To handle parameters that must be passed via `Tracked` or `Ghost` types, you can unwrap them via pattern matching:
exec fn example(Tracked(x): Tracked<X>, Ghost(y): Ghost<Y>) {
    // Use `x` as if it were declared `let tracked x`
    // Use `y` as if it were declared `let ghost y`
}
Cheat sheet
Proof function, take tracked or ghost param
proof fn example(tracked x: X, ghost y: Y)
To call this function from proof code:
proof fn test(tracked x: X, ghost y: Y) {
    example(x, y);
}
To call this function from exec code:
fn test() {
    let tracked x = ...;
    let ghost y = ...;
    // From a proof block:
    proof { example(x, y); }
}
Proof function, return ghost param
proof fn example() -> (ret: Y)
To call this function from proof code:
proof fn test() {
    let y = example();
}
To call this function from exec code:
fn test() {
    let ghost y = example();
}
Proof function, return tracked param
proof fn example() -> (tracked ret: X)
To call this function from proof code:
proof fn test() {
    let tracked y = example();
}
To call this function from exec code:
fn test() {
    // In a proof block:
    proof { let tracked y = example(); }
    // Or outside a proof block:
    let tracked y = example();
}
Proof function, return both a ghost param and tracked param
proof fn example() -> (tracked ret: (Tracked<X>, Ghost<Y>))
To call this function from proof code:
proof fn test() {
    let tracked (Tracked(x), Ghost(y)) = example();
}
To call this function from exec code:
fn test() {
    // In a proof block:
    proof { let tracked (Tracked(x), Ghost(y)) = example(); }
    // or outside a proof block:
    let tracked (Tracked(x), Ghost(y)) = example();
}
Exec function, take a tracked and ghost parameter:
fn example(Tracked(x): Tracked<X>, Ghost(y): Ghost<Y>)
To call this function from exec code:
fn test() {
    let tracked x = ...;
    let ghost y = ...;
    example(Tracked(x), Ghost(y));
}
Exec function, return a tracked and ghost value:
fn example() -> (Tracked<X>, Ghost<Y>)
To call this function from exec code:
fn test() {
    let (Tracked(x), Ghost(y)) = example();
}
Exec function, take a tracked parameter that is a mutable reference:
fn example(Tracked(x): Tracked<&mut X>)
To call this function from exec code:
fn test() {
    let tracked mut x = ...;
    example(Tracked(&mut x));
}
Spec expressions
Many built-in operators are in spec mode, i.e., they can be used in specification expressions. This section discusses those operators.
Rust subset
Much of the spec language looks like a subset of the Rust language, though there are some subtle differences.
Function calls
Only pure function calls are allowed (i.e., calls to other `spec` functions or to functions marked with the `when_used_as_spec` directive).
Let-assignment
Spec expressions support `let`-bindings, but not `let mut`-bindings.
if / if let / match statements
These work as normal.
`&&` and `||`
These work as normal, though since all spec expressions are pure and effect-free, there is no notion of "short-circuiting".
Equality (`==`)
This is not the same thing as `==` in exec mode; see more on `==`.
Arithmetic
Arithmetic works a little differently in order to operate with Verus's `int` and `nat` types. See more on arithmetic.
References (`&T`)
Verus attempts to ignore `Box` and references as much as possible in spec mode. However, you still need to satisfy the Rust type checker, so you may need to insert references (`&`) or dereferences (`*`); Verus will ignore these operations.
Box
Verus special-cases `Box` along with box operations like `Box::new(x)` or `*box` so they may be used in spec mode. As with references, these operations are ignored, but they are often useful: for example, to create a recursive type you need to satisfy Rust's sanity checks, which often involves using a `Box`.
Operator Precedence
| Operator | Associativity |
|---|---|
| *Binds tighter* | |
| `.` `->` | left |
| `is` `matches` | left |
| `*` `/` `%` | left |
| `+` `-` | left |
| `<<` `>>` | left |
| `&` | left |
| `^` | left |
| `\|` | left |
| `!==` `==` `!=` `<=` `<` `>=` `>` | requires parentheses |
| `&&` | left |
| `\|\|` | left |
| `==>` | right |
| `<==` | left |
| `<==>` | requires parentheses |
| `..` | left |
| `=` | right |
| closures; `forall`, `exists`; `choose` | right |
| `&&&` | left |
| `\|\|\|` | left |
| *Binds looser* | |
All operators that are from ordinary Rust have the same precedence-ordering as in ordinary Rust. See also the Rust operator precedence.
Arithmetic in spec code
Note: This reference page is about arithmetic in Verus specification code. It does not apply to arithmetic in executable Rust code.
For an introduction to Verus arithmetic, see Integers and arithmetic.
Type widening
In spec code, the results of arithmetic are automatically widened to avoid overflow or wrapping. The types of various operators, given as functions of the input types, are summarized in the below table. Note that in most cases, the types of the inputs are not required to be the same.
| operation | LHS type | RHS type | result type | notes |
|---|---|---|---|---|
| `<=` `<` `>=` `>` | t1 | t2 | `bool` | |
| `==` `!=` | t1 | t2 | `bool` | |
| `+` | t1 | t2 | `int` | except for `nat + nat` |
| `+` | `nat` | `nat` | `nat` | |
| `-` | t1 | t2 | `int` | |
| `*` | t1 | t2 | `int` | except for `nat * nat` |
| `*` | `nat` | `nat` | `nat` | |
| `/` | t | t | `int` | for `i8`…`isize`, `int` |
| `/` | t | t | t | for `u8`…`usize`, `nat` |
| `%` | t | t | t | |
| `add(_, _)` | t | t | t | |
| `sub(_, _)` | t | t | t | |
| `mul(_, _)` | t | t | t | |
| `&` `\|` `^` | t | t | t | |
| `<<` `>>` | t1 | t2 | t1 | |
Definitions: Quotient and remainder
In Verus specifications, `/` and `%` are defined by Euclidean division, which may differ from the usual Rust `/` and `%` operators when operands are negative.
For `b != 0`, the quotient `a / b` and remainder `a % b` are defined as the unique integers `q` and `r` such that:
- `b * q + r == a`
- `0 <= r < |b|`
Note that:
- The remainder `a % b` is always nonnegative.
- The quotient is "floor division" when `b` is positive.
- The quotient is "ceiling division" when `b` is negative.
Also note that `a / b` and `a % b` are unspecified when `b == 0`. However, because all spec functions are total, division-by-0 and mod-by-0 are not hard errors.
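As a cross-check on this definition, the two conditions above can be exercised in plain (non-Verus) Rust. The helper name `euclid_div_mod` is made up for illustration; it derives the Euclidean quotient and remainder from Rust's truncating `/` and `%`:

```rust
/// Model of Verus's spec-mode `/` and `%` (Euclidean division), for b != 0.
fn euclid_div_mod(a: i64, b: i64) -> (i64, i64) {
    assert!(b != 0);
    let mut q = a / b; // Rust `/` truncates toward zero
    let mut r = a % b; // Rust `%` takes the sign of `a`
    if r < 0 {
        // Shift the remainder into the range 0 <= r < |b|.
        if b > 0 {
            q -= 1;
            r += b;
        } else {
            q += 1;
            r -= b;
        }
    }
    (q, r)
}

fn main() {
    // The two defining conditions hold for every sign combination:
    for &(a, b) in &[(7i64, 2i64), (-7, 2), (7, -2), (-7, -2)] {
        let (q, r) = euclid_div_mod(a, b);
        assert_eq!(b * q + r, a);
        assert!(0 <= r && r < b.abs());
    }
    assert_eq!(euclid_div_mod(-7, 2), (-4, 1)); // floor division: b > 0
    assert_eq!(euclid_div_mod(-7, -2), (4, 1)); // ceiling division: b < 0
}
```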
More advanced arithmetic
The Verus standard library includes the following additional arithmetic functions usable in spec expressions:
- Exponentiation (`vstd::arithmetic::power::pow`)
- Power of two (`vstd::arithmetic::power2::pow2`)
- Integer logarithm (`vstd::arithmetic::logarithm::log`)
Bitwise ops
See bitwise operators.
Bit operators
Definitions
`&`, `|`, and `^`
These have the usual meanings: bitwise-AND, bitwise-OR, and bitwise-XOR. Verus, like Rust, requires the input operands to be the same type, even in specification code. However, as binary operators defined over the integers, ℤ x ℤ → ℤ, these operations are independent of bitwidth. This is true even for negative operands, as a result of the way two's complement sign-extension works.
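The bitwidth-independence claim can be spot-checked in plain Rust (an illustration only, not Verus code): sign-extending the operands first gives the same result as operating at the narrow width, even when the operands are negative.

```rust
// Check that &, |, ^ commute with two's complement sign extension (i8 -> i32).
fn bitops_match_after_widening(a: i8, b: i8) -> bool {
    ((a & b) as i32 == (a as i32) & (b as i32))
        && ((a | b) as i32 == (a as i32) | (b as i32))
        && ((a ^ b) as i32 == (a as i32) ^ (b as i32))
}

fn main() {
    // Exhaustive over all i8 pairs, including negative operands.
    for a in i8::MIN..=i8::MAX {
        for b in i8::MIN..=i8::MAX {
            assert!(bitops_match_after_widening(a, b));
        }
    }
}
```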
`>>` and `<<`
Verus specifications, like Rust, do not require the left and right sides of a shift operator to have the same type. Shift is unspecified when the right-hand side is negative. Unlike in executable code, however, there is no upper bound on the right-hand side.
`a << b` and `a >> b` both have the same type as `a`.
Right shift can be defined over the integers, ℤ x ℤ → ℤ, independently of the input bitwidth. For `<<`, however, the result does depend on the input type, because a left shift may involve truncation if some bits get shifted "off to the left". There is no widening to an `int` (unlike, say, Verus specification `+`).
Reasoning about bit operators
In Verus’s default prover mode, the definitions of these bitwise operators are not exported. To prove nontrivial facts about bitwise operators, use the bit-vector solver or the compute solver.
Coercion with as
In spec code, any "integer type" may be coerced to any other integer type via `as`. For the sake of this page, "integer type" means any of the following:
- `i8`, `i16`, `i32`, `i64`, `i128`, `isize`
- `u8`, `u16`, `u32`, `u64`, `u128`, `usize`
- `int`
- `nat`
- `char`
Note that this is more permissive than `as` in Rust exec code. For example, Rust does not permit using `as` to cast from a `u16` to a `char`, but this is allowed in Verus spec code.
Definition
Verus defines `as`-casting as follows:
- Casting to an `int` is always defined and does not require truncation.
- Casting to a `nat` is unspecified if the input value is negative.
- Casting to a `char` is unspecified if the input value is outside the allowed `char` values.
- Casting to any other finite-size integer type is defined as truncation: taking the lower N bits for the appropriate N, then interpreting the result as a signed or unsigned integer.
Reasoning about truncation
The definition of truncation is not exported in Verus's default prover mode (i.e., it behaves as if it were unspecified). To reason about truncation, use the bit-vector solver or the compute solver.
Also note that the value of N for `usize` and `isize` may be configured with the `global` directive.
Spec equality (`==`)
The spec equality operator `==` is explained in Equality.
Extensional equality (`=~=` and `=~~=`)
The extensional equality operators `=~=` and `=~~=` are explained in Extensional equality.
Prefix and/or (`&&&` and `|||`)
The prefix and/or operators (`&&&` and `|||`) are explained in Expressions and operators for specifications.
Chained operators
In spec code, equality and inequality operators can be chained. For example, `a <= b < c` is equivalent to `a <= b && b < c`. Chained inequalities support `<`, `<=`, `>`, `>=`, and `==`, and support sequences of chained operators of arbitrary length.
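The desugaring, including the temporary variable that avoids duplicating the middle operand, can be sketched in plain Rust (the helper name is made up for illustration):

```rust
// `a <= b < c` desugars to a conjunction sharing one evaluation of `b`.
fn chained_le_lt(a: i64, b: i64, c: i64) -> bool {
    let tmp_b = b; // `b` is evaluated once, even if it were a complex expression
    a <= tmp_b && tmp_b < c
}

fn main() {
    assert!(chained_le_lt(0, 1, 2));  // 0 <= 1 < 2
    assert!(!chained_le_lt(0, 2, 2)); // 2 < 2 fails
    assert!(!chained_le_lt(3, 1, 2)); // 3 <= 1 fails
}
```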
Implication (==>, <==, and <==>)
The operator `P ==> Q`, read "P implies Q", is equivalent to `!P || Q`. This can also be written backwards: `Q <== P` is equivalent to `P ==> Q`. Finally, `P <==> Q` is equivalent to `P == Q`. It is sometimes useful for readability, and because `<==>` has the same syntactic precedence as `==>` rather than the precedence of `==`.
Spec quantifiers (`forall`, `exists`)
Quantifiers are explained in the Quantifiers part of the tutorial. Specifically, `forall` is explained in forall and triggers, and `exists` is explained in exists and choose.
Such that (`choose`)
The such-that operator (`choose`) is explained in exists and choose.
Trigger annotations
Verus assigns triggers to every quantifier expression (`forall`, `exists`, `choose`) in the program, including "implicit" quantifiers such as those in `broadcast` lemmas.
There are many implications of triggers on proof automation that Verus developers should be aware of. See the relevant chapter of the guide, particularly the section on multiple triggers and matching loops.
This page explains the procedure Verus uses to determine these triggers from Verus source code.
Terminology: trigger groups and trigger expressions
Every quantifier has a number of quantifier variables. To control how the solver instantiates these variables, trigger groups are used.
- To every quantifier, Verus determines a collection of trigger groups.
- Every trigger group is a collection of trigger expressions.
By necessity, any trigger group is only well-formed if every quantifier variable is used by at least one trigger expression in the group.
Note that:
- The SMT solver will instantiate any quantifier whenever any trigger group fires.
- However, a trigger group will only fire if every expression in the group matches.
Therefore:
- Having more trigger groups causes the quantifier to be instantiated more often.
- A trigger group with more trigger expressions will fire less often.
Selecting trigger groups
Verus determines the collection of trigger groups as follows:
- Verus finds all applicable `#[trigger]` and `#[trigger(n)]` annotations in the body of the quantifier.
  - In the case of nested quantifiers, every `#[trigger]` or `#[trigger(n)]` annotation is applicable to exactly one quantifier expression: the innermost quantifier which binds a variable used by the trigger expression.
- All applicable expressions marked by `#[trigger]` become a trigger group.
- All applicable expressions marked by `#[trigger(n)]` for the same `n` become a trigger group.
- Every annotation `#![trigger EXPR1, …, EXPRk]` at the root of the quantifier expression becomes a trigger group.
- If, after all of the above, no trigger groups have been identified, Verus may use heuristics to determine the trigger group(s) based on the body of the quantifier expression.
  - If `#![all_triggers]` is provided, Verus uses an "aggressive" strategy, selecting all trigger groups that can reasonably be inferred as applicable from the body.
  - If `#![auto]` is provided, Verus uses a "conservative" strategy that selects only one trigger group.
  - If neither `#![all_triggers]` nor `#![auto]` is provided, Verus uses the same "conservative" strategy as it does for `#![auto]`.
- If, after all of the above, Verus is unable to find any trigger groups, it produces an error.
Trigger logging
By default, Verus often prints verbose information about selected triggers in cases where Verus's heuristics are "un-confident" in the selected trigger groups. You can silence this information on a case-by-case basis using the `#![auto]` attribute. When `#![auto]` is applied to a quantifier, this tells Verus that you want the automatically selected triggers even when Verus is un-confident, in which case this logging is silenced.
The behavior can be configured through the command line:
| Option | Behavior |
|---|---|
| `--triggers-silent` | Do not show automatically chosen triggers |
| `--triggers-selective` | Default. Show automatically chosen triggers only when heuristics are un-confident and `#![auto]` has not been supplied |
| `--triggers` | Show all automatically chosen triggers for verified modules |
| `--triggers-verbose` | Show all automatically chosen triggers for verified modules and imported definitions from other modules |
The view function @
The expression `expr@` is a shorthand for `expr.view()`. The `view()` function is a Verus convention for the abstraction of an exec-mode object, usually defined by the `View` trait. However, the expansion of the `@` syntax is purely syntactic, so it does not necessarily correspond to the trait function.
Spec index operator []
In spec expressions, the index operator is treated differently than in exec expressions, where it corresponds to the usual Rust index operator. Specifically, in a spec expression, the expression `expr[i]` is a shorthand for `expr.spec_index(i)`. This is a purely syntactic transformation, and there is no particular trait.
decreases_to!
The expression `decreases_to!(e1, e2, …, en => f1, f2, …, fn)` is a bool indicating whether the left-hand sequence `e1, e2, …, en` lexicographically-decreases-to the right-hand sequence `f1, f2, …, fn`. The lexicographic-decreases-to relation is used to check the `decreases` measure for spec functions. See this tutorial chapter for an introductory discussion of lexicographic-decreases.
Definition
We say that `e1, e2, …, en` lexicographically-decreases-to `f1, f2, …, fn` if there exists a `k` where `1 <= k <= n` such that:
- `ek` decreases-to `fk`.
- For each `i` where `1 <= i < k`, `ei == fi`.
The decreases-to relation is a partial order on all values; values of different types are comparable. The relation permits, but is not necessarily limited to:
- If `x` and `y` are integers, where `x > y >= 0`, then `x` decreases-to `y`.
- If `a` is a datatype (struct, tuple, or enum) and `f` is one of its "potentially recursive" fields, then `a` decreases-to `a.f`.
  - For a datatype `X`, a field is considered "potentially recursive" if it either mentions `X` or a generic type parameter of `X`.
- If `f` is a `spec_fn`, then `f` decreases-to `f(i)`.
- If `s` is a `Seq`, then `s` decreases-to `s[i]`.
- If `s` is a `Seq`, then `s` decreases-to `s.subrange(i, j)` if the given range is strictly smaller than `0 .. s.len()`. (`axiom_seq_len_decreases` provides a more general axiom; it must be invoked explicitly.)
- If `v` is a `Vec`, then `v` decreases-to `v@`.
These axioms are triggered when the relevant expression (e.g., `x.f`, `x->f`, `s[i]`, `v@`) is used as part of a decreases-to expression.
Notes
- Tuples are not compared lexicographically; tuples are datatypes, which are compared as explained above, e.g., `a` decreases-to `a.0`. Only the "top level" sequences in a `decreases_to!` expression are compared lexicographically.
- Sequences are not compared on `len()` alone. However, you can always use `s.len()` as a decreases-measure instead of `s`.
Examples
proof fn example_decreases_to(s: Seq<int>)
    requires s.len() == 5
{
    assert(decreases_to!(8int => 4int));
    // fails: can't decrease to negative number
    // assert(decreases_to!(8 => -2));
    // Comma-separated elements are treated lexicographically:
    assert(decreases_to!(12int, 8int, 1int => 12int, 4int, 50000int));
    // Datatypes decrease-to their fields:
    let x = Some(8int);
    assert(decreases_to!(x => x->0));
    let y = (true, false);
    assert(decreases_to!(y => y.0));
    // fails: tuples are not treated lexicographically
    // assert(decreases_to!((20, 9) => (11, 15)));
    // sequence decreases-to an element of the sequence
    assert(decreases_to!(s => s[2]));
    // sequence decreases-to a subrange of the sequence
    assert(decreases_to!(s => s.subrange(1, 3)));
}
assert … by
The `assert ... by` statement is used to encapsulate a proof. For a boolean spec expression `P`, one writes:
assert(P) by {
    // ... proof here
}
// ... remainder
Verus will validate the proof and then attempt to use it to prove `P`. The contents of the proof, however, will not be included in the context used to prove the remainder; only `P` will be introduced into the context for the remainder.
assert forall … by
The `assert forall ... by` statement is used to write a proof of a `forall` expression while introducing the quantified variables into the context.
assert forall |idents| P by {
    // ... proof here
}
// ... remainder
Much like an ordinary `assert ... by` statement, the proof inside the body does not enter the context for the remainder of the proof; only the `forall |idents| P` expression enters the context. Furthermore, within the proof body, the variables in `idents` may be accessed directly. Note that the parentheses must be left off, in contrast to other kinds of `assert` statements.
For convenience, you can use `implies` to introduce a hypothesis automatically into the proof block:
assert forall |idents| H implies P by {
    // ... proof here
}
// ... remainder
This will make `H` available in the proof block, so you only have to prove `P`. In the end, the predicate `forall |idents| H ==> P` will be proved.
assert … by(bit_vector)
Invoke Verus's bit-vector solver to prove the given predicate. This is particularly useful for bitwise operators and integer arithmetic on finite-width integers. Internally, the solver uses a technique called bit-blasting, which represents each numeric variable by its binary representation as a bit vector, and every operation as a boolean circuit.
assert(P) by(bit_vector);
assert(P) by(bit_vector)
    requires Q;
The prover does not have access to any prior context except that which is given in the `requires` clause, if provided. If the `requires` clause is provided, then the bit-vector solver attempts to prove `Q ==> P`. Verus will also check (using its normal solver) that `Q` holds from the prior proof context.
The expressions P
and Q
may only contain expressions that the bit solver understands.
This includes:
- Variables of type bool or finite-width integer types (u64, i64, usize, etc.).
  All free variables are treated symbolically. Even if a variable is defined via a
  let statement declared outside the bitvector assertion, this definition is not
  visible to the solver.
- Integer and boolean literals
- Non-truncating arithmetic (+, -, *, /, and %)
- Truncating arithmetic (add, sub, mul functions)
- Bit operations (&, |, ^, !, <<, >>)
- Equality and inequality (==, !=, <, >, <=, >=)
- Boolean operators (&&, ||, ^) and conditional expressions
- The usize::BITS constant
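For example, here is a fact relating shifts to multiplication, which the normal solver cannot prove directly (a sketch; note that the hypothesis must be restated in the requires clause, since the bitvector solver sees no outer context):

```rust
proof fn shift_is_mul(x: u64)
    requires x < 0x1000_0000
{
    // The bitvector solver relates `<<` to multiplication;
    // the bound on `x` rules out truncation.
    assert(x << 2 == x * 4) by(bit_vector)
        requires x < 0x1000_0000;
}
```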
Internal operation
Verus’s bitvector solver encodes the expression by representing all integers using an SMT “bitvector” type. Most of the above constraints arise because Verus must choose a fixed bitwidth for any given expression.
Note that, although the bitvector solver cannot handle free variables of type
int
or nat
, it can handle other kinds of expressions that are typed int
or nat
.
For example, if x
and y
have type u64
, then x + y
has type int
,
but the Verus bitvector solver knows that x + y
is representable with 65 bits.
Handling usize
and isize
If the expression uses any symbolic values whose width is architecture-dependent,
and the architecture bitwidth has not been specified via a global
directive,
Verus will generate multiple queries, one for each possible bitwidth (32 bits or 64 bits).
assert … by(nonlinear_arith)
Invoke Z3’s nonlinear solver to prove the given predicate.
assert(P) by(nonlinear_arith);
assert(P) by(nonlinear_arith)
    requires Q;
The solver uses Z3’s theory of nonlinear arithmetic. This can often solve problems
that involve multiplication or division of symbolic values. For example,
commutativity axioms like a * b == b * a
are accessible in this mode.
The prover does not have access to any prior context except that which is given in
the requires clause, if provided. If the requires clause is provided, then the
nonlinear solver attempts to prove Q ==> P. Verus will also check (using its normal solver)
that Q holds from the prior proof context.
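For example, here is a simple bound on a product (a sketch; as with bit_vector, the hypotheses must be restated in the requires clause):

```rust
proof fn product_bound(x: int, y: int)
    requires 0 <= x, x < 100, 0 <= y, y < 100
{
    // Requires nonlinear reasoning, since `x * y` multiplies
    // two symbolic values.
    assert(x * y < 10000) by(nonlinear_arith)
        requires 0 <= x, x < 100, 0 <= y, y < 100;
}
```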
assert … by(compute) / by(compute_only)
See this section of the tutorial for motivation and an example.
A statement of the form:
assert(P) by(compute_only);
will evaluate the expression P
as far as possible, and Verus accepts the result if it
evaluates to the boolean expression true
. It unfolds function definitions and evaluates
arithmetic expressions. It is capable of some symbolic manipulation, but it does not handle
algebraic laws like a + b == b + a
, and it works best when evaluating constant expressions.
Note that it will not substitute local variables, instead treating them as symbolic values.
This statement:
assert(P) by(compute);
will first run the interpreter as above, but if it doesn’t succeed, it will then attempt
to finish the problem through the normal solver. So for example, if after expansion
P
results in a trivial expression like a + b == b + a
, then it should be solved
with by(compute)
.
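For example, a purely constant assertion that the interpreter can evaluate outright (a sketch):

```rust
proof fn constant_arith() {
    // Fully constant, so the interpreter evaluates it to `true`
    // without involving the SMT solver.
    assert(2 * 3 * 5 * 7 * 11 == 2310) by(compute_only);
}
```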
Memoization
The #[verifier::memoize]
attribute can be used to mark
certain functions for memoizing.
This will direct Verus’s internal interpreter to only evaluate the function once for any
given combination of arguments. This is useful for functions that would be impractical
to evaluate naively, as in this example:
#[verifier::memoize]
spec fn fibonacci(n: nat) -> nat
decreases n
{
if n == 0 {
0
} else if n == 1 {
1
} else {
fibonacci((n - 2) as nat) + fibonacci((n - 1) as nat)
}
}
proof fn test_fibonacci() {
assert(fibonacci(63) == 6557470319842) by(compute_only);
}
reveal
, reveal_with_fuel
, hide
These attributes control whether and how Verus will unfold the definition of a spec function
while solving. For a spec function f
:
- reveal(f) directs Verus to unfold the definition of f when it encounters a use of f.
- hide(f) directs Verus to treat f as an uninterpreted function, without reasoning about its definition.
Technically speaking, Verus handles “function unfolding” by
creating axioms of the form forall |x| f(x) == (definition of f(x))
.
Thus, reveal(f)
makes this axiom accessible to the solver,
while hide(f)
makes this axiom inaccessible.
By default, functions are always revealed when they are in scope. This can be changed
by marking the function with the #[verifier::opaque]
attribute.
The reveal_with_fuel(f, n)
directive is used for recursive functions.
The integer n
indicates how many times Verus should unfold a recursive function.
Limiting the fuel to a finite amount is necessary to avoid
trigger loops.
The default fuel (absent any reveal_with_fuel
directive) is 1.
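For example, proving a concrete value of a recursive function may need more fuel than the default (a sketch, using a hypothetical factorial function):

```rust
spec fn factorial(n: nat) -> nat
    decreases n
{
    if n == 0 { 1 } else { n * factorial((n - 1) as nat) }
}

proof fn test_factorial() {
    // Four unfoldings are needed to reach the base case from 3.
    reveal_with_fuel(factorial, 4);
    assert(factorial(3) == 6);
}
```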
opens_invariants
The opens_invariants
clause may be applied to any proof
or exec
function.
This indicates the set of names of tracked invariants that may be opened by the function.
At this time, it has three forms. See the documentation for open_local_invariant
for more information about why Verus enforces these restrictions.
fn example()
opens_invariants any
{
// Any invariant may be opened here
}
or:
fn example()
opens_invariants none
{
// No invariant may be opened here
}
or:
fn example()
opens_invariants [ $EXPR1, $EXPR2, ... ]
{
// Only invariants with names in [ $EXPR1, $EXPR2, ... ] may be opened.
}
Defaults
For exec
functions, the default is opens_invariants any
.
For proof
functions, the default is opens_invariants none
.
Unwinding signature
For any exec
-mode function, it is possible to specify whether that function may unwind. The allowed forms of the signature are:
- No signature (default) - This means the function may unwind.
- no_unwind - This means the function may not unwind.
- no_unwind when {boolean expression in the input arguments} - If the given
  condition holds, then the call is guaranteed to not unwind.
  - no_unwind when true is equivalent to no_unwind
  - no_unwind when false is equivalent to the default behavior
By default, a function is allowed to unwind. (Note, though, that Verus does rule out common sources of unwinding, such as integer overflow, even when the function signature technically allows unwinding.)
Example
Suppose you want to write a function which takes an index, and that you want to specify:
- The function will execute normally if the index is in-bounds
- The function will unwind otherwise
You might write it like this:
fn get(&self, i: usize) -> (res: T)
ensures i < self.len() && res == self[i]
no_unwind when i < self.len()
This effectively says:
- If i < self.len(), then the function will not unwind.
- If the function returns normally, then i < self.len() (equivalently, if
  i >= self.len(), then the function must unwind).
Restrictions with invariants
You cannot unwind when an invariant is open. This restriction is necessary because an unwinding operation does not necessarily abort a program. Rust allows a program to “catch” an unwind, for example, or there might be other threads to continue execution. As a result, Verus cannot permit the program to exit an invariant-block early without restoring the invariant, not even for unwinding.
This restriction is what enables Verus to rule out exception safety violations.
Drops
If you implement Drop
for a type, you are required to give it a signature of no_unwind
.
Signature inheritance
Usually, the developer does not write a signature for methods in a trait implementation, as the signatures are inherited from the trait declaration. However, the signature can be modified in limited ways. To ensure soundness of the trait system, Verus has to make sure that the signature on any function is at least as strong as the corresponding signature on the trait declaration.
- All requires clauses in a trait declaration are inherited in the trait
  implementation. The user cannot add additional requires clauses in a trait
  implementation.
- All ensures clauses in a trait declaration are inherited in the trait
  implementation. Furthermore, the user can add additional ensures clauses in
  the trait implementation.
- The opens_invariants signature is inherited in the trait implementation and
  cannot be modified.
- The unwinding signature is inherited in the trait implementation and cannot
  be modified.
When a trait function is called, Verus will attempt to statically resolve the function to a particular trait implementation. If this is possible, it uses the possibly-stronger specification from the trait implementation; in all other cases, it uses the generic specification from the trait declaration.
Specifications on FnOnce
For any function object, i.e., a value of any type that implements FnOnce
(for example, a named function, or a closure) the signature can be reasoned about generically
via the Verus built-in functions call_requires
and call_ensures
.
- call_requires(f, args) is a predicate indicating whether f is safe to call
  with the given args. For any non-static call, Verus requires the developer to
  prove that call_requires(f, args) is satisfied at the call-site.
- call_ensures(f, args, output) is a predicate indicating whether it is possible
  for f to return the given output when called with args. For any non-static
  call, Verus will assume that call_ensures(f, args, output) holds after the
  call-site.
- At this time, the opens_invariants aspect of the signature is not treated
  generically. Verus conservatively treats any non-static call as if it might
  open any invariant.
The args
is always given as a tuple (possibly a 0-tuple or 1-tuple).
See the tutorial chapter for examples and more tips.
For any function with a Verus signature (whether a named function or a closure), Verus generates axioms resembling the following:
(user-declared requires clause) ==> call_requires(f, args)
call_ensures(f, args, output) ==> (user-declared ensures clauses)
Verus uses implication (==>) rather than a strict equivalence (<==>) in part to allow
flexible signatures in traits.
However, the axioms use this form for all functions, not just trait functions.
This form reflects the proper way to write specifications for higher-order functions.
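For example, a higher-order function can forward its argument's specification generically (a sketch following this pattern; the function name is hypothetical):

```rust
fn apply_once(f: impl FnOnce(u64) -> u64, x: u64) -> (res: u64)
    requires call_requires(f, (x,))   // args given as a 1-tuple
    ensures call_ensures(f, (x,), res)
{
    f(x)
}
```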
decreases … when … via …
The decreases
clause is necessary for ensuring termination of recursive and mutually-recursive
functions. See this tutorial page for an introduction.
Overview
A collection of functions is mutually recursive if their call graph is strongly connected (i.e., every function in the collection depends, directly or indirectly, on every function in the collection). (A single function that calls itself forms a mutually recursive collection of size 1.) A function is recursive if it is in some mutually recursive collection.
A recursive spec function is required to supply a decreases
clause, which takes
the form:
decreases EXPR_1, ...
[ when BOOL_EXPR ]?
[ via FUNCTION_NAME ]?
The sequence of expressions in the decreases clause is the decreases-measure.
The expressions in the decreases-measure and the expression in the when
-clause
may reference the function’s arguments.
Verus requires that, for any two mutually recursive functions, the number of elements in their decreases-measure must be the same.
The decreases-measure
Verus checks that, when a recursive function calls itself or any other function in its mutually recursive collection, the decreases-measure of the caller decreases-to the decreases-measure of the callee. See the formal definition of decreases-to.
The when
clause
If the when
clause is supplied, then the given condition may be assumed when
proving the decreases properties.
However, the function will only be defined when the when
clause is true.
In other words, something like this:
fn f(...) -> _
decreases ...
when condition
{
body
}
Will be equivalent to this:
fn f(args...) -> _
{
if condition {
body
} else {
some_unknown_function(args...)
}
}
The via
clause
Sometimes, it may be true that the decreases-measure decreases, but Verus cannot prove it automatically. In this case, the user can supply a lemma to prove the decreases property.
If the via
clause is supplied,
the FUNCTION_NAME
must be the name of a proof
function defined in the same module.
The number of expressions in the decreases
clause must be the same for each function
in a mutually recursive collection.
This proof function must also be annotated with the #[via_fn]
attribute.
It is the job of the proof function to prove the relevant decreases property for each call site.
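A sketch of the shape of a via clause together with its associated proof function:

```rust
spec fn f(x: nat) -> nat
    decreases x
    via f_decreases
{
    if x == 0 { 0 } else { f((x - 1) as nat) + 1 }
}

#[via_fn]
proof fn f_decreases(x: nat) {
    // Prove here that the measure decreases for each recursive call.
    // (In this simple case, no extra lemmas would be needed.)
}
```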
Type invariants
Structs and enums may be augmented with a type invariant, a boolean predicate indicating well-formedness of a value. The type invariant applies to any exec object or tracked-mode ghost object and does not apply to spec objects.
Type invariants are primarily intended for encapsulating and hiding invariants.
Declaring a type invariant
A type invariant may be declared with the #[verifier::type_invariant]
attribute.
It can be declared either as a top-level item or in an impl block.
#[verifier::type_invariant]
spec fn type_inv(x: X) -> bool { ... }
impl X {
#[verifier::type_invariant]
spec fn type_inv(self) -> bool { ... }
}
It can be inside an impl
block and take self
, or it can be declared as a top-level item.
It can have any name.
The invariant predicate must:
- Be a spec function of type (X) -> bool or (&X) -> bool, where X is the type
  the invariant is applied to.
- Be applied to a datatype (struct or enum) that:
  - Is declared in the same crate
  - Has no fields public outside of the crate
There is no restriction that the type invariant function have the same visibility as the type it is declared for, only that it is visible whenever the type invariant needs to be asserted or assumed (as described below). Since type invariants are intended for encapsulation, it is recommended that it be as private as possible.
Enforcing that the type invariant holds
For any type X with a type invariant,
Verus enforces that the predicate always holds for any exec object or tracked-mode
ghost object of type X. Therefore, Verus adds a proof obligation that the predicate holds:
- For any constructor expression of X
- After any assignment to a field of X
- After any function call that takes a mutable borrow to X
Currently, there is no support for “temporarily breaking” a type invariant, though this capability may be added in the future. This can often be worked around by taking mutable borrows to the fields.
Applying the type invariant
Though the type invariant is enforced automatically, it is not provided to the user automatically.
For any object x: X
with a type invariant, you can call the builtin pseudo-lemma use_type_invariant
to learn that the type invariant holds on x
.
use_type_invariant(&x);
The value x
must be a tracked or exec variable.
This statement is a proof feature, and if it appears in an exec
function, it must be in
a proof
block.
Example
struct X {
i: u8,
j: u8,
}
impl X {
#[verifier::type_invariant]
spec fn type_inv(self) -> bool {
self.i <= self.j
}
}
fn example(x: X) {
proof {
use_type_invariant(&x);
}
assert(x.i <= x.j); // succeeds
}
fn example_caller() {
let x = X { i: 20, j: 30 }; // succeeds
example(x);
}
fn example_caller2() {
let x = X { i: 30, j: 20 }; // fails
example(x);
}
Attributes
all_triggers
atomic
auto
accept_recursive_types
external
external_body
external_fn_specification
external_type_specification
ext_equal
inline
loop_isolation
memoize
opaque
reject_recursive_types
reject_recursive_types_in_ground_variants
rlimit
trigger
truncate
type_invariant
when_used_as_spec
#![all_triggers]
Applied to a quantifier, and instructs Verus to aggressively select trigger groups for the quantifier. See the trigger specification procedure for more information.
Unlike most Verus attributes, this does not require the verifier::
prefix.
#[verifier::atomic]
The attribute #[verifier::atomic]
can be applied to any exec-mode function to indicate
that it is “atomic” for the purposes of the atomicity check by
open_atomic_invariant!
.
Verus checks that the body is indeed atomic, unless the function is also marked
external_body
, in which case this feature is assumed together with the rest of the function
signature.
This attribute is used by vstd
’s trusted atomic types.
#![auto]
Applied to a quantifier, and indicates intent for Verus to use heuristics to automatically infer trigger groups. Technically, this has no effect on verification, but it may impact verbose trigger logging. See the trigger specification procedure for more information.
Unlike most Verus attributes, this does not require the verifier::
prefix.
#[verifier::external]
Tells Verus to ignore the given item. Verus will error if any verified code attempts to reference the given item.
This can have nontrivial implications for the TCB of a verified crate; see here.
#[verifier::inline]
The attribute #[verifier::inline]
can be applied to any spec-mode function to indicate
that Verus should automatically expand its definition in the SMT-LIB encoding.
This has no effect on the semantics of the function but may impact triggering.
#[verifier::loop_isolation]
The attributes #[verifier::loop_isolation(false)]
and #[verifier::loop_isolation(true)]
can be applied to modules, functions, or individual loops. For any loop, the most specific
applicable attribute will take precedence.
This attribute impacts the deductions that Verus can make automatically inside the loop
body (absent any loop invariants).
- When set to true: Verus does not automatically infer anything inside the loop
  body, not even function preconditions.
- When set to false: Verus automatically makes some facts from outside the loop
  body available in the loop body. In particular, any assertion outside the loop
  body that depends only on variables not mutated by the loop body will also be
  available inside the loop.
#[verifier::memoize]
The attribute #[verifier::memoize]
can be applied to any spec-mode function to indicate
that the by(compute)
and by(compute_only)
prover-modes
should “memoize” the results of this function.
#[verifier::opaque]
Directs the solver to not automatically reveal the definition of this function.
The definition can then be revealed locally via the reveal
and reveal_with_fuel
directives.
#[verifier::rlimit(n)]
and #[verifier::rlimit(infinity)]
The rlimit
option can be applied to any function to configure the computation limit
applied to the solver for that function.
The default rlimit
is 10. The rlimit is roughly proportional to the amount of time taken
by the solver before it gives up. The default, 10, is meant to be around 2 seconds.
The rlimit may be set to infinity
to remove the limit.
The rlimit can also be configured with the --rlimit
command line option.
#[trigger]
Used to manually specify trigger groups for a quantifier. See the trigger specification procedure for more information.
Unlike most Verus attributes, this does not require the verifier::
prefix.
#[verifier::truncate]
The #[verifier::truncate]
attribute can be added to expressions to silence
recommends-checking regarding out-of-range as-casts.
When casting from one integer
type to another, Verus usually inserts recommends-checks that the source
value fits into the target type. For example, if x
is a u32
and we cast it
via x as u8
, Verus will add a recommends-check that 0 <= x < 256
.
However, sometimes truncation is the desired behavior, so
#[verifier::truncate]
can be used to signal this intent, suppressing
the recommends-check.
Note that the attribute is optional, even when truncation behavior is intended. The only effect of the attribute is to silence the recommends-check, which is already elided if the enclosing function body has no legitimate verification errors.
Aside. When truncation is intended, the bit-vector solver mode is often useful for writing proofs about truncation.
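For example (a sketch with a hypothetical helper function):

```rust
fn low_byte(x: u32) -> u8 {
    // Truncation is intended here; the attribute suppresses the
    // recommends-check that `x` fits in a `u8`.
    #[verifier::truncate] (x as u8)
}
```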
#[verifier::type_invariant]
Declares that a spec function is a type invariant for some datatype. See type invariants.
The “global” directive
By default, Verus has no access to layout information, such as the size
(std::mem::size_of::<T>()
)
or alignment (std::mem::align_of::<T>()
)
of a struct.
Such information is often unstable (i.e., it may vary between versions of Rust)
or may be platform-dependent (such as the size of usize
).
This information can be provided to Verus as needed using the global
directive.
For a type T
, and integer literals n
or m
, the global
directive is a Verus item
that takes the form:
global layout T is size == n, align == m;
Either size
or align
may be omitted. The global directive both:
- Exports the axioms size_of::<T>() == n and align_of::<T>() == m for use in
  Verus proofs
- Creates a “static” check ensuring the given values are actually correct when
  compiled.
Note that the second check only happens when codegen is run; an “ordinary” verification pass will not perform this check. This ensures that the check is always performed on the correct platform, but it may cause surprises if you spend time on verification without running codegen.
In order to keep the layout stable, it is recommended to use Rust attributes
like #[repr(C)].
Keep in mind that the Verus verifier gets no information from these attributes.
Layout information can only be provided to Verus via the global
directive.
With usize
and isize
For the integer types usize
and isize
, the global
directive has additional behavior.
Specifically, it influences the integer range used in encoding usize
and isize
types.
For an integer literal n
, the directive,
global layout usize is size == n;
Tells Verus that:
- usize::BITS == 8 * n
- isize::BITS == 8 * n
- The integer range for usize (usize::MIN ..= usize::MAX) is 0 ..= 2^(8n) - 1
- The integer range for isize (isize::MIN ..= isize::MAX) is -2^(8n-1) ..= 2^(8n-1) - 1
By default (i.e., in the absence of a global
directive regarding usize
or isize
),
Verus assumes that the size is either 4 or 8, i.e., that the integer range is
either 32 bits or 64 bits.
Example
global layout usize is size == 4;
fn test(x: usize) {
// This passes because Verus assumes x is a 32-bit integer:
assert(x <= 0xffffffff);
assert(usize::BITS == 32);
}
Static items
Verus supports static items, similar to const
items. Unlike const
items, though,
static
items are only usable in exec
mode. Note that this requires them to be
explicitly marked as exec
:
exec static x: u64 = 0;
The reason for this is consistency with const: for an unmarked const item, the
default mode is the dual spec-exec mode.
However, this mode is not supported for static items; therefore, static items
need to be explicitly marked exec.
Note there are some limitations to the current support for static
items.
Currently, a static item cannot be referenced from a spec expression. This means, for example,
that you can’t prove that two uses of the same static item give the same value
if those uses are in different functions. We expect this limitation will be lifted
in the future.
The char
primitive
Citing the Rust documentation on char
:
A char is a ‘Unicode scalar value’, which is any ‘Unicode code point’ other than a surrogate code point. This has a fixed numerical definition: code points are in the range 0 to 0x10FFFF, inclusive. Surrogate code points, used by UTF-16, are in the range 0xD800 to 0xDFFF.
Verus treats char
similarly to bounded integer primitives like u64
or u32
: We represent
char
as an integer. A char
always carries an invariant that it is in the prescribed set
of allowed values:
[0, 0xD7ff] ∪ [0xE000, 0x10FFFF]
In spec code, chars can be cast to and from other integer types using as. This is more
. This is more
permissive than exec code, which disallows many of these coercions.
As with other coercions, the result may be undefined if the integer being coerced does not
fit in the target range.
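For example, the built-in invariant on char values is available to the solver (a sketch):

```rust
proof fn char_bounds(c: char) {
    // Follows from the invariant that every char is a Unicode
    // scalar value in [0, 0xD7FF] ∪ [0xE000, 0x10FFFF].
    assert(0 <= c as int && c as int <= 0x10FFFF);
}
```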
Unions
Verus supports Rust unions.
Internally, Verus represents unions much like enums. However, Rust syntax for accessing
unions differs from that for enums. In Rust, a field of a union is accessed with field access:
u.x
. Verus allows this operation in exec-mode, and Verus always checks it is well-formed,
i.e., it checks that u
is the correct “variant”.
In spec-mode, you can use the built-in spec operators is_variant
and get_union_field
to reason about a union. Both operators refer to the field name via string literals.
- is_variant(u, "field_name") returns true if u is in the "field_name" variant.
- get_union_field::<U, T>(u, "field_name") returns a value of type T, where T is
  the type of "field_name". (Verus will error if T does not match between the
  union and the generic parameter T of the operator.)
Example
union U {
x: u8,
y: bool,
}
fn union_example() {
let u = U { x: 3 };
assert(is_variant(u, "x"));
assert(get_union_field::<U, u8>(u, "x") == 3);
unsafe {
let j = u.x; // this operation is well-formed
assert(j == 3);
let j = u.y; // Verus rejects this operation
}
}
Note on unsafe
The unsafe
keyword is needed to satisfy Rust, because Rust treats union field access
as an unsafe operation. However, the operation is safe in Verus because Verus is able to
check its precondition. See more on how Verus handles memory safety.
Pointers and cells
See the vstd documentation for more information on handling these features.
- For cells, see PCell
- For pointers to fixed-sized heap allocations, see PPtr.
- For general support for *mut T and *const T, see vstd::raw_ptr
Record flag
Sometimes, you might wish to record an execution trace of Verus to share, along with all the necessary dependencies to reproduce an execution. This might be useful either for packaging up your verified project or for reporting a Verus bug to the issue tracker.
The --record
flag will do precisely this. In particular, to record an execution of Verus (say, verus foo --bar --baz
), simply add the --record
flag (for example, verus foo --bar --baz --record
). This will re-run Verus, and package all the relevant source files, along with the execution output and version information into a zip file (yyyy-mm-dd-hh-mm-ss.zip
) in your current directory.