Warning: The formal verification approach presented within this article, that of swapping assertions and assumptions, can lead to false positives. See this article for more details.

Fig 1. The ZipCPU
Block diagram of the ZipCPU, showing five pipeline stages

If you’ve been following my work with the ZipCPU, you’ll know that I have formally verified all of the leaf modules, and that I am now working on verifying the CPU as a whole. By “leaf module”, I mean a design component that references no other design components. You might also remember that I am rather new to formal verification, having only picked it up within this last year. The ZipCPU may well be the most complex design I’ve tried to verify yet.

Prior to my attempts at formally verifying the ZipCPU, the most difficult proof I had worked on was for a proprietary block floating point module. That module assigned a single exponent to all of the values within a block of 2^N numbers, and then output the input numbers sequentially together with their assigned floating point value. Like the ZipCPU, it also involved proving a module with a leaf module underneath it. The alignment was very difficult to get right in that implementation, and so I depended heavily upon the formally verified properties of those two modules for the assurance that the component even worked.

Along the way, I’ve learned that there are some tricks to aggregating formally verified submodules together in order to prove the whole. I’ve now learned two particular techniques, abstraction and invariants, to help control the complexity of a formal proof. Today’s discussion will focus on invariants. If the Lord is willing, we’ll come back and discuss how to use abstraction, and present several examples of it.

I would love to declare, before starting out, that I am an expert on these techniques. I’m not. Perhaps if I had found some article or set of articles describing them I might be able to declare some amount of expertise. Sadly, while I found a small number of articles describing invariants, I struggled to relate the concepts presented within those articles to the formal verification problems I was dealing with. The one source I found to guide me along this road was a presentation from OneSpin Solutions at DVCon 2018 in San Jose.

So with credit to OneSpin Solutions, here’s what I have learned about invariants and how they apply to Formal Verification. More than that, I’ll try to keep this presentation simple enough to be understandable, and focused on how these topics apply to the formal verification of complex RTL modules.

Formal Property Review

If you are coming into the discussion late, then there are two concepts you must understand in order to follow what’s below. These are the assume() and assert() SystemVerilog operators. I like to explain these with a set of diagrams describing the state of a design at any given point in time.

Fig 2. Formal regions of relevance

One such example state diagram is shown in Fig 2. Consider every point in this image as representing one state of all the flip-flops in your design. Any change to one of the flip-flops in your design will move your system’s state to a new point on the diagram.

Several parts of this diagram are worth noting.

First, note that your design starts from an initial state, as defined by the initial statements within your design. From this point, the inputs to your design together with the logic within your design will cause the design to move from one state to another.

Second, note that there are three different types of states you can be in. The first set of states are the set of “valid” states. These are all of the states that may be reached from your initial state without violating any assertions within your design. These are shown in green.

The second set of states are those shown in red. These are the “illegal” or “invalid” states, defined as all those states that violate an assert() statement within your design. Your goal, using formal methods, will be to prove that you cannot cross from a point in the set of valid states to one of the invalid states.

There is a third set of states. This set is shown by the largest region containing all of the others. This is the universe of all possible combinations of flip-flop (register) values within your design. Hence, if you have N flip-flops in your design, then without any further assumptions there are initially 2^N states in this set. Not all of these states will be reachable from your initial state.

Fig 3. The Effect of an Assumption

There are two sorts of formal statements you can make about the flip-flops within your design. The first is the assume() statement. This reduces the size of the universe of values your flip-flops may be set to.

To use the example of a counter,

always @(posedge i_clk)
	if (i_reset)
		r_value <= 0;
	else if (i_start)
		r_value <= TIMEOUT;
	else if (r_value != 0)
		r_value <= r_value - 1'b1;

we might assume that the i_reset signal is always zero.

always @(*)
	assume(!i_reset);

This will restrict the universe of possibilities to only those where i_reset is always zero.

The goal of formal methods is to examine every possible state your design can enter into in order to prove that your design will never reach an invalid state. As I’m sure you can imagine, this task can be a challenge. In general, this challenge is NP-hard: the computational complexity grows roughly exponentially with the number of state bits in your design. Hence, the smaller you can make the universe of possibilities, the easier it will be to verify a design.

Fig 4. The Effect of an Assertion

The second basic formal statement is the assert() statement. Unlike the assume() statement which limits the size of the search space, the assert() statement declares particular states within that universe are somehow “illegal” as shown in Fig 4. When you use formal methods, your goal will be to guarantee your design remains in a legal state.

If we return to the timer example above, we might assert that the value of the counter is never one greater than TIMEOUT.

always @(*)
	assert(r_value != TIMEOUT+1);

When applying formal methods to this example, the engine will quickly point out that the initial state may lie within the red or illegal states–since we did nothing to restrict r_value’s contents initially. Giving r_value an initial value fixes this problem.

initial r_value = 0;

These two operators, assume and assert, alone should be enough to get you started with formal methods, and there’s a lot you can do with them. However, if you want to prove that your design will work for all time rather than just the first S time steps, then you will need to apply the formal method called induction.

Induction starts by assuming your design is somewhere among the universe of all possible states, and that the initial state the engine has chosen is not illegal. Its first step is to create S time steps where your design stays out of the illegal set of states. Then, on the S+1 time step, it tries to see if it is possible to enter into the set of illegal states.

The difficult part of this inductive step is that the formal engine cannot tell the difference between reachable states and unreachable states. This can be particularly problematic for the new user of formal methods to understand. A common refrain is, “how did the formal engine put my design into this state? There’s no way it can get there!”

Fig 5. The Lesson of Induction

Addressing this problem was the subject of a prior article. In that article, I explained that you must use either an assume or an assert to keep the formal engine from reaching any unreachable states, or you will never be able to fully prove your design meets the properties you have asserted for all time.

Following our example of a timer, the formal engine might pick a value for r_value that was TIMEOUT+S+2. It would then follow r_value for S steps until r_value was TIMEOUT+2. On the next step, our property that (r_value != TIMEOUT+1) would be violated.

To fix this, one would need to assert instead that r_value <= TIMEOUT.

always @(*)
	assert(r_value <= TIMEOUT);

This grows the size of the “red” area until, together with the set of valid states, it covers the whole universe, as shown in Fig 5. No legal but unreachable state remains for the induction engine to start from. To follow the consequence of this in our example above, r_value would never be allowed to be TIMEOUT+S+2. It would only be allowed to hold a valid value.
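
To pull these pieces together, here’s a minimal, self-contained version of the timer. This is a sketch only: the 16-bit width and the particular TIMEOUT value are assumptions of mine, not part of the original snippets. With the initial statement and the strengthened assertion in place, this module passes both the bounded check and induction.

module timer #(
	// Assumed width and default value, chosen only for illustration
	parameter [15:0]	TIMEOUT = 16'd1000
) (
	input	wire		i_clk, i_reset, i_start,
	output	reg	[15:0]	r_value
);
	initial	r_value = 0;

	always @(posedge i_clk)
	if (i_reset)
		r_value <= 0;
	else if (i_start)
		r_value <= TIMEOUT;
	else if (r_value != 0)
		r_value <= r_value - 1'b1;

`ifdef	FORMAL
	// The strengthened assertion: r_value never exceeds TIMEOUT.
	// Unlike (r_value != TIMEOUT+1), this also rules out every
	// unreachable state the induction engine might otherwise start from.
	always @(*)
		assert(r_value <= TIMEOUT);
`endif
endmodule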

To know which of the two, assert or assume, is appropriate at any given time, I’ve always used what I call the master rule of Formal Verification shown in Fig 6: assume inputs, then assert properties of both internal states and outputs.

Fig 6. Master rule of Formal Verification

This rule has served me well for all of my formal verification efforts to date. Now that I’ve discovered I need to aggregate modules together, I’ve had to learn some of the fine nuances of this rule. These nuances are the subject of the rest of this article.

The Concept of an Invariant

What the master rule of formal verification doesn’t capture is how you deal with things that have already been proven. For example, let’s suppose A is a set of assumptions, and B is a set of assertions. Once you’ve proven that A implies B, which I shall write as A->B, then you shouldn’t need to prove it again.

Fig 7. Once proven, an implication such as A->B may be treated as an assumption

This is the concept of an Invariant. Once you know that A->B, then A->B becomes an invariant of your design. Instead of reproving it, you may now treat it as an assumption. This becomes especially useful when trying to deal with complex proofs. If you can reduce the complexity, you can then verify larger and larger designs.
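
In Verilog terms, and glossing over the macro machinery we’ll meet below, the swap looks something like the following sketch, where a_holds and b_holds are hypothetical signals standing in for the sets A and B.

// First pass: prove the implication A -> B
always @(*)
if (a_holds)
	assert(b_holds);

// Later passes: A -> B is now an invariant, so it may be assumed
// rather than re-proven
always @(*)
if (a_holds)
	assume(b_holds);

Only one of these two forms would be active in any given proof. The macros we’ll meet later in this article exist precisely to select between them.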

There are two situations in which I have found this to apply. The first is that of a parent module with a child (or leaf) module underneath it. The second is when the proof of a design can be broken into separate sections. I’ve encountered both while trying to verify the ZipCPU. We’ll examine each in turn.

Fig 8. ZipCPU's module hierarchy

To explain the first situation, consider the module hierarchy (file structure) of the ZipCPU shown in Fig 8. The ZipCPU consists of a master CPU file, an instruction decoder, an ALU with a multiply component within it, a twin bus arbiter, and one of two memory controllers. I use the red bar in Fig 8 to indicate the formal properties within the design. Normally these properties are found at the end of the source file. You’d therefore normally expect this red bar to be at the bottom of each file within the diagram. However, since the concept of invariants turns these properties on end, I show the red bar in this figure at the top of the file–for illustration purposes.

The other three components of the ZipCPU were abstracted. These were the prefetch, divide, and multiply. These are shown in Fig 8 as empty files with dotted lines around them, indicating that due to the abstraction they have very little logic remaining within them. You can find their abstract representations in my bench/formal directory. These also made the proof easier, but for now we’ll leave the mechanics of this as a topic for a future article.

Fig 9. Invariants applied to a parent/leaf module hierarchy

To understand how I applied the principle of invariants to this design, consider Fig 9. This shows a parent module and a leaf or child module–both with assertions and assumptions within them.

In my first step, I verified that the leaf module works. This is shown in the left side of Fig 9. During this step, I ignored the parent module, and only proved that if the child’s assumptions held then the child would never enter into an illegal state.

Once the assumptions and module logic had been used to prove the assertions within the module, I then switched my focus to the parent. This is shown on the right of Fig 9. In this case, I no longer needed to prove the properties of the child. Instead, I needed to prove the properties of the parent. To do this, I asserted that the assumptions of the child held, and then assumed that the assertions therefore held as well.

Hmm, that statement was about as confusing as some of the mathematical articles I’ve read on this topic. Let me try explaining that again.

Suppose we let LA refer to the set of assumptions within the leaf module, and LB refer to the set of assertions within the leaf module. By formally verifying that module, I’ve now proved that LA -> LB. Unwrapping this a touch, this is equivalent to the statement that either LA is false, or LB must be true: (!LA)||(LB). Said another way, either one of the assumptions of this module must be false, or all the assertions must be true.

Now I want to move on to the parent module. In this case, I have the additional assumptions PA and assertions PB from the parent and I would like to prove that PA->PB. However, I also need to deal with the leaf module. For the leaf module, I know that LA->LB. What I don’t know is whether or not LA holds since it consists of values provided by the parent module. Unless and until I know that LA holds, I really know nothing about LB. Further, since the wires composing LA come from the parent, what I really need to do is to assert these properties hold in order to guarantee the proper functioning of the leaf. Hence, I will now assert LA and assume LB. This is backwards from how I treated these components before. As a result, I will assume PA&LB and assert PB&LA.

The fascinating part of this is that the master rule of formal verification still applies. We’re still assuming the inputs to the parent module and asserting the properties of the internal state and any outputs. How can this be? It works because the flip-flops composing the internal state of the parent module are the input connections to the child. Hence the assumptions of the child’s inputs are now outputs from the parent and so they may be asserted. Likewise the child’s outputs are now the parent’s inputs, and so it makes sense to make assumptions about them.

If this is still confusing, relax: I’ll show some code snippets illustrating how I applied this concept in the next section.

Warning: This process just discussed is flawed, and can leave you believing your design works when it does not. See this article for more details.

The second way that invariants can be used within a design is within a given design component (module). In my case, it was within the ZipCPU core. In this case, we’ll separate the assertions into groups of increasingly complex logic, called stages in Fig 10 below.

Fig 10. Invariants applied within a file

Let’s call these sets of assertions B0, B1, and B2. For the first formal proof, we’ll prove that the assumptions together with the design logic prove the B0 property, A->B0. Within the ZipCPU core, I call these PHASE_ONE_ASSERTions. Once we know that A->B0, we turn our attention to proving B1. In that case, we now know that A is true by assumption, but we also know that B0 is true by the implication we just proved. So for this second stage proof, we assume A and B0 and attempt to prove B1. We can then repeat this again, assuming A, B0, and now B1 and proving B2.

If you’ve examined the (current) formal proof of the ZipCPU (it’s still a work in progress), you may have noticed that I am using both of these methods. First, I am verifying that the component pieces to the ZipCPU work as desired. Then, I am aggregating those into the proof of the ZipCPU as a whole. Second, within the ZipCPU, I’ve created two parts to the proof: B0 and B1. I’m expecting to create a third and possibly fourth component later. Currently, I can prove both B0 and B1 using SymbiYosys.

According to OneSpin Solutions, this method can increase the depth of the proof, or rather the number of state transitions that can be examined in a reasonable amount of time, by perhaps a thousandfold. These are their numbers, though. In my own experience, I can only say that formally verifying the second stage of the ZipCPU used to take longer than all night. (I’m not really sure how long–I never let it finish.) It now takes just over an hour.

At one time, I used this approach to formally verify the ZipCPU. I’ve since adjusted my approach, and the newer proof works better than it had when using this approach.

For the sake of OneSpin Solutions, who had recommended this method to me, I’m hoping they are quietly aware of the flaws within it.

How this concept appears in Verilog

The previous sections have discussed a lot about formal verification in the abstract. They’ve been so far from coded reality that I would imagine I’ve now left several readers wondering what I’m talking about. So let’s bring these abstract concepts to reality, and discuss how they might look within a piece of Verilog code.

We’ll start with a fairly plain Verilog module. Almost all of my modules have the following rough format.

// Copyright statement
`default_nettype	none
//
module thismodule( ...);
	// Core logic
`ifdef	FORMAL
	// Formal properties

	always @(*)
		assume(i_some_input);
	always @(*)
		assert(o_some_output);
`endif
endmodule

They start with a copyright statement, and then declare the default_nettype to be none instead of wire. (This catches a lot of bugs.) The module definition then follows with the core logic within it. Following the module logic, there’s an ifdef FORMAL delimited section ending just before the endmodule on the last line of the file. Inside that section I place any assumptions or assertions regarding the logic above. Indeed, if you browse through any of the code I’ve formally verified, you’ll find this to be the common form.

If it is possible that this might be a leaf or child module to some other module within a formal proof, then I’ll make some adjustments to the formal section. First, I’ll use a synthesis define to indicate THISMODULE is the module being verified. Inside the module’s yosys script, I’ll also modify the read_verilog command to include a -D THISMODULE. So, for example, within the pipelined memory controller module, there’s a check for whether or not PIPEMEM is defined. Likewise, you can see the definition within its yosys script. Second, I’ll define macros which I can then use in place of the assume and assert statements. If THISMODULE is defined, these will refer to assume and assert as expected. If not, they’ll be swapped. Then I rewrite the formal properties to use these macros.

`ifdef	FORMAL
`ifdef	THISMODULE
`define	ASSUME	assume
`define	ASSERT	assert
`else
`define	ASSUME	assert
`define	ASSERT	assume
`endif
	// Formal properties
	always @(*)
		`ASSUME(i_some_input);
	always @(*)
		`ASSERT(o_some_output);
`endif

This is how I handle creating the logic pictured in Fig 9 above within any of the non-abstracted child modules.

Perhaps the best example of how this might be useful is in the ZipCPU’s pipelined memory controller. The “contract” the ZipCPU has with the user is that it will not speculatively execute memory operations–since the ZipCPU places both memory and peripherals on the same bus. Bus operations may be pipelined, meaning that multiple reads may be ongoing at any given time. Now consider: what would happen if one of those reads set the program counter? There would be no way to undo any of the other reads that might be in progress by that point.

For example, consider the following string of loads. In the ZipCPU ISA, an LW instruction loads a word of data from the bus into the register given as the second argument. (ZipCPU instructions read left to right.) The second to the last of these loads reads a value into the program counter.

LW (R0),R1	; Load word from the address in R0 into the R1 register
LW 4(R0),R2	; R2 <- Mem[R0+4]
LW 8(R0),R3
LW 12(R0),R4
LW 16(R0),R5
LW 20(R0),PC	; PC <= Mem[R0+20]
LW 24(R0),R6	; Must not issue until the last load has completed

Any time the program counter (PC) is set, the CPU jumps to a new instruction. Hence, this memory read into the PC is really a jump instruction.

Should the CPU initiate the read into R6? No. Not until the read into the PC completes.

To check for this, the memory controller assumes that any read into the program counter must be the last read in a sequence. To formally prove this in an inductive manner, I need to check via assertions that nothing in the pipeline of ongoing reads, other than possibly the last, is a read into the program counter. Once proven, the controller then asserts that if any output is to the program counter, then it must be the last return value in the sequence. When this component is aggregated, the “check every FIFO element” code is quietly removed, being replaced only by the final assertion on the output. That final assertion, however, has been replaced by this process with an assumption that no longer needs to be checked–simplifying the proof of the ZipCPU as a whole.
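
As a sketch of what those properties might look like, consider the fragment below. The signal names (i_stb, o_busy, o_valid, i_oreg) are hypothetical stand-ins rather than the controller’s actual interface, and the real controller derives its checks from the contents of its request FIFO, but the shape of the assume/assert split is the same.

`ifdef	FORMAL
	localparam [3:0]	PC = 4'hf;	// Register 15: the program counter

	wire		f_issue = (i_stb)&&(!o_busy);	// A new read is accepted
	reg	[3:0]	f_outstanding;	// Reads issued, data not yet returned
	reg		f_pc_pending;	// One of them targets the PC

	initial	{ f_outstanding, f_pc_pending } = 0;
	always @(posedge i_clk)
	if (i_reset)
		{ f_outstanding, f_pc_pending } <= 0;
	else begin
		case({ f_issue, o_valid })
		2'b10: f_outstanding <= f_outstanding + 1'b1;
		2'b01: f_outstanding <= f_outstanding - 1'b1;
		default: begin end
		endcase

		if ((f_issue)&&(i_oreg[3:0] == PC))
			f_pc_pending <= 1'b1;
		else if (f_outstanding == ((o_valid) ? 1:0))
			f_pc_pending <= 1'b0;
	end

	// The contract: while a read into the PC remains outstanding,
	// the CPU must not issue any further requests
	always @(*)
	if (f_pc_pending)
		`ASSUME(!i_stb);

	// The per-element FIFO checks, that nothing other than the last
	// outstanding read targets the PC, would be written here with
	// `ASSERT, so that they flip into assumptions once this module is
	// aggregated into the proof of the CPU as a whole
`endif

When PIPEMEM isn’t defined, the `ASSUME above becomes an assertion the CPU must prove about its own request stream, which is exactly the swap described earlier.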

The second method of applying invariants is the application within a given file, as shown in Fig 10 above. In this case, you want to prove several sets of assertions. Within the ZipCPU core, I call these PHASE_ONE, PHASE_TWO, and so on. Associated with each phase is an assertion macro, PHASE_ONE_ASSERT, PHASE_TWO_ASSERT, etc. To then verify the component, you’d run it through the formal engine twice: first as is, to test all of the phase one assertions, and then again with PHASE_TWO defined in order to test the phase two assertions. The second set of assertions is excluded from the first pass by a synthesis ifdef directive.

You can see the SymbiYosys script for this here.

The relevant code, shown below, was lifted from the beginning of the ZipCPU’s formal properties section. It starts out with a commented list of all of the formal phases supported by the ZipCPU module.

`ifdef	FORMAL
//
// PHASE_X definitions control our assertion logic below.  They are to be
// defined by the synthesizer
//
// `define PHASE_TWO
// `define PHASE_THREE
// `define PHASE_FOUR
//

These aren’t strictly necessary, but they remind me which defines I am supporting.

Then each of the phases is given its own assertion macro.

`define	PHASE_ONE_ASSERT	assert
`define	PHASE_TWO_ASSERT	assert
`define	PHASE_THR_ASSERT	assert
//

If PHASE_TWO is defined, the PHASE_ONE_ASSERT macro is then redefined as an assumption.

`ifdef	PHASE_TWO
`undef	PHASE_ONE_ASSERT
`define	PHASE_ONE_ASSERT	assume

Likewise, if PHASE_THREE is also defined then the PHASE_TWO_ASSERT is redefined to be an assumption. This process then repeats for all of the phases supported by the design.

`ifdef	PHASE_THREE
`undef	PHASE_TWO_ASSERT
`define	PHASE_TWO_ASSERT	assume

`ifdef	PHASE_FOUR
`undef	PHASE_THR_ASSERT
`define	PHASE_THR_ASSERT	assume
`endif // PHASE_FOUR
`endif // PHASE_THREE
`endif // PHASE_TWO

At this point, the formal properties can proceed in sections. The first section makes any input assumptions necessary.

	// An example assumption
	always @(*)
		assume(i_some_input);

Then the first set of assertions follows these assumptions.

	// An example assertion
	always @(*)
		`PHASE_ONE_ASSERT(o_some_output);

This set of assertions is created using the PHASE_ONE_ASSERT macro. That will allow us to replace these assertions with assumptions in the next pass–once they’ve been proven true in a first pass.

Later, we can split into a second section of assertions–but only after we’ve used the formal tools to verify the first set of assertions, and only if PHASE_TWO is defined. In this latter section, assertions are created using the PHASE_TWO_ASSERT macro. Further, in order to get into this section, the PHASE_ONE_ASSERT macro used in the previous section will have been redefined to be an assume statement instead of the original assert statement.

`ifdef	PHASE_TWO
	always @(*)
		`PHASE_TWO_ASSERT(o_some_other_output);
`endif // PHASE_TWO
`endif // FORMAL

In the case of the ZipCPU, the second phase of formal verification includes several free variables (yosys’s $anyconst) used to track an arbitrary instruction from an arbitrary address working its way through the ZipCPU’s logic.

	assign	f_const_insn    = $anyconst;
	assign	f_const_addr    = $anyconst;
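
By themselves these constants constrain nothing. The following is a hedged sketch of how they might be tied into the design, using hypothetical prefetch signal names (pf_valid, pf_instruction, pf_instruction_pc): whenever the fetch returns the arbitrary address, the engine is told it returned the arbitrary instruction.

	// Hypothetical names: if the prefetch ever returns our arbitrary
	// address, assume it returned our arbitrary instruction
	always @(*)
	if ((pf_valid)&&(pf_instruction_pc == f_const_addr))
		assume(pf_instruction == f_const_insn);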

Using this method, I can reconstruct the operands for any operation before that operation is issued, and verify that they have the right values.

	always @(posedge i_clk)
	if (...)	// details
	begin
		if (...)	// more details ...
		begin
			// Check that the first operand matches
			if (fc_rA)
				`PHASE_TWO_ASSERT(f_Av == op_Av);
			// Now check the second operand
			`PHASE_TWO_ASSERT(f_Bv == op_Bv);
		end
	end

If you’ll recall from the article describing the ZipCPU’s instruction set, almost all ZipCPU instructions have the basic form: OP.C Ra,Rb+I, where Ra and Rb are arbitrary registers, and I is some immediate constant.

Fig 11. ZipCPU's ALU stage in context

Prior to entering the ALU stage, the ZipCPU adds the Rb register value and I together. Then, while waiting for the ALU stage to become available, any write to either Ra or Rb adjusts these values.

What makes this difficult is that, in order to keep clock speed high, there’s no opportunity to re-add the immediate constant I to Rb–that would take an extra clock tick. Dealing with this requires some careful pipeline stall logic, and getting this logic right has been tricky. By allowing the formal engine to pick an arbitrary pipeline state and then examine an arbitrary instruction going into the ALU, I can guarantee that the Ra and Rb+I operands are valid no matter which instruction precedes them.

Searching through all of the possibilities of these free variables can be expensive. Indeed, this was the phase that was taking many hours of processing as I was working on verifying this core module within the ZipCPU. Now, using this method of invariants, the entire formal verification of the (still not quite complete properties of the) ZipCPU takes less than two hours.

In hindsight, this made my CPU proof more complicated than it needed to be, while leaving the proof susceptible to false positives.

Conclusion

Invariants are only one method of handling formal complexity when aggregating multiple modules together. As I mentioned in the opening, I am by no means an expert in formal verification, and this is only my first application of the principle of invariants to any project–in this case the ZipCPU. Even so, using invariants has saved many, many formal verification CPU cycles.

We’ll have to return to this topic later in order to handle the concept of abstraction in formal methods. Abstraction may be an even more powerful concept than invariants. As you may remember from above, I’ve been using abstracted components to represent the prefetch, multiply, and divide components. As another illustration, consider this: when formally verifying any container of the ZipCPU, whether the ZipSystem, ZipBones, or some yet to be determined container, I’m anticipating being able to replace the entire complicated ZipCPU core with an abstracted version of it, and then to be able to prove the formal properties of the wrapper in question.