Quick Start

How to use the open verification platform to participate in hardware verification.

This page will briefly introduce what verification is, as well as concepts used in the examples, such as DUT (Design Under Test) and RM (Reference Model).

Chip Verification

Chip verification is a crucial step in ensuring the correctness and reliability of chip designs. It encompasses functional verification, formal verification, and physical verification; this material covers only simulation-based functional verification. The processes and methods of chip functional verification have much in common with software testing, such as unit testing, system testing, black-box testing, and white-box testing, and the two share similar metrics, such as functional coverage and code coverage. In essence, apart from the tools and programming languages involved, their goals and processes are almost identical, so software test engineers should, in principle, be able to perform chip verification.

In practice, however, software testing and chip verification remain two largely separate fields, mainly because of the different verification tools and languages, which makes it hard for software test engineers to cross over. Chip verification typically relies on hardware description languages (e.g., Verilog or SystemVerilog) and specialized commercial tools for circuit simulation. Hardware description languages differ from high-level software programming languages such as C++/Python in their unique “clock”-driven semantics, which presents a steep learning curve for software engineers.

To bridge the gap between chip verification and traditional software testing, allowing more people to participate in chip verification, this project provides the following content:

Multi-language verification tools (Picker), allowing users to use their preferred programming language for chip verification.

Verification framework (MLVP), enabling functional verification without worrying about the clock.

Introduction to basic circuits and verification knowledge, helping software enthusiasts understand circuit characteristics more easily.

Basic learning materials for fundamental verification knowledge.

Real high-performance chip verification cases, allowing enthusiasts to participate in verification work remotely.

Basic Terms

DUT: Design Under Test, usually referring to the designed RTL code.

RM: Reference Model, a standard error-free model corresponding to the unit under test.

RTL: Register Transfer Level, typically referring to the Verilog or VHDL code corresponding to the chip design.

Coverage: The percentage of the test range relative to the entire requirement range. In chip verification, this typically includes line coverage, function coverage, and functional coverage.

DV: Design Verification, referring to the collaboration of design and verification.

Differential Testing (difftest): Selecting two (or more) functionally identical units under test, submitting the same test cases that meet the unit’s requirements to observe whether there are differences in the execution results.
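
As a sketch of how these terms fit together, a minimal differential-testing loop in Python might look like the following (dut.run and reference_model are hypothetical placeholders for illustration, not part of any specific tool's API):

import random

# Minimal difftest sketch: feed identical stimuli to the DUT and the reference
# model (RM), then compare their results.
# `dut.run` and `reference_model` are hypothetical placeholders.
def difftest(dut, reference_model, num_cases=1000):
    for _ in range(num_cases):
        stimulus = random.randint(0, 2**16 - 1)   # same input for both
        dut_out = dut.run(stimulus)               # result from the design under test
        ref_out = reference_model(stimulus)       # result from the error-free model
        assert dut_out == ref_out, f"mismatch: {dut_out} != {ref_out}"
Python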

Tool Introduction

The core tool used in this material is Picker (https://github.com/XS-MLVP/picker). Its purpose is to automatically generate high-level programming language interfaces (Python/C++) for design modules written in RTL. With this tool, verification personnel with a software development (testing) background can perform chip verification without learning hardware description languages like Verilog/VHDL.

System Requirements

Recommended operating system: Ubuntu 22.04 LTS

In system-architecture research and development, Linux is the most commonly used platform, mainly because of its rich ecosystem of software and tools. Thanks to its open-source nature, important tools and software (such as Verilator) are readily developed for Linux. The multi-language verification tools used in this course, such as Picker and SWIG, run stably on Linux.

1 - Setting Up the Verification Environment

Install the necessary dependencies, download, build, and install the required tools.

Installing the Picker Tool from Source

Installing Dependencies

  1. cmake ( >=3.11 )

  2. gcc ( Supports C++20, at least GCC version 10, recommended 11 or higher )

  3. python3 ( >=3.8 )

  4. verilator ( ==4.218 )

  5. verible-verilog-format ( >=0.0-3428-gcfcbb82b )

  6. swig ( >=4.2.0 , for multi-language support )

Please ensure that tools such as verible-verilog-format have been added to the $PATH environment variable so that they can be invoked directly from the command line.

Source Code Download

git clone https://github.com/XS-MLVP/picker.git --depth=1
cd picker
make init
Bash

Build and Install

cd picker
make
# You can enable support for other languages by 
#   using `make BUILD_XSPCOMM_SWIG=python,java,scala,golang`.
# Each language requires its own development environment, 
#   which needs to be configured separately, such as `javac` for Java.
sudo -E make install
Bash

The default installation path is /usr/local, with binary files placed in /usr/local/bin and template files in /usr/local/share/picker. If you need to change the installation directory, pass arguments to cmake via ARGS, for example: make ARGS="-DCMAKE_INSTALL_PREFIX=your_install_dir". The installation also installs the xspcomm base library (https://github.com/XS-MLVP/xcomm), which encapsulates the basic types of RTL modules and is located at /usr/local/lib/libxspcomm.so. You may need to manually set the link directory parameter (-L) when compiling against it.
If support for languages such as Java is enabled, the corresponding xspcomm multi-language packages will also be installed.
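
For example, when manually compiling a C++ test program against xspcomm, you may need to point the compiler at the install location. Below is a sketch assuming the default /usr/local prefix and a hypothetical my_test.cpp (picker --show_xcom_lib_location_cpp prints the actual lib and include locations):

g++ my_test.cpp -std=c++17 -I/usr/local/include -L/usr/local/lib -lxspcomm -o my_test
Bash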

picker can also be compiled into a wheel file and installed via pip.

To package picker into a wheel installation package, use the following command:

make wheel # or BUILD_XSPCOMM_SWIG=python,java,scala,golang make wheel
Bash

After compilation, the wheel file will be located in the dist directory. You can then install it via pip, for example:

pip install dist/xspcomm-0.0.1-cp311-cp311-linux_x86_64.whl
pip install dist/picker-0.0.1-cp311-cp311-linux_x86_64.whl
Bash

After installation, run the picker command; you should see output similar to the following:

XDut Generate. 
Convert DUT(*.v/*.sv) to C++ DUT libs.

Usage: ./build/bin/picker [OPTIONS] [SUBCOMMAND]

Options:
  -h,--help                   Print this help message and exit
  -v,--version                Print version
  --show_default_template_path
                              Print default template path
  --show_xcom_lib_location_cpp
                              Print xspcomm lib and include location
  --show_xcom_lib_location_java
                              Print xspcomm-java.jar location
  --show_xcom_lib_location_scala
                              Print xspcomm-scala.jar location
  --show_xcom_lib_location_python
                              Print python module xspcomm location
  --show_xcom_lib_location_golang
                              Print golang module xspcomm location
  --check                     check install location and supproted languages

Subcommands:
  export                      Export RTL Projects Sources as Software libraries such as C++/Python
  pack                        Pack UVM transaction as a UVM agent and Python class

Installation Test

picker currently has two subcommands: export and pack.

The export subcommand is used to convert RTL designs into “libraries” corresponding to other high-level programming languages, which can be driven through software.

$ picker export --help

Export RTL Projects Sources as Software libraries such as C++/Python
Usage: picker export [OPTIONS] file...

Positionals:
  file TEXT ... REQUIRED      DUT .v/.sv source file, contain the top module

Options:
  -h,--help                   Print this help message and exit
  --fs,--filelist TEXT ...    DUT .v/.sv source files, contain the top module, split by comma.
                              Or use '*.txt' file  with one RTL file path per line to specify the file list
  --sim TEXT [verilator]      vcs or verilator as simulator, default is verilator
  --lang,--language TEXT:{python,cpp,java,scala,golang} [python]
                              Build example project, default is python, choose cpp, java or python
  --sdir,--source_dir TEXT [/home/yaozhicheng/workspace/picker/template]
                              Template Files Dir, default is ${picker_install_path}/../picker/template
  --tdir,--target_dir TEXT [./picker_out]
                              Codegen render files to target dir, default is ./picker_out
  --sname,--source_module_name TEXT ...
                              Pick the module in DUT .v file, default is the last module in the -f marked file
  --tname,--target_module_name TEXT
                              Set the module name and file name of target DUT, default is the same as source.
                              For example, -T top, will generate UTtop.cpp and UTtop.hpp with UTtop class
  --internal TEXT             Exported internal signal config file, default is empty, means no internal pin
  -F,--frequency TEXT [100MHz]
                              Set the frequency of the **only VCS** DUT, default is 100MHz, use Hz, KHz, MHz, GHz as unit
  -w,--wave_file_name TEXT    Wave file name, emtpy mean don't dump wave
  -c,--coverage               Enable coverage, default is not selected as OFF
  --cp_lib,--copy_xspcomm_lib BOOLEAN [1]
                              Copy xspcomm lib to generated DUT dir, default is true
  -V,--vflag TEXT             User defined simulator compile args, passthrough.
                              Eg: '-v -x-assign=fast -Wall --trace' || '-C vcs -cc -f filelist.f'
  -C,--cflag TEXT             User defined gcc/clang compile command, passthrough. Eg:'-O3 -std=c++17 -I./include'
  --verbose                   Verbose mode
  -e,--example                Build example project, default is OFF
  --autobuild BOOLEAN [1]     Auto build the generated project, default is true
Bash

Static Multi-Module Support:

When generating the wrapper for dut_top.sv/v, picker allows specifying multiple module names and their corresponding instance counts using the --sname parameter. For example, suppose the design files a.v and b.v contain modules A and B respectively, you need 2 instances of A and 3 instances of B in the generated DUT, and the combined module is to be named C (if not specified, the default name is A_B). This can be achieved using the following command:

picker path/a.v,path/b.v --sname A,2,B,3 --tname C
Bash

Environment Variables:

  • DUMPVARS_OPTION: Sets the option parameter for $dumpvars. For example, DUMPVARS_OPTION="+mda" picker .... enables array waveform support in VCS.
  • SIMULATOR_FLAGS: Parameters passed to the backend simulator. Refer to the documentation of the specific backend simulator for details.
  • CFLAGS: Sets the -cflags parameter for the backend simulator.
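
As an illustration, these variables are set by prefixing the picker invocation, in the same way as the DUMPVARS_OPTION example above (dut_top.v and the flag values below are placeholders; consult your backend simulator's documentation for the options it actually accepts):

DUMPVARS_OPTION="+mda" SIMULATOR_FLAGS="+define+DEBUG" CFLAGS="-O2" \
    picker export dut_top.v --sim vcs --lang python
Bash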

The pack subcommand is used to convert UVM sequence_item into other languages and then communicate through TLM (currently supports Python, other languages are under development).

$ picker pack --help

Pack uvm transaction as a uvm agent and python class
Usage: picker pack [OPTIONS] file...

Positionals:
  file TEXT ... REQUIRED      Sv source file, contain the transaction define

Options:
  -h,--help                   Print this help message and exit
  -e,--example                Generate example project based on transaction, default is OFF
  -c,--force                  Force delete folder when the code has already generated by picker
  -r,--rename TEXT ...        Rename transaction name in picker generate code
Bash
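
For reference, a typical invocation might look like the following (my_transaction.sv is a hypothetical SystemVerilog file containing the transaction definition):

picker pack my_transaction.sv -e
Bash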

Test Examples

After picker compilation, execute the following commands in the picker directory to test the examples:

bash example/Adder/release-verilator.sh --lang cpp
bash example/Adder/release-verilator.sh --lang python

# cpp and python are enabled by default; to add other languages, build with:
#   make BUILD_XSPCOMM_SWIG=python,java,scala,golang
bash example/Adder/release-verilator.sh --lang java
bash example/Adder/release-verilator.sh --lang scala
bash example/Adder/release-verilator.sh --lang golang

bash example/RandomGenerator/release-verilator.sh --lang cpp
bash example/RandomGenerator/release-verilator.sh --lang python
bash example/RandomGenerator/release-verilator.sh --lang java

More Documents

For guidance on chip verification with picker, please refer to: https://open-verify.cc/mlvp/en/docs/

2 - Case 1: Adder

Demonstrates the principles and usage of the tool based on a simple adder verification. This adder is implemented using simple combinational logic.

RTL Source Code

In this case, we drive a 64-bit adder (combinational circuit) with the following source code:

// A verilog 64-bit full adder with carry in and carry out

module Adder #(
    parameter WIDTH = 64
) (
    input [WIDTH-1:0] a,
    input [WIDTH-1:0] b,
    input cin,
    output [WIDTH-1:0] sum,
    output cout
);

assign {cout, sum}  = a + b + cin;

endmodule
Verilog

This adder takes two 64-bit numbers and a carry-in signal as inputs, and outputs a 64-bit sum and a carry-out signal.

Testing Process

During the testing process, we will create a folder named Adder, containing a file called Adder.v. This file contains the above RTL source code.

Exporting RTL to Python Module

Generating Intermediate Files

Navigate to the Adder folder and execute the following command:

picker export --autobuild=false Adder.v -w Adder.fst --sname Adder --tdir picker_out_adder --lang python -e --sim verilator
Bash

This command performs the following actions:

  1. Uses Adder.v as the top file, with Adder as the top module, and generates a dynamic library using the Verilator simulator with Python as the target language.

  2. Enables waveform output, with the target waveform file as Adder.fst.

  3. Includes files for driving the example project (-e), and does not automatically compile after code generation (--autobuild=false).

  4. The final file output path is picker_out_adder.

Some command-line parameters were not used in this command, and they will be introduced in later sections. The output directory structure is as follows. Note that these are all intermediate files and cannot be used directly:

picker_out_adder
|-- Adder.v # Original RTL source code
|-- Adder_top.sv # Generated Adder_top top-level wrapper, using DPI to drive Adder module inputs and outputs
|-- Adder_top.v # Generated Adder_top top-level wrapper in Verilog, needed because Verdi does not support importing SV source code
|-- CMakeLists.txt # For invoking the simulator to compile the basic C++ class and package it into a bare DPI function binary dynamic library (libDPIAdder.so)
|-- Makefile # Generated Makefile for invoking CMakeLists.txt, allowing users to compile libAdder.so through the make command, with manual adjustment of Makefile configuration parameters, or to compile the example project
|-- cmake # Generated cmake folder for invoking different simulators to compile RTL code
|   |-- vcs.cmake
|   `-- verilator.cmake
|-- cpp # CPP example directory containing sample code
|   |-- CMakeLists.txt # For wrapping libDPIAdder.so using basic data types into a directly operable class (libUTAdder.so), not just bare DPI functions
|   |-- Makefile
|   |-- cmake
|   |   |-- vcs.cmake
|   |   `-- verilator.cmake
|   |-- dut.cpp # Generated CPP UT wrapper, including calls to libDPIAdder.so, and UTAdder class declaration and implementation
|   |-- dut.hpp # Header file
|   `-- example.cpp # Sample code calling UTAdder class
|-- dut_base.cpp # Base class for invoking and driving simulation results from different simulators, encapsulated into a unified class to hide all simulator-related code details
|-- dut_base.hpp
|-- filelist.f # Additional file list for multi-file projects, check the -f parameter introduction. Empty in this case
|-- mk
|   |-- cpp.mk # Controls Makefile when targeting C++ language, including logic for compiling example projects (-e, example)
|   `-- python.mk # Same as above, but with Python as the target language
`-- python
    |-- CMakeLists.txt
    |-- Makefile
    |-- cmake
    |   |-- vcs.cmake
    |   `-- verilator.cmake
    |-- dut.i # SWIG configuration file for exporting libDPIAdder.so’s base class and function declarations to Python, enabling Python calls
    `-- dut.py # Generated Python UT wrapper, including calls to libDPIAdder.so, and UTAdder class declaration and implementation, equivalent to libUTAdder.so
Bash

Building Intermediate Files

Navigate to the picker_out_adder directory and execute the make command to generate the final files.

  1. Use the simulator invocation script defined by cmake/*.cmake to compile Adder_top.sv and related files into the libDPIAdder.so dynamic library.

  2. Use the compilation script defined by CMakeLists.txt to wrap libDPIAdder.so into the libUTAdder.so dynamic library through dut_base.cpp. The outputs of steps 1 and 2 are both copied to the UT_Adder directory.

  3. Generate the wrapper layer with the SWIG tool from the dut_base.hpp and dut.hpp header files, and build a Python module in the UT_Adder directory.

  4. If the -e parameter is included, the pre-defined example.py is placed in the parent directory of the UT_Adder directory as sample code for calling this Python module.

The final directory structure is:

.
|-- Adder.fst # Waveform file for testing
|-- UT_Adder
|   |-- Adder.fst.hier
|   |-- _UT_Adder.so # Wrapper dynamic library generated by SWIG
|   |-- __init__.py # Python module initialization file, also the library definition file
|   |-- libDPIAdder.a # Library file generated by the simulator
|   |-- libUTAdder.so # DPI dynamic library wrapper generated based on dut_base
|   `-- libUT_Adder.py # Python module generated by SWIG
|   `-- xspcomm # Base library folder, no need to pay attention to this
`-- example.py # Sample code
Bash

Setting Up Test Code

Replace the content in example.py with the following Python test code.

from UT_Adder import *
import random

# Generate unsigned random numbers
def random_int():
    return random.randint(-(2**63), 2**63 - 1) & ((1 << 63) - 1)

# Reference model for the adder implemented in Python
def reference_adder(a, b, cin):
    sum = (a + b) & ((1 << 64) - 1)
    carry = sum < a
    sum = (sum + cin) & ((1 << 64) - 1)
    carry = carry or sum < cin
    return sum, 1 if carry else 0

def random_test():
    # Create DUT
    dut = DUTAdder()
    # By default, pin assignments do not write immediately but write on the next clock rising edge, which is suitable for sequential circuits. However, since the Adder is a combinational circuit, we need to write immediately
    # Therefore, the AsImmWrite() method is called to change pin assignment behavior
    dut.a.AsImmWrite()
    dut.b.AsImmWrite()
    dut.cin.AsImmWrite()
    # Loop test
    for i in range(114514):
        a, b, cin = random_int(), random_int(), random_int() & 1
        # DUT: Assign values to Adder circuit pins, then drive the combinational circuit (for sequential circuits or waveform viewing, use dut.Step() to drive)
        dut.a.value, dut.b.value, dut.cin.value = a, b, cin
        dut.RefreshComb()
        # Reference model: Calculate results
        ref_sum, ref_cout = reference_adder(a, b, cin)
        # Check results
        assert dut.sum.value == ref_sum, f"sum mismatch: 0x{dut.sum.value:x} != 0x{ref_sum:x}"
        assert dut.cout.value == ref_cout, f"cout mismatch: 0x{dut.cout.value:x} != 0x{ref_cout:x}"
        print(f"[test {i}] a=0x{a:x}, b=0x{b:x}, cin=0x{cin:x} => sum: 0x{ref_sum}, cout: 0x{ref_cout}")
    # Test complete
    dut.Finish()
    print("Test Passed")

if __name__ == "__main__":
    random_test()
Python

Running the Test

In the picker_out_adder directory, execute the python3 example.py command to run the test. After the test is complete, we can see the output of the example project.

[...]
[test 114507] a=0x7adc43f36682cffe, b=0x30a718d8cf3cc3b1, cin=0x0 => sum: 0x12358823834579604399, cout: 0x0
[test 114508] a=0x3eb778d6097e3a72, b=0x1ce6af17b4e9128, cin=0x0 => sum: 0x4649372636395916186, cout: 0x0
[test 114509] a=0x42d6f3290b18d4e9, b=0x23e4926ef419b4aa, cin=0x1 => sum: 0x7402657300381600148, cout: 0x0
[test 114510] a=0x505046adecabcc, b=0x6d1d4998ed457b06, cin=0x0 => sum: 0x7885127708256118482, cout: 0x0
[test 114511] a=0x16bb10f22bd0af50, b=0x5813373e1759387, cin=0x1 => sum: 0x2034576336764682968, cout: 0x0
[test 114512] a=0xc46c9f4aa798106, b=0x4d8f52637f0417c4, cin=0x0 => sum: 0x6473392679370463434, cout: 0x0
[test 114513] a=0x3b5387ba95a7ac39, b=0x1a378f2d11b38412, cin=0x0 => sum: 0x6164045699187683403, cout: 0x0
Test Passed

3 - Case 2: Random Number Generator

Demonstrating the tool usage with a 16-bit LFSR random number generator, which includes a clock signal, sequential logic, and registers.

RTL Source Code

In this example, we drive a random number generator, with the source code as follows:

module RandomGenerator (
    input wire clk,
    input wire reset,
    input [15:0] seed,
    output [15:0] random_number
);
    reg [15:0] lfsr;

    always @(posedge clk or posedge reset) begin
        if (reset) begin
            lfsr <= seed;
        end else begin
            lfsr <= {lfsr[14:0], lfsr[15] ^ lfsr[14]};
        end
    end

    assign random_number = lfsr;
endmodule
Verilog

This random number generator contains a 16-bit LFSR, with a 16-bit seed as input and a 16-bit random number as output. The LFSR is updated according to the following rules:

  1. XOR the highest bit and the second-highest bit of the current LFSR to generate a new_bit.

  2. Shift the original LFSR left by one bit, and place new_bit in the lowest bit.

  3. Discard the highest bit.
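
For instance, one update step following these rules can be sketched in a few lines of Python (standalone, with an arbitrary example state; the full reference model appears later in the test code):

lfsr = 0b1010_0000_0000_0001                  # example 16-bit state
new_bit = ((lfsr >> 15) ^ (lfsr >> 14)) & 1   # rule 1: XOR bit 15 with bit 14
lfsr = ((lfsr << 1) | new_bit) & 0xFFFF       # rules 2 and 3: shift left, append new_bit, keep 16 bits
Python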

Testing Process

During testing, we will create a folder named RandomGenerator, which contains a RandomGenerator.v file. The content of this file is the RTL source code mentioned above.

Building the RTL into a Python Module

Generating Intermediate Files

Navigate to the RandomGenerator folder and execute the following command:

picker export --autobuild=false RandomGenerator.v -w RandomGenerator.fst --sname RandomGenerator --tdir picker_out_rmg --lang python -e --sim verilator
Bash

This command does the following:

  1. Uses RandomGenerator.v as the top file and RandomGenerator as the top module, generating a dynamic library with the Verilator simulator, targeting Python as the output language.

  2. Enables waveform output, with the target waveform file being RandomGenerator.fst.

  3. Includes files for driving the example project (-e), and does not automatically compile after code generation (--autobuild=false).

  4. Outputs the final files to the picker_out_rmg directory. The output directory structure is similar to Adder Verification - Generating Intermediate Files, so it will not be elaborated here.

Building Intermediate Files

Navigate to the picker_out_rmg directory and execute the make command to generate the final files.

Note: The compilation process is similar to Adder Verification - Compilation Process, so it will not be elaborated here. The final directory structure will be:

picker_out_rmg
|-- RandomGenerator.fst # Waveform file from the test
|-- UT_RandomGenerator
|   |-- RandomGenerator.fst.hier
|   |-- _UT_RandomGenerator.so # Swig-generated wrapper dynamic library
|   |-- __init__.py  # Initialization file for the Python module, also the library definition file
|   |-- libDPIRandomGenerator.a # Library file generated by the simulator
|   |-- libUTRandomGenerator.so # libDPI dynamic library wrapper generated based on dut_base
|   `-- libUT_RandomGenerator.py # Python module generated by Swig
|   `-- xspcomm  # xspcomm base library, fixed folder, no need to pay attention to it
`-- example.py # Example code
Shell

Configuring the Test Code

Replace the content of example.py with the following code.

from UT_RandomGenerator import *
import random

# Define the reference model
class LFSR_16:
    def __init__(self, seed):
        self.state = seed & ((1 << 16) - 1)

    def Step(self):
        # new_bit = bit 15 XOR bit 14 of the current state
        new_bit = ((self.state >> 15) ^ (self.state >> 14)) & 1
        self.state = ((self.state << 1) | new_bit) & ((1 << 16) - 1)

if __name__ == "__main__":
    dut = DUTRandomGenerator()            # Create the DUT
    dut.InitClock("clk")                  # Specify the clock pin and initialize the clock
    seed = random.randint(0, 2**16 - 1)   # Generate a random seed
    dut.seed.value = seed                 # Set the DUT seed
    ref = LFSR_16(seed)                   # Create a reference model for comparison

    # Reset the DUT
    dut.reset.value = 1                   # Set reset signal to 1
    dut.Step()                            # Advance one clock cycle (DUTRandomGenerator is a sequential circuit, it requires advancing via Step)
    dut.reset.value = 0                   # Set reset signal to 0
    dut.Step()                            # Advance one clock cycle

    for i in range(65536):                # Loop 65536 times
        dut.Step()                        # Advance one clock cycle for the DUT, generating a random number
        ref.Step()                        # Advance one clock cycle for the reference model, generating a random number
        assert dut.random_number.value == ref.state, "Mismatch"  # Compare the random numbers generated by the DUT and the reference model
        print(f"Cycle {i}, DUT: {dut.random_number.value:x}, REF: {ref.state:x}") # Print the results
    # Complete the test
    print("Test Passed")
    dut.Finish()    # Finish function will complete the writing of waveform, coverage, and other files
Python

Running the Test Program

Execute python example.py in the picker_out_rmg directory to run the test program. After the execution, if Test Passed is output, the test is considered passed. After the run is complete, a waveform file RandomGenerator.fst will be generated, which can be viewed in the terminal using the following command:

gtkwave RandomGenerator.fst
Bash

Example output:

···
Cycle 65529, DUT: d9ea, REF: d9ea
Cycle 65530, DUT: b3d4, REF: b3d4
Cycle 65531, DUT: 67a9, REF: 67a9
Cycle 65532, DUT: cf53, REF: cf53
Cycle 65533, DUT: 9ea6, REF: 9ea6
Cycle 65534, DUT: 3d4d, REF: 3d4d
Cycle 65535, DUT: 7a9a, REF: 7a9a
Test Passed, destroy UT_RandomGenerator
Shell

4 - Case 3: Dual-Port Stack (Callback)

A dual-port stack is a stack with two ports, each supporting push and pop operations. This case study uses a dual-port stack as an example to demonstrate how to use callback functions to drive the DUT.

Introduction to the Dual-Port Stack

A dual-port stack is a data structure that supports simultaneous operations on two ports. Compared to a traditional single-port stack, a dual-port stack allows simultaneous read and write operations. In scenarios such as multithreaded concurrent read and write operations, the dual-port stack can provide better performance. In this example, we provide a simple dual-port stack implementation, with the source code as follows:

module dual_port_stack (
    input clk,
    input rst,

    // Interface 0
    input in0_valid,
    output in0_ready,
    input [7:0] in0_data,
    input [1:0] in0_cmd,
    output out0_valid,
    input out0_ready,
    output [7:0] out0_data,
    output [1:0] out0_cmd,

    // Interface 1
    input in1_valid,
    output in1_ready,
    input [7:0] in1_data,
    input [1:0] in1_cmd,
    output out1_valid,
    input out1_ready,
    output [7:0] out1_data,
    output [1:0] out1_cmd
);
    // Command definitions
    localparam CMD_PUSH = 2'b00;
    localparam CMD_POP = 2'b01;
    localparam CMD_PUSH_OKAY = 2'b10;
    localparam CMD_POP_OKAY = 2'b11;

    // Stack memory and pointer
    reg [7:0] stack_mem[0:255];
    reg [7:0] sp;
    reg busy;

    reg [7:0] out0_data_reg, out1_data_reg;
    reg [1:0] out0_cmd_reg, out1_cmd_reg;
    reg out0_valid_reg, out1_valid_reg;

    assign out0_data = out0_data_reg;
    assign out0_cmd = out0_cmd_reg;
    assign out0_valid = out0_valid_reg;
    assign out1_data = out1_data_reg;
    assign out1_cmd = out1_cmd_reg;
    assign out1_valid = out1_valid_reg;

    always @(posedge clk or posedge rst) begin
        if (rst) begin
            sp <= 0;
            busy <= 0;
        end else begin
            // Interface 0 Request Handling
            if (!busy && in0_valid && in0_ready) begin
                case (in0_cmd)
                    CMD_PUSH: begin
                        busy <= 1;
                        sp <= sp + 1;
                        out0_valid_reg <= 1;
                        stack_mem[sp] <= in0_data;
                        out0_cmd_reg <= CMD_PUSH_OKAY;
                    end
                    CMD_POP: begin
                        busy <= 1;
                        sp <= sp - 1;
                        out0_valid_reg <= 1;
                        out0_data_reg <= stack_mem[sp - 1];
                        out0_cmd_reg <= CMD_POP_OKAY;
                    end
                    default: begin
                        out0_valid_reg <= 0;
                    end
                endcase
            end

            // Interface 1 Request Handling
            if (!busy && in1_valid && in1_ready) begin
                case (in1_cmd)
                    CMD_PUSH: begin
                        busy <= 1;
                        sp <= sp + 1;
                        out1_valid_reg <= 1;
                        stack_mem[sp] <= in1_data;
                        out1_cmd_reg <= CMD_PUSH_OKAY;
                    end
                    CMD_POP: begin
                        busy <= 1;
                        sp <= sp - 1;
                        out1_valid_reg <= 1;
                        out1_data_reg <= stack_mem[sp - 1];
                        out1_cmd_reg <= CMD_POP_OKAY;
                    end
                    default: begin
                        out1_valid_reg <= 0;
                    end
                endcase
            end

            // Interface 0 Response Handling
            if (busy && out0_ready) begin
                out0_valid_reg <= 0;
                busy <= 0;
            end

            // Interface 1 Response Handling
            if (busy && out1_ready) begin
                out1_valid_reg <= 0;
                busy <= 0;
            end
        end
    end

    assign in0_ready = (in0_cmd == CMD_PUSH && sp < 255 || in0_cmd == CMD_POP && sp > 0) && !busy;
    assign in1_ready = (in1_cmd == CMD_PUSH && sp < 255 || in1_cmd == CMD_POP && sp > 0) && !busy && !(in0_ready && in0_valid);

endmodule
Verilog

In this implementation, aside from the clock signal (clk) and reset signal (rst), there are also input and output signals for the two ports, which have the same interface definition. The meaning of each signal for the ports is as follows:

  • Request Port (in)

    • in_valid: Input data valid signal

    • in_ready: Input data ready signal

    • in_data: Input data

    • in_cmd: Input command (0: PUSH, 1: POP)

  • Response Port (out)

    • out_valid: Output data valid signal

    • out_ready: Output data ready signal

    • out_data: Output data

    • out_cmd: Output command (2: PUSH_OKAY, 3: POP_OKAY)

When we want to perform an operation on the stack through a port, we first need to write the required data and command to the input port, and then wait for the output port to return the result. Specifically, if we want to perform a PUSH operation on the stack, we should first write the data to be pushed into in_data, then set in_cmd to 0, indicating a PUSH operation, and set in_valid to 1, indicating that the input data is valid. Next, we need to wait for in_ready to be 1, ensuring that the data has been correctly received, at which point the PUSH request has been correctly sent.

After the command is successfully sent, we need to wait for the stack’s response on the response port. When out_valid is 1, it indicates that the stack has completed the corresponding operation. At this point, we can read the stack’s returned data from out_data (the data returned by a POP operation will be placed here) and read the stack’s returned command from out_cmd. After reading the data, we need to set out_ready to 1 to notify the stack that the returned information has been correctly received. If requests from both ports are valid simultaneously, the stack will prioritize the request from port 0.
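
To make this handshake concrete, a single blocking PUSH on port 0 could be sketched with the pin and Step APIs from the earlier cases, assuming dut is an initialized DUTdual_port_stack instance (illustration only; as discussed below, purely sequential code like this cannot drive both ports at the same time):

# Sketch: one blocking PUSH request on port 0, mirroring the protocol described above
dut.in0_data.value = 42            # data to push
dut.in0_cmd.value = 0              # 0 = PUSH
dut.in0_valid.value = 1            # request is valid
dut.Step(1)
while dut.in0_ready.value != 1:    # wait until the stack accepts the request
    dut.Step(1)
dut.in0_valid.value = 0

dut.out0_ready.value = 1           # ready to accept the response
dut.Step(1)
while dut.out0_valid.value != 1:   # wait for the stack's response
    dut.Step(1)
print("response cmd:", dut.out0_cmd.value)  # expect 2 (PUSH_OKAY)
dut.out0_ready.value = 0
Python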

Setting Up the Driver Environment

Similar to Case Study 1 and Case Study 2, before testing the dual-port stack, we first need to use the Picker tool to build the RTL code into a Python Module. After the build is complete, we will use a Python script to drive the RTL code for testing. First, create a file named dual_port_stack.v and copy the above RTL code into this file. Then, execute the following command in the same folder:

picker export --autobuild=true dual_port_stack.v -w dual_port_stack.fst --sname dual_port_stack --tdir picker_out_dual_port_stack --lang python -e --sim verilator
Bash

The generated driver environment is located in the picker_out_dual_port_stack folder. Inside, UT_dual_port_stack is the generated Python Module, and example.py is the test script. You can run the test script with the following commands:

cd picker_out_dual_port_stack
python3 example.py
Bash

If no errors occur during the run, it means the environment has been set up correctly.

Driving the DUT with Callback Functions

In this case, we need to drive a dual-port stack to test its functionality. However, you may quickly realize that the methods used in Cases 1 and 2 are insufficient for driving a dual-port stack. In the previous tests, the DUT had a single execution logic where you input data into the DUT and wait for the output.

However, a dual-port stack is different because its two ports operate with independent execution logic. During the drive process, these two ports might be in entirely different states. For example, while port 0 is waiting for data from the DUT, port 1 might be sending a new request. In such situations, simple sequential execution logic will struggle to drive the DUT effectively.

Therefore, in this case, we will use the dual-port stack as an example to introduce a callback function-based driving method to handle such DUTs.

Introduction to Callback Functions

A callback function is a common programming technique that allows us to pass a function as an argument, which is then called when a certain condition is met. In the generated Python Module, we provide an interface StepRis for registering callback functions with the internal execution environment. Here’s how it works:

from UT_dual_port_stack import DUTdual_port_stack

def callback(cycles):
    print(f"The current clock cycle is {cycles}")

dut = DUTdual_port_stack()
dut.StepRis(callback)
dut.Step(10)
Python

You can run this code directly to see the effect of the callback function. In the code above, we define a callback function callback that takes a cycles parameter and prints the current clock cycle each time it is called. We then register this callback function to the DUT via StepRis.

Once the callback function is registered, every time the Step function runs, i.e., on every clock cycle, the callback function is invoked on the rising edge of the clock signal, with the current clock cycle count passed as an argument. Using this approach, we can write different execution logics as callback functions and register multiple callback functions to the DUT, thereby driving the DUT in parallel.

Dual-Port Stack Driven by Callback Functions

To complete a full execution logic using callback functions, we typically write it in the form of a state machine. Each callback function invocation triggers a state change within the state machine, and multiple invocations complete a full execution logic.

Below is an example code for driving a dual-port stack using callback functions:

import random
from UT_dual_port_stack import *
from enum import Enum

class StackModel:
    def __init__(self):
        self.stack = []

    def commit_push(self, data):
        self.stack.append(data)
        print("push", data)

    def commit_pop(self, dut_data):
        print("Pop", dut_data)
        model_data = self.stack.pop()
        assert model_data == dut_data, f"The model data {model_data} is not equal to the dut data {dut_data}"
        print(f"Pass: {model_data} == {dut_data}")

class SinglePortDriver:
    class Status(Enum):
        IDLE = 0
        WAIT_REQ_READY = 1
        WAIT_RESP_VALID = 2
    class BusCMD(Enum):
        PUSH = 0
        POP = 1
        PUSH_OKAY = 2
        POP_OKAY = 3

    def __init__(self, dut, model: StackModel, port_dict):
        self.dut = dut
        self.model = model
        self.port_dict = port_dict

        self.status = self.Status.IDLE
        self.operation_num = 0
        self.remaining_delay = 0

    def push(self):
        self.port_dict["in_valid"].value = 1
        self.port_dict["in_cmd"].value = self.BusCMD.PUSH.value
        self.port_dict["in_data"].value = random.randint(0, 2**32-1)

    def pop(self):
        self.port_dict["in_valid"].value = 1
        self.port_dict["in_cmd"].value = self.BusCMD.POP.value

    def step_callback(self, cycle):
        if self.status == self.Status.WAIT_REQ_READY:
            if self.port_dict["in_ready"].value == 1:
                self.port_dict["in_valid"].value = 0
                self.port_dict["out_ready"].value = 1
                self.status = self.Status.WAIT_RESP_VALID

                if self.port_dict["in_cmd"].value == self.BusCMD.PUSH.value:
                    self.model.commit_push(self.port_dict["in_data"].value)

        elif self.status == self.Status.WAIT_RESP_VALID:
            if self.port_dict["out_valid"].value == 1:
                self.port_dict["out_ready"].value = 0
                self.status = self.Status.IDLE
                self.remaining_delay = random.randint(0, 5)

                if self.port_dict["out_cmd"].value == self.BusCMD.POP_OKAY.value:
                    self.model.commit_pop(self.port_dict["out_data"].value)

        if self.status == self.Status.IDLE:
            if self.remaining_delay == 0:
                if self.operation_num < 10:
                    self.push()
                elif self.operation_num < 20:
                    self.pop()
                else:
                    return

                self.operation_num += 1
                self.status = self.Status.WAIT_REQ_READY
            else:
                self.remaining_delay -= 1

def test_stack(stack):
    model = StackModel()

    port0 = SinglePortDriver(stack, model, {
        "in_valid": stack.in0_valid,
        "in_ready": stack.in0_ready,
        "in_data": stack.in0_data,
        "in_cmd": stack.in0_cmd,
        "out_valid": stack.out0_valid,
        "out_ready": stack.out0_ready,
        "out_data": stack.out0_data,
        "out_cmd": stack.out0_cmd,
    })

    port1 = SinglePortDriver(stack, model, {
        "in_valid": stack.in1_valid,
        "in_ready": stack.in1_ready,
        "in_data": stack.in1_data,
        "in_cmd": stack.in1_cmd,
        "out_valid": stack.out1_valid,
        "out_ready": stack.out1_ready,
        "out_data": stack.out1_data,
        "out_cmd": stack.out1_cmd,
    })

    stack.StepRis(port0.step_callback)
    stack.StepRis(port1.step_callback)

    stack.Step(200)


if __name__ == "__main__":
    dut = DUTdual_port_stack()
    dut.InitClock("clk")
    test_stack(dut)
    dut.Finish()
Python

In the code above, the driving process is implemented such that each port drives the DUT independently, with a random delay added after each request completes. Each port performs 10 PUSH operations and 10 POP operations.

When a PUSH or POP request takes effect, the corresponding commit_push or commit_pop function in the StackModel is called to simulate stack behavior. After each POP operation, the data returned by the DUT is compared with the model’s data to ensure consistency.

To implement the driving behavior for a single port, we created the SinglePortDriver class, which contains the methods for sending and receiving data, while the step_callback function handles the internal update logic.

In the test_stack function, we create a SinglePortDriver instance for each port of the dual-port stack, pass in the corresponding interfaces, and register the callback functions to the DUT using the StepRis function. When Step(200) is called on the DUT, the callback functions are automatically invoked on each clock cycle to carry out the entire driving logic.

SinglePortDriver Driving Logic

As mentioned earlier, callback functions typically require the execution logic to be implemented as a state machine. Therefore, the SinglePortDriver class records the status of its port, including:

  • IDLE: Idle state, waiting for the next operation.

    • In the idle state, check the remaining_delay status to determine whether the current delay has ended. If the delay has ended, proceed with the next operation; otherwise, continue waiting.

    • When the next operation is ready, check the operation_num status (the number of operations already performed) to determine whether the next operation should be PUSH or POP. Then, call the corresponding function to assign values to the port and switch the status to WAIT_REQ_READY.

  • WAIT_REQ_READY: Waiting for the request port to be ready.

    • After the request is sent (in_valid is valid), wait for the in_ready signal to be valid to ensure the request has been correctly received.

    • Once the request is correctly received, set in_valid to 0 and out_ready to 1, indicating the request is complete and ready to receive a response.

  • WAIT_RESP_VALID: Waiting for the response port to return data.

    • After the request is correctly received, wait for the DUT’s response, i.e., wait for the out_valid signal to be valid. When the out_valid signal is valid, it indicates that the response has been generated and the request is complete. Set out_ready to 0 and switch the status to IDLE.

Running the Test

Copy the above code into example.py, and then run the following command:

cd picker_out_dual_port_stack
python3 example.py
Bash

You can run the test code for this case directly, and you will see output similar to the following:

...
push 77
push 140
push 249
push 68
push 104
push 222
...
Pop 43
Pass: 43 == 43
Pop 211
Pass: 211 == 211
Pop 16
Pass: 16 == 16
Pop 255
Pass: 255 == 255
Pop 222
Pass: 222 == 222
Pop 104
...
Shell

In the output, you can see the data for each PUSH and POP operation, as well as the result of each POP operation. If there is no error message in the output, it indicates that the test has passed.

Pros and Cons of Callback-Driven Design

By using callbacks, we can achieve parallel driving of the DUT, as demonstrated in this example. We utilized two callbacks to drive two ports with independent execution logic. In simple scenarios, callbacks offer a straightforward method for parallel driving.

However, as shown in this example, even implementing a simple “request-response” flow requires maintaining a significant amount of internal state. Callbacks break down what should be a cohesive execution logic into multiple function calls, adding considerable complexity to both the code writing and debugging processes.

5 - Case 4: Dual-Port Stack (Coroutines)

The dual-port stack is a stack with two ports, each supporting push and pop operations. This case study uses the dual-port stack as an example to demonstrate how to drive a DUT using coroutines.

Introduction to the Dual-Port Stack and Environment Setup

The dual-port stack used in this case is identical to the one implemented in Case 3. Please refer to the Introduction to the Dual-Port Stack and Driver Environment Setup in Case 3 for more details.

Driving the DUT Using Coroutines

In Case 3, we used callbacks to drive the DUT. While callbacks offer a way to perform parallel operations, they break the execution flow into multiple function calls and require maintaining a large amount of intermediate state, making the code more complex to write and debug.

In this case, we will introduce a method of driving the DUT using coroutines. This method not only allows for parallel operations but also avoids the issues associated with callbacks.

Introduction to Coroutines

Coroutines are a form of “lightweight” threading that enables behavior similar to concurrent execution without the overhead of traditional threads. Coroutines operate on a single-threaded event loop, where multiple coroutines can be defined and added to the event loop, with the event loop managing their scheduling.

Typically, a defined coroutine will continue to execute until it encounters an event that requires waiting. At this point, the event loop pauses the coroutine and schedules other coroutines to run. Once the event occurs, the event loop resumes the paused coroutine to continue execution.

For parallel execution in hardware verification, this behavior is precisely what we need. We can create multiple coroutines to handle various verification tasks. We can treat the clock execution as an event, and within each coroutine, wait for this event. When the clock signal arrives, the event loop wakes up all the waiting coroutines, allowing them to continue executing until they wait for the next clock signal. We use Python’s asyncio to implement coroutine support:

import asyncio
from UT_dual_port_stack import *

async def my_coro(dut, name):
    for i in range(10):
        print(f"{name}: {i}")
        await dut.AStep(1)

async def test_dut(dut):
    asyncio.create_task(my_coro(dut, "coroutine 1"))
    asyncio.create_task(my_coro(dut, "coroutine 2"))
    await asyncio.create_task(dut.RunStep(10))

dut = DUTdual_port_stack()
dut.InitClock("clk")
asyncio.run(test_dut(dut))
dut.Finish()
Python

You can run the above code directly to observe the execution of coroutines. In the code, we use create_task to create two coroutine tasks and add them to the event loop. Each coroutine task repeatedly prints a number and then waits for the next clock signal. We use dut.RunStep(10) to create a background clock, which continuously generates clock synchronization signals so that the other coroutines resume execution whenever a clock signal arrives.

Driving the Dual-Port Stack with Coroutines

Using coroutines, we can write the logic for driving each port of the dual-port stack as an independent execution flow without needing to maintain a large amount of intermediate state.

Below is a simple verification code using coroutines:

import asyncio
import random
from UT_dual_port_stack import *
from enum import Enum

class StackModel:
    def __init__(self):
        self.stack = []

    def commit_push(self, data):
        self.stack.append(data)
        print("Push", data)

    def commit_pop(self, dut_data):
        print("Pop", dut_data)
        model_data = self.stack.pop()
        assert model_data == dut_data, f"The model data {model_data} is not equal to the dut data {dut_data}"
        print(f"Pass: {model_data} == {dut_data}")

class SinglePortDriver:
    class BusCMD(Enum):
        PUSH = 0
        POP = 1
        PUSH_OKAY = 2
        POP_OKAY = 3

    def __init__(self, dut, model: StackModel, port_dict):
        self.dut = dut
        self.model = model
        self.port_dict = port_dict

    async def send_req(self, is_push):
        self.port_dict["in_valid"].value = 1
        self.port_dict["in_cmd"].value = self.BusCMD.PUSH.value if is_push else self.BusCMD.POP.value
        self.port_dict["in_data"].value = random.randint(0, 2**8-1)
        await self.dut.AStep(1)

        await self.dut.Acondition(lambda: self.port_dict["in_ready"].value == 1)
        self.port_dict["in_valid"].value = 0

        if is_push:
            self.model.commit_push(self.port_dict["in_data"].value)

    async def receive_resp(self):
        self.port_dict["out_ready"].value = 1
        await self.dut.AStep(1)

        await self.dut.Acondition(lambda: self.port_dict["out_valid"].value == 1)
        self.port_dict["out_ready"].value = 0

        if self.port_dict["out_cmd"].value == self.BusCMD.POP_OKAY.value:
            self.model.commit_pop(self.port_dict["out_data"].value)

    async def exec_once(self, is_push):
        await self.send_req(is_push)
        await self.receive_resp()
        for _ in range(random.randint(0, 5)):
            await self.dut.AStep(1)

    async def main(self):
        for _ in range(10):
            await self.exec_once(is_push=True)
        for _ in range(10):
            await self.exec_once(is_push=False)

async def test_stack(stack):
    model = StackModel()

    port0 = SinglePortDriver(stack, model, {
        "in_valid": stack.in0_valid,
        "in_ready": stack.in0_ready,
        "in_data": stack.in0_data,
        "in_cmd": stack.in0_cmd,
        "out_valid": stack.out0_valid,
        "out_ready": stack.out0_ready,
        "out_data": stack.out0_data,
        "out_cmd": stack.out0_cmd,
    })

    port1 = SinglePortDriver(stack, model, {
        "in_valid": stack.in1_valid,
        "in_ready": stack.in1_ready,
        "in_data": stack.in1_data,
        "in_cmd": stack.in1_cmd,
        "out_valid": stack.out1_valid,
        "out_ready": stack.out1_ready,
        "out_data": stack.out1_data,
        "out_cmd": stack.out1_cmd,
    })

    asyncio.create_task(port0.main())
    asyncio.create_task(port1.main())
    await asyncio.create_task(stack.RunStep(200))

if __name__ == "__main__":
    dut = DUTdual_port_stack()
    dut.InitClock("clk")
    asyncio.run(test_stack(dut))
    dut.Finish()
Python

Similar to Case 3, we define a SinglePortDriver class to handle the logic for driving a single port. In test_stack, we create two instances of SinglePortDriver, each responsible for driving one of the two ports, place each port’s driving process in that instance’s main function, and add them to the event loop using asyncio.create_task. Finally, we use RunStep(200) to create a background clock to drive the test.

This code implements the same test logic as in Case 3: each port performs 10 PUSH and 10 POP operations, with a random delay after each operation. As you can see, using coroutines eliminates the need to maintain any intermediate state.

SinglePortDriver Logic

In the SinglePortDriver class, we encapsulate a single operation in the exec_once function. In the main function, we first call exec_once(is_push=True) 10 times to complete the PUSH operations, and then call exec_once(is_push=False) 10 times to complete the POP operations.

In the exec_once function, we first call send_req to send a request, then call receive_resp to receive the response, and finally wait a random number of clock cycles to simulate a delay.

The send_req and receive_resp functions share a similar structure: they set the relevant input/output signals to the appropriate values and wait for the corresponding handshake signals to become valid, following the port’s execution sequence. As before, the StackModel class simulates stack behavior: commit_push and commit_pop model the PUSH and POP operations respectively, with the POP operation comparing the returned data against the model.

Running the Test

Copy the above code into example.py and then execute the following commands:

cd picker_out_dual_port_stack
python3 example.py
Bash

You can run the test code for this case directly, and you will see output similar to the following:

...
Push 141
Push 102
Push 63
Push 172
Push 208
Push 130
Push 151
...
Pop 102
Pass: 102 == 102
Pop 138
Pass: 138 == 138
Pop 56
Pass: 56 == 56
Pop 153
Pass: 153 == 153
Pop 129
Pass: 129 == 129
Pop 235
Pass: 235 == 235
Pop 151
...
Shell

In the output, you can see the data for each PUSH and POP operation, as well as the result of each POP operation. If there are no error messages in the output, it indicates that the test passed.

Pros and Cons of Coroutine-Driven Design

Using coroutine functions, we can effectively achieve parallel operations while avoiding the issues that come with callback functions. Each independent execution flow can be fully retained as a coroutine, which greatly simplifies code writing.

However, in more complex scenarios, you may find that having many coroutines makes synchronization and timing management between them more complicated, especially when you need to synchronize two coroutines that do not directly interact with the DUT. At that point you will need a set of writing conventions and design patterns for coroutine-based verification code. For this purpose we provide the mlvp library, which offers such a set of design patterns. You can learn more about mlvp and how it can help you write better verification code by visiting here.